text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringclasses
91 values
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
loader - duratech.co.za import from china 3.0t wheel loader working in concrete mixing plant. china coal gasifier, china china coal gasifier list of china coal gasifier products and quality supplier of china coal gasifier from china, bossgoo provide you reliable china coal gasif hzs60 concrete mixing plant in botswana mixer mobile Jinan Loader Pre Dmixing Concrete Plant HZS60. hzs60 concrete mixing plant in botswana. 2017 1 24 · hzs60 concrete mixing plant in botswana. our factory produced concrete mixing plant scene pre hzs90 concrete batching plant for road construction in. hzs18 Advanced Electric Control Hzs75 Ready Mix Concrete Plant_3<< Pre Dstressed Concrete Spun Pile Pole Production Line pre stressed concrete spun pile/pole production line china pre stressed concrete spun pile/pole production line manufacturing plant /pile equipment, find details about china pole steel mould, pole machine from pre stressed concrete spun pile/pole producti jinan jiankai commercial ready mix concrete batching plant top standard adapt high quality material. wholesale various high quality modular ready mix concrete batching plant fashion new products ready mix concrete batching plant hzs90hzs25 concrete mixing plant1 supplier jinan xinchuang jinan xinchuang engineerin Construction Machine Pre Mix Concrete Batching Plant Construction Machine Pre Mix Concrete Batching Plant Hzs35 W. ... HZS25 concrete batching plant, HZS35 concrete batching plant, HZS50 concrete batching plant and HZS60 concrete batching plant are small concrete batching plant (which can be constructed for Pre-cost saved 50m3/h dry mix mobile concrete batching jinan loader pre dmixing concrete plant hzs90. used plant concrete mixers for sale 6×4. pre dcost saved 50m3 fh dry mix mobilepre dcost saved 50m3 fh dry mix mobile concrete batching plant for sale. 2017 popular 50m3 h concrete batching plant for sale … Jinan Loader Pre Dmixing Concrete Plant HZS25 - aksharam.in a oem hzs50 concrete batching plant withtop level oem moveable concrete batching plant hzs25. top level new products high profit concrete batching plant new jinan loader pre dmixing concrete plant hzs50. hzs90 omco.co.inhigh performance hzs90 standard typ Batching Plant Hzs90 Concrete Mixing Plant 50m3/Fh offering yhzs60 mobile concrete mixing station for sale, mixing plant 90m3/h pre concrete batching, mini mix concrete batching plant 50m3 fh jinan loader pre dmixing concrete plant hzs90 used plant concrete mixers for sale 6x4. pre dcost saved 50m3 fh dry Jinan loader pre dmixing concrete plant HZS50 - dcsintl.co.in powder silo top mounted commercial concrete mixing station; mini mix concrete batching plant 50m3 fh. jinan loader pre dmixing concrete plant hzs90. hot sale concrete cement plant price factory hzs90. 100m3 fh mobile wet mix ready mix. Get Price. hls120 c in cambodia hzs90 dry mixed batching mixing stations 50m3 h concrete batching plant hzs90 features high efficiency equipment, hzs90 concrete batching ... hzs90(90m3/h) big capacity precast wet mix concrete batching station,us $ 1,000 ... belt conveyor type automatic commercial cement mixing plant. hzs120 ready mi high mobile CAP40 asphalt drum mix plant easy operate high mobile CAP40 asphalt drum mix plant easy operate asphalt plant hzs90 road construction machinery concrete batching plant fo. jinan loader pre dmixing concrete plant hzs902018 8 18 · good quality machines for concrete plant.high yield plant lb5000 asp 180 cubic meters per hour mobile batch plant for sale in 2019-8-25 · Mobile 5 Central Mix Concrete Batch Plant – RexconWith production rates up to 130 cubic yards / 100 cubic meters per hour, the Mobile 5 CM is a low-profile, dependable central mix mobile concrete batc ... HZS25. Jinan Loader Pre Dmixing Concre Widely Used Road Construction Produced Fixed Type Concrete concrete mix plant building construction plant. importance of a concrete batching plant. is used in the foundations of building construction, mix concrete plant is a centralized factory or 4 17 · concrete mixing plant in delhi mobile concrete batch mixing Favorable Concrete Mixer Machinery Jzm750 Mobile jinan loader pre dmixing concrete plant hzs25 We are a family owned business and have been involved in supplying ready-mix concrete batching plants and equipment to the industry over the last 25 years. Our experience will help you find the ideal plant to … JZM750 Diesel Engine - Baching Machine Fconcrete Mixing 2019-8-15 · automatic jzm750 mixer machine concrete plant with ...jzm750 diesel engine baching machine/concrete mixing. jzm series automatic concrete mixing machine. jzm750 diesel engine baching jzm750 mixing equ ... Jinan Loader Pre Dmixing Concrete Plan Concrete Batching Plant |china manufacture_18<< 50m3 fh mini concrete batching plant wholesale online offering yhzs60 mobile concrete mixing station for sale, mixing plant 90m3/h pre concrete batching, mini mix concrete batching plant 50m3 fh. jinan loader pre dmixing concrete plant hzs90. A EP Machinery Brand 25m3 Mobile Mini Concrete Batching hzs60 in botswana concrete mixing station - minuteplate.co.za Jun 11, 2018 · investmen
https://www.hgweb.nl/jinan-loader-pre-dmixing-plant.html
CC-MAIN-2020-24
refinedweb
786
50.77
All of oomph-lib's existing elements implement the GeneralisedElement::output(...) functions, allowing the computed solution to be documented via a simple call to the Mesh::output(...) function, e.g. By default, the output is written in a format that is suitable for displaying the data with tecplot, a powerful and easy-to-use commercial plotting package – possibly a somewhat odd choice for a an open-source library. We also provide the capability to output data in a format that is suitable for display with paraview, an open-source 3D plotting package. For elements for which the relevant output functions are implemented (they are defined as broken virtual functions in the FiniteElement base class) output files for all the elements in a certain mesh (here the one pointed t by Bulk_mesh_pt) can be written as where the unsigned integer npts controls the number of plot points per element (just as in the tecplot-based output functions). If npts is set to 2, the solution is output at the elements' vertices. For larger values of npts the solution is sampled at greater number of (equally spaced) plot points within the element – this makes sense for higher-order elements, i.e. elements in which the finite-element solution is not interpolated linearly between the vertex nodes. It is important to note that when displaying such a solution in paraview's "Surface with Edges" mode, the "mesh" that is displayed does not represent the actual finite element mesh but is a finer auxiliary mesh that is created merely to establish the connectivity between the plot points. Paraview makes it possible to animate sequences of plots from time-dependent simulations. To correctly animate results from temporally adaptive simulations (where the timestep varies) paraview can operate on pvd files which provide a list of filenames and the associated time. These can be written automatically from within oomph-lib, using the functions in the ParaviewHelper namespace: Once the pvd file is opened, call ParaviewHelper::write_pvd_header(...) to write the header information required by paraview; then add the name of each output file and the value of the associated value of the continuous time, using ParaviewHelper::write_pvd_information(...). When the simulation is complete write the footer information using ParaviewHelper::write_pvd_footer(...), then close to the pvd file. Currently, the paraview output functions are only implemented for a relatively small number of elements but it is straightforward to implement them for others. The FAQ contain an entry that discusses how to display oomph-lib's output with gnuplot and how to adjust oomph-lib's output functions to different formats. Angelo Simone has written a python script that converts oomph-lib's output to the vtu format that can be read by paraview. This has since been improved and extended significantly with input from Alexandre Raczynski and Jeremy van Chu. The conversion script can currently deal with output from meshes that are composed of 2D triangles and quad and 3D brick and tet elements. The oomph-lib distribution contains three scripts: bin/oomph-convert.py: The python conversion script itself. bin/oomph-convert: A shell script wrapper that allows the processing of multiple files. bin/makePvd: A shell script the creates the * .pvd files required by paraview to produce animations..pvd files required by paraview to produce animations. oomph-lib'sbin directory to your path (in the example shown here, oomph-libis installed in the directory /home/mheil/version185/oomph): curved_pipe.datis the oomph-liboutput produced from a simulation of steady flow through a curved pipe. oomph-convert.py * .vtu file.vtu file If your output file is invalid or contains elements that cannot currently be converted, you can use the -p option (followed by 2 or 3 to indicate the spatial dimension of the problem) to extract points only: The output is now a .vtp data file (Visualization Toolkit Polygonal) which is also supported by Paraview. To display your .vtp data, use the Glyphfilter (displaying the points as crosses, say). Here is a representative plot in which the adaptive solution of a 2D Poisson equation in a fish-shaped domain is displayed with points. Here are a few screenshots from a paraview session to get you started. When paraview starts up, you have to select the arrays of values you want to load and click on Apply: Select the array of values you want to display in the active window (V1, V2, V3...). You can only display one at a time. It is applied on the data set selected in the pipeline: Now choose the plot style of your data. Outline display a box containing the data but not the data itself (it's not entirely clear to us why you would want to do this, but...). Points and Wireframe best suited for 3D computations because they allow you to "see through" the data set. Surface and Surface With Edges is best suited for 2D computations because only the surface is displayed. Here is a view of the data in Wireframe mode: You can move the figure with buttons You can also display the colour legend by clicking on You can split a window by clicking on Here is a quick demonstration of oomph-convert and makePvd scripts in action oomph-lib'sbin directory to your path (in the example shown here, oomph-libis installed in the directory /home/mheil/version185/oomph): soln?. datare the oomph-liboutput files that illustrate the progress of the mesh adaptation during the adaptive solution of a Poisson equation in a fish-shaped domain. oomph-converton all files (the -z option adds zeroes to the numbers – this is only required if the files are to combined into an animation by paraview) * .vtu files.vtu files * .vtu files can be displayed individually as discussed above..vtu files can be displayed individually as discussed above. * .pvd file using.pvd file using makePvd Here's a screenshot from the paraview session: once the * .pvd file is loaded you can customise the plot style as discussed in the previous example, and then use the Play/Stop/... buttons to animate the progress of the mesh adaptation. oomph-lib typically outputs results from parallel (distributed) computations on a processor-by-processor basis, resulting in filenames of the form where NPROC is the number of processors. An animation of such data obviously requires the output from different processors (but for the the same timestep) to be combined. Provided, the filenames have the pattern (note the "proc" and "_", both of which are required), the pvd file can be generated by first processing the files with oomph-convert, followed by So, for the files listed above, to produce a pvd file that contains data from a computation with four processors the commands followed by would create the file soln.pvd from which paraview can create an animation of the solution. In order to analyse the data, we can apply filters. Some filters are accessible directly via the navigation bar; a full list is available in the Filters menu: Here are few examples of filters available: Calculator:Evaluates a user-defined expression e.g Contour:Extracts the points, curves or surfaces where a scalar field is equal to a user-defined value e.g Clip:Intersects the geometry with a half space. (Warning: with some versions of Paraview, zooming on the clipped surface can cause the X server to crash.) Slice:Intersects the geometry with a plane. (Warning: with some versions of Paraview, zooming on the clipped surface can cause the X server to crash.) Threshold:Extracts cells that lie within a specified range of values Click on the button: Cells Onto select elements on the surface (2D selection) Points Onto select points on the surface (2D selection) Cells Throughto select elements through the region selected (3D selection) Points Throughto select points through the region selected (3D selection) Filters->Data Analysis->Extract Selectionand Applythe filter to extract the selected elements. You can now modify or apply filters on the extracted data only. Here is an example of extraction of the surface elements of the curved pipe data: A pdf version of this document is available.
http://oomph-lib.maths.man.ac.uk/doc/paraview/html/index.html
CC-MAIN-2021-39
refinedweb
1,351
50.97
class Solution { public: int minArea(vector<vector<char>>& image, int x, int y) { int left = INT_MAX, right = INT_MIN, up = INT_MAX, down = INT_MIN, N = image.size(), M = image[0].size(); for (int i = 0; i < N; i++) for (int j = 0; j < M; j++) if (image[i][j] == '1') { left = min(left, j); right = max(right, j); up = min(up, i); down = max(down, i); } return (down - up + 1) * (right - left + 1); } }; By the way, what's point of giving us x and y location? No matter which location we got the answer is always the same, cause all the '1' s are connected. Actually I didn't got the point of this problem but my code got AC anyway. The point of x and y is that we can use them to do something faster than full brute force. Looks like your connection to LeetCode Discuss was lost, please wait while we try to reconnect.
https://discuss.leetcode.com/topic/30069/5-lines-c-code-compute-the-area-directly-no-use-of-x-and-y-location
CC-MAIN-2017-47
refinedweb
155
76.45
Originally posted by Steve Luke: When I run similar code (see below) the Strings are sorted inside the Set - as the contract for TreeMap says they should be. Originally posted by Campbell Ritchie: But your code only demonstrates sorting a List. What happens when you print the Set before the Collections.sort call? Of course you are sorting Strings which implement Comparable already. Originally posted by Steve Luke: When I run similar code (see below) the Strings are sorted inside the Set - as the contract for TreeMap says they should be. Can you post exact code that 1) Generates the Map 2) Gets the List of Entries 3) Iterates over the List that shows your problem? public class SortableStrings { public static void main(String[] args) { TreeMap data = new TreeMap(); data.put("Fred", "Fred"); data.put("Adam", "Adam"); data.put("Mikey", "Mikey"); data.put("Charlie", "Charlie"); Set keys = data.keySet(); ArrayList sortMe = new ArrayList(keys); Collections.sort(sortMe); for(Object o : sortMe) { System.out.println(o); } } } [ August 22, 2008: Message edited by: Steve Luke ] Originally posted by M Burke: Once thing that is different with my TreeMap (forumMap) does not contain a String as the contained type. Originally posted by M Burke: But the key is a String. Originally posted by Steve Luke: That would be one of the first things I would double check. When you get the List of Keys, iterate over it displaying the Class of the value in the list. As stated above, the values will be sorted in the Map. They will be sorted when the keys are pulled outs as a Set, and the Collections.sort will properly sort Strings because String implements Comparable. I could conceive that the Set could have un-sorted data returned to the List if the Map is still being built while you do ArrayList lKeys = new ArrayList(keys); The Set is backed by the original Map, so changes in the Map are reflected in the Set. If these changes happen while the List is iterating the Set you are likely to see inconsistent data. But the List will be a snap-shot of the Set, and any further changes in the Set (or Map) will not affect the List once it is made, and the Collections.sort will work on that snap shot. Originally posted by M Burke: There is something about my TreeMap not using Strings as data that is messing thing up. When I use a TreeMap that contains Strings as the data Originally posted by M Burke: The list reads like this... Inventory Sales Energy Operations Trucks And it remains unchanged after the sort()
http://www.coderanch.com/t/386055/java/java/Collections-sort-sort-ArrayList
CC-MAIN-2014-52
refinedweb
438
72.76
TI Home » TI E2E Community » Support Forums » Embedded Software » TI-RTOS » TI-RTOS Forum » Tasks Not Displaying in Outline window of SYS/BIOS I'm in the process of migrating from DSP/BIOS to SYS/BIOS. I'm using CCS 5.2.0.00069, SYS/BIOS 6.33.05.46, and XDC Tools 3.23.03.53. I've converted my *.tcf file to a *.cfg file (attached (1526.VibrationMonitor.cfg) using the migration tool. None of the tasks in my *.cfg file are displaying in the Outline window. In addtion, CCS 5 is displaying related errors such as: Description Resource Path Location Type#20 identifier "UsbComms" is undefined main.cpp /VibrationMonitorSYSBIOS line 1474 C/C++ Problem when UsbComms is clearly defined in the *.cfg file. This is true for all my tasks. My (two) memory heaps are also not displaying in the Outline window when they had been previously. Could someone look at my *.cfg file and let me know why my tasks and heaps aren't displaying? Thanks. Mark, I discussed the series of problems you've been struggling against with some of the engineers on the development team. Unfortunately the verdict is that using the GUI tool with a legacy encoded config file is NOT a supported use case. None of the instance objects created using the legacy syntax will appear in the GUI tool. If you need to use the GUI config tool, you'll have to manually convert your legacy BIOS tcf APIs into the corresponding SYS/BIOS syntax. If you have no compelling need to upgrade to SYS/BIOS for this application, a strong argument could be made for you to stay with BIOS 5.4x. If you need to use SYS/BIOS but want to continue using the old tcf syntax, any changes you make to the configuration will have to be done using a standard text editor. --- Regarding the missing UsbComms identifier, the additional command you need to add to your TSK definitions to make the instance objects public is: bios.TSK.instance("UsbComms").name = "UsbComms"; /* create a global symbol for this TSK object */ The two HeapMem instances are gone because I commented out the HeapMem.create() lines associated with them yesterday since they were duplicates of and conflicted symbolically with the MEM instances defined near the top of your file: bios.MEM.instance("DDR").heapLabel = "DDR_HEAP"; bios.MEM.instance("IRAM").heapLabel = "FAST_HEAP"; Alan Hi Alan, Many thanks for helping out with this. We'd really like to continue with SYS/BIOS... Also, yes, saw the commented out HeapMem related lines after I had sent the last email... On to the errors I'm seeing: I tried this change: bios.TSK.instance("UsbComms").name = "UsbComms"; /* create a global symbol for this TSK object */ not only for "UsbComms" but also for any other error (180+) of this type: but whether the line is in the cfg file or commented out, I still get the same error. Most errors are occurring in my main.cpp file. Is there a header file I need? Latest cfg file is attached 0474.VibrationMonitor.cfg You only need to add the .name fields for objects that your application is going to reference at runtime. You have to have: #include <xdc/cfg/global.h> At the top of your .c file to pick up all the definitions from the config file. 99.9% of all my files include file baseos.h which has a #include VibrationMonitorcfg.h. VibrationMonitorcfg.h has a #include <xdc/cfg/global.h>. I even tried #include <xdc/cfg/global.h> in my main.cpp but I still get (183) errors such as: Description Resource Path Location Type#20 identifier "ImcComms" is undefined main.cpp /VibrationMonitorSYSBIOS line 452 C/C++ Problem It appears that all of the name attributes in the cfg file are not getting read by CCS v5...suggestions? ideas? Thanks. I did some digging into this and see that my previous post declaring that you had to explicitly set the .name field of each object was bogus. ALL object instances are given extern declarations in the generated app.h file like this: extern ti_sysbios_knl_Task_Struct TestTask; and the symbols are then equated to their corresponding object within the generated linker command file like this: _TestTask = _ti_sysbios_knl_Task_Object__table__V + 0; Perhaps the issue you're having is due to some kind of C++ name mangling. How are you referencing the Object instances in your C++ code? Here's a line of code referencing IMCComms, for example: TSK_setpri(&ImcComms, 5); // Resume IMCComms I'm not sure if this is what you mean by "referencing the Object instances in your C++ code." Also, I don't have an app.h (anywhere in my project/workspace directory) but I have an app.cfg (in the project/workspace root). I'm suspecting this is a major problem...? When/how should the app.h file get created? "app.h" is my shorthand for the generated .h file that has the name of your application encoded in it. You'll find this file in your project's Debug/configPkg/package/cfg/ directory (assuming you're building a Debug .out file, otherwise its in Release/...). In my case, I named the project 'legacy6x' which results in the generated .h file being called "legacy6x_p674.h". When you add #include <xdc/cfg/global.h> in your C files, this application specific, generated header file gets magically included. The corresponding generated linker command file that contains the object label assignments resides in the same directory and is called "legacy6x_p674.xdl". So here's everything in VibrationMonitorSYSBIOS\Debug\configPkg\package\cfg\build_p674.h /* * Q1: should there be more? Regarding the #include <xdc/cfg/global.h> issue, Q2: should I do what I'm doing (99.9% of all my files include file baseos.h which has a #include VibrationMonitorcfg.h. VibrationMonitorcfg.h has a #include <xdc/cfg/global.h>) or do #include <xdc/cfg/global.h> in each .cpp file that references an object in the .cfg fiel? When I build an application using your config script, the generated header file is 458 lines long and contains definitions for every object in the config file. Do you have a config project separate from your application project? If so then you need to include the config project's generated header file in your application .c files. And you'll have to use an explicit path to that named .h file. Otherwise, I don't know why your build_p674.h file has so little content. I created a project using the separate configuration model and used your .cfg file for the configuration part. I had to include the following line in my main.c to import all the object definitions from the config project: #include <c:/Documents and Settings/a0868325/workspace_v5_2/legacy6x/Debug/configPkg/package/cfg/legacy6x_p674.h> Notice the explicit path (unique to my build environment of course) to my configuration project's generated .h file... You'll have to do something similar. Not sure what you mean by " a config project separate from your application project." The project that I'm trying to migrate from DSP/BIOS 5 to SYS/BIOS 6 is named "VibrationMonitorSYSBIOS", has all its relevant directories, sub-directories, files, and a cfg file in the root. I tried putting the absolute path to build_p674.h in my main but still got 163 errors and no difference in the build_p674.h file. Also, why does the file have "build" in the file name and not "VibrationMonitorSYSBIOS?" In fact, all files in $home\Debug\configPkg\package\cfg are prefixed with "build_". I'm at a loss to understand why your generated .h file does not include all the objects from your config file. Can you export your project to an archive file and post it to the forum so I can take a closer look? Attached... Thanks for the continued effort. Alan, Could you please remove any zip file I've posted from the web site as soon as you can...thanks. Your project seems to be a combination of several other projects all thrown in together in a mutually exclusive manner. The attached project uses your VibrationMonitor.cfg and main.c files. It also uses your xdc target and platform settings. It does not include any of your other .cpp, .c, .or .h files. What I'd like you to try is importing the attached project and add to it only those .cpp, .c, and .h files that are absolutely necessary from your original project into this project. Do NOT add any other .cfg, .tcf, .tci, .cmd files to this new project. I think this approach will resolve the #include <xdc/cfg/global.h> problem and give you a clean start. 4314.VibrationMonitor.
http://e2e.ti.com/support/embedded/tirtos/f/355/t/193719.aspx
CC-MAIN-2014-35
refinedweb
1,458
59.6
A coroutine is a special way to make logic happen over time. I must admit, I never used coroutines until Unity, I had been using event-based programming in every other comparable scenario. However, coroutines are a quick and easy alternative which is definitely worth a look. In this lesson I will show how Unity works with coroutines, including various ways of yielding control and even linking coroutines together in order to have full control over time-based logic. In the end, I will also show how to work with coroutines natively for anyone curious about how they work. The Unity Coroutine A coroutine is really just a method with a return type of “IEnumerator”. One key difference is that you don’t simply “return” the data type like you would in a normal method. Instead, you “yield” a value which causes execution of the logic to pause in place – it can be resumed again later. Let’s take a look at a quick sample: using UnityEngine; using System.Collections; public class Demo : MonoBehaviour { void OnEnable () { StartCoroutine("DoStuff"); } void OnDisable () { StopCoroutine("DoStuff"); } IEnumerator DoStuff () { int value = 0; while (true) { yield return new WaitForSeconds(1); value++; Debug.Log("value:" + value); } } } Note that we must make sure that the “System.Collections” namespace is used, or you will get an error: “The type or namespace name ‘IEnumerator’ could not be found. Are you missing a using directive or an assembly reference?” In order to use a coroutine with Unity, you use a method called “StartCoroutine”. We do this inside of the OnEnable method (line 8). StartCoroutine is overloaded to allow you to pass an “IEnumerator” or a string representing the name of the coroutine to start. In the example I used the later version, because only that version can be manually “stopped” using StopCoroutine. I show StopCoroutine in the OnDisable method (line 13) although its use in this example is unnecessary because all coroutines managed by unity would stop when the script was disabled regardless of how they are started. You should also keep in mind that you can “StartCoroutine” multiple times – even for the same method, which may often be unintentional behavior. You may want to set a flag letting you know when a coroutine is active, so you don’t start it multiple times, or alternatively use StopCoroutine before using StartCoroutine just to be safe. Here are a few variations of calling StartCoroutine: // Version 1, started by string (name of coroutine) - this version is compatible with StopCoroutine StartCoroutine("DoStuff"); // Version 2, Same as Version 1 but with a parameter - note, the target method must be modified to accept a parameter StartCoroutine("DoStuff", 5); // Version 3, This version is started by passing the IEnumerator - you CANT use StopCoroutine StartCoroutine(DoStuff()); // Version 4, Same as Version 3 but with parameter - note, the target method must be modified to accept a parameter StartCoroutine(DoStuff(5)); The method “DoStuff” is our “Coroutine”. The first statement creates a local variable named “value” and initializes it to zero. Then we begin an “infinite loop” (a “while” loop which loops for as long as “true” is equal to “true” – which is always). Inside the loop we see our first “yield” statement (line 21) which instructs Unity that we wish to wait for one second. At this point execution of this method is suspended and will not continue until our wait condition is satisfied. After waiting for one second, Unity resumes the Coroutine right where it left off, even keeping in tact the values of your local variables (in this case “value”). We increment value by one, and print it to the console window. If you attach this script to something in your scene, you will see a new value printed to the console once every second. Try it out. Unity has provided several options for yielding your Coroutine including: - WaitForEndOfFrame - WaitForFixedUpdate - WaitForSeconds - WWW Note that you can also use “ yield return null;” to simply wait a frame or “ yield break;” to abort a Coroutine early. A More Fun Sample Our first example was functional but pretty boring. Let’s make another version where we make an object move across a series of locations (waypoints). I could imagine this as a piece on a game board that moves from one tile to another along a specified path. using UnityEngine; using System.Collections; public class Demo : MonoBehaviour { public Vector3[] waypoints; public float speed; void OnEnable () { StartCoroutine(DoStuff()); } IEnumerator DoStuff () { for (int i = 0; i < waypoints.Length; ++i) { while (transform.position != waypoints[i]) { yield return null; transform.position = Vector3.MoveTowards(transform.position, waypoints[i], speed * Time.deltaTime); } } Debug.Log("Complete!"); } } In this version of the script I declared a public array of Vector3 which represents locations in world space that I want the object to move through. I also specified a speed variable which determines how fast the object will cover those distances. I start the coroutine by passing the IEnumerator directly. Note that this is the preferred way to begin a coroutine unless you MUST be able to stop the coroutine using Unity’s StopCoroutine method (You could insert logic into the method to abort early as an alternative). The Coroutine has two loops which are nested together. The outer “for” loop iterates over the array of Vector3 waypoints, and the inner “while” loop iterates for as long as it takes for the object to actually reach its desired location. Note that I wait a frame before updating the objects position. If there were no yield statement inside the while loop, the object would complete its path before showing any of the “steps” of its progress to the user. Once the path has been followed to its final point, a message prints to the console indicating that our job is complete. Create a new scene, and add a Cube. Attach this demo script and make sure to assign values to each of our public properties via the inspector. For example your waypoints could be: 5,0,0 5,1,0 5,1,3 3,0,0 0,0,0 and your speed could be 1. Of course you can use any values you like, but your speed should at least be greater than zero. Nested Coroutines Sometimes you may find it convenient to nest coroutines so you can reuse bits of logic. Here is a sample which does that: using UnityEngine; using System.Collections; public class Demo : MonoBehaviour { Vector3 m1 = new Vector3(-1, 0, 0); Vector3 m2 = new Vector3(1, 0, 0); Vector3 s1 = new Vector3(1, 1, 1); Vector3 s2 = new Vector3(0.5f, 0.5f, 0.5f); void Start () { StartCoroutine(DoStuff()); } IEnumerator DoStuff () { while (true) { switch (UnityEngine.Random.Range(0, 2)) { case 0: yield return StartCoroutine(Move ()); break; case 1: yield return StartCoroutine(Scale ()); break; } } } IEnumerator Move () { Vector3 target = transform.position == m1 ? m2 : m1; while (transform.position != target) { yield return null; transform.position = Vector3.MoveTowards(transform.position, target, Time.deltaTime); } } IEnumerator Scale () { Vector3 target = transform.localScale == s1 ? s2 : s1; while (transform.localScale != target) { yield return null; transform.localScale = Vector3.MoveTowards(transform.localScale, target, Time.deltaTime); } } } This sample created three Coroutines. The DoStuff coroutine is the “main” coroutine which is triggered by our Start method. The Move and Scale coroutines are triggered randomly within the loop and will play until they are completed before the original main coroutine continues. If you used the scene from the previous demo and just updated the script, you would now see your cube move randomly back and forth as well as scale up and down. The Native Coroutine You may be curious about how to use Coroutine’s outside of Unity’s implementation. If so check out the following simple example: using UnityEngine; using System.Collections; public class Demo : MonoBehaviour { IEnumerator trend; void Start () { trend = TrendLine(); } void Update () { if (trend.MoveNext()) Debug.Log((int)trend.Current); } IEnumerator TrendLine () { int value = 0; while (true) { value += UnityEngine.Random.Range(-1, 2); yield return value; if (Mathf.Abs(value) >= 10) yield break; } } } On line 6 I created a variable to hold a reference to an IEnumerator called trend. I assign it to our coroutine method in Start (line 10). I use Unity’s update loop to handle resuming the Coroutine from its yielded execution points, although you could have used any sort of event to do so. Resuming the coroutine occurs with the “MoveNext” method which returns a true or false based on whether or not execution was completed. If the coroutine was not complete, I print the return value of the coroutine which is accessed by the “Current” property (line 16). In this Coroutine, I generate the value of an imaginary trend line and watch how it grows until it reaches a maximum value at which point the coroutine is complete. I achieved this effect by causing a value to either go up, down, or remain the same based on the result of the Random number which Unity generates. Attach this sample to something in scene and run it and you will see a bunch of numbers generate in the console window. At some point the trend line should reach its maximum extent at either positive or negative 10 and the output will stop. Summary In this lesson we learned all about a language feature called a coroutine. We learned to start and stop coroutines managed by Unity. We covered coroutines with and without parameters, used different yield options, and nested coroutines together. Finally we explored how a coroutine works natively and handled stepping through a coroutine and retrieving its values manually.
http://theliquidfire.com/2015/02/17/coroutines/
CC-MAIN-2020-45
refinedweb
1,587
61.97
#include <deal.II/distributed/tria_base.h> A structure that contains information about the distributed triangulation. Definition at line 257 of file tria_base.h. Definition at line 110 of file tria_base.cc. Number of locally owned active cells of this MPI rank. Definition at line 262 of file tria_base.h. The total number of active cells (sum of n_locally_owned_active_cells). Definition at line 267 of file tria_base.h. The global number of levels computed as the maximum number of levels taken over all MPI ranks, so n_levels()<=n_global_levels = max(n_levels() on proc i). Definition at line 273 of file tria_base.h. A set containing the subdomain_id (MPI rank) of the owners of the ghost cells on this processor. Definition at line 278 of file tria_base.h. A set containing the MPI ranks of the owners of the level ghost cells on this processor (for all levels). Definition at line 283 of file tria_base.h.
https://www.dealii.org/developer/doxygen/deal.II/structparallel_1_1TriangulationBase_1_1NumberCache.html
CC-MAIN-2020-34
refinedweb
152
60.61
DOM Testing React Applications with Jest Jest is a test runner for testing Javascript code. A test runner is a piece of software that looks for tests on your codebase and runs them. It also takes care of displaying the result on the CLI interface. And when it comes to Jest, it boasts of having very fast test execution as it runs tests in parallel. Key FeaturesKey Features - Easy configuration: For trivial cases, jestcan be run without any configuration. And if needed, Jest also provides a variety of ways to customize your tests by providing relevant parameters in package.jsonor as a separate jest.config.jsfile. jest --config jest.config.js Snapshot testing: Much has been written about this kind of testing. Jest provides a way to take "snapshots" of your DOM components, which is essentially a file with the HTML of the component's render stored as a string. The snapshots are human readable and act as an indicator of any DOM change due to component code changes. Mocking: Jest provides easy ways to auto-mock or manually mock entities. You can mock functions and modules which are irrelevant to your test. Jest also provides fake timers for you to control the setTimeoutfamily of functions in tests. Interactive CLI: Jest has a watch mode which watches for any test or relevant code changes and re-runs the respective tests. The watch has an interactive feature to it wherein it provides options to run all tests or fix snapshots while in watch mode itself. This is incredibly convenient. Coverage metrics: Jest provides some awesome coverage metrics right out of the box. It can be configured to show different levels of these metrics. Also, it is possible to cap the coverage to a threshold and bail or error out if the threshold is breached. Your tests, however, will complete very slowly if this feature is enabled. jest --coverage Setting Up JestSetting Up Jest In this post, we will look at installing Jest and getting started with DOM testing a React application. Theoretically, these techniques could be used with other view frameworks like deku, preact and so on. But there may be challenges with finding proper rendering utils which are mature and feature rich. Install jest for your project npm install --save-dev jest Add npm test script package.json { ... scripts: { ... test: 'jest --watch --verbose' } ... } DOM TestingDOM Testing In the following examples, we will mostly test trivial methods of general testing areas for a front-end application. This involves: - Basic DOM testing - Event simulation and testing - Window events simulation The sample code can be found at jest-blog-samples Basic DOM TestBasic DOM Test The app itself consists of a React component App which prints out a div with text "Hello jest from react". import React, {Component} from 'react'; import {render} from 'react-dom'; export default class App extends Component { render() { return <div>Hello jest from react</div>; } } render(<App/>, document.body) The above app is bundled using Webpack My first test is to check if the App component renders out the div. import React from 'react'; import App from '../index'; import {shallow} from 'enzyme'; describe('The main app', () => { it('the app should have text', () => { const app = shallow(<App/>); expect(app.contains(<div>Hello jest from react</div>)).toBe(true); }) }) Let's break this down. - We need to import react(for the purpose of using JSX) and the App(to instantiate and test) source. - Enzyme is JavaScript test utility that helps render React components for testing. Funny enough, a lot of people think that they should pick between React and Enzyme, and this tutorial will clarify this use. But for this tutorial, we will use it to shallow renderour Appcomponent and inspect the resulting tree. - We describe a test suite named "The main app." - Our first test is named "the app should have text." - We shallow render our component and store it in a const. - Add an expectassertion for the app to contain the expected div. Run the test with JestRun the test with Jest Assuming Jest is installed locally for the application and you have configured the test script in package.json as recommended, let's first start the test runner. npm run test This automatically finds the tests to be run. If required, one can specify testPathDirectories and testIgnorePaths. FAIL __tests__/index.test.js ● Test suite failed to run SyntaxError: /Users/pavithra.k/Workspace/jest-blog-samples/dom-testing/__tests__/index.test.js: Unexpected token (8:29) 6 | describe('The main app', () => { 7 | it('the app should have text', () => { > 8 | const app = shallow(<App/>); | ^ 9 | expect(app.contains(<div>Hello jest from react</div>)).toBe(true); 10 | }) 11 | }) at Parser.pp.raise (node_modules/babylon/lib/index.js:4215:13) Test Suites: 1 failed, 1 total Tests: 0 total Snapshots: 0 total Time: 1.07s Ran all test suites. We have our first failing test! The error mainly points at a syntax error. This is because our test code is written in ES2015 and it is not transpiled and ready for Node. This means Jest needs to preprocess our test code and imported source code to ES5. For this, Jest has great integration with Babel Add Babel SupportAdd Babel Support Add a .babelrc file to your application. In this, you can specify the Babel plugins required to transpile every ES2015 feature that your application uses. babel provides excellent presets for react and es2015. Presets are just a collection of relevant babel plugins for a logical set of transformations. Jest comes with an inbuilt Babel preprocessor, called babel-jest. Essentially, this just transpiles all the tests before running them, if there is a .babelrc file present in the application. Now on running Jest, we have: PASS __tests__/index.test.js The main app ✓ the app should have text (8ms) Test Suites: 1 passed, 1 total Tests: 1 passed, 1 total Snapshots: 0 total Time: 2.175s Ran all test suites. Test an event executionTest an event execution Using Enzyme, we can simulate events in our rendered components. We intend to build an App Component that is clickable. It should change the text displayed after the body is clicked on. To test this component, we need to: - Test for the default content on the first load. - Test for the first toggle of click and expect the change in text. - Test for the second toggle of click and again expect the change in text from the default. import React from 'react'; import App, {DEFAULT_TEXT, CLICKED_TEXT} from '../index'; import {shallow} from 'enzyme'; describe('The main app', () => { let app; beforeEach(() => { app = shallow(<App/>); }) it('the app should have text', () => { expect(app.text()).toBe(DEFAULT_TEXT); }) it ('should change text on click', () => { app.simulate('click'); expect(app.text()).toBe(CLICKED_TEXT); app.simulate('click'); expect(app.text()).toBe(DEFAULT_TEXT); }) }) Jest provides us with the following global testing methods describe(name,fn), it(name, fn), test(name, fn) These are the basic tools in a JavaScript testing environment's arsenal. describeis used to group a set of related tests together into a test suite. itor testis used to write your actual test where you can build an independent environment and test particular units or behavior. afterEach(fn)/beforeEach(fn)/afterAll(fn)/beforeAll(fn) These functions run before/after each test or before/after the test suite. You can group all your initialization and tear down the code in these hooks which are called during a test cycle. Keeping the above info in context, we write a beforeEach function which creates a shallow rendering of the Button App. After this, we write our tests for each of the behavior we expect out of the component. Enzyme has a bunch of helper APIs to inspect a rendered object. We can use these to check rendered DOM for desired changes. The corresponding React component would look like this: import React, {Component} from 'react'; import {render} from 'react-dom'; export const DEFAULT_TEXT = "Hello jest from react"; export const CLICKED_TEXT = "This has been clicked"; export default class App extends Component { constructor() { super(); this.state = { clicked: false } this.handleClick = this.handleClick.bind(this); } render() { return <div onClick={this.handleClick}>{this.state.clicked ? CLICKED_TEXT : DEFAULT_TEXT}</div>; } handleClick() { this.setState({ clicked: !this.state.clicked }) } } render(<App/>, document.body) window event testing Many times, our apps have the functionality to respond to outer stimuli like window events. We would like to test if our app behaves correctly under these conditions. For this case, we will have to mock the document and document.window object. We will use JSDOM API which provides a way of faking document and window objects. npm install --save-dev jsdom We are now going to use JSDOM to wire up our fake objects. Since this activity needs to happen before every test as any of them could want the window object, we will write this code at a setupFile which can be configured in Jest. Go to your package.json or jest.config.js file and add the following option jest: { "setupFiles" : ["<rootDir>/setupFile.js"] } import {jsdom} from 'jsdom'; const documentHTML = '<!doctype html><html><body><div id="root"></div></body></html>'; global.document = jsdom(documentHTML); global.window = document.parentWindow; global.window.resizeTo = (width, height) => { global.window.innerWidth = width || global.window.innerWidth; global.window.innerHeight = width || global.window.innerHeight; global.window.dispatchEvent(new Event('resize')); }; Note that the above code also gives us a div element with the ID root. We can now mount our react component onto this element for inclusion in the DOM. Also, resizeTo will not be defined by default in this fake window object. It will return undefined. Hence, we need to mock up the whole function as well. You will need to do this to trigger any event on the window from your test. Currently, I'm only bothered with the innerWidth and innerHeight parameters. Hence, I will only update them as part of the mock. Our App code for this test is a React component that responds to window's resize event. If the resize results in the innerWidth > 1024 then it changes its width to 350. If not width = 300. import React, {Component} from 'react'; export const WIDTH_ABOVE_1024 = 350; export const WIDTH_BELOW_1024 = 300; export const THRESHOLD_WIDTH = 1024; export default class App extends Component { constructor() { super(); this.state = { width: window.innerWidth > THRESHOLD_WIDTH ? WIDTH_ABOVE_1024: WIDTH_BELOW_1024 } this.handleResize = this.handleResize.bind(this); } componentDidMount() { window.addEventListener('resize', this.handleResize) } componentWillUnmount() { window.removeEventListener('resize', this.handleResize) } render() { return <div style={{width:this.state.width}}>My width is {this.state.width}</div>; } handleResize() { this.setState({ width: window.innerWidth > THRESHOLD_WIDTH ? WIDTH_ABOVE_1024 : WIDTH_BELOW_1024 }) } } For the above component, we again need to test if: - Default behaviour. - Resize to width > 1024 - Resize to width <=1024 import React from 'react'; import App, {WIDTH_ABOVE_1024, WIDTH_BELOW_1024, THRESHOLD_WIDTH} from './App'; import {mount} from 'enzyme'; describe('The main app', () => { let app; beforeEach(() => { app = mount(<App/>, {attachTo: document.getElementById('root')}); }) it('the app should have text', () => { var width = window.innerWidth > THRESHOLD_WIDTH ? WIDTH_ABOVE_1024 : WIDTH_BELOW_1024; expect(app.text()).toBe("My width is " + width); app.unmount(); }) it ('should change text on click', () => { window.resizeTo(1000, 1000); expect(app.text()).toBe("My width is " + WIDTH_BELOW_1024); window.resizeTo(1025, 1000) expect(app.text()).toBe("My width is " + WIDTH_ABOVE_1024); app.unmount(); }) }) Things to note are: - We can totally use the component's constants for testing. - Once you mountyour app using Enzyme, you will also need to unmount()it at the end of each test. window.resizeTowill trigger a change in window.innerWidth. The app code rerenders as per app logic and we can test for printed width. Key TakeawaysKey Takeaways - Testing is now way easier than before with Jest. - DOM testing with React needs Enzyme as a renderer util and JSDOM as a DOM helper. - Mocking can be tricky. It's fine to have hacky mocks. I hope this article has inspired you to take up testing your application with Jest and you will be motivated to add it to your dev workflow!
https://www.codementor.io/pkodmad/dom-testing-react-application-jest-k4ll4f8sd
CC-MAIN-2017-47
refinedweb
1,982
59.09
Does this warrant a 1.6.1? I'm actually quite stuck because of this bug.... I've been making extensive use of namespaces in my builds based on beta1! --DD > -----Original Message----- > From: Peter Reilly [mailto:peter.reilly@corvil.com] > > Thanks for the report. > Yes, the handling of the ant namespace is incorrect due > to a change between 1.6beta3 and 1.6.0. > I have placed a fix now. > Peter > > Dominique Devienne wrote: > > >Just starting updating from Ant 1.6 beta1 to 1.6 official, > >and I've having a bad surprise. It seems there's a problem > >handling the default Ant namespace when explicitly specified, > >or when used as the default namespace, at least for the <tstamp> > >task. > > > >Have I doing something wrong??? --DD --------------------------------------------------------------------- To unsubscribe, e-mail: dev-unsubscribe@ant.apache.org For additional commands, e-mail: dev-help@ant.apache.org
http://mail-archives.apache.org/mod_mbox/ant-dev/200401.mbox/%3CD44A54C298394F4E967EC8538B1E00F10248CC41@lgchexch002.lgc.com%3E
CC-MAIN-2015-35
refinedweb
146
63.25
time. There is the possibility to create an ADF read only dynamic table, that works more or less like the richfaces columns <rich:columns/> element. However, I needed an updatable table, but ADF doesn’t know such a component. I had to come up with a different solution. Have a look at the use case: I have a table in which I need to show revenue data of a department. I do not know how many years I have to show neither do I know if the user wants to see years or quarters. I have to start showing data from the first year the department has revenue, and grouped by year, untill the last year which might be 2010, but can also be 2004. The columnheader shows the year. When I click on the header, the table needs to refresh, and display more or less the same data except now grouped by quarter for the selected year, and eventually drill further down to month level. Preparation of the database. The datamodel for this post is not that complex. I used the HR schema which I extended with a couple of tables and types. The department table is linked to a new table which holds revenue data. There is also a timeframe table. This table contains definitions of timeframes. For this blog post it is not very relevant how the database part is implemented, however if you want to implement the database objects that I used, you can find the scripts to implement this datamodel and some data here:technology.amis.nl/blog/wp-content/uploads/prepareDatabase.txt The script will do several things; First it will create the two extra tables, add some data in these tables. After that it will create 4 database types that can hold the data that is needed to show in the ADF application. Finally a package is created to query the database tables and to return the data to the calling ADF-BC method. Creation of the ADF application. I started with the creation of a new ADF application in JDeveloper. Usually the next step would be the creation of of ADF Business Components. However, in this case the model project initially only contains an empty application module. There are no entity objects or view objects needed for this use case. What I do need, is a class that contains the methods to implement the functionallity. For that I made sure to create the applicationModuleImpl class as well. Preparation of the ADF Model. The ADF model implementation is partially based on two posts by colleague Lucas Jellema: use sql type objects part I and use sql type objects part II. Both describe how to base an ADF application on SQL types. For the example used in this post I created three new objects in the model project. These objects will represent the database types created in the previous step (prepareDatabase.txt). The objects are created in a separate package within the model (revenue.types). Next step is the creation of two simple methods in the applicationModuleImpl: One to get the revenue data form the database, and one to save the data. getRevenue(String deptId, String year) and saveRevenue(List<RevenueData> newRevenue). These are only very simple methods because the actual logic will be implemented in a class outside the application module, but in the revenue package where the types are in. The class that holds the getRevenue and saveRevenue is the RevenueHandler class. Initially I wanted to use the revenueDataTable object (revenueDataTable List <revenueData>). Saddly this object is not of a type that is supported by the ADF binding framework. Instead of I used a List<revenueData> (which in the end is exactly the same). With the methods on the application module in place it is time to publish them to the client by shuttling them from available to selected. Preparation of the ADF Bindings and ADF Faces. The final application contains just one page in which the updatable dynamic table is used. The page is called revenueOverview.jspx. Since the two methods (getRevenue and saveRevenue) are published to the client, they can now be dropped on the page as an “ADF parameter form”. By doing so a button is created to call the method and two input fields are created for the parameters. The button will not be used “as is” however, I mis-used it only to create the appropriate method binding in the page Definition file. The most important part however is the table that will display the revenue data. I used a backing bean (revenueOverViewBB) to process the data that is retrieved from the method. This bean will also take care of saving the data it. The backing bean also contains a method for the action Listener on the button that was created on the parameter Form. The actionListener now points to this new method in the revenueOverViewBB bean. Here is where the created method in the pageDef come’s in handy. I will make a call to this method in the getRevenue method via de binding framework. Simply get the binding container, find the operation binding, and execute it. The result of the method is used to set the value of the revenueData property which in fact is a List of <RevenueData>. DCBindingContainer bindings = getCurrentBindingContainer(); OperationBinding operationBinding = (OperationBinding)bindings.getOperationBinding("getRevenue"); operationBinding.getParamsMap().put("deptId", this.deptId); operationBinding.getParamsMap().put("year", this.year); setRevenueData((List<RevenueData>)operationBinding.execute()); With the getRevenue method in place, it’s time to create the table that will display the data. For that I first created a simple empty table tag () on the page, and via de binding editor the table was bound to the backingBean. This binding is necessary to programmatically refresh the table. The number of columns that are being displayed in the table depends on the number of periodData rows in the RevenueData object. Remember: Every set of Revenue data has dynamic number of periods. After getting the revenue data (see previous code fragment), I simply determine the number of periodeData. // if there is revenue data, set number of columns to add. // be aware; This number is used in an af:forEach which index is zero based if (revenueData.get(0).getPeriodData().size() > 0) { setNumberOfColsToAdd(revenueData.get(0).getPeriodData().size() - 1); } This I can now use in an to add the appropriate number of columns to the table. The for each will start at zero, and continue untill the last period has been processed. .................... <af:forEach <af:column .................... In the column header I want to display the name of the displayed period, for instanbce 2008 and 2009, or 2008-Q1……2008-4 or even 2008-1… 2008-12. To achieve this I created one more object; The columnsHeader object which is a simple object with only one property being “value”. package nl.amis.demo.view.type; public class ColumnHeader { private String value; public ColumnHeader() { } public void setValue(String value) { this.value = value; } public String getValue() { return value; } } In the backing bean the value for the column header is retrieved from the periodName, and for every period in the collection a header is added to the columnHeaders list. for (int i = 0; i < _RevenueDataLine.getPeriodData().size(); i++) { String _year = _RevenueDataLine.getPeriodData().get(i).getPeriodName(); ColumnHeader header = new ColumnHeader(); header.setValue(_year); columnHeaders.add(header); } In the <af:forEach> I now read the value of the columnheader for the appropriate period. Notice the columnHeaders[index.current] construction where I point to the correct columnHeader and get its value, simply by using the for each index. ................. <af:forEach <af:column <f:facet <af:commandLink text="#{revenueOverViewBB.columnHeaders[index.current].value}" ................. I use a nested column construction because that enables me to display more information, like percentage of total revenue, within the same period. In this example it is not really of any use, however, I kept it in for possible future use. In the end, the code for the dynamic table looks like this: <af:table <af:column <af:outputText </af:column> <af:column <af:outputText</af:outputText> </af:column> <af:forEach <af:column <f:facet <af:commandLink <af:setActionListener </af:commandLink> </f:facet> <af:column <f:facet <af:outputLabel </f:facet> <af:inputText <af:convertNumber <f:attribute </af:inputText> </af:column> </af:column> </af:forEach> </af:table> Saving the changed table content should be easy, however there is a catch. The binding framework does not support the object type, so I need to invoke the saveRevenue method directly on the application module. Be very careful with that! Because the value of input text fields are a direct reference to the periodeData collection, I just call the saveRevenue method and send the whole revenueData object to the application module. try { String EL = "#{data.DynaColsServiceDataControl.dataProvider}"; FacesContext fc = FacesContext.getCurrentInstance(); ValueBinding vb = fc.getApplication().createValueBinding(EL); DynaColsServiceImpl svc = (DynaColsServiceImpl)vb.getValue(fc); result = svc.saveRevenue(revenueData); } catch (JboException e) { JsfUtils.getInstance().addFormattedError(e.getMessage()); return result; } How the actual translation of the revenueData object to the database sql types in the applicationModule is performed is not relevant for this article, however this is described in the posts by Lucas Jellema that I mentioned earlier. Summary. This post combines some of the posts that where published here in the past, and adds some new stuff. It describes how to implement an updatable dynamic table. It is about creating an adf application based on plsql-api and reading/writing data as a collection of sql type objects. The post also describes how to use information from a backing bean (or actually the data collection returned by a method call) to determine the number of columns in a table, or the header of table columns. After reading this post you should have some understanding of the implementation of this functionality. The use case described in this post is a simplified version of the actual functionality implemented in my project. To get going you download a version of the workspace here. Add the jsf-impl.jar and the adf-faces-impl.jar before running. Potential issues after deployment to iAS. After deployment It works perfectly……………. on your embedded OC4J. If you deploy this to a IAS you will possibly encounter two bugs. The first one is described at. You will find a workaround here as well. It’s about using typed arguments in the application modules client interface. The second one is only encountered when using database access based on roles instead of instant direct grants. This bug is described on metalink Bug 4439777 – JDBC SQLException binding user ADT to PLSQL. A workaround and solution is described as well. This bug is about using DB object types in ADF. JDBC Driver does not support access via synonyms. You must use the fully qualified name. Hi Luc, Could you look at this post – this is similar question: Updated the post to add syntax highlighting. I now know how to use our source code publisher. Wow, that is quite an impressive story you wrote. For those not yet able to work with the ADF 11g Rich Client components, this is a very interesting option for creating some pretty cool functionality. Thanks for sharing this. Lucas
https://technology.amis.nl/2010/01/11/adf-10g-dynamic-columns-or-how-to-implement-an-updatable-dynamic-table/
CC-MAIN-2016-30
refinedweb
1,865
56.55
Toll free (in US and Canada): 888-426-6840 Access code: 6460142# Caller pay alternative number: 215-861-6239 Full list of global access numbers Call Time: 11:00 AM Eastern We discussed and decided the following points: Kolyan gave update on Trademark issue: it has been established no one can use "OSGi <anything>". They are now looking at something like "Enterprise Tools for OSGi Platform". Note the project's "short name" will be "Libra" and package namespace will be "org.eclipse.libra". Neil gave "heads up" that they want to "include" (or depend on) some Eclipselink bunles that do JPQL parsing in Dali features (for validation, and code completion). The PMC did want to make sure we are sensitive to adopters and not give any appearance to an end-user of "depending on EclipseLink runtime" but Neil said no, just a few bundles that would not surface in any end-user UI ... they would not see "eclipselink runtime" installed, or anything similar. We'll work out how to "reship" those bundles similar to how we would Orbit bundles, so we don't have to instruct users to "go and install the EclipseLink runtime". If those bundles are not singletons, would be especially easy. If they are singletons, will need to use extra care to make sure same version is used by WTP and EclipseLink. Back to meeting list. Please send any additions or corrections to David Williams. Back to the top
http://www.eclipse.org/webtools/development/pmc_call_notes/pmcMeeting.php?meetingDate=2010-12-07
CC-MAIN-2017-17
refinedweb
242
68.91
I'm coding a simple game (pick-up sticks) to get use to coding. Once you click begin, I have a popup Form that allows you to select the computer type (Basically type 1, 2, or 3). I open the second Form in a new thread and keep the first thread paused until the second is closed. namespace Sticks { public partial class guiMain : Form { //Set Public Namespace Variables public int compType = 0; //1=timid 2=greedy 3=smart public guiMain() { InitializeComponent(); } private void btnBegin_Click(object sender, EventArgs e) { btnHelp.Enabled = false; btnBegin.Enabled = false; System.Threading.Thread t = new System.Threading.Thread(new System.Threading.ThreadStart(ThreadProc)); t.Start(); this.Invoke((MethodInvoker)delegate { label2.Text = "Test"; }); while (t.IsAlive) { Thread.Sleep(100); this.Refresh(); } } } } The second Form contains buttons that are suppose to be able to set compType variable but I don't know how to update it from a Form on another Thread. namespace Sticks { public partial class Form1 : Form { public Form1() { InitializeComponent(); } private void button1_Click(object sender, EventArgs e) { //set compType to 1 Close(); } private void button2_Click(object sender, EventArgs e) { //set compType to 2 Close(); } private void button3_Click(object sender, EventArgs e) { //set compType to 3 Close(); } } } Now remember, I'm still starting out. I am using Microsoft Visual C# 2010 Express for my coding. I have Google this issue but only came up solutions that I did not understand how to implement. Thanks in advanced!
http://www.dreamincode.net/forums/topic/255216-question-updating-a-variable-from-form-on-2nd-thread/
CC-MAIN-2016-30
refinedweb
239
55.13
README remark-message-controlremark-message-control remark plugin to enable, disable, and ignore messages. ContentsContents - What is this? - When should I use this? - Install - Use - API - Syntax - Types - Compatibility - Security - Related - Contribute - License What is this?What is this? This package is a unified (remark) plugin that lets authors write comments in markdown to show and hide messages. unified is a project that transforms content with abstract syntax trees (ASTs). remark adds support for markdown to unified. mdast is the markdown AST that remark uses. remark plugins can inspect the tree and emit warnings and other messages. This is a remark plugin that gives authors the ability to configure those messages from markdown. When should I use this?When should I use this? You can use this package when you’re building a linter such as remark-lint. But you probably don’t need to, because remark-lint already exists and it uses this package. InstallInstall This package is ESM only. In Node.js (version 12.20+, 14.14+, or 16.0+), install with npm: npm install remark-message-control import remarkMessageControl from '' In browsers with Skypack: <script type="module"> import remarkMessageControl from '' </script> UseUse Say we have the following file example.md: <!--foo ignore--> ## Heading And our module example.js looks as follows: import {read} from 'to-vfile' import {reporter} from 'vfile-reporter' import {remark} from 'remark' import remarkMessageControl from 'remark-message-control' main() async function main() { const file = await remark() .use(() => (tree, file) => { file.message('Whoops!', tree.children[1], 'foo:thing') }) .use(remarkMessageControl, {name: 'foo'}) .process(await read('example.md')) console.error(reporter(file)) } Now running node example.js yields: example.md: no issues found APIAPI This package exports no identifiers. The default export is remarkMessageControl. unified().use(remarkMessageControl, options) Plugin to enable, disable, and ignore messages. options Configuration (required). options.name Name of markers that can control the message sources ( string). For example, {name: 'alpha'} controls alpha markers: <!--alpha ignore--> options.known List of allowed ruleIds ( Array<string>, optional). When given, a warning is shown when someone tries to control an unknown rule. For example, {name: 'alpha', known: ['bravo']} results in a warning if charlie is configured: <!--alpha ignore charlie--> options.reset Whether to treat all messages as turned off initially ( boolean, default: false). options.enable List of ruleIds to initially turn on if reset: true ( Array<string>, optional). All rules are turned on by default ( reset: false). options.disable List of ruleIds to turn on if reset: false ( Array<string>, optional). options.sources Sources that can be controlled with name markers ( string or Array<string>, default: options.name). SyntaxSyntax This plugin looks for comments in markdown (MDX is also supported). If the first word in those comments does not match options.name, the comment is skipped. The second word is expected to be disable, enable, or ignore. Further words are rule identifiers of messages which are configurated. In EBNF, the grammar looks as follows: marker ::= identifier whitespace+ keyword ruleIdentifiers? identifier ::= word+ /* restriction: must match `options.name` */ keyword ::= 'enable' | 'disable' | 'ignore' ruleIdentifiers ::= word+ (whitespace+ word+)* whitespace ::= ' ' | '\t' | '\r' | '\n' | '\f' word ::= letter | digit | punctuation letter ::= letterLowercase | letterUppercase punctuation ::= '-' | '_'' Which keyword is used defines how messages with those rule identifiers are handled: disable The disable keyword turns off all messages of the given rule identifiers. When without identifiers, all messages are turned off. For example, to turn off certain messages: <!--lint disable list-item-bullet-indent strong-marker--> * **foo** A paragraph, and now another list. * __bar__ enable The enable keyword turns on all messages of the given rule identifiers. When without identifiers, all messages are turned on. For example, to enable certain messages: <!--lint enable strong-marker--> **foo** and __bar__. ignore The ignore keyword turns off all messages of the given ruleIds occurring in the following node. When without ruleIds, all messages are ignored. Messages are turned on again after the end of the following node. For example, to turn off certain messages for the next node: <!--lint ignore list-item-bullet-indent strong-marker--> * **foo** * __bar__ TypesTypes This package is fully typed with TypeScript. An extra Options type is exported which models the interface of the accepted options. CompatibilityCompatibility Projects maintained by the unified collective are compatible with all maintained versions of Node.js. As of now, that is Node.js 12.20+, 14.14+, and 16.0+. Our projects sometimes work with older versions, but this is not guaranteed. This plugin works with unified version 6+ and remark version 7+. SecuritySecurity Use of remark-message-control does not involve rehype (hast) or user content so there are no openings for cross-site scripting (XSS) attacks. Messages may be hidden from user content though, causing builds to fail or pass, or changing a report. RelatedRelated remark-lint— plugin to lint code style mdast-comment-marker— mdast utility to parse comment markers ContributeContribute See contributing.md in remarkjs/.github for ways to get started. See support.md for ways to get help. This project has a code of conduct. By interacting with this repository, organization, or community you agree to abide by its terms.
https://www.skypack.dev/view/remark-message-control
CC-MAIN-2022-05
refinedweb
847
52.56
Bummer! This is just a preview. You need to be signed in with a Basic account to view the entire video. Redux Initial Setup - index.js3:20 with Guil Hernandez and Beau Palmquist In this video, we'll create an index.js file that will act as the entry point for the application. We'll also turn the app.js file from the React Basics course into a Scoreboard component that will be exported as a module. New Concept ESLint: A tool that analyzes JavaScript for syntactical, logical, and conventional issues based on an .eslintrc file. It defines what rules should be used to evaluate the code. This process of analyzing JavaScript is known as linting. Further Reading - 0:00 In the previous video, we installed redux and - 0:03 react-redux NPN packages for our project. - 0:06 So in this video, we're going to add an index.jsfile that - 0:09 will act as our entry point for the application. - 0:12 But we're also going to turn the App.js file from the reactor basic's - 0:15 course into a scoreboard component that will be exported as a module. - 0:20 Then we'll fire up the application and - 0:21 demonstrate that the scoreboard application still works. - 0:24 So let's start by adding a new file to our application called index.js. - 0:30 In the new file, add the following imports, - 0:35 import React from 'react'. - 0:38 Import { render } from 'react-dom'. - 0:46 And import Scoreboard from './Scoreboard'. - 0:55 Next let's render the Scoreboard component using the render method like so. - 1:12 Currently our application will not run because we don't have a Scoreboard - 1:15 component. - 1:16 So let's remedy this by renaming this App.js file to Scoreboard.js. - 1:26 Then open up Scoreboard.js and - 1:29 import React And - 1:34 at the top of the file, you'll also see a const Application assignment. - 1:39 So let's change Application to Scoreboard. - 1:46 And finally to export the component, - 1:50 at the very bottom of the file, add export default Scoreboard. - 1:58 Before we start the dev server, - 2:00 let's make sure that all of our additional NPN packages have been installed. - 2:04 To do this, bring up your terminal or console, - 2:07 make sure you're in the project directory for this course, then run npm install. - 2:14 This will ensure that all the packages we need have been installed. - 2:17 Now type npm start to launch the app. - 2:21 Now some of you may have noticed here in the console output - 2:24 that a pre start script was executed before our dev server was launched. - 2:29 The prestart script is the ES let's step I mentioned earlier that will scan our - 2:33 JavaScript code for errors and report anything it finds in the terminal or - 2:37 console window. - 2:38 Once the server has started up, open up a browser and - 2:40 navigate to local host org 8080 and you'll see that the Scoreboard application from - 2:45 the React Basics Core still works, as you would expect. - 2:48 You can add players, remove players, - 2:52 adjust scores, and start and stop the stopwatch. - 3:03 Now that we've taken care of the project setup, the next step is to - 3:06 take the existing scoreboard app and break it into discrete components and modules. - 3:11 This will make working with Redux much easier and will be a good step towards - 3:15 isolating code and responsibilities within our application moving forward. - 3:19 See you in the next stage.
https://teamtreehouse.com/library/redux-initial-setup-indexjs
CC-MAIN-2017-30
refinedweb
639
73.68
Cloud Storage enables your application to serve large data objects such as video or image files, and enables your users to upload large data files. In the Python 2 runtime, App Engine provides its own client library for writing and reading objects in Cloud Storage. This App Engine library is not available in newer App Engine runtimes, including the Python 3 runtime. If your Python 2 app uses the GoogleAppEngineCloudStorageClient library, you need to migrate to the Cloud Client Library for Cloud Storage before you can run the app in the Python 3 runtime. Note that you only need to migrate your app to use a new client library. All of the data objects and permissions in your existing Cloud Storage buckets remain unchanged, and you can access your existing buckets using the new client library. Comparison of the App Engine and Cloud Client Libraries Similarities: The Cloud Client Library supports all of the Cloud Storage features enabled by the App Engine client library, such as reading, writing, removing, and listing objects. Migration should require only small changes to your code. The Cloud Client Library also supports additional features, such as creating and labeling buckets, and retrieving older versions of objects. Differences: In the App Engine library, the function that retrieves a list of objects works asynchronously. The Cloud Client Library doesn't provide an asynchronous function for listing objects, though you can use paging and iterate through a small set of objects. The App Engine client library requires you to use access control lists (ACL) to control access to buckets and objects. However, Cloud Storage and the Cloud Client Library support two systems for granting users permission to access your buckets and objects: ACLs and uniform bucket-level access. Uniform bucket-level access provides a simpler, consistent access control experience across all of your Cloud resources. All ACLs that you used with the App Engine client library remain in effect for your existing buckets after you migrate to the Cloud Client Library, and you can continue to use ACLs if needed. If uniform bucket-level access meets your needs, we recommend you use this simpler system for any new buckets you create. While you can convert existing buckets to use uniform bucket-level access, this may require significant changes to how your app secures access to its storage objects. Code samples: Basic storage operations using the App Engine APIs Basic storage operations using the Cloud client library for Cloud Storage Before you start migrating Understanding Cloud Storage permissions By default, your app's default service account has read and write privileges to the buckets in your project, and it has full rights to the objects it creates, both before and after migration. If you used a different service account or user account to secure access to your Cloud Storage buckets and objects, be sure you continue to use the same accounts and authentication techniques before and after migration. Overview of the migration process To migrate your Python app to use the Cloud Client Library for Cloud Storage instead of the App Engine client library: Install the Cloud Client Library for Cloud Storage. Update your code to use the Cloud Client Library. - Deploy your app to App Engine. Installing the Cloud Client Library for Cloud Storage To make the Cloud Client Library for Cloud Storage available to your app when it runs in App Engine: Create a requirements.txtfile in the same folder as your app.yamlfile and add the following lines: google-cloud-storage==1.24.1 googleapis_common_protos We recommend you use the 1.24.1 version of the Cloud Storage. Updating your code to use the Cloud Client Library Creating a Cloud Storage Client To use the Cloud Client Library for Cloud Storage, create a Client object. The client contains credentials and other data needed to connect to Cloud Storage. For example: from google.cloud import storage client = storage.Client() In the default authorization scenario described previously, the Cloud Storage client contains credentials from App Engine's default service account, which is authorized to interact with your project's buckets and objects. If you aren't working in this default scenario, see Application Default Credentials (ADC) for information on how to provide credentials. Using Cloud Client Library methods The following table summarizes which methods from the Cloud Client Library to use when implementing specific Cloud Storage features. Testing your updates You can test your updates to your app in a local environment, but all Cloud Storage requests must be sent over the internet to an actual Cloud Storage bucket. Neither App Engine nor Cloud Storage provides a Cloud Storage emulator. For more information about testing Python 2 apps, see Using the local development server. For more information about testing Python 3 apps, see Testing and deploying your application. Deploying your app Once you are ready to deploy your app, you should: Test the app on App Engine. If the app runs without errors, use traffic splitting to slowly ramp up traffic for your updated app. Monitor the app closely for any issues before routing more traffic to the updated app.
https://cloud.google.com/appengine/docs/standard/python/migrate-to-python3/migrate-to-storage-apis
CC-MAIN-2021-21
refinedweb
852
59.33
CSS for an encapsulated web Massimo Artizzu Oct 11 '17 Or, how to (d)evolve from convoluted coding conventions to simpler rule definitions. CSS is 20 years old, hooray! And we're still struggling with the same old problems of CSS at scale. Boo. Things are changing, though. To be fair, the spec for a real change came a while ago already, but this kind of things take their time to become standard and widespread. The problem The biggest concern about Cascading Style Sheets is... the cascade. In short, a plethora of rules that could all influence each other, also because they've all a global scope. It soon became evident that things like these don't work very well: Name clashing. Specificty races. Unimportant declaration marked as !important out of frustration. Styles that spill everywhere. Even differences among the browsers didn't help. At all. The first solutions As web pages and applications grew in complexity, some best practices emerged: - no more ID's in selectors; - no "brute-force" values (as that z-index); - no more !importantor exotic CSS properties; - no strict bindings between CSS and the DOM structure; - keep the specifity as low as possible, ideally 0.1.0; - use namespaces for your classes. It still wasn't enough, so CSS - the simplest of all web languages - got its coding style guides too. Conventions like OOCSS, SMACSS, Atomic CSS and BEM were born. The last is arguably the most successful one, so I'll refer to it during the article, although the same concepts apply more or less for the others. Scoping the style Let's say we have our card prototype: And we back this design using this HTML structure: <div class="content-card"> <header class="content-card__header">...</header> <picture class="content-card__picture"> <img src="..." class="content-card__image"> </picture> <section class="content-card__abstract"> <p>...</p> </section> <footer class="content-card__footer"> <button type="button" class="content-card__favorite">☆</button> <nav class="content-card__actions"> <a href="#" class="content-card__action">Read later</a> <a href="#" class="content-card__action content-card__action--view">View</a> </nav> </footer> </div> Behold the code in all its glorious BEM verbosity! Using BEM allowed us to name our classes clearly, without caring (much) about name clashing, specificity headaches and all the rest. But it came at the cost of lengthy class names, not always easy to read. Even using preprocessor rule nesting tricks like this one has its disavantages: .content-card { ... &__action { ... &--view { ... } } } Now, if I want to track down the rule for the class content-card__action, I can't just copy the name and paste in the search box of my editor of choice. Not really a deal breaker but still annoying. Scoped styles What we really need is to have a consistent - possibly native - system to apply our styles locally. BEM shows the way to do it but it's still a manual operation, which means it's prone to errors. One of the first native approaches (back in 2012) to the problem came in the form of the scoped attribute of <style> elements: <div> <style scoped> div { width: 30em; } p { color: #333; } <style> <p>I look black but I'm actually nicer</p> </div> <p>I'm pitch black</p> Thanks to that, the rules defined inside affect only on the <style>'s parent and all of its descendants. While it seemed cool and right on track, it didn't gain much momentum (Chrome removed its support on v36), because it didn't solve the other, big problem of shielding our styles from external influences. In short, global and parent scoped style rules can still override our local ones. The (almost) definitive solution When web development became aware that we need to split our interfaces into components and develop with this concept in mind, the path was set - although not easy to follow. Web Components are a concept that actually predates scoped styles, as they've been introduced in 2011. But they had a long journey before a decently widespread adoption, and also API maturity. But together with the concept of Shadow DOM, it came the encapsulated style sheet. This means that not only our component's style sheet with have a local effect, but also that external one will have no effect to our component! The main implication of this is that we have no need for a namespace for our classes, as there's no risk of name clashing or style spilling anymore. In short, we can reduce the HTML for our picture card like this: <div class="content-card"> <header class="header">...</header> <picture class="picture"> <img src="..." class="image"> </picture> <section class="abstract"> <p>...</p> </section> <footer class="footer"> <button type="button" class="favorite">☆</button> <nav class="actions"> <a href="#" class="action">Read later</a> <a href="#" class="action action--view">View</a> </nav> </footer> </div> That's already much better! But we can do even more if we fully take advantage of the encapsulation. This means that we can also ditch the convention of using only selectors of specifity 0.1.0 for our rules. Should we avoid using classes, then? Not really: we can use them only when necessary. Because not only our style sheets are now encapsulated, but the whole concept of componentization of our interface naturally brings the habit of writing small things: small JavaScript modules, small HTML and small CSS. This implies that our semantically chosen HTML elements are possibily the only ones in the whole component: we'll presumably have only one header, one footer, or maybe multiple li but for just one type of element. In the end we can shrink our markup even further: <content-card> <header>...</header> <picture> <img src="..."> </picture> <section> <p>...</p> </section> <footer> <button type="button">☆</button> <nav> <a href="#">Read later</a> <a href="#" class="view">View</a> </nav> </footer> </content-card> Now this is even better, readable and clear. We even shaved some meaningful bytes from the payload. And if the semantic meaning (for the developers) has vanished, we can use ID's again, as they're encapsulated too. But I want to alter a component's appearance from the parent! It's a legitimate need, no doubt. Unofrtunately this issue still lacks a definitve solution. There has been a proposal to introduce a "shadow-piercing" descendant combinator in CSS (the >>> or /deep/ combinator), but has been deprecated since then as it's been considered as "too powerful" for the intent. Indeed, it would have reduced CSS encapsulation to mere scoped CSS. The only way to change a custom element's appearance is by using CSS "variables" (or better "custom properties"): /* styles.css */ :root { --base-button-color: rebeccapurple; } /* base-button's style */ button { background-color: var(--base-button-color, midnightblue); } This is a little bit inconvenient as it's deemed too "fine-grained", which also could lead to name clashing (again!) when it comes to naming said custom properties. This is why Tab Atkins proposed the @apply rule, that essentially meant mixins in CSS. While that sounded great, it didn't solve other problems in the matter and the proposal has been abandoned by its champion and we won't probably see it spec'ed. The linked article mentions the ::part and ::theme pseudo-elements that could finally solve the problem, but there's still quite some road to walk. In the end, we have to stick to custom properties for now. How can I use encapsulated CSS though? Web Components have a complex API that might put off some developers. That's why a library like Polymer has been born. Even if Polymer or even Web Components are out of question for your development needs, there are frameworks that let you take advantage of encapsulation for styles. When Web Components or even just Shadow DOM are ruled out, style encapsulation can be emulated. This usually means generating an attribute with a random name for each component, and attach it to every style rule and DOM element of the component. So, starting with the following markup and style sheet: <div> <h2>Hello!</h2> <p>I will get a random attribute even though I'm unstyled</p> </div> h2 { font-size: 150%; } They will be translated into something like this when processed: <!-- Inside the <head> --> <style type="text/css"> h2[data-v-4c74d97c] { font-size: 150%; } </style> <!-- In the page --> <div data-v-4c74d97c> <h2 data-v-4c74d97c>Hello!</h2> <p data-v-4c74d97c>I will get a random attribute even though I'm unstyled</p> </div> Another component will get a different generated attribute, and that's how style rules get scoped. Emulating style encapsulation is slower when it comes to applying the styles, but it won't probably affect your page significatively. So let's see how the most common front-end libraries handle style encapsulation. Vue Vue per se is agnostic when it comes to CSS, but Vue as an ecosystem offers vue-loader, a loader for Webpack that leverages on a PostCSS plugin to achieve emulated encapsulation. The loader gets included using vue-cli with a Webpack-based scaffolding template. In order to encapsulate your styles, you have to define a component's style using the <style> with the scoped attribute. Now this is a bit inconvenient, as it's the same syntax from the old spec for element-scoped styles, without having the same effect. In fact, descendant components will not be styled by the scoped style, opposed to the original proposal of <style scoped>, because they will miss the generated attribute defined for the ancestor. But still, encapsulation. Angular Angular is more sophisticated when it comes to encapsulation, as it let us decide among no encapsulation at all, emulated (which is the default one) and even taking advantage of Shadow DOM's native encapsulation. I guess this is possible also because Angular does not replace the host element from the page, de facto replicating what Web Components do. React/Preact Although React and its lesser known (but not less powerful) alternatives like Preact don't offer native solutions for CSS, there's a plethora of libraries that allow to style our components, from CSS Modules to Styled Components, from Glamor to JSXStyle. These libraries are all part of the great CSS-in-JS subject, and they all are based on the concept of encapsulated style, although mostly based on an emulated system. Conclusions CSS comes from a long way. It's probably one of the simplest language around, it's simple to use but this doesn't mean it's simple to manage. With the advent of large-scale web applications, the issue became evident, but solutions didn't. After long, struggling years, with style encapsulation we've managed to scale down the problem dramatically. Not only it seems a step in the right direction, but it feels like it. So it doesn't matter what library will you use for your next project, the advice is to choose one that allow style encapsulation. Which mainstream programming language has the ugliest syntax? Regardless of the language's usefulness, which language fails the beauty contes... Which contentious opinions in programming actually matter? charliedeveloper - Jun 14 Welcome to the Mapbox Developer community, Pride style Erin Quinn - Jun 15 Looks like Vue.js will likely out-star React over the weekend! Jason Yu - Jun 14 Do someone here have experience in building commercial Jekyll templates/themes? Jovan Savic - Jun 14
https://dev.to/maxart2501/css-for-an-encapsulated-web-7fo
CC-MAIN-2018-26
refinedweb
1,906
62.78
On Sat, 2006-02-04 at 12:45 +0100, Eric Tanguy wrote: > Le samedi 04 février 2006 à 10:24 +0000, Paul Howarth a écrit : > > On Fri, 2006-02-03 at 20:45 -0600, Rex Dieter wrote: > > > Eric Tanguy wrote: > > > > Le vendredi 03 février 2006 à 16:51 -0600, Rex Dieter a écrit : > > > > > > >>Eric Tanguy wrote: > > > > > > >>>>Something like this ought to do the trick: > > > >>>>%if "%{?fedora}" > "4" > > > >>>> > > >>>>%endif > > > >>>>%configure > > > > > > >>>It seems it's not taken into account for devel. How to know what > > > >>>%{?fedora} returns for devel ? > > > >> > > > >>AFAIK, on devel, %fedora expands to 5 in buildsys-macros > > > > > > > Maybe in buildsys but i'm trying to build it on a fc4 box using mock : > > > > mock -r fedora-5-i386-core foobar.spec > > > > > > Of course it's not. That macro only gets defined if using the FE > > > buildsystem (and/or) building from FE's Makefiles, ie, 'make mockbuild'. > > > I had assumed this was what you were referring to in your original post. > > > > He said he was using mock, and mock pulls in the required macro > > definitions by default courtesy of the [groups] repo, which points to > > > > > > So a mock build should be the same as an FE buildsystem build in this > > respect. The root.log from the mock build should show buildsys-macros > > being installed. > > > > Paul. > > The problem is : > cd /var/lib/mock/fedora-development-i386-core/root/etc/rpm > ls > nothing > and i would be able to find macros.disttag containing : > %fedora 5 > %dist .fc5 > > So it seems mock build is not the same as an FE buildsystem build or i > do something wrong ? Is there no reference to buildsys-macros in /var/lib/mock/fedora-development-i386-core/result/root.log? > Or it's because in FE buildsystem i do a make tag before requesting a > build ? No, that's a cvs tag, nothing to do with dist tag. Paul.
http://www.redhat.com/archives/fedora-extras-list/2006-February/msg00286.html
CC-MAIN-2015-14
refinedweb
310
71.34
Q Math MT5 This is the MT5 version. analysis Just download it and try it, it's free. standard Bollinger Bands indicator is calculated only on the basis of 7 price constants and a simple Moving Average (SMA). This modification allows calculating the indicator on the basis of any combination of 4 basic prices: Close, Open, High and Low. An additional convenience of the indicator is the possibility to select one of the six methods of averaging: simple ( SMA ), exponential ( EMA ), smoothed ( SMMA ), linear weighed ( LWMA ), double exponential ( DEMA ), triple exponential ( TEMA It is the very same classic Stochastic indicator, but with a little twist: NO NAME and data is shown in the sub window. It could be stupid, BUT, if you are running out of space in Micro windows like Mini Charts, where the indicator's name is totally useless, you came to the right place. And that's it! I know it seems stupid but I needed the classical version of Stochastic indicator without that annoying name on my Mini Chart, so I did it that way... The original formula is right from Metaquote's This simple indicator plots 4 channels based on ATR Multiplier. ATR timeframe and periods can be chosen. It can helps in: Choosing SL/TP values when entering a trade Identify breakouts/false breakouts Give more accuracy to candlestick analysis If you set to 0 a multiplier for a specific bands, that band will be not plotted on chart. It provides 9 buffers: ATR value and each band value. Remember we are here not to grow fast, but to grow with you. The robot is based on 2 harmonic patterns, in next updates we will install more patterns, at the moment it contains Gartley and Butterfly. The Robot searches in multiple pairs and temporalities at the same time. The ea is in the stabilization phase and has been giving a very high return (30%) for 1 month from an account of more than $ 2000 in ETHEREUM. Here you can follow the EA entries in real
https://www.mql5.com/pt/market/product/55873
CC-MAIN-2020-45
refinedweb
341
59.94
dll created with qt5 used for mt5(metaquotes5) gives the error "unresolved import function call" i'v created a dynamic dll using qt5 c++ library. if the dll's functions won't use the Qt specicify things like QString QList ...etc... and erything goes well. but when the dll's function refernece to some Qt "inner" things as i describe earily. and the mt5 show the error "unresovled import function call",and the dll can not be loaded! simple codes will say clearly: for example : the dll has a function add: extern "C"{ __declspec(dllexport) int __stdcall add(int a,int b){ return a+b; } } and the dll did well ,it can give the expected results; but if i add something Qt specify for example: #include <QString> extern "C"{ __declspec(dllexport) int __stdcall add(int a,int b){ QString my_test_str="no actual functional capability,just for testing"; return a+b; } } as you can see ,i add the QString; and the dll will not be loaded,and throw errors. so the question is: can the qt5 make a dll used for a C interface instead c++? if so what i should do? thanks What deserves to be mentioned : i can use any stl in the dll function without any error just like this: #include <vector> extern "C"{ __declspec(dllexport) int __stdcall add(int a,int b){ std::string string="my test string"; std::vector<const char*> my_test_vector[3]; my_test_vector.push_back("my test string"); return a+b; } } the dll also works. it's very strange. QString vs std::string, qtl vs stl....that's a little interesting! Hi and welcome to devnet, This article might give you the clues you need. Hope it helps @SGaist thanks for your help; but it doesn't work. because the QLibray is aslo the Qt Specifiy. Using any Qt inner object will throw errors. Did you saw that __declspechas to change when you build the library and when someone wants to use it ? @guzuomuse said in dll created with qt5 used for mt5(metaquotes5) gives the error "unresolved import function call": mt5 Your Lib is not loaded while using Qt specific keywords because your lib is depends on other Qt dlls, use dependency injector or something to check on what dll's your dll is dependent. hi,what does the "__declspec has to change" mean? maybe you say: the exported dll's function names changed? they haven't changed; keep the same name as i defined; See the Creating Shared Libraries chapter of Qt's documentation. @TobbY the only additional dependency dll is Qt5Core.dll; and it has been loaded together with the dll when imported. mank thanks to you both; finally, i solved this problem. in fact it's very easy. @TobbY is absolutely right. just put the dependencies dlls(here the only one Qt5Core.dll) and my dll in the same dir. i code the absolute path in the mql5,but left the dependency dll on the mt5's library before. that's not right. i'm so stupid! so i should give trust on the qt. it's my fault
https://forum.qt.io/topic/94517/dll-created-with-qt5-used-for-mt5-metaquotes5-gives-the-error-unresolved-import-function-call/10
CC-MAIN-2019-04
refinedweb
517
74.39
import "cmd/internal/gcprog" Package gcprog implements an encoder for packed GC pointer bitmaps, known as GC programs.. A Writer is an encoder for GC programs. The typical use of a Writer is to call Init, maybe call Debug, make a sequence of Ptr, Advance, Repeat, and Append calls to describe the data type, and then finally call End. Append emits the given GC program into the current output. The caller asserts that the program emits n bits (describes n words), and Append panics if that is not true. BitIndex returns the number of bits written to the bit stream so far. Debug causes the writer to print a debugging trace to out during future calls to methods like Ptr, Advance, and End. It also enables debugging checks during the encoding. End marks the end of the program, writing any remaining bytes. Init initializes w to write a new GC program by calling writeByte for each byte in the program.. Repeat emits an instruction to repeat the description of the last n words c times (including the initial description, c+1 times in total). ShouldRepeat reports whether it would be worthwhile to use a Repeat to describe c elements of n bits each, compared to just emitting c copies of the n-bit description.. Package gcprog imports 2 packages (graph) and is imported by 4 packages. Updated 2020-06-08. Refresh now. Tools for package owners.
https://godoc.org/cmd/internal/gcprog
CC-MAIN-2020-29
refinedweb
236
71.75
US4926375A - Multiple nodes broadcast communication method with receiver identification by bit position in transferred massage - Google PatentsMultiple nodes broadcast communication method with receiver identification by bit position in transferred massage Download PDF Info - Publication number - US4926375AUS4926375A US07046992 US4699287A US4926375A US 4926375 A US4926375 A US 4926375A US 07046992 US07046992 US 07046992 US 4699287 A US4699287 A US 4699287A US 4926375 A US4926375 A US 4926375A - Authority - US - Grant status - Grant - Patent type - - Prior art keywords - data - global - pc - service - Abstract Description The present invention relates to a data communication system and, more particularly, to a method and apparatus for sharing of control and status information among a number of loosely coupled control and monitoring nodes of a distributed control system. In various systems employed within the manufacturing, chemical processing and power generation industries and in other applications, equipment is coupled together by a communication network to form a distributed control system. Such communication allows for (1) transfer of data and (2) control of connected equipment. The network employs a broadcast bus to physically interconnect control and monitoring nodes associated with a physical process. A node, as used herein, refers to any device connected to the communication network for either supplying data (a Source) or receiving data (a Sink). A node may act as both source and sink. For example, one node may be a controller such as, for example, a General Electric Company Programmable Controller (PC) distributed under the brand `Series Six`, which PC may provide commands to operate equipment (source) and receive data (sink) indicative of the status of the equipment. The communication network for factory control may include an overall broadband Local Area Network (LAN) supporting a number of smaller networks or "subnets". FIG. 1 illustrates an exemplary system in which a subnet carrier band LAN interconnects several devices forming a workcell. In such systems, it is important that each node be independent and perform its assigned task regardless of the other nodes. The communication network provides a data transfer function allowing each node to receive or generate data of interest. In some networks, data was sent to each node on the network, in sequence, with a separate message to each. In other words, sinks must "poll" sources; i.e., the sink node sends, in turn to all source nodes, a request message, receiving the data in separate response messages from each. More efficient networks use a broadcast technique in which each node sends all data onto the communication bus and all other nodes receive the data. The receiving nodes then evaluate and retain only data of interest to each. A disadvantage of the above described broadcast technique is that each node must read each message on the network or subnet to identify data of interest to it. Since some data may be transmitted at high repetition rates and be part of a large group of data, time required for each node to identify and process selected data may affect system performance. Another disadvantage is that, when nodes are added or deleted, existing nodes need to be changed to extend their knowledge of the system to these new nodes. It is an object of the present invention to provide a communication system which overcomes the above mentioned disadvantage of prior data transfer techniques. It is another object of the invention to provide a data sharing mechanism which allows a receiving node to identify and retain data of interest in an efficient manner. It is a still further object to provide a data sharing mechanism which is easily implemented at user nodes. It is yet another object to provide a data sharing mechanism which permits each node's application to be changed independently of other nodes. The above and other objects, features and advantages are attained in a communication network (hereinafter "Subnet") in which a global data service provider is associated with each node on the Subnet. The service provider allows source and sink nodes to share data by having the current state of the data copied on a periodic basis from a source node to one or more sink nodes. A node becomes a source by directing the associated service provider to transfer data from it to one or more sink nodes. The source node specifies: the direction of transfer (i.e., output), the data priority, whether or not the data is "global" (i.e., intended for all or a group of nodes) and a group address, a variable name for the data (for example, "TEMP"), the update rate for sourcing the named data and a local reference for where to get the data to be output. Thereafter, the provider formats a message for transfer onto the communication network. The formatted message block includes the priority, the destination or group address, the source address, an indicator of message type, a format key, the number of variables in the message and the list of variables, each accompanied by its name, type, length and value. At each other service provider coupled to the network, the formatted message is detected and its group address read. If the provider is among the addressed group, the message is searched for a variable of interest to the provider. The node associated with the provider will have previously identified variables of interest via an appropriate global input request, so that the provider has in memory a variable name for which to search. Similar to a source, a node becomes a sink by directing the associated service provider to receive global data. The sink specifies the direction of transfer (i.e., input), the group address, the name of the variable, the local reference where the global data is to be stored, a local reference where status is to be repeated, and a "timeout" value. When a variable of interest is found, the provider stores the source address, format key and a corresponding offset from the message start to the location in the message of the data of interest. Thereafter, whenever a message is received for that destination address, the provider checks the source address and format key and if they match those of a previously received message, the provider uses the stored offset value to index directly to the data of interest. In this manner, high data sampling rates may be sustained on the network. The actual data source node need not be known to the sink or receiving node since the data is located by reference to a symbolic variable name. This allows a node's application to be changed without affecting any other node. In addition, the location of data within each node is immaterial and may be changed without affecting other nodes. This allows "backup" or redundant sources for critical variables, whereby, if a primary source fails and no longer sends the variable, its backup will recognize this fact and begin sourcing that variable. Other sinks, after initially "timing out" will discover the presence again of the named variable and again begin receiving it. In an exemplary embodiment, the service provider comprises a command processor coupled via a data link to an application device (node) such as a PC. The command processor receives and processes commands from the node to transmit or receive data (variables). A variable state table is associated with the processor for storing variables and for keeping track of whether variables of interest have been found. Data transfer means is coupled to the node for coordinating the actual data transfer between a communication bus and the node. This control is effected by a bus access controller and a data buffer. The bus access controller responds to either a send or receive means, each coupled to the data transfer means, for moving data between the bus and node. A timer initiated by the command processor provides timing signals to the various functions for controlling data flow. For a better understanding of the present invention, reference may be had to the following detailed description taken in conjunction with the accompanying drawing in which: FIG. 1 is an illustration of a factory communication systems hierarchy; FIG. 2 is a functional block diagram of a communication sub-network in one form of the present invention; FIG. 3 is a functional block diagram of a global data service provider in an exemplary form of the present invention; FIGS. 4A-4G are a sequence of flow charts illustrating a preferred method of communication processing in accordance with the present invention; FIGS. 5A-5C illustrate protocol exchanges between peer Logical Link Control (LLC) users on a subnet; FIG. 6 illustrates a processing cycle in an exemplary programmable controller; FIG. 7 illustrates the transfer header structure in the PC of FIG. 6; FIGS. 8A-8B illustrate a protocol sequence between a Subnet PC driver and a PC; and FIG. 9 shows types of data flowing through a global data service provider. FIG. 1 illustrates one form of factory communication systems hierarchy. In this system, high level control functions and resources, such as, for example, an overall factory control 12, an inventory monitor 14 and a programmable device support system 16, are coupled together by means of a plant wide communication system using a broadband local area network (LAN) 18. The LAN 18 also couples multiple work cells, one of which is indicated generally at 20, to the higher level control and reporting functions through a gateway 22. At each cell 20 there may be a large number of other control or operator stations 24 accessing a cell communication network or subnet carrierband LAN 38 (hereinafter "Subnet"). Coupled to the Subnet are various controllers, such as, for example, a programmable controller 26, a numerical control 28 and a robot control 30, each directing an associated hardware function such as material handling 32, machine tool 34 and robot 36. The present invention provides LAN communication capability among and between controllers 26, 28 and 30 and other factory automation devices residing on a single subnet LAN segment 38. For purposes of illustration, the invention will be described in an embodiment particularly adapted for use in a General Electric Company Series Six programmable logic controller (PC) although it will be apparent that other types of controllers can interface with or be substituted for such a PC on the Subnet LAN. Communication on the Subnet may employ various other types of communication standards such as General Motors Corp. Manufacturing Automation Protocol (MAP). MAP allows each of the nodes (controllers 26, 28 and 30 and consoles 24) to access the Subnet LAN for either receiving or sending data using a protocol distinct from global data. For purposes of description, a node may be a source (sending or providing data) or a sink (receiving data) or both at the same or different times. This invention relates to a data sharing protocol which allows sharing of selected data among nodes on the subnet in realtime. In particular, there is disclosed a global data transfer allowing producers and consumers of named data to share the state of this data by having the current state of the data copied on a periodic basis from the producer to the consumer(s) of the data. No application interaction is required other than an initial indication of which data are to be output and/or input at each node. FIG. 2 illustrates functionally a group of nodes each coupled to the subnet LAN 38 by an associated global data service provider 40. The providers 40 are hardware interfaces (microcomputers, timers, data buffers and bus controllers) providing data transfer between the LAN 38 and the nodes. The providers 40 operate in accordance with stored programs to organize, identify, encode and format messages for transmission on the LAN and to search and extract desired data from messages received. Since the following description is given in terms applicable to a Series Six PC, the providers 40 may be sometimes referred to as a Series Six Subnet card or merely Subnet card. In the hierarchy of factory communication systems, the Subnet serves as a stand alone network or subsidiary to a plant-wide network as shown in FIG. 1. It is used to coordinate a related group of device controllers. The plant-wide communication system lies above this "subnetwork" layer and the dedicated controller to device communication systems lie below this layer. The application needs that the Subnet communication system must fulfill are summarized as follows: Exchange sampled data with peers Exchange unique data with peers "Peers" includes loosely coupled operator consoles that provide: Real time graphics display of process Rapid change of displays (pictures) Any console may show any display Autonomy from supervisory computer and plant-wide communication system Simple to maintain/change Standard access to all vendors' programmable control devices from supervisory computers Centralized application program storage Process monitoring Coordination of processes. The communication needs of distributed control are more demanding than those of the supervisory computer(s), perhaps 100 to 1000 times more demanding in terms of response time and throughput. Consequently, the appropriate communication system is one that: (1) efficiently handles the distributed control needs, yet (2) provides access to attached programmable devices from supervisory computers(s) using the MAP connection-oriented protocols. To serve the needs identified above, Subnet provides two kinds of communication services: (1) Global Data Services (2) Connection-Oriented Services, e.g., MAP Primarily, global data services address the distributed control needs, and connection-oriented services address the supervisory computer needs. Global data services provide flexible symbolic references to variables produced by any controller and shared with other controllers and/or operator consoles. Connection-oriented services in one form of the present invention are MAP services. Both global data and connection-oriented services are provided by all Subnet LAN Interface Units (LIU's). Reference will be made throughout this description to a number of hardware and software interface units and drivers such as, for example, LIU, PC driver, LLC (Logical Link Control). "LIU" refers to the illustrated system of FIG. 2, including the invention as well as other unrelated functions, "PC driver" is equivalent to the "Application Data Transfer" shown in 40 of FIG. 3. "LLC" is equivalent to the "Bus Access Controller" shown in 40 of FIG. 3. The operation of such elements is described in order to provide an understanding of their interface and utility with the present invention. The distinguishing characteristics of Subnet's communication services, are: __________________________________________________________________________ XFR CHARAC-SERVICE TOPOLOGY LIU ACTION TERISTICS__________________________________________________________________________GLBL OUT ONE TO MANY ONE RQST/PERIODIC SEND B'CAST SNDGLBL IN MANY TO ONE ONE RQST/PERIODIC RECEIVE RVC B'CASTCONNECTED ONE TO ONE 1ST RQST/MAKE AUTO FLOW CONN+ONE XFR EACH CONTROL, FINAL RQST/ONE RETRANS AS XFR+BRK CONN 1ST RQD RQST CAN ALSO BE FINAL__________________________________________________________________________ In dealing with data in distributed control systems, two kinds of data need to be distinguished: Sampled Data--Data which tracks the current value of a variable (e.g., temperature, position, number of parts) over a continuous range of values. Sampling rate is sufficient that loss or duplication of a single sample can occur without consequence. Unique Data--Data for which loss or duplication changes the intended meaning and, therefore, cannot be tolerated. For example, an indication that "associated counter value wrapped around"; or a command to "move robot arm 5 inches to the right". If this information were lost or duplicated, the frame sequence of which this frame is part, would have erroneous results. Often, matters of safety or great expense are at risk. Broadcast is the preferred technique for transferring sampled data because: Most process control data is sampled rather than unique. Broadcast can efficiently transfer data from a source to multiple destinations at frequent intervals. Destination station(s) need not be known to the source. Process can be made very reliable due to the independence of stations. Station failure is quickly and easily detected. Redundant or backup stations are transparent to other stations. Therefore, Subnet global data services use the broadcast technique. Global data services are appropriate for transmission of sampled data. Transmission of unique data requires the use of some other communication service. Connected service is appropriate for transferring large files (e.g., program download), for reaching MAP stations on other network segments, or whenever full MAP services are required. For a General Electric Company Series Six Programmable Logic Controller (PC), all locally initiated serial communication requests are issued using a Serial Communication Request (SCReq) instruction. SCReq's are used to issue service requests for global I/O, datagrams, or MAP services. SCReq provides a pointer to a service request block in a register table, whose first register contains a COMMAND parameter. The COMMAND parameter distinguishes the type of service requested. Until the LIU confirms each (SCReq), no other (SCReq) will be accepted. Remote responses or remotely initiated transactions result in local data transfer without action by the local PC application program. The Series Six PC is representative of the class of controllers involved. Other devices interface similarly Subnet communication services may illustratively use lower layer services provided as shown in Table 1. However, this invention is not limited or restricted to these particular underlying services. For example, lower layer services could be provided by Ethernet™ or other well known services. TABLE 1______________________________________ CONNECTION- GLOBAL ORIENTEDLAYER SERVICES SERVICES______________________________________APPLICATION SUBNET CASE/MMFSSESSION NULL ISOTRANSPORT NULL ISONETWORK NULL ISO CLNSLLC 802.2 TYPE 1 802.2 TYPE 1MAC 802-4 802.4PHYSICAL 802.4 5 MBPS CARRIERBAND OR 10 MBPS BROADBANDMEDIUM 802.4 5 MBPS CARRIERBAND OR 10 MBPS BROADBAND______________________________________ A Subnet global database consists of each station broadcasting the global data that it generates. Group addressing is used. Variables are known to those stations in the group according to symbolic names. The system detects failed stations and readily accommodates changes in the global database. Table 2 depicts a Subnet global database. TABLE 2______________________________________SOURCE VARIABLES BROADCAST TO ALL STATIONSSTATION SHARING SAME GROUP ADDRESS______________________________________Station 1 RED MASONStation 2 GREEN WHITE PAINTER..Station n PURPLE GOLD MECHANIC ROSEBUD______________________________________ Variables transferred in a global database are either of two types: bit strings or octet strings of an indicated length. The atomic unit of transfer in a global database is the variable: i.e., whatever amount of data comprise that variable, they are collectively treated as an entity and are transferred only as an atomic whole; only complete singular samples (e.g., from the same PC scan) are input or output. (Separate variables, even from the same controller, are not atomic.) Data samples will be delivered to the destination application in the same sequence they were sent by the source application. Conditions will occur where a sample of data will be lost. Loss of a sample will occur if: (1the frame containing the sample is subject to a common error (the entire frame is discarded), or (2if a subsequent sample is received at the destination LIU before the destination application accepts the prior sample (the prior sample may be discarded), or (3) if a subsequent sample is sent from the source application before the MAC token has allowed the source LIU to transmit the prior sample (the prior sample may be discarded). Duplication of samples will not occur. The PC application program will typically issue one or two Global Output Requests upon initialization that will satisfy all control needs. Additional Global Output Requests might be issued, during operation, in response to application-level requests for various display information from remotely connected operator consoles. The Global Output Request conveys needed information to the local LIU which then commences periodic transfer of information from the local PC to remote PC's (or other devices) attached via the Subnet The PC application program: (1) using (MOVE BLOCK) instructions, preloads a number of consecutive registers to produce a Global Output Request Block. (2) then, if the LIU is not busy, issues a (SCReq) which contains a pointer to the Global Output Request Block, (3) then, waits for an indication of either (SCReq) complete without error or (SCReq) complete with error. (4) At this point, the registers used for the Global Output Request Block may be reused for any other purpose, or another (SCReq) may be issued. A Global Request block takes the following form: TABLE 3______________________________________COMMANDGROUP ADDRESSSCHEDULEPRIORITYNUMBER OF VARIABLES (i)NAME/C1 /C2 Output Variable 1/C3 /C4/C5 /C6/C7 /CNTYPELENGTHLOCATIONNAME/C1 /C2 Output Variable 2/C3 /C4/C5 /C6/C7 /CNTYPELENGTHLOCATIONNAME/C1 /C2 Output Variable i/C3 /C4/C5 /C6/C7 /CNTYPELENGTHLOCATION______________________________________ In the preferred embodiment, the following definitions apply to the parameters indicated in the request block: COMMAND specifies to the LIU what kind of Serial Communication Request (SQREQ) is to be performed. There are two global output COMMAND values: (1) Start Global Output (2) Stop Global Output Start Global Output starts the periodic output of specified variables to a global database. Start Global Output overrides any prior global output request which specified the same GROUP ADDRESS. Stop Global Output deactivates the previously specified global output request which has the same GROUP ADDRESS. GROUP ADDRESS specifies the group address to which the specified global output data will be broadcast or to which the request applies. SCHEDULE specifies the periodic interval at which data transfer will be scheduled to output. Output from the PC will always occur coincident with the end of the scan in which it was scheduled. Values are: (1) Every controller Scan and (2) from 0.01 to 60 seconds, in increments of 0.01 seconds. At each schedule interval, the LIU does two things: (1) immediately restarts the elapsed time clock for the specified period, and (2)at the end of the current scan, reads the global output variables from the PC and queues the LLC/MAC to send the next time the token arrives. PRIORITY assigns the relative priority of the message across the network. For example, in the exemplary embodiment there are two priority values: (1) Control and (2) Display. Should LAN bandwidth or resources in the local or a remote LIU become constricted, Control level priority messages take precedence. NUMBER OF VARIABLES indicates to the LIU the length of the Global Output Request Block. Values are 1 to 256, i.e., the maximum number of registers that may be output by an active Global Output Request is 256. NAME assigns a global reference of up to N characters in length to the internal register identified by LOCATION. C1 to C, comprise the variable. Each variable, of the LENGTH specified, is preserved as an atomic whole; only complete samples (e.g., from the same PC scan) are transferred. LOCATION is a pointer to the (first) internal register containing the variable. The same internal register may be output under more than one variable name. A PC application program will typically issue a single Global Input Request upon initialization. The Global Input Request conveys required information to the local LIU which then commences periodic input of information to the local PC from remote PC's (or other controllers) connected via the Subnet LAN. The PC application program: (1) using (MOVE BLOCK) instructions, preloads a number of consecutive registers to produce a Global Input Request Block, (2) then, when the LIU is not busy, issue a (SCReq) which contains a pointer to the Global Input Request Block, (3) then waits for an indication of either complete without error or complete with error. (4) At this point, the registers used for the Global Input Request Block may be reused for any other purpose, or another (SCReq) may be issued. A Global Input Request block has the following format: TABLE 4______________________________________COMMANDGROUP ADDRESSSTATUS TABLENUMBER OF VARIABLES (i)NAME/C1 /C2 Input Variable 1/C3 /C4/C5 /C6/C7 /CNTYPELENGTHLOCATIONTIMEOUTNAME/C1 /C2 Input Variable 2/C3 /C4/C5 /C6/C7 /CNNAME/C1 /C2 Input Variable i/C3 /C4/C5 /C6/C7 /CNTYPELENGTHLOCATIONTIMEOUT______________________________________ In the preferred embodiment, the parameters in the Input Request block are defined as follows: COMMAND specifies to the LIU what kind of Serial Communication Request (SQREQ) is to be performed. There are two global input COMMAND values: Start Global Input Stop Global Input Start Global Input initiates the periodic input of specified variables from a global database. Start Global input overrides any prior global input request which specified the same GROUP ADDRESS. Stop Global Input deactivates the global input request which has the same GROUP ADDRESS. GROUP ADDRESS specifies the group address from which the specified global input data will be received or to which the request applies. STATUS TABLE identifies the internal register(s) where status bits, corresponding to the specified variables, are to be stored. There are two status bits per variable, so that faults can be quickly detected and isolated. Either bit TRUE indicates a fault; the corresponding input variable is invalid. Both bits FALSE indicates the variable was received and transferred successfully. Ln=Length or type mismatch; the local and remote (source) stations have specified a different type or length for the variable. Tn=Timeout; the timeout value lapsed without receiving data. Until the first sample of data is input, Tn is TRUE to indicate that the internal register(s) contain invalid data. Where n is a sequentially assigned variable number. The STATUS TABLE is arranged according to the of the variables in the Global Input Request block. ______________________________________STATUS T8 L8 T7 L7 T6 L6 T5 L5 T4 L4 T3 L3 T2 L2 T1 L1TABLE T9 L9 etc.______________________________________ So long as the Global Input Request is active, the Status bits in the STATUS TABLE are updated by the LIU whenever any status changes; therefore, fault indications clear automatically as soon as the fault is cleared. NUMBER OF VARIABLES indicates to the LIU the length of the Global Input Request Block. NAME identifies a global variable to be input. The GROUP ADDRESS will be searched for NAME and the corresponding value will be input as often as received (subject to the limitation that only the most recently received value of a variable will be input at the end of any PC scan). C1 to C, are allocated to receive the variable. Each variable, of the LENGTH specified, is preserved as a atomic whole; only complete samples (e.g., from the same PC scan) are transferred. If the LENGTH specified is different than that specified by the sender of the variable, the corresponding Ln status bit is set TRUE. LOCATION is a pointer to the (first) PC register to receive the global variable. No PC register may receive more than one global input variable. This restriction applies across all GROUP ADDRESSes. TIMEOUT specifies the maximum time to be allowed between receiving successive values of the specified variable. If the TIMEOUT period lapses without receiving the variable, the corresponding Tn status bit is set TRUE. Values for TIMEOUT are from 0.01 to 300 seconds, in increments of ) 0.01 seconds. However, the number of unique values of TIMEOUT may be restricted on some LIU's. In applying global I/O service, the user must allocate group addresses, schedules, and priorities according to his distributed control needs. General requirements in the preferred embodiments are as follows: (1) Each distributed system of controllers and operator consoles that are to share global data must be physically interconnected on the same Subnet LAN segment. (A segment is a section of LAN on which nodes can directly hear and communicate to all other nodes on the same segment without the aid of any interconnecting devices). (2) If multiple systems (as in 1 above) share the same Subnet LAN segment, unique group addresses must be assigned to each separate distributed system. (3) If, within a system, different sets of data are to be assigned different priorities, each set must be assigned a different group address. (4) If, within the same system and priority, different sets of data are to be output according to different schedules, each set must be assigned a different group address. (5) Some variables are only sensed intermittently, such as those associated with particular operator displays. Better network performance can be achieved if output of these variables is turned on and off as needed. Therefore, these variables might be logically grouped and assigned separate group addresses. (Datagram services or global data services can be used to communicate such requests (e.g., using a register variable defined at the application level) from an operator console to a specific controller or controllers.) (6) The maximum number of group addresses available on any Subnet LAN segment is 48. To illustrate the application of the above guidelines, Table 5 shows how a distributed system might allocate different group addresses to different set of data. TABLE 5__________________________________________________________________________ DISPLAY FURTHERGROUP DATA ON/OFFADDRESS PRIORITY SCHEDULE SUBDIVISION CONTROL__________________________________________________________________________1 CONTROL 0.1 SEC. --2 CONTROL 1.0 SEC.3 DISPLAY 0.5 SEC. DATA A SOURCE STA X SOURCE STA Y SOURCE STA Z1 DISPLAY 0.5 SEC. DATA B SOURCE STA X SOURCE STA Y SOURCE STA Z-- -- -- -- ---- -- -- -- ---- -- -- -- --8 DISPLAY 0.5 SEC. DATA F SOURCE STA X SOURCE STA Y SOURCE STA Z__________________________________________________________________________ Only one transaction may be processed by the PC application at a time. Once processing of a transaction has commenced, any attempt to initiate another transaction on the part of the local application or on the part of a remote peer receives an error response from the LIU software. Once any required response to a transaction has been received or generated by the PC application, the local or remote application is free to initiate the next transaction. The (SCReq) status register "busy" indication applies only to the interface between the application and the local LIU. Busy is true only while the local LIU accepts or rejects each command. It is entirely possible to be waiting for a transaction to complete while having the (SCReq) status show not busy. This allows datagram or global data traffic to be sent to the LIU software while waiting for a remote node to respond to a request. The providers 40 may provide data transmission via a token bus interface using the IEEE 802.4 MAC protocol and broadband transmission at 10M bps and carrier band transmission at 5M bps. The Subnet card will support globally shared data as specified by the Subnet Architecture. This architecture provides for the explicit sharing of the data in a PC with other interested nodes on the subnet segment. The source PC application node declares names and locations of data to be shared with each specified group of receiver nodes and an interval at which the contents of the data are to be sent. At each specified interval, the named data are collected and sent to all specified receiving nodes. The input receiver of global data specifies the groups of which it is a member and for each group the name of the global data variables to be received, the local location to receive the data, and a maximum time interval in which data is expected. All global data on the subnet is inspected and when values for the desired variables are found, these values are used to update the local locations specified. The services provided by global data are: (1) Begin global output of named data items. (2) End global output of named data items. (33) Begin receiving named global data items. (4) Stop receiving named global data items. The PC application node interacts with the subnet LAN interface unit (LIU) using control blocks and an eight bit status byte. The control blocks are used in conjunction with a serial communication request (SCReq) command in the application program logic to request communication services of the subnet LIU. The status byte is used to communicate status information about the overall communications environment and about any application request which is pending in the subnet LIU. The control blocks consist of contiguous sets of Series Six registers which contain the control block information. The first register of the control block (i.e., the register with the lowest register number) is used as the argument of the (SCReq) that initiates the request. The overall status of the subnet communication environment is reflected in an eight bit status byte which is updated on each communication request and on each sweep to contain the current communication state. The bits of the status byte are used to convey various information about the communication environment. Each bit in the status is guaranteed to remain in a state for at least one full sweep of the PC logic before changing to the opposite state. This guarantees the visibility of any communication status changes to the entire application. Table 6 gives the uses of the (SCReq) status bits. TABLE 6______________________________________(SCReq) Status UsageBit I/O Use______________________________________1 I1009 Busy2 I1010 Complete Without Error3 I1011 Complete With Error4 I1012 Externally Initiated Read Occurred5 11013 Externally Initiated Write Occurred6 I1014 Resource Problem in LIU7 I1015 MAP Data Indication8 I1016 Communication OK______________________________________ Messages transmitted on the subnet 38 in the inventive system have the format shown in Table 7. TABLE 7______________________________________prioritydestination addresssource addressmessage typeformat keynum variables = nname 1 (1/2)..name 1 (.sup.n-1 /n)...typelengthlocation 1name 2 (1/2)...name 2 (.sup.n-1 /n)typelengthlocation 2...name n (1/2)...name n (.sup.n-1 /n)typelengthlocation n______________________________________ The "priority" field is used to distinguish between priorities. Priority of access to the CPU and LAN will be given to high priority items over low priority items. The "destination" "address"specifies a group address for global data (variable) or a unique address for other data. "Source address" identifies the node transmitting the data. "Message type" specifies whether or not the message is global. "Format key" is a unique code assigned to the message by the source and is used to indicate that the message has a set format, i.e., successive transmissions of this message all have variables at the same address or byte location in the message; if the message format changes, the value of the format key will change. The "number of variables" field specifies the number of variable names which follow in the message up to a maximum of 255. The remainder of the control block consists of global variable definitions for output. Each definition consists of an eight octet (four register name, a length of the data to be transferred (expressed in number of registers), and the register number of the start of the data in the PC memory to be transferred. These entries are repeated for the number of variables specified in the "number of variables" field of the control block. A wild card character (whose value can be specified by the user) can be used in a name definition. The LIU software replaces all occurrences of the wild card character with a user specified replacement character. This allows a single PC application program to produce uniquely named output values when running in multiple PC's. The overall structure of the subnet software is a set of tasks which implement the various communication layer services. Intertask communication requests are used to communicate service requests among the layers. The service entry points copy any relevant parameters into private work queue entries and schedule the task which provides the desired service. Table 8 shows the tasks which make up the subnet software. Additional software provides the system services, diagnostics, software loading and configuration services. TABLE 8______________________________________Subnet Software Tasks CPUTask Function Budget______________________________________LLC Logical Link Control (Token Bus 0.75 ms Driver)PC PC Interface Driver 1.0 msGLBL Globol I/O Server 1.25 msTIMER Timer MAnagement .5 ms______________________________________ In addition to the normal state of transferring data, the LIU recognizes two distinct offline states which allow subsets of services to be accessed while restricting other services. In the "configure only" state, the LIU remains out of communication with the network for user data traffic of any kind but allows network or station management services to be accessed. This means that all network access is stopped but allows establishment of a communication environment with default parameters if remote network management is to be used. This state allows network management to change the configuration parameters. This state can be forced by an option switch or it will be entered if the initialization logic determines that the configuration parameters may be invalid (based on a checksum and flags kept with the parameters). In the "PC offline" state, the LIU remains in communication with the network for MAP traffic but all global inputs and outputs are halted. This state is entered whenever the PC enters the stop state. Logical Link Control (LLC) provides access to the subnet link. It manages the subnet TBC hardward and the link level protocol. The services provided by LLC (i.e., Bus Access Controller of FIG. 3) are those defined as IEEE 802.2 class 3 service, shown in Table 9. Only those services of LLC that are used by other portions of the system are reflected in Table 9. TABLE 9______________________________________Logical Link Control ServicesName Description______________________________________L --DATA.request Requests the output of a message to a specified link destination. The destination address can be either an individual node or group address.L --DATA.indication Signals the arrival of a message from a remote node.L --DATA --ACK. Requests the output of a message torequest a specified individual link destination. An acknowledgement must be received from the receiv- ing station for this request to complete successfully.L --DATA --ACK. Signals the arrival of a messageindication from a remote node which used the acknowledged connectionless protocol. The link acknowledge- ment has already been sent when this indication is made.L --DATA --ACK -- Returns the result of a previousSTATUS. L --DATA --ACK.request.indicationL --GROUP.enable Enables receiving of broadcast data on the specified group address.L --GROUP.disable Disables receiving of broadcast data on the specified group address.L --STATUS. Indicates a significant statusindication change within the LLC or MAC layer of which some management entity should be aware.L --TEST.request Causes the transfer of some number of 802.2 test frame(s) to a target node.L --TEST.confirm Reports the status of a previously requested test including the number of correct test responses received.L --XID.request Causes the transfer of an 802.2 xid request frame to a target node.L --XID.confirm Reports the result of a previously requested xid exchange.______________________________________ A PC driver provides transfer between the subnet card and the Series Six PC using the protocol established for CCM data transfers. All access to the Series Six memory is accomplished through the services provided by the PC driver. The services are specified in a manner consistent with the other communication layer services. The services provided by the PC driver (i.e., Application Data Transfer FIG. 3) are shown in Table 10. TABLE 10______________________________________PC Driver ServicesName Description______________________________________CPU --DATA.indica- Indicates data from the PCtion application which contains a control block requesting a communication service.CPU --DATA.request Requests data from the subnet software be transferred to the PC memory or a buffer of data from the PC memory be transferred to the LIU.CPU --DATA.confirm Response to a CPU --DATA.request which contains the result of the transfer of data to or from the PC.CPU --DATA --MODIFY. Requests data from the subnetrequest software be used to update the PC memory under control of a mask.CPU --DATA --MODIFY. Response to a CPU --DATA --MODconfirm IFY. request which contains the result of the update of data in the PC.CPU --STATUS. Service used to modify the statusindication bits transferred between the sub- net card and the PC giving the status of the subnet software.CPU --ABORT.indica- Indicates that the PC is notion longer responding.CPU --SWEEP.indica- Indicates a synchronization pointtion in the PC sweep for application services needing such synchronization.______________________________________ The Global Data service provider 40 provides global data sharing along nodes on the subnet. It implements the protocol for global data specified in the Subnet Architecture without any ongoing involvement of the PC application. The global data service provider handles all message formatting, message passing, timer handling and PC data updating required by the global data protocol. The global data service provider defines no unique service interfaces beyond those described in the PC driver and LLC. The services provided by Logical Link Control (LLC) are defined by IEEE 802.2 Class 3. Three types of data transfer service are provided: datagram, acknowledged connectionless datagram and data exchange. Unacknowledged service provides the one-way delivery of data to one or more destination nodes. No attempt is made to assure the sender of the delivery of the data. Send data with acknowledge service provides a one-way delivery data to a single destination node with assurance to the sender that it was received by the MAC layer in the destination. Request with reply service provides for a two-way exchange of data between two nodes. Only one SDU is transferred on any LLC request. Additional services are provided to register SAP values with LLC, modify the set of group addresses which LLC will recognize, signal LLC events, and to access LLC and MAC tallies. The data transferred between LLC and the LLC user resides in buffers which are described by the buffer pointers passed as parameters. The buffers passed to LLC from the user on requests consist only of data. Any number of bytes can be passed in the buffer. The buffer associated with this request is freed by LLC after the service requested by the user is performed. The buffers passed by LLC to the user contain two fields. The first field in the buffer consists of a pointer to any other buffers which may be associated with this indication. The remainder of the buffer consists of the input data. This structure allows the LLC layer to effectively use the scatter-gather capabilities of the TBC chip. If the pointer field is not NULL, then it is a pointer to the buffer descriptor of the next buffer in the input. The last (or only) buffer in the sequence will contain a NULL pointer in this field. The protocol exchanges between peer LLC users are shown in graphical form in FIG. 5A, 5B and 5C for the unacknowledged connectionless, send data with acknowledge and request with reply services respectively. There are no time sequences involved with the other LLC primitives as they are local and atomic in nature. The following services are available through the LLC: L-- REGISTER.request This service resides in the LLC layer (L-- service provider) and represents the L-- REGISTER.request service. It is called by an LLC user to register the use of an LSAP and to provide the function name to be called for data indications received on the SAP. The caller-provided function receives all L-- DATA.indication and L-- DATA-- ACK.indication indications from LLC. The first parameter is an integer parameter specifying the LSAP which is to be registered. The second parameter is a pointer to the routine to receive data indications on the SAP. The function should be prepared to handle the four parameters associated with the data indications. If the function pointer is NULL, then the LSAP is deregistered and all further traffic for the LSAP is ignored. A maximum of 16 LSAPs may be registered. L-- DATA.request This service resides in the LLC layer (L-service provider) and represents the L-- DATA.request service. It provides the transfer of an LLC SDU to a specified destination node. The first parameter (bp) is a pointer to the buffer containing the SDU to be transferred. Sufficient room remains in the front and rear of the buffer to allow the LLC layer to build its headers and trailers in the buffer without requiring any data movement. The second parameter (ra) is a character pointer which points to the remote (destination) address buffer. The format of the address buffer is a one byte length, followed by the link address of the destination, followed by a one byte length, followed by the destination LSAP. In all cases, the length of the link address will be sux bytes and the length of the LSAP will be one byte. Length indications will be present even though the lengths are always the same. The third parameter (la) is a character pointer which points to the local (source) address buffer. The format of the local address buffer is exactly the same as the remote address buffer. The fourth parameter (svc) is the class of service for the request. L-- DATA.indication This service resides in the LLC user (L-service user) and represents the L-DATA.indication service. The indication function registered for the LSAP is called by LLC to indicate the arrival of an LLC SDU from a specific remote node. The first parameter (bp) is a pointer to the buffer that contains the SDU which was received. The second parameter (ra) is a character pointer which points to the remote (source) address buffer. The format of the address buffer is a one byte length, followed by the link address of the source, followed by a one byte length, followed by the source LSAP. The third parameter (la) is a character pointer which points to the local (destination) address buffer. The format of the address buffer is the same as the previous parameter. The fourth parameter (svc) is the class of service for the request. L-- GROUP.enable This service resides in the LLC layer (L-service provider) and represents the L-- GROUP.enable service. It is called by an LLC user to enable the receipt of global data on a specified group address. The only parameter is an indication of the address (in the range of 1 to 47) to be added to the group address list. The group addressing capability of the TBC uses a mask which is applied to all incoming group addresses. The input group address is ANDed with the group address mask and the result is compared with the group address assigned to the TBC. If the masked input group address matches the TBC group address, then the SDU was addressed to this node. To allow multiple group addresses to be used, each group address available to the L-service user corresponds to a single bit in the MAC group address mask. All TBC group addresses are set to the value one (1) and the higher bits of the group address along with the group address mask are used to distinguish group addresses. The group address mask is initially set to all one bits. Thus, when it is ANDed to the input group address, all bits are unmasked. When a group address is added to the set, the bit corresponding to the requested group is changed to a zero thus disabling any effect it has on the group address filtering. A group address corresponds simply to the bit number of the bit which is to be set in the group address for the request. L-- GROUP.disable This service resides in the LLC layer (L-service provider) and represents the L-- GROUP.disable service. It is called by an LLC user to disable the receipt of global data on a specified group address. The only parameter is an indication of the address (in the range 1 to 47) to be removed from the group address list. The TBC group addressing scheme is discussed in LGRPena above. provided by the PC Driver are The services specific to the Series Six PC and are defined here without reference to a standard. The different forms of data transfer among nodes on the subnet (MAP data and global data) are represented in the Subnet card (provider 40) by different tasks. All of these tasks use the PC data transfer services. An underlying assumption of many applications of PCs is the predictable periodic sampling or updating of control points (or I/O's) in a process. To accomplish this, the PC application is structured as a sequence of logic which is processed repeatedly from top to bottom. Each cycle of processing the PC application is known as a "sweep" since it sweeps one pass through the application logic. Sweep processing on the Series Six consists of processing other than the solution of the application logic. Communication with many of the "smart" devices such as the programming terminal, the BASIC module and the communications devices are handled in "windows" which are portions of the sweep dedicated to serving the device. FIG. 6 shows the cycle of processing in a Series Six PC. The buffers transferred between the PC driver and the Series Six PC consist of a five byte header followed immediately by the data to be transferred or a buffer for data to be read from the PC. The structure of the buffers is a two byte memory address followed by a two byte length of the transfer followed by a one byte checksum over the heading (see FIG. 7). The transfer length field is used to encode both the number of words to be transferred and the direction of the transfer. If the high bit of the transfer length field is a one, the transfer is a read from the Series Six memory into the buffer of the request. If this bit is a zero, the data in the buffer is written to the Series Six memory. The length used for the transfer is one less than the desired length, thus a length of zero causes one word to be transferred. The checksum is the sum of the individual bytes of the header (bytes 1 through 4). An incorrect checksum will result in the CCM window being permanently closed by the Series Six PC. The byte order for 16 bit quantities is identical between the Series Six PC and the Intel family of processors; that is, the first (lower address) byte of a two byte quantity contains the least significant byte of the quantity and the second (higher address) byte contains the most significant byte. A special header is reserved to allow the LIU software to close the window. The bandwidth of the channel between the Subnet card (provider 40) and the Series Six PC is fairly small (5 ms per byte), the amount of time alotted to moving data between the LIU and the PC is small (approximately 10 ms), and the amount of data to be moved may be substantial. Therefore, the PC driver contains the concept of priority or class of service associated with a transfer request. The class of service is used to allocate the transfer bandwidth to the most critical messages first. Class of service is only associated with requests made by the Subnet software. The parameter data associated with a transfer initiated by the PC application using a [SCReq] is transferred during the [SCReq] window (except for large global data requests which are only partially transferred). The executive window is used to transfer all data except the [SCReq] parameter block and class of service is honored on all of these requests. Eight classes of service are available for use in the PC driver. Service classes are organized so that the higher the number, the lower the priority of the request. The assignment of service class is one of the most powerful tuning parameters available to influence the performance of the various services performed by the Subnet card. Table 11 below lists the initial breakdown of service class. TABLE 11______________________________________Series Six PC Service ClassesClass Use______________________________________0 Unused1 High Priority Global Data Reads and Writes2 Low Priority Global Data Reads and Writes3 Datagram Reads and Writes4 Unused5 Remotely Initiated MAP Reads and Writes6 Unused7 Locally Initiated MAP Reads and Writes______________________________________ FIGS. 8A-8B show the overall protocol sequence between the Subnet PC driver and the Series Six PC during the executive window and during a [SCReq] window. All transfer of link data takes place during the executive window. The [SCReq] window is restricted to transferring the data associated with the request. In order to distinguish the Executive window, the driver first checks the state of the "busy" status bit. If the "busy" bit is set, then the window is assumed to be an executive window. It the "busy" bit is clear, the contents of SMAIL & SMAIL+1 are checked for value. If these cells contain a non-zero value, the request was initiated by a [SCReq]. The contents of the mailbox sent by the PC on a [SCReq] contains in its most significant six bits the most significant six bits of the contents of the register associated with the [SCReq] (for subnet 38 this is the most significant bits of the command) followed by ten bits specifying the register associated with the request. Note that this restricts the command blocks for [SCReq]'s to beginning in the first 1024 registers in the Series Six. At the end of each executive window, the driver reads status information about the state of the PC CPU into the LIU memory. This information includes the RUN/STOP state, the state of the memory protect switch, and other state information which can change due to outside events and which may influence the state of the LIU software. A DMA checksum is available in the scratch pad which would allow an integrity check for the data transfer. Various services available through the PC are listed below. CPU-- DATA.indication This service resides in the PC Driver user (C-service user) and represents the CPU-- DATA.indication service. The first parameter contains the pointer to a buffer containing the data associated with the service request. This buffer will be a control block associated with the service requested by the PC application in the PC. The second parameter is the absolute address in the PC at which the parameter block begins. This service is unacknowledged (i.e. there is no corresponding CPU-- DATA.response sent when the service is performed. CPU-- DATA.request This service resides in the PC driver (C-service provider) and represents the CPU-- DATA.request. This service transfers data to or from the Series 6 PC at the request of the application service tasks in the Subnet card. The request contains the address of the buffer to be transferred and the information required by the PC driver to build a transfer header. The first parameter is a pointer to a buffer containing the data to be transferred to the Series Six PC or to be received from the PC. Sufficient room remains in the front of the buffer to allow the construction of the header for the transfer without moving the data. The second parameter is the absolute address within the Series Six for the transfer. The third parameter is the byte length of the transfer. The fourth parameter is the direction of the transfer. The direction can be either "TO CPU" for a write to Series Six memory, or "FROM-- CPU" for a read from the Series Six memory. The fifth parameter is the class of service of the request. The sixth and seventh parameters are used together to update the CCM status byte after the.confirm This service resides in the PC Driver user (C-service user) and represents the CPU-- DATA.confirm service. The name of this service is actually indefinite since the function which is to perform this service is passed as a parameter to the CDATreq call. The first parameter is a pointer to the buffer which contains the data associated with the indication being responded to. The second parameter is an integer parameter which was specified on the request for this response and its significance is determined by the C-service user. CPU-- DATA MODIFY.request This service resides in the PC driver (C-service provider) and represents the CPU-- DATA MODIFY.request. This service causes data in the PC memory to be modified (that is read, changed and rewritten) under the control of a mask. This allows the changing of individual bits in the PC memory. The request contains the address and length of the PC memory to be updated along with the data and the mask to be used to modify the PC memory. The first parameter is a pointer to a buffer containing the data to be transferred to the Series Six PC under control of the mask. The second parameter is the absolute address within the Series Six to be modified. The third parameter is the byte length of the data. The fourth parameter is a pointer to a buffer containing the mask associated with the request. Each byte of the mask data is applied against the data buffer to determine which bits of the PC memory are to be changed. A one bit indicates that the data bit from the data buffer should be used to update the corresponding bit in the PC memory. A zero bit indicates no change in the PC memory. If the buffer associated with the mask is shorter than the requested amount of update, the remaining bytes of mask are assumed to be hexadecimal value FFH (that is to say that all bits in the PC memory will be modified). The fifth parameter is the class of service of the request. The sixth and seventh parameters are used together to update the CCM status byte after the MODIFY.confirm This service resides in the PC Driver user (C-service user) and represents the CPU-- DATA-- MODIFY.confirm service. The name of tis service is actually indefinite since the function which is to perform this service is passed as a parameter to the CMODreq call. The first parameter is a pointer to the buffer which contains the data associated with the indication being responded to. The second parameter is an integer parameter which was specified on the request for this response and its significance is determined by the C-service user. CPU-- STATUS.request This service resides in the PC Driver (C-service provider) and corresponds to the CPU-- STATUS.request. This service allows a C-service user to set or reset bits in the CCM status word which is maintained by the PC Driver and is transferred to the Series Six PC at the end of each transfer window. The first parameter is an "and" mask and the second parameter is an "or" mask. The "and" mask is logically and'ed with the current CCM status and then the "or" mask is logically or'ed with the CCM status. This new status is then saved as an updated CCM status. Remember that the status will not immediately be changed in the Series Six PC but will be changed at the end of the next transfer window. This means that multiple CSTAind's may have been performed before the PC application is able to see the updated status. CPU-- ABORT.indication This service resides in the PC Driver user (C-service user) and provides the CPU-- ABORT.indication service. The PC abort indicates that the PC application program or the Series Six PC has ceased to communicate and any communication connections in use should be aborted. There are no parameters associated with this service. CPU-- SWEEP.indication This service resides in the PC Driver user (C-service user) and provides the CPU-- SWEEP.indication service. This service indicates that a synchronization point has been reached in the PC sweep logic processing. This allows activities which must be initiated on a per sweep basis to be notified of the occurrence of sweeps. On the Series Six, the sweep synchronization point is at the end of the executive window for communication. There are no parameters associated with this service. The overall data flow through the Series Six LIU is shown in FIG. 9. The figure shows data flows labelled by the service entry point which they use in transferring the data associated with the request. The network management and system service interfaces are not shown explicitly as they are immersed in all of the layer tasks. FIG. 3 is a functional block diagram of a global data service provider 40 connected between an application device (node) such as a PC and a subnet LAN 38. The application device initiates communication service requests in the form illustrated in Table 3. It also provides storage for output variables, input variables, and input-variable-status register (Valid/Timeout Fault/Definition Fault); maintains appropriate values of output variables and acts on input variables unless the corresponding input-variable-status register indicates a fault. The global data service provider 40 comprises several functional blocks defined as follows: Receives Application Device service requests; Determines validity of requests and acknowledges same to Application Device; Initializes the Variable Table and the input-register-status register; For send requests, generates a message format and sets the value of the format key; Associate each variable with a Timer and initiates (or, if the stop command, stops) the Timers. Timers associated with output variables periodically activate the Send Machine; Timers associated with input variables activate the Receive Machine, but the Receive Machine may forestall expiration indefinitely, by restarting the Timer prior to expiration; Each Timer is associated with the corresponding entry in the Variable Table. When activated, by a Timer, requests, via the Application Data Transfer, that the Application Device output the current (updated) value of the associated variable to the Date Buffer; When not notified by the Application Data Transfer that the output is complete, puts the updated data into the message format previously determined by the Command Processor, and requests that the Bus Access Controller send it. Performs transfers between the Application Device and the Data Buffer synchronous with the Application Device. Temporary storage to hold waiting to be output on the Communication Bus or waiting to be input to the Application Device. Implements the access control protocol common to all the nodes attached to the Communication Bus; Sends messages from the Data Buffer to the Communication Bus as requested by the Send Machine; Receives messages from the Communication Bus to the Data Buffer and notifies the Receive Machine. Holds the cross-reference between local and global references for each variable and, for input variables, a reference to the associated input-variable-status register; For input variables, stores the input variable state (Search/Transfer), and if in the Transfer state, the current input message format identifier (destination address, source address, format key); For input variables, associates with each entry the corresponding timer. When activated by receipt of a message via the Bus Access Controller, scans the Variable Table and; Saves received variables that have been requested and are valid (discarding others); Updates the associated input-variable-status registers; Updates the associated input variable states, and; Restarts the associated Timers; When activated via a Timer, updates the associated input-variable-status register and input variable state; Requests transfer of data and input variable-status register to the Application Device via the Application Data Transfer. FIGS. 4A-4G are functional flow charts illustrating data handling by a service provider 40. The charts in FIGS. 4A-4D illustrate the processing of start and stop requests. FIGS. 4E-4F represent, respectively, input and output data timer controls. As described above, each variable is associated with a processing time allowing processing to be terminated if a variable is not found within a set time. FIG. 4G illustrates the processing of a message for identifying data of interest to a node. If the message (link frame) on the bus is identified as intended for the receiving node (group address), the provider 40 determines if a message has been received from this source before. If it has, and if the format key is unchanged, the process of searching for a variable requires only that the offset to the data of interest be recalled from memory in order to index directly to the data. For a first time message search, the provider sequential searches (parses and tests) the message for variable names of interest. It will be appreciated that the data sharing mechanism of the present invention includes the following features: 1. Global variables are addressed by symbolic "names". 2. There may be multiple "namespaces" for variable names (within each namespace, each variable's name must be unique). 3. Once defined for output by the source node, each global variable is periodically sampled from its source node by the global data service and broadcast to all other nodes over the communications bus. (Global output from the node may be defined or terminated by that node at any time.) 4. The global data service at each node filters received broadcast messages and passes to the associated sink node all received samples of those variables (but only those variables) that are defined for input by that node. (Global input to the node may be defined or terminated by that node at any time.) 5. Each node may be source for some variables and sink for others 6. Variable definition parameters apply independently to each variable; the parameters include specification of the data type and size of the variable. 7. The global data service and protocol assure that portions of two different samples of a variable are never combined and passed as a single sample. Nor will samples be reversed in time order. (Though any sample which is found to contain a communication error will not be delivered to a sink node.) 8. A unique format "key" indicates when a broadcast message containing a global variable is structured differently from a preceding broadcast message containing that variable. 9 Detects and reports to the sink node differences in a variable's definitions between the source and sink nodes. 10. Detects and reports to the sink node failure to receive a sample of a global variable within a specified time interval. 11. Allows sharing of the communication bus with other kinds of data and protocols. While it will be recognized that the present invention is a combination of hardware in which a novel method is implemented in the form of control programs (software), the description in terms of functional block diagrams is believed sufficient to enable construction of the invention. Furthermore, the software programs may be written in many forms to accomplish the functions disclosed herein. Detailed manuals describing uses of LAN's by PC's are available from their manufacturers. The details provided herein are only intended to reflect those PC functions and features which interact with the present invention. What has been described is a data sharing mechanism employing a broadcast bus communication system in which global variables are exchanged between connected nodes or devices on the system via a global data service provider and unique protocol. While the invention has been described in an exemplary embodiment, other adaptations and arrangements will become apparent to those skilled in the art. It is intended therefore that the invention be interpreted in accordance with the spirit of the appended claims.
https://patents.google.com/patent/US4926375A/en
CC-MAIN-2018-43
refinedweb
11,144
51.48
Hello All, I have read the Tutorial about how to setup the DNS. I like to get some understanding to what happens once its setup on home computer with DSL. would any one in the outside world will be able to find my server name? at the middle of the tutorial it said "Fix Your Domain Registration" and their it says to use RegisterFree to point to my server. if so why I have to setup my DNS... I thought setting it up would mean that I would not need another server help to find me on the WWW once its name/ip is populated to other search engines in about three days... Any help to explain this up and clear my confusion that would be appreciated (please with simple terms if possiple). thxx Last edited by verybigtiger; 04-26-2012 at 02:26 AM. Reason: make it clearer When you have your own DNS server you are in charge of handling any namespace your domain might hold, you can either set one up yourself (and tell teh root servers where this is located) or use some of the provided hosting services out there that will do it, without you getting into how to setup the server itself, but you still have to tell the root servers what IP is holding information on your namespace. Don't worry Ma'am. We're university students, - We know what We're doing. 'Ruiat coelum, fiat voluntas tua.'Datalogi - en livsstil; Intet liv, ingen stil. Bookmarks
http://www.linuxhomenetworking.com/forums/showthread.php/19813-BIND-configuration?goto=nextnewest
CC-MAIN-2015-11
refinedweb
253
69.31
Time zones¶ Support for time zones is enabled by default. Airflow stores datetime information in UTC internally and in the database. It allows you to run your DAGs with time zone dependent schedules. At the moment Airflow does not convert them to the end user’s time zone in the user interface. There it will always be displayed in UTC. Also templates used in Operators are not converted. Time zone information is exposed and it is up to the writer of DAG to recommended or even required setup). The main reason is Daylight Saving Time (DST). Many countries have a system of DST, where clocks are moved forward in spring and backward in autumn. If you’re working in local time, you’re likely to encounter errors twice a year, when the transitions happen. (The pendulum and pytz documentation discusses. Please note that the Web UI currently only runs in UT timezone.is_aware(). default_args=dict( start_date=datetime(2016, 1, 1), owner='Airflow' ) dag = DAG('my_dag', default_args=default_args) op = DummyOperator(task_id='dummy', dag=dag) print(op.owner) # Airflow Unfortunately, during DST transitions, some datetimes don’t exist or are ambiguous. In such situations, pendulum raises an exception. That’s why you should always create aware datetime objects when time zone support is enabled. In practice, this is rarely an issue. Airflow gives you Time zone aware DAGs¶ Creating a time zone aware DAG is quite simple. Just make sure to supply a time zone aware start_date. It is recommended to use pendulum for this, but pytz (to be installed manually) can also be used for this. import pendulum local_tz = pendulum.timezone("Europe/Amsterdam") default_args=dict( start_date=datetime(2016, 1, 1, tzinfo=local_tz), owner='Airflow' ) dag = DAG('my_tz_dag', default_args=default_args) op = DummyOperator(task_id='dummy', dag=dag) print(dag.timezone) # <Timezone [Europe/Amsterdam]> Please note that while it is possible to set a start_date and end_date for Tasks always the DAG timezone or global timezone (in that order) will be used to calculate the next execution date.(execution_date) Cron schedules¶.
https://airflow.apache.org/docs/apache-airflow/1.10.3/timezone.html
CC-MAIN-2022-21
refinedweb
337
50.23
08 July 2010 12:45 [Source: ICIS news] LONDON (ICIS news)--The European Central Bank (ECB) left interest rates unchanged on Thursday at the record low level of 1.0% for the 14th straight month. The bank had cut its key rate several times from the October 2008 level of 4.25% as it tried to take ?xml:namespace> At a monthly news conference later on Thursday, ECB president Jean-Claude Trichet, was expected to be questioned on the upcoming publication of "stress tests" on European banks, which hope to reassure markets that institutions were stable enough to withstand another sharp economic downturn. Earlier, the Bank of England announced it would hold interest rates at 0.5% for the 14th consecutive month and leave its quantitative easing policy unchanged. On 7 July, official statistical office Eurostat announced that (.
http://www.icis.com/Articles/2010/07/08/9374821/european-central-bank-leaves-interest-rates-unchanged-at-1.0.html
CC-MAIN-2013-20
refinedweb
138
54.83
Creating an Interactive iOS 4 iPad App (Xcode 4) In the previous chapter we looked at the design patterns that we will need to learn and use regularly in the course of developing iPad applications. In this chapter we will work through a detailed example that will demonstrate the View-Controller relationship together with the implementation of Target-Action pattern to create an example interactive View-based Application. Click Next, name the product UnitConverter, enter your company identifier and make sure that the Product menu is set to iPad. -> Object Library menu option: From the Object Library panel (View -> Utilities -> Object Library), drag a Text Field object onto the View design area. Resize the object and position it so that it appears as follows: Within the Attribute Inspector panel (View -> Utilities -> Attribute Inspector), type the words "Enter temperature" into the Placeholder text field. This text will then appear in a light gray color in the text field as a visual cue to the user. Now that we have created the text field for the user to enter the temperature into, the next step is to add a Button object that can quarter of the overall width of the view and reposition it using the blue guidelines to ensure it is centered in relation to the button. Configure the label for centered alignment. Double click on the label to highlight the text and press the backspace key to clear the text . At this point the user interface design phase of our project is complete and the view should appear as illustrated in the following figure. We now are ready to try out a test build and run. Building and Running the Sample Application Before we move on to writing the controller code for our application and then connecting it to the user interface we have designed we should first perform a test build and run of the application so far. Click on the Run button located in the toolbar to compile the application and run it in the iOS iP When the user enters a temperature value into the text field and touches the convert button we need to trigger an action that will perform a calculation to convert the temperature. The result of that calculation will then be presented to the user on the label object. The Action will be in the form of a method that iPad iOS 4 Application Development Architecture. The UIKit framework contains a class called UIViewController that as a prefix the name that we gave to our new project).: // // UnitConverterViewController.h // UnitConverter // // Created by Techotopia on 1/10/11. // Copyright __MyCompanyName__ 2011. All rights reserved. // variables and using the IBOutlet" @implementation UnitConverterViewController /* // The designated initializer. Override to perform setup that is required before the view is loaded. - (id)initWithNibName:(NSString *) nibNameOrNil bundle:(NSBundle *)nibBundleOrNil { if (self = [super initWithNibName:nibNameOrNil bundle:nibBundleOrNil]) { // for supported orientations return (interfaceOrientation == UIInterfaceOrientationPortrait); } */ - (void)didReceiveMemoryWarning { // Releases the view if it doesn't have a superview. [super didReceiveMemoryWarning]; // Release any cached data, images, etc that aren't in use. } - (void)viewDidUnload { // Release any retained subviews of the main view. // e.g. self.myOutlet = nil; } - (void)dealloc { [super dealloc]; } @endThe first step is to instruct Objective-C to synthesize accessor methods for our tempText and resultLabel objects and then to implement the convertTemp method. The relevant section of the UnitConverterViewController.m file should now read as follows: #import "UnitConverterViewController.h" #import "UnitConverterViewController.h" @implementation UnitConverterViewController @synthesize resultLabel, tempText; - (void) convertTemp: (id) sender { double farenheit = [tempText.text doubleValue]; double celsius = (farenheit - 32) / 1.8; NSString *resultString = [[NSString alloc] initWithFormat: @"Celsius %f", celsius]; resultLabel.text = resultString; [resultString release]; } . . . @end Before we proceed it is probably a good idea to pause and explain what is happening in the above code. Those already familiar with Objective-C, however, may skip the next few paragraphs. In this file we are implementing the convertTemp method that we that comprise the user. The last step releases the memory that was allocated to the object referenced by resultText since it is no longer needed. Having created our action method we also need to modify the viewDidUnload and dealloc methods in this file to make sure we properly release the memory we allocated for our variables. Failure to do so will result in a memory leak in our application: - (void)viewDidUnload { // Release any retained subviews of the main view. // e.g. self.myOutlet = nil; self.resultLabel = nil; self.tempText = nil; } - (void)dealloc { [resultLabel release]; [tempText release]; [super dealloc]; } Before proceeding to the next section of this chapter, now is a good time to perform a build and run to make sure that no errors exist in the code. Click on the Run button in the toolbar and correct any syntax errors that are reported. small square with the black triangle at the bottom of this panel will expand the panel to provide more detail: the following figure: Upon releasing the mouse button, Interface Builder will display a list of IBOutlet variables that match -> Connections Inspector. For example, the following figure shows the connection information for our label object: The final step is to connect the button object to our convertTemp action method. Cocoa Touch objects typically have a wide range of events that can be triggered by the user. To obtain a full listing of the events available on a particular object, display the Connections Inspector (View -> Utilities -> Connections Inspector) and select the button object in the view window. Listed under Sent Events in the connections panel is a list of the events that can be triggered by the button object. In this instance, we are interested in the Touch Up Inside event. This event is triggered when a user touches the button and then releases their finger the following figure: Releasing the mouse button over the File’s Owner icon will display a menu with a list of methods available in the view controller class. In our case the only method is convertTemp so select that method to initiate the connection. The event listing in the Connections dialog: By default the iPad iOS Simulator will display the iPad screen scaled down by 50%. To view the application at full size select the Window -> Scale -> 100% menu option. 4 Code to Hide the iPad Keyboard provides a tutorial on how to hide the keyboard when either the Return key or the background view are touched by the user.
http://www.techotopia.com/index.php/Creating_an_Interactive_iOS_4_iPad_App_(Xcode_4)
CC-MAIN-2017-26
refinedweb
1,066
51.68
In this article, we’re going to set up dependency injection in a new ASP .NET Web API project, using Ninject as our IoC container. For starters, do the following: - Create a new Web API project. - Install the Ninject.Web.WebApi NuGet package. - Install the Ninject.Web.WebApi.WebHost NuGet package. Since I need an injectable service to demonstrate this with, I’m also going to install my very own .NET Settings Framework via the Dandago.Settings NuGet package. When you installed Ninject.Web.WebApi.WebHost, it added a NinjectWebCommon.cs class under the App_Start folder: Ignore the boilerplate crap and look for the RegisterServices() method. There, you can set up your dependencies. In my case, it looks like this (needs namespace Dandago.Settings): /// <summary> /// Load your modules or register your services here! /// </summary> /// <param name="kernel">The kernel.</param> private static void RegisterServices(IKernel kernel) { kernel.Bind<IConfigKeyReader>().To<AppSettingReader>(); kernel.Bind<IConfigKeyProvider>().To<ConfigKeyProvider>(); } Great! Now, let’s test it. Find ValuesController and at the following code at the beginning: private int x; public ValuesController(IConfigKeyProvider configKeyProvider) { this.x = configKeyProvider.Get<int>("x", 5); } Run it, and we should hit the breakpoint when going to /api/values: It’s working, and that’s all you need. In case it wasn’t that smooth, however, here are a couple of things that might have gone wrong. If you’re getting the above error complaining about not having a parameterless public constructor, then you probably forgot to install the Ninject.Web.WebApi.WebHost package. If on the other hand you went ahead and installed Ninject.Web.WebApi.WebHost first, that brings in an older version of the Ninject.Web.WebApi package, causing the above ActivationException. The solution is to upgrade Ninject.Web.WebApi. 2 thoughts on “ASP .NET Web API Dependency Injection with Ninject” I really appreciate this extremely useful example, however I am new to Web Api 2 and I noticed that there was no formal registering of the dependency resolver that I could see. Not in the WebApiConfig.cs or in the Global.asax like I am used to? I don’t know how you’re used to doing it, but I think it’s all in NinjectWebCommon. There are two assembly directives that refer to Start() and Stop() methods within the same file, taking care of setup and teardown. I don’t know how this relates to dependency resolvers though.
http://gigi.nullneuron.net/gigilabs/asp-net-web-api-dependency-injection-with-ninject/?replytocom=12301
CC-MAIN-2019-51
refinedweb
403
61.33
Contents - 1 Introduction - 2 Pandas isnull: isnull() - 3 Pandas isin : isin() - 4 Pandas empty : empty() - 5 Conclusion Introduction While working with your machine learning or data science project, you will often have to explore the content of the pandas dataframes In this tutorial, we will learn some useful pandas functions namely isnull(), isin(), and empty() that makes the life of data scientist easy. We will be looking at different examples along with the syntax for each function. Importing Pandas Library To start this tutorial, we will import the pandas library import pandas as pd This tutorial will be commenced with the isnull() function of pandas. Pandas isnull: isnull() The pandas isnull() function is used for detecting missing values in an array-like object. Syntax pandas.isnull(obj) obj – This is the object which is passed to the function for finding missing values in it. The result of this function is a boolean value. Based on the input provided, the boolean result is obtained. Note – Pandas has an alias of isnull() function known as isna() which is usually used more and we are going to use this alias in our example. Example 1: Applying isna() function over scalar values In this example, the isna() function of pandas is applied to scalar values. When the function is provided a scalar value, then the result is false and if we specify a null value, then the output is true. pd.isna('Orange') False import numpy as np pd.isna(np.nan) True The pandas isna() can be applied to arrays and the result is also generated in the form of boolean arrays. array = np.array([[np.nan, 7, 9], [ 8, np.nan,16]]) array array([[nan, 7., 9.], [ 8., nan, 16.]]) pd.isna(array) array([[ True, False, False], [False, True, False]]) Example 3: Usage of pandas isna() function on dataframe The isna() function is highly useful for dataframes. In this example, we will look at it and understand the usage. df = pd.DataFrame([['potato', None, 'spinach'], [None, 'Watermelon', 'Strawberry']]) df So the values which were specified as None in the array, had boolean True and other values were False. pd.isna(df) The next pandas function in this tutorial is isin(). Pandas isin : isin() With the help of isin() function, we can find whether the element present in Dataframe is present in ‘values’ which provided as an argument to the function. Syntax pandas.DataFrame.isin(values) values : iterable, Series, DataFrame or dict – Here the values which are required to be checked are provided in the form of either series, dataframe or dictionary. The result is an array of boolean values. Example 1: Using list as values When we use list as a parameter for the pandas isin() function, we can check whether each value is present in the list or not. df = pd.DataFrame({'seed_count': [50, 15], 'quantity': [15, 40]}, index=['watermelon', 'orange']) df This isin() function tells us where we have 15 as a value in the dataframe. df.isin([0, 15]) Example 2: Using dictionary as values By using dictionary as an input to the pandas function isin(), we can check each column’s value separately. df.isin({'quantity': [0, 40]}) Example 3: Using DataFrames as values When we pass dataframes as values, then the new dataframe is checked if it contains the values in the main dataframe. df_other = pd.DataFrame({'seed_count': [50, 5], 'quantity': [15, 2]}, index=['watermelon', 'orange']) df_other As the values of the bottom row didn’t match, they were assigned False bool value. df.isin(df_other) The third and final function in the list is empty() function. Pandas empty : empty() The pandas empty() function is useful in telling whether the DataFrame is empty or not. Syntax DataFrame.empty() This function returns a bool value i.e. either True or False. If both the axis length is 0, then the value returned is true, otherwise it’s false. Example 1: Simple example of empty function In this example, a dataframe is created with no values entered in it. As expected the empty function results True, which means there is an empty dataframe. df_emp = pd.DataFrame({'a' : []}) df_emp df_emp.empty True Example 2: Using Nan values in array When NaN values are provided as input to a DataFrame, then the DataFrame is not considered to be empty. df_nan = pd.DataFrame({'a' : [np.nan]}) df_nan As we can see in the output, the false value suggests that the DataFrame is not empty. df_nan.empty False If we drop these NaN values, then we can see the output. It shows the value as true, thus suggesting that dataframe is empty. df_nan.dropna().empty True Conclusion In this tutorial, we learn isnull(), isin() and empty() function of pandas that are used in the data explorations stage of a data science project. - Also Read – Tutorial – Pandas Drop, Pandas Dropna, Pandas Drop Duplicate - Also Read – Pandas Visualization Tutorial – Bar Plot, Histogram, Scatter Plot, Pie Chart - Also Read – Tutorial – Pandas Concat, Pandas Append, Pandas Merge, Pandas Join Reference –
https://machinelearningknowledge.ai/pandas-tutorial-isnull-isin-empty/
CC-MAIN-2022-33
refinedweb
833
60.04
The thing works both in IE 6.0 and FireFox 1.4. but with some problems. IE crashes when one refreshes the page or leave the page. This happens only after calling the Java method more than once. It does not crash if the Java method is called just once and then the page is refreshed. FireFox does not crash at all and no error whatsoever. However, it takes a good minute or two to display the newly created element. And, if during that time if one calls the Java method, it gives an error and it never works afterwords. But, once the Java method returns after the first time, it works robustly no matter how many times the Java method is called or if the page is refreshed or closed. Can any LiveConnect guru please take a look at this code and let me know whats wrong with it. Thank you very much in advance. D.K. Mishra ======== Hello.class ======== import java.applet.Applet; import java.awt.Graphics; import netscape.javascript.*; public class Hello extends Applet { private JSObject win; private JSObject doc; public void init() { } public void start() { win = JSObject.getWindow(this); doc =(JSObject) win.getMember("document"); } //A set of 2 overloaded helper methods to create object array to pass // as the 2nd argument to doc.call(string,Object[]). public Object[] objArr(JSObject jso) { Object[] ret = {jso}; return ret; } public Object[] objArr(String str) { Object[] ret = {str}; return ret; } //This cretes a filled HTML Tag like <p>Hello</p> or //<i>world!</i>. public JSObject createFilledTag(String strTag, String strText) { JSObject fragDoc = (JSObject) doc.call("createDocumentFragment",null); JSObject tagEle = (JSObject) doc.call("createElement",objArr(strTag)); JSObject tagTextEle = (JSObject)doc.call("createTextNode",objArr(strText)); tagEle.call("appendChild",objArr(tagTextEle)); fragDoc.call("appendChild",objArr(tagEle)); return fragDoc; } //This method is called from javascript. It inserts //*** Hello World! ***" into the empty <p id="para"></p> //element public void insertText(String str) { JSObject paraEle = (JSObject) doc.call("getElementById", objArr(str)); JSObject tmpEle = createFilledTag("b","*** Hello World! ***"); paraEle.call("appendChild",objArr(tmpEle)); }
http://groups.google.com/group/comp.lang.java.help/msg/35ddca8a2249914a
crawl-002
refinedweb
338
60.41
Learn more about these different git repos. Other Git URLs I submitted two updates to fedora {f27,f26}-testing about 13 days ago: Both are still blocked from being pushed to stable because "no test results found", but browsing the "Automated Tests" tab shows test results, and manually querying resultdb also shows that all tests were indeed run and all checks passed: It looks like the "no test results found" status for those updates just was never updated once the test results were available. I previously reported this issue against bodhi here, but was told that this is an issue in greenwave. Hi @decathorpe. :) Try out this python script, which poses the same query to greenwave that bodhi does. 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 #!/usr/bin/env python """ Ask a question of greenwave. """ import pprint import sys import requests for nvr in ['golang-github-xtaci-smux-1.0.7-1.fc27']: url = ( ' 'api/v1.0/decision') payload = dict( #verbose=True, decision_context='bodhi_update_push_stable', product_version='fedora-27', subject=[{'item': nvr, 'type': 'koji_build'}], ) response = requests.post(url, json=payload, verify=False) print("-" * 40) print(nvr, response, response.status_code) data = response.json() print(pprint.pformat(data)) When I run it, I get this output: ('golang-github-xtaci-smux-1.0.7-1.fc27', <Response [200]>, 200) {u'applicable_policies': [u'taskotron_release_critical_tasks_for_stable'], u'policies_satisfied': False, u'summary': u'no test results found', u'unsatisfied_requirements': [{u'item': {u'item': u'golang-github-xtaci-smux-1.0.7-1.fc27', u'type': u'koji_build'}, u'scenario': None, u'testcase': u'dist.rpmdeplint', u'type': u'test-result-missing'}]} Which shows that greenwave is requiring the dist.rpmdeplint test to run, but it cannot find a result for it (neither pass nor failure). dist.rpmdeplint You posted some queries to taskotron above. Take one, and let's filter it down to look for only dist.rpmdeplint results. So, then we bounce you along. It seems like there really is no rpmdeplint result for golang-github-xtaci-smux-1.0.7-1.fc27. @kparal, where should we refer @decathorpe about this? golang-github-xtaci-smux-1.0.7-1.fc27 In the meantime, @decathorpe, you should be able to get your update moving by "waiving" the absence of that test result. $ waiverdb-cli -p "fedora-27" -t dist.rpmdeplint -s '{"item": "golang-github-xtaci-smux-1.0.7-1.fc27", "type": "koji_build"}' -c "I dunno what's wrong, but I really want to ship my update." All that aside, there is something that ends up being misleading about the "no test results found" string. Really, greenwave found some test results, it's just that none of them applied to the list of tests that it required. It should be possible to make that more clear on greenwave's side. Ok, Thanks! However: Running that command doesn't work, I get this error message: Error: The config option "resultsdb_api_url" is required There's no information on where to set this settings key (I'm guessing in /etc/waiverdb/client.conf?), but I have no idea which value I should put there. /etc/waiverdb/client.conf Hello @decathorpe. You can find how to configure "resultsdb_api_url" at the end of this section: Basically, it is the URL of the instance of resultsdb. So I guess for you it will be something like: resultsdb_api_url= Whoops. The resultsdb_api_url on the wiki page there was incorrect (for a few weeks!) It should be fixed now. So, then we bounce you along. It seems like there really is no rpmdeplint result for golang-github-xtaci-smux-1.0.7-1.fc27. @kparal, where should we refer @decathorpe about this? So, then we bounce you along. It seems like there really is no rpmdeplint result for golang-github-xtaci-smux-1.0.7-1.fc27. @kparal, where should we refer @decathorpe about this? Recently our resultsdb instance hasn't been able to keep up with the load and is often unresponsive, which means tests are unable to post results. So it's quite common for results to be missing. We will need to do something about it, unfortunately I don't know what and who can do it. This is especially problematic combined with the fact that we can't reschedule tests to be run again. I can only recommend @decathorpe to test dependencies manually and submit a waiver, so that he can push to stable updates. I don't have anything better at the moment, sorry :/ It's okay, @kparal. Let's get Bodhi to stop displaying results directly from resultsdb as a first step (so that the 1000 users don't all query resultsdb themselves). Today, greenwave supplies the list of all results it saw (even the "irrelevant" ones) in its API response to Bodhi, if Bodhi passes verbose=True in the question. Bodhi could cache this and display it to users, which I expect would greatly reduce the load on resultsdb. verbose=True /cc @bowlofeggs There's a Bodhi ticket where I've discussed having Bodhi cache the test results in its DB: OK, after adding the api endpoint to the .conf file, filing the waivers worked, and I guess now I have to wait until bodhi updates the status. Small suggestion: Maybe the default resultdb API path should be added to the default configuration, so not everybody has to find that wiki page and add the thing manually? Small suggestion: Maybe the default resultdb API path should be added to the default configuration, so not everybody has to find that wiki page and add the thing manually? Yeah you are totally right. Ralph did fix this in waiverdbPR#144 but it hasn't made it into a Fedora update just yet. Sorry about that. OK - there's a Bodhi change to make about increasing the frequency of the sync from Greenwave (best when driven by the message bus). There's another bit in here that will live on in #145. Metadata Update from @ralph: - Issue status updated to: Closed (was: Open) Yeah you are totally right. Ralph did fix this in waiverdbPR#144 but it hasn't made it into a Fedora update just yet. Sorry about that. It's nice to see that this is already done - I didn't know that when I stated my suggestion above. And thanks for your help with troubleshooting my issue :) No problem. Thanks for your patience (and for pursuing the report!) to comment on this ticket.
https://pagure.io/greenwave/issue/141
CC-MAIN-2022-21
refinedweb
1,087
65.42
Java naming conventions are sort of guidelines which application programmers are expected to follow to produce a consistent and readable code throughout the application. If teams do not not follow these conventions, they may collectively write an application code which is hard to read and difficult to understand. Java heavily uses Camel Case notations for naming the methods, variables etc. and TitleCase notations for classes and interfaces. Let’s understand these naming conventions in detail with examples. 1. Packages naming conventions Package names must be a group of words starting with all lowercase domain name (e.g. com, org, net etc). Subsequent parts of the package name may be different according to an organization’s own internal naming conventions. package com.howtodoinjava.webapp.controller; package com.company.myapplication.web.controller; package com.google.search.common; 2. Classes naming conventions In Java, class names generally should be nouns, in title-case with the first letter of each separate word capitalized. e.g. public class ArrayList {} public class Employee {} public class Record {} public class Identity {} 3. Interfaces naming conventions In Java, interfaces names, generally, should be adjectives. Interfaces should be in titlecase with the first letter of each separate word capitalized. In same cases, interfaces can be nouns as well when they present a family of classes e.g. List and Map. public interface Serializable {} public interface Clonable {} public interface Iterable {} public interface List {} 4. Methods naming conventions Methods always should be verbs. They represent an action and the method name should clearly state the action they perform. The method name can be a single or 2-3 words as needed to clearly represent the action. Words should be in camel case notation. public Long getId() {} public void remove(Object o) {} public Object update(Object o) {} public Report getReportById(Long id) {} public Report getReportByName(String name) {} 5. Variables naming conventions All instance, static and method parameter variable names should be in camel case notation. They should be short and enough to describe their purpose. Temporary variables can be a single character e.g. the counter in the loops. public Long id; public EmployeeDao employeeDao; private Properties properties; for (int i = 0; i < list.size(); i++) { } 6. Constants naming conventions Java constants should be all UPPERCASE where words are separated by underscore character (“_”). Make sure to use final modifier with constant variables. public final String SECURITY_TOKEN = "..."; public final int INITIAL_SIZE = 16; public final Integer MAX_SIZE = Integer.MAX; 7. Generic types naming conventions Generic type parameter names should be uppercase single letters. The letter 'T' for type is typically recommended. In JDK classes, E is used for collection elements, S is used for service loaders, and K and V are used for map keys and values. public interface Map <K,V> {} public interface List<E> extends Collection<E> {} Iterator<E> iterator() {} 8. Enumeration naming conventions Similar to class constants, enumeration names should be all uppercase letters. enum Direction {NORTH, EAST, SOUTH, WEST} 9. Annotations naming conventions Annotation names follow title case notation. They can be adjective, verb or noun based the requirements. public @interface FunctionalInterface {} public @interface Deprecated {} public @interface Documented {} public @Async Documented { public @Test Documented { In this post, we discussed the Java naming conventions to be followed for consistent writing of code which make the code more readable and maintainable. Naming conventions are probably the first best practice to follow while writing clean code in any programming language. Happy Learning !! Feedback, Discussion and Comments Oliver Hi, I think No. 8 also contains an error. “Similar to class names, enumeration names should be all uppercase letters.” Class names use titlecase with nouns, I guess you were thinking “constants” when you wrote this. Lokesh Gupta Yeh ! You are right. Thanks for reporting it. Chris I feel that this section: In Java, interfaces names, generally, should be verbs. Should actually be In Java, interfaces names, generally, should be **adjectives**. When people suggest things like ‘Runnable’ or ‘Tasklike’ it’s a description of the kind of object that the interface describes. ‘What kind of object is it? It is a {List|Runnable|Cancelable} type of object’ Lokesh Gupta I think, you are right. Adjectives makes more sense. Even nouns shall be used. Must be thinking something else when I wrote it. Thanks a ton for noticing. User123 should be Lokesh Gupta Thanks for pointing out. I must have been thinking something else. Much appreciated.
https://howtodoinjava.com/java/basics/java-naming-conventions/
CC-MAIN-2019-43
refinedweb
729
51.24
Virtual machine management toolkit. _ _ __ ____ ____ ____ _ _ / )( \ / \( _ \(_ _)( __)( \/ ) \ \/ /( O )) / )( ) _) ) ( \__/ \__/(__\_) (__) (____)(_/\_) by Websecurify Vortex is a virtual machine management tool. It is similar to Vagrant. The rationale for writing this tool is to enable better management of development and production infrastructure at the same time. We, Websecurify, could not easily achieve this with Vagrant so the this tool was written to fill the gap. You can do the following things with Vortex: Vortex removes any barriers from the time you start developing your application to the time it is already live and you need to maintain it. It is essential to understand the key principle behind Vortex, which is to always produce a replicable environment. This sounds nice and simple but it gets deeper than this. What this means in practice is that virtual machines/nodes are disposable. In other words, they only exist fully provisioned and not in any other way. They also don't maintain any state. Once you halt a node, it is gone for good with all the data it was keeping within it. If you boot the node again it will launch a brand new instance. This is why there is no state. State is essentially a 3rd-class citizen in Vortex. You provide it only by attaching external storages or by integrating with other services from your application. This sounds like a very extreme way of dealing with things but it does solve a few hard problems like scalability and the application effectiveness against hardware and other types of failure. This philosophy is a constrain, which works in our favour and it is fully embraced in the design of the tool. The easiest way to install Vortex is via node's npm. You need to have nodejs installed for this. Simply type the following command: npm install -g vortex An alternative approach is to just copy the source files and execute them manually though nodejs. There are plans to create a standalone binary distributions if there is need for this. You can use Vortex as a library or via the command line, which is more convenient. At the moment there are no docs on the API but for now you can just check the source code for inspiration. Here are a few examples of how to use Vortex via your shell: vortex status # shows the status of all nodes vortex boot # boots all nodes vortex halt # halts all nodes vortex provision # provision all nodes The following additional helper actions are also available: vortex up # boots and provisions a node vortex down # halts a node You can also specify which node you want to manipulate: vortex shell my-node-name # starts interactive session on the selected node vortex halt my-node-name # halts the selected node To get the complete list of actions just use actions vortex actions # get complete list of actions By the default Vortex reads the configuration from vortex.json located inside the current working directory. However, you can specify an alternative location with the -f|--file option. For example: vortex -f path/to/folder # loads path/to/folder/vortex.json manifest vortex -f path/to/config.json # loads path/to/config.json manifest Verbose messages can be obtained by using the -v|--verbose flag, which can also be combined with the -c|--colorize flag for better visual aid. For example: vortex -vv # enables debug level logging vortex -vvv -c # enables silly level logging with colorization Vortex supports different providers to manage your virtual machines/nodes. Out of the box you have support for VirtualBox and Amazon. VirtualBox is the default provider. Here is an example how to select a different provider: vortex --provider=Amazon boot # boots nodes into amazon ec2 The default provisioner, Roost, can also be configured with some command-line options. If you specify the -d|--dry flag the provisioner will only output information on what it will do but not perform any actions. This is useful if you are uncertain about the changes you are making to the roost manifests and you just want to check it out before doing it for real. For example: vortex --provider=Amazon -d provision my-sensitive-node # dry-runs the provisioner Here is another fun bit you can do. The shell action also accepts parameters which will be directly executed as commands. For example: vortex shell -- -- ls -la # will list the home folder You can apply commands to all nodes or the one you have specifically selected. The Vortex manifest file is a simple JSON document. By default you are only required to specify the nodes you want in your configuration: { ... "nodes": { "my-node": { } }, ... } This is the simplest possible configuration, which is not useful for anything just yet. To make this configuration useful for booting an image in Amazon you need to supply additional information. This is how it is done: { ... "amazon": { "accessKeyId": "YOUR ACCESS KEY GOES HERE", "secretAccessKey": "YOUR SECRET KEY GOES HERE", "region": "A REGION SUCH AS us-west-1, us-west-2, etc GOES HERE" }, "nodes": { "ubuntu": { "amazon": { "imageId": "ami-2fb3201f", "securityGroups": ["default"], "keyName": "my-key", "privateKey": "path/to/my-key.pem", "username": "ubuntu" } } }, ... } Providing credentials inside configuration file is not always optimal but it saves you from typing longer and more complex commands. The reality of the situation is that you can do the following: ACCESS_KEY_ID=bish SECRET_ACCESS_KEY=bosh AWS_REGION=us-west-1 vortex --provider=Amazon boot The config file for this will be: { ... "nodes": { "ubuntu": { "amazon": { "imageId": "ami-2fb3201f", "securityGroups": ["default"], "keyName": "my-key", "privateKey": "path/to/my-key.pem", "username": "ubuntu" } } }, ... } The same properties can also be provided per-node if this is what you want. Underneath all of this sits the aws-sdk for nodejs so all parameters are exactly the same as you will find in the SDK. VirtualBox is configured in the same way. The only difference is that you need to specify VirtualBox specific configuration. For example: { ... "nodes": { "ubuntu": { "amazon": { "imageId": "ami-2fb3201f", "securityGroups": ["default"], "keyName": "my-key", "privateKey": "path/to/my-key.pem", "username": "ubuntu" }, "virtualbox": { "username": "ubuntu", "password": "ubuntu", "vmId": "baseimage", "vmUrl": "" } } }, ... } If you have a lot of nodes that are similar with minor differences you can move the configuration out of the node structure and specify it globally like such: { ... " } } }, ... } Last but not least, nodes can be launched in their own namespaces. Namespaces are useful when there are a lot of stuff going on and you just want to logically separate nodes into different groups (or soft-groups if you prefer). Here is an example: { ... namespace: "my-config", ... " } } }, ... } Now node1 and node2 will run in the namespace my-config and this will not interfere with other nodes that have similar names. Namespaces can be used per node as well so you can get very creative. The VirtualBox provider can be configured by supplying a "virtualbox" property at the top level of the manifest file or per-node. The following options are accepted everywhere: The Amazon provider can be configured by supplying a "amazon" property at the top level of the manifest file or per-node. The following options are accepted everywhere: Vortex comes with a built-in provisioner called roost - another project of ours. Roost manifest files can be either imported from an external file or embedded directly into your vortex manifest. Here is an example: { ... "nodes": { "ubuntu": { "roost": "roost.json" } }, ... } You can also do the following if this is too much of a trouble: { ... "nodes": { "ubuntu": { "roost": { "apt": { "update": true }, "packages": [ "nodejs" ], "commands": [ "uname -a" ] } } }, ... } As a matter of fact, you can even apply a global roost file for all nodes. Just register the roost configuration outside of the nodes property. Merging roost manifests is also possible when declared at multiple levels. For example, at top level you may want to apply some defaults and maybe even some updates. Per node you may want to apply generic configurations and have some additional provisioning options for each provider. Such complex setup is possible and here is an example: { ... "roost": { "apt": { "update": true } } ... "nodes": { "ubuntu": { "roost": { "merge": true, "packages": [ "nodejs" ] }, "virtualbox": { "roost": { "merge": true, "commands": [ "cd /media/cdrom; ./VBoxLinuxAdditions-x86.run" ] } } } }, ... } The manifest is built from the inner most configuration and merged upwards if the merge flag is set to true. This is a non-standard roost option. For more information how the provisioner works just check the project page. Vortex can be extended with plugins. Plugins are essentially nodejs modules and are installed the same way you typically install nodejs modules, i.e. npm and package.json. A good starting doc how npm modules work can be found here. In order to load a plugin you need to declare it in your Vortex manifest file. Here is an example: { ... "plugins": [ "my-plugin" ], ... } Plugins are executed first and can affect everything from the actual manifest that was loaded to what providers and actions are exposed and much more. The following workflow takes place when working with plugins. require. getVortex(takes priority) and vortex. getVortexis used to retrieve an object that exposes a vortexfunction. vortexis looked for to check if the plugin is compatible at this stage. vortexfunction. The following parameters are passed: Use getVortex to augment the Vortex environment such as install new actions, providers, etc. Use vortex to do something, mostly with the manifest file, before the actual action takes place. Vortex plugins can do pretty much everything so here are some suggestions of what you could do if you spend some time writing a plugin: The list goes on and on. Get creative! Each node can have the following states when querying via the Provider.prototype.status function: These states are also exposed when quering a node via the status action, i.e. vortex status # shows a state such as booting, running, halting, stopped
https://www.npmjs.com/package/vortex
CC-MAIN-2015-35
refinedweb
1,641
55.54
Divide numbers from 1 to n into two groups with minimum sum difference from O(2^N) to O(N) Reading time: 30 minutes For numbers from 1 to given n, we have to find the division of the elements into two groups having minimum absolute sum difference. We will explore two techniques: - Brute force which will take O(2^N) time complexity - Greedy algorithm which will take O(N) time complexity Hence, we have reduced an exponential time complexity O(2^N) to a linear time complexity O(N). Naive Approach O(2^N) A naive approach would be to generate all subsets of the array of length N and calculating the difference for each of the two subsets in them. Since generating all subsets takes exponential time, this approach is very inefficient. The steps involved are: - Generate all subsets of the array and their corresponding complements. - Calculate the sum of all the elements over the two subsets. - Compute the absolute sum difference between the two subsets. - Update the minimum value of the absolute difference. Pseudocode: Input: Set[], set_size 1. Get the size of power set powet_set_size = pow(2, set_size) min_sum = INT_MAX 2 Loop for counter from 0 to pow_set_size (a) Loop for i = 0 to set_size (i) Initialize temp_sum to 0 (ii) If ith bit in counter is set Print ith element from set for this subset Update temp_sum by summing with ith element and taking difference with complement sum (iii) Set min_sum to min(min_sum, temp_sum) (b) Print seperator for subsets i.e., newline Greedy Algorithm O(N) We can divide numbers from 1 to n into two groups such that their absolute sum difference is always 1 or 0. So the absolute difference is atmost 1. We maintain two counters a and b, both of them initialized to 0 and one temporary variable initialized to half of the sum of the elements. - Run a loop from n to 1. - If the element does not exceed sum, insert it into Group 1, increment a by the element and decrement sum by the element. - Else, insert the element into Group 2. Implementation #include <bits/stdc++.h> using namespace std; int main() { //Obtaining bound of n int n; cin >> n; //Setting half sum value int sum = (n*(n+1)/2)/2; //Initializing counter variable int a = 0, b = 0; //Maintaining two groups vector<int> group1, group2; //Running loop from n to 1 for(int i=n;i>=1;i--){ if(sum-i>=0){ group1.push_back(i); sum -= i; a += i; }else{ group2.push_back(i); b += i; } } //Printing minimum sum difference cout << "Minimum difference: " << abs(a-b) << endl; //Printing the elements of the two groups cout << "Size of Group 1: " << group1.size() << endl; for(int i=0;i<group1.size();i++){ cout << group1[i] << " "; } cout << endl; cout << "Size of Group 2: " << group2.size() << endl; for(int i=0;i<group2.size();i++){ cout << group2[i] << " "; } return 0; } Examples //Console input 3 //Console output Minimum difference: 0 Size of Group 1: 1 3 Size of Group 2: 2 2 1 Here is the state of the two groups for all iterations: - Here sum is initialized to 3. Since sum - 3 is 0, 3 is inserted into Group 1. For each subsequent element, sum - i is less than 0, so it is inserted into Group 2. //Console input 6 //Console output Minimum difference: 1 Size of Group 1: 2 6 4 Size of Group 2: 4 5 3 2 1 Here is the state of the two groups for all iterations: - Here sum is initialized to 10. Since sum - 6 is 4, 6 is inserted into Group 1. Since sum - 5 is less than 0, it is inserted into Group 2. Since sum - 4 is 0, it is inserted into Group 1. For each subsequent element, sum - i is less than 0, so it is inserted into Group 2. Complexity The time complexity of this algorithm is O(N), as we loop from N to 1.
https://iq.opengenus.org/divide-numbers-from-1-to-n-into-two-groups-with-minimum-difference/
CC-MAIN-2021-04
refinedweb
664
60.55
Chair: Jon Gunderson Date: Wednesday, April 19th Time: 2:00 pm to 3:30 pm Eastern Standard Time, USA Call-in:Longfellow Bridge (+1) (617) 252-1038 Notes: Proposals for open ended and ambiguous checkpoints: Proposals for open ended and ambiguous checkpoints: Current Info: Proposal: Update: Sent request to IJ, UA, AU, GL lists for more information Chair: Jon Gunderson Scribe: Ian Jacobs RSVP Present: Madeleine Rothberg Al Gilman Mark Novak Denis Anson Hans Riesebos Harvey Bingham Al Gilman Gregory Rosmaita Regrets: Rich Schwerdtfeger Kitch Barnicle Charles McCathieNevile David Poehlman 2.IJ: Propose three terms to the list: Document Source, Document Object and Rendered Content 3.IJ: The content/ui division in G1 needs to be fixed IJ: Done locally. 4.IJ: Resolutions from FTF meeting IJ: All done, I believe. 5.IJ: Adopt new wording of proposal for checkpoint 9.2 IJ: Done locally. 7.CMN: Find out from I18N how to generalize the accessibility provided by sans-serif fonts. 13.DP: Review techniques for Guidelines 1 and 2 19.JG: Identify the minimal requirement for each checkpoint. 20.HB: Take scoping issue of the current guidelines to the EO Working Group HB: EO confirms that they are doing the FAQ. 18.JG: Take conformance grandularity issue to the WAI CG. JG: Sent as an agenda item. 1b) Continued 1. IJ: Draft a preliminary executive summary/mini-FAQ for developers. (No deadline.) 6.IJ: Propose split to the list. Identify why and issue of priority. 8.CMN: Propose a technique that explains how serialization plus navigation would suffice for Checkpoint 8.1. 9.DA: Send name of new organization to list that was mentioned by some person from the US Census Bureau 10.DA: Review techniques for Guidelines 7 and 8 11. DB: Get Tim Lacy to review G+ 12. DB: Review techniques for Guidelines 3, 4, and 11. 21.MQ: Review techniques for Guidelines 9 and 10 22.RS: Take notification of focus and view changes to PF as possible DOM 3 requirement. JG: The only thing not covered is inheritance by viewports. IJ: Does the presence of several viewports cause an accessibility problem? DA: Some users with CD can only process a certain amount of visual information. One user could play card games, e.g., as long as there were no more than four cards in his hand. If you have too many viewports, you may exceed the threshhold above which processing becomes impossible. IJ: For me, the requirement becomes the ability to close, not the ability to prevent opening. DA: If you have to close with looking, then it's still a problem. GR: I don't want to have to do this by hand: I want to configure the UA. Also, I want to ensure that configurations are inherited and that focus is constant when a viewport is duplicated. JG: Sounds like minimal requirement is allowing the user to turn off any viewport that opens automatically. IJ: Propose issues of opening in 4.15 and issues of number in 4.16. "Not opening" is a technique for controlling the number. DA: You might also want a technique to limit the number. IJ: Would being prompted cause you confusion? So is the technique we're suggestion appropriate for the users whose needs we're trying to address. DA: Prompting might confuse, yes. AG, JG: The proposed remedy would meet 4.16 as stated today. DA: One technique: ignore "target=_new". DA: Too much information is also a disorientation problem. But the cause of disorientation is different. IJ: 4.16: Allow the user to configure the number of viewports open at one time. DA: Techniques will be similar. AG: I have a residual unhappiness - you're creating more requirements. I realize that the problems aren't the same, but if the minimal requirements are the same, why not merge the checkpoints. IJ: Minimal requirement for 4.15 is to prevent the focus from moving (and let the user change it by hand, by navigating to another viewport). For 4.16, it's don't open new windows and don't ask. AG: "Just say no" is not sufficient. You also need prompting and "just do it normally". So three pieces to minimal requirement. AG: I think that number of viewports is an issue for blind users, even if it's a lower priority for them. Action IJ: Propose new 4.15 and 4.16 to list. MR: Refer to my email: MR: Part of the difficulty is to avoid distorting pitch. JG: Variable-speed tape recorders have been doing this since the 1950's. AG: Yes, there are techniques for doing this, although I don't think that this has been done commercially as long as that. JG: What's the range for slowing down? Half-speed? HB: George Kerscher argued that below 80%, the audio becomes almost unusual. DA: Most of the technology I'm aware of is about speeding up. GR: Although many blind users speed up rendering, some users who are newly blind, or who have other disabilities, will need to slow down as well. GR: I don't think slowing down needs to be symmetric with speeding up. IJ: We can't pick numbers out of a hat. For any of these ranges, we need to do research. JG: Note that 4.5 does not talk about speeding up since you still have access to content. Slower than you wish, perhaps, but you still have access. HB: Then it's a P2 or P3 to speed up. JG: You may not have to do pitch adjustment at 80% speed. JG: What about for animations and video? HB: Will you get "beating" phenomena in the visual field? AG: I think it makes sense to split video and animation due to feasibility. JG: We're not talking about compressing. We're talking about extending the time a frame is visible. AG: You could do dithering. MR: I'm not aware of data on these needs. DA: I would guess that for video or animation, half-speed would suffice. MR: If there is audio that goes with the video, then those two need to be synchronized. The user needs to know that if they slow down the video more than 80%, that the audio may not be rendered. JG: It would be acceptable to cut out the sound below 80%. DA: Yes, a user could view the video without audio first , then replay faster with audio to get more information. JG: - For video / animated: slow to at least 50% - For audio: slow to at least 80% - When synchronized: down to 80%, must synchronized, after that, audio can drop out. HB: We don't need to specify a continuous range down to 50% or 80%. Consider resolved for now (unless information contradicts our best guess): 1) Leave 4.5 a P1. (We have a reference implementation). 2) Specify minimal requirements for synchronization as stated above. Action DA: Get confirmation that these numbers make sense. Action MR: Send URI to MS's implementation to the list. Resolved: Keep same priority. Use wording below: 5.9 Follow operating system conventions that affect accessibility. In particular, follow conventions for user interface design, keyboard configuration, product installation, and documentation. [Priority 2] Note. Operating system conventions that affect accessibility are those described in this document and in platform-specific accessibility guidelines. Some of these conventions (e.g., sticky keys, mouse keys, show sounds, etc.) are discussed in the Techniques document [UAAG10-TECHS]. IJ: I've sent email to reviewer, no reply. But it sounds, based on discussion today, that users with CD are affected. J G: I sent email too: DA: Yes, too much visual information can cause problems. GR: Isn't there also an issue about spatial relationships between images and text? DA: Yes, that's a possibility. With people barely able to take in the information, the switch between modalities (text/image) can be difficult. If you want to talk about learning theory, you're dealing with different types of information. Research shows that transitions between modalities can be a very difficult task. I would suggest that this be moved to a P2. AG: Yes, flopping back and forth between modes. The lower challenge may be pure text or pure image. Images aren't bad, but complexity can be bad. IJ: A change in priority would be a substantial change to the document. This is not a mere clarification to the document. Therefore, this change (and others of this class), may cause us to cycle back. AG: I think that it's at the Director's discretion to be able to make the call. The Chair should get consensus and take "the call" to the Director. /* More discussion on the W3C process and how changes have an impact on moving to Recommendation */ DA: I don't think we should sacrifice accessibility for convenience. Resolved: Make 3.9 a P2 (due to impact on users with CD). /* IJ presents issue of DOM WG dealing with namespace issues and it being uncertain when they will move to Proposed Recommendation */ IJ: Options - Wait for them. - Drop to DOM 1. We lose namespace support and CSS module. HR: What about open-ended requirement for DOM? JG: We closed it off to make the spec tighter. HB: I don't believe that any MS product conforms to DOM 1. JG: MS people believe that they do. IJ: Philippe has told me that IE's DOM is a superset of of W3C DOM Level 1, but has not confirmed that yet. IJ: NN 6 claims full support for the DOM. JG: How many people consider DOM 2 critical? GR: I am leaning towards that position. I think 5.4 (CSS module) is important. IJ: Please be prepared to wait at least 3 extra months. GR: We might have to wait anyway if we recycle. IJ: We should change our strategy if we recycle so that we can drop to DOM 1 by default if DOM 2 not a PR after a certain point. HR: I think DOM Level 2 important as well. The CSS module is useful (as is the events module). DA: I think that going for accessibility would require going to DOM 2. MR: I'm not informed enough about the DOM. AG: I still believe that we should put in an explicit implementation time out: say that you have to comply within 6 months (for example) of the document becoming a Rec. This is the ugly proposal. HB: I would hate for our spec to have to wait 6 months. I'd rather put in words about DOM 1, and do what AG says. IJ: I think we should move to DOM 1. You would get more accessibility sooner, then publish another UAAG later and not break conformance to UAAG 1.0. JG: So it sounds like: - The WG should resolve the other issues first and get a sense of the sum of changes. - If we choose to recycle, we could try for DOM2. - If we choose not to recycle, we might try DOM1 then produce a new REC with even more of DOM2 work later on. JG: Would anyone object to going ahead with DOM1 and creating a new UA Rec when DOM2 becomes a Recommendation? HR: I still prefer the solution of giving user agents a six-month lead time. IJ: I think that an open-ended dependency on a document that doesn't yet exist or at least might change in unexpected ways, is dangerous.
http://www.w3.org/WAI/UA/2000/04/wai-ua-telecon-20000419
CC-MAIN-2014-23
refinedweb
1,925
67.96
Hello everyone! I have here my last homework assignment for an introductory C++ programming class. My skill level in this class ended about 4 weeks ago, so I'm have a terrible time with this last assignment. I've searched the board and found a few threads related to this exact same program, but due to my limited knowledge in this stuff, I couldn't apply any of it to my code to find a fix. It's another "Game of Life" type program I'm sure many of you are familiar with and probably consider pretty simple...I wish I did! Anyway, here is the assignment instructions and my code to follow: And the code:And the code:Write a program to simulate life. The world is a rectangular grid of cells, each of which may contain an organism. Each cell has eight neighbors, like so: 1 2 3 4 * 5 (the numbers are for illustration purposes only) 6 7 8 The world is initially created by the user with some organisms at various cells. The user specifies the cells that are initially "alive". Then successive generations are obtained by two rules: 1) an organism in a cell survives to the next generation if exactly 2 or 3 of its neighbors are living, otherwise it dies 2) an organism is born into an unoccupied cell if exactly 3 of its neighbors are occupied. Display the generations under user command until the user grows weary of this world. Method: The world will be represented by a 2-dimensional array (which may not be a global variable). 20 by 20 is a good size. Each component of the array will represent a cell that has a living organism in it or is dead. A new generation is generated by examining each cell and creating its next state (living or dead) simultaneously with all the other cells, thus the next generation must be created into a copy of the world. To simplify the code, the "perimeter" of the world can always be dead cells. And here's my code. Only problem is, I can't figure out the algorithm for Option 2: 2) an organism is born into an unoccupied cell if exactly 3 of its neighbors are occupied. I hope it's not a total disaster. As is, this doesn't output anything and I don't know why. I found an example online that I followed which did work, somewhat, but for some reason mine doesn't at all.I hope it's not a total disaster. As is, this doesn't output anything and I don't know why. I found an example online that I followed which did work, somewhat, but for some reason mine doesn't at all.Code:#include <iostream> #include <iomanip> #include <cstdlib> #include <cctype> const int MAX_SIZE = 20; void printGen(int[][MAX_SIZE]); void createGen(int[][MAX_SIZE]); using namespace std; int main() { int lifeGen[MAX_SIZE][MAX_SIZE]; int row, col; int x = 0; int y = 0; char cont; //Initialize array by setting all blocks to 0 for(row = 0; row < MAX_SIZE; row++) for(col = 0; col < MAX_SIZE; col++) lifeGen[row][col] = 0; //Initial instructions for inputting coordinates. cout << "Enter the coordinates to plot cell locations. Input " << endl; cout << "results in the format of:" << endl << endl; cout << "10 15" << endl << endl; cout << "When finished, enter coordinates with at least one being negative." << endl; //Loop for entering coordinates until user enters in a negative one. do { cout << "Enter coordinates: "; cin >> x; cin >> y; if(x > MAX_SIZE || y > MAX_SIZE) { cout << "Coordinates must be from 0 - 20! Reenter coordinates: "; cin >> x >> y; } else lifeGen[x - 1][y - 1] = 1; } while(x > 0 && y > 0); printGen(lifeGen); //User-controlled loop for outputting new generations. do { cout << endl << endl; cout << "Press any key and 'Enter' to continue with next generation, " << endl; cout << "or press 'Q' and 'Enter' to quit: "; cin >> cont; if(toupper(cont) == 'q') { cout << "Thanks for trying my life simulator! Bye!" << endl; return 0; system("PAUSE"); } else { createGen(lifeGen); printGen(lifeGen); } } while(cont); return EXIT_SUCCESS; } //Function to output life generation on screen. void printGen(int lifeGen[][MAX_SIZE]) { int rows; int cols; do { for(rows = 0; rows < MAX_SIZE; rows++) for(cols = 0; cols < MAX_SIZE; cols++) { if(cols % 2 == 0) cout << endl; if(lifeGen[rows][cols] == 1) cout << "*"; else if(lifeGen[rows][cols] != 1) cout << " "; } } while(cols < MAX_SIZE); } //Function to create new generations of cells. void createGen(int lifeGen[][MAX_SIZE]) { int r; int c; int totalrc = 0; for(r = 0; r < MAX_SIZE; r++) for(c = 0; c < MAX_SIZE; c++) { if(lifeGen[r][c] == 1) totalrc = lifeGen[r - 1][c - 1] + lifeGen[r - 1][c] + lifeGen[r - 1][c + 1] + lifeGen[r][c - 1] + lifeGen[r][c + 1] + lifeGen[r + 1][c - 1] + lifeGen[r][c - 1] + lifeGen[r + 1][c] + lifeGen[r + 1][c + 1]; if(totalrc <= 1 || totalrc >= 4) lifeGen[r][c] = 0; else if(totalrc == 3) lifeGen[r][c] = 1; } } Any help would be greatly appreciated!
https://cboard.cprogramming.com/cplusplus-programming/61985-assistance-some-homework-please.html
CC-MAIN-2017-43
refinedweb
826
58.52
System.Windows.Forms tutorial From Nemerle Homepage Graphical User Interface (GUI) with Windows.Forms Windows.Forms is the standard GUI toolkit for the .NET Framework. It is also supported by Mono. Like every other thing in the world it has its strong and weak points. Another GUI supported by .NET is Gtk#. Which outguns which is up to you to judge. But before you can do that, get a feel for Windows.Forms by going throught this tutorial. Note 1: To compile programs using Windows.Forms you will need to include -r:System.Windows.Forms in the compile command. E.g. if you save the examples in the file MyFirstForm.n, compile them with: ncc MyFirstForm.n -r:System.Windows.Forms -o MyFirstForm.exe Note 2: I am not going to go too much into details of every single thing you can do with Windows.Forms. These can be pretty easily checked in the Class reference. I will just try to show you how to put all the bricks together. The first step using System.Drawing; using System.Windows.Forms; class MyFirstForm : Form { public static Main() : void { Application.Run(MyFirstForm()); } } This is not too complex, is it? And it draws a pretty window on the screen, so it is not utterly useless, either :) Let us now try to customize the window a little. All the customization should be placed in the class constructor. using System.Drawing; using System.Windows.Forms; class MyFirstForm : Form { public this() { Text="My First Form"; //title bar ClientSize=Size(300,300); //size (without the title bar) in pixels StartPosition=FormStartPosition.CenterScreen; //I'm not telling FormBorderStyle=FormBorderStyle.FixedSingle; //not resizable } public static Main() : void { Application.Run(MyFirstForm()); } } Buttons, labels and the rest Wonderful. Now the time has come to actually display something in our window. Let us add a slightly customized button. All you need to do is to add the following code to the constructor of your Form class (i.e. somewhere between public this() { and the first } after it). def button=Button(); button.Text="I am a button"; button.Location=Point(50,50); button.BackColor=Color.Khaki; Controls.Add(button); def label=Label(); label.Text="I am a label"; label.Location=Point(200,200); label.BorderStyle=BorderStyle.Fixed3D; label.Cursor=Cursors.Hand; Controls.Add(label); I hope it should pretty doable to guess what is which line responsible for. Perhaps you could only pay a little more attention to the last one. Buttons, labels, textboxes, panels and the like are all controls, and simply defining them is not enough, unless you want them to be invisible. Also, the button we have added does actually nothing. But fear not, we shall discuss a bit of events very soon. A simple menu Menus are not hard. Just take a look at the following example: def mainMenu=MainMenu(); def mFile=mainMenu.MenuItems.Add("File"); def mFileDont=mFile.MenuItems.Add("Don't quit"); def mFileQuit=mFile.MenuItems.Add("Quit"); def mHelp=mainMenu.MenuItems.Add("Help"); def mHelpHelp=mHelp.MenuItems.Add("Help"); Menu=mainMenu; It will create a menu with two drop-down submenus: File (with options Don't Quit and Quit) and Help with just one option, Help. It is also pretty easy to add context menus to controls. Context menus are the ones which become visible when a control is right-clicked. First, you need to define the whole menu, and than add a reference to it to the button definition: def buttonCM=ContextMenu(); def bCMWhatAmI=buttonCM.MenuItems.Add("What am I?"); def bCMWhyAmIHere=buttonCM.MenuItems.Add("Why am I here?"); ... def button=Button(); ... button.ContextMenu=buttonCM; Basic events Naturally, however nice it all might be, without being able to actually serve some purpose, it is a bit pointless. Therefore, we will need to learn handling events. There are in fact two ways to do it. Let us take a look at the easier one for the begnning. The first thing you will need to do is to add the System namespace: using System; You can skip this step but you will have to write System.EventArgs and so on instead of EventArgs. Then, you will need to add a reference to the event handler to your controls. It is better to do it first as it is rather easy to forget about it after having written a long and complex handler. button.Click+=button_Click; ... mFileDont.Click+=mFileDont_Click; mFileQuit.Click+=mFileQuit_Click; ... mHelpHelp.Click+=mHelpHelp_Click; Mind the += operator instead of = used when customizing the controls. Finally, you will need to write the handlers. Do not forget to define the arguments they need, i.e. an object and an EventArgs. Possibly, you will actually not use them but you have to define them anyway. You can prefix their names with a _ to avoid warnings at compile time. private button_Click(_sender:object, _ea:EventArgs):void{ Console.WriteLine("I was clicked. - Your Button"); } private mFileQuit_Click(_sender:object, _ea:EventArgs):void{ Application.Exit(); } This is the way you will generally want to do it. But in this very case the handlers are very short, and like all handlers, are not very often used for anything else than handling events. If so, we could rewrite them as lambda expressions which will save a good bit of space and clarity. button.Click+=fun(_sender:object, _ea:EventArgs){ Console.WriteLine("I wrote it. - button_Click as a lambda"); }; mFileQuit.Click+=fun(_sender:object, _ea:EventArgs){ Application.Exit(); }; Cleaning up After you are done with the whole application, you should clean everything up. In theory, the system would do it automatically anyway but it certainly is not a bad idea to help the system a bit. Especially that it is not a hard thing to do at all, and only consists of two steps. The first step is to add a reference to System.ComponentModel and define a global variable of type Container: using System.ComponentModel; ... class MyFirstForm : Form { components : Container = null; ... In the second and last step, you will need to override the Dispose method. It could even fit in one line if you really wanted it to. protected override Dispose(disposing : bool) : void { when (disposing) when (components!=null) components.Dispose(); base.Dispose(d); } The second step You should now have a basic idea as to what a form looks like. The time has come to do some graphics. Painting There is a special event used for painting. It is called Paint and you will need to create a handler for it. using System; using System.Drawing; using System.Windows.Forms; class MySecondForm : Form { public this() { Text="My Second Form"; ClientSize=Size(300,300); Paint+=PaintEventHandler(painter); } private painter(_sender:object, pea:PaintEventArgs):void{ def graphics=pea.Graphics; def penBlack=Pen(Color.Black); graphics.DrawLine(penBlack, 0, 0 , 150, 150); graphics.DrawLine(penBlack, 150, 150, 300, 0); } public static Main():void{ Application.Run(MySecondForm()); } } A reference to the painter method should be added in the form class constructor (it affects the whole window), but this time it is a PaintEventHandler. Same as PaintEventArgs in the handler itself. In the handler, you should first define a variable of type PaintEventArgs.Graphics. This is the whole window. This is where you will be drawing lines, circles, strings, images and the like (but not pixels; we will discuss bitmaps later on). To draw all those things you will need pens and brushes. In our example we used the same pen twice. But sometimes you will only need it once. In such case, there is no point in defining a separate variable for it. Take a look at the next example. Overriding event handlers Another way to define a handler is to override the default one provided by the framework. It is not at all complicated, it is not dangerous and in many cases it will prove much more useful. Take a look at the example overriding the OnPaint event handler: protected override OnPaint(pea:PaintEventArgs):void{ def g=pea.Graphics; g.DrawLine(Pen(Color.Black), 50, 50, 150, 150); } Naturally, when you override the handler, there is no need any longer to add a reference to it in the class constructor. By the same token can you override other events. One of the possible uses is to define quite complex clickable areas pretty easily: protected override OnPaint(pea:PaintEventArgs):void{ def g=pea.Graphics; for (mutable x=0; x<300; x+=10) g.DrawLine(Pen(Color.Blue), x, 0, x, 300); } protected override OnMouseDown(mea:MouseEventArgs):void{ when (mea.X%10<4 && mea.Button==MouseButtons.Left) if (mea.X%10==0) Console.WriteLine("You left-clicked a line.") else Console.WriteLine($"You almost left-clicked a line ($(mea.X%10) pixels)."); } A simple animation with double-buffering What is an animation without double-buffering? Usually, a flickering animation. We do not want that. Luckily enough, double-buffering is as easy as abc in Windows.Forms. In fact, all you need to do is to set three style flags and override the OnPaint event handler instead of writing your own handler and assigning it to the event. class Animation:Form{ mutable mouseLocation:Point; //the location of the cursor public this(){ Text="A simple animation"; ClientSize=Size(300,300); SetStyle(ControlStyles.UserPaint | ControlStyles.AllPaintingInWmPaint | ControlStyles.DoubleBuffer, true); //double-buffering on } protected override OnPaint(pea:PaintEventArgs):void{ def g=pea.Graphics; g.FillRectangle(SolidBrush(Color.DimGray),0,0,300,300); //clearing the window def penRed=Pen(Color.Red); g.FillEllipse(SolidBrush(Color.LightGray), mouseLocation.X-15, mouseLocation.Y-15, 29, 29); g.DrawEllipse(penRed, mouseLocation.X-15, mouseLocation.Y-15, 30, 30); g.DrawLine(penRed, mouseLocation.X, mouseLocation.Y-20, mouseLocation.X, mouseLocation.Y+20); g.DrawLine(penRed, mouseLocation.X-20, mouseLocation.Y, mouseLocation.X+20, mouseLocation.Y); } protected override OnMouseMove(mea:MouseEventArgs):void{ mouseLocation=Point(mea.X,mea.Y); Invalidate(); //redraw the screen every time the mouse moves } public static Main():void{ Application.Run(Animation()); } } As you can see, the basic structure of a window with animation is pretty easy. Main starts the whole thing as customized in the constructor. The OnPaint event handler is called immediately. And then the whole thing freezes waiting for you to move the mouse. When you do, the OnMouseMove event handler is called. It checks the new position of the mouse, stores it in mouseLocation and tells the window to redraw (Invalidate();). Try turning double-buffering off (i.e. comment the SetStyle... line) and moving the mouse slowly. You will see the viewfinder flicker. Now turn double-buffering back on and try again. See the difference? Well, in this example you have to move the mouse slowly to actually see any difference because the viewfinder we are drawing is a pretty easy thing to draw. But you can be sure that whenever anything more complex comes into play, the difference is very well visible without any extra effort. Bitmaps and images I promised I would tell you how to draw a single pixel. Well, here we are. protected override OnPaint(pea:PaintEventArgs):void{ def g=pea.Graphics; def bmp=Bitmap(256,1); //one pixel height is enough; we can draw it many times for (mutable i=0; i<256; ++i) bmp.SetPixel(i, 0, Color.FromArgb(i,0,i,0)); for (mutable y=10; y<20; ++y) g.DrawImage(bmp, 0, y); } So, what do we do here? We define a bitmap of size 256,1. We draw onto it 256 pixels in colours defined by four numbers: alpha (transparency), red, green, and blue. But as we are only manipulating alpha and green, we will get a nicely shaded green line. Finally, we draw the bitmap onto the window ten times, one below another. Mind the last step, it is very important. The bitmap we had defined is an offset one. Drawing onto it does not affect the screen in any way. Now, that you know the DrawImage method nothing can stop you from filling your window with all the graphic files you might be having on your disk, and modifying them according to your needs. class ImageWithAFilter:Form{ img:Image=Image.FromFile("/home/johnny/pics/doggy.png"); //load an image bmp:Bitmap=Bitmap(img); //create a bitmap from the image public this(){ ClientSize=Size(bmp.Width*2, bmp.Height); } protected override OnPaint(pea:PaintEventArgs):void{ def g=pea.Graphics; for (mutable x=0; x<bmp.Width; ++x) for (mutable y=0; y<bmp.Height; ++y) when (bmp.GetPixel(x,y).R>0){ def g=bmp.GetPixel(x,y).G; def b=bmp.GetPixel(x,y).B; bmp.SetPixel(x,y,Color.FromArgb(255,0,g,b)); } g.DrawImage(img,0,0); //draw the original image on the left g.DrawImage(bmp,img.Width,0); //draw the modified image on the right } ... } This example draws two pictures: the original one on the left and the modified one on the right. The modification is a very simple filter entirely removing the red colour. Adding icons to the menu Adding icons is unfortunately a little bit more complicated than other things in Windows.Forms. You will need to provide handlers for two events: MeasureItem and DrawItem, reference them in the right place, and set the menu item properly. mFileQuit.Click+=EventHandler( fun(_) {Application.Exit();} ); mFileQuit.OwnerDraw=true; mFileQuit.MeasureItem+=MeasureItemEventHandler(measureItem); mFileQuit.DrawItem+=DrawItemEventHandler(drawItem); private measureItem(sender:object, miea:MeasureItemEventArgs):void{ def menuItem=sender:>MenuItem; def font=Font("FreeMono",8); //the name and the size of the font miea.ItemHeight=(miea.Graphics.MeasureString(menuItem.Text,font).Height+5):>int; miea.ItemWidth=(miea.Graphics.MeasureString(menuItem.Text,font).Width+30):>int; } private drawItem(sender:object, diea:DrawItemEventArgs):void{ def menuItem=sender:>MenuItem; def g=diea.Graphics; diea.DrawBackground(); g.DrawImage(anImageDefinedEarlier, diea.Bounds.Left+3, diea.Bounds.Top+3, 20, 15); g.DrawString(menuItem.Text, diea.Font, SolidBrush(diea.ForeColor), diea.Bounds.Left+25, diea.Bounds.Top+3); }
http://nemerle.org/System.Windows.Forms_tutorial
crawl-002
refinedweb
2,303
52.36
This is your resource to discuss support topics with your peers, and learn from each other. 02-22-2013 06:16 PM How can I compare the color of a label in QML to another Color. My code below does not work as I'd expect it to. Despite the "scoreLabel" having color "Color.DarkGray" the if statements returns true. I also tried comparing it to the Color.create() color, but that does not work as expected either. if(scoreLabel.textStyle.color != Color.DarkGray/*Color.create("#f89e52")*/){ scoreLabel.textStyle.color = Color.DarkGray; //Otherwise make it orange. } else { scoreLabel.textStyle.color = Color.create("#f89e52") } Solved! Go to Solution. 02-22-2013 06:43 PM They do seem to be of the same type, in any case. When I log them I get this: console.log(scoreLabel.textStyle.color); console.log(Color.DarkGray); outputs: QVariant(bb::cascades::Color) QVariant(bb::cascades::Color) 02-22-2013 06:50 PM - edited 02-22-2013 06:59 PM Could you please try: if (scoreLabel.textStyle.color.toString() != Color.DarkGray.toString()) But most likely this won't work. I've checked Color class source code and there are operator== and operator!= defined. I don't think QML can use them though. It seems all instances are wrapped in QVariants which can't be directly compared. And Color class doesn't seem to support a conversion to string, so comparing them as strings also won't work. A possible solution is creating a C++ function and exporting it to QML: Q_INVOKABLE bool colorsEqual(Color *color1, Color *color2) { return *color1 == *color2; } UPD: Color is declared as meta-type in Cascades headers but pointer to it is not: resources/color.h:Q_DECLARE_METATYPE(bb::cascades: If the above function won't work, try the following: Q_INVOKABLE bool colorsEqual(const Color &color1, const Color &color2) { return color1 == color2; } ...or... Q_INVOKABLE bool colorsEqual(Color color1, Color color2) ... Try experimenting with different forms, I'm not sure which one will work. 02-22-2013 07:03 PM if(scoreLabel.textStyle.color.toString() == Color.DarkGray.toString()) this seems to always return true, strangely. I will try the C++ solution and see how that works. I would imagine it will work, seems like a good solution. 02-22-2013 07:11 PM 02-22-2013 07:21 PM - edited 02-22-2013 07:21 PM I can't get any of the equals methods working. Error: Unknown method parameter type: Color for the second two and Error: Unknown method parameter type: Color* for the first one. I'll have to look into propertys, I guess. There's no way to change a property at runtime, is there? I'm not sure I can do all my logic with just a single state. For instance, if the label starts black and has state1 when an action is performed I need to switch it to color orange. Orange would be state2. Now If I need to reset that back to black I still have the state assigned to it indicating that the color is black, since I couldn't change the state to state2. Any ideas? I'm starting to think it might be quicker to rewrite the entire list in C++ -__- QML is nothing but trouble. 02-22-2013 08:20 PM - edited 02-22-2013 08:57 PM I'll try to experiment with C++ function exporting too. The following code seems to work: // Navigation pane project template import bb.cascades 1.0 Page { Container { Label { id: label textStyle { base: style1.style } text: "Label text" } Button { id: button text: "Click me" property variant activeStyle: style1 onClicked: { if (activeStyle == style1) { label.textStyle.base = style2.style activeStyle = style2 } else { label.textStyle.base = style1.style activeStyle = style1 } } } attachedObjects: [ TextStyleDefinition { id: style1 color: Color.create("#ff0000") }, TextStyleDefinition { id: style2 color: Color.create("#00ff00") } ] } } Directly swapping the colors should also work. Properties can be reassigned at runtime. Ok, the C++ method also works. But I think the one with states is better. QML: // Navigation pane project template import bb.cascades 1.0 Page { Container { Label { id: label textStyle { color: Color.create("#ff0000") } text: "Label text" } Button { id: button text: "Click me" onClicked: { if (app.colorsEqual(label.textStyle.color, Color.create("#ff0000"))) { label.textStyle.color = Color.create("#00ff00") } else { label.textStyle.color = Color.create("#ff0000") } } } } } Test.hpp: #include <bb/cascades/Color> class Test : public QObject { Q_OBJECT public: Test(bb::cascades::Application *app); virtual ~Test() {} Q_INVOKABLE bool colorsEqual(bb::cascades::Color color1, bb::cascades::Color color2); }; Test.cpp: Test::Test(bb::cascades::Application *app) : QObject(app) { ... qml->setContextProperty("app", this); // <--------- ADDED AbstractPane *root = qml->createRootObject<AbstractPane>(); app->setScene(root); ... bool Test::colorsEqual(Color color1, Color color2) { return color1 == color2; } 02-23-2013 07:53 PM I was able to solve my problem with a variant. The reason I thought they couldn't be assigned at run time was because when I tried to conditionally set the variant with an if statement, I got a syntax error. Of course, I might just not know the correct syntax to do it, but this won't work: property variant activeStyle: if(ListItemData.num == 0) {style1} else {style2} However, by assigning the variant in the onCreationCompleted method of the label, I was able to skirt around that problem. Thanks for your help!
https://supportforums.blackberry.com/t5/Native-Development/Compare-a-labels-color-in-QML/m-p/2186167
CC-MAIN-2016-44
refinedweb
877
50.23
how to stop the subclass from overriding a method. Discussion in 'Ruby' started by Venkat Akkineni,11 - jstorta - Feb 20, 2006 Overriding a method at the instance level on a subclass of a builtintypeZac Burns, Dec 3, 2008, in forum: Python - Replies: - 16 - Views: - 549 - Arnaud Delobelle - Dec 5, 2008 String subclass method returns subclass - bug or feature?S.Volkov, Mar 11, 2006, in forum: Ruby - Replies: - 2 - Views: - 259 - S.Volkov - Mar 12, 2006 subclass a class in the namespace of the that subclassTrans, Oct 22, 2008, in forum: Ruby - Replies: - 8 - Views: - 368 - Robert Klemme - Oct 23, 2008 Subclass of subclassFab, Aug 9, 2012, in forum: C++ - Replies: - 0 - Views: - 427 - Fab - Aug 9, 2012
http://www.thecodingforums.com/threads/how-to-stop-the-subclass-from-overriding-a-method.858832/
CC-MAIN-2015-11
refinedweb
117
68.13
Our world is becoming globalized, distributed/clouded more and more. That is very true especially for IT in both consumer and enterprise space. Some examples would be Amazon AWS, Google Documents, Windows Azure Services, Office 365, Windows 8 UI Applications. Within this world, to develop scalable, fluent, and less-dependent/resilient applications, asynchronous programming approach can be used. In this post, I will give you the big picture of async world (what is it, why to use, and how and when/where to use it) within .NET platform. In the next post, I will focus on MS-recommended approach (task-based asynchronous programming TAP). In short, asynchronous programming is to enabling delegation of application process to other threads, systems, and or devices. Synchronous programs runs in a sequential manner whereas asynchronous applications can start a new operation and without waiting the new ones’ completion it can continue working on its own flow (main operation). To simplify, let's visualize a case where a person send an email and can do nothing till a response received from a sender. Here is the tasks beyond sending email are blocked by the response that you have no control over and may take for a while. What would be the asynchronous way is to send the email and continue working on other tasks while waiting the response from sender. In order to capture the full context of asynchronous programming, we need to understand roles and meanings of OS, CLR, application domain, thread, process. Any point where your application initiates an operation that is long-running and/or Input Output-bound. For example, you may leverage asynchronous programming in the following scenarios Let me explain it with an example: A simple application that process data from 2 Xml documents into a list. Here is the pseudo code: Now, I am interested in only line 1-3 to differentiate use of sync and Async calls. Here is code snippet showing the ProcessXml and Book objects and the method that does the parsing: using System; using System.Collections.Generic; using System.Linq; using System.Xml.Linq; using System.Threading.Tasks; namespace AsyncDemo { public class ProcessXml { /// <summary> /// Parses 'book' named elements from a file into a list of Book objects /// </summary> /// <param name="path">file path</param> /// <returns>List of Books obj</returns> public static List<Book> ParseToList(string path) { //System.Threading.Thread.Sleep(1000); if (System.IO.File.Exists(path)) { XDocument doc = XDocument.Load(path); if (doc != null) { var coll = doc.Root.Elements("book").Select(p => new Book { Author = p.Elements("author").First().Value, Title = p.Elements("title").First().Value, Genre = p.Elements("genre").First().Value, Price = double.Parse(p.Elements("price").First().Value), PublishDate = DateTime.Parse(p.Elements("publish_date").First().Value), Description = p.Elements("description").First().Value }); return coll.ToList<Book>(); } } return null; } public static Task<List<Book>> XmlProcessEngineAsync(string path) //System.Threading.Thread.Sleep(1000); List<Book> list; list = doc.Root.Elements("book").Select(p => }).ToList<Book>(); return null; // (Task<List<Book>>)list; } public class Book public string Author { get; set; } public string Title { get; set; } public string Genre { get; set; } public double Price { get; set; } public DateTime PublishDate { get; set; } public string Description { get; set; } } Here are 2 unit test methods that demonstrate the scenario both sync and Async manners: /// <summary> /// Tests the method synchronously /// </summary> [TestMethod] public void ParseToListTestSync() books1 = ProcessXml.ParseToList(file1); books2 = ProcessXml.ParseToList(file2); var list = MergeLists(books1, books2); Assert.IsNotNull(list); /// Tests the method asynchronously with APM approach w/o callback public void ParseToListTestAsyncWithAPM() books1 = books2 = null; DelProcessing del = new DelProcessing(ProcessXml.ParseToList); IAsyncResult result = del.BeginInvoke(file2, null, null); books2 = ProcessXml.ParseToList(file1); //if (!result.IsCompleted) //{ //this runs in main thread; do some other stuff while async method call in-progress //Thread.Sleep(1000); //} books1 = del.EndInvoke(result); Here is the screenshot from the run results As you can see, the 1st test method calls the function in a synchronous way: 2nd test method calls the function in a asynchronous way: Here is the picture demonstrates this: Since the version 1.1, .NET support asynchronous programming, since then each release brought new enhancements (delegates, TPL, etc.). Here is the picture taken when searching Async methods available within With FW 4.5. With FW 4.5, there are 3 patterns available for Async development: APM style can be implemented in 2 ways; with or without a callback. Sample call above (ParseToListTestAsyncWithAPM) is an example for non-callback APM. We can implement same functionality with a callback as seen below: 1: /// <summary> 2: /// Tests the method asynchronously with APM callback 3: /// </summary> 4: [TestMethod] 5: public void ParseToListTestAsyncWithAPM_Callback() 6: { 7: books1 = books2 = null; 8: int t1 = System.Threading.Thread.CurrentThread.ManagedThreadId; 9: int t2 = 0; 10: 11: DelProcessing del = new DelProcessing(ProcessXml.ParseToList); 12: IAsyncResult result = del.BeginInvoke(file2, (r) => 13: { 14: books1 = del.EndInvoke(r); 15: t2 = System.Threading.Thread.CurrentThread.ManagedThreadId; 16: }, null); 17: 18: books2 = ProcessXml.ParseToList(file1); 19: 20: var list = MergeLists(books1, books2); 21: 22: Assert.IsNotNull(list); 23: Assert.IsTrue(t2 > 0); 24: } Let me explain this little bit more in detail: TAP is the simplest one and is recommended by MS. Here is the code for implementing the same scenario with TAP: 2: /// Tests the method asynchronously with TAP 5: public void ParseToListTestAsyncWithTAP() 7: int t1 = System.Threading.Thread.CurrentThread.ManagedThreadId; 8: int t2 = 0; 9: 10: books1 = books2 = null; 11: Task.Factory.StartNew(()=> { 12: books1 = ProcessXml.ParseToList(file2); 13: t2 = System.Threading.Thread.CurrentThread.ManagedThreadId; 14: }); 15: books2 = ProcessXml.ParseToList(file1); 16: 17: var list = MergeLists(books1, books2); 18: 19: Assert.IsNotNull(list); 20: Assert.IsTrue(t2 > 0); 21: } TAP is hot:), will explain this in detail in my next post hopefully. For now, I would like to share the results of my efforts so far with you: Obviously, perhaps another post would be good for comparing sync vs async or APM vs TAP by running load tests. We will see. This is a very live world/sector and there are many things to unleash, is not it? Wow, that has been my longest post:). Forgot how fast time passed here in Robert’s Coffee in Istanbul. Well, in this post, I have explained various aspects of asynchronous programming; meaning, differentiations, why and how-to-use. Asynchronous programming can be implemented in both client and server side and provides scalability and performance advantages over synchronous programming. I would certainly recommend you to invest some time on this, since it is now simpler (TAP) and use of it becomes almost a must-have due to more integration to cloud applications. This confused me. Hi Livio, can you be specific please? Thanks, Awesome! Love the code snippets! Awesome post thanks for sharing code snippets <a href="">Asynchronous Programming</a>
http://blogs.technet.com/b/meamcs/archive/2012/09/08/why-and-how-to-asynchronous-programming.aspx
CC-MAIN-2014-35
refinedweb
1,125
50.12
I am making some experiments with large database and different O/R-mappers. To make it easier for me to measure the time that code takes to run I wrote simple command class that uses Stopwatch class and measures how long it takes for action to run. I this posting I will show you my class and explain how to use it. Here is the code of my class. You can take it and put it to some temporary project to play with it. /// <summary> /// Class for executing code and measuring the time it /// takes to run. /// </summary> public class TimerDelegateCommand { private readonly Stopwatch _stopper = new Stopwatch(); /// <summary> /// Runs given actions and measures time. Inherited classes /// may override this method. /// </summary> /// <param name="action">Action to run.</param> public virtual void Run(Action action) { _stopper.Reset(); _stopper.Start(); try { action.Invoke(); } finally _stopper.Stop(); } /// Static version of action runner. Can be used for "one-line" /// measurings. /// <returns>Returns time that action took to run in /// milliseconds.</returns> public static long RunAction(Action action) var instance = new TimerDelegateCommand(); instance.Run(action); return instance.Time; /// Gets the action running time in milliseconds. public long Time get { return _stopper.ElapsedMilliseconds; } /// Gets the stopwatch instance used by this class. public Stopwatch Stopper get { return _stopper; } } And here are some examples how to use it. Notice that I don’t have to write methods for every code piece I want to measure. I can also use anonymous delegates if I want. And one note more – Time property returns time in milliseconds! static void Main() long time = TimerDelegateCommand.RunAction(MyMethod); Console.WriteLine("Time: " + time); time = TimerDelegateCommand.RunAction(delegate { // write your code here }); Console.WriteLine("\r\nPress any key to exit ..."); Console.ReadLine(); You can also extend this command and write your own logic that drives code running and measuring. In my next posting I will show you how to apply Command pattern to make some more powerful measuring of processes. Thank you for submitting this cool story - Trackback from DotNetShoutout Thank you for submitting this cool story - Trackback from progg.ru You should account for this: msdn.microsoft.com/.../System.Diagnostics.Stopwatch.aspx "Note:." In my last post Find out how long your method runs I introduced how to measure the speed of code using
http://weblogs.asp.net/gunnarpeipman/archive/2010/09/12/find-out-how-long-your-method-runs.aspx?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+gunnarpeipman+%28Gunnar+Peipman%27s+ASP.NET+blog%29
crawl-003
refinedweb
379
60.92
. Howpackage to be available in your $PATH: How to install geckodriver Flash is abandoned and modern browsers no longer include it. On Ubuntu browser-plugin-freshplayer-pepperflashneedsmechanism.process will keep waiting for messages from the Playerworkers with new states of the game board. Then it will ask the Agentto make a move, and will respond back to the correct Playerworker about which action it should take. In this case the Agent has one queue to receive messages from all Playerworkers, but every Playerwill have their own queues to receive messages from the Agent. Since processes are isolated, the Agentprocess will be able to have its own Tensorflow model instance and related resources. 2. Player worker The player worker is responsible for running the game environment, and just needs to ask the Agentwhichis simplified to show the main ideas. But in English all this does is gets the Selenium instance to get the game board state, asks the Agentto: I’m using this approach to run 24 game instances for my Q-learning process for Bubble Shooter. If you’d like to see it in action, you can find it on GitHub. How to solve a puzzle game with Python and Z3 Theorem prover This post shows how Microsoft’s Z3 Theorem prover can be used to solve the Puzlogic puzzle game. We will replace the previous brute-force approach used in my puzlogic-bot with one that uses Z3. Over last few posts we walked through building up puzlogic-bot: building the basic solver, the vision component using OpenCV, hooking up mouse controls with Pynput. As I’m learning to use Z3, the Z3 puzzle solver implementation would be an interesting practical exercise. TL;DR: you can find the final solver code on Github. Z3 is a theorem prover made by Microsoft. The SMT (Satisfiability modulo theories) family of solvers build up on SAT (Boolean satisfiability problem) family of solvers, providing capabilities to deal with numbers, lists, bit vectors and so on, instead of just boolean formulas. Puzlogic is a number-based puzzle game. The goal of the game is to insert all given numbers into the board such that all rows and columns have unique numbers, and where a target sum is given - the row or column sums up to a given value. Basics of Z3 “SAT/SMT by Example” by Dennis Yurichev is a great book to help beginners pick up Z3, and it was the inspiration for this post. At school we all worked through equation systems like: 3x + 2y - z = 1 2x - 2y + 4z = -2 -x + 0.5y - z = 0 Z3 can be thought of as a constraint solver. Given a set of constraints, it will try to find a set of variable values that satisfy the constraints. If it can find a valid set of values - it means the constraints (and thus the theorem) is satisfiable. If not - it is not satisfiable. The most basic example given in Yurichev’s book is: from z3 import * x = Real('x') y = Real('y') z = Real('z') s = Solver() s.add(3*x + 2*y - z == 1) s.add(2*x + 2*y + 4*z == -2) s.add(-x + 0.5y - z == 0) print(s.check()) print(s.model()) Executing it outputs: sat [z = -2, y = -2, x = 1] Here each equation can be considered as a constraint: a sum of variable values have to sum up to a certain value, satisfying the equation. Z3 is way more powerful than that, as evident by Z3py reference, but for the Puzlogic puzzle this is actually sufficient. What we need next is how to describe the given puzzle as a set of constraints for use in Z3. Solving Puzlogic puzzles To quickly recap, the rules of the game are as follows: - All numbers on a given row or column must be unique - When a sum target is given for a given row or column, all cells in it must sum up to target sum - All purple pieces at the bottom of the game board must be used to fill the game board As established in the previous post, the game board is defined as a sparse matrix: a list of cells with x, ycoordinates and a value. E.g.: board = [ # (row, column, value) (0, 0, 0), (0, 3, 6), (1, 0, 2), (1, 1, 0), (1, 3, 0), (2, 0, 0), (2, 2, 5), (2, 3, 0), (3, 0, 0), (3, 3, 1) ] A value of zero is treated as an empty cell. Purple game pieces are expressed just as a list of numbers: [1, 2, 3, 4, 5, 6]. While target sums can be expressed as (dimension, index, target_sum)tuples: target_sums = [ # (dimension, index, target_sum (0, 1, 12), # second row sums up to 12 (0, 2, 10), # third row sums up to 10 (1, 0, 11) # first column sums up to 11 ] Dimension of 0marks a row, while 1marks a column. For the implementation there are 5 constraints to cover: - Limit values of pre-filled cells to the given values. Otherwise Z3 would assign them to some other values. - Allow empty cells to get values only of available pieces. - Require that there would be no duplicate values in any of the rows or columns. - If there are any sum targets specified for given rows or columns, require them to be satisfied. - Require that each purple game piece would be used only once. 0. Z3 variables First, the boardis extended by creating a Z3 variable for each cell to make it easier to apply constraints later on: def cell_name(self, row, column): return 'c_%d_%d' % (row, column) def solve(self): solver = Solver() # Create z3 variables for each cell extended_board = [(row, column, value, Int(self.cell_name(row, column))) for (row, column, value) in board] Both default values and the purple game pieces are integers, hence the Intvariable type. 1. Pre-filled variables So far Z3 would be able to assign any integer to any of the variables we created so far. We will constrain the cells to use the already specified value where it is specified. As mentioned before, in the boardsparse matrix the value of 0indicates an empty cell, so we initialize values for cells with values greater than 0: def is_cell_empty(self, value): return value == 0 def set_prefilled_cell_values(self, board): return [cell == value for (_, _, value, cell) in board if not self.is_cell_empty(value)] The cell == valueis going to generate an expression object of type z3.z3.BoolRef, as cellhere is a variable we created above with type Int, and valueis a constant. E.g. for the cell (0, 3, 6)the expression would be Int('c_0_3') == 6). 2. Empty cells can gain a value from any of the pieces Having available pieces [1, 2, 3, 4, 5, 6], we want the following constraint for each of the empty cells: cell == 1 OR cell == 2 OR cell == 3 OR cell == 4 OR cell == 5 OR cell == 6 def set_possible_target_cell_values(self, board, pieces): constraints = [] for (row, column, value, cell) in board: if self.is_cell_empty(value): any_of_the_piece_values = [cell == piece for piece in pieces] constraints.append(Or(*any_of_the_piece_values)) return constraints This returns a list of constraints in the format: constraints = [), # ... ] You may notice that this allows cells on the same row or column to have a common value. This will be fixed by constraint 3. This also allows the same piece to be used in multiple cells, we’ll fix it with constraint 4. 3. No duplicates in rows or columns def require_unique_row_and_column_cells(self, board): return [ c1 != c2 for ((x1, y1, _, c1), (x2, y2, _, c2)) in itertools.combinations(board, 2) if x1 == x2 or y1 == y2] itertools.combinations(), given a list of [A, B, C, ...]will output combinations [AB, AC, BC, ...]without repetitions. E.g. c_0_0 != c_0_3, c_0_0 != c_1_0, c_0_0 != c_2_0and so on. We use it to make sure that no cells with a common row or column index have the same value. Instead of manually iterating over each cell combination, we could’ve used Distinct(args) expression for each row and column. 4. Satisfying sum targets Any column or row may have a sum target that its pieces should sum up to. def match_sum_requirements(self, board, sum_requirements): constraints = [] for (dimension, index, target_sum) in sum_requirements: relevant_cells = [cell[3] for cell in board if cell[dimension] == index] constraints.append(sum(relevant_cells) == target_sum) return constraints It’s a fairly straightforward sum of relevant cells. E.g. for the (0, 1, 12)sum target, the constraint expression would be Sum(c_1_0, c_1_1, c_1_3) == 12 5. All pieces are used once Here that can be ensured by summing up pieces used in empty cells, and it should match the sum of pieces given originally. Just checking for unique values is insufficient, as on some levels there may be multiple pieces with the same value, e.g. [4, 5, 6, 4, 5, 6]. def target_cells_use_all_available_pieces(self, board, pieces): empty_cells = [cell for (_, _, value, cell) in board if self.is_cell_empty(value)] return [sum(empty_cells) == sum(pieces)] Putting it all together Now that we have methods that generate constraint expressions, we can add them all to the solver and format the output to make it look more readable: constraints = \ self.set_prefilled_cell_values(extended_board) + \ self.set_possible_target_cell_values(extended_board, pieces) + \ self.require_unique_row_and_column_cells(extended_board) + \ self.match_sum_requirements(extended_board, sum_requirements) + \ self.target_cells_use_all_available_pieces(extended_board, pieces) for constraint in constraints: solver.add(constraint) if solver.check() == sat: model = solver.model() return [ (row, column, model[cell].as_long()) for (row, column, value, cell) in extended_board if self.is_cell_empty(value) ] else: return False It’s worth noting that solver.check()may return either z3.sator z3.unsat. If it is satisfiable, solver.model()will contain a list of values: # solver.model() [c_1_1 = 6, c_3_0 = 5, c_2_3 = 2, c_1_3 = 4, c_0_0 = 1, c_2_0 = 3, c_3_3 = 1, c_2_2 = 5, c_1_0 = 2, c_0_3 = 6] The values, however, are still Z3 objects, so e.g. an Int()object value may need to be converted into a primitive intvia as_long()method. Once a model is solved, we also nicely format the return values in the form of: solution = [ # (row, column, expected_piece_value) (0, 0, 1), (1, 1, 6), (1, 3, 4), (2, 0, 3), (2, 3, 2), (3, 0, 5) ] The steps, if done by the human, will solve the puzzle. The final set of constraints for level 6 was: [ # 1. Pre-filled cell values c_0_3 == 6, c_1_0 == 2, c_2_2 == 5, c_3_3 == 1, # 2. Possible values for empty cells), Or(c_1_3 == 1, c_1_3 == 2, c_1_3 == 3, c_1_3 == 4, c_1_3 == 5, c_1_3 == 6), Or(c_2_0 == 1, c_2_0 == 2, c_2_0 == 3, c_2_0 == 4, c_2_0 == 5, c_2_0 == 6), Or(c_2_3 == 1, c_2_3 == 2, c_2_3 == 3, c_2_3 == 4, c_2_3 == 5, c_2_3 == 6), Or(c_3_0 == 1, c_3_0 == 2, c_3_0 == 3, c_3_0 == 4, c_3_0 == 5, c_3_0 == 6), # 3. Unique values in rows and columns c_0_0 != c_0_3, c_0_0 != c_1_0, c_0_0 != c_2_0, c_0_0 != c_3_0, c_0_3 != c_1_3, c_0_3 != c_2_3, c_0_3 != c_3_3, c_1_0 != c_1_1, c_1_0 != c_1_3, c_1_0 != c_2_0, c_1_0 != c_3_0, c_1_1 != c_1_3, c_1_3 != c_2_3, c_1_3 != c_3_3, c_2_0 != c_2_2, c_2_0 != c_2_3, c_2_0 != c_3_0, c_2_2 != c_2_3, c_2_3 != c_3_3, c_3_0 != c_3_3, # 4. Row/column sum targets 0 + c_1_0 + c_1_1 + c_1_3 == 12, 0 + c_2_0 + c_2_2 + c_2_3 == 10, 0 + c_0_0 + c_1_0 + c_2_0 + c_3_0 == 11, # 5. Ensuring that all pieces have been used once 0 + c_0_0 + c_1_1 + c_1_3 + c_2_0 + c_2_3 + c_3_0 == 21 ] Final thoughts The puzlogic-bot project is able to use the Z3 based solver for first few levels, but that’s only because the Vision component is not able to read the row/column target sums correctly. As we have seen in the example, the Z3-based solver has no issues dealing with target sums in further levels. I found it rather surprising just how approachable Z3 was for a newbie. The puzzle problem perhaps was primitive, but the only difficulty was to not get lost in the constraint definitions the puzzle required. You can find the final code of the solver on Github. If you are curious how puzlogic-botwas built, I have a series of posts on it too. How do you use or would use constraint solvers? Debugging TLA+ specifications with state dumps TLA+ (Temporal Logic of Actions) is a formal specification language, used for specifying and checking systems and algorithms for corectness. As I’m further exploring it (see my last post about modelling subscription throttling), I’m ocasionally running into situations where I think there may be an issue with the current specification as they unexpectedly do not terminate, but without a clear reason. In programming languages it is common to fire up a debugger and step through the program execution, inspecting instruction flow and variable values. With TLC there is no clear way to do so, at least I was unable to find any. TLC will only provide an error trace for errors it does find, however if TLC execution does not terminate this is of little use. In TLC Command-line options I found the -dump <file>option, which dumps out every reached state into the specified file. There is no equivalent GUI option in “TLA+ Toolbox’s” TLC options, so it has to be specified as a TLC command line parameter in Model > Advanced Options > TLC Options > TLC command line parameters. The dump file itself consists of plaintext state values, so it is certainly human readable. For a tautology checking spec when run with the -dump states.txt: VARIABLES P, Q F1(A, B) == A => B F2(A, B) == ~A \/ B ValuesEquivalent == F1(P, Q) <=> F2(P, Q) Init == /\ P \in BOOLEAN /\ Q \in BOOLEAN Next == /\ P' \in BOOLEAN /\ Q' \in BOOLEAN Spec == Init /\ [][Next]_<<P, Q>> The dump file in <SPEC_PATH>/<SPEC_NAME>.toolbox/<MODEL_NAME>/states.txt.dumpcontains: State 1: /\ P = FALSE /\ Q = FALSE State 2: /\ P = FALSE /\ Q = TRUE State 3: /\ P = TRUE /\ Q = FALSE State 4: /\ P = TRUE /\ Q = TRUE It’s a low-tech approach, it still is difficult to follow execution in cases with many states, but it has helped me get a better grasp on which states TLC is exploring and clarify issue suspicions. One thing to note is that depending on the size of a state and the number of states, the filesize may quickly grow into gigabytes range, so in certain cases it may be impractical to dump all distinct states, requiring early termination of TLC. This is a very basic method, but if you have better ways of debugging specs themselves - please do let me know! Adding controls and bot logic - Part 3 of Solving Puzlogic puzzle game with Python and OpenCV In the the second part of the series we added the Vision component for the Puzlogic puzzle solving bot. In this post we will add a mouse controller as well as handle basic bot logic to get the bot actually solve some of the beginner puzzles. Planning For controls, as before in the Burrito Bison bot, here we only need the mouse. So I’ll reuse the component, which uses pynput` for mouse controls. For the bot logic, we’ll need to use the Solver, Visionand Controlscomponents, making use of all 3 to solve a given puzzle. In this case the bot will rely on the human to set up the game level, handle game UIs, while bot itself will be responsible only for finding the right solution and moving game pieces to the target cells. Implementation Mouse Controller As mentioned before, we’ll use Pynputfor mouse controls. We start with the basic outline: import time from pynput.mouse import Button, Controller as MouseController class Controller: def __init__(self): self.mouse = MouseController() def move_mouse(self, x, y): # ... def left_mouse_drag(self, from, to): # ... For move_mouse(x, y), it needs to move the mouse to the given coordinate on screen. Pynput would be able to move the mouse in a single frame just by changing mouse.positionattribute, but for me that caused problems in the past, as the game would simply not keep up and may handle such mouse movements unpredictably (e.g. not registering mouse movements at all, or moving it only partially). And in a way that makes sense. Human mouse movements normally take several hundred milliseconds, not under 1ms. To mimic such gestures I’ve added a way to smooth out the mouse movement over some time period, e.g. 100ms, by taking a step every so often (e.g. every 2.5msin 40steps). def move_mouse(self, x, y): def set_mouse_position(x, y): self.mouse.position = (int(x), int(y)) def smooth_move_mouse(from_x, from_y, to_x, to_y, speed=0.1): steps = 40 sleep_per_step = speed // steps x_delta = (to_x - from_x) / steps y_delta = (to_y - from_y) / steps for step in range(steps): new_x = x_delta * (step + 1) + from_x new_y = y_delta * (step + 1) + from_y set_mouse_position(new_x, new_y) time.sleep(sleep_per_step) return smooth_move_mouse( self.mouse.position[0], self.mouse.position[1], x, y ) The second necessary operation is mouse dragging, which means pushing the left mouse button down, moving the mouse to the right position and releasing the left mouse button. Again, all those steps can be programatically performed in under 1ms, but not all games can keep up, so we’ll spread out the gesture actions over roughly a second. def left_mouse_drag(self, start, end): delay = 0.2 self.move_mouse(*start) time.sleep(delay) self.mouse.press(Button.left) time.sleep(delay) self.move_mouse(*end) time.sleep(delay) self.mouse.release(Button.left) time.sleep(delay) And that should be sufficient for the bot to reasonably interact with the game. Bot logic The bot component will need to communicate between the other 3 component. For bot logic there are 2 necessary steps: Solvercomponent needs to get information from Visionabout available cells, their contents, available pieces in order for Solverto provide a solution. - Take the solution from Solver, and map its results into real-life actions, by moving the mouse via Controlsto the right positions. We start by defining the structure: class Bot: """ Needs to map vision coordinates to solver coordinates """ def __init__(self, vision, controls, solver): self.vision = vision self.controls = controls self.solver = solver Then we need to a way feed information to Solver. But we can’t just feed vision information straight into the solver (at least not in the current setup), as it both work with slightly differently structured data. So we first will need to map vision information into structures that Solvercan understand. def get_moves(self): return self.solver.solve(self.get_board(), self.get_pieces(), self.get_constraints()) self.vision.get_cells()returns cell information in a list of cells of format Cell(x, y, width, height, content), but Solverexpects a list of cells in format of (row, column, piece): def get_board(self): """ Prepares vision cells for solver """ cells = self.vision.get_cells() return list(map(lambda c: (c.x, c.y, c.content), cells)) self.get_pieces()expects a list of integers representing available pieces. However, vision information returns Cell(x, y, width, height, content). So we need to map that as well: def get_pieces(self): """ Prepares vision pieces for solver """ return list(map(lambda p: p.content, self.vision.get_pieces())) self.get_constraints()currently is not implemented in Visioncomponent, and instead returns an empty list []. We can just pass that along to the Solverunchanged for now, but will likely have to change it once constraints are implemented. def get_constraints(self): """ Prepares vision constraints for solver """ return self.vision.get_constraints() Now that we have the solution in the form [(target_row, target_column, target_piece_value), ...], we need to map that back into something that could work for the graphical representation. We already treat xand yof each cell as the “row” and “column”, which works because all cells are arranged in a grid anyway. Now we only need to find which available pieces to take from which cell, and move those to respective target cells. def do_moves(self): moves = self.get_moves() board = self.vision.get_game_board() available_pieces = self.vision.get_pieces() def get_available_piece(piece, pieces): target = list(filter(lambda p: p.content == piece, pieces))[0] remaining_pieces = list(filter(lambda p: p != target, pieces)) return (target, remaining_pieces) for (to_x, to_y, required_piece) in moves: (piece, available_pieces) = get_available_piece(required_piece, available_pieces) # Offset of the game screen within a window + offset of the cell + center of the cell move_from = (board.x + piece.x + piece.w/2, board.y + piece.y + piece.h/2) move_to = (board.x + to_x + piece.w/2, board.y + to_y + piece.h/2) print('Moving', move_from, move_to) self.controls.left_mouse_drag( move_from, move_to ) get_available_piecetakes the first one from the vision’s set of pieces, and uses that as a source for the solution. As for handling coordinates, one thing to not forget is that coordinates point to the top left corner of the corner, and are not always absolute in relation to the OS screen. E.g. a piece’s xcoordinate is relative to the game screen’s xcoordinate, so to find its absolute coordinate we need to sum the two: absolute_x = board.x + piece.x. Top left corner of the piece is also not always usable - the game may not respond. However, if we target the center of the cell - that works reliably. So we offset the absolute coordinate with half of the cell’s width or height: absolute_x = board.x + piece.x + piece.width. Finally running the bot! Now we have all the tools for the bot to solve basic puzzles of Puzlogic. The last remaining piece is to build a run.pyscript putting it all together: from puzbot.vision import ScreenshotSource, Vision from puzbot.bot import Bot from puzbot.solver import Solver from puzbot.controls import Controller source = ScreenshotSource() vision = Vision(source) solver = Solver() controller = Controller() bot = Bot(vision, controller, solver) print('Checking out the game board') bot.refresh() print('Game board:', vision.get_cells()) print('Game pieces:', vision.get_pieces()) print('Calculating the solution') print('Suggested solution:', bot.get_moves()) print('Performing the solution') bot.do_moves() Now we can run the bot with python run.py, making sure that the game window is also visible on the screen. Here’s a demo video: You can see that the bot needs human help to get through the interfaces, and it can only solve puzzles by itself. Even then, it is able to resolve the first 4 basic puzzles, and just by chance is able to solve the 5th one, as it contains sum constraints - right now the bot cannot handle sum constraints in vision. So it does fail on the 6th puzzle. Full code mentioned in this post can be found on Github: flakas/puzlogic-bot - bot.py, flakas/puzlogic-bot - controls.py. If you are interested in the full project, you can also find it on Github: flakas/puzlogic-bot.
https://tautvidas.com/blog/
CC-MAIN-2019-43
refinedweb
3,747
62.27
Disclaimer. This is an ancient post. By the looks of it, I originally intended to write this almost a year ago, as a follow up to my scalar properties writeup. That was back when I was testing properties (and more exactly, default properties) and some of the design was still in flux. Now it’s pretty nailed down (there’s very little time to change it for Whidbey, anyhow), and it dawned on me that I hadn’t posted this material. So, I’ll do it now, with a bit of fixup. The Way C# Does It. Reader Rob Walker asked over a year ago: Why do I have to specify the type of the property 3 times in the definition? It makes this new syntax more verbose than the old. Why not just adopt the C# style? Let me start by mentioning that I’m not a member of the design team. I’ve been included on various discussions that take place regarding the syntax, but I haven’t been the one making the decisions. I can only make guesses as to their reasonings (or ask them). Either way, the reasoning is my own. I believe there are two reasons why we wouldn’t adopt the C# style. First, there’s an existing property syntax that users are familiar with. The changes between the two syntaxes are somewhat minor. Second, the C# property syntax doesn’t fit well with C++ paradigms. Though simple, I imagine the syntax could wreak havok on parsers, and could introduce a number of ambiguities in the language. One of the goals of adding CLI to C++ was to not break existing paradigms, where possible. Finally, the designers wanted to do what felt natural to C++ users. C++ users aren’t familiar with functions that have no parameter lists, that’s unusual. Speaking of how C# does it, here it is: public class ArrayWrap { public ArrayWrap(){ arr = new int[10]; } public int this[int idx] { get{ return arr[idx]; } set{ arr[idx] = value; } } private int[] arr; } And here’s code to produce the equivalent class in C++: public ref class ArrayWrap { public: ArrayWrap(){ arr = gcnew array<int>(10); } property int default[int] { int get(int idx){ return arr[idx]; } void set(int idx, int value){ arr[idx] = value; } } private: array<int>^ arr; }; We stole the default keyword and co-opted it for use in indexed properties. Note that you still make use of the property keyword. When this comes out in IL, it looks almost exactly like the C# indexer – with the default property being named “Item” and the getters and setters also thusly named. You can change the name of the default property by using the attribute System::Reflection::DefaultMember on the class. The compiler sees this attribute and interprets it to mean that the default member you created in the class should be named whatever you pass in the attribute ctor. A weird IL trick leads to another way you can generate a class with a default indexer using that attribute. In IL, a property is a default indexer if it meets two requirements: 1) it is an indexed property, and 2) its name matches the name in the DefaultMember attribute. So, you can actually generate a default member in C++ using code like this: [System::Reflection::DefaultMember(“Foo”)] public ref class ArrayWrap { public: property int Foo [int] { … } However, I do not recommend this approach, except as a possible way to workaround bugs, or to obfuscate for fun, as the C++ compiler will not recognize Foo as a default property when compiling, only when importing. Also, although it is legal in IL, I do not recommend using this trick to make default scalar properties. That’s just unpleasant. In a future posting, I’ll go over the various ways you can access properties, which may come in handy as workarounds.
https://blogs.msdn.microsoft.com/arich/2005/08/10/properties-part-2-defining-default-properties/
CC-MAIN-2016-36
refinedweb
647
60.45
I got asked if I could write a quick explanation on the useEffect hook provided by React and thought "Sure, that should help a few people!". useEffect can behave like componentDidMount shouldComponentUpdate and componentWillUnmount in one function if you set it up correctly. In this post I'll show you a few ways to replicate different lifecycle behaviours. Keep in mind that useEffect uses the second argument dependencies as a performance tool Here is an interesting read about how you can write your hooks in general even without dependencies: Example as componentDidMount First you can write an Effect that will just run once when the component mounted and will never run again: useEffect(() => { console.log('I was mounted and will not run again!') }, []) Important here is the empty array as a second argument. The second argument of useEffect can be used to watch properties for changes. See the following. Example as shouldComponentUpdate useEffect can also help with watchers on your properties so you can run it everytime a specific value is updated. Let's say we have a prop called "name" and our component should update something via effect everytime the name prop changes you could do it like this: const MyComponent = (props) => { useEffect(() => { document.title = `Page of ${props.name}` }, [props.name]) return <div>My name is {props.name} </div> } You can see that we passed props.name into the array in the second argument. This will now cause the effect to always run again when the name changes. Side note: You should always set the second argument because otherwise you can run into render loops. Example as componentWillUnmount useEffect can also be used to run code when the component dismounts. This is effective for subscriptions or other listeners (Websockets for example). let bookSubscription = null useEffect(() => { // stop the subscription if it already exists if (bookSubscription && bookSubscription.unsubscribe) bookSubscription.unsubscribe() // start a new subscription bookSubscription = startBookSubscription({ bookId: props.bookId }) return () => { // stop the subscription when the component unmounts bookSubscription.unsubscribe() } }, [props.bookId]) You can see that now we used all options available. This code will now - Start a new subscription when the component was mounted - Update the subscription with the new bookId when the bookId prop changes - unsubscribe the subscription when the component gets unmounted. You can run logic whenever the component unmounts by returning a function in your effect. I hope this quick post was helpful to you and helps you with further development. If you have questions, let me know! Discussion Sorry to be "that guy", but I think you've got it pretty wrong. Every useEffect you write should work correctly without using the dependencies argument. If something should only happen once, you should just do it, then set some state which tells you not to do it again. useEffect is not lifecycles. Hey, thanks for your comment. While you're technically right - they are not lifecycles - they can be used to replicate lifecycles introduced by ClassComponents. Also the dependencies argument should always be set, because a effect without dependencies will run on every render cycle. Can you specify why my useEffect should also work without dependencies? I just wrote an article the topic: dev.to/samsch_org/effects-are-not-... The purpose of the second argument is to be an optimization feature, not for control flow. This isn't state particularly explicitly in the React docs, but note that it's referred to as an optimization tool here reactjs.org/docs/hooks-effect.html... and how the examples throughout the doc page don't use the deps arg. Hey Samuel, thanks that was an interesting read! I'll link to it in my article if thats fine. I totally agree with you that dependency arguments are meant to be used as a performance tool, not lifecycles. In my post I just used lifecycles as an example on how the different parts of an effect work and how users can replicate lifecycles using them (and the return function). Give me a few minutes to bring in your article into my post. Super easy to understand! For the last example - it looks like startBookSubscriptionis being called each time the prop is changed (if I'm understanding correctly), maybe it should be renamed? Thats right. I'll update the code with a better example! One second. Thanks for the Clean and Simple explaination !
https://practicaldev-herokuapp-com.global.ssl.fastly.net/bdbch/a-quick-explanation-on-useeffect-2nn9
CC-MAIN-2021-04
refinedweb
721
56.66
A function is a single comprehensive unit (self-contained block) containing a block of code that performs a specific task. In this tutorial, you will learn about c programming user defined functions. C programming user defined functions In C programming user can write their own function for doing a specific task in the program. Such type of functions in C are called user-defined functions. Let us see how to write C programming user defined functions. Example: Program that uses a function to calculate and print the area of a square. #include <stdio.h> int square(int a); //function prototype int main() { int x, sqr; printf("Enter number to calculate square: "); scanf("%d", &x); sqr = square (x); //function call printf("Square = %d", sqr); return 0; } //end main int square (int a) //function definition { int s; s = a*a; return s; //returns the square value s } //end function square Elements of user-defined function in C programming There are multiple parts of user defined function that must be established in order to make use of such function. - Function declaration or prototype - Function call - Function definition - Return statement Function Call Here, function square is called in main sqr = square (x); //function call Function declaration or prototype int square(int a); //function prototype Here, int before function name indicates that this function returns integer value to the caller while int inside parentheses indicates that this function will recieve an integer value from caller. Function definition A function definition provides the actual body of the function. Syntax of function definition return_value_type function_name (parameter_list) { // body of the function } It consists of a function header and a function body. The function_name is an identifier. The return_value_type is the data type of value which will be returned to a caller. Some functions performs the desired task without returning a value which is indicated by void as a return_value_type. All definitions and statements are written inside the body of the function. Return statement Return statement returns the value and transfer control to the caller. return s; //returns the square value s There are three ways to return control. return; The above return statement does not return value to the caller. return expression; The above return statement returns the value of expression to the caller. return 0; The above return statement indicate whether the program executed correctly.
http://www.trytoprogram.com/c-programming/c-programming-user-defined-functions/
CC-MAIN-2019-30
refinedweb
388
51.28
Overview: First of all in this article, I will teach you about circle and area of circle. Then, I will show the demonstration area of circle formula. Moreover, I will write c program to print area of circle. After all, I will show its output. Table of contents: - What is Circle? - What is the radius of a circle? - What is the formula for the area of circle? - Demonstration of area of a circle - Logic to calculate the area of the circle - C program to find area of circle - Conclusion What is Circle? A circle is an important geometrical shape whose distance from the center to the edge is always the same. A circle is named by its center. In other words, A circle can be defined as a locus (a set of points in a particular position) of points that maintain equidistant from another point called the center. In other words, a circle is a Every circle has a center. A circle has the following components: What is the radius of a circle? The radius of a circle is the distance from the center of a circle to any point on the circle's Circumference. What is the formula for the area of circle? The area of a circle can be calculate by using this formula: A = Πr In this formula, The area and radius of the circle is denoted as A and r respectively. π is a constant whose value is 3.1415 or 22/7. Demonstration of area of a circle Logic to calculate area of the circle - Step 1: First get the radius of the circle, by using the variable r - Step 2: Then store the value of the area of circle into another variable area. - Step 3: After all, display variable area. C program to find area of circle #include <stdio.h> int main() { float radius, area; printf("\nEnter the radius of the Circle : "); scanf("%d", &radius); area = 3.14 * radius * radius; printf("\n The Area of The Circle : %f", area); return (0); } Conclusion: A circle can be defined as a locus (a set of points in a particular position) of points that maintain equidistant from another point called the center. In this Program first of all User Enter the radius of the circle then it shows the area of the circle according to the given radius by using formula A = Π r2. Area is always squarely proportional to the radius.
https://www.onlineinterviewquestions.com/blog/c-program-to-find-area-of-circle/
CC-MAIN-2021-17
refinedweb
405
72.26
Important: Please read the Qt Code of Conduct - What is the real effect of function: QGraphicsScene::setSceneRect? #include <QApplication> #include <QGraphicsScene> #include <QGraphicsView> int main(int argc, char *argv[]) { QApplication a(argc, argv); QGraphicsScene scene; QGraphicsView view(&scene); view.setHorizontalScrollBarPolicy(Qt::ScrollBarAlwaysOff); view.setVerticalScrollBarPolicy(Qt::ScrollBarAlwaysOff); view.setFixedSize(800, 600); view.show(); scene.setSceneRect(100, 100, 300, 300); // scene.setSceneRect(0, 0, 300, 300); //scene.setSceneRect(-100, -100, 300, 300); scene.addRect(0, 0, 300, 300, QPen(Qt::blue)); scene.addRect(0, 0, 1, 1, QPen(Qt::red)); return a.exec(); } Changing the arguments of setSceneRect, I see the red point and the blue rectangle is positioned on different points in the window, why? Can you help me? I read the Qt documents about "Graphics View Framework" and QGraphicsScene, QGraphicsView, QGraphicsItem, Coordinates, ... but I cannot imagine why changing the positions make those items changing the positions in the window/view. - SGaist Lifetime Qt Champion last edited by Hi, From the doc, I'd say that the items are still put at the same global places but your scene is looking at a different place in the global scene. I think I've understood the issue. The center of the view will be the same as the center of the scene's rect at begin. Thanks
https://forum.qt.io/topic/58699/what-is-the-real-effect-of-function-qgraphicsscene-setscenerect
CC-MAIN-2022-05
refinedweb
215
50.63
"Bill Martin" <wcmartin at vnet.net> wrote ... > I'm wondering about the value of allowing a class definition like this: > > class C: > pass > > Now I can define a = c() and b = c(), then say a.x1 = 1.2, b.x2 = 3.5 or > some such. If I try to print a.x2 or b.x1, I get an exception message > basically saying those member variables don't exist. It seems to me this > defeats one of the basic ideas behind OOP, that is to assign members to > the class definition, thus ensuring that all instantiations of a class > have the same member variables and methods. I know there must be a > reason that python allows this kind of thing that I described, but I > can't figure out what that reason could be. To me, it looks like a very > dangerous thing to do. Can someone help me understand why we'd like to > do it? Thanks, > Well, if you think it's dangerous you probably *don't* want to do it, but it's the easiest way to set up a data structure with named components - you are just taking advantage of the fact that each instance has its own namespace. It's a firly standard idiom, though it isn't required. Now, other object-oriented languages can go to great pains to make sure that an instance's attributes can only be accessed in very controlled ways, such as using accessor methods (getthis(), setthat(), etc.). Python, on the other hand, pays you the compliment of assuming you know what you are doing, and "gives you enough rope to shoot yourself in the foot"! If you choose to abuse that freedom, you end up with a limp. [Please, nobody ask "a limp what?"]. regards -- Steve Holden Python Web Programming
https://mail.python.org/pipermail/python-list/2003-April/200824.html
CC-MAIN-2017-17
refinedweb
300
71.55
So far we have a fully working iOS app that shows cards players can flip over, has a cheat in place for 3D Touch users so you can always guess correctly, and communication happening from iOS to watchOS. The next step is to write a simple watchOS app that is able to receive that data and make the device buzz gently. We started our project with an Xcode watchOS template, so all this time you will have seen two watchOS folders in your Xcode project: WatchKit App and WatchKit Extension. Yes, cunningly they are two separate things. The extension contains all the code that gets run, and the app contains the user interface. Both run on the Apple Watch as of watchOS 2.0, but in watchOS 1.0 the extension used to run on your iPhone. The first thing we're going to do is design a very simple interface using WatchKit, which is the watchOS equivalent of UIKit. This interface is going to contain only a label and a button, telling users to check their phone for instructions. We haven't written those instructions yet, but all in good time… Look inside the WatchKit App folder for Interface.storyboard, and open that in Interface Builder. Using the Object Library, just like on iOS, drag a label then a button into the small black space of our app's user interface. You will see that WatchKit automatically stacks its views vertically so the interface doesn't get too cluttered. Select the label, set its Lines property to be 0 so that it spans as many lines as necessary, then align its text center and give it the following content: "Please read the instructions on your phone before continuing." Now select the button and give it the text "I'm Ready". Finally, select both the label and button then change their Vertical and Horizontal alignment properties to be Center. All being well, your WatchKit interface should look like the screenshot below. Don't worry that the views go right to the edge – the Watch's bezel blends seamlessly with the edge of the screen in its apps, so it will look fine on devices. That's it: that's our entire interface. Before we continue with any further coding, we need to create outlets for the label and button by using the Assistant Editor and Ctrl-dragging. Name the label welcomeText and the button hideButton – you'll notice these have the types WKInterfaceLabel and WKInterfaceButton because we're in WatchKit now, not UIKit. Finally, create an action for when the button is tapped, again by Ctrl-dragging in the Assistant Editor. Name this hideWelcomeText(). We're done with Interface Builder now, so please go back to the standard editor and open the InterfaceController.swift file from the WatchKit Extension. The first thing we're going to do is identical to the code from iOS: set ourselves up as the delegate for the WCSession and activate it. So, start by adding this import: import WatchConnectivity Now add this to the willActivate() method – for our purposes, that's the equivalent of viewDidLoad() in the iOS app: if WCSession.isSupported() { let session = WCSession.default session.delegate = self session.activate() } The code is identical to iOS – I told you this was going to be easy! You'll get an error when you try to assign the delegate to the Watch's view controller, so you'll need to tell iOS you conform to the WCSessionDelegate protocol like this: class InterfaceController: WKInterfaceController, WCSessionDelegate { With that, we're almost done with watchOS. In fact, we just need to do two more things, starting with implementing the hideWelcomeText() method. All this needs to do is hide the label and the button we created so that the watch's screen is blank apart from the time in the corner – we don't want any obvious UI in there that might alert people. Hiding things in WatchKit is almost the same as iOS, so update the hideWelcomeText() to this: @IBAction func hideWelcomeText() { welcomeText.setHidden(true) hideButton.setHidden(true) } Note that you need to use setHidden() rather than just changing a isHidden property as you would in UIKit. We also need to add an empty method in order to satisfy the WCSessionDelegate protocol. Add this now: func session(_ session: WCSession, activationDidCompleteWith activationState: WCSessionActivationState, error: Error?) { } The last thing we need to do for our watchOS app is to make the device tap your wrist when it receives a message from iOS. To do this, we just need to implement the didReceiveMessage method for the WCSession so that it plays a haptic effect. There are quite a few effects to choose from, but by far the most subtle is WKHapticType.click, which is so subtle that you can't help but marvel at the engineering of the Apple Watch. Add this code just beneath hideWelcomeText(): func session(_ session: WCSession, didReceiveMessage message: [String : Any]) { WKInterfaceDevice().play(.click) } So, whenever the watch receives any message from the phone, it will tap your wrist. Perfect! But… we're not done yet. You see, we need to show some instructions on the iOS app so that everything functions correctly. You see, not only does the Apple Watch go to sleep extremely quickly, but it also likes making noise to accompany haptic effects, which would rather spoil our hoax! So, to finish up we're going to add an alert to the iOS app reminding you to check your Apple Watch configuration every time it launches. So, head back to ViewController.swift in your iOS app, then add this new method: override func viewDidAppear(_ animated: Bool) { super.viewDidAppear(animated) let instructions = "Please ensure your Apple Watch is configured correctly. On your iPhone, launch Apple's 'Watch' configuration app then choose General > Wake Screen. On that screen, please disable Wake Screen On Wrist Raise, then select Wake For 70 Seconds. On your Apple Watch, please swipe up on your watch face and enable Silent Mode. You're done!" let ac = UIAlertController(title: "Adjust your settings", message: instructions, preferredStyle: .alert) ac.addAction(UIAlertAction(title: "I'm Ready", style: .default)) present(ac, animated: true) } That shows instructions to users every time the app runs. Note that you need to put it inside viewDidAppear() rather than viewDidLoad() because it presents an alert view controller..
https://www.hackingwithswift.com/read/37/9/designing-a-simple-watchos-app-to-receive-data
CC-MAIN-2022-40
refinedweb
1,057
62.38
import "nsIDocumentEncoder.idl"; Encode the document and send the result to the nsIOutputStream. Possible result codes are the stream errors which might have been encountered. Encode the document into a string. Encode the document into a string. Stores the extra context information into the two arguments. Initialize with a pointer to the document and the mime type. Documents typically have an intrinsic character set, but if no intrinsic value is found, the platform character set is used. This function overrides both the intrinisc and platform charset. Possible result codes: NS_ERROR_NO_CHARSET_CONVERTER If the container is set to a non-null value, then its child nodes are used for encoding, otherwise the entire document or range or selection or node is encoded. If the node is set to a non-null value, then the node is used for encoding, otherwise the entire document or range or selection is encoded. Set the fixup object associated with node persistence. If the range is set to a non-null value, then the range is used for encoding, otherwise the entire document or selection is encoded. If the selection is set to a non-null value, then the selection is used for encoding, otherwise the entire document is encoded. Set a wrap column. This may have no effect in some types of encoders.. Convert links, image src, and script src to absolute URLs when possible. XHTML/HTML output only. Do not print html head tags. XHTML/HTML output only. LineBreak processing: if this flag is set than CR line breaks will be written. If neither this nor OutputLFLineBreak is set, then we will use platform line breaks. The combination of the two flags will cause CRLF line breaks to be written. Normally when serializing the whole document using the HTML or XHTML serializer, the encoding declaration is rewritten to match. This flag suppresses that behavior. Encode entities when outputting to a string. E.g. If set, we'll output if clear, we'll output 0xa0. The basic set is just & < > " for interoperability with older products that don't support and friends. XHTML/HTML output only. Encode entities when outputting to a string. The HTML entity set additionally includes accented letters, greek letters, and other special markup symbols as defined in HTML4. XHTML/HTML output only. Encode entities when outputting to a string. The Latin1 entity set additionally includes 8bit accented letters between 128 and 255. XHTML/HTML output only. Attempt to encode entities standardized at W3C (HTML, MathML, etc). This is a catch-all flag for documents with mixed contents. Beware of interoperability issues. See below for other flags which might likely do what you want. XHTML/HTML output? Plaintext output: Convert html to plaintext that looks like the html. Implies wrap (except inside ), since html wraps. HTML, XHTML and XML output: do prettyprinting, ignoring existing formatting. XML output : it doesn't implicitly wrap LineBreak processing: if this flag is set than LF line breaks will be written. If neither this nor OutputCRLineBreak is set, then we will use platform line breaks. The combination of the two flags will cause CRLF line breaks to be written. Don't allow any formatting nodes (e.g. , ) inside a . This is used primarily by mail. XHTML/HTML output only. Output the content of noframes elements (only for serializing to plaintext). Output the content of noscript elements (only for serializing to plaintext). Normally is replaced with a space character when encoding data as plain text, set this flag if that's not desired. Plaintext output only. Output as though the content is preformatted (e.g. maybe it's wrapped in a PRE or PRE_WRAP style tag) Plaintext output only. XXXbz How does this interact with OutputFormatted/OutputRaw/OutputPreformatted/OutputFormatFlowed? Don't do prettyprinting. Don't do any wrapping that's not in the existing HTML/XML source. This option overrides OutputFormatted if both are set. HTML/XHTML output: If neither are set, there won't be prettyprinting too, but long lines will be wrapped. Supported also in XML and Plaintext output. Output only the selection (as opposed to the whole document). Wrap even if we're not doing formatted output (e.g. for text fields). Supported in XML, XHTML, HTML and Plaintext output. Set implicitly in HTML/XHTML output when no OutputRaw. Ignored when OutputRaw. XXXLJ: set implicitly in HTML/XHTML output, to keep compatible behaviors for old callers of this interface XXXbz How does this interact with OutputFormatFlowed?
http://doxygen.db48x.net/comm-central/html/interfacensIDocumentEncoder.html
CC-MAIN-2019-09
refinedweb
739
60.61
Don't know if it's coded to the best or not, as its my first attempt using IF's , but by looking at it I'm sure iv'e made more work for my self then i needed to. Feedback please, or anyway I could have made this easier if its possible. Thank youThank youCode:#include <iostream> using namespace std; int main() { system("TITLE My First C++ Programme!"); system("COLOR 04"); int age; //variables int sonsage; cout << "How Old Are You?"; cin >> age; if ( age > 50 ) { cout << "your getting old :(!\n"; cout << "How old is your son?"; cin >> sonsage; } else if ( age < 50 ) { cout << "Your young :) :)\n"; cout << "How old is your son?"; cin >> sonsage; } if ( sonsage < 10 ) { cout << "Wow hes a young man he is!\n"; } else { cout << "i dont care!!! hahahahahahahahah :)\n"; } }
http://cboard.cprogramming.com/cplusplus-programming/127689-first-cplusplus-programme-inspection-needed-please.html
CC-MAIN-2015-32
refinedweb
135
82.65
% Help Probably the top two problems people have with Ogre are not being able to compile or a missing dependency. For the first, you are going to need to learn how to use your compiler. If you barely know C++ then expect a challenge, but don't give up! Thousands of people have successfully gotten Ogre to work with both the GCC and MSVC compilers, so look in the wiki and forums for what they have done that you haven't. For missing dependencies, these are libraries that aren't installed, that aren't linked against your program, or that aren't in your runtime path. Other dependencies are incorrect rendering plugins in your plugins.cfg file or incorrect paths in your resources.cfg file, or missing one of the files all together. If you have problems reread this page as well as Installing An SDK - Shoggoth and Building From Source - Shoggoth and look in the Ogre.log file. You may also find your problem answered in the Build FAQ. If you need further help, search the forums. It is likely your problem has happened to others many times. If this is a new issue, read the forum rules then ask away. Make sure to provide relevant details from your Ogre.log, exceptions, error messages, and/or debugger back traces. Be specific and people will be more able to help you. Your First Application Now we will create a basic source file for starting an OGRE application. This program, like the included samples, uses the example framework. Copy the following code and include it as a new file in your project settings. Following our conventions, you'd put it in work_dir/src and name it SampleApp.cpp. Since this is dependent upon ExampleApplication.h and ExampleFrameListener.h make sure these files are accessible by your project. Our convention would have you put them in work_dir/include. You can copy them from the Samples directory. #include "ExampleApplication.h" ''// Declare a subclass of the ExampleFrameListener class'' '''class''' MyListener : '''public''' ExampleFrameListener { '''public''': MyListener(RenderWindow* win, Camera* cam) : ExampleFrameListener(win, cam) { } '''bool''' frameStarted('''const''' FrameEvent& evt) { '''return''' ExampleFrameListener::frameStarted(evt); } '''bool''' frameEnded(const FrameEvent& evt) { '''return''' ExampleFrameListener::frameEnded(evt); } }; ''// Declare a subclass of the ExampleApplication class'' '''class''' SampleApp : '''public''' ExampleApplication { '''public''': SampleApp() { } '''protected''': ''// Define what is in the scene '''void''' createScene('''void''') { // put your scene creation in here } ''// Create new frame listener'' '''void''' createFrameListener('''void''') { mFrameListener = '''new''' MyListener(mWindow, mCamera); mRoot->addFrameListener(mFrameListener); } }; #ifdef __cplusplus extern "C" { #endif #if OGRE_PLATFORM == OGRE_PLATFORM_WIN32 #define WIN32_LEAN_AND_MEAN #include "windows.h" '''INT''' WINAPI WinMain(HINSTANCE hInst, HINSTANCE, LPSTR strCmdLine, '''INT''') #else '''int''' main('''int''' argc, '''char''' **argv) #endif { ''// Instantiate our subclass'' SampleApp myApp; '''try''' { ''// ExampleApplication provides a go method, which starts the rendering.'' myApp.go(); } '''catch''' (Ogre::Exception& e) { #if OGRE_PLATFORM == OGRE_PLATFORM_WIN32 MessageBoxA(NULL, e.getFullDescription().c_str(), "An exception has occured!", MB_OK | MB_ICONERROR | MB_TASKMODAL); #else std::cerr << "Exception:\n"; std::cerr << e.getFullDescription().c_str() << "\n"; #endif '''return''' 1; } '''return''' 0; } #ifdef __cplusplus } #endif Compile this code now. However before running the program, make sure you have a plugins.cfg and a resources.cfg in the same directory as the executable. Review the Prerequisites section for the purpose of these files. Edit them and make sure the paths are correct. Otherwise your OGRE setup dialog box may not have any rendering libraries in it, or you may recieve an error on your screen or in Ogre.log that looks something like this: Description: ../../Media/packs/OgreCore.zip - error whilst opening archive: Unable to read zip file When the program starts it will display the OGRE setup dialog and start the application with a blank, black screen containing little more than the OGRE logo and an FPS (frame per second) display. We haven't added anything to this scene yet, as evidenced by the empty createScene method. Press ESC to quit the application. If you didn't get this, something is not right in your setup. See the Prerequisites and the Getting Help sections to review your installation. The ExampleApplication framework will boot the OGRE system, displaying a configuration dialog, create a window, setup a camera and respond to the standard mouselook & WSAD controls. All you have to do is to fill in the 'createScene' implementation. If you want to do more advanced things like adding extra controls, choosing a different scene manager, setting up different resource locations, etc, you will need to override more methods of ExampleApplication and maybe introduce a subclass of ExampleFrameListener. As mentioned before, you don't have to use the ExampleApplication and ExampleFrameListener base classes. Use them to work through the tutorials and to test things out. For larger projects you'll want to write your own framework, or use one of the frameworks or engines available on Projects using OGRE. - - Note for American readers - : Sinbad the lead developer and creator of OGRE, is British. Naturally he uses British spellings such as "Colour", "Initialise" and "Normalise". Watch out for these spellings in the API. See below to learn about your resources for getting help. Then your next step is to work through the Ogre Tutorials.
https://wiki.ogre3d.org/Setting+Up+An+Application+-+Windows+-+Shoggoth
CC-MAIN-2021-49
refinedweb
849
56.55
”final” keyboard can be used with methods, classes and variables in Java. Each use has different meaning that we are going to discuss in details in this tutorial : final variable, final method and final class. 1. Final Variable : final variables are variables that cannot be changed i.e. constant. It is really good practice to use final variables using uppercase letters and underscores as separator. Something as like below : final int MY_VARIABLE = 10; MY_VARIABLE cannot be changed after this initialization. If you want to change, you will get compile time error. Initialize a final variable in Instance initializer block , in constructor or in static block: Final variables that are not declared at the time of declaration are called blank final variables. But wait ..how can it be a constant without holding a value ? Yes, we need to initialize them as well. Initialization can be done either in a instance initializer block, or in a constructor . For a final variable that is static, we should initialise it inside static block. Let’s take a look into these three different scenarios. : public class Car { final int PRICE; final String COLOR; final static String OWNER_NAME; { COLOR = "black"; } static { OWNER_NAME = "Adam"; } Cars(int price) { PRICE = price; } } In the above example, for the class “Car”, three final variables are defined. We can change the “PRICE” variable using its constructor each time a new object will create. Final variables are used as constants. For global constants, static final variables are used. Final methods : Similar to final variables, we can have final method. Means method that cannot be changed. Behaviour of a method can only be changed by overriding it in another class. So, final methods are not allowed to override. Example : Car.java public class Car { final int WHEELS = 4; public final int getNoOfWheels(){ return WHEELS; } } Van.java public class Van extends Car { } Main.java public class Main { public static void main(String[] args){ Van van = new Van(); System.out.println("no of wheels of a Van "+van.getNoOfWheels()); } } In the above example we have one Car class with one final method getNoOfWheels() that returns 4 . We have created one new class ‘Van’ extending ‘Car’ . In ‘Main’ class, we are able to access the final method ‘getNoOfWheels’ from ‘van’ object. i.e. it is inheriting the method from its parent class. But if we try to override it inside ‘Van’ class, one compile-time error will be thrown mentioning that a final method cannot be overriden. Final Class : Final class is a class that cannot be extended i.e. it cannot be inherited. e.g. Int and Float are final classes . public final class Car { final int WHEELS = 4; public final int getNoOfWheels(){ return WHEELS; } } Now, if we try to create another class by extending class ‘Car’, it will show one compile time error message. Notes on final keyword in Java : - Final keyword can be applied to variable,method and class in Java. - Final variables cannot be changed, final methods cannot be override and final class cannot be extended. - Final variables should be initialised always. At the time of declaration, inside constructor, inside static method (for static final variables ) or inside instance initializer block. - A constructor cannot be final - All variables declared inside interface are final - Using final variables, methods and classes in Java improves performance.
https://www.codevscolor.com/java-final-keyword/
CC-MAIN-2020-29
refinedweb
550
65.62
Happy Friday! Today I’d like to shed some light on another brand-new functionality upcoming for PyCharm 4 – Behavior-Driven Development (BDD) Support. You can already check it out in the PyCharm 4 Public Preview builds available on the EAP page. Note: The BDD support is available only in the PyCharm Professional Edition, not in the Community Edition. BDD is a very popular and really effective software development approach nowadays. I’m not going to cover the ideas and principles behind it in this blog post, however I would like to encourage everyone to try it, since it really drives your development in more stable and accountable way. Sure, BDD works mostly for companies that require some collaboration between non-programmers management and development teams. However the same approach can be used in smaller teams that want to benefit from the advanced test-driven development concept. In the Python world there are two most popular tools for behavior-driven development – Behave and Lettuce. PyCharm 4 supports both of them, recognizing feature files and providing syntax highlighting, auto-completion, as well as navigation from specific feature statements to their definitions. On-the-fly error highlighting, automatic quick fixes and other helpful PyCharm features are also available and can be used in a unified fashion. Let me show you how it works in 10 simple steps: 1. To start with BDD development and in order to get the full support from PyCharm, you first need to define a preferred tool for BDD (Behave or Lettuce) in your project settings: 2. You can create your own feature files within your project – just press Alt+Insert while in the project view window or in the editor and select “Gherkin feature file”. It will create a feature file where you can define your own features, scenarios, etc. PyCharm recognizes feature files format and provides syntax highlighting accordingly: 3. Since there is no step definitions at the moment, PyCharm highlights these steps in a feature file accordingly. Press Alt+Enter to get a quick fix on a step: 4. Follow the dialog and create your step definitions: 5. You can install behave or lettuce right from the editor. Just press Alt+Enter on unresolved reference to get the quick-fix suggestion to install the BDD tool: 6. Look how intelligently PyCharm keeps your code in a consistent state when working on step definitions. Use Alt+Enter to get a quick-fix action: 7. In feature files, with Ctrl+Click you can navigate from a Scenario description to the actual step definition: Note: Step definitions may contain wildcards as shown in the step #6 – matched steps are highlighted with blue in feature files. 8. PyCharm also gives you a handy assistance on automatic run configurations for BDD projects. In the feature file, right-click and choose the “create” option, to create an automatic run configuration for behave/lettuce projects: 9. In the run configurations you can specify the scenarios to run, parameters to pass and many other options: 10. Now you’re all set to run your project with a newly created run configuration. Press Shift+F10 and inspect the results: That was simple, wasn’t it? Hope you’ll enjoy the BDD support in PyCharm and give this approach a try in your projects! See you next week! -Dmitry Very nice feature and smart integration. Will there be Gherkin keyword support for other languages (configurable)? Well we have no such plans currently. Let me know for what languages/frameworks do you need this support? Greate Feature! I love it. Currently I’m working with guys from another Business Unit. All of them are German native speakers. The support of German Gherkin keywords would be very very helpful. Thanks. I will be pleased to have a French Gherkins vesion This looks great. PyCharm 3.* has changed my world, can’t wait to see what 4 has to offer. The BDD (+1 lettuce) navigation and quick fixes are great. This is a nice feature. Would it be possible to have support for behave’s default parse mode for step parameters instead of using re? We had no such plans, however this sounds as a good feature request. Could you please create a ticket here ? OK thank you, I’ve now added a ticket. Normally I run my tests like this: python run_behave.py testcases/website.feature --browser_name=firefox --target_env= How do I convert this to a Behave Run Configuration in PyCharm? Hi, What is “run_behave.py”? Is it your custom file? What does it do in this case? You can pass any arguments to behave but behave does not have “browser_name” nor “target_env” arguments. If you need to pass some data to your step definitions you may use environment variables (they may be passed to any python configuration in PyCharm including behave). Look: My run_behave does the following: from behave import configuration from behave import __main__ # Adding my wanted option to parser in behave. configuration.parser.add_argument('-b', '--browser_name', help='Browser to use') configuration.parser.add_argument('-vb', '--browser_version', help='Browser version') configuration.parser.add_argument('-os', '--operating_system', help='OS where the browser is running') ... __main__.main() I guess it shouldn’t have been done this way. I will try to rewrite this to environment variables. Hello, PyCharm uses Behave API to run it, so you should not run it directly. I believe env. variable is the best way to pass something to step definitions. Could you make support for different languages? We use Russian in “feature” files but they’re shown as plain text. Hello Zoya, You may use Russian in feature files now, but keywords should be in English. If you need to use Russian keywords, you may create Feature Request: , we will try to implement it in future versions. Thank you, Ilya Unfortunately I have no permission to create a new task why Scenario outlines are not detected by the lettuce runner. I get “Empty test suite.” error message Please file a bug to Hi, I’ve been using Behave BDD framework with PyCharm and it’s great!! One quick question, it seems like the test run terminates if one of the scenarios fails. Is there a way force the feature to run completely even some scenario fails. Hello. *Feature* does not stop when *scenario* fails, but *scenario* fails when one of its *steps* fails. That is how Behave works. Look: If PyCharm behaves differently in your case, please submit a bug. Thank you. Pingback: JetBrains PyCharm Professional 4.5.2 Build 141.1580 They are planned for the next major release. Hello, I’m a tester and test automation programmer for a small python shop. The dev team I work with is heavily invested in Pytest, and as a result, insist on using the Pytest-BDD plugin for Gherkin, rather than Behave. Any chance you folks will ever incorporate support for Pytest-BDD into Pycharm? It’s not a big deal for the dev team (who all use emacs). However, it would be nice if I could have a lot of these IDE conveniences. Thanks, Greg. Hi Greg, Thanks for the request. Could you please create a feature request here ? It will be easier to track and manage it. Does it format parameter tables and keep them aligned properly? Hi Terrence, If you speak about “examples” section for scenario outlines, then answer is yes. When you call “reformat” (which is CTRL+ALT+L on Windows for example) in Gherkin (.feature) file, it reformats tables, so they look pretty. Hi, I’m have some trouble setting up my environment using lettuce and django. I can run lettuce inside virtualenv with ‘python manage.py harvest’, but when I try to use a lettuce configuration. I get this error: ValueError: Unable to configure handler ‘mail_admins’: Cannot resolve ‘assettools.common.backend.log.FormattedSubjectAdminEmailHandler’: cannot import name QuerySet when importing (…)\blockOperations-steps.py Here is my django configuration: And my lettuce configuration: Can you help me? Best regards, Veronica Hi. Could you give me access please? I am not sure you can run lettuce configuration with Django. But you may run manage.py console from PyCharm and run harvest from there. Hi, sorry about the restricted access, it should work now. I can run harvest from the manage.py console successfully, but then the results are presented in a very inconvenient way (plain text), and it gets hard to track the result with everything in the console. What would be the proper way to configure it so I can use it as a Lettuce configuration? Should I have a separate project just for the tests? Thanks! Unfortunatelly PyCharm does not have harvest support for now. Please create feature request: You can use Django manage.py console in PyCharm to run tests for now. Is this feature available in the community version also? I miss these features in the community version which I enjoyed in the paid version of RubyMine. Very helpful features. Would love to see it working … in free or paid version. BDD support is available only in PyCharm Professional Edition. I was crafting a tutorial of BDD in PyCharm and I noticed several things I didn’t like: 1) “Create all steps definition” doesn’t work well. It creates definitions for “some” expressions and I can’t figure out how it choose them. 2) Automatically created step_impl functions have “pass” in it. That means auto-generated tests will pass by default. That’s not nice. Test must fail! 3) It adds “use_step_matcher(‘re’)” by default. 99% of times you wont need that. I mean if you have a problem you want to solve with regular expressions then you have two problems But overal nice support and great work! “-pg_sleep(0)-“ 1-sleep(0)-1 1-pg_sleep(0)-1 When you start to have a few features files it seems sensible to produce a directory structure to manage them. This is reasonably easy to establish and a right click on a directory name allows you to Run all the tests in that directory. However if the directory structure is deeper than that you cannot run the all the contained feature files. This would be useful and allow a granular approach to test running. Chris , unfortunatelly it does not suppored now. Please create feature request I have downloaded the Pycharm Community and I have installed the behave 1.2.5 through pip. In the project interpreter I could see the installed packages. When I try to create a new feature file, I don’t see the “new Gherkin file” option in the context menu itself.Now how can i create a feature file in my project. Do i need to add any other plugin. BDD features are a Professional feature, you can get a free 30 day trial of PyCharm Professional Edition from our website: Let us know if you have any questions! Thank you. So it is not possible in Pycharm Community right. You can always use the command line, and manually use behavioral-driven testing. The PyCharm integration is a feature that’s only available in the professional edition though. How can I run multiple feature files in Pycharm Professional edition. I have used behave “one.feature”, “two.feature” but it fails. Do we need to setup any configuration file or anything else Actually I have removed the comma separator in between two feature files and it started to work. When I have more feature files how can I give it another way apart from command line/terminal. Any configuration file or .bat file or any runner file? I could run the multiple features by removing the comma which is in-between the feature files. Now, how could I run apart from terminal/command line. Any way like configuration setup, runner file or .bat file Hi, I am using Pycharm with behave. is there any way that i can navigate from a Scenario description in execute_steps() block to the actual step definition ? eg: @Given(‘i am on the home page of the website’) def step_on_home_page(context, publisher): context.execute_steps(u”’ Given I am on the login page When I login Then I am redirected to Home Page ”’)
https://blog.jetbrains.com/pycharm/2014/09/feature-spotlight-behavior-driven-development-in-pycharm/
CC-MAIN-2018-43
refinedweb
2,029
66.94
High Availability Raft Framework for Go High Availabilty Framework for Happy Data Uhaha is a framework for building highly available Raft-based data applications in Go. This is basically an upgrade to the Finn project, but has an updated API, better security features (TLS and auth passwords), customizable services, deterministic time, recalculable random numbers, simpler snapshots, a smaller network footprint, and more. Under the hood it utilizes hashicorp/raft, tidwall/redcon, and syndtr/goleveldb. Below a simple example of a service for monotonically increasing tickets. package main import "github.com/tidwall/uhaha" type data struct { Ticket int64 } func main() { // Set up a uhaha configuration var conf uhaha.Config // Give the application a name. All servers in the cluster should use the // same name. conf.Name = "ticket" // Set the initial data. This is state of the data when first server in the // cluster starts for the first time ever. conf.InitialData = new(data) // Since we are not holding onto much data we can used the built-in JSON // snapshot system. You just need to make sure all the important fields in // the data are exportable (capitalized) to JSON. In this case there is // only the one field "Ticket". conf.UseJSONSnapshots = true // Add a command that will change the value of a Ticket. conf.AddWriteCommand("ticket", cmdTICKET) // Finally, hand off all processing to uhaha. uhaha.Main(conf) } // TICKET // help: returns a new ticket that has a value that is at least one greater // than the previous TICKET call. func cmdTICKET(m uhaha.Machine, args []string) (interface{}, error) { // The the current data from the machine data := m.Data().(*data) // Increment the ticket data.Ticket++ // Return the new ticket to caller return data.Ticket, nil } Using the source file from the examples directory, we'll build an application named "ticket" go build -o ticket examples/ticket/main.go It's ideal to have three, five, or seven nodes in your cluster. Let's create the first node. ./ticket -n 1 -a :11001 This will create a node named 1 and bind the address to :11001 Now let's create two more nodes and add them to the cluster. ./ticket -n 2 -a :11002 -j :11001 ./ticket -n 3 -a :11003 -j :11001 Now we have a fault-tolerant three node cluster up and running. You can use any Redis compatible client, such as the redis-cli, telnet, or netcat. I'll use the redis-cli in the example below. Connect to the leader. This will probably be the first node you created. redis-cli -p 11001 Send the server a TICKET command and receive the first ticket. > TICKET "1" From here on every TICKET command will guarentee to generate a value larger than the previous TICKET command. > TICKET "2" > TICKET "3" > TICKET "4" > TICKET "5" There are a number built-in commands for managing and monitor the cluster. VERSION # show the application version MACHINE # show information about the state machine RAFT LEADER # show the address of the current raft leader RAFT INFO [pattern] # show information about the raft server and cluster RAFT SERVER LIST # show all servers in cluster RAFT SERVER ADD id address # add a server to cluster RAFT SERVER REMOVE id # remove a server from the cluster RAFT SNAPSHOT NOW # make a snapshot of the data RAFT SNAPSHOT LIST # show a list of all snapshots on server RAFT SNAPSHOT FILE id # show the file path of a snapshot on server RAFT SNAPSHOT READ id [RANGE start end] # download all or part of a snapshot And also some client commands. QUIT # close the client connection PING # ping the server ECHO [message] # echo a message to the server AUTH password # authenticate with a password By default a single Uhaha instance is bound to the local 127.0.0.1IP address. Thus nothing outside that machine, including other servers in the cluster or machines on the same local network will be able communicate with this instance. To open up the service you will need to provide an IP address that can be reached from the outside. For example, let's say you want to set up three servers on a local 10.0.0.0network. On server 1: ./ticket -n 1 -a 10.0.0.1:11001 On server 2: ./ticket -n 2 -a 10.0.0.2:11001 -j 10.0.0.1:11001 On server 3: ./ticket -n 3 -a 10.0.0.3:11001 -j 10.0.0.1:11001 Now you have a Raft cluster running on three distinct servers in the same local network. This may be enough for applications that only require a network security policy. Basically any server on the local network can access the cluster. If you want to lock down the cluster further you can provide a secret auth, which is more or less a password that the cluster and client will need to communicate with each other. ./ticket -n 1 -a 10.0.0.1:11001 --auth my-secret All the servers will need to be started with the same auth. ./ticket -n 2 -a 10.0.0.2:11001 --auth my-secret -j 10.0.0.1:11001 ./ticket -n 2 -a 10.0.0.3:11001 --auth my-secret -j 10.0.0.1:11001 The client will also need the same auth to talk with cluster. All redis clients support an auth password, such as: redis-cli -h 10.0.0.1 -p 11001 -a my-secret This may be enough if you keep all your machines on the same private network, but you don't want all machines or applications to have unfettered access to the cluster. Finally you can use TLS, which I recommend along with an auth password. In this example a custom cert and key are created using the mkcerttool. mkcert uhaha-example # produces uhaha-example.pem, uhaha-example-key.pem, and a rootCA.pem Then create a cluster using the cert & key files. Along with an auth. ./ticket -n 1 -a 10.0.0.1:11001 --tls-cert uhaha-example.pem --tls-key uhaha-example-key.pem --auth my-secret ./ticket -n 2 -a 10.0.0.2:11001 --tls-cert uhaha-example.pem --tls-key uhaha-example-key.pem --auth my-secret -j 10.0.0.1:11001 ./ticket -n 2 -a 10.0.0.3:11001 --tls-cert uhaha-example.pem --tls-key uhaha-example-key.pem --auth my-secret -j 10.0.0.1:11001 Finally you can connect to the server from a client that has the rootCA.pem. You can find the location of your rootCA.pem file in the running ls "$(mkcert -CAROOT)/rootCA.pem". redis-cli -h 10.0.0.1 -p 11001 --tls --cacert rootCA.pem -a my-secret Below are all of the command line options. Usage: my-uhaha-app [-n id] [-a addr] [options] Basic options: -v : display version -h : display help, this screen -a addr : bind to address (default: 127.0.0.1:11001) -n id : node ID (default: 1) -d dir : data directory (default: data) -j addr : leader address of a cluster to join -l level : log level (default: info) [debug,verb,info,warn,silent] Security options: --tls-cert path : path to TLS certificate --tls-key path : path to TLS private key --auth auth : cluster authorization, shared by all servers and clients Networking options: --advertise addr : advertise address (default: network bound address) Advanced options: --nosync : turn off syncing data to disk after every write. This leads to faster write operations but opens up the chance for data loss due to catastrophic events such as power failure. --openreads : allow followers to process read commands, but with the possibility of returning stale data. --localtime : have the raft machine time synchronized with the local server rather than the public internet. This will run the risk of time shifts when the local server time is drastically changed during live operation. --restore path : restore a raft machine from a snapshot file. This will start a brand new single-node cluster using the snapshot as initial data. The other nodes must be re-joined. This operation is ignored when a data directory already exists. Cannot be used with -j flag.
https://xscode.com/tidwall/uhaha
CC-MAIN-2021-43
refinedweb
1,359
67.04
Currently, the production version of the application is a single JavaScript file. If the application is changed, the client must download vendor dependencies as well. It would be better to download only the changed portion. If the vendor dependencies change, then the client should fetch only the vendor dependencies. The same goes for actual application code. Bundle splitting can be achieved using optimization.splitChunks.cacheGroups. When running in production mode, webpack 4 can perform a series of splits out of the box but in this case, we'll do something manually. To invalidate the bundles correctly, you have to attach hashes to the generated bundles as discussed in the Adding Hashes to Filenames chapter. With bundle splitting, you can push the vendor dependencies to a bundle of their own and benefit from client level caching. The process can be done in such a way that the whole size of the application remains the same. Given there are more requests to perform, there's a slight overhead. But the benefit of caching makes up for this cost. To give you a quick example, instead of having main.js (100 kB), you could end up with main.js (10 kB) and vendor.js (90 kB). Now changes made to the application are cheap for the clients that have already used the application earlier. Caching comes with its problems. One of those is cache invalidation. A potential approach related to that is discussed in the Adding Hashes to Filenames chapter. Given there's not much to split into the vendor bundle yet, you should add something there. Add React to the project first: npm add react react-dom Then make the project depend on it: src/index.js import "react"; import "react-dom";... Execute npm run build to get a baseline build. You should end up with something as below: Hash: 8243e4d4e821c80ebf23 Version: webpack 4.43.0 Time: 3440ms Built at: 07/10/2020 3:00:42 PM Asset Size Chunks Chunk Names 1.js 127 bytes 1 [emitted] index.html 237 bytes [emitted] main.css 8.5 KiB 0 [emitted] main main.js 129 KiB 0 [emitted] main Entrypoint main = main.css main.js ... As you can see, main.js is big. That is something to fix next. vendorbundle# Before webpack 4, there used to be CommonsChunkPlugin for managing bundle splitting. The plugin has been replaced with automation and configuration. To extract a vendor bundle from the node_modules directory, adjust the code as follows: webpack.config.js const productionConfig = merge([ ...{ optimization: { splitChunks: { chunks: "all", }, }, },]); If you try to generate a build now ( npm run build), you should see something along this: Hash: 7d26879955396fd4464f Version: webpack 4.43.0 Time: 3442ms Built at: 07/10/2020 3:01:31 PM Asset Size Chunks Chunk Names 2.js 127 bytes 2 [emitted] index.html 276 bytes [emitted] main.css 8.5 KiB 0 [emitted] main main.js 2.65 KiB 0 [emitted] main vendors~main.js 127 KiB 1 [emitted] vendors~main Entrypoint main = vendors~main.js main.css main.js ... Now the bundles look the way they should. The image below illustrates the current situation. chunks: "initial"would give the same result in this case. You can see the difference after Code Splitting as the alloption is able to extract commonalities even chunks that have been code split while initialdoesn't go as far. The configuration above can be rewritten with an explicit test against node_modules as below: webpack.config.js const productionConfig = merge([ ...{ optimization: { splitChunks: { cacheGroups: { commons: { test: /[\\/]node_modules[\\/]/, name: "vendor", chunks: "initial", }, }, }, }, },]); Following this format gives you more control over the splitting process if you don't prefer to rely on automation. Webpack provides more control over the generated chunks by two plugins: AggressiveSplittingPluginallows you to emit more and smaller bundles. The behavior is handy with HTTP/2 due to the way the new standard works. AggressiveMergingPluginis doing the opposite. Here's the basic idea of aggressive splitting: const config = { plugins: [ new webpack.optimize.AggressiveSplittingPlugin({ minSize: 10000, maxSize: 30000, }), ], }, There's a trade-off as you lose out in caching if you split to multiple small bundles. You also get request overhead in HTTP/1 environment. The aggressive merging plugin works the opposite way and allows you to combine small bundles into bigger ones: const config = { plugins: [ new AggressiveMergingPlugin({ minSizeReduce: 2, moveToParents: true, }), ], }, It's possible to get good caching behavior with these plugins if a webpack records are used. The idea is discussed in detail in the Adding Hashes to Filenames chapter. webpack.optimize contains LimitChunkCountPlugin and MinChunkSizePlugin which give further control over chunk size. Tobias Koppers discusses aggressive merging in detail at the official blog of webpack. In the example above, you used different types of webpack chunks. Webpack treats chunks in three types: Starting from webpack 5, it's possible to define bundle splitting using entry configuration: const config = { entry: { app: { import: path.join(__dirname, "src", "index.js"), dependOn: "vendor", }, vendor: ["react", "react-dom"], }, }; If you have this configuration in place, you can drop optimization.splitChunks and the output should still be the same. To use the approach with webpack-plugin-serve, you'll have to inject webpack-plugin-serve/clientwithin app.importin this case. Doing this will require an extra check in addEntryToAll. The function was introduced in the Multiple Pages chapter. webpack-cascade-optimizer-plugin provides an approach of distributing code along output files in a smart order. The plugin allows you to get the benefits of bundle splitting without splitting. The situation is better now compared to the earlier. Note how small main bundle compared to the vendor bundle. To benefit from this split, you set up caching in the next part of this book in the Adding Hashes to Filenames chapter. To recap: optimization.splitChunks.cacheGroupsfield. It performs bundle splitting by default in production mode as well. AggressiveSplittingPluginand AggressiveMergingPlugin. Mainly the splitting plugin can be handy in HTTP/2 oriented setups. You'll learn to tidy up the build in the next chapter. This book is available through Leanpub (digital), Amazon (paperback), and Kindle (digital). By purchasing the book you support the development of further content. A part of profit (~30%) goes to Tobias Koppers, the author of webpack.
https://survivejs.com/webpack/building/bundle-splitting/
CC-MAIN-2020-40
refinedweb
1,035
59.7
Hello, I want to make a program that installs it self to a given directory, yet I don't know how to do it....I need a program that erases itself, where ever it is, and copies itself to the directory it was given, such as: Program output:Program output:Code:#include <iostream> using namespace std; char address[80]; int main() { cout<<"Where do you wish to install this program? : "<<endl; cin.getline(address,80,'\n'); return 0; } Yet I don't know how to get there, can someone help me?? Please....Yet I don't know how to get there, can someone help me?? Please....Code:Where do you wish to install this program? : C:\Program Files\
https://cboard.cprogramming.com/cplusplus-programming/76351-self-installer.html
CC-MAIN-2017-26
refinedweb
118
67.89
Congratulations! You've installed Panda3D on your machine. Now, what next? First of all, you should prepare by learning Python. You can also use C++ with Panda3D, but since most of its users use Python, and it is a very easy language to master, Python is the most recommended choice. Then, you can dive into the Manual and try the Hello World samples. The manual is a great resource for learning Panda3D and getting in-depth information about various subjects. However, for more concrete examples, you can dive into the Sample Programs, which ship with the Panda3D build. Panda3D comes with a handful of Sample Programs, which are stored in /Developer/Examples/Panda3D (for Panda3D 1.6.x, they are stored in /Applications/Panda3D/1.6.0/samples). After opening a terminal window navigate to the directory of the sample program you want to see. Then, you can run the program by using the 'ppython' command: (For Panda3D 1.6.x, use '/usr/bin/python' rather than 'ppython') ppython Tut-Asteroids.py What to do if you see the Error Message: dyld: Library not loaded: @executable_path/../Library/Frameworks/Cg.framework/Cg Referenced from: /Applications/Panda3D/1.6.0/lib/libpanda.dylib Reason: image not found Trace/BPT trap This means you need to download and install the NVIDIA Cg Toolkit before you will be able to use Panda3D. Fatal Python error: Interpreter not initialized (version mismatch?) Abort trap Or the following: Traceback (most recent call last): File "main.py", line 1, in <module> import direct.directbase.DirectStart ImportError: No module named direct.directbase.DirectStart You are running a version of Python that is not compatible with the version Panda3D has been compiled with. However, macOS always ships with a system version of Python that should be compatible. The fact that it doesn't work simply indicates that the wrong copy of python is ran. You need to make sure you are running Apple's version of Python. Fortunately, Panda3D ships with a file called 'ppython' that is a mere symlink to /usr/bin/python, where Apple's copy of Python resides. So, instead of 'python', you should call 'ppython'. updated drivers by running Software Update or, if you have installed a 3rd party video card, installing new drivers from the video card vendor. Alternatively, you can use Panda3D in software rendering mode, You will not be able to use fancy shaders and it will be much slower than in hardware mode. However, if you still want to use it, you need to edit your Config.prc file (which can be found in the "etc" dir of your Panda3D installation) and find this line: load-display pandagl Replace that line with this instead: load-display p3tinydisplay
http://www.panda3d.org/manual/index.php/Getting_Started_on_macOS
CC-MAIN-2018-39
refinedweb
455
65.52
Modify Multiple SharePoint Documents but retaining the previous Modified By Recently AnonymousX Hello, I'm hoping to create a way of copying and renaming a specific file off of a company Sharepoint site. For local files I've always used the method of using FileExists( "path") then FileCopy ( "source", "dest" [, flag = 0] ) #include <WinAPIFiles.au3> Copy_File() Func Copy_File() local $source = "C:\Users\auser\Documents\test.xls" Local $dest = "C:\Users\auser\Documents\test" Local $iFileExists = FileExists($source) If $iFileExists Then FileCopy($source,$dest);copy file to new location MsgBox($MB_SYSTEMMODAL, "", "File was copied") Else MsgBox($MB_SYSTEMMODAL, "", "File doesn't exist") EndIf EndFunc However with the file location provided by sharepoint, it seems autoIt isn't able to find it. File path provided by sharepoint looks something like this: I know if I have excel open and paste the link into the excel file name open box, it will open the file just fine. Also I know I can create shortcuts to these links, and when I click on them it will open the file just fine too. So I'm not sure how I have to refer to these files for AutoIT to recognize it and copy it to the folder location I want. I don't really have a good understanding on how this stuff works, but I was hoping the solution wasn't too complicated, and could use some help. Any help is appreciated, thanks in advance. -? - Recommended Posts You need to be a member in order to leave a comment Sign up for a new account in our community. It's easy!Register a new account Already have an account? Sign in here.Sign In Now
https://www.autoitscript.com/forum/topic/194794-modify-multiple-sharepoint-documents-but-retaining-the-previous-modified-by/
CC-MAIN-2022-05
refinedweb
279
60.14
#include <standard.disclaimer> OwenWatson: Just that the Genius plan locks you in for a long while (unless you fork out for modem/connection), plus you have to guess what your data consumption is likely to be; ours is very erratic, some months 30GB, others 150GB. If you get your guess wrong it can get very expensive, which is why the straight 30c/GB rate of FYX looks quite nice. old3eyes:Kyanar: And everyone else that's not an Genius can just go to hell. As usual. Looks that way. I'm giving them until the end of May then it's off to Telecom.. I've been with them for over 10 years now..
https://www.geekzone.co.nz/forums.asp?ForumId=82&TopicId=102424
CC-MAIN-2019-47
refinedweb
114
78.89
tl;dr This is a guide with the goal of laying down foundational knowledge that is required when speaking about building REST API's. The following topics are covered: - REST Constraints - Richardson Maturity Model - REST in Practice (Some practical guidelines) - Example project (Written in C# using .Net Core 3.1) called Ranker The main points that I would like to summarize with regards to REST are listed as follows: - REST IS an architectural style used to describe web architecture - REST IS protocol agnostic - REST IS about web architecture (REST != API) - REST IS NOT a design pattern - REST IS NOT a standard. However standards can be used to implement REST. 1. REST Fundamentals This sections covers REST essentials. The goal of this section is to make the reader comfortable with the notion of REST. It is also intended to provide the minimum required theory to start talking about REST and building HTTP services that incorporate a REST architectural style. Introduction REST (REpresentational State Transfer) is an architectural style that was defined by Roy Thomas Fielding in his PhD dissertation "Architectural Styles and the Design of Network-based Software Architectures". According to Fielding,. (Fielding, 2000) pg 109 Why REST? If you're someone that builds HTTP services for distributed systems, then understanding and applying REST principles will help you build services that are more: - scalable - reliable - flexible - portable By building services based on REST principles, one is effectively building services that are more web friendly. This is because REST is an architectural style that describes web architecture. REST Architectural Constraints Fielding describes REST as a hybrid style that is derived from several network-based architectural styles (Chapter 3) combined with a number of additional constraints. In this section, the six architectural constraints as applied to REST are discussed. The key takeaway is that these constraints encourage design that will result in applications that easily scale, are faster, and more reliable. The 6 architectural REST constraints are as follows: 1. Client-Server A guiding principle of Client-Server is the separation of concerns. It's all about achieving high cohesion and loose coupling in order to improve portability and flexibility. It also allows systems to evolve independently of each other. As can be seen by the diagram below, a Client sends a request, and a Server receives the request. 2. Statelessness A Server must not store any state during communications. All information required to understand a request must be contained within the Request. Therefore, every Request should be able to execute on its own and be self-contained. Also, a Client must maintain it's own state. The benefit of this approach is as follows: - Visibility - Everything required to understand the Request is within the Request. This makes monitoring a request easier. - Reliability - Recovering from failures is easier because the Server does not need to track/rollback/commit state because all the state is essentially captured within the message. If a Request fails, it can be as simple as resending the Request. - Scalability - Because there is no need to manage state and resources between requests, and because all Requests are isolated,scalability is improved and simplified. - Aligned with web architecture (the internet is designed this way) A disadvantage of this approach is that it decreases network efficiency because the Requests need to contain all the information required for that interaction. The more information, the larger the Request size, and therefore the more bandwidth is used. This will have a negative effect on latency as well. 3. Cache The primary reason for the Cache constraint is to improve network efficiency. As noted above in the Stateless constraint, the size of Requests can decrease network efficiency due to the need for more bandwidth. Through caching, it is possible to reduce and sometimes remove the need for a Client to interact with the Server. In other words it's possible to reduce and/or eliminate the need for Requests. Therefore, the Cache constraint states that a Server must include additional data in the response to indicate to the client whether the Request is cacheable and for how long. A network Client can then decide the appropriate action based on provided cache information in Response. Caching can improve performance. However, it comes with a number of disadvantages that impact the reliability of the system. For example: - Data Integrity - Response data could be inaccurate due to stale or expired data - Complexity - The implementation and use of caching mechanisms is renowned for it's complexity in the Computer Science world 4. Uniform Interface At the core of this constraint is the principle of generality which is closely related to the principle of anticipation. It stems from the fact that it is impossible to build the exact required interface for all network clients of a server service. Therefore, by providing a generic interface, one is able to provide a simplified interface with higher visibility that is able to satisfy the requirements of more clients. A disadvantage of this approach is that because the interface is so general, one is not able to satisfy specific client requirements. In other words, providing a generic interface can lead to a sub-optimal interface for many clients. There are four additional constraints that form part of the Uniform Interface and are listed as follows: Identification of resources A key abstraction of REST is a resource. According to Fielding (Resources and Resource Identifiers), a resource is any information that can be named. Furthermore, I personally like to think of a resource as a "Noun". Noun - a word (other than a pronoun) used to identify any of a class of people, places, or things ( common noun ), or to name a particular one of these ( proper noun ). It is also better to think of a single resource as a collection of resources. For example, if we were to provide an API to allow a Client to submit or retrieve a "rating", one would typically identify the resource as follows: GET /ratings Generally, there should only be a single way to access a resource. But this is more a guideline than a rule. Manipulation of resources through representations This constraint states that the client should hold the representation of a resource that has enough information to create, modify or delete a resource. It's important that the representation of a resource is decoupled from the way the resource is identified. A resource can be represented in multiple formats or representations such as JSON, XML, HTML, PNG etc. A client should be able to specify the desired representation of a resource for any interaction with the server. Therefore, a Client can specify to receive a resource in JSON format, but send the resource as input in XML format. For example: For the retrieval of an Employees resource, we use XML format by specifying a "Accept: application/xml" header. GET /ratings Accept: application/xml <ratings> <rating> <id>7337</id> <userId>98765</userId> <movieId>12345</movieId> <score>6</score> </rating> </ratings> For the creation of an Employees resource, we use JSON format by specifying a "Content-Type: application/json" header POST /ratings Content-Type: application/json { "userId": 98765, "movieId": 12345, "score": 6 } Should a specific format not be supported, it is important for the Server to provide an appropriate response to indicate that a specific format is not supported. For example: - Return a 406 Not Acceptable status code to indicate that the client specified a request with an Accept header format that the Server is unable to fulfill. [See here for more information] - Return a 415 Unsupported Media Type when a response is specified in an unsupported content type. [See here for more information] Self descriptive messages Self descriptive messages enable intermediary communication by allowing intermediary components to transform the content of the message. In other words, the semantics of the message are exposed to the intermediaries. The implication of this constraint is that interactions are stateless, standard methods and media types are used to expose the semantics of message, and responses indicate cacheability. Hypermedia as the engine of application state (HATEOAS) A key concept about HATEOAS is that it implies that a Response sent from a Server should include information that informs the Client on how to interact with the Server. The advantages of HATEOAS are as follows: - Improves discoverability of resources through published set of links (provided with response) - Indicates to Clients what actions can be taken next. In other words, without HATEOAS, a Client only has access to the data but no idea about what actions may be taken with that data 5. Layered System The key principle of this constraint is that the Client cannot make any assumptions that it is communicating directly with the Server. This constraint relates to the Client-Server constraint (discussed above) in such a way that Client and Server are decoupled. Therefore the Client makes no assumptions about any kind of hidden dependencies and this enables us to insert components and entire sub-systems between the Client and the Server. This allows one to add load balancers, DNS, caching servers and security (authentication and authorization) between Client and Server without disrupting the interaction. Layering allows one to evolve and improve ones architecture to improve scalability and reliability ones system. 6. Code On Demand This is an optional constraint. The key concept about this constraint is that when a Client makes a request to a resource on a Server, it will receive the resource as well as the code to execute against that resource. The Client knows nothing about the composition of the code and only needs to know how to execute it. Javascript is an example of where this is done. Richardson Maturity Model The Richardson Maturity Model is a heuristic maturity model that can be used to better understand how mature a service is in terms of the REST architectural style. - Level 0 Services at this level are described as having a single URI, and using a single HTTP verb (usually POST). This is very characteristic of most Web Services (WS-*) in that this services would have a single URI accepting an HTTP POST request having an XML payload. - Level 1 Services at this level are described as having many URIs with a single HTTP verb. The primary difference between Level 0 and Level 1 is that Level 1 services expose multiple logical resources as opposed to a single resource. - Level 2 Services at this level are described as having many URI-addressable resources. Each addressable resource supports both multiple HTTP verbs and HTTP status codes. - Level 3 Services at this level are like Level 2 services that additionally support Hypermedia As The Engine Of Application State (HATEOAS). Therefore, representations of a resource will also contain links to other resources (the actions that can be performed relating to current resource). When thinking about the RMM applies to your API, please refrain from thinking in terms of having a Level 2 or Level 3 REST API. According to this model, an API cannot be called a REST API unless it at least satisfies a Level 3 of the RMM. Therefore, it would be better to think of ones API as an HTTP API that satisfies a Level 1,2, or 3 on the RMM. 2. REST in Practice I have developed a simple Http Api to demonstrate some of the concepts that I discussed in Part 1 of this guide. A REST API guide with and example project written in C# using .Net Core 3.1 I've also started another project that I plan to use to demonstrate various technology concepts like REST API's. A playground for demonstrating concepts such as architecture, design, dotnet core, typescript, react, database and docker 2. Defining A Contract In this example, we are going to define contracts for 3 types of resources: - Users - Movies - Ratings There are 5 important aspects to defining a contract: - Naming a resource - Http methods used to interact with resource - Status codes used to describe the state of an interaction - Content Negotiation - Be consistent 2.1 Naming Guidelines - Resources should have names that are represented by nouns and not actions (behaviors) # Incorrect naming /getUsers /getUserById/{userId} # Correct Naming /users /users/{userId} - Resources should be named using plural form # Incorrect naming /user /movie /rating # Correct naming /users /movies /ratings - Mapping RPC style methods to resources The naming guidelines seem to suit naming resources very well. However, what happens when one needs to name something that is more a behavior than a resource? For example, let's say we want to compute the average rating for a movie. How would we structure our naming? /movies/{movieId}/averageRating I don't think there is 100% consensus on what the correct naming strategy is for a scenario such as this one. However, when faced with defining a contract for something that feels more about behavior than resources, I like to define contracts based on the outcomes of those behaviors. Therefore, for the example above: /averageMovieRatings /averageMovieRatings/{movieId} But what if we try to define a contract for a calculator? This is clearly an example of where defining a contract around a behavior is very difficult and "unnatural" to REST. The reason why it feels unnatural is because REST is an architectural style for describing web architecture. So if you imagined every endpoint as a webpage, then clearly the behaviors for a calculator don't map very well. My suggestion is to use an alternative technology like gRPC if you are building API's that are more about behavior than resources. - Represent hierarchy /users/{userId} /users/{userId}/ratings /users/{userId}/ratings/{ratingId} /movies/{movieId} /movies/{movieId}/ratings /movies/{movieId}/ratings/{ratingId} - Filtering, searching and sorting are not part of naming For filtering: # Incorrect /users/firstName/{firstName} # Correct /users?firstName={firstName} For searching: # Incorrect /users/search/{query} # Correct /users?q={query} For ordering: # Incorrect /users/orderBy/{firstName} # Correct /users?order={firstName} 2.2 Http Methods 2.3 Status Codes In this section, a list of commonly used status codes is provided. Status codes help convey meaning in client/server interactions. They also help achieve consistency in terms of defining a contract. Level 200 - Success - 200 Ok - Request succeeded - 201 Create - Request succeeded and resource created - 204 No Content - Request succeeded and there is no additional content to send in response body Level 300 - Redirection Responses - 301 Moved Permanently - The URL of requested resourced has changed permanently. The new URL is provided in response - 302 Found - Indicates that the URI of requested resource changed, and can therefore use the same URI for future requests - 304 Not Modified - Used for caching. Indicates that the resource has not changed and that the same cached version can be used Level 400 - Client Mistake - 400 Bad Request - The request could not be understood by the server due to malformed syntax. The client should not repeat the request without modifications - 401 Unauthorized - Request failed due to authentication failure - 403 Forbidden - Request failed due to authorization failure - 404 Not Found - The requested resource could not be found - 405 Method Not Allowed - The request method is understood by server but not supported. In other words, the server doesn't have an endpoint supporting requested method. - 406 Not Acceptable - When a request is specified in an unsupported content type using the Accept header - 409 Conflict - Indicates a conflict in terms of requested resource state. For a POST, it could mean that a resource already exists. For a PUT, it could mean that the state of resource changed thereby making current request data stale. - 415 Unsupported Media Type - When a response is specified in an unsupported content type - 422 Unprocessable Entity - Indicates the the request was correct and understood by server, but the data contained within request is invalid. Level 500 - Server Mistake - 500 Internal Server Error - Indicates that something went wrong on the server that prevent the server from fulfilling the request. - 503 Service Unavailable - Indicates that the server is functional but not able to deliver requested resource. This is usually a result of a server being overloaded, server is under maintenance, or a client side issue relating to DNS server (DND server could be unavailable). - 504 Gateway Timeout - Indicates that a proxy server did not receive a timely response from the origin (upstream) server. 2.4 Content Negotiation Implies the type of representation (Media Type) that will be used for request and response. The Media Type is specified in header of request. Two popular Media Type formats that are used with Http Api's are: - application/json - application/xml Typically, I would support at least the two aforementioned formats. For any media type format that is not supported, the Api should return a 406 Not Acceptable status code. Examples: # Send POST request to create a a new user. # The request will use 'application/json' as input, but XML in return (application/xml) POST /users Accept: application/xml Content-Type: application/json { "firstName": "Bob", "lastName": "TheBuilder" } # The response is returned as XML <User> <Id>112233</Id> <FirstName>Bob</FirstName> <LastName>TheBuilder</LastName> </User> 3. Example Project To illustrate some of the topics that have been discussed, I created an example project called Ranker. Ranker is an API that has been designed by using REST as a guide. In terms of the Richardson Maturity Model, I have implemented all endpoints to be at least a Level 2. However, I have implemented some endpoints to a Level 3 (with HATEOAS). Conceptually, Ranker provides the following features: - Interface to manage Users (with HATEOAS) - Interface to manage Movies - Interface to manage Ratings In the following sections I provide more detail about the project and how to get started. Architecture Although the focus of this example project is to illustrate an implementation of REST, I decided to provide a basic architecture to also illustrate a good separation of concerns so that the Api layer (Controllers) are kept very clean. I've chosen a architecture based on the Onion Architecture. Below, I provide 2 different views of what equates to exactly the same architecture. Layered Architecture Onion Architecture - API Primary Responsibility: Provides a distributed interface that gives access to application features This API has been implemented as a number of HTTP services based on REST guidelines. The API itself is based on an MVC (Model, View, and Controllers) architecture. The Controllers are essentially the public facing API contract. - Infrastructure Primary Responsibility: Provide the core of the system an interface to the "world". This layer is all about defining and configuring external dependencies such as: - database access - proxies to other API's - logging - monitoring dependency injection - Application Primary Responsibility: Application logic. This layer is typically where you would find "Application "Services". - Domain Primary Responsibility: Enterprise domain logic. All domain logic relating to domain models and domain services are handled in this layer. API Contract The API has been implemented with the Open Api Specification (OAS). Once you have the API up and running, you can browse to the following Url to get access to the OAS Swagger Document. The Swagger document will look something like below: Pagination For this project, any endpoint returning a collection of items has been implemented with paging. Use the following query parameters to control paging: - page - the page number - limit - the number of items per page Pagination has been implemented in two ways for this example project. - Pagination in Header GET Header: X-Pagination { "CurrentPageNumber": 2, "ItemCount": 9742, "PageSize": 5, "PageCount": 1949, "FirstPageUrl": "", "LastPageUrl": "", "NextPageUrl": "", "PreviousPageUrl": "", "CurrentPageUrl": "" } - Pagination as links (HATEOAS) GET { . . . "links": [ { "href": "", "method": "GET", "rel": "current-page" }, { "href": "", "method": "GET", "rel": "next-page" }, { "href": "", "method": "GET", "rel": "previous-page" }, { "href": "", "method": "GET", "rel": "first-page" }, { "href": "", "method": "GET", "rel": "last-page" } ] } Filtering Where practical, I've tried to provide a filter per resource property. I've implemented filtering using 3 techniques: 1. Basic // filter users by last name and age GET 2. Range For numeric resource (and date) properties, I've implemented range filters as follows: // Possible input for age could be: // age=gt:30 // age=gte:30 // age=eq:30 // age=lt:30 // age=lte:30 GET 3. Multiple (comma separated values) // get a list of movies for the genres animation and sci-fi GET Ordering I've chosen to keep ordering parameters very succinct. Therefore, ordering for a collection of resources may be executed in the following ways: - Order by a single resource property in ascending order // order by last name ascending GET - Order by a single resource property in descending order // order by age descending GET - Order by multiple resource properties using mixed sort orders Notice that we use comma separated values for the order. // order by last-name ascending then by age descending GET Caching I have implemented some basic client side caching behavior. For example: The following endpoints use response caching where the cache expires after 10 seconds. GET GET GET{movieId} GET GET{ratingId} The following endpoint uses caching with an ETag. GET{userId} HATEOAS The following endpoints have been implemented to return links as part of response. // Get links available from root GET [ { "href": "", "method": "GET", "rel": "self" }, { "href": "", "method": "GET", "rel": "movies" }, { "href": "", "method": "POST", "rel": "create-movie" }, { "href": "", "method": "GET", "rel": "ratings" }, { "href": "", "method": "POST", "rel": "create-rating" }, { "href": "", "method": "GET", "rel": "users" }, { "href": "", "method": "POST", "rel": "create-user" } ] // Get as single user, including a list of navigational links GET { "userId": 10, "age": 30, "firstName": "Durham", "lastName": "Franks", "gender": "male", "email": "durhamfranks@kog.com", "links": [ { "href": "", "method": "DELETE", "rel": "delete-user" }, { "href": "", "method": "GET", "rel": "self" }, { "href": "", "method": "GET", "rel": "users" }, { "href": "", "method": "OPTIONS", "rel": "options" }, { "href": "", "method": "PATCH", "rel": "patch-user" }, { "href": "", "method": "POST", "rel": "create-user" }, { "href": "", "method": "PUT", "rel": "update-user" }, { "href": "", "method": "GET", "rel": "ratings" } ] } And for a collection of users (with links), we can use the request below. Please take note of the paging information that is returned as part of response // Get list of users (with links), and paging links GET { "users": [ { "userId": 23, "age": 40, "firstName": "Michele", "lastName": "Jacobs", "gender": "female", "email": "michelejacobs@kineticut.com", "links": [ { "href": "", "method": "DELETE", "rel": "delete-user" }, { "href": "", "method": "GET", "rel": "self" }, { "href": "", "method": "GET", "rel": "users" }, { "href": "", "method": "OPTIONS", "rel": "options" }, { "href": "", "method": "PATCH", "rel": "patch-user" }, { "href": "", "method": "POST", "rel": "create-user" }, { "href": "", "method": "PUT", "rel": "update-user" }, { "href": "", "method": "GET", "rel": "ratings" } ] }, { "userId": 33, "age": 40, "firstName": "Barnett", "lastName": "Griffith", "gender": "male", "email": "barnettgriffith@corpulse.com", "links": [ { "href": "", "method": "DELETE", "rel": "delete-user" }, { "href": "", "method": "GET", "rel": "self" }, { "href": "", "method": "GET", "rel": "users" }, { "href": "", "method": "OPTIONS", "rel": "options" }, { "href": "", "method": "PATCH", "rel": "patch-user" }, { "href": "", "method": "POST", "rel": "create-user" }, { "href": "", "method": "PUT", "rel": "update-user" }, { "href": "", "method": "GET", "rel": "ratings" } ] } ], "links": [ { "href": "", "method": "GET", "rel": "current-page" }, { "href": "", "method": "GET", "rel": "next-page" }, { "href": "", "method": "GET", "rel": "previous-page" }, { "href": "", "method": "GET", "rel": "first-page" }, { "href": "", "method": "GET", "rel": "last-page" } ] } 4. Technology Used OS I have developed and tested Ranker on the following Operating Systems. Ubuntu is an open source software operating system that runs from the desktop, to the cloud, to all your internet connected things. - Windows 10 Professional In addition to developing Ranker on Windows 10, I have also tried and tested Ranker using Windows Subsystem For Linux. Specifically, I have used [WSL-Ubuntu]. See more about WSL below. Windows Subsystem For Linux The Windows Subsystem for Linux lets developers run a GNU/Linux environment -- including most command-line tools, utilities, and applications -- directly on Windows, unmodified, without the overhead of a virtual machine. Windows Subsystem For Linux 2 NOTE: I have not tested Ranker on WSL2 yet. I mention it here because I want to be clear that I've only tested on WSL. WSL 2 is a new version of the architecture in WSL that changes how Linux distros interact with Windows. WSL 2 has the primary goals of increasing file system performance and adding full system call compatibility. Each Linux distro can run as a WSL 1, or a WSL 2 distro and can be switched between at any time. WSL 2 is a major overhaul of the underlying architecture and uses virtualization technology and a Linux kernel to enable its new features. Code Visual Studio Code is a source code editor developed by Microsoft for Windows, Linux and macOS. It includes support for debugging, embedded Git control, syntax highlighting, intelligent code completion, snippets, and code refactoring. A fully-featured, extensible, FREE IDE for creating modern applications for Android, iOS, Windows, as well as web applications and cloud services. Database - Kept things simple and only used an in-memory database 5. Getting Started Before getting started, the following frameworks must be installed on your machine: - Dotnet Core 3.1 Get The Code Clone 'ranker' repository from GitHub # using https git clone # or using ssh git clone git@github.com:drminnaar/ranker.git Build The Code # change to project root cd ./ranker # build solution dotnet build Running the API Run the API from the command line as follows: # change to project root cd ./ranker/Ranker.Api # To run 'Ranker Api' () dotnet watch run Open Postman Collection I have provided a postman collection for the Ranker API. Please find the Postman collection _'Ranker.postman_collection'_at the root of the solution. Discussion Hello All, I need to discuss regarding RESTful API for accessing oracle database tables. API was created using Oracle Rest Data Services (rest data enable at SQL developer). I am getting error : 401 unauthorized while doing query through post man as below: GET localhost:8080/ords/autodemo2/meta... However below is working fine (code 200 OK): localhost:8080/ords/autodemo2/open... Not sure why metadata query is facing authorization issue while swagger(open API) one is giving response without issue at postman. I tried for other schema alias autodemo and here both are giving proper response: localhost:8080/ords/autodemo/metad... localhost:8080/ords/autodemo/open-... Thanks, Rajneesh Please fix your HTTP reference chart. The /{id} needs to be removed from the POST. Sorted! Thanks Great post! Perhaps you should mention 202 HTTP status code for async resource creation.
https://dev.to/drminnaar/rest-api-guide-14n2
CC-MAIN-2020-50
refinedweb
4,335
50.67
set detach-on-fork command Specifies whether GDB should debug both parent and child process after a call to fork() or vfork() Syntax set detach-on-fork off show detach-on-fork Modes - on - In this mode GDB will continue being attached to either parent or child process (depending on the set follow-fork-mode command. - off - In this mode GDB will be attached to both processes after a call to fork() or vfork(). Use the info inferiors command to show details and the inferior command to switch between them. Default mode The default value for the detach-on-fork setting is 'on'. Remarks Use the set follow-fork-mode command to control which process will be selected after gdb continues from a fork() or vfork() call. Examples In this example we will debug the following C++ program: #include <unistd.h> #include <stdio.h> void func(int pid, int ret) { printf("My PID is %d, fork() returned %d\n", pid, ret); if (ret) printf("We are in the parent process\n"); else printf("We are in the child process\n"); } int main() { int r = fork(); func(getpid(), r); return 0; } If we debug the program with the default setting for detach-on-fork, only one of the two processes keep on being debugged (and will trigger breakpoints). See the set follow-fork-mode description for more details. If we set detach-on-fork to on, GDB will not detach from the child process and we will be able to switch to it using the inferior command: Temporary breakpoint 1 at 0x804848f: file forktest.cpp, line 17. Starting program: /home/testuser/forktest Temporary breakpoint 1, main () at forktest.cpp:17 17 int r = fork(); (gdb) show follow-fork-mode Debugger response to a program call of fork or vfork is "parent". (gdb) set detach-on-fork off (gdb) break func Breakpoint 2 at 0x804844a: file forktest.cpp, line 7. (gdb) continue Continuing. [New process 8133] Breakpoint 2, func (pid=8125, ret=8133) at forktest.cpp:7 7 printf("My PID is %d, fork() returned %dpid, ret); (gdb) continue Continuing. My PID is 8125, fork() returned 8133 We are in the parent process [Inferior 1 (process 8125) exited normally] (gdb) info inferiors Num Description Executable 2 process 8133 /home/testuser/forktest * 1 <null> /home/testuser/forktest (gdb) inferior 2 [Switching to inferior 2 [process 8133] (/home/testuser/forktest)] [Switching to thread 2 (process 8133)] #0 0xb7fdd424 in ?? () (gdb) bt #0 0xb7fdd424 in ?? () #1 0x08048494 in main () at forktest.cpp:17 (gdb) continue Continuing. Breakpoint 2, func (pid=8133, ret=0) at forktest.cpp:7 7 printf("My PID is %d, fork() returned %dpid, ret); (gdb) continue Continuing. My PID is 8133, fork() returned 0 We are in the child process [Inferior 2 (process 8133) exited normally] As expected, GDB continued debugging the parent process. The child process remained suspended until we switched to it using the inferior command and resumed it using the continue command. Now we will configure GDB to switch to the child process: (gdb) set follow-fork-mode child (gdb) break func Breakpoint 1 at 0x804844a: file forktest.cpp, line 7. (gdb) run Starting program: /home/testuser/forktest [New process 8080] [Switching to process 8080] Breakpoint 1, func (pid=8080, ret=0) at forktest.cpp:7 7 printf("My PID is %d, fork() returned %dpid, ret); (gdb) continue Continuing. My PID is 8080, fork() returned 0 We are in the child process[Inferior 2 (process 8080) exited normally] (gdb) info inferiors Num Description Executable * 2 <null> /home/testuser/forktest 1 process 8077 /home/testuser/forktest (gdb) inferior 1 [Switching to inferior 1 [process 8077] (/home/testuser/forktest)] [Switching to thread 1 (process 8077)] #0 0xb7fdd424 in __kernel_vsyscall () (gdb) bt #0 0xb7fdd424 in __kernel_vsyscall () #1 0xb7ed4f7c in __libc_fork () at ../nptl/sysdeps/unix/sysv/linux/i386/../fork.c:131 #2 0x08048494 in main () at forktest.cpp:17 (gdb) continue Continuing. Breakpoint 1, func (pid=8077, ret=8080) at forktest.cpp:7 7 printf("My PID is %d, fork() returned %dpid, ret); (gdb) continue Continuing. My PID is 8077, fork() returned 8080 We are in the parent process[Inferior 1 (process 8077) exited normally] (gdb) Now the parent process was suspended until we switched back to it and resumed it using the continue command. Compatibility with VisualGDB You can run the set detach-on-fork command under VisualGDB using the GDB Session window:
https://visualgdb.com/gdbreference/commands/set_detach-on-fork
CC-MAIN-2019-22
refinedweb
731
53.92
: thomasvs Date: Mon Apr 08 2002 15:47:12 PDT Log message: registry ideas doc and an idea for guadec-4 presentation guadec rocks ! Added files: docs/random/thomasvs: guadec-4 registry Links: ====Begin Diffs==== --- NEW FILE: guadec-4 --- Presentation ideas for GUADEC 4 (thomasvs, April 8 2002) * use gst-editor to create pipelines that make a karaoke machine in different steps and using different features 1) pipeline 1: play the free software song by Richard Stallman 2) pipeline 2: do the same but add a visualization plugin 3) create a small video using actual RMS footage 4) pipeline 3: play this video and the song together 5) Stallman is a bit hard to understand. We want text. pipeline 4: use the subtitle reader to overlay text maybe also do a bouncing ball overlay ! 6) Stallman can't sing. Let's pitch-shift him. this will need MIDI or dynparams to control a pitch shifter 7) It sounds better, but still not quite there. Replace him with a festival voice doing the pitch shifting. --- NEW FILE: registry --- Reviewing the registry (thomasvs, April 8 2002) * added a --gst-registry flag to the core which allows any gst app to specify a different registry for loading/saving some stuff to do this went into gstreamer/gst/gstregistry.h * What location is used for writing ? (gst-register) - if specified (using --gst-registry) then use the specified location - if not specified : - if GST_CONFIG_DIR is writable as the current user, do it there (which should be sysconfdir/gstreamer) and reg.xml - if not writable, then try ~/.gstreamer/reg.xml * What location is used for reading ? (gst-whatever) - if specified (using --gst-registry) then use the specified location - if not specified : - try reading GST_CONFIG_DIR/reg.xml first - TODO: then try reading ~/.gstreamer/reg.xml AND replace every namespace collision with the new one * actual variables stuff (gstregistry.c) - use gst_registry_write_get to get a GstRegistryWrite struct back listing the right location of dir, file and tmp file - use gst_registry_read_get to get a GstRegistryRead struct back listing the path of global and local file to read * QUESTIONS - maybe it's better to try the global registry first (if unspecified), and see if you have write permissions ? Because if you do, you might as well do it there - the system gave you the permission. useful for doing garnome installs as a user CVS Root: /cvsroot/gstreamer Module: gstreamer Changes by: thomasvs Date: Thu Apr 11 2002 13:43:46 PDT Log message: update to new ideas Modified files: docs/random/thomasvs: registry Links: ====Begin Diffs==== Index: registry =================================================================== RCS file: /cvsroot/gstreamer/gstreamer/docs/random/thomasvs/registry,v retrieving revision 1.1 retrieving revision 1.2 diff -u -d -r1.1 -r1.2 --- registry 8 Apr 2002 22:46:59 -0000 1.1 +++ registry 11 Apr 2002 20:43:34 -0000 1.2 @@ -9,7 +9,7 @@ - if specified (using --gst-registry) then use the specified location - if not specified : - - if GST_CONFIG_DIR is writable as the current user, do it there + - it can be written in the global location, do it there (which should be sysconfdir/gstreamer) and reg.xml - if not writable, then try ~/.gstreamer/reg.xml @@ -17,8 +17,12 @@ - if specified (using --gst-registry) then use the specified location - if not specified : - - try reading GST_CONFIG_DIR/reg.xml first - - TODO: then try reading ~/.gstreamer/reg.xml + - right now : + if local exists, only read local + if not, read global + + - TODO: try reading GST_CONFIG_DIR/reg.xml first + then try reading ~/.gstreamer/reg.xml AND replace every namespace collision with the new one * actual variables stuff (gstregistry.c) CVS Root: /cvsroot/gstreamer Module: gstreamer Changes by: thomasvs Date: Wed Apr 17 2002 05:29:38 PDT Log message: remarks for doc review Added files: docs/random/thomasvs: docreview Links: ====Begin Diffs==== --- NEW FILE: docreview --- Documentation review * gstbuffer - What are the flags in GstBuffer ? used anywhere ? defined how ? * General - how can we define common terms and make them cros-ref'd ? e.g. timestamps in buffer, do we say everywhere that they're in nanosec ? * Style - when in doubt, try to conform to GTK+ reference docs - in the arg clarification, use as much cross-reffing as possible. Do it only where it is useful in the explanation text. - examples - use active form instead of imperative describing functions; we describe what the function does. good : creates a new buffer bad : create new buffer - use singular for enum names; this makes it more natural to reference to it in the API docs good : GstBufferFlag bad : GstBufferFlags CVS Root: /cvsroot/gstreamer Module: gstreamer Changes by: thomasvs Date: Mon Jul 15 2002 16:12:14 PDT Log message: start of list of example features (and matching plug-ins later on) Added files: docs/random/thomasvs: features Links: ====Begin Diffs==== --- NEW FILE: features --- Here's a list of features in GStreamer which plug-ins can make use of. For each feature we will try to find a few plug-ins that show a good way of implementing them. seeking caps negotiation timestamps clock interaction signals object argument handling chain-based loop-based request pads sometimes pads CVS Root: /cvsroot/gstreamer Module: gstreamer Changes by: thomasvs Date: Fri Aug 30 2002 09:02:57 PDT Log message: some hint updates Modified files: docs/random/thomasvs: docreview Links: ====Begin Diffs==== Index: docreview =================================================================== RCS file: /cvsroot/gstreamer/gstreamer/docs/random/thomasvs/docreview,v retrieving revision 1.1 retrieving revision 1.2 diff -u -d -r1.1 -r1.2 --- docreview 17 Apr 2002 12:29:25 -0000 1.1 +++ docreview 30 Aug 2002 16:02:45 -0000 1.2 @@ -11,6 +11,9 @@ * Style - when in doubt, try to conform to GTK+ reference docs + (in the gtk-doc tarball, doc/style-guide.txt) +- GtkMisc and GtkFontSelectionDialog are example templates. + - in the arg clarification, use as much cross-reffing as possible. Do it only where it is useful in the explanation text. @@ -18,9 +21,17 @@ - use active form instead of imperative describing functions; we describe what the function does. - good : creates a new buffer + good : creates a new buffer. bad : create new buffer - use singular for enum names; this makes it more natural to reference to it in the API docs good : GstBufferFlag bad : GstBufferFlags + - in arg clarification, use a period and start with a small letter. + Call the object you work on "a" instead of "the". Call the other objects + "the". + If the object in question is the return value, this means you call the + return value "a". If the object in question is the first argument + of the call, you call this argument "a" and the rest "the". + good : @buf: a pointer to query. + bad : @buf: The pointer to query==== --- NEW FILE: pwg --- Stuff for the PWG ----------------- * arguments - how to add arguments - create an identifier in the enum, starting with ARG_ example: ARG_RATE - add the property by adding a g_object_class_install_property line FIXME: what is name/nick/blurb in the paramspec ? - if the argument is readable, a block of code for it needs to be added to the _get_property function. - default value - default value should be set in _init function - default value can be specified in paramspec (but I don't think this is used anywhere) - things to check/possible problems - do you have a _get_property function ? - do you have a _set_property function ? - do both have a default handler that handles invalid property ID's ? - are the _get/_set_property handlers assigned to the class's struct ? - CVS Root: /cvsroot/gstreamer Module: gstreamer Changes by: thomasvs Date: Thu Sep 12 2002 08:23:06 PDT Log message: some docs on pthread (to be finished) Added files: docs/random/thomasvs: pthread Links: ====Begin Diffs==== --- NEW FILE: pthread --- Some notes on use of pthreads in GStreamer ------------------------------------------ First off, pthreads are HARD. Remember that. 1) How I learned to debug glibc and pthreads and add debug code to it. You have to trick your GStreamer test app in running against a modified glibc. I used Red Hat 7.3, downloaded the .src.rpm, installed it, applied the patches included, and ran ./configure --prefix=/home/thomas/cvs --with-add-ons make make install After quite some time this left me with recompiled libc and libpthread libraries in /home/thomas/cvs/lib, as well as a new ld-linux.so.2 Now you need to use this new ld-linux.so ld loader to run your app, preferably from inside of gdb so you can tell what's going on when it crashes. You can use ld-linux.so.2 to call your binaries: ld-linux.so.2 .libs/thread1 to run the thread1 program with the new glibc. If this is a GStreamer app, chances are it might not find some libraries it needs that you could safely use from /usr/lib (like, zlib and popt). Also, you want it to run in gdb, so this is my full line: LD_LIBRARY_PATH=/usr/lib /home/thomas/cvs/lib/ld-linux.so.2 \ /usr/bin/gdb .libs/thread1 At this point you can start adding debug code to the pthreads implementation in your glibc source tree. Just change, re-run make install, and restart the test app in gdb. Helpful --gst-mask is 0x00200100 to get thread info and scheduling info (with mem alloc from cothreads) 2) What GStreamer does with pthreads. Apps create a thread with gst_thread_new. This just allocates the GstThread structure without actually doing much with it. When a thread goes from NULL to READY, the gst_thread_change_state function creates the actual pthread. - we lock the thread->lock mutex - we create attributes for the pthread - by default the pthread is JOINABLE - we ask the thread's scheduler for a preferred stack size and location (FIXME: if the scheduler doesn't return one, what do we do ?) - we create the pthread with the given attributes - the pthread id is stored in thread->thread_id - the created pthread starts executing gst_thread_main_loop (thread) - the change_state function does a g_cond_wait - this means it unlocks the mutex, waits until thread->cond is set (which happens in gst_thread_main_loop), then relocks the mutex and resumes execution From the point of view of the created pthread, here's what happens. gst_thread_main_loop (thread) gets called - the thread's mutex gets locked - the thread's scheduler's policy gets examined - the scheduler gets set up (invokes the scheduler object's setup method) FIXME: what are the prereqs of this _setup method ? - basic and fast scheduler both call do_cothread_context_init - basic: this calls cothread_context_init - cothread_context_init - fast: this calls cothread_create (NULL, 0, NULL, NULL)) (FINISHME) (FOLDMEBACKTOREALDOCS) CVS Root: /cvsroot/gstreamer Module: gstreamer Changes by: thomasvs Date: Sun Oct 27 2002 18:22:13 PST Log message: some ideas on how to do metadata in gst Added files: docs/random/thomasvs: metadata Links: ====Begin Diffs==== --- NEW FILE: metadata --- I'll use this doc to describe how I think metadata should work from the perspective of the application developer and end user, and from that extrapolate what we need to provide that. RATIONALE --------- One of the strong points of GStreamer is that it abstracts library dependencies away. A user is free to choose whatever plug-ins he has, and a developer can code to the general API that GStreamer provides without having to deal with the underlying codecs. It is important that GStreamer also handles metadata well and efficiently, since more often than not the same libraries are needed to do this. So to avoid applications depending on these libs just to do the metadata, we should make sure GStreamer provides a reasonable and fast abstraction for this as well. GOALS ----- - quickly read and write content metadata - quickly read stream metadata - cache both kinds of data transparently - (possibly) provide bins that do this - provide a simple API to do this DEFINITION OF TERMS ------------------- The user or developer using GStreamer is interested in all information that describes the stream. The library handles these two types differently however, so I will use the following terms to describe this : - content metadata every kind of information that is tied to the "concept" of the stream, and not tied to the actual encoding or representation of the stream. - it can be altered without transcoding the stream - it would stay the same for different encodings of the file - describes properties of the information encoded into the stream - examples: - artist, title, author - year, track order, album - stream metadata every kind of information that is tied to the "codec" or "representation" used. - cannot be altered without transcoding - is the set of parameters the stream has been encoded with - describes properties of the stream itself - examples: - samplerate, bit depth/width, channels - bitrate, encoding mode (e.g. joint stereo) - video size, frames per second, colorspace used - length in time EXAMPLE PIPELINES ----------------- reading content metadata : filesrc ! id3v1 - would read metadata from file - id3v1 immediately causes filesrc to seek until it has found - the (first) metadata - that there is no metadata present resetting and writing content metadata : filesrc ! id3v1 reset=true artist="Arid" ! filesink - effect: clear the current tag and reset it to only have Arid as artist - id3v1 seeks to the right location, clears the tag, and writes the new one - filesrc might not be necessary here - this probably only works when doing an in-place edit COST ---- Querying metadata can be expensive. Any application querying for metadata should take this into account and make sure that it doesn't block the app unnecessarily while the querying happens. In most cases, querying content data should be fast since it doesn't involve decoding Technical data could be harder and thus might be better done only when needed. CACHE ----- Getting metadata can be an expensive operation. It makes sense to cache the metadata queried on-disk to provide rapid access to this data. It is important however that this is done transparently - the system should be able to keep working without it, or keep working when you delete this cache. The API would provide a function like gst_metadata_content_read_cached (location) or even gst_metadata_read_cached (location, GST_METADATA_CONTENT, GST_METADATA_READ_CACHED) to try and get the cached metadata. - check if the file is cached in the metadata cache - if no, then read the metadata and store it in the cache - if yes, then check the file against it's timestamp (or (part of) md5sum ?) - if it was changed, force a new read and store it in the cache - if it wasn't changed, just return the cached metadata For optimizations, it might also make sense to do GList * gst_metadata_read_many (GList *locations, ...) which would allow the back-end to implement this more efficiently. Suppose an application loads a playlist, for example, then this playlist could be handed to this function, and a GList of metadata types could be returned. Possible implementations : - one large XML file : would end up being too heavy - one XML file per dir on system : good compromise; would still make sense to keep this in memory instead of reading and writing it all the time Also, odds are good that users mostly use files from same dir in one app (but not necessarily) Possible extra niceties : - matching of moved files, and a similar move of metadata (through user-space tool ?) !!! For speed reasons, it might make sense to somehow keep the cache in memory instead of reparsing the same cache file each time. !!! For disk space reasons, it might make sense to have a system cache. Not sure if the complexity added is worth it though. !!! For disk space reasons, we might want to add an upper limit on the size of the cache. For that we might need a timestamp on last retrieval of metadata, so that we can drop the old ones. The cache should use standard glibc. FIXME: is it worth it to use gnome-vfs for this ? STANDARDIZATION OF METADATA --------------------------- Different file formats have different "tags". It is not always possible to map metadata to tags. Some level of agreement on metadata names is also required. For technical metadata, the names or properties should be fairly standard. We also use the same names as used for properties and capabilities in GStreamer. This means we use - encoded audio - "bitrate" (which is bits per second - use the most correct one, ie. average bitrate for VBR for example) - raw audio - "samplerate" - sampling frequency - "channels" - "bitwidth" - how wide is the audio in bits - encoded video - "bitrate" - raw video (FIXME: I don't know enough about video, are these correct) - "width" - "height" - "colordepth" - "colorspace" - "fps" - "aspectratio" We must find a way to avoid collision. A system stream can contain both audio and video (-> bitrate) or multiple audio or video streams. One way to do this might be to make a metadata set for a stream a GList of metadata for elementary streams. For content metadata, the standards are less clear. Some nice ones to standardize on might be - artist - title - author - year - genre (touchy though) - RMS, inpoint, outpoint (calculated through some formula, used for mixing) TESTING ------- It is important to write a decent testsuite for this and do speed comparisons between the library used and the GStreamer implementation. API --- struct GstMetadata { gchar *location; GstMetadataType type; GList *streams; GHashtable *values; }; (streams would be a GList of (again) GstMetadata's. "location" would then be reused to indicate an identifier in the stream. FIXME: is that evil ?) GstMetadataType - technical, content GstMetadataReadType - cached, raw GstMetadata * gst_metadata_read (const char *location, GstMetadataType type, GstMetadataReadType read_type); GstMetadata * gst_metadata_read_props (const char *location, GList *names, GstMetadataType type, GstMetadataReadType read_type); GstMetadata * gst_metadata_read_cached (const char *location, GstMetadataType type, GstMetadataReadType read_type); GstMetadata * gst_metadata_read_props_cached (...) GList * gst_metadata_read_cached_many (GList *locations, GstMetadataType type, GstMetadataReadType read_type); GList * gst_metadata_read_props_cached_many (GList *locations, GList *names, GstMetadataType type, GstMetadataReadType read_type); GList * gst_metadata_content_write (const char *location, GstMetadata *metadata); SOME USEFUL RESOURCES --------------------- - describes multimedia data for images distinction between content (descriptive), technical and administrative metadata CVS Root: /cvs/gstreamer Module: gstreamer Changes by: thomasvs Date: Wed Mar 10 2004 05:45:50 PST Log message: packaging guidelines Added files: docs/random/thomasvs: packaging Links: ====Begin Diffs==== --- NEW FILE: packaging --- Packaging guidelines for GStreamer ---------------------------------- Here are some guidelines for people trying to package GStreamer. VERSIONS -------- First of all, there are two concepts of version in GStreamer. The first is the source package version; it is either of the form x.y.z or x.y.z.n x is the major number, y is the minor number, z is the micro number, and n (if it exists) is the nano number. In the first case, it is an official release of GStreamer. In the second case, it is a cvs version tarball if n == 1, and a prerelease for the next version if n > 1. Source releases where y is even are considered "stable", and source releases where y is uneven are considered "unstable" or "development". This is similar to a lot of projects, including GLib and the kernel. The second version is an "interface" version, used in versioning tools, the library name, packages, GConf install paths, registry locations, and so on. It is of the form x.y Commonly, it is refered to as the "major/minor" number of GStreamer. In most cases it is the same as the one used in the source version; only when we are doing release candidates for a new major/minor source version do we manually force the major/minor to be the same as the one for the next new version. This is done to shake out bugs that can arise due to this change before we do an actual x.y.0 release. PARALLEL INSTALLABILITY ----------------------- Versions of GStreamer with a different "interface" or major/minor version are supposed to be parallel-installable. If they're not then it's considered to be a bug. There are parallel-installable versions from the 0.6 set and onwards. In practice, this means that - libraries contain the major/minor version in their name - plugins are installed in a major/minor-versioned directory - include headers are installed in separate directories - registry is saved in major/minor-versioned locations - major/minor-versioned tools are installed, together with versioned man pages - non-versioned front-end tools are also installed, that call the versioned ones, and only depend on glib and popt. So, all parts of GStreamer are parallel-installable, EXCEPT for the non-versioned tools and man pages. However, only one of these sets needs to be present, and preferably the latest source version of them. PACKAGING --------- To make packages of different major/minor versions parallel installable, the important part is to separate the package of the nonversioned tools and man pages, and make them usable for all the GStreamer library packages. We recommend putting the versioned binaries and man pages in the same package as the base GStreamer library. The base GStreamer library should require a version of the non-versioned tools, so that users can expect the non-versioned tools to be present in all cases, and our documentation agrees with the install. As for package names, we recommend the following system: - "gstreamer" as the base name should be used for the latest stable version of GStreamer. - "gstreamerxy" should be used for all other versions (older stable version, as well as current development version) As an example: - 0.7 is current development version, and 0.6 is latest stable version: "gstreamer" for 0.6 and "gstreamer07" for 0.7 - 0.8.0 gets released: "gstreamer06" for 0.6, "gstreamer07" is kept for 0.7, and "gstreamer" for 0.8, where: - the 0.8 "gstreamer" package can now obsolete the 0.7 package - the 0.6 "gstreamer06" package can obsolete previous "gstreamer" packages with lower version/release This ensures that users who just want the latest stable version of GStreamer can just install the "gstreamer"-named set of packages, while automatic tools can still upgrade cleanly, maintaining compatibility for applications requiring older versions. This base named should be used for all GStreamer packages; for example gstreamer07-plugins is a package of plugins to work with the gstreamer07 base library package. SPLITTING OF PLUGIN PACKAGES ---------------------------- Since GStreamer can depend on, but isn't forced to depend on, more than 40 additional libraries, choosing how to package these is a challenge compared to other projects. Three approaches have been used in the past. One was "one package per dependency library", so that users could choose exactly what functionality they want installed. A second one was "split according to functionality". This is more arbitrary. A third one, used by some distributors, is "put everything we want to ship in one big package". Packagers seem to agree that the first approach is too hard and users do not care this much about fine-grained control. We decided on a mix of 2) and 3); preferring to follow the base distribution's decision for the base -plugins package, then creating additional packages based on functionality. Plugins are put in -audio, -video, -dvd and -alsa packages. Also, some plugins are put in -extras- packages because they are distributed from a different location, are not as well maintained, have other issues, ... In the case of Fedora, for example, mp3 plugins are shipped in -extras-audio, and distributed on FreshRPMS or rpm.livna.org For Mandrake, for example, they would be shipped from PLF. Now, to make sure other packages can require functionality they need, virtual provides are added for plugin packages, combining the base gstreamer name with the name of the actual GStreamer plugin. Assuming 0.7, and the mad plugin, the package "gstreamer07-plugins-extra-audio" would virtual-provide "gstreamer07-mad". It would contain the file libgstmad.so in the correct directory. PACKAGING TIPS FOR RPMS * use a define for the base package name for all GStreamer spec files: %define gstreamer gstreamer07 * use a define for the major/minor version of the package: %define majorminor 0.7 This ensures you can easily migrate your spec files when a new major/minor version is released. * always use the correctly versioned gst-register-x.y tool in post scripts that install plugins. It helps to create a define for this as well: %define register %{_bindir}/gst-register-%{majorminor} > /dev/null 2>&1 * make each package that installs plugins have (pre) and (post) requires on the versioned register binary * make each package that installs plugins have a requires on the corresponding base plugins package * make sure that the nonversioned tools and man pages are put in a package that is named "gstreamer-tools" no matter what the major/minor version is. This way, the latest version of this package can be used for all previous major/minor packages. If you do not want this package twice, with different versions, then write your spec so that you don't package the tools for previous versions, and only for the latest version. * applications that require specific plugins to function should require the correct -plugins package, as well as any additional virtual provides they need to pull in. * stable releases can obsolete: the previous development releases to ensure they get removed when installing the new stable version. CVS Root: /cvs/gstreamer Module: gstreamer Changes by: thomasvs Date: Mon Jun 14 2004 08:21:32 PDT Log message: notes on capturing Added files: docs/random/thomasvs: capturing Links: ====Begin Diffs==== --- NEW FILE: capturing --- ELEMENTS (v4lsrc, alsasrc, osssrc) -------- - capturing elements should not do fps/sample rate correction themselves they should timestamp buffers according to "a clock", period. - if the element is the clock provider: - timestamp buffers based on the internals of the clock it's providing, without calling the exposed clock functions - do this by getting a measure of elapsed time based on the internal clock that is being wrapped. Ie., count the number of samples the *device* has processed/dropped/... If there are no underruns, the produced buffers are a contiguous data stream. - possibilities: - the device has a method to query for the absolute time related to a buffer you're about to capture or just have captured: Use that time as the timestamp on the capture buffer (it's important that this time is related to the capture buffer; ie. it's a time that "stands still" if you're not capturing) - since you're providing the clocking, but don't have the previous method, you should open the device with a given rate and continuously read samples from it, even in PAUSED. This allows you to update an internal clock. You use this internal clock as well to timestamp the buffers going out, so you again form a contiguous set of buffers. The only acceptable way to continuously read samples then is in a private thread. - as long as no underruns happen, the flow being output is a perfect stream: the flow is data-contiguous and time-contiguous. - if the element is not the clock provider - the element should always respect the clock it is given. - the element should timestamp outgoing buffers based on time given by the provided clock, by querying for the time on that clock, and comparing to the base time. - the element should NOT drop/add frames. Rather, it should just - timestamp the buffers with the current time according to the provided clock - set the duration according to the *theoretical/nominal* framerate - when underruns happen (the device has lost capture data because our element is not handling them quickly enough), this should be detectable by the element through the device. On underrun, the offset of your next buffer will not match the end_offset of your previous one (ie, the data flow is no longer contiguous). If the exact number of samples dropped is detectable, this is the difference between new offset and old offset_end. If it's not detectable, it should be guessed based on the elapsed time between now and the last capture. - a second element can be responsible for making the stream time-contiguous. (ie, T1 + D1 = T2 for all buffers). This way they are made acceptible for gapless presentation (which is useful for audio). - The element treats the incoming stream as data-contiguous but not necessarily time-contiguous. - If the timestamps are contiguous as well, then everything is fine and nothing needs to be done. This is the case where a file is being read from disk, or capturing was done by an element that provided the clock. - If they are not contiguous, then this element must make them so. Since it should respect the nominal framerate, it has to stretch or shorten the incoming data to match the timestamps set on the data. For audio and video, this means it could interpolate or add/drop samples. For audio, resampling/interpolation is preferred. For video, a simple mechanism that chooses the frame with a timestamp as close as possible to the theoretical timestamp could be used. - When it receives a new buffer that is not data-contiguous with the previous one, the capture element dropped samples/frames. The adjuster can correct this by sending out as much "no-signal" data (for audio, e.g. silence or background noise; for video, sending out black frames) as it wants, since a data discontinuity is unrepairable. So it can use these to catch up more aggressively. It should just make sure that the next buffer it gets again goes back to respecting the nominal framerate. - To achieve the best possible long-time capture, the following can be done: - audiosrc captures audio and provides the clock. It does contiguous timestamping by default. - videosrc captures video timestamped with the audiosrc's clock. This data feed doesn't match the nominal framerate. If there is an encoding format that supports storing the actual timestamps instead of pretending the data flow respects the nominal framerate, this can be corrected after recording. - at the end of recording, the absolute length in time of both streams, measured against a common clock, is the same or can be made the same by chopping off data. - the nominal rate of both audio and video is also known. - given the length and the nominal rate, we have an evenly spaced list of theoretical sampling points. - video frames can now be matched to these theoretical sampling points by interpolating or reusing/dropping frames. It can choose the best possible algorithm for this to decrease the visible effects (interpolating results in blur, add/drop frames results in jerkiness). - with the video resampled at the theoretical framerate, and the audio already correct, the recording can now be muxed correctly into a format that implicitly assumes a data rate matching the nominal framerate. - One possibility is to use the GDP to store the recording, because that retains all of the timestamping information. - The process is symmetrical; if you want to use the clock provided by the video capturer, you can stretch/shrink the audio at the end of recording to match. TERMINOLOGY ----------- - nominal rate the framerate/samplerate exposed in the caps; ie. the theoretical framerate of the data flow. This is the fps reported by the device or set for the encoder, or the sampling rate of the audio device. - contiguous data flow offset_end of old buffer matches offset of new buffer for audio, this is a more important requirement, since you configure output devices for a contiguous data flow. - contiguous time flow T1 + D1 = T2 for video, this is a more important requirement, because the sampling period is bigger, so it is more important to match the presentation time - "perfect stream" data and time are contiguous and match the nominal rate videotestsrc, sinesrc, filesrc ! decoder produce this NETWORK ------- - elements can be synchronized by writing a NTP clock subclass that listens to an ntp server, and tries to match its own clock against the NTP server by doing gradual rate adjustment, compared with the own system clock. CVS Root: /cvs/gstreamer Module: gstreamer Changes by: thomasvs Date: Thu Jun 17 2004 04:00:32 PDT Log message: more notes, getting there Modified files: docs/random/thomasvs: capturing Links: ====Begin Diffs==== Index: capturing =================================================================== RCS file: /cvs/gstreamer/gstreamer/docs/random/thomasvs/capturing,v retrieving revision 1.1 retrieving revision 1.2 diff -u -d -r1.1 -r1.2 --- capturing 14 Jun 2004 15:21:19 -0000 1.1 +++ capturing 17 Jun 2004 11:00:20 -0000 1.2 @@ -27,6 +27,17 @@ thread. - as long as no underruns happen, the flow being output is a perfect stream: the flow is data-contiguous and time-contiguous. + - underruns should be handled like this: + - if the code can detect how many samples it dropped, it should just + send the next buffer with the new correct offset. Ie, it produced + a data gap, and since it provides the clock, it produces a perfect + data gap (the timestamp will be correctly updated too). + - if it cannot detect how many samples it dropped, there's a fallback + algorithm. The element uses another GstClock (for example, system clock) + on which it corrects the skew and drift continuously as long as it + doesn't drop. When it detected a drop, it can get the time delta + on the other GstClock since the last time it captured and the current + time, and use that delta to guesstimate the number of samples dropped. - if the element is not the clock provider - the element should always respect the clock it is given. @@ -122,3 +133,101 @@ - elements can be synchronized by writing a NTP clock subclass that listens to an ntp server, and tries to match its own clock against the NTP server by doing gradual rate adjustment, compared with the own system clock. +- sending audio and video over the network using tcpserversink is possible + when the streams are made to be perfect streams and synchronized. + Since the streams are perfect and synchronized, the timestamps transmitted + along with the buffers can be trusted. The client just has to make + sure that it respects the timestamps. +- One good way of doing that is to make an element that provides a clock + based on the timestamps of the data stream, interpolating using another + GstClock inbetween those time points. This allows you to create + a perfect network stream player (one that doesn't lag (increasing buffers)) + or play too fast (having an empty network queue). +- On the client side, a GStreamer-ish way to do that is to cut the playback + pipeline in half, and have a decoupled element that converts + timestamps/durations (by resampling/interpolating/...) so that the sinks + consume data at the same rate the tcp sources provide it. + tcpclientsrc ! theoradec ! clocker name=clocker { clocker. ! xvimagesink } + +SYNCHRONISATION +--------------- +- low rate source with high rate source: + the high rate source can drop samples so it starts with the same phase + as the low rate source. This could be done in a synchronizer element. + example: + - audio, 8000 Hz, and video, 5 fps + - pipeline goes to playing + - video src does capture and receives its first frame 50 ms after playing + -> phase is -90 or 270 degrees + - to compensate, the equivalent of 150 ms of audio could be dropped so + that the first videoframe's timestamp coincides with the timestamp of + the first audio buffer + - this should be done in the raw audio domain since it's typically not + possible to chop off samples in the encoded domain +- two low rate sources: + not possible to do this correctly, maybe something in the middle can be + found ? +IMPROVING QUALITY +----------------- +- video src can capture at a higher framerate than will be encoded +- this gives the corrector more frames to choose from or interpolate with + to match the target framerate, reducing jerkiness. + e.g. capturing at 15 fps for 5 fps framerate. +LIVE CHANGES IN PIPELINE +------------------------ +- case 1: video recording for some time, user wants to add audio recording on + the fly + - user sets complete pipeline to paused + - user adds element for audio recording + - new element gets same base time as video element + - on PLAYING, new element will be in sync and the first buffer produced + will have a non-zero timestamp that is the same as the first new video + buffer +- case 2: video recording for some time, user wants to add in an audio file + from disk. + - two possible expectations: + A) user expects the audio file to "start playing now" and be muxed + together with the current video frames + B) user expects the audio file to "start playing from the point where the + video currently is" (ie, video is at 10 seconds, so mux with audio + starting from 10 secs) + - case A): + - complete pipeline gets paused + - filesrc ! dec added + - both get base_time same as video element + - pipeline to playing + - all elements receive new "now" as base_time so timestamps are reset + - muxer will receive synchronized data from both + - case B): + nothing gets paused + - both get base_time that is the current clock time + - core sets + 1) - new audio part starts sending out data with timestamp 0 from start + of file + - muxer receives a whole set of frames from the audio side that are late + (since the timestamps start at 0), so keeps dropping until it has + caught up with the current set). + OR + 2) - audio part does clock query +THINGS TO DIG UP +---------------- +- is there a better way to get at "when was this frame captured" then doing + a clock query after capturing ? + Imagine a video device with a hardware buffer of four frames. If you + haven't asked for a frame from it in a while, three frames could be + queued up. So three consecutive frame gets result in immediate returns + with pretty much the same clock query for each of them. + So we should find a way to get "a comparable clock time" corresponding + to the captured frame. +- v4l2 api returns a gettimeofday() timestamp with each buffer. + Given that, you can timestamp the buffer by subtracting the delta + between the buffer's clock timestamp with the current system clock time, + from the current time reported by the provided clock. CVS Root: /cvs/gstreamer Module: gstreamer Changes by: thomasvs Date: Mon Nov 28 2005 16:52:06 PST Log message: add my todos for 0.10 Added files: docs/random/thomasvs: 0.10 Links: ====Begin Diffs==== --- NEW FILE: 0.10 --- gstreamer --------- - reorganize tests and examples into - testsuite - check: unit tests - examples: example code - tests: interactive tests - move gst/base to libs/gst/base ? (but elements link against them) - move elements out of gst/ dir ? gst-plugins-base ---------------- - gst-libs/gst/audio: - is audiofilter still needed ? any reason not to fold it into audio ? - gst: - adder: needs docs, an example, and a test - audioconvert: ok - audiorate: needs docs - audioresample: David needs to fix this - audioscale: needs to go - audiotestsrc: ok - ffmpegcolorspace: needs a test - playback: example - sine: removed - subparse: needs work - contained a very small code file that wasn't built, and a copy of a header that was in the tag lib; removed gst-plugins-good - alpha, alphacolor: document with example - auparse: crashes on e.g. gst-launch -v filesrc location=/usr/share/emacs/site-lisp/emacspeak/sounds/default-8k/ask-short-question.au ! auparse ! osssink -> will move to bad - autodetect: OK - videofilter: - is the lib still needed, given basetransform ? - currently installs a lib; should not install, or move to a dir, with pc file, possibly in -base - if we install it, get AS_LIBTOOL back and use it - ext: - aasink: properties need looking at - width, height: why are they there ? caps don't match - frames-displayed: in base class ? - frame-time: what's this ? - cairo: - cairotimeoverlay works - cairotextoverlay ? pipeline ? - flac: - flacenc: gst-launch -v audiotestsrc wave=white-noise ! flacenc ! filesink location=white-noise.flac does not produce a correct file - flacdec works, but gst-launch gnomevfssrc location= ! flacdec ! autoaudiosink does not CVS Root: /cvs/gstreamer Module: gstreamer Changes by: thomasvs Date: Tue Nov 29 2005 06:44:05 PST Log message: further review Modified files: docs/random/thomasvs: 0.10 Links: ====Begin Diffs==== Index: 0.10 =================================================================== RCS file: /cvs/gstreamer/gstreamer/docs/random/thomasvs/0.10,v retrieving revision 1.1 retrieving revision 1.2 diff -u -d -r1.1 -r1.2 --- 0.10 29 Nov 2005 00:51:54 -0000 1.1 +++ 0.10 29 Nov 2005 14:43:53 -0000 1.2 @@ -8,26 +8,80 @@ - move gst/base to libs/gst/base ? (but elements link against them) - move elements out of gst/ dir ? +- check/gst/pipelines: currently disabled, random failures gst-plugins-base ---------------- - gst-libs/gst/audio: - is audiofilter still needed ? any reason not to fold it into audio ? + folded, DONE - gst: - adder: needs docs, an example, and a test - audioconvert: ok - - audiorate: needs docs + - audiorate: needs docs and tests - audioresample: David needs to fix this - audioscale: needs to go - audiotestsrc: ok - ffmpegcolorspace: needs a test - playback: example - - sine: removed + - sine: removed, DONE - subparse: needs work + -> check if it works ? clean up dead code ? move to bad ? - contained a very small code file that wasn't built, and a copy of a header - that was in the tag lib; removed + that was in the tag lib; removed; DONE + - tcp: + - works + - need tests + - need docs + - need possible porting to GNet + - typefind: + - need tests - this definately could use it + - is there any way they can be documented ? + - should the plugin docs show a list of them ? + - videorate: + - needs tests, docs + - videoscale: + - needs tests + - negotiation + - par conversion + - different scale algorithms + - needs docs + - negotation with five out of six free variables (src/sink w/h/par) + - videotestsrc: + - could use tests for all possible caps + - volume: OK +- ext: + - alsa: + - needs docs; esp. params and common practices + - needs interactive tests; depends on having such a setup available + - cdparanoia: + - needs docs, and interactive test + - remains in -base until cdio is proven to be better on all counts + - gnomevfs: + - needs docs (how to use proxy, link to gnomevfs docs, explanation + about need for homedir to create .gnome2 dir in, ...) + - needs test; test could use local files and http urls + - libvisual + - needs docs (easy) + - needs test + - ogg, vorbis, theora +- sys + - v4l + - needs interactive test + - needs lots of docs + - ximage + - interactive test should go somewhere + - docs ok + - xvimage + gst-plugins-good @@ -40,9 +94,10 @@ - autodetect: OK - videofilter: - is the lib still needed, given basetransform ? + yes - currently installs a lib; should not install, or move to a dir, with pc file, possibly in -base - - if we install it, get AS_LIBTOOL back and use it + DONE: moved to -base - ext: - aasink: properties need looking at - width, height: why are they there ? caps don't match
https://sourceforge.net/p/gstreamer/mailman/gstreamer-cvs-verbose/thread/E17koEb-0006TN-00@usw-pr-cvs1.sourceforge.net/
CC-MAIN-2018-17
refinedweb
7,019
59.13
. In [1]: from mxnet import init, nd from mxnet.gluon import nn net = nn.Sequential() net.add(nn.Dense(256, activation='relu')) net.add(nn.Dense(10)) net.initialize() # Use the default initialization method. x = nd.random.uniform(shape=(2, 20)) net(x) # Forward computation.)>. In [2]:. 4.2.1.1. Targeted Parameters¶ In order to do something useful with the parameters we need to access them, though. There are several ways to do this, ranging from simple to general. Let’s look at some of them. In [3]: print(net[1].bias) print(net[1].bias.data()) Parameter dense1_bias (shape=(10,), dtype=float32) [0. 0. 0. 0. 0. 0. 0. 0. 0. 0.] <NDArray 10 @cpu(0)> The first returns the bias of the second layer. Sine. In [4]:. In [5]: net[0].weight.grad() Out[5]: [[0. 0. 0. ... 0. 0. 0.] [0. 0. 0. ... 0. 0. 0.] [0. 0. 0. ... 0. 0. 0.] ... [0. 0. 0. ... 0. 0. 0.] [0. 0. 0. ... 0. 0. 0.] [0. 0. 0. ... 0. 0. 0.]] <NDArray 256x20 @cpu(0)> how the blocks were constructed. To avoid this, blocks come with a method collect_params which grabs all parameters of a network in one dictionary such that we can traverse it with ease. It does so by iterating over all constituents of a block and calls collect_params on subblocks as needed. To see the difference consider the following: In [6]: #: In [7]: net.collect_params()['dense1_bias'].data() Out[7]: [0. 0. 0. 0. 0. 0. 0. 0. 0. 0.] <NDArray 10 @cpu(0)> Throughout the book we’ll see how various blocks name their subblocks (Sequential simply numbers them). This makes it very convenient to use regular expressions to filter out the required parameters. In [8]:) ). In [9]: def block1(): net = nn.Sequential() net.add(nn.Dense(32, activation='relu')) net.add(nn.Dense(16, activation='relu')) return net def block2(): net = nn.Sequential() for i in range(4): net.add(block1()) return net rgnet = nn.Sequential() rgnet.add(block2()) rgnet.add(nn.Dense(10)) rgnet.initialize() rgnet(x) Out[9]: [[. In [10]:. In [11]: rgnet[0][1][0].bias.data() Out[11]: [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.] <NDArray 32 @cpu(0)> 4.2.2. Parameter Initialization¶ Now that we know how to access the parameters, let’s look at how to initialize them properly. We discussed the need for Initialization in the previous chapter. By default, MXNet initializes the weight matrices uniformly by drawing from \(U[-0.07, 0.07]\) and the bias parameters are all set to \(0\). However, we often need to use other methods to initialize the weights. MXNet’s init module provides a variety of preset initialization methods, but if we want something out of the ordinary, we need a bit of extra work. 4.2.2.1. Built-in Initialization¶ Let’s begin with the built-in initializers. The code below initializes all parameters with Gaussian random variables. In [12]: # force_reinit ensures that the variables are initialized again, regardless of whether they were # already initialized previously. net.initialize(init=init.Normal(sigma=0.01), force_reinit=True) net[0].weight.data()[0] Out[12]: [). In [13]: net.initialize(init=init.Constant(1), force_reinit=True) net[0].weight.data()[0] Out[13]: [1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.] <NDArray 20 @cpu(0)> If we want to initialize only a specific parameter in a different manner, we can simply set the initializer only for the appropriate subblock (or parameter) for that matter. For instance, below we initialize the second layer to a constant value of 42 and we use the Xavier initializer for the weights of the first layer. In [14]:)> 4.2.2.2. and modify the incoming NDArray according to the initial result. In the example below, we pick a decidedly bizarre and nontrivial distribution, just to prove the point. We draw the coefficients from the following distribution: In [15]:) Out[15]: [(0)> If even this functionality is insufficient, we can set parameters directly. Since data() returns an NDArray we can access it just like any other matrix. A note for advanced users - if you want to adjust parameters within an autograd scope you need to use set_data to avoid confusing the automatic differentiation mechanics. In [16]: net[0].weight.data()[:] += 1 net[0].weight.data()[0,0] = 42 net[0].weight.data()[0] Out[16]: )> Blocks. Let’s see how to do this a bit more elegantly. In the following we allocate a dense layer and then use its parameters specifically to set those of another layer. In [17]: net = nn.Sequential() # we need to give the shared layer a name such that we can reference its parameters shared = nn.Dense(8, activation='relu') net.add(nn.Dense(8, activation='relu'), shared, nn.Dense(8, activation='relu', params=shared.params), nn.Dense(10)) net.initialize() x = nd.random.uniform(shape=(2, 20)) net(x) # Check whether the parameters are the same print(net[1].weight.data()[0] == net[2].weight.data()[0]) net[1].weight.data()[0,0] = 100 # And(0)> The above example shows that the parameters of the second and third layer are tied. They are identical rather than just being equal. That is, by changing one of the parameters the other one changes, too. What happens to the gradients is quite ingenious. Since the model parameters contain gradients, the gradients of the second hidden layer and the third hidden layer are accumulated in the shared.params.grad( ) during backpropagation. 4.2.4. Summary¶ - We have several ways to access, initialize, and tie model parameters. - We can use custom initialization. - Gluon has a sophisticated mechanism for accessing parameters in a unique and hierarchical manner. 4.2.5. Problems¶ - Use the FancyMLP definition of the previous section?
http://gluon.ai/chapter_deep-learning-computation/parameters.html
CC-MAIN-2019-04
refinedweb
1,001
60.01
Change TargetNameSpace in webservice proxy class Discussion in 'ASP .Net Web Services' started by wal targetnamespace - what is it?!?!kevin bailey, Jun 24, 2003, in forum: XML - Replies: - 1 - Views: - 17,687 - kevin bailey - Jun 25, 2003 targetNamespace and XSLCB, Jun 27, 2003, in forum: XML - Replies: - 0 - Views: - 3,573 - CB - Jun 27, 2003 Re: transforming to an XML Schema - targetNamespaceC. M. Sperberg-McQueen, Jul 29, 2003, in forum: XML - Replies: - 0 - Views: - 617 - C. M. Sperberg-McQueen - Jul 29, 2003 targetNamespace/import conflictCharles Fineman, Jan 21, 2004, in forum: XML - Replies: - 2 - Views: - 4,675 - Charles Fineman - Feb 9, 2004 Help: targetNamespace value on my machine...Gianni Rubagotti, Feb 4, 2004, in forum: XML - Replies: - 1 - Views: - 650 - Patrick TJ McPhee - Feb 5, 2004
http://www.thecodingforums.com/threads/change-targetnamespace-in-webservice-proxy-class.785847/
CC-MAIN-2016-07
refinedweb
126
68.4
Elsewhere in the forum are the 15 or so lines of code that enable a simple mqtt publish capability that has been tested on the esp8266. The mqtt client code runs fine under the unix/linux version of micropython but at about 1400 lines of code I doubt it's much use on most hardware but at least demonstrates that the code works under micropython and there is a lot of additional code that can be stripped out or compressed. As the esp8266 kickstarter goals include an mqtt client this may or may not prove of any use for creating that but it was instructive to me in terms of the capabilities of micropython. (I am sure that Damien and/or Paul will create the equivalent in 25 lines or less.) The client does use errno.py, ffilib.py, os.py, select.py, socket.py and stat.py from micropython-lib. Next step is to get something that does work on the esp8266. For anyone interested the code below can be used to test the client: Code: Select all import umqtt as mqtt import utime as time def on_connect(client, userdata, flags, rc): print("Connected with result code "+str(rc)) client.subscribe("test") def on_message(client, userdata, msg): print(msg.topic+" "+str(msg.payload)) client = mqtt.Client() client.on_connect = on_connect client.on_message = on_message client.connect(<host>, 1883, 60) while 1: client.loop() time.sleep(1)
https://forum.micropython.org/viewtopic.php?p=10252
CC-MAIN-2021-10
refinedweb
235
64.91
!ATTLIST div activerev CDATA #IMPLIED> <!ATTLIST div nodeid CDATA #IMPLIED> <!ATTLIST a command CDATA #IMPLIED> I have been trying since Monday to import an animated model from Blender 3D to Unity3D. I found somewhere that one should import the files directly into the assets folder with the .blend extension. I made my animation with an armature and a mesh attached to it and then tried to save it into Unity. Instead of getting a model with no animation clip (my life story before), I now get a little blender icon and NO model!!! First of all, if anyone knows what I am doing wrong to not get the model imported, please comment. Also, if there is a magic way to make the animation clips appear, please comment. I can make the animations, but they never appear as an animation clip. Thanks in advance! P.S. This has really been bugging me. asked Jan 08 '11 at 04:32 AM AppTechStudios 125 ● 6 ● 7 ● 14 It couldn't have been bugging you that much. These questions have been answered countless times here and on the forum. Sorry for double posting then, but I could not find anything about models not importing from Blender (at least anything that was like my situation). As far as the animations go, I found a post/answer that said to use Armature animations, so I watched half a dozen different tutorials on how to make armatures and link them to meshes. Finally, I tried importing the .blend file into Unity, but that is when it didn't work. I looked around, but could not find any problems similar to mine. Again, I am sorry for posting a double question. I hate to answer my own question, but I found a thread somewhere that that gave me the answer. I had Blender 2.55 (or whatever the beta is) on my computer and that was messing it up. I simply put it into my trash, saved the .blend file again, and everything worked. I appreciate all of your help! answered Jan 08 '11 at 10:17 PM Go ahead and mark this answer as the correct answer. :) Glad you got your problem solved, and glad you posted the solution - that sounds like the kind of thing that would take ages to find. They require that you wait 30 hours before you mark your own answer as the correct answer, so I will mark it as soon as they allow it :) Yes, me too! If it's really a .blend, then you're using a version of Blender later than 2.49, which doesn't allow this yet. Otherwise, you accidentally used a .blend1, .blend2, etc., which are automatic backups that Blender makes. As for animation clips, they have to be bone-based Actions. IPO curves won't import. P.S. Neither Blender or Unity have "3D" in their names. answered Jan 08 '11 at 04:40 AM Jessy 15.6k ● 72 ● 95 ● 196 Adding "3D" to everything is a sad modern trend. I blame Avatar. (Or, should I say, Avatar3D). It will3D get worse3D before3D it gets3D better3D. At least we can look forward to space travel, when our space-cars will have space-TVs in them that fully space-support space-3D space. D: Really? I honestly thought that there was a 3D at the end of both of the titles! Thanks for letting me know! Yes, it is really a .blend. I tried putting the file onto my desktop (where it said .blend) and then importing it into Unity. I also used some crazy file names that I would have never used in the past, so I doubt that Unity would rename them. I am using Blender 2.49b, does that still work? Thanks! I forgot to say this, but I am using Unity iPhone. I don't know if it makes it a difference not, but I thought that I'd throw that out there. No. Unity iPhone, and Unity 2.5 onwards have never given me the problems you are describing. I've been using Blender files in Unity since 2006 without your issue occurring. import x969 blender x720 model x571 asked: Jan 08 '11 at 04:32 AM Seen: 2493 times Last Updated: Jan 08 '11 at 04:32 AM Blender to Unity Animations Blender FBX import model with animation issues Can I make animations snap to a frame? How Can I Get The Pants to Not Move? Blender or Unity for animation? Deformation in Character Animation that is not intended. Polygons, NURBs & Sub-Divisions How to animate inside Unity? (Blender animation is finished) Renaming an animation from FBX format Blender Path animation to Unity? EnterpriseSocial Q&A
http://answers.unity3d.com/questions/41865/blender-models-not-importing.html
CC-MAIN-2013-20
refinedweb
786
75.81
87782/create-a-key-pair-in-aws-using-boto3 Hi Team, I am new to the Boto3 module. I want to create a key pair in AWS. How can I do that with boto3? Hi@akhtar, You need to create. import boto3 ec2 = boto3.client('ec2') response = ec2.create_key_pair(KeyName='KEY_PAIR_NAME') print(response) delete_login_profile is the one you should use if .. Hello @Jino, The command for creating security group ...READ MORE Here is the simple way of implementing .., Amazon Web Services enables you to create ...READ MORE Hi@akhtar, You can find one method in your ...READ MORE OR At least 1 upper-case and 1 lower-case letter Minimum 8 characters and Maximum 50 characters Already have an account? Sign in.
https://www.edureka.co/community/87782/create-a-key-pair-in-aws-using-boto3
CC-MAIN-2021-43
refinedweb
122
71.1
On 12/20/2012 01:01 PM, Laine Stump wrote: > This patch resolves: > > > > >). Well, there are indeed people that like to compile against the oldest supported kernel then run that same binary across a range of kernels [1]; but if someone IS interested in that scenario, they can go ahead and submit a followon patch at that time. Besides, most people that use pre-built libvirt get it from a distro, and distros happen to match their builds to their kernels; not to mention that with open source, you can always recompile yourself to pick up any differences in the build dependencies you want to have. [1] I've noticed that the most common case of building against the oldest supported version then reusing that binary happens in the case of non-free software - after all, it is easier to ship binary-only executables that work across the widest range of systems than it is to have one binary per system, if you aren't going to give your users the freedom of recompiling from source. > > +# if defined(IFLA_EXT_MASK) && defined(RTEXT_FILTER_VF) > + /* if this filter exists in the kernel's netlink implementation, > + * we need to set it, otherwise the response message will not > + * contain the IFLA_VFINFO_LIST that we're looking for. > + */ > + if (nla_put(nl_msg, IFLA_EXT_MASK, RTEXT_FILTER_VF) < 0) > + goto buffer_too_small; > +# endif > + ACK. -- Eric Blake eblake redhat com +1-919-301-3266 Libvirt virtualization library Attachment: signature.asc Description: OpenPGP digital signature
https://www.redhat.com/archives/libvir-list/2012-December/msg01253.html
CC-MAIN-2016-30
refinedweb
239
51.72
Here is the structure of an RSS 1.0 document: Here is the detail of all optional and mandatory tags related to RSS 1.0. Check out given example to prepare a RSS v1.0 feed for your website. NOTE: All the tag are case sensitive and should be used carefully. As an XML application, an RSS document is not required to begin with an XML declaration. It should start with XML version identifier tag. Here is list of RSS v1.0 Supported Encodings. Always and exact. Optional encoding attribute (default is UTF-8). The outermost level in every RSS 1.0 compliant document is the RDF element. The opening RDF tag assocaties the rdf: namespace prefix with the RDF syntax schema and establishes the RSS 1.0 schema as the default namespace for the document. <rdf:RDF xmlns: Always and exact.. <channel rdf: Required. Maximum 1 per RSS file. An identifying string for a resource. When used in an item, this is the name of the item's link. When used in an image, this is the Alt text for the image. When used in a channel, this is the channel's title. When used in a textinput, this is the the textinput's title. <title>TutorialsPoint</title> Required. 1-40 characters. The URL to which an HTML rendering of the channel title will link, commonly the parent site's home or news page <link></link> Required. 1-500 characters. A channel will have a description tag as described below: <description>Your source for tutorials, references and manuals!</description> Required. 1-500 characters. Establishes an RDF association between the optional image element and this particular RSS channel. The rdf:resource's {image_uri} must be the same as the image element's rdf:about {image_uri}. <image rdf: Required only if image element present in document body An RDF table of contents, associating the document's items with this particular RSS channel. Each item's rdf:resource {item_uri} must be the same as the associated item element's rdf:about {item_uri}. An RDF Seq (sequence) is used to contain all the items rather than an RDF Bag to denote item order for rendering and reconstruction. Note that items appearing in the document but not as members of the channel level items sequence are likely to be discarded by RDF parsers. <items><rdf:Seq><rdf:li ... </rdf:Seq></items> Required Establishes an RDF association between the optional textinput element and this particular RSS channel. The {textinput_uri} rdf:resource must be the same as the textinput element's rdf:about {textinput_uri}. <textinput rdf: Required only if texinput element present One end channel is required for a channel start tag. </channel> An image to be associated with an HTML rendering of the channel. This image should be of a format supported by the majority of Web browsers. While the later 0.91 specification allowed for a width of 1-144 and height of 1-400, convention (and the 0.9 specification) dictate 88x31. <image rdf: Optional; if present, must also be present in channel element The alternative text ("alt" attribute) associated with the channel's image tag when rendered as HTML. <title>{image_alt_text}</title> Required if the image element is present 1-40 characters. The URL of the image to used in the "src" attribute of the channel's image tag when rendered as HTML. <url>{image_url}</url> Required if the image element is present 1-500 characters. The URL to which an HTML rendering of the channel image will link. This, as with the channel's title link, is commonly the parent site's home or news page. <link>{image_link}</link> Required if the image element is present 1-500 characters. Closing tag for image tag. </image> Required; if image tag is present While commonly a news headline, with RSS 1.0's modular extensibility, this can be just about anything: discussion posting, job listing, software patch -- any object with a URI. There may be a minimum of one item per RSS document. While RSS 1.0 does not enforce an upper limit, for backward compatibility with RSS 0.9 and 0.91, a maximum of fifteen items is recommended. item_uri} must be unique with respect to any other rdf:about attributes in the RSS document and is a URI which identifies the item. {item_uri} should be identical to the value of the <link> sub-element of the <item> element, if possible. Recommended number of items per feed are 1-15. <item rdf: Required 1 or more. The item's title. <title>{item_title}</title> Required; with each item tag 1-100 characters. The item's URL. <link>{item_link}</link> Required; with each item tag 1-500 characters. A brief description/abstract of the item. <description>{item_description}</description> Optional; with an item tag 1-500 characters. Closing tag for item tag. </item> Required; for each item tag An input field for the purpose of allowing users to submit queries back to the publisher's site. This element should have a title, a link (to a cgi or other processor), a description containing some instructions, and a name, to be used as the name in the HTML tag <input type=text <textinput rdf: Optional; if present, must also be present in channel element A descriptive title for the textinput field. For example: "Subscribe" or "Search!". <title>{textinput_title}</title> Required if textinput is present 1-40 characters. A brief description of the textinput field's purpose. For example: "Subscribe to our newsletter for..." or "Search our site's archive of...". <description>{textinput_description}</description> Required if textinput is present 1-100 characters. The text input field's (variable) name. <name>{textinput_varname}</name> Required if textinput is present 1-500 characters. The URL to which a textinput submission will be directed (using GET). <link>{textinput_action_url}</link> Required if textinput is present 1-500 characters. CLosing tag for textInput. </textInput> Required with textinput This is closing tag for an RSS1.0 document. </rdf:RDF> Required Although an RSS 1.0 file is an XML document, RSS 0.91 extends XML by supporting a full set of HTML entities. If you want to use special characters such as ampersands (&) in <url> or <link> tags, you must substitute the appropriate decimal or HTML entities for those characters. Check for a complete set of HTML entities. Here is the example feed files which shows how to write RSS Feed using version 1.0. A specific file-extension for an RSS 1.0 document is not required. Either .rdf or .xml is recommended, the former being preferred. RSS 1.0 modules are maintained in separate documents, available online at RSS 1.0 modules.
http://www.tutorialspoint.com/cgi-bin/printversion.cgi?tutorial=rss&file=rss1.0-tag-syntax.htm
CC-MAIN-2014-35
refinedweb
1,110
60.01
I remember as a kid those days I would read Curious George and wonder how did he end up there. Well usually the man in the yellow hat had something to do with that. I don’t have a man in the yellow hat but I starting thinking about helping customers (not starting mischief) and I thought it best to start a series on how to transition to the cloud. In the first blog in this series I’ll start with setup of Active Directory Federation Services (ADFS). I do think the first step should be planning but this is going to vary based on your parameters which would take a book to go through all the permutations. So I’ll only focus this series on the technical practicality. Mark recently posted on some of the requirements for setting up ADFS. Including the guide to setting up ADFS which can be found here. Setup UPN. One of the most important pre-req’s for ADFS is to ensure you have identified what the UPN suffix will be and to configure this for each user. In my lab edustl.com is the UPN. Keep in mind that UPNs for federation can only contain letters, numbers, periods, dashes, and underscores. To configure open ‘AD Domains and Trusts’, Right-Click on on Domains and Trusts and select “properties”. Add UPN suffix and Click “OK”. Being a big fan of PowerShell I also check for my users for UPNs. Today I’m only describing the installation of the ADFS server. A production environment should also have ADFS Proxy servers for security. Create DNS records for your ADFS environment. Since I only have the 1 ADFS server I did the following: Create a DNS Host Record for Active Directory Federation Services. a. On DC1, click Start, point to Administrative Tools, and then click DNS. b. Expand DNS Server name (XYZ), expand Forward Lookup Zones, and then click the edustl.com DNS zone. c. Right-click edustl.com zone and then click New Host (A or AAAA). d. In the New Host window, in the Name box, type adfs e. In the IP address box, type 192.168.1.211 (whatever your IP Address is) and then click Add Host. f. In the DNS dialog box, click OK. g. In the New Host window, click Done. h. Close DNS Manager. Service account. Next step is to create a service account for use with ADFS, this can be a domain user account – no special permissions need to be added. a. Create service account for ADFS – this can be a regular Domain User, no special permissions needed. b. Add internal ADFS server to AD forest. IIS and Certificate. Download and Install ADFS · Download ADFS 2.0 (here). During the install process, the following Windows components will be automatically installed: o Windows PowerShell o .NET Framework 3.5 SP1 o Internet Information Services (IIS) o Windows Identity Foundation · Download Microsoft Online Services Identity Federation Management Tool (64-bit) · Configure external DNS A record for ADFS Proxy (fim01.exchangefederation.edustl.com) Installing and configure ADFS 2.0 on internal server: · Double-click AdfsSetup.exe (this is the ADFS 2.0 download) · Click Next on the Welcome Screen and Accept the License Agreement · On the Server Role Option screen, select Federation Server · Finish the rest of the wizard, this will install any necessary prerequisites · At the end of the wizard, uncheck box to Start the ADFS 2.0 Management Snap-in · Request and provision public certificate through Entrust · Bind certificate to IIS on port 443 (remove binding for port 80) - In Lab we also had to do port mapping for internal server (68.188.39.211 – 172.16.3.52) · Configure ADFS utilizing ADFS 2.0 Management · Select ADFS 2.0 Federation Server Configuration Wizard · Select Create a new Federation Service · Select New Federation server farm (during the POC we did a Stand-alone configuration to prevent the need to add a container to the production AD for certificate sharing in the farm) · Select the public certificate and validate the Federation Service name. This will automatically fill in the name on the certificate Subject Name. (My FQDN was horrible as I used an existing VM HINT: Planning is very important! Now I’m stuck with a name I don’t like ) Finish the Wizard. Run Office 365 Desktop Setup from the Office 365 portal. Unselect all tools (Outlook, Sharepoint, & Lync) to install the Microsoft Online Connector. · a. Type $cred=Get-Credential and press Enter. b. Note: It’s a really good idea to setup an admin account that is not part of the domain you are converting to SSO · Enter you Microsoft Online Services administrator logon and password and click ok (ours was a user@liveatedu.onmicrosoft.com) c. Use the admin account the is NOT a member of the domain being converted · Type Set-MSOLContextcredential –msolAdminCredentials $cred and press enter d. This logs you into the Online Services · For a new domain – Type Add-MSOLFederatedDomain –domainname edustl.com · For existing domain – Type Convert-MSOLDomainToFederated –domainname edustl.com (also needed to add service.edustl.com) Ensure not added at first to web admin portal. If it is then you’ll have to ensure no users, mailboxes, or distribution groups are associated with that domain) adding or converting will add the domain to the portal. · Type Update-MSOLFederatedDomain –domainname edustl.com e. This updates and activates the SSO Exit the Federation Management Tool Check installation. Launch the ADFS Management console and check the Relying Party Trust to see if Microsoft Federation Gateway was added to the list. ADFS Server FQDN – my horribly named server. (In my case I also used the same ADFS server for my account namespace for federation trust – more on that later). Public Certificate – issued by GoDaddy used for the service connector. Other two certificates are self-signed generated as part of the federation trust. Claims Provider Trust is Active Directory. Relying Trust is Office 365 iDM platform. Install ADFS 2.0 Proxy server - Export public certificate from ADFS internal server and copy to proxy server - Validate DNS resolution of of your iDP server resolves to internal ADFS server from ADFS Proxy Server (a HOST file can be used for this if needed) - Validate DNS resolution of UPN resolves to external A record from an internet PC - Double-click AdfsSetup.exe (this is the ADFS 2.0 RTW download) - Click Next on the Welcome Screen and Accept the License Agreement - On the Server Role Option screen, select Federation Server Proxy - Finish the rest of the wizard, this will install any necessary prerequisites - At the end of the wizard, uncheck box to Start the ADFS 2.0 Management Snap-in - Import certificate in IIS and bind certificate to Default Web Site - Configure ADFS proxy by selecting ADFS 2.0 Federation Server Proxy Configuration Wizard - Finish Wizard That’s it. Finish ADFS and test. So the next thing we need to do is setup Dirsync. I’ll discuss this in my next blog. Take care!!!
https://blogs.technet.microsoft.com/educloud/2011/10/02/curious-greg-builds-a-labpart-i/
CC-MAIN-2017-09
refinedweb
1,174
56.66
llround Round to integral value, regardless of rounding direction DescriptionThe round functions return the integral value nearest to xrounding half-way cases away from zero, regardless of the current rounding direction. The lround and llround functions return the integral value nearest to x(rounding half-way cases away from zero, regardless of the current rounding direction) in the return formats specified. If the rounded value is outside the range of the return type, the numeric result is unspecified and the invalid floating-point exception is raised. A range error may occur if the magnitude of xis too large. Example: Example - Workings #include <stdio.h> #include <math.h> int main(void) { for(double a=120;a<=130;a+=1.0) /* note: increments by fraction are not exact! */ printf("round of %.1lf is %.1lf\n", a/10.0, round(a/10.0)); return 0; } Solution round of 12.0 is 12.0 round of 12.1 is 12.0 round of 12.2 is 12.0 round of 12.3 is 12.0 round of 12.4 is 12.0 round of 12.5 is 13.0 round of 12.6 is 13.0 round of 12.7 is 13.0 round of 12.8 is 13.0 round of 12.9 is 13.0 round of 13.0 is 13.0
http://www.codecogs.com/library/computing/c/math.h/round.php?alias=llround
CC-MAIN-2018-34
refinedweb
220
73.95
The Two Pillars of JavaScript Part 1: How to Escape the 7th Circle of Hell Before we get into this, allow me to introduce myself — you’re probably going to wonder who I think I am before this is over. I’m Eric Elliott, author of “Programming JavaScript Applications” (O’Reilly) and creator of the “Learn JavaScript with Eric Elliott” series of online JavaScript courses. I have contributed to software experiences for Adobe Systems, Zumba Fitness, The Wall Street Journal, ESPN, BBC, and top recording artists including Usher, Frank Ocean, Metallica, and many more. Once Upon a Time I was trapped in the darkness. I was blind — shuffling about, bumping into things, breaking things, and generally making an unholy mess of everything I touched. In the 90's, I was programming in C++, Delphi, and Java and writing 3D plugins for the software suite that eventually became Maya (used by lots of major motion picture studios to make summer blockbuster movies). Then it happened: The internet took off. Everybody started building websites, and after writing and editing a couple online magazines, a friend convinced me that the future of the web would be SaaS products (before the term was coined). I didn’t know it then, but that subtle course change transformed the way I think about programming on a fundamental level, because if you want to make a good SaaS product, you have to learn JavaScript. Once I learned it, I never looked back. Suddenly, everything was easier. The software I made was more malleable. Code survived longer without being rewritten. Initially, I thought JavaScript was mostly UI scripting glue, but when I learned cookies and AJAX blew up, that transformed, too. I got addicted, and I couldn’t go back. JavaScript offers something other languages lack: Freedom! JavaScript is one of the most important programming languages of all time, not simply because of its popularity, but because it popularized two paradigms which are extremely important for the evolution of programming: - Prototypal Inheritance (objects without classes, and prototype delegation, aka OLOO — Objects Linking to Other Objects), and - Functional Programming (enabled by lambdas with closure) Collectively, I like to call these paradigms the two pillars of JavaScript, and I’m not ashamed to admit that they’ve spoiled me. I don’t want to program in a language without them. JavaScript will be remembered as one of the most influential languages ever created. Lots of other languages have already copied one or the other, or both of the pillars, and the pillars have transformed the way we write applications, even in other languages. Brendan Eich didn’t invent either of the pillars, but JavaScript exposed the programming masses to them. Both pillars are equally important, but I’m concerned that a large number of JavaScript programmers are completely missing one or both innovations,. If you’re creating constructor functions and inheriting from them, you haven’t learned JavaScript. It doesn’t matter if you’ve been doing it since 1995. You’re failing to take advantage of JavaScript’s most powerful capabilities. You’re working in the phony version of JavaScript that only exists to dress the language up like Java. You’re coding in this amazing, game-changing, seminal programming language and completely missing what makes it so cool and interesting. We’re Constructing a Mess. “Those who are not aware they are walking in darkness will never seek the light.” ~ Bruce Lee Constructors violate the open/closed principle because they couple all callers to the details of how your object gets instantiated. Making an HTML5 game? Want to change from new object instances to use object pools so you can recycle objects and stop the garbage collector from trashing your frame rate? Too bad. You’ll either break all the callers, or you’ll end up with a hobbled factory function. If you return an arbitrary object from a constructor function, it will break your prototype links, and the `this` keyword will no longer be bound to the new object instance in the constructor. It’s also less flexible than a real factory function because you can’t use `this` at all in the factory; it just gets thrown away. Constructors that aren’t running in strict mode can be downright dangerous, too. If a caller forgets `new` and you’re not using strict mode or ES6 classes [sigh], anything you assign to `this` will pollute the global namespace. That’s ugly. Prior to strict mode, this language glitch caused hard-to-find bugs at two different startups I worked for, during critical growth periods when we didn’t have a lot of extra time to chase down hard-to-find bugs. In JavaScript, factory functions are simply constructor functions minus the `new` requirement,` behaves just like it does in any other function. Hurray! Welcome to the Seventh Circle of Hell. “Quite frequently I am not so miserable as it would be wise to be.” ~ T.H. White Everyone has heard the boiling frog analogy: If you put a frog in boiling water, it will jump out. If you put the frog in cool water and gradually increase the heat, the frog will boil to death because it doesn’t sense the danger. In this story, we are the frogs. If constructor behavior is the frying pan, classical inheritance isn’t the fire; it’s the fire from Dante’s seventh circle of hell. The Gorilla / Banana problem: “The problem with object-oriented languages is they’ve got all this implicit environment that they carry around with them. You wanted a banana but what you got was a gorilla holding the banana and the entire jungle.” ~ Joe Armstrong Classical Inheritance generally lets you inherit only from a single ancestor, forcing you into awkward taxonomies. I say awkward because without fail, every OO design taxonomy I have ever seen in a large application was eventually wrong. Say you start with two classes: Tool and Weapon. You’ve already screwed up — You can’t make the game “Clue.” The Fragile Base Class Problem The coupling between a child class and its parent is the tightest form of coupling in OO design. That’s the opposite of reusable, modular code. Making small changes to a base class creates rippling side-effects that break things that should be completely unrelated. The Duplication by Necessity Problem The obvious solution to taxonomy troubles is to go back in time, build up new classes with subtle differences by changing up what inherits from what — but it’s too tightly coupled to properly extract and refactor. You end up duplicating code instead of reusing it. You violate the DRY principle (Don’t Repeat Yourself). As a consequence, you keep growing your subtly different jungle of classes, and as you add inheritance levels, your classes get more and more arthritic and brittle. When you find a bug, you don’t fix it in one place. You fix it everywhere. “Oops. Missed one.” — Every Classical OO programmer, ever. This is known as the duplication by necessity problem in OO design circles. ES6 classes don’t fix any of these problems. ES6 makes them worse, because these bad ideas have been officially blessed by the spec, and written about in a thousand books and blog posts. The `class` keyword is now the most harmful feature in JavaScript. I have enormous respect for the brilliant and hard-working people who have been involved in the standardization effort, but even brilliant people occasionally do the wrong thing. Try adding .1 + .2 in your browser console, for instance. I still think Brendan Eich has contributed greatly to the web, to programming languages, and to computer science in general. P.S. Don’t use `super` unless you enjoy stepping through the debugger into multiple layers of inheritance abstraction. The Fallout These problems have a multiplying effect as your application grows, and eventually, the only solution is to rewrite the application from scratch or scrap it entirely — sometimes the business just needs to cut its losses. I have seen this process play out again, and again, job after job, project after project. Will we ever learn? At one company I worked for, it caused a software release date to slip by an entire year for a rewrite. I believe in updates, not rewrites. At another company I consulted for, it almost caused the entire company to crash and burn. These problems are not just a matter of taste or style. This choice can make or break your product. Large companies can usually chug along like nothing is wrong, but startups can’t afford to spin their wheels on problems like these while they’re struggling to find their product/market fit on a limited runway. I’ve never seen any of the problems above in a modern code base that avoids classical inheritance altogether. Step into the light. “Perfection is reached not when there is nothing more to add, but when there is nothing more to subtract.” ~ Antoine de Saint-Exupéry Updated: July 2019 Today I rely mostly on functions and module imports to share behaviors, and various forms of object composition to compose data structures. I use generic functions and abstract data types, e.g., map(), filter(), reduce(), and friends to manipulate data without exposing direct access to the underlying data structures. You can copy/extend object properties using object spread syntax: {…a, …b}. The copy mechanism is another form of prototypal inheritance. Sources of clone properties are a specific kind of prototype called exemplar prototypes, and cloning an exemplar prototype is known as concatenative inheritance. Concatenative inheritance is possible because of a feature in JavaScript known as dynamic object extension: the ability to add to an object after it has been instantiated. You rarely need classes in JavaScript, and I have never seen a situation where class is a better approach than the alternatives if you’re in control of the API you’re exporting. If you can think of any, leave a comment, but I’ve been making that challenge for years now, and nobody has come up with a good use-case — just flimsy arguments about micro-optimizations or style preferences. When I tell people that constructors and classical inheritance are bad, they get defensive. I’m not attacking you. I’m trying to help you. People get attached to their programming style as if their coding style is how they express themselves. Nonsense. What you make with your code is how you express yourself. How it’s implemented doesn’t matter at all unless it’s implemented poorly. The only thing that matters in software development is that your users love the software. I can warn you that there’s a cliff ahead, but some people don’t believe there is danger until they experience it first hand. Don’t make that mistake; the cost can be enormous. This is your chance to learn from the mistakes that countless others have made again and again over the span of decades. Entire books have been written about these problems. The seminal “Design Patterns” book by the Gang of Four is built around two foundational principles: “Program to an interface, not an implementation,” and “favor object composition over class inheritance.” Because child classes code to the implementation of the parent class, the second principle follows from the first, but it’s useful to spell it out. The seminal work on classical OO design is anti-class inheritance. It contains a whole section of object creational patterns that exist solely to work around the limitations of constructors and class inheritance. Google “new considered harmful,” “inheritance considered harmful,” and “super is a code smell.” You’ll dig up dozens of articles from blog posts and respected publications like Dr. Dobb’s Journal dating back to before JavaScript was invented, all saying much the same thing: `new`, brittle classical inheritance taxonomies, and parent-child coupling (e.g. `super`) are recipes for disaster. Even James Gosling, the creator of Java, admits that Java didn’t get inheritance right: Bill Venners: When asked what you might do differently if you could recreate Java, you’ve said you’ve wondered what it would be like to have a language that just does delegation. James Gosling: Yes. […]. When you experience years of application building without using class inheritance, and then you’re forced to work on a legacy codebase that uses it extensively, you realize that leaving class behind was positively liberating. Class inheritance was a mistake that we don’t need to keep clinging to. Good code is simple. “Simplicity is about subtracting the obvious and adding the meaningful.” ~ John Maeda As you strip constructors and classical inheritance out of JavaScript, it: - Gets simpler (Easier to read and to write. No more wrong design taxonomies.) - Gets more flexible (Switch from new instances to recycling object pools or proxies? No problem.) - Gets more powerful & expressive (Inherit from multiple ancestors? Inherit private state? No problem.) The Better Option “If a feature is sometimes dangerous, and there is a better option, then always use the better option.” ~ Douglas Crockford I’m not trying to take a useful tool away from you. I’m warning you that what you think is a tool is actually a foot-gun. In the case of constructors and classes, there are several better options.. Yes, some code is art in and of itself, but if it doesn’t stand alone published on paper, your code doesn’t fall into that category. Otherwise, as far as your users are concerned, the code is a black box, and what they enjoy is the program. Good programming style requires that when you’re presented with a choice that’s elegant, simple, and flexible, or another choice that’s complex, awkward, and restricting, you choose the former. I know it’s popular to be open minded about language features, but there is a right way and a wrong way. Choose the right way. ~ Eric Elliott P.S. Don’t miss The Two Pillars of JavaScript Part 2: Functional Programming (How to Stop Micromanaging Everything) Related Videos.
https://medium.com/javascript-scene/the-two-pillars-of-javascript-ee6f3281e7f3
CC-MAIN-2022-33
refinedweb
2,357
62.38
In one of my recent project requiring a CSV text file import, the data columns had to be in certain predefined positions for the program to work correctly. The import did not depend on field labels, which is obviously a wrong way to process data. It would not have been much of a concern had the CSV files been in the correct format. However, the CSV files I had had the columns in wrong positions, which required me to exchange the columns to their correct positions for the parent program to import them correctly. For a few files I’d have easily used a spreadsheet for the task, but with around 34 files it was going to be tedious. The task entailed me to write a small script which helped in the matter. The PHP script to exchange CSV columns is shown below. So if I’ve a CSV file with the following content: Running the above script on the file will result in a new file with the following content. The columns to exchange are represented by the $exchange array. However, if we set the array to the following, the 1st and 4th columns will be reset to the original, so take care when you create the array. The script above will not however exchange all types of CSV files; files where fields are enclosed by quotes will not work correctly with the above script. You will need to make some small modifications top the same. Just for comparision, prior to creating the PHP script, I played around with Awk, which enabled me to accomplish the same with only two lines of code. However, I’d still prefer working with PHP, as I can easily integrate it with my other code. 4 thoughts on “Exchanging columns in a CSV file” Thanks for sharing this. But I guess awk would an appropriate tool for this. awk -F “,” ‘{print $4″,”, $3″,”, $2″,”, $1}’ customer.csv I prefer to use the best suited tool for a given task, maybe this approach is different than those who think that coding in a single language makes it uniform to maintain. I did not see that you have already mentioned awk in your post. But my argument still stands valid. Let us see how python would do it. import csv with open(‘customer2.csv’, ‘wb’) as output: input = csv.reader(open(‘customer.csv’, ‘rb’)) output = csv.writer(output, dialect=input.dialect) for line in input: line.reverse() output.writerow(line) I guess you will agree that this 7 lines of code is more elegant, readable and flexible. The idea was not to completely reverse columns, but to exchange any two arbitrary columns. You can save the results to a tuple and the rearrange them as you wish. Here is how it can be done using exception handling. import csv with open(‘customer2.csv’, ‘wb’) as output: input = csv.reader(open(‘customer.csv’, ‘rb’)) output = csv.writer(output, dialect=input.dialect) for line in input: try: (customerno, firstname, lastname, sales) = line except ValueError: print ‘valueerror’ else: outLine = (sales, firstname, lastname, customerno) output.writerow(outLine)
https://www.codediesel.com/data/exchanging-columns-in-a-csv-file/
CC-MAIN-2018-51
refinedweb
518
73.37
I am trying to use recode and mutate_all to recode columns. However, for some reason, I am getting an error. I do believe this post is similar to how to recode (and reverse code) variables in columns with dplyr but the answer in that post has used lapply function. Here's what I tried after reading dplyr package's help pdf. by_species<-matrix(c(1,2,3,4),2,2) tbl_species<-as_data_frame(by_species) tbl_species %>% mutate_all(funs(. * 0.4)) # A tibble: 2 x 2 V1 V2 <dbl> <dbl> 1 0.4 1.2 2 0.8 1.6 grades<-matrix(c("A","A-","B","C","D","B-","C","C","F"),3,3) tbl_grades <- as_data_frame(grades) tbl_grades %>% mutate_all(funs(dplyr::recode(.,A = '4.0'))) Error in vapply(dots[missing_names], function(x) make_name(x$expr), character(1)) : values must be length 1, but FUN(X[[1]]) result is length 3 @Mir has done a good job describing the problem. Here's one possible workaround. Since the problem is in generating the name, you can supply your own name tbl_grades %>% mutate_all(funs(recode=recode(.,A = '4.0'))) Now this does add columns rather than replace them. Here's a function that will "forget" that you supplied those names dropnames<-function(x) {if(is(x,"lazy_dots")) {attr(x,"has_names")<-FALSE}; x} tbl_grades %>% mutate_all(dropnames(funs(recode=dplyr::recode(.,A = '4.0')))) This should behave like the original. Although really tbl_grades %>% mutate_all(dropnames(funs(recode(.,A = '4.0')))) because dplyr often has special c++ versions of some functions that it can use if it recognized the functions (like lag for example) but this will not happen if you also specify the namespace (if you use dplyr::lag).
https://codedump.io/share/4abqtaOzunsI/1/recode-and-mutateall-in-dplyr
CC-MAIN-2018-05
refinedweb
281
58.28
Closer AlphaGo by a whopping 100 games to 0, signifying a major advance in the field. Hear that? It’s the technological singularity inching ever closer. DeepMind’s blog article on how AlphaGoZero learns by playing itself is relatively approachable for the interested general observer. The learning rate is staggering – AlphaGoZero is able to go from the level of a newborn to the greatest player that has ever played the game in under 40 days. Humans are notoriously poor at grasping the profundity of the exponential. Here it is writ large and represents a genuine moment of shock and awe: It seems so alien that the inert mass of hardware that AlphaGoZero lives on can transform itself so miraculously. This is just one example of how software is changing reality all around us and increasingly mediating our experiences of the real world and not just in terms of the more obvious forms such as AR/VR. It is the new luminiferous ether in which we are balefully cast. The Atlantic published an excellent long read on the sheer intractability of software advances – for most humans, software like the ether is entirely invisible: Technological progress used to change the way the world looked—you could watch the roads getting paved; you could see the skylines rise.. Artificial Intelligence The BBC has announced a partnership with UK universities to “to unlock the potential of data in the media“. It’s a smart move. BBC salaries will struggle to compete with private sector AI positions so working with British universities is what is left open. Apple have released a paper on how they use a deep neural network to detect the “Hey Siri” voice trigger. AI is hot. AI experts are even hotter:. No wonder beginners guides are popping up all the time and so many are being drawn to them to find out what all the fuss is about. HuffPo got in on the act with this “complete beginners guide”. Technology. Excellent Bloomberg piece on Apple’s struggle to get the iPhoneX to market on time which pins the major issues on the formidable complexity of the Face ID sensor and the electronics supply chain behind it: Apple continues its video exec hiring this time pulling in Jay Hunt from Channel 4. The Information however suggests a note of caution for news broadcasters anxious to pile into the OTT frenzy: is a terrible medium for both consuming and creating information. So businesses in the latter category, aka every news organization that is bulking up its video team, should proceed carefully. Why “the world needs a Vertu“. The picture is appropriately enough the crocodile leathered back of an entry-level Vertu Aster that would have set you back £5k or so: Amazon Jeff Bezos smashing a champagne bottle into a wind turbine is strangely compelling: Fun day christening Amazon’s latest wind farm. #RenewableEnergy pic.twitter.com/cTxeXdsFop — Jeff Bezos (@JeffBezos) October 19, 2017 Good article on the value of Prime membership to Amazon. Loss leader or value creator? Either way, Amazon is growing faster than any big company in the US these days—and maybe ever: Of the four tech giants … Even so, Amazon has not exactly garnered plaudits for its on the Roy Price debacle which has raised questions about the way the company is run and managed by mostly men:. Leadership Elon Musk’s email rule at Tesla is so good it is worth quoting in full without context – it speaks volumes eloquently: The transition from engineer to engineering manager is one that is harder to objectively measure. The nuance is the thing: As an engineer, you are used to your work being more or less directly proportional to its impact. You write code, it advances a product’s development. You don’t write code, and it doesn’t. In management, however, the relationship between your work and its results is much more nuanced. Software This Python graph gallery with code is an amazingly useful resource for anyone interested in data visualisation. From that site, TIL how to build a word cloud with this code: from wordcloud import WordCloud import matplotlib.pyplot as plt # Create a list of word text=("Python Python Python") wordcloud = WordCloud(width=480, height=480, max_words=20).generate(text) plt.figure() plt.imshow(wordcloud, interpolation="bilinear") plt.axis("off") plt.margins(x=0, y=0) plt.show() resulting in this image: Streaming data frames in a Python notebook using pandas and an experimental library called Streamz for realtime telemetry. This will surely eventually be combined with Dask: Culture and Society The Atlantic on Trump’s America as told by modern day neo-anthropologists criss-crossing the US to try and find out what happened and where everyone is at now. Approaching Bonfire Night, National Geographic on the explosive legacy of Guy Fawkes arguably the first nationally notorious British traitor who has now become a symbol of popular revolt. A controversial must-read from the ever-excellent Aeon on why having children far from being life affirming is immoral. Moral philosopher Jeff McMahan profiled in this post, would doubtless beg to differ and invoke the non-identity problem: The non-identity problem runs as follows: Humans are capable of actions that simultaneously create an objectively worse state of affairs, but also allow certain people (who otherwise wouldn’t have existed) to be born. For those who exist as a direct result of such actions, the original act is not bad for them. It’s unclear whether the act is, in fact, bad for anyone. Insects are in serious trouble of the hyperobject variety. The sort of existential crisis techno-utopians tend not to talk about. Insects were notably absent in the bleak dystopian future portrayed in Bladerunner 2049:.
https://importdigest.co.uk/2017/10/31/newsletter-35/
CC-MAIN-2018-39
refinedweb
961
50.77
V1 5 FAQ If you cannot find the answer to your question here, and you have read Primer and AdvancedGuide, send it to googletestframework@googlegroups.com. Why should I use Google Test instead of my favorite C++ testing framework?¶ First, let's say clearly that we don't want to get into the debate of which C++ testing framework is the best. There exist many fine frameworks for writing C++ tests, and we have tremendous respect for the developers and users of them. We don't think there is (or will be) a single best framework - you have to pick the right tool for the particular task you are tackling. We created Google Test because we couldn't find the right combination of features and conveniences in an existing framework to satisfy our needs. The following is a list of things that we like about Google Test. We don't claim them to be unique to Google Test - rather, the combination of them makes Google Test the choice for us. We hope this list can help you decide whether it is for you too. - Google Test is designed to be portable. It works where many STL types (e.g. std::stringhelps you understand the context of an assertion failure when it comes from inside a sub-routine or loop. - You can decide which tests to run using name patterns. This saves time when you want to quickly reproduce a test failure. How do I generate 64-bit binaries on Windows (using Visual Studio 2008)?¶ (Answered by Trevor Robinson) Load the supplied Visual Studio solution file, either msvc\gtest-md.sln or msvc\gtest.sln. Go through the migration wizard to migrate the solution and project files to Visual Studio 2008. Select Configuration Manager... from the Build menu. Select <New...> from the Active solution platform dropdown. Select x64 from the new platform dropdown, leave Copy settings from set to Win32 and Create new project platforms checked, then click OK. You now have Win32 and x64 platform configurations, selectable from the Standard toolbar, which allow you to toggle between building 32-bit or 64-bit binaries (or both at once using Batch Build). In order to prevent build output files from overwriting one another, you'll need to change the Intermediate Directory settings for the newly created platform configuration across all the projects. To do this, multi-select (e.g. using shift-click) all projects (but not the solution) in the Solution Explorer. Right-click one of them and Properties. In the left pane, select Configuration Properties, and from the Configuration dropdown, select All Configurations. Make sure the selected platform is x64. For the Intermediate Directory setting, change the value from $(PlatformName)\$(ConfigurationName) to $(OutDir)\$(ProjectName). Click OK and then build the solution. When the build is complete, the 64-bit binaries will be in the msvc\x64\Debug directory. Can I use Google Test on MinGW?¶ We haven't tested this ourselves, but Per Abrahamsen reported that he was able to compile and install Google Test successfully when using MinGW from Cygwin. You'll need to configure it with: PATH/TO/configure CC="gcc -mno-cygwin" CXX="g++ -mno-cygwin" You should be able to replace the -mno-cygwin option with direct links to the real MinGW binaries, but we haven't tried that. Caveats: - There are many warnings when compiling. make checkwill produce some errors as not all tests for Google Test itself are compatible with MinGW. We also have reports on successful cross compilation of Google Test MinGW binaries on Linux using these instructions on the WxWidgets site. Please contact googletestframework@googlegroups.com if you are interested in improving the support for MinGW. Why does Google Test support EXPECT_EQ(NULL, ptr) and ASSERT_EQ(NULL, ptr) but not EXPECT_NE(NULL, ptr) and ASSERT_NE(NULL, ptr)?¶ Due to some peculiarity of C++, it requires some non-trivial template meta programming tricks to support using NULL as an argument of the EXPECT_XX() and ASSERT_XX() macros. Therefore we only do it where it's most needed (otherwise we make the implementation of Google Test't'll Google Mock's. Does Google Test support running tests in parallel?¶ Test runners tend to be tightly coupled with the build/test environment, and Google Test doesn't try to solve the problem of running tests in parallel. Instead, we tried to make Google Test work nicely with test runners. For example, Google Test's XML report contains the time spent on each test, and its gtest_list_tests and gtest_filter flags can be used for splitting the execution of test methods into multiple processes. These functionalities can help the test runner run the tests in parallel. Why don't Google Test run the tests in different threads to speed things up?¶ It's difficult to write thread-safe code. Most tests are not written with thread-safety in mind, and thus may not work correctly in a multi-threaded setting. If you think about it, it's already hard to make your code work when you know what other threads are doing. It's much harder, and sometimes even impossible, to make your code work when you don't know what other threads are doing (remember that test methods can be added, deleted, or modified after your test was written). If you want to run the tests in parallel, you'd better run them in different processes. Why aren't Google Test assertions implemented using exceptions?¶ Our original motivation was to be able to use Google Test in projects that disable exceptions. Later we realized some additional benefits of this approach: - Throwing in a destructor is undefined behavior in C++. Not using exceptions means Google Test's assertions are safe to use in destructors. - The EXPECT_*family of macros will continue even after a failure, allowing multiple failures in a TESTto be reported in a single run. This is a popular feature, as in C++ the edit-compile-test cycle is usually quite long and being able to fixing more than one thing at a time is a blessing. - If assertions are implemented using exceptions, a test may falsely ignore a failure if it's caught by user code:The above code will pass even if the try { ... ASSERT_TRUE(...) ... } catch (...) { ... } ASSERT_TRUEthrows. While it's unlikely for someone to write this in a test, it's possible to run into this pattern when you write assertions in callbacks that are called by the code under test. The downside of not using exceptions is that ASSERT_* (implemented using return) will only abort the current function, not the current TEST. Why do we use two different macros for tests with and without fixtures?¶ Unfortunately, C++'s macro system doesn't allow us to use the same macro for both cases. One possibility is to provide only one macro for tests with fixtures, and require the user to define an empty fixture sometimes: class FooTest : public ::testing::Test {}; TEST_F(FooTest, DoesThis) { ... } typedef ::testing::Test FooTest; TEST_F(FooTest, DoesThat) { ... } Yet, many people think this is one line too many. :-) Our goal was to make it really easy to write tests, so we tried to make simple tests trivial to create. That means using a separate macro for such tests. We think neither approach is ideal, yet either of them is reasonable. In the end, it probably doesn't matter much either way. Why don't we use structs as test fixtures?¶ We like to use structs only when representing passive data. This distinction between structs and classes is good for documenting the intent of the code's author. Since test fixtures have logic like SetUp() and TearDown(), they are better defined as classes. Why are death tests implemented as assertions instead of using a test runner?¶ Our goal was to make death tests as convenient for a user as C++ possibly allows. In particular: - The runner-style requires to split the information into two pieces: the definition of the death test itself, and the specification for the runner on how to run the death test and what to expect. The death test would be written in C++, while the runner spec may or may not be. A user needs to carefully keep the two in sync. ASSERT_DEATH(statement, expected_message)specifies all necessary information in one place, in one language, without boilerplate code. It is very declarative. ASSERT_DEATHhas a similar syntax and error-reporting semantics as other Google Test assertions, and thus is easy to learn. ASSERT_DEATHcan be mixed with other assertions and other logic at your will. You are not limited to one death test per test method. For example, you can write something like:If you prefer one death test per test method, you can write your tests in that style too, but we don't want to impose that on the users. The fewer artificial limitations the better. if (FooCondition()) { ASSERT_DEATH(Bar(), "blah"); } else { ASSERT_EQ(5, Bar()); } ASSERT_DEATHcan reference local variables in the current function, and you can decide how many death tests you want based on run-time information. For example,The runner-based approach tends to be more static and less flexible, or requires more user effort to get this kind of flexibility. const int count = GetCount(); // Only known at run time. for (int i = 1; i <= count; i++) { ASSERT_DEATH({ double* buffer = new double[i]; ... initializes buffer ... Foo(buffer, i) }, "blah blah"); } Another interesting thing about ASSERT_DEATH is that it calls fork() to create a child process to run the death test. This is lightening fast, as fork() uses copy-on-write pages and incurs almost zero overhead, and the child process starts from the user-supplied statement directly, skipping all global and local initialization and any code leading to the given statement. If you launch the child process from scratch, it can take seconds just to load everything and start running if the test links to many libraries dynamically. My death test modifies some state, but the change seems lost after the death test finishes. Why?¶. The compiler complains about "undefined references" to some static const member variables, but I did define them in the class body. What's wrong?¶ Google Test comparison assertions ( EXPECT_EQ, etc) will generate an "undefined reference" linker error. I have an interface that has several implementations. Can I write a set of tests once and repeat them over all the implementations?¶ Google Test doesn't yet have good support for this kind of tests, or data-driven tests in general. We hope to be able to make improvements in this area soon. Can I derive a test fixture from another?¶ Yes. Each test fixture has a corresponding and same named test case. This means only one test case can use a particular fixture. Sometimes, however, multiple test cases may want to use the same or slightly different fixtures. For example, you may want to make sure that all of a GUI library's test cases don't leak important system resources like fonts and brushes. In Google Test, you share a fixture among test cases by putting the shared logic in a base test fixture, then deriving from that base a separate fixture for each test case: virtual void SetUp() { BaseTest::SetUp(); // Sets up the base fixture first. ... additional set-up work ... } virtual void TearDown() { .... Google Test has no limit on how deep the hierarchy can be. For a complete example using derived test fixtures, see samples/sample5_unittest.cc. My compiler complains "void value not ignored as it ought to be." What does this mean?¶ You're probably using an ASSERT_*() in a function that doesn't return void. ASSERT_*() can only be used in void functions. My death test hangs (or seg-faults). How do I fix it?¶ In Google Test, is no race conditions or dead locks in your program. No silver bullet - sorry! Should I use the constructor/destructor of the test fixture or the set-up/tear-down function?¶ The first thing to remember is that Google Test does not reuse the same test fixture object across multiple tests. For each TEST_F, Google Test will create a fresh test fixture object, immediately call SetUp(), run the test, call TearDown(), and then immediately delete the test fixture object. Therefore, there is no need to write a SetUp() or TearDown() function if the constructor or destructor already does the job. You may still want to use SetUp()/TearDown() in the following cases: * If the tear-down operation could throw an exception, you must use. * The Google Test team is considering making the assertion macros throw on platforms where exceptions are enabled (e.g. Windows, Mac OS, and Linux client-side), which will eliminate the need for the user to propagate failures from a subroutine to its caller. Therefore, you shouldn't use Google Test assertions in a destructor if your code could run on such a platform. * In a constructor or destructor, you cannot make a virtual function call on this object. (You can call a method declared as virtual, but it will be statically bound.) Therefore, if you need to call a method that will be overriden in a derived class, you have to use SetUp()/TearDown(). The compiler complains "no matching function to call" when I use ASSERT_PREDn. How do I fix it?¶ parameters.); My compiler complains about "ignoring return value" when I call RUN_ALL_TESTS(). Why?¶ Some people had been ignoring the return value of RUN_ALL_TESTS(). That is, instead of return RUN_ALL_TESTS(); they write RUN_ALL_TESTS(); This is wrong and dangerous. A test runner needs to see the return value of RUN_ALL_TESTS() in order to determine if a test has passed. If your main() function ignores it, your test will be considered successful even if it has a Google Test assertion failure. Very bad. To help the users avoid this dangerous bug, the implementation of RUN_ALL_TESTS() causes gcc to raise this warning, when the return value is ignored. If you see this warning, the fix is simple: just make sure its value is used as the return value of main(). My compiler complains that a constructor (or destructor) cannot return a value. What's going on?¶. My set-up function is not called. Why?¶ C++ is case-sensitive. It should be spelled as SetUp(). Did you spell it as Setup()? Similarly, sometimes people spell SetUpTestCase() as SetupTestCase() and wonder why it's never called. How do I jump to the line of a failure in Emacs directly?¶ Google Test's failure message format is understood by Emacs and many other IDEs, like acme and XCode. If a Google Test message is in a compilation buffer in Emacs, then it's clickable. You can now hit enter on a message to jump to the corresponding source code, or use `C-x `` to jump to the next failure. I have several test cases which share the same test fixture logic, do I have to define a new test fixture class for each of them? This seems pretty tedious.¶ Google Test output is buried in a whole bunch of log messages. What do I do?¶ The Google Test output is meant to be a concise and human-friendly report. If your test generates textual output itself, it will mix with the Google Test output, making it hard to read. However, there is an easy solution to this problem. Since most log messages go to stderr, we decided to let Google Test output go to stdout. This way, you can easily separate the two using redirection. For example: ./my_test > googletest_output.txt Why should I prefer test fixtures over global variables?¶ There are several good reasons: 1. It's likely your test needs to change the states of its global variables. This makes it difficult to keep side effects from escaping one test and contaminating others, making debugging difficult. By using fixtures, each test has a fresh set of variables that's different (but with the same names). Thus, tests are kept independent of each other. 1. Global variables pollute the global namespace. 1. Test fixtures can be reused via subclassing, which cannot be done easily with global variables. This is useful if many test cases have something in common. How do I test private class members without writing FRIEND_TEST()s?¶ You should try to write testable code, which means classes should be easily tested from their public interface. One way to achieve this is the Pimpl idiom: you move all private members of a class into a helper class, and make all members of the helper class public. You have several other options that don't require using FRIEND_TEST: * Write the tests as members of the fixture class: class Foo { friend class FooTest; ... }; class FooTest : public ::testing::Test { protected: ... void Test1() {...} // This accesses private members of class Foo. void Test2() {...} // So does this one. }; TEST_F(FooTest, Test1) { Test1(); } TEST_F(FooTest, Test2) { Test2(); } class Foo { friend class FooTest; ... }; class FooTest : public ::testing::Test { protected: ... T1 get_private_member1(Foo* obj) { return obj->private_member1_; } }; TEST_F(FooTest, Test1) { ... get_private_member1(x) ... } class YourClass { ... protected: // protected access for testability. int DoSomethingReturningInt(); ... }; // in the your_class_test.cc file: class TestableYourClass : public YourClass { ... public: using YourClass::DoSomethingReturningInt; // changes access rights ... }; TEST_F(YourClassTest, DoSomethingTest) { TestableYourClass obj; assertEquals(expected_value, obj.DoSomethingReturningInt()); } How do I test private class static members without writing FRIEND_TEST()s?¶ We find private static methods clutter the header file. They are implementation details and ideally should be kept out of a .h. So often I make them free functions instead. Instead of: // foo.h class Foo { ... private: static bool Func(int n); }; // foo.cc bool Foo::Func(int n) { ... } // foo_test.cc EXPECT_TRUE(Foo::Func(12345)); You probably should better write: // foo.h class Foo { ... }; // foo.cc namespace internal { bool Func(int n) { ... } } // foo_test.cc namespace internal { bool Func(int n); } EXPECT_TRUE(internal::Func(12345)); I would like to run a test several times with different parameters. Do I need to write several similar copies of it?¶ No. You can use a feature called value-parameterized tests which lets you repeat your tests with different parameters, without defining it more than once. How do I test a file that defines main()?¶ To test a foo.cc file, you need to compile and link it into your unit test program. However, when the file contains a definition for the main() function, it will clash with the main() of your unit test, and will result in a build error. The right solution is to split it into three files: 1. foo.h which contains the declarations, 1. foo.cc which contains the definitions except main(), and 1. foo_main.cc which contains nothing but the definition of main(). Then foo.cc can be easily tested. If you are adding tests to an existing file and don't want an intrusive change like this, there is a hack: just include the entire foo.cc file in your unit test. For example: // File foo_unittest.cc // The headers section ... // Renames main() in foo.cc to make room for the unit test main() #define main FooMain #include "a/b/foo.cc" // The tests start here. ... However, please remember this is a hack and should only be used as the last resort. What can the statement argument in ASSERT_DEATH() be?¶ ASSERT_DEATH(_statement_, _regex_) (or any death assertion macro) can be used wherever _statement_ is valid. So basically _statement_ can be any C++ statement that makes sense in the current context. In particular, it can reference global and/or local variables, and can be: * a simple function call (often the case), * a complex expression, or * a compound statement. any where");} googletest_unittest.cc contains more examples if you are interested. What syntax does the regular expression in ASSERT_DEATH use?¶ On POSIX systems, Google Test uses the POSIX Extended regular expression syntax (). On Windows, it uses a limited variant of regular expression syntax. For more details, see the regular expression syntax. I have a fixture class Foo, but TEST_F(Foo, Bar) gives me error "no matching function for call to Foo::Foo()". Why?¶ Google Test needs to be able to create objects of your test fixture class, so it must have a default constructor. Normally the compiler will define one for you. However, there are cases where you have to define your own: * If you explicitly declare a non-default constructor for class Foo, then you need to define a default constructor, even if it would be empty. * If Foo has a const non-static data member, then you have to define the default constructor and initialize the const member in the initializer list of the constructor. (Early versions of gcc doesn't force you to initialize the const member. It's a bug that has been fixed in gcc 4.) Why does ASSERT_DEATH complain about previous threads that were already joined?¶ With the Linux pthread library, there is no turning back once you cross the line't suffer from this problem, as it doesn't create a manager thread. However, if you don't control which machine your test runs on, you shouldn't depend on this. Why does Google Test require the entire test case, instead of individual tests, to be named FOODeathTest when it uses ASSERT_DEATH?¶ Google Test does not interleave tests from different test cases. That is, it runs all tests in one test case first, and then runs all tests in the next test case, and so on. Google Test does this because it needs to set up a test case before the first test in it is run, and tear it down afterwords. cases, we need to run all tests in the FooTest case before running any test in the BarTest case. This contradicts with the requirement to run BarTest.DefDeathTest before FooTest.Uvw. But I don't like calling my entire test case FOODeathTest when it contains both death tests and non-death tests. What do I do?¶ You don't have to, but if you like, you may split up the test case into FooTest and FooDeathTest, where the names make it clear that they are related: class FooTest : public ::testing::Test { ... }; TEST_F(FooTest, Abc) { ... } TEST_F(FooTest, Def) { ... } typedef FooTest FooDeathTest; TEST_F(FooDeathTest, Uvw) { ... EXPECT_DEATH(...) ... } TEST_F(FooDeathTest, Xyz) { ... ASSERT_DEATH(...) ... } The compiler complains about "no match for 'operator<<'" when I use an assertion. What gives?¶. How do I suppress the memory leak messages on Windows?¶ Since the statically initialized Google Test. I am building my project with Google Test in Visual Studio and all I'm getting is a bunch of linker errors (or warnings). Help!¶ You may get a number of the following linker error or warnings if you attempt to link your test project with the Google Test library when your project and the are not built using the same compiler settings. - LNK2005: symbol already defined in object - LNK4217: locally defined symbol 'symbol' imported in function 'function' - LNK4049: locally defined symbol 'symbol' imported The Google Test project (gtest.vcproj) has the Runtime Library option set to /MT (use multi-threaded static libraries, /MTd for debug). If your project uses something else, for example /MD (use multi-threaded DLLs, /MDd for debug), you need to change the setting in the Google Test project to match your project's. To update this setting open the project properties in the Visual Studio IDE then select the branch Configuration Properties | C/C++ | Code Generation and change the option "Runtime Library". You may also try using gtest-md.vcproj instead of gtest.vcproj. I put my tests in a library and Google Test doesn't run them. What's happening?¶ Have you read a warning on the Google Test Primer page? I want to use Google Test with Visual Studio but don't know where to start.¶ Many people are in your position and one of the posted his solution to our mailing list. Here is his link:. My question is not covered in your FAQ!¶ If you cannot find the answer to your question in this FAQ, there are some other resources you can use: - read other wiki pages, - search the mailing list archive, - ask it on googletestframework@googlegroups.com and someone will answer it (to prevent spam, we require you to join the discussion group before you can post.). Please note that creating an issue in the issue tracker is not a good way to get your answer, as it is monitored infrequently by a very small number of people. When asking a question, it's helpful to provide as much of the following information as possible (people cannot help you if there's not enough information in your question): - the version (or the revision number if you check out from SVN directly) of Google Test you use (Google Test is under active development, so it's possible that your problem has been solved in a later version), - your operating system, - the name and version of your compiler, - the complete command line flags you give to your compiler, - the complete compiler error messages (if the question is about compilation), - the actual code (ideally, a minimal but complete program) that has the problem you encounter.
https://microsoft.github.io/mu/dyn/mu_tiano_platforms/Common/MU_TIANO/CryptoPkg/Library/OpensslLib/openssl/boringssl/third_party/googletest/docs/V1_5_FAQ/
CC-MAIN-2022-27
refinedweb
4,175
64.91
commute 0.2 commute.py helps users who travel across multiple modes of transport and multiple waypoints to make data-based decisions about which route to use and prefer at a given time or at a given time in future. This is a helper script for multi-modal commute planning based on the information that you specify. Table of contents Sample Usage $ commute -c config.yml -s HOME -d WORK Total time: 26min. Home (time: 26m. w/traffic drive) Work ----- Total time: 43min. Home (time: 41m. waiting: 02min. bus) Work ----- Total time: 45min. Home (time: 25m. w/traffic drive) Kwik-e-Mart (time: 20m. w/traffic drive) Work ----- .... Installation You can easily install this script using either pip or easy_install $ pip install commute or $ easy_install commute Configuration Get the Google API key This information is borrowed from Google Maps Python client repo - Visit and log in with a Google Account. - Select an existing project, or create a new project. - Click Enable an API. - Browse for the API, and set its status to “On”. The Python Client for Google Maps Services accesses the following APIs: - Directions API - Distance Matrix API - Elevation API - Geocoding API - Time Zone API - Roads API - Once you’ve enabled the APIs, click Credentials from the left navigation of the Developer Console. - In the “Public API access”, click Create new Key. - Choose Server Key. - If you’d like to restrict requests to a specific IP address, do so now. - Click Create. Your API key should be 40 characters long, and begin with AIza. Create the configuration file Then you will need to create a config.yml file, or just any yaml file with the following key fields api_key: # your Google API key over here places: # all the places which need to be tracked map: # the map, or essentially how you commute between any two places Sample configuration api_key: AIzaaaaaaaaaaaaaaaaaaaaaaaaaaa places: HOME: location: 742, Evergreen Terrace, Springfield alias: Home WORK: location: Springfield Nuclear Power Plant, Springfield alias: Work KWIK_E_MART: location: Kwik-e-Mart, Springfield alias: Apu's MOES_TAVERN: location: Moe's Tavern, Springfield alias: Moe's map: HOME: KWIK_E_MART: - mode: driving MOES_TAVERN: - mode: driving - mode: walking WORK: - mode: driving - mode: transit transit_mode: bus KWIK_E_MART: HOME: - mode: driving MOES_TAVERN: - mode: driving - mode: walking WORK: - mode: driving MOES_TAVERN: HOME: - mode: driving # drinking and driving is not encouraged - mode: walking # You don't go to Kwik-e-mart or to work from Moe's WORK: MOES_TAVERN: - mode: driving Parts of the configuration file api_key api_key will hold the information about the Google Developer’s API key. places map The first nesting under map contains the source, use the identifier from the places key above. map: PLACE1: PLACE2: .... .... .... The next nesting contains a map of possible destinations from the source, which contains the possible ways to travel from the source to the destination map: PLACE1: PLACE2: - mode: driving - mode: transit .... .... .... The routing information supports all the arguments that the Google Maps python client takes. For more information refer to Google Maps Python API documentation Usage $ commute -c config.yml -s HOME -d WORK $ commute -c config.yml -s HOME -d WORK -w now $ commute -c config.yml -s HOME -d WORK -w 'in an hour' $ commute -c config.yml -s HOME -d WORK -w 'friday evening @ 7' For using it as a library, import time from commute import get_all_paths, format_path config_file = "/path/to/config/file" src = "HOME" dst = "WORK" when = time.time() for rank, path in get_all_paths(config, src, dst, when): print(format_path(rank, path)) print("-" * 5) Status This project is at a very early stage right now. Please try it out and report any issues. - Author: Dhruv Baldawa - Keywords: commute googlemaps directions - License: MIT - Categories - Package Index Owner: dhruvbaldawa - DOAP record: commute-0.2.xml
https://pypi.python.org/pypi/commute
CC-MAIN-2017-43
refinedweb
623
62.48
As the head may be deleted due to duplication, I introduce a dummy head, the next node of dummy head would be the new head. cur is the current node and next is cur.next. The case without duplication is simple. If duplication happens, move cur to next.next and try to find one node that has different value from next.val. public class Solution { public ListNode deleteDuplicates(ListNode head) { if (head == null || head.next == null) return head; ListNode dummy = new ListNode (-1), pre = dummy; for (ListNode cur = head, next = head.next; cur != null; next = cur == null ? null : cur.next) { if (next == null || cur.val != next.val) { pre.next = cur; pre = cur; cur = cur.next; } else { cur = next.next; while (cur != null && cur.val == next.val) cur = cur.next; } } pre.next = null; return dummy.next; } } a little bit confused for line 12, why do cur = next.next here? For example if we have a list with three same value like 1 -> 2 -> 2 -> 2 -> 3 ->4, when 'curr' point to the first 2, 'next' point to the second 2, line 12 will jump 'curr' to the third 2 and 'next' to 3. so the third 2 will be kept in the list.
https://discuss.leetcode.com/topic/5694/java-solution-using-dummy-head
CC-MAIN-2017-47
refinedweb
202
85.89
FontSynthesis enum class FontSynthesis Possible options for font synthesis. FontSynthesis is used to specify whether the system should fake bold or slanted glyphs when the FontFamily used does not contain bold or oblique Fonts. If the font family does not include a requested FontWeight or FontStyle, the system fakes bold or slanted glyphs when the Weight or Style, respectively, or both when All is set. If this is not desired, use None to disable font synthesis. It is possible to fake an increase of FontWeight but not a decrease. It is possible to fake a regular font slanted, but not vice versa. FontSynthesis works the same way as the CSS font-synthesis( - .org/TR/css-fonts-4/#font-synthesis) property. import androidx.ui.core.Text import androidx.ui.text.TextStyle import androidx.ui.text.font.Font import androidx.ui.text.font.FontFamily // The font family contains a single font, with normal weight val fontFamily = FontFamily( Font(name = "myfont.ttf", weight = FontWeight.Normal) ) // Configuring the Text composable to be bold // Using FontSynthesis.Weight to have the system render the font bold my making the glyphs // thicker Text( text = "Demo Text", style = TextStyle( fontFamily = fontFamily, fontWeight = FontWeight.Bold, fontSynthesis = FontSynthesis.Weight ) ) Summary Enum values All enum val All : FontSynthesis The system synthesizes both bold and slanted fonts if either of them are not available in the FontFamily None enum val None : FontSynthesis Turns off font synthesis. Neither bold nor slanted faces are synthesized if they don't exist in the FontFamily Style enum val Style : FontSynthesis Only an slanted font is synthesized, if it is not available in the FontFamily. Bold fonts will not be synthesized. Weight enum val Weight : FontSynthesis Only a bold font is synthesized, if it is not available in the FontFamily. Slanted fonts will not be synthesized.
https://developer.android.com/reference/kotlin/androidx/ui/text/font/FontSynthesis
CC-MAIN-2020-05
refinedweb
299
57.67
#include <deLight.h> Light source. Lights are the only light sources you can place in a world. The other light source, the sky layers, light globally while this light sources work locally. Light sources have various parameters influencing the appearance of the lit objects. The color indicates the tint of the light source while the intensities indicate the strength of the light source. Intensities are measured in lumen. Lumen readings for various light sources can be found in literatur providing a good starting ground for realistic lighting. The intensity value indicates the strength of the light at the light source enlighting world elements facing towards the light source. The ambient intensity serves as a sort of backlit to avoid no light contribution on the backside of world elements. It is a sort of local ambient light for only this light source. This way the backlit can be adjusted for worlds if the graphic module does not use a more sophisticated lighting model. The half intensity distance and the attenuation exponent determine how the light attenuates over distance. This model is more powerful than the typical static-linear-quadratic attenuation model and requires also requires less parameters. The light is attenuated to half the strength reaching the distance set in the half intensity distance. The attenuation exponent indicates the power to which the distance of a fragment to the light source is raised. A value of 2 yields physically correct lighting while larger values produce dramatic lighting effects like light staying mostly constant over a given distance then falling off sharply. The cut off distance indicates a maximal distance the light travels before attenuated to zero strength. This is a hint for the graphic module especially for confined light sources which only illuminate objects in a small place. Each light has a position where the light starts from. Depending on the light type additional parameters are used. Type of the light source. Movement hints of the light. Parameter hints of the light. Creates a new light source with the given resource manager. Clean up light source. Adds a cage shape. Referenced by GetCastShadows(). Determines if the light is activated and emitting light. References SetActivated(). Retrieves the ratio of ambient light in relation to the total intensity. References SetAmbientRatio(). Retrieves the light angles. References SetAngles(). Retrieves the cage shape at the given index. Referenced by GetCastShadows(). Retrieves the number of cage shapes. Referenced by GetCastShadows(). Determines if the light casts shadows. References AddCageShape(), GetCageShapeAt(), GetCageShapeCount(), HasCageShape(), IndexOfCageShape(), NotifyCageChanged(), RemoveAllCageShapes(), RemoveCageShape(), and SetCastShadows(). Retrieves the color of the light source. References SetColor(). Retrieves the light cut-off distance. References SetCutOffDistance(). Retrieves the graphic system peer object. References SetGraphicLight(). Retrieves the distance in meters at which the intensity is halved. References SetHalfIntensityDistance(). Retrieves the light importance. References SetHintLightImportance(). Retrieves the movement hint. References SetHintMovement(). Retrieves the parameter hint. References SetHintParameter(). Retrieves the shadow importance. References SetHintShadowImportance(). Retrieves the intensity of the light source. References SetIntensity(). Next light in the parent world linked list. References SetLLWorldNext(). Previous light in the parent world linked list. References SetLLWorldPrev(). Retrieves the orientation of the light source. References SetOrientation(). Parent world or NULL. References SetParentWorld(). Retrieves the position of the light source. References SetPosition(). Retrieves the light canvas view or null if no light canvas view is set. Used only by spot lights. References SetProjectorCanvas(). Retrieves the light image or null if no light image is set. Used only by spot lights. References SetProjectorImage(). Retrieves the light range in meters. References SetRange(). Retrieves the shadow gap size. References SetShadowGap(). Retrieves the shadow origin. References SetShadowOrigin(). Retrieves the spot exponent. References SetSpotExponent(). Determines if the cage shape exists. Referenced by GetCastShadows(). Retrieves the index of the given cage shape or -1 if not found. Referenced by GetCastShadows(). Notifies the peers that the light cage changed. Referenced by GetCastShadows(). Removes all cage shapes. Referenced by GetCastShadows(). Removes a cage shape. Referenced by GetCastShadows(). Sets if the light is activated and emitting light. Referenced by GetActivated(). Sets the ratio of ambient light in relation to the total intensity. Referenced by GetAmbientRatio(). Sets the light angles. Referenced by GetAngles(). Sets if the light casts shadows. Referenced by GetCastShadows(). Sets the color of the light source. Referenced by GetColor(). Sets the light cut-off distance. Referenced by GetCutOffDistance(). Sets the graphic system peer object. Referenced by GetGraphicLight(). Sets the distance in meters at which the intensity is halved. Referenced by GetHalfIntensityDistance(). Sets the hint light importance. Referenced by GetHintLightImportance(). Sets the movement hint. Referenced by GetHintMovement(). Sets the parameter hint. Referenced by GetHintParameter(). Sets the hint shadow importance. Referenced by GetHintShadowImportance(). Sets the intensity of the light source. Referenced by GetIntensity(). Set next light in the parent world linked list. Referenced by GetLLWorldNext(). Set next light in the parent world linked list. Referenced by GetLLWorldPrev(). Sets the orientation of the light source. Referenced by GetOrientation(). Set parent world or NULL. Referenced by GetParentWorld(). Sets the position of the light source. Referenced by GetPosition(). Retrieves the light canvas view or null to unset the light canvas view. If the light canvas view is set the light parameters are bypassed and this canvas view is projected onto the object to be lit. The canvas view is used directly without multiplying the colors. Used only by spot lights. Referenced by GetProjectorCanvas(). Retrieves the light image or null to unset the light image. If the light image is set the light parameters are bypassed and this image is projected onto the object to be lit. If the image is grayscale the light color is first multiplied otherwise the image is used as it is. Used only by spot lights. Referenced by GetProjectorImage(). Sets the light range in meters. Referenced by GetRange(). Sets the shadow gap size. Referenced by GetShadowGap(). Sets the shadow origin. Referenced by GetShadowOrigin(). Sets the spot exponent. Referenced by GetSpotExponent().
https://dragengine.rptd.ch/docs/dragengine/latest/classdeLight.html
CC-MAIN-2018-30
refinedweb
975
54.9
I recently built a relatively large (100cm × 80cm) 3D printed artwork, which features a painted relief of mathematical functions. Read how I prepared the data, converted it into 3D printable tiles and converted them into printable gcode. All the images and previews are from the proof of concept I created. The final artwork is quite different, but I used the same code and methods to develop it. In this short tutorial, I will focus on the technical aspects to create the 3d printed tiles, including the scripts I used. I will only briefly cover the assembly and painting process, as I think this is too much out of the scope of my blog. My Vision Since working with computers, I have always liked two or three dimensions function plots of mathematical formulas. When I got a 3D printer, I tried various ways to create something physical from mathematical data. After creating smaller art pieces, I had to test something larger. My idea was to visualize three different functions on a larger canvas. The abstract artwork has three distinct sections, and each of these sections has its colour and mathematical formula. Define the Picture Dimension As all the calculations and generation of the data and models will be complex, I first set the final dimensions of the artwork. I found 100cm × 80cm is a good size, large enough to impress, but still light enough to hang on a wall. Next, I tested various data resolutions and how they look printed: A one-millimetre data resolution was just a little bit too large to create a smooth surface, so I set it to 0.5mm. Yet, I also found that larger data resolutions of 10mm create a unique effect, and it is something I will try at a later time. Prepare the Function Data I started creating the functions. Here, I used a simple python script to create a two-dimensional preview image of the function. import numpy as np import imageio as iio from math import sin, pi, sqrt, radians from pathlib import Path # Settings DATA_WIDTH = 2000 # points DATA_HEIGHT = 1600 # points PROJECT_DIR = Path(__file__).parent PREVIEW_IMG_PATH = PROJECT_DIR / 'function_preview.png' class WorkingSet: def __init__(self): self.out_img = np.zeros(shape=(DATA_HEIGHT, DATA_WIDTH, 3), dtype=np.uint8) @staticmethod def get_z(x: float, y: float) -> float: """ Function to generate the z height. .78)+0.5) return z def run(self): """ Run this working set. """ print('Calculate function data...') out = np.zeros(shape=(DATA_HEIGHT, DATA_WIDTH)) for py in range(DATA_HEIGHT): for px in range(DATA_WIDTH): z = self.get_z(px/(DATA_WIDTH-1), 1.0-py/(DATA_HEIGHT-1)) if z > 1.0: z = 1.0 if z < 0.0: z = 0.0 out = z() The script has a simple “settings” section, where I define the primary parameters of the preview. The shown case is 2000 by 1600 pixels, which match the 0.5mm resolution for the 100cm × 80cm picture. In get_z I define the function to create the Z-value for each image pixel. It takes normalized values from 0.0 to 1.0 and returns values in the range of 0.0 to 1.0. I used this simple preview script to create all three formulas. Yet, the visualization is not perfect because a grayscale image does not correctly visualize height levels. Prepare the Sections To visualize the colours and sections of the final image, I created a simplified representation in a vector drawing program. Then, I made an even simpler version for the calculations, only consisting of red, blue, and green areas. Next, I exported it as a PNG image with 2000 × 1600 pixels without antialiasing. Calculate the Picture Relief Next, I calculated the Z-data for the whole picture, combining all three functions into one. I used the following python script for this calculation: import enum from typing import Optional import numpy as np import imageio as iio from math import sin, pi, sqrt, radians from pathlib import Path # Settings PICTURE_WIDTH = 1.0 # m PICTURE_HEIGHT = 0.8 # m RELIEF_HEIGHT = 0.03 # m RELIEF_BASE_THICKNESS = 0.002 # m PICTURE_RESOLUTION = 0.0005 # m DATA_WIDTH = 2000 # points DATA_HEIGHT = 1600 # points PROJECT_DIR = Path(__file__).parent SECTION_WIDTH = DATA_WIDTH # px SECTION_HEIGHT = DATA_HEIGHT # px SECTION_PATH = PROJECT_DIR / 'sections.png' PREVIEW_IMG_PATH = PROJECT_DIR / 'preview_image.png' Z_DATA_PATH = PROJECT_DIR / 'z_data.npy' SECTION_BLEND_PATH = PROJECT_DIR / 'section_blend_data.npy' SECTION_BLEND_SIZE = 3 # delta pixels class Section(enum.Enum): NONE = 0 RED = 1 GREEN = 2 BLUE = 3 class WorkingSet: def __init__(self): self.img: Optional[np.ndarray] = None self.blend: Optional[np.ndarray] = None self.out_img = np.zeros(shape=(DATA_HEIGHT, DATA_WIDTH, 3), dtype=np.uint8) @staticmethod def get_blue_z(x: float, y: float) -> float: """ Function to generate the z height for the blue section. @param x The input in the range 0.0-1.0 @param y The input in the range 0.0-1.0 @return The result in the range 0.0-1.0. """ def wave(v: float): n = (v + 0.5) % 1.0 return (n if n < 0.5 else 1.0-n) * 2.0 d = (x + y*1.46) * 15 f1 = (sin(radians(d*360))+1.0)/2.0 f2 = wave(d) z = ((f1 * y) + (f2 * (1.0-y))) * ((1.0-y)*0.3+0.7) return z @staticmethod def get_red_z(x: float, y: float) -> float: """ Function to generate the z height for the red section. @param x The input in the range 0.0-1.0 @param y The input in the range 0.0-1.0 @return The result in the range 0.0-1.0. """ def wave(v: float): n = v % 1.0 return (n if n < 0.5 else 1.0-n) * 2.0 h = abs(sin(radians(x*1.2*180)))*0.8+0.2 d = sin(radians((x+0.1)*360))*0.7 return wave(x*25+d)*h @staticmethod def get_green_z(x: float, y: float): """ Function to generate the z height for the green section. .8)+0.5) return z def get_section(self, px: int, py: int) -> Section: """ Get the section based on a pixel in the section image. :param px: The pixel in the x axis (0-image width) :param py: The pixel in the y axis (0-image height) :return: 0 for no section (out of bounds), 1, 2, 3 for a coloured section. """ if px < 0 or px >= SECTION_WIDTH or py < 0 or py >= SECTION_HEIGHT: return Section.NONE p = self.img if p[0] >= 128: return Section.RED if p[1] >= 128: return Section.GREEN return Section.BLUE def calculated_blended_row(self, py: int) -> np.ndarray: """ Calculate a row from the section file, blending the surrounding pixels into a function weighting map. :param py: The pixel row in the y axis. :return: An array with function weights for all three functions. """ row = np.zeros(shape=(DATA_WIDTH, 3)) for px in range(DATA_WIDTH): found_pixels = 0 for y in range(py - SECTION_BLEND_SIZE, py + SECTION_BLEND_SIZE + 1): for x in range(px - SECTION_BLEND_SIZE, px + SECTION_BLEND_SIZE + 1): m = self.get_section(x, y) if m == Section.NONE: continue found_pixels += 1 row[px, m.value-1] += 1 if found_pixels > 0: row[px] /= found_pixels return row def get_z(self, px: int, py: int) -> float: """ Calculate the z height for a given pixel. :param px: The x coordinate for the pixel. :param py: The y coordinate for the pixel. :return: The z value for the given pixel coordinates. """ m = self.blend x = px / (DATA_WIDTH - 1) y = 1.0 - (py / (DATA_HEIGHT - 1)) z = (self.get_red_z(x, y) * m[0]) + (self.get_green_z(x, y) * m[1]) + (self.get_blue_z(x, y) * m[2]) z /= (m[0]+m[1]+m[2]) if z < 0: z = 0 if z > 1.0: z = 1.0 return z def run(self): """ Run this working set. """ if SECTION_BLEND_PATH.is_file(): print('Reading blend data...') self.blend = np.load(str(SECTION_BLEND_PATH)) else: print('Generating blend data...') self.img = iio.imread(SECTION_PATH) self.blend = np.ndarray(shape=(DATA_HEIGHT, DATA_WIDTH, 3)) for py in range(DATA_HEIGHT): print(f'at row {py}...') self.blend = self.calculated_blended_row(py) np.save(str(SECTION_BLEND_PATH), self.blend) print('Calculate functions...') out = np.zeros(shape=(DATA_HEIGHT, DATA_WIDTH)) for py in range(DATA_HEIGHT): for px in range(DATA_WIDTH): out = self.get_z(px, py) np.save(str(Z_DATA_PATH), out)() I use three functions get_blue_z, get_red_z and get_green_z to for the different z-profiles of the three sections. After the start, I read the section data from the image sections.png, adding a small “blur effect” and converting them into a weight map. This map contains three values for each data point. Each value controls the amount used from each function. Converting these values can take quite a while. Therefore I store them in the file section_blend_data.npy for later use. Next, I use the weight map to calculate the final z-values for all data points. From these values, I create another preview image preview_image.png and also store them in the file z_data.npy. The preview image looks like this: Convert the Data into 3D Objects Next, I switched to Blender to create 3D objects from the data. Here I used a simple script, creating individual tiles from the data: import bpy import numpy as np # Settings RELIEF_HEIGHT = 0.03 # m RESOLUTION = 0.0005 # m BASE_THICKNESS = 0.0006 # m TILE_SPACING = 0.0002 # m Z_DATA = np.load('<path>/z_data.npy') Z_DATA_WIDTH = 2000 # px Z_DATA_HEIGHT = 1600 # px def get_z(x: float, y: float) -> float: """ Get the Z value for a data point. :param x: The x coordinate in data space. :param y: The y coordinate in data space. :return: The resulting Z value in model space. """ if x < 0: x = 0 if y < 0: y = 0 if x >= (Z_DATA_WIDTH - 1): x = Z_DATA_WIDTH - 1 if y >= (Z_DATA_HEIGHT - 1): y = Z_DATA_HEIGHT - 1 z = float(Z_DATA[Z_DATA_HEIGHT - y - 1, x]) * RELIEF_HEIGHT return z def create_tile(sx: int, sy: int, width: int, height: int, name: str): """ Create a tile with start sx, sy and width, height in shape units """ vert_width = width + 1 vert_height = height + 1 def get_vert_index(x: int, y: int) -> int: """ Get the vertice index for one coordinate. """ return x * vert_width + y def face(x: int, y: int): """ Get one face tuple on the surface. """ return ( get_vert_index(x, y), get_vert_index(x + 1, y), get_vert_index(x + 1, y + 1), get_vert_index(x, y + 1)) def get_point(x: int, y: int): """ Get one single point in model space. """ rx = (sx+x) * RESOLUTION if x == width: rx -= TILE_SPACING ry = (sy+y) * RESOLUTION if y == height: ry -= TILE_SPACING rz = get_z(sx+x, sy+y) return rx, ry, rz # Create the vertices for the surface. vertices = [get_point(x, y) for x in range(vert_width) for y in range(vert_height)] faces = [face(x, y) for x in range(width) for y in range(height)] # Add four vertices for the bottom bt_index = len(vertices) # Store the index of these four bottom vertices. x1 = sx * RESOLUTION x2 = (sx+width) * RESOLUTION - TILE_SPACING y1 = sy * RESOLUTION y2 = (sy+height) * RESOLUTION - TILE_SPACING vertices.extend([ (x1, y1, -BASE_THICKNESS), (x2, y1, -BASE_THICKNESS), (x2, y2, -BASE_THICKNESS), (x1, y2, -BASE_THICKNESS)]) # add all other sides # front top_edge = [get_vert_index(vert_width - x - 1, 0) for x in range(vert_width)] top_edge.extend([bt_index, bt_index+1]) faces.append(top_edge) # back top_edge = [get_vert_index(x, vert_height-1) for x in range(vert_width)] top_edge.extend([bt_index+2, bt_index+3]) faces.append(top_edge) # left top_edge = [get_vert_index(0, y) for y in range(vert_height)] top_edge.extend([bt_index+3, bt_index]) faces.append(top_edge) # right top_edge = [get_vert_index(vert_width - 1, vert_height - y - 1) for y in range(vert_height)] top_edge.extend([bt_index+1, bt_index+2]) faces.append(top_edge) # bottom faces.append([bt_index, bt_index+3, bt_index+2, bt_index+1]) # Create Mesh Datablock mesh = bpy.data.meshes.new(name) mesh.from_pydata(vertices, [], faces) # Create Object and link to scene obj = bpy.data.objects.new(name, mesh) bpy.context.scene.collection.objects.link(obj) def main(): tile_width = 400 tile_height = 400 for x in range(5): for y in range(4): create_tile(tile_width * x, tile_height * y, tile_width, tile_height, f'tile{y}{x}') if __name__ == '__main__': main() What didn’t Work Originally I started with a different approach, where I created one solid object with the profile. Then, I made boxes in the correct sizes of the tiles and boolean operations to convert the relief into smaller entities. While this approach seemed more straightforward and accurate, Blender could not produce correct geometrical results. The created tiles always had defects, which caused problems in the slicer. Repairing these defects was time-consuming, but I also had to fix the objects after every rebuild of the geometries manually. My Blender script is as simple as possible. It loads the generated Z-data into a NumPy array, creating separate tiles. Instead of working with actual dimensions and dividing them into the data points, I define the distance between the data points using the variable RESOLUTION. I create each tile by generating a grid of vertices, using the Z-values multiplied with the desired relief size. Next, I add faces between all the vertices to build the top surface of the tile. Now I add four vertices more at the four bottom corners of the object. Then I create faces on the sides, connecting all the vertices from the side of the top surface with two vertices at the bottom. I move the vertices on the top and right sides by a tiny amount to create a minimal gap between the tiles. This gap is required to compensate for printing variations and the glue. Reduce the Mesh Size? I found reducing the mesh size is not worth the time and effort. The slicer software can efficiently deal with complex objects like these. Also, because the grid never perfectly represents the functions, I found changing the faces will often make these imperfections more visible. Export the Tiles from Blender Exporting all the tiles from Blender is simple. The export dialogue has a batch function, where you can save a selection of objects into individual files. So, I export all the generated tiles in individual STL files. I name them picture_tileXX.stl. In the Blender export dialogue, I use picture_.stl as filename, where it automatically adds the name of the objects at the end of this name. Slicing the Tiles Next, I slice all the objects and convert them into gcode. I first import one of the STL files in Prusa Slicer and configure all required parameters. After I am happy with the slicing results, I store the configuration with “File”->”Export”->”Export Config…” into a config.ini file. Now I can use the following Python script to convert all files into gcode automatically: import subprocess from pathlib import Path PROJECT_DIR = Path('<project path>/') PRUSA_CMD = '/Applications/PrusaSlicer.app/Contents/MacOS/PrusaSlicer' PRUSA_CONFIG = PROJECT_DIR / 'config.ini' STL_DIR = PROJECT_DIR STL_PATTERN = 'picture_tile*.stl' def main(): for path in STL_DIR.glob(STL_PATTERN): print(f'Processing: {path.name}') cmd = [ PRUSA_CMD, '--export-gcode', '--load=', str(PRUSA_CONFIG), '--output=', str(path.with_suffix('.gcode')), '--slice', str(path) ] subprocess.run(cmd) if __name__ == '__main__': main() Print the Tiles Printing all the 20 tiles took some time, but watching how the relief emerges layer by layer is lovely. After each print, I wrote the tile number on its back, which was a great help with the assembly. Assemble and Paint the Tiles I used a slightly larger sheet of plywood to assemble the tiles into the painting. I put all the tiles into place, using a fast curing epoxy glue. Next, I smoothed the surface with sandpaper. Smoothing the surface also removed most of the small tips from the seams. Where required, I filled the small gap between the tiles with wood filler. Adding a primer further smoothed the surface. I used acrylic colours to paint the relief with the colours I wanted. These colours nicely separated the different reliefs and added more depth and detail. Conclusion I hope this short tutorial gave you the inspiration to create your own large 3d painting. The final result at this scale is very impressive and with the large structures, the painting looks very different depending on the lights and its shadows. More Posts Box in 500 Sizes with Smileys Let’s Print a Cat/Pet Feeding Device (Part 7) Round Corner Template Set for Woodworking How to Create a Prototype Connector from a Pin Header Bottle Opener for Elderly People or with Disabilities
https://luckyresistor.me/2021/12/30/how-to-create-a-large-3d-printed-artwork/?shared=email&msg=fail
CC-MAIN-2022-05
refinedweb
2,686
67.76
Is React client side SEO friendly or not? Is React client-side rendering SEO friendly to Google crawl bots? This is a question that most SEO marketing experts, developers, and even myself have. According to Martin Splitt, a Google developer, who explains SEO techniques for JavaScript, said in March 2019 that client-side rendering is SEO friendly. How does google crawl a site and JavaScript? A Google crawl bot will hit your site in two waves. During the first wave, a crawl bot will hit your site and process the initial HTML that is served by the server. This wave is instant, and gets indexed on the first day. The content being rendered via JavaScript will not be indexed during this phase. Content being rendered on the client-side will get crawled on the second wave. According to Martin Splitt, this wave can take days to get indexed. I wanted to put this to the test. The experiment: Index my React client-side content The test is simple and ran on my personal site. Create a blank shell HTML file, and let some React code render content via client-side. And observe how the Google crawl bots handles this. Step 1: Create blank HTML Linguine Code is built on Next.JS. All I had to do was create a blank shell page. import * as React from 'react'; import Head from 'next/head'; const ReactSEOFriendly = () => { return ( <> <Head> <title>React client side SEO friendly content</title> <meta name="description" content="This page shows if React client side is friendly in the initial render." /> <script crossOrigin="true" src=" /> <script crossOrigin="true" src=" /> <script src=" /> </Head> <div id="seo-client-side" /> <script src="/js/react-compile.js"/> </> ) } export default ReactSEOFriendly; In this file, I’m importing the React scripts and my compiled React code that will render my content. Step 2: Render content with React (function() { window.onload = function() { if (React && ReactDOM) { class Container extends React.Component { constructor(props) { super(props); this.state = { showDescNow: false, showDeferDesc2: false, }; } componentDidMount() { this.setState({ showDescNow: true }); setTimeout(() => { this.setState({ showDeferDesc2: true, }) }, 5000); } render() { return ( <React.Fragment> <h1>This text shows all the time.</h1> {this.state.showDescNow && <h2>This text shows after component has mounted.</h2>} {this.state.showDeferDesc2 && <h3>This text shows 5 seconds after component has mounted.</h3>} </React.Fragment> ) } } ReactDOM.render(<Container />, document.getElementById('seo-client-side')); } } })() One thing to note is that I’m rendering parts of the content as 3 given times. - Instant - After the componentDidMount lifecycle - 5 seconds later once the component has mounted My goal here was to see if I can make the crawl bot not index parts of the content. My expectations was that it would get the instant copy right away. <h1>This text shows all the time.</h1> The other part of the content was displayed a few milliseconds after the React code has been parsed, and executed <h2>This text shows after component has mounted.</h2> The last part of the content was a big delay of 5 seconds. This was to try an emulate an API call to fetch data, with a long response time. <h3>This text shows 5 seconds after component has mounted.</h3> Step 3: Tell Google to queue an index This step had to happen in the Google Search Console. I told Google to test my live page, and queue a crawl. I was shocked to see that 2 parts of the content was picked up on the screenshot tab after it ran the view tested page. And it also looked like the big 5 second content delay portion did not get picked up. The results: Check Google search for React SEO friendly page 20 minutes later, I decided to go the Google search and search for And here were the search results I was honestly shocked, that - Google indexed the React content so quickly - It indexed the 5 second delay portion of the content Is React good or bad for SEO? That’s a really hard question. Clearly the results above show that React rendered content can be indexed, even with lengthy delays. But when you’re building a React page, it’s usually a single page application. The behavior of links is completely different than if it was server-side rendered. And link building (internal or external) plays a big factor in the SEO world. You have to be very careful how you build links in React. Google crawl bots does NOT interact with content that gets triggered by some human interaction like clicking, scrolling or hovering. For example, you can’t do this in React if you want Google to crawl your page properly const PageWithLinks = () => ( <> <a onClick={goTo('/page/url')}>Just don't</a> <a href="/page/url" onClick={goTo('/different/page')}>Don't do this either!</a> <span onClick={goTo('/different/page')}>This a big WTF!</span> </> ); These are good React link practices const PageWithLinks = () => ( <> <a href="/page/url">This works! It just defeats the purpose of SPA</a> <a href="/page/url" onClick={goTo('/page/url')}>Here's a workaround :)</a> </> ); Conclusion I’ve been fortunate to talk to SEO experts about this topic, and big tech media company developers about their experience using React to render content on the client side. Unfortunately, all the folks I talked to said that they saw a huge dip in traffic. Either to long to index content, or parts of content not being indexed. The part about long indexing times, is a bit iffy, since I witness my React friendly SEO page being indexed in 20 minutes. Perhaps Google search is getting smarter about JavaScript rendering. Personally, I’ll stick to server side rendering because it’s easier, and less constraints or rules that I have to worry about. You can see the React SEO friendly page here. You can also see the Google query here. I like to tweet about React and post helpful code snippets. Follow me there if you would like some too!
https://linguinecode.com/post/react-seo-friendly-or-not
CC-MAIN-2022-21
refinedweb
998
65.12
| about | « Back to DataShop DataShop's Export function allows you to save your log data out of DataShop and into an anonymous, tab-delimited text file. As in the other DataShop reporting tools, the sample selector allows you to filter rows based on criteria you define, and apply your own knowledge component model to the data before exporting. You can now view and edit your file in a text or spreadsheet editor. Important: If you save your data from a spreadsheet editor and would like to import the data into DataShop, be sure to preserve the tab-delimited text format of the file. Newer versions of Microsoft Excel tend to automatically format date/time fields for you. The result of this is that timestamp values are presented in a format that may obscure levels of detail in the time data. If you then save the file from Excel, you will lose this information! Columns of the current export formats are described below. Note: The list and order of columns in any of the export formats can change at any time. If you are writing a program that expects the columns in a certain order, be sure to verify the header of the column before assuming it's the column you expect. See the history of changes to these formats. Within each sample, rows are ordered by student then by transaction time. If the transaction time is identical for a given student, we can't know the real order in which the transactions occurred, so DataShop uses internal database identifiers to order the rows consistently. Within each sample, rows are ordered by student, then time of the first correct attempt (“Correct Transaction Time”) or, in the absence of a correct attempt, the time of the final transaction on the step (“Step End Time”). Within each sample, rows are ordered by student, then problem start time. Problem View is determined one of three ways: Problem Start Time is determined one of three ways: Exported data is anonymized; real student IDs are replaced with anonymous IDs during the export process. Should you wish to obtain identifiable student IDs—for example, if you are the instructor for a course or if the original data was anonymous—please authorized to view the real student IDs. We will then provide a mapping table from the anonymized IDs to the real student IDs. If downloading of your export file is blocked by Internet Explorer, check your browser's security settings:".
https://pslcdatashop.web.cmu.edu/help?page=export&datasetId=622
CC-MAIN-2021-04
refinedweb
413
58.32
I have been using the Selenium and Python2.7, on Firefox. I would like it to work like the following, but I do not know how to write code. Driver.get('') Url='' Driver.find_element_by_id("link").send_keys(Url) Driver.find_element_by_id("submit").click() time.sleep(5) #[Click]lowermost(highest quality) radio button #[Click]Proceed button You are able to get frame locator value in HTML code which is specified in iframes tag. u can move up from where your locator value is existed in HTML code. This function is suitable: def frame_switch(css_selector): driver.switch_to.frame(driver.find_element_by_css_selector(css_selector)) If you are just trying to switch to the frame based on the name attribute, then you can use this: def frame_switch(name): driver.switch_to.frame(driver.find_element_by_name(name))
https://codedump.io/share/lIdb9Y1FjjA8/1/how-to-identify-and-switch-to-the-frame-source-in-selenium
CC-MAIN-2016-50
refinedweb
126
50.33
Why did you learn about locals and globals? So you can learn about dictionary-based string formatting. As you recall, regular string formatting provides an easy way to insert values into strings. Values are listed in a tuple and inserted in order into the string in place of each formatting marker. While this is efficient, it is not always the easiest code to read, especially when multiple values are being inserted. You can't simply scan through the string in one pass and understand what the result will be; you're constantly switching between reading the string and reading the tuple of values. There is an alternative form of string formatting that uses dictionaries instead of tuples of values. Example 8.13. Introducing dictionary-based string formatting >>> params = {"server":"mpilgrim", "database":"master", "uid":"sa", "pwd":"secret"} >>> "%(pwd)s" % params 'secret' >>> "%(pwd)s is not a good password for %(uid)s" % params'secret' >>> "%(pwd)s is not a good password for %(uid)s" % params 'secret is not a good password for sa' >>> "%(database)s of mind, %(database)s of body" % params'secret is not a good password for sa' >>> "%(database)s of mind, %(database)s of body" % params 'master of mind, master of body''master of mind, master of body' So why would you use dictionary-based string formatting? Well, it does seem like overkill to set up a dictionary of keys and values simply to do string formatting in the next line; it's really most useful when you happen to have a dictionary of meaningful keys and values already. Like locals. Example 8.14. Dictionary-based string formatting in BaseHTMLProcessor.py def handle_comment(self, text): self.pieces.append("<!--%(text)s-->" % locals()) Example 8.15. More dictionary-based string formatting def unknown_starttag(self, tag, attrs): strattrs = "".join([' %s="%s"' % (key, value) for key, value in attrs]) self.pieces.append("<%(tag)s%(strattrs)s>" % locals())self.pieces.append("<%(tag)s%(strattrs)s>" % locals())
http://docs.activestate.com/activepython/3.5/dip/html_processing/dictionary_based_string_formatting.html
CC-MAIN-2018-47
refinedweb
322
51.58
How is it possible that files can contain null bytes in operating systems written in a language with null-terminating strings (namely, C)? For example, if I run this shell code: $ printf "Hello\00, World!" > test.txt $ xxd test.txt 0000000: 4865 6c6c 6f00 2c20 576f 726c 6421 Hello., World! test.txt Hello Hello\00, World! Null-terminated strings are a C construct used to determine the end of a sequence of characters intended to be used as a string. String manipulation functions such as strcmp, strcpy, strchr, and others use this construct to perform their duties. But you can still read and write binary data that contains null bytes within your program as well as to and from files. You just can't treat them as strings. Here's an example of how this works: #include <stdio.h> #include <stdlib.h> int main() { FILE *fp = fopen("out1","w"); if (fp == NULL) { perror("fopen failed"); exit(1); } int a1[] = { 0x12345678, 0x33220011, 0x0, 0x445566 }; char a2[] = { 0x22, 0x33, 0x0, 0x66 }; char a3[] = "Hello\x0World"; // this writes the whole array fwrite(a1, sizeof(a1[0]), 4, fp); // so does this fwrite(a2, sizeof(a2[0]), 4, fp); // this does not write the whole array -- only "Hello" is written fprintf(fp, "%s\n", a3); // but this does fwrite(a3, sizeof(a3[0]), 12, fp); fclose(fp); return 0; } Contents of out1: [dbush@db-centos tmp]$ xxd out1 0000000: 7856 3412 1100 2233 0000 0000 6655 4400 xV4..."3....fUD. 0000010: 2233 0066 4865 6c6c 6f0a 4865 6c6c 6f00 "3.fHello.Hello. 0000020: 576f 726c 6400 World. For the first array, because we use the fwrite function and tell it to write 4 elements the size of an int, all the values in the array appear in the file. You can see from the output that all values are written, the values are 32-bit, and each value is written in little-endian byte order. We can also see that the second and fourth elements of the array each contain one null byte, while the third value being 0 has 4 null bytes, and all appear in the file. We also use fwrite on the second array, which contains elements of type char, and we again see that all array elements appear in the file. In particular, the third value in the array is 0, which consists of a single null byte that also appears in the file. The third array is first written with the fprintf function using a %s format specifier which expects a string. It writes the first 5 bytes of this array to the file before encountering the null byte, after which it stops reading the array. It then prints a newline character ( 0x0a) as per the format. The third array it written to the file again, this time using fwrite. The string constant "Hello\x0World" contains 12 bytes: 5 for "Hello", one for the explicit null byte, 5 for "World", and one for the null byte that implicitly ends the string constant. Since fwrite is given the full size of the array (12), it writes all of those bytes. Indeed, looking at the file contents, we see each of those bytes. As a side note, in each of the fwrite calls, I've hardcoded the size of the array for the third parameter instead of using a more dynamic expression such as sizeof(a1)/sizeof(a1[0]) to make it more clear exactly how many bytes are being written in each case.
https://codedump.io/share/wCq99xuZwZP5/1/how-can-a-file-contain-null-bytes
CC-MAIN-2017-26
refinedweb
583
76.56
Introduction. The abundance on unstructured data Interestingly, unstructured data represents huge under-exploited opportunity. It is closer to how we communicate and interact as humans. It also contains a lot of useful & powerful information. For example, if a person speaks; you not only get what he / she says but also what were the emotions of the person from the voice. Also the body language of the person can show you many more features about a person, because actions speak louder than words! So in short, unstructured data is complex but processing it can reap easy rewards. In this article, I intend to cover an overview of audio / voice processing with a case study so that you would get a hands-on introduction to solving audio processing problems. Let’s get on with it! Table of Contents - What do you mean by Audio data? - Applications of Audio Processing - Data Handling in Audio domain - Let’s solve the UrbanSound challenge! - Intermission: Our first submission - Let’s solve the challenge! Part 2: Building better models - Future Steps to explore What do you mean by Audio data? Directly or indirectly, you are always in contact with audio. Your brain is continuously processing and understanding audio data and giving you information about the environment. A simple example can be your conversations with people which you do daily. This speech is discerned by the other person to carry on the discussions. Even when you think you are in a quiet environment, you tend to catch much more subtle sounds, like the rustling of leaves or the splatter of rain. This is the extent of your connection with audio. So can you somehow catch this audio floating all around you to do something constructive? Yes, of course! There are devices built which help you catch these sounds and represent it in computer readable format. Examples of these formats are - wav (Waveform Audio File) format - mp3 (MPEG-1 Audio Layer 3) format - WMA (Windows Media Audio) format If you give a thought on what an audio looks like, it is nothing but a wave like format of data, where the amplitude of audio change with respect to time. This can be pictorial represented as follows. Applications of Audio Processing Although we discussed that audio data can be useful for analysis. But what are the potential applications of audio processing? Here I would list a few of them - Indexing music collections according to their audio features. - Recommending music for radio channels - Similarity search for audio files (aka Shazam) - Speech processing and synthesis – generating artificial voice for conversational agents Here’s an exercise for you; can you think of an application of audio processing that can potentially help thousands of lives? Data Handling in Audio domain As with all unstructured data formats, audio data has a couple of preprocessing steps which have to be followed before it is presented for analysis.. We will cover this in detail in later article, here we will get an intuition on why this is done. The first step is to actually load the data into a machine understandable format. For this, we simply take values after every specific time steps. For example; in a 2 second audio file, we extract values at half a second. This is called sampling of audio data, and the rate at which it is sampled is called the sampling rate. Another way of representing audio data is by converting it into a different domain of data representation, namely the frequency domain. Here, we separate one audio signal into 3 different pure signals, which can now be represented as three unique values in frequency domain. There are a few more ways in which audio data can be represented, for example. using MFCs (Mel-Frequency cepstrums. PS: We will cover this in the later article). These are nothing but different ways to represent the data. Now the next step is to extract features from this audio representations, so that our algorithm can work on these features and perform the task it is designed for. Here’s a visual representation of the categories of audio features that can be extracted. After extracting these features, it is then sent to the machine learning model for further analysis. Let’s solve the UrbanSound challenge! Let us have a better practical overview in a real life project, the Urban Sound challenge. This practice problem is meant to introduce you to audio processing in the usual classification scenario. The dataset contains 8732 sound excerpts (<=4s) of urban sounds from 10 classes, namely: - air conditioner, - car horn, - children playing, - dog bark, - drilling, - engine idling, - gun shot, - jackhammer, - siren, and - street music Here’s a sound excerpt from the dataset. Can you guess which class does it belong to? To play this in the jupyter notebook, you can simply follow along with the code. import IPython.display as ipd ipd.Audio('../data/Train/2022.wav') Now let us load this audio in our notebook as a numpy array. For this, we will use librosa library in python. To install librosa, just type this in command line pip install librosa Now we can run the following code to load the data data, sampling_rate = librosa.load('../data/Train/2022.wav') When you load the data, it gives you two objects; a numpy array of an audio file and the corresponding sampling rate by which it was extracted. Now to represent this as a waveform (which it originally is), use the following code % pylab inline import os import pandas as pd import librosa import glob plt.figure(figsize=(12, 4)) librosa.display.waveplot(data, sr=sampling_rate) The output comes out as follows Let us now visually inspect our data and see if we can find patterns in the data Class: jackhammer Class: drillingClass: drilling Class: dog_barkingClass: dog_barking We can see that it may be difficult to differentiate between jackhammer and drilling, but it is still easy to discern between dog_barking and drilling. To see more such examples, you can use this code i = random.choice(train.index) audio_name = train.ID[i] path = os.path.join(data_dir, 'Train', str(audio_name) + '.wav') print('Class: ', train.Class[i]) x, sr = librosa.load('../data/Train/' + str(train.ID[i]) + '.wav') plt.figure(figsize=(12, 4)) librosa.display.waveplot(x, sr=sr) Intermission: Our first submission We will do a similar approach as we did for Age detection problem, to see the class distributions and just predict the max occurrence of all test cases as that class. Let us see the distributions for this problem. train.Class.value_counts() Out[10]: jackhammer 0.122907 engine_idling 0.114811 siren 0.111684 dog_bark 0.110396 air_conditioner 0.110396 children_playing 0.110396 street_music 0.110396 drilling 0.110396 car_horn 0.056302 gun_shot 0.042318 We see that jackhammer class has more values than any other class. So let us create our first submission with this idea. test = pd.read_csv('../data/test.csv') test['Class'] = 'jackhammer' test.to_csv(‘sub01.csv’, index=False) This seems like a good idea as a benchmark for any challenge, but for this problem, it seems a bit unfair. This is so because the dataset is not much imbalanced. Let’s solve the challenge! Part 2: Building better models Now let us see how we can leverage the concepts we learned above to solve the problem. We will follow these steps to solve the problem. Step 1: Load audio files Step 2: Extract features from audio Step 3: Convert the data to pass it in our deep learning model Step 4: Run a deep learning model and get results Below is a code of how I implemented these steps Step 1 and 2 combined: Load audio files and extract features def parser(row): # function to load files and extract features file_name = os.path.join(os.path.abspath(data_dir), '] Step 3: Convert the data to pass it in our deep learning model from sklearn.preprocessing import LabelEncoder X = np.array(temp.feature.tolist()) y = np.array(temp.label.tolist()) lb = LabelEncoder() y = np_utils.to_categorical(lb.fit_transform(y)) Step 4: Run a deep learning model and get results import numpy as np from keras.models import Sequential from keras.layers import Dense, Dropout, Activation, Flatten from keras.layers import Convolution2D, MaxPooling2D from keras.optimizers import Adam from keras.utils import np_utils from sklearn import metrics num_labels = y.shape[1] filter_size = 2 # build model model = Sequential() model.add(Dense(256, input_shape=(40,))) model.add(Activation('relu')) model.add(Dropout(0.5)) model.add(Dense(256)) model.add(Activation('relu')) model.add(Dropout(0.5)) model.add(Dense(num_labels)) model.add(Activation('softmax')) model.compile(loss='categorical_crossentropy', metrics=['accuracy'], optimizer='adam') Now let us train our model model.fit(X, y, batch_size=32, epochs=5, validation_data=(val_x, val_y)) This is the result I got on training for 5 epochs Train on 5435 samples, validate on 1359 samples Epoch 1/10 5435/5435 [==============================] - 2s - loss: 12.0145 - acc: 0.1799 - val_loss: 8.3553 - val_acc: 0.2958 Epoch 2/10 5435/5435 [==============================] - 0s - loss: 7.6847 - acc: 0.2925 - val_loss: 2.1265 - val_acc: 0.5026 Epoch 3/10 5435/5435 [==============================] - 0s - loss: 2.5338 - acc: 0.3553 - val_loss: 1.7296 - val_acc: 0.5033 Epoch 4/10 5435/5435 [==============================] - 0s - loss: 1.8101 - acc: 0.4039 - val_loss: 1.4127 - val_acc: 0.6144 Epoch 5/10 5435/5435 [==============================] - 0s - loss: 1.5522 - acc: 0.4822 - val_loss: 1.2489 - val_acc: 0.6637 Seems ok, but the score can be increased obviously. (PS: I could get an accuracy of 80% on my validation dataset). Now its your turn, can you increase on this score? If you do, let me know in the comments below! Future steps to explore Now that we saw a simple applications, we can ideate a few more methods which can help us improve our score - We applied a simple neural network model to the problem. Our immediate next step should be to understand where does the model fail and why. By this, we want to conceptualize our understanding of the failures of algorithm so that the next time we build a model, it does not do the same mistakes - We can build more efficient models that our “better models”, such as convolutional neural networks or recurrent neural networks. These models have be proven to solve such problems with greater ease. - We touched the concept of data augmentation, but we did not apply them here. You could try it to see if it works for the problem. End Notes In this article, I have given a brief overview of audio processing with an case study on UrbanSound challenge. I have also shown the steps you perform when dealing with audio data in python with librosa package. Giving this “shastra” in your hand, I hope you could try your own algorithms in Urban Sound challenge, or try solving your own audio problems in daily life. If you have any suggestions/ideas, do let me know in the comments below! Learn, engage , hack and get hired! Podcast: Play in new window | DownloadYou can also read this article on our Mobile APP 40 Comments Hi Faizan, It was great explanation thank you. and i am working like same problem but it is on the financial(bank customer) speech recognition problem, would you please help on this, Thank you in advance Regards, Kishor Peddolla Hey Kishor, Sure! Your problem seems interesting. I might add that Speech recognition is more complex than audio classification, as it involves natural language processing too. Can you explain what approach you followed as of now to solve the problem? Also, I would suggest creating a thread on discussion portal so that more people from the community could contribute to help you Nice article, Faizan. Gives a good foundation to exploring audio data. Keep up the good work. Thanks Regards Karthik Thanks Karthikeyan Thanks. This is something I had been thinking for sometime. Thanks kalyanaraman Nice article. I liked the introduction to python libraries for audio. Any chance, you cover hidden markov models for audio and related libraries. Thank you Thanks Manoj! I’ll try to cover this in the next article Hello Faizan and thank you for your introduction to sound recognition and clustering! Just a kind remark, I noticed that you have imported the Convolutional and maxpooling layers which you do not use so I guess there’s no need for them to be there….But I did say WOW when I saw them – I thought you would implement a CNN solution… Hi Faizan This is a very good article to get started on Audio analysis. I do not think any other books out there could have given this type of explanation ! Keep up the great work !!! Thanks Nagu Great Work! Appreciate your effort in documenting this. Thanks Krish Great work faizan! I did go through this article and I find that most of machine learning articles require extensive knowledge of dataset or domain : like speech here. How does one do that and how do you decide to work on such problems ? Any references? I usually tend to follow moocs, but how to do self research and design end to end processes especially for machine learning? Hi Gowri, You are right to say that data science problems involve domain knowledge to solve problems, and this comes from experience in working on those kind of problems. When I take up a problem, I try to do as much research as I can and also, try to get hands on experience in it. Each person has his or her own learning process. So my process may or may not work for you. Still I would suggest a course that would help you Thanks for suggesting the wonderful course !! Hi Faizan, I got the following result, would you give some solutions to me: In [132]: model.fit(X, y, batch_size=32, epochs=5) Traceback (most recent call last): File “”, line 1, in model.fit(X, y, batch_size=32, epochs=5) File “C:\Users\admin\Anaconda2\lib\site-packages\keras\models.py”, line 867, in fit initial_epoch=initial_epoch) File “C:\Users\admin\Anaconda2\lib\site-packages\keras\engine\training.py”, line 1522, in fit batch_size=batch_size) File “C:\Users\admin\Anaconda2\lib\site-packages\keras\engine\training.py”, line 1378, in _standardize_user_data exception_prefix=’input’) File “C:\Users\admin\Anaconda2\lib\site-packages\keras\engine\training.py”, line 144, in _standardize_input_data str(array.shape)) ValueError: Error when checking input: expected dense_7_input to have shape (None, 40) but got array with shape (5435L, 1L) The input which you give to the neural network is improper. You can answer the following questions to get the answer 1. What is the shape of input layer? 2. What is the shape of X? I have solved this problem, Thanks! Thank you for the great explanation. Do you mind making the source code including data files and iPython notebook available through gitHub? Sure. Will do Hi Faizan, A friendly reminder about the ipython notebook you promised. Here is the reason for my curiosity. While experimenting with urban sound dataset (), with an identical deep feed forward neural network like yours, the best accuracy I have achieved is 65%. That is after lots of hyper parameterization. I know in this blog you have reported similar accuracy and further alluded that you could achieve 80% accuracy. That is impressive, and I am aiming for similar result. However, I have noticed your dataset size is not the full 8K set. In my experimentation, I am using audio folders1-8 for training, folder 9 for validation and folder 10 for testing. I get 65% accuracy both on the validation and testing sets. Hope you could share your notebook or help me towards 80% accuracy goal. While I am currently experimenting with data augmentation, your help is much appreciated. I am aiming for this higher accuracy before using the trained model/parameters for a custom project of mine to classify a personal audio dataset. Thank you in advance, Phani. forgot to mention, for my training I am extracting 5 different datapoints (mfccs,chroma,mel,contrast,tonnetz) not just one (mfccs) like you did. With this fullset I get 65% accuracy. With mfccs alone I get only 53%. Also, 60% is the highest I saw so far in various other blogs with similar dataset. Interestingly convoluted networks (CNN) with mel features alone could not push this any further, making your results of 80% that much more impressive. Look forward to seeing your response. Thank you in advance. Nice article… even I want to classify normal and pathological voice samples using keras… if I get any difficulty please help me regarding this…. Sure Hi Faizan, Thank you for introducing this concept. However there is a basic problem,I am facing. I can’t install librosa, as every time I typed import librosa I got AttributeError: module ‘llvmlite.binding’ has no attribute ‘get_host_cpu_name’. I googled a lot, but didn’t find a solution for this. Can you please provide a solution here, so that I can proceed further. Thanks Hi, A solution to similar issue was to reinstall llvm package by executing sudo apt-get install llvm Tried with that, however not solved the problem.mine is windows OS with anaconda environment. Thanks As a last resort, you can rely on a docker system for testing out the code Hi sir. Thanks for this nice article. But how to I get datasets? Hello You can find the dataset here : Hi, How do you read train.scv to get train variable ? Thank You in advance Louis Hi Louis, The link for the dataset is provided in the article itself. you can download it from there. Can i get the dataset please Hi Maxwel, The link to the dataset is provided in the article itself. Hi, I would like to use your example for my problem which is the separation of audio sources , I have some troubles using the code because I don’t know what do you mean by “train” , and also I need your data to run the example to see if it is working in my python, so can you plz provide us all the data through gitHub? Hi Houda, The dataset has two parts, train and test. The link to download the datasets is provided in the article itself. Hi, thanks for the nice article, I have a problem dealing with the code, it gives me “name ‘train’ is not defined” even I have the dataset , can you help me plz ? Best. Hi, Glad you liked the article. Also, check the name you have set for the dataset you’re trying to load. I guess it should be ‘Train’, not ‘train’ Hi Aishwarya , First of all , thanks for your feedback, I download the data, otherwise, I get this error: TypeError: ‘<' not supported between instances of 'NoneType' and 'str' , this error comes with this command: y = np_utils.to_categorical(lb.fit_transform(y)) knowing that I am using python 3.6. any help or suggestion I will be upreciating that 🙂 Best.
https://www.analyticsvidhya.com/blog/2017/08/audio-voice-processing-deep-learning/
CC-MAIN-2021-04
refinedweb
3,169
66.54
Apple Has a Lot In Common With The Rolling Stones (Video) Roblimo posted 1 year,3 days | from the it's-only-a-smart-phone-but-I-like-it dept. (5, Funny) sexconker (1179573) | 1 year,3 days | (#44832771) Old, played out, desperate to remain relevant. Re:Yup (5, Insightful) Em Adespoton (792954) | 1 year,3 days | (#44832805) Old, played out, desperate to remain relevant. ...and yet any new repackaging of their material is met with instant sellouts. Re:Yup (3, Funny) cayenne8 (626475) | 1 year,3 days | (#44832951) Apple or any company would do well to survive like that guy has. As the old joke goes, "What will be left after a nuclear holocaust?" --Cockroaches and Keith Richards Re:Yup (1, Troll) PopeRatzo (965947) | 1 year,3 days | (#44833319):Yup (1) MightyYar (622222) | 1 year,3 days | (#44833609) I, too, wish the Stones had not sold out and just stayed true to their core values of sex and drugs. Re:Yup (1) Em Adespoton (792954) | 1 year,3 days | (#44834471) I, too, wish the Stones had not sold out and just stayed true to their core values of sex and drugs. It seems to me there was some rock 'n roll in there somewhere too.... Re:Yup (0) Anonymous Coward | 1 year,3 days | (#44833803)? Re:Yup (1) Em Adespoton (792954) | 1 year,3 days | (#44834515)?:Yup (4, Funny) HornWumpus (783565) | 1 year,3 days | (#44832819) If Apple were really like the Rolling Stones, after the Ho-hum new announcement they would yet again introduce the Lisa and the crowd would go wild. Gold colored is just a fancy way to say beige ... (0) Anonymous Coward | 1 year,3 days | (#44833135) ... yet again introduce the Lisa and the crowd would go wild ... Rectangular, rounded corners, gold/beige. Why not? :-) Re:Yup (1) vux984 (928602) | 1 year,3 days | (#44833391) they would yet again introduce the Lisa and the crowd would go wild. [ijailbreak.com] Re:Yup (1) Anonymous Coward | 1 year,3 days | (#44832897) Must... resists ... making... dried... up... corpse... joke... Re:Yup (3, Insightful) cayenne8 (626475) | 1 year,3 days | (#44833153). Re:Yup (1) CRCulver (715279) | 1 year,3 days | (#44833533) Re:Yup (2) cayenne8 (626475) | 1 year,3 days | (#44833773) I'm actually not quite THAT old...I was quite young when the Stones were in their heyday...I grew up mostly in the 80's and 90's, and even then during the middle of them, I didn't find much music that caught my attention they way older groups did. Yes, Jackson did have a bit of a comeback when he died....but in the past year or so, that seems to have faded from what I hear being played. I was there for Nirvana at the beginning...pretty powerful stuff, but aside from about 3 songs, all off the same album, you don't hear much being played in public really... I'm a huge Rush fan....but even with them, they ran out of steam for good stuff around the Signals time. But they do have quite a catalog up till then and a few after...so, I'd give you that one. However, I'd also say that Rush doesn't have quite the large swath of people that would know much of their music like the Stones or Beatles did and still do. I'd posit that if you got together a fairly good distributed group of young/old spread over the last 40-50 years...and played a number of songs from bands like Rush or the Stones. The majority would know more of the Stones' songs than Rush's.....I'd guess most of Rush's would be off the single album Moving Pictures as that not much else got widespread radio play at least in the US. Add in The Trees, Closer to the Heart and maybe Subdivisions and that's about it that I'd guess that the masses knew/know. One thing it might be however, is that that "shared existance" where much of the US knew and listened to the same thing...fragmented quite a bit, especially in music in about the 90's. Rock itself became : Rock, Rock and roll (oldies), Metal, Death Metal, etc...then all the other fragmented genre's. So, the 'group' experience kinda died and it was hard for one band to unite or gain such a large audience as they used to in the early days of rock like in the 60's and 70's. So, possibly a combinations of things....but again, I see bands of today, and I don't see them as great musicians, with a tight band that can play and improve ON stage.....they're too worried about messing up the dance choreography and timed out light show I guess to actually be able to just jam with each other and let the audience in to enjoy it. Re:Yup (2) CRCulver (715279) | 1 year,3 days | (#44834073) Your problem is that you posit the idea of an objectively "great" musician and then assume that one of his/her attributes would be a capability to improvise. I'd encourage you to read some ethnomusicology: whether improvisation is desirable or not varies widely across cultures and historical eras. For example, in Simha Arom's studies of the Aka Pygmies, he notes their virtuosity (their music is of a complexity that Western music arguably didn't reach until the 20th century), but he also notes that once they have learned a part, they do not vary it during performances. Westerners who have tried to play along with natives and try to add flaw by improvisation a little on their part, draw serious disapproval from the natives. So the US is perhaps simply evolving into a culture that doesn't care for improvising but instead focuses on other aspects of the performance. Things change, and once you have a good look at musical diversity across the world and through time, it's hard to argue that the current state is any better or worse than your memories of the Stones. Re:Yup (1) CRCulver (715279) | 1 year,3 days | (#44834239) Sorry, that should have read "try to add flair by improvising a little". Re:Yup (1) rasmusbr (2186518) | 1 year,3 days | (#44833653) I miss songs that you can 'feel' the soul coming through the speakers. <=> You are more than 25 years old. Re:Yup (1) Nerdfest (867930) | 1 year,3 days | (#44834553):Yup (1) frank_adrian314159 (469671) | 1 year,3 days | (#44833681) became a real nostalgia act. I doubt that Adele or Lady Gaga get this kind of support. If they don't hit multi-platinum on their singles and albums each and every time, they're screwed, as the labels will drop them instantly. Word on the street is if LG's third album performs as "poorly" as her second, she's history (Again. Sorry, Stephanie). The labels don't put the same amount into artist development as they used to. Then when you aggregate this with the concentration of music media with its tightly-controlled play-lists and rigid formats, it's a wonder that anything new sticks at all (Hint: Oldies stations make more money). Finally, you just don't have a baby boom to support new artists. The number of and disposable income of younger people are both dropping. And that lower income goes to a large number of other entertainment options. So yeah, we won't get label-backed bands like the Rolling Stones or Pink Floyd anymore. The best you can hope for is something like a Dave Mathews, who does enough of his own support and development that he might be able to tough it out between the time he's "relevant" and the time he's "nostalgia". Otherwise, it's off to the casino tours with the rest of you... Re:Yup (1) fustakrakich (1673220) | 1 year,3 days | (#44833995) Where are the songs from the 80's and 90's and 00's that will be the classic rock that will have the longevity the Stones' songs have had and somehow still do? In Prince's basement. Re:Yup (1) Megane (129182) | 1 year,3 days | (#44834263) it was the dawn of shovelware rap music. (though I have enjoyed stuff from Eminem and Run DMC) Re:Yup (0) Anonymous Coward | 1 year,3 days | (#44832973) And with overpriced concert tickets (and phones). Re:Yup (1) sp1nl0ck (241836) | 1 year,3 days | (#44832981) Re:Yup (1) Savage-Rabbit (308260) | 1 year,3 days | (#44832983) Old, played out, desperate to remain relevant. Yeah, they should dissolve Apple and give the money back to the shareholders... Re:Yup (0) Anonymous Coward | 1 year,3 days | (#44833299) If you like them, you have sympathy for the devil ? Re:Yup (1) poetmatt (793785) | 1 year,3 days | (#44833577) that's where we were in 2008, but it takes people a while to catch up. how many ethical issues have we had with apple since then? Re:Yup (1) LifesABeach (234436) | 1 year,3 days | (#44833625) Re:Yup (0) Anonymous Coward | 1 year,3 days | (#44834233) "Old, played out, desperate to remain relevant." And looking like old lesbians. Re:Yup (1) Megane (129182) | 1 year,3 days | (#44834287) Re:Yup (0) Anonymous Coward | 1 year,3 days | (#44834577) apple is the best, bar none. Re:Yup (2) hey! (33014) | 1 year,3 days | (#44834603) more could you possibly ask for? Eternal youth, apparently. no (0) Anonymous Coward | 1 year,3 days | (#44832801) wrong To paraphrase an old MST3K riff... (3, Funny) MickyTheIdiot (1032226) | 1 year,3 days | (#44832811) The submitter is a SPAZ Re:To paraphrase an old MST3K riff... (1) GoodNewsJimDotCom (2244874) | 1 year,3 days | (#44833227) Whats cool about MST3K being so old is that you forgot some of the jokes and they're funny again. I think when media passes a certain age, like 10 years, you can play it again and enjoy it over again. This goes for movies, books and video games. Re:To paraphrase an old MST3K riff... (0) Anonymous Coward | 1 year,3 days | (#44833523) The RiffTrax [rifftrax.com] Live riff (could be a pre-recorded one from last month) of Super Troopers is in theaters tonight, September 12th! (Oh hi) Mark your calendars for the live riff of Night of the Living Dead in theaters on October 24th. Re:To paraphrase an old MST3K riff... (0) Anonymous Coward | 1 year,3 days | (#44833555) In other words, the MST3K guys have also become the Rolling Stones and are still playing their decades old greatest hits. Innovative products for innovative thieves (1) intermodal (534361) | 1 year,3 days | (#44832835) I'm not looking forward to seeing phone-thieves cutting off fingers to access stolen phones. Re:Innovative products for innovative thieves (0) hedwards (940851) | 1 year,3 days | (#44832939) I am, it might finally settle the question of what precisely is it that an Apple Fanbois won't give up for their cult. Re:Innovative products for innovative thieves (1) intermodal (534361) | 1 year,3 days | (#44833019) I will concede that point. Re:Innovative products for innovative thieves (1) hedwards (940851) | 1 year,3 days | (#44833053) Of course if this becomes a common occurrence, I may have to run away shrieking every time I see an iPhone, because those dudes be crazy. Re:Innovative products for innovative thieves (1) Tablizer (95088) | 1 year,3 days | (#44832997) Use middle finger for ID, then you at least can flipoff the robbers a good long time. Re:Innovative products for innovative thieves (1) MickyTheIdiot (1032226) | 1 year,3 days | (#44833133) The Anthony Weiner method. Re:Innovative products for innovative thieves (1) xxxJonBoyxxx (565205) | 1 year,3 days | (#44834025) >> The Anthony Weiner method. No, I will not mushroom stamp my phone. Don't think so, sorry (1) djupedal (584558) | 1 year,3 days | (#44832853) If you simply want to state that the marketplace brings similar tasks and tension, fine, but then it's about branding, fans and being able to maintain what was good in the first place with what's worth adding, and over time, all brands have that in common. Of course an article written on that level, without major attention-getting keywords such as 'Apple' or 'Rolling Stones' wouldn't stand much of a chance when it comes to competing for it's own brand-like attention in today's media, so it is easy to see why such a comparison is floated. My Thoughts (2, Interesting) Anonymous Coward | 1 year,3 days | (#44832903) I wrote about why I thought Apple failed in their iPhone 5C/5S product announcement just yesterday. Unlike Ron Miller, I actually get into concrete reasons why I think Apple failed. (no ads - not fishing for ad views) Re:My Thoughts (2) Score Whore (32328) | 1 year,3 days | (#44833117) After reading your blog I'd have to say that no you didn't. Re:My Thoughts (1) MickyTheIdiot (1032226) | 1 year,3 days | (#44833149) This time, the "C" stands for Crap. Re:My Thoughts (0) Anonymous Coward | 1 year,3 days | (#44833545) Re:My Thoughts (1) fuzznutz (789413) | 1 year,3 days | (#44833655) breath for than to happen. 4 You don't seem to understand what iterative means. Re:My Thoughts (3, Insightful) MachineShedFred (621896) | 1 year,3 days | (#44833749) You don't understand the use of 64-bit processing if you only think it is about memory limitations. They always say that (5, Funny) Tablizer (95088) | 1 year,3 days | (#44832933):They always say that (0) Anonymous Coward | 1 year,3 days | (#44833087) iLithp Fikthed that for you. Re:They always say that (1) camperdave (969942) | 1 year,3 days | (#44833113) ...until all the youtube vids of Siri choking badly. Actually, it was Koothrappali [youtube.com] who choked. Re:They always say that (1) AvitarX (172628) | 1 year,3 days | (#44833501) Really? The thing that they did that blew my mind was the retina display, and after using one, it was hard to go back to old displays (fortunately android got there when it was time to upgrade). Rolling tumbleweed maybe (0) bluefoxlucid (723572) | 1 year,3 days | (#44832947) Apple has a lot more in common with Blackberry (2, Interesting) JoeyRox (2711699) | 1 year,3 days | (#44833037) Re:Apple has a lot more in common with Blackberry (0) the computer guy nex (916959) | 1 year,3 days | (#44833121) [imgur.com] Re:Apple has a lot more in common with Blackberry (1) coinreturn (617535) | 1 year,3 days | (#44833291) Re: Apple has a lot more in common with Blackberry (0) Anonymous Coward | 1 year,3 days | (#44833307) Should we try the fish? (You'll obviously be here all week) Re:Apple has a lot more in common with Blackberry (1) drkim (1559875) | 1 year,3 days | (#44833559) Apple's share of the smartphone market is the best it is have ever been. [imgur.com] ...which is sad - considering that it's only 17% [tinyurl.com] Re:Apple has a lot more in common with Blackberry (0) Anonymous Coward | 1 year,3 days | (#44833717) That doesn't refute any of the statements. Nokia's share of the smartphone market, too, was the best it had ever been while they started to stagnate. Re:Apple has a lot more in common with Blackberry (1) fuzznutz (789413) | 1 year,3 days | (#44833763) FTFY. You might want to read your graphic a little more carefully. Assuming the graphic is even accurate... Re:Apple has a lot more in common with Blackberry (2) JoeyRox (2711699) | 1 year,3 days | (#44833801) Re:Apple has a lot more in common with Blackberry (1) rasmusbr (2186518) | 1 year,3 days | (#44833411) that Apple have done extremely well on in the past. Re:Apple has a lot more in common with Blackberry (2) drkim (1559875) | 1 year,3 days | (#44833613) But with this new product: [theonion.com] Re:Apple has a lot more in common with Blackberry (1, Insightful) Quila (201335) | 1 year,3 days | (#44833893). and "Start Me Up" was played at Windows 95 debut (1) themushroom (197365) | 1 year,3 days | (#44833071) Just thought a little devil's advocate would be fun here. Kind of a jab to the Stones (1) sl4shd0rk (755837) | 1 year,3 days | (#44833079) Apple isn't even British. Re:Kind of a jab to the Stones (1) drkim (1559875) | 1 year,3 days | (#44833597) Apple isn't even British. No, they're Irish. One thing in common (-1, Flamebait) the computer guy nex (916959) | 1 year,3 days | (#44833131) How would I have known... (0) Anonymous Coward | 1 year,3 days | (#44833185) Okay, I'm sitting in jail right now but I only have ONE question! How the HELL was I supposed to know that exposing myself to school children is illegal?? WTF? I would never have guessed that in a million years! Shouldn't they have explained it to me and then let me go? Strange country we live in! Sent from my Cell phone How about they Think Differently? (1) tuppe666 (904118) | 1 year,3 days | (#44833195) iWork failed before? How about they buy Dell; Nintendo; Nokia; Netflix? How about competing with Office instead of limiting it to their products? How about they compete against Amazon; Facebook; Google search and advertising? How about they they do a Netbook or a Console; Car Radio? How about they buy or build a University or Manufacturing facilities? What are they going to do...buy back shares; How knicker wetting exciting is that? Re:How about they Think Differently? (0) Anonymous Coward | 1 year,3 days | (#44833723) There are some markets they can step in which are stagnant. You suggested car audio. This is a market that is so boring that not even thieves grab a radio from a dash anymore when looking for something to sell for a crack rock. Apple could step into this market with a 1 (or possibly 2) DIN audio head and completely revolutionize things, and car makers would make their cars (and CANBUS APIs) available for Apple, which they would not do for any other stereo manufacturer. With this in mind, iPhones and iDevices could function as keys using Bluetooth, or if the audio had had a built in LTE transmitter, the owner's iDevice could do basic functions from almost anywhere there is a signal. iCloud Music would be the only game in town in getting stuff onto the player. To boot, the audio head could be removable so it could be upgraded every year. Car audio is a market waiting for Apple. It wouldn't take much for them to twist the arms of car makers to make their cars be compatible with an Apple made car stereo, and only with it. With horsepower being pointless due to congested roads, and MPG being high across the board, creature comforts are becoming a decision point for car buyers. Apple device friendly can be something that can make or break a purchase. The day Apple knocks on car makers door is the day that the fate of Alpine, Sony, Jensen, and other audio makers will be history. Re:How about they Think Differently? (3, Insightful) MachineShedFred (621896) | 1 year,3 days | (#44834075) makes the jumps, or Apple takes its lumps (3, Interesting) David Govett (2825317) | 1 year,3 days | (#44833197) Re:Apple makes the jumps, or Apple takes its lumps (1) quacking duck (607555) | 1 year,3 days | (#44834089)'), but that false one is total malicious ignorance. Fortunately there's an effective response to that: point out that Samsung will have released over 25 smartphone models in 2013 alone [phonearena.com] , and dare them to show the same scorn at Samsung for releasing so many models with minor feature differences. No one spouting their false accusation has ever replied after being slapped across the face with that revelation. Ho-hum, another really amazing device (5, Insightful) sandbagger (654585) | 1 year,3 days | (#44833223):Ho-hum, another really amazing device (1) mrwolf007 (1116997) | 1 year,3 days | (#44834065):Ho-hum, another really amazing device (1) cdrudge (68377) | 1 year,3 days | (#44834579). And this needed to be a video post why? (0) Anonymous Coward | 1 year,3 days | (#44833351) Enough said. Other Parallels (3, Interesting) ScottCooperDotNet (929575) | 1 year,3 days | (#44833363). I think they did just fine with this conference (2) damn_registrars (1103043) | 1 year,3 days | (#44833419):I think they did just fine with this conference (0) Anonymous Coward | 1 year,3 days | (#44834293) You get awfully quiet when you get exposed as an idiot: Baby Boomers - Only Ones Who Care? (0) xxxJonBoyxxx (565205) | 1 year,3 days | (#44833599). Not a relative of who? (0) Anonymous Coward | 1 year,3 days | (#44833621) What is the summary on about? Rabid excitement? (1) msobkow (48369) | 1 year,3 days | (#44833629). Re:Rabid excitement? (0) Anonymous Coward | 1 year,3 days | (#44833895) This "news" piece is just a marketing ploy. Most people (those that aren't rabid Apple fans they buy each update) are already in the "meh, got one already" category. In the US, every two years people update their plans/device to whatever is available, because they believe the device is free. They tend to stick with their existing platform due to the purchases made for their last device. Apple are still making a killing, but they're falling way behind in market share due to the consumers coming into the market fresh each year (think high school or wealthy middle school kids). They're the ones going for Android devices and they'll probably stick with the platform for many years. Apple's iPhones look very antiquated today, and the UI rather primitive and limiting to what's being done elsewhere. Submitter has an addiction (0) Anonymous Coward | 1 year,3 days | (#44833685) Apple's latest iPhone announcement seems to have been greeted with a massive "ho hum" You call lining up before the phone is announced "ho hum?" Where do you get all that Valium? "People" vs "pundits" (3, Interesting) danaris (525051) | 1 year,3 days | (#44833795) Apple: doomed since the beginning! (2, Insightful) mveloso (325617) | 1 year,3 days | (#44834359). (not a relative) (0) Anonymous Coward | 1 year,3 days | (#44833815) Huh? Why is "(not a relative)" in the summary when no one else is named? Suck it up libtards! (-1) Anonymous Coward | 1 year,3 days | (#44834095) Koch bought Molex. Apple uses Molex. Koch is getting more money from you fags... Like the stones? (0) Anonymous Coward | 1 year,3 days | (#44834197) So in 15 years, they'll all be dead? Speculation & time to market is the killer (1) ryanw (131814) | 1 year,3 days | (#44834203) come to conclusions that are even more grand than apple is going to release. This creates a sense of disappointment at the times of announcements. For example, people had speculated we would see the appleTV Television with integrated iSight camera at this product announcement. Since it didn't happen, and only other things which we already knew (5c, 5s, finger scanner, faster processors, updated camera) there wasn't a lot of room for surprise. The only surprise I saw was the dual colored led flash. Everything else I seemed to have already heard about and seen leaks on the Internet for several weeks if not months. If apple wants to keep surprising us, they need to close the loop on their leaks, or show us products sooner to be the first to introduce it to us, instead of the rumor mill. Apple's biggest problem (2) FyreMoon (528744) | 1 year,3 days | (#44834285). Re:Apple's biggest problem (0) Anonymous Coward | 1 year,3 days | (#44834605) I don't want to know what an iDevice would cost if it was completely built in the states. And besides all of the so-called "leaks" are planned events to build pre-release buzz and keep the whole we have a secret aura. Obligatory Arbitrary Jab: But you would probably be a real expert in early leaks before the main event, if you ever got to participate in an event. Freelance IT journalist - WTF?? (0) tomboalogo (2509404) | 1 year,3 days | (#44834363) Reworked joke: Q: What do you call 5000 IT journalists at the bottom of the ocean? A: A bloody good start. We need to 'off' these bastards before they breed. What they Have in Common (1) rssrss (686344) | 1 year,3 days | (#44834507) Years of Drug abuse? Everyone's memory is flawed. (0, Insightful) Anonymous Coward | 1 year,3 days | (#44834569).
http://beta.slashdot.org/story/191517
CC-MAIN-2014-41
refinedweb
4,109
70.02
Red Hat Bugzilla – Full Text Bug Listing Description of problem: # koan --replace-self - looking for Cobbler at the rhpl or ethtool module is required to use this feature (is your OS>=EL3?) Version-Release number of selected component (if applicable): koan-2.2.2-1.fc17.noarch This is not required because it's an either/or situation: def get_network_info(): try: import ethtool except: try: import rhpl.ethtool ethtool = rhpl.ethtool except: raise InfoException("the rhpl or ethtool module is required to use this feature (is your OS>=EL3?)") Adding a requirement on one would exclude the other, so neither are required and the import error is handled gracefully with a suggestion to install the one available for your distro. I disagree. I filed this bug against *Fedora*, not cobbler in general. In Fedora, there is no rhpl.ethtool, only python-ethtool, and it should be required by the Fedora koan package.
https://bugzilla.redhat.com/show_bug.cgi?format=multiple&id=819913
CC-MAIN-2016-30
refinedweb
152
56.25
One of the many challenges in moving code from Swift 2.2 to Swift 3 is dealing with changed method signatures. For example, say you have the following function in Swift 2.2: func frobnicate(runcible: String) { print("Frobnicate: \(runcible)") } You call this with frobnicate(string) in 2.2 and frobnicate(runcible: string) in 3. The new first label rule introduced in SE-0046 means that you have to differentiate calls based on how they consume this already existing function: // Consuming func frotz() { #if swift(>=3.0) // Swift 3.x code frobnicate(runcible:"frotz 3.x " + string) #else // Swift 2.2 code frobnicate("frotz 2.2 " + string) #endif } Remember that Swift’s compile-time build directives must surround entire statements and expressions. You cannot “cheat” to surround only the runcible: label. (And even if you could, it would look horrible.) The expression limitation means you’ll need to take care when introducing multi-version code. The previous example worked by differentiating code at the call site. An alternative approach involves leaving the consumer untouched and supplying consistent API signatures based on the Swift distribution used to compile the code. That approach looks like this: // Supplying consistent API signatures #if swift(>=3.0) func fripple(_ fweep: String) { // Full 3.x codebase print("3.x fripple", fweep) } #else func fripple(fweep: String) { // Full 2.2 codebase print("2.2 fripple", fweep) } #endif In this example, both fripple implementations can be called without first labels. The big problem here is that you must duplicate the full codebase for both implementations, even if they are essentially or entirely the same. If you fix a bug in one implementation, you must mirror that fix in the other implementation. Fortunately there is a way around this. It works by unifying shared code in a separate closure. By moving the code out from the functions or methods, you can treat the signatures and the implementations as separate configurations and avoid code duplication. // Alternatively pull the code out to a closure let sharedIzyuk: (String) -> String = { krebf in // Place full codebase here, assuming it // runs under both 2.2 and 3 return "sharedIzyuk \(krebf)" } #if swift(>=3.0) func izyuk(_ krebf: String) -> String { return "3.x " + sharedIzyuk(krebf) } #else func izyuk(krebf: String) -> String { return "2.2 " + sharedIzyuk(krebf) } #endif However ugly, this approach enables you to isolate the shared functionality from the different signatures. You’ll only need to maintain one closure — even if that closure itself contains conditional compilation for 2.2 and 3 code. The only other practical solutions are to commit to a full Swift 3 migration, to maintain separate 2.2 and 3 repositories, or to create parallel full implementations using #if swift() build configurations with duplicated code. 4 Comments Why a closure instead of a function? It does not look as though it is being passed anywhere, or closing over variables I surrounding scope. Because closures don’t have first parameter label rules Of course. I need more coffee! Or you could just go with the Swift 3.0 syntax func fripple(_ fweep: String) and live with the “extraneous ‘_’ in parameter” warning in Swift 2.2.
https://ericasadun.com/2016/05/13/first-parameters-swift-signatures-and-conditional-builds/
CC-MAIN-2021-43
refinedweb
527
59.9
Java multi-threading interview questions and answers - continuation Q. How can threads communicate with each other? How would you implement a producer (one thread) and a consumer (another thread) passing data (via stack)?Q. How can threads communicate with each other? How would you implement a producer (one thread) and a consumer (another thread) passing data (via stack)? A. The wait( ), notify (), and notifyAll( ) methods are used to provide an efficient way for threads to communicate with each other. This communication solves the ‘consumer-producer problem’. This problem occurs when the producer thread is completing work that the other thread (consumer thread) will use. Example: If you imagine an application in which one thread (the producer) writes data to a file while a second thread (the consumer) reads data from the same file. In this example the concurrent threads share the same resource file. Because these threads share the common resource file they should be synchronized. Also these two threads should communicate with each other because the consumer thread, which reads the file, should wait until the producer thread, which writes data to the file and notifies the consumer thread that it has completed its writing operation. Let’s look at a sample code where count is a shared resource. The consumer thread will wait inside the consume( ) method on the producer thread, until the producer thread increments the count inside the produce( ) method and subsequently notifies the consumer thread. Once it has been notified, the consumer thread waiting inside the consume( ) method will give up its waiting state and completes its method by consuming the count (i.e. decrementing the count). Here is a complete working code example on thread communication. Note: A method calls notify( )/notifyAll( ) as the last thing it does (besides return). Since the consume method was void, the notify( ) was the last statement. If it were to return some value, the notify( ) would have been placed just before the return statement. Q. Why wait, notify, and notifyall methods are defined in the Object class, and not in the Thread class? A.. Q. What does join( ) method do? A. t.join( ) allows the current thread to wait indefinitely until thread “t” is finished. t.join (5000) allows the current thread to wait for thread “t” to finish but does not wait longer than 5 seconds. try { t.join(5000); //current thread waits for thread “t” to complete but does not wait more than 5 sec if(t.isAlive()){ //timeout occurred. Thread “t” has not finished } else { //thread “t” has finished } } For example, say you need to spawn multiple threads to do the work, and continue to the next step only after all of them have completed, you will need to tell the main thread to wait. this is done with thread.join()method. Here is the RunnableTask. The task here is nothing but sleeping for 10 seconds as if some task is being performed. It also prints the thread name and timestamp as to when this task had started import java.util.Date; public class RunnableTask implements Runnable { @Override public void run() { Thread thread = Thread.currentThread(); System.out.println(thread.getName() + " at " + new Date()); try { Thread.sleep(10000); } catch (InterruptedException e) { e.printStackTrace(); } } } The taskmanager manages the tasks by spawing multiple user threads from the main thread. The main thread is always created by default. The user threads 1-3 are run sequentially, i.e. thread-2 starts only after thread-1 completes, and so on. The user threads 4-6 start and executes concurrently. public class TaskManager { public static void main(String[] args) throws InterruptedException { RunnableTask task = new RunnableTask(); //threads 1-3 are run sequentially Thread thread1 = new Thread(task, "Thread-1"); Thread thread2 = new Thread(task, "Thread-2"); Thread thread3 = new Thread(task, "Thread-3"); thread1.start(); //invokes run() on RunnableTask thread1.join(); // main thread blocks (for 10 seconds) thread2.start(); //invokes run() on RunnableTask thread2.join(); // main thread blocks (for 10 seconds) thread3.start(); //invokes run() on RunnableTask thread3.join(); // main thread blocks (for 10 seconds) Thread thread4 = new Thread(task, "Thread-4"); Thread thread5 = new Thread(task, "Thread-5"); Thread thread6 = new Thread(task, "Thread-6"); thread4.start(); //invokes run() on RunnableTask thread5.start(); //invokes run() on RunnableTask thread6.start(); //invokes run() on RunnableTask } } Notice the times of the output. There is a 10 second difference bewteen threads 1-3. But Threads 4-6 started pretty much the same time. Thread-1 at Fri Mar 02 16:59:22 EST 2012 Thread-2 at Fri Mar 02 16:59:32 EST 2012 Thread-3 at Fri Mar 02 16:59:42 EST 2012 Thread-4 at Fri Mar 02 16:59:47 EST 2012 Thread-6 at Fri Mar 02 16:59:47 EST 2012 Thread-5 at Fri Mar 02 16:59:47 EST 2012 Q. If 2 different threads hit 2 different synchronized methods in an object at the same time will they both continue? A. No. Only one thread can acquire the lock in a synchronized method of an object. Each object has a synchronization lock. No 2 synchronized methods within an object can run at the same time. One synchronized method should wait for the other synchronized method to release the lock. This is demonstrated here with method level lock. Same concept is applicable for block level locks as well. Q. Explain threads blocking on I/O? A... Q. If you have a circular reference of objects, but you no longer reference it from an execution thread, will this object be a potential candidate for garbage collection? A. Yes. Refer diagram below. Q. Which of the following is true? a) wait( ), notify( ) ,notifyall( ) are defined as final & can be called only from within a synchronized method b) Among wait( ), notify( ), notifyall( ) the wait() method only throws IOException c) wait( ),notify( ),notifyall( ) & sleep () are methods of object class A. a and b. The c is wrong because the sleep method is a member of the Thread class.The other methods are members of the Object class. Q. What are some of the threads related problems and what causes those problems? A. DeadLock, LiveLock, and Starvation. Deadlock occurs when two or more threads are blocked forever, waiting for each other. This: Q. What happens if you call the run( ) method directly instead of via the start method? A. Calling run( ) method directly just executes the code synchronously (in the same thread), just like a normal method call. By calling the start( ) method, it starts the execution of the new thread and calls the run( ) method. The start( ) method returns immediately and the new thread normally continues until the run( ) method returns. So, don't make the mistake of calling the run( ) method directly. Note: These Java interview questions and answers are extracted from my book "Java/J2EE Job Interview Companion". If you liked the above Java multi-threading questions and answers, you will like the following link, which has a little more advanced coding questions on multi-threading. If you work a lot with multi-threading, and want to really master multi-threading, then my favorite book is: Labels: Multi-threading 8 Comments: Thanks for your wonderful information which helped us to join java online training Hi I read this post 2 times. It is very useful. Pls try to keep posting. Let me show other source that may be good for community. Source: Are you applying for other jobs? interview question answers Best regards Jonathan. wait() doesn't throw IOException so b) is also false Thanks for the article, excellent stuff. I have seen interesting Interview Questions and Answers here. I have bookmarked this post, infact I have listed some important interview questions on java threads and concurrency, please check Java Thread Interview Questions I love Multithreading.... Thanks for this article. Thanks! great article.very inspiring Thread thread1 = new Thread(task, "Thread-1"); is throwing an error , "Constructor Thread(RunnableTask, Srting)" is undefined. When I tried executing the Code TaskManager.java and the RunnableTask.java classes .. Could some body please provide a resolution?
http://java-success.blogspot.com.au/2012/04/java-multi-threading-interview.html
CC-MAIN-2018-09
refinedweb
1,344
65.93
Debug mode¶ Description Plone can be put in the debug mode where one can diagnose start up failures and any changes to CSS, JavaScript and page templates take effect immediately. Introduction¶ By default when you start Plone you start it in a production mode. - Plone is faster - CSS and JavaScript files are merged instead of causing multiple HTTP request to load these assets. CSS and JavaScript behavior is different in production versus debug mode, especially with files with syntax errors because of merging. - Plone does not reload changed files from the disk Because of above optimizations the development against a production mode is not feasible. Instead you need to start Plone in debug mode (also known as development mode) if you are doing any site development. In debug mode - If Plone start-up fails, the Python traceback of the error is printed in the terminal - All logs and debug messages are printed in the terminal; Zope process does not detach from the terminal - Plone is slower - CSS and JavaScript files are read file-by-file so line numbers match on the actual files on the disk. (portal_css and portal_javascript is set to debug mode when Plone is started in debug mode) - Plone reloads CSS, JavaScript and .pt files when the page is refreshed Note Plone does not reload .py or .zcml files in the debug mode by default. Reloading Python code¶ Reloading Python code automatically can be enabled with sauna.reload add-on. JavaScript and CSS issues with production mode¶ See portal_css and portal_javascript in the Management Interface to inspect how your scripts are bundled. Make sure your JavaScript and CSS files are valid, mergeable and compressable. If they are not then you can tweak the settings for individual file in the corresponding management tool. Refresh issues¶ Plone production mode should re-read CSS and JavaScript files on Plone start-up. Possible things to debug and force refresh of static assets - Check HTML <head> links and the actual file contents - Go to portal_css, press Save to force CSS rebundling - Make sure you are not using plone.app.caching and doing caching forever - Use hard browser refresh to override local cache Starting Plone in debug mode on Microsoft Windows¶ This document explains how to start and run the latest Plone (Plone 4.1.4) on Windows 7. This document explains post-installer steps on how to start and enter into a Plone site. Installation Installation¶ This quick start has been tested on Windows 7. Installation remains the same on older versions of Windows through WinXP. - Run installer from Plone.org download page - The Plone buildout directory will be installed in C:\Plone41 - The installer will launch your Plone instance when it finishes. To connect, direct your browser to: Note In the buildout bin directory you’ll find the executable files to control Plone instance. Starting and Stopping Plone¶ If your Plone instance is shutdown you can start and control it from the command prompt. Note To control Plone you need to execute your command prompt as an administrator. In the command prompt enter the following command to access your buildout directory (the varies according to Plone version): cd "C:\\Plone41" To start Plone in debug mode type: bin\instance fg You can interrupt the instance by pressing CTRL-C. This will also take down the Zope application server and your Plone site. Accessing Plone¶ When you launch Plone in debug or daemon mode it will take a few moments to launch. If you are in debug mode, Plone will be ready serve pages when the following line is displayed in your command prompt: INFO Zope Ready to handle requests When the instance is running and listing to port 8080, point your browser to address on your local computer: The Plone welcome screen will load and you can create your first Plone site directly by clicking the Create a new Plone Site button. A form will load asking for the Path Identifier (aka the site id) and Title for a new Plone site. It will also allow you to select the main site language, and select any add-on products you wish to install with the site. Note These entries can all be modified once the site is created. Changing the site id is possible, but not recommended. To create your site, fill in this form and click the Create Plone Site button. Plone will then create and load your site. Note The url of your local Plone instance will end with the site id you set when setting up your site. If the site id were Plone then the resultant URL is:. Congratulations! You should be now logged in as an admin to your new Plone instance and you’ll see the front page of Plone. Starting Plone in debug mode on UNIX¶ Single instance installation (“zope”)¶ Enter to your installation folder using cd command (depends on where you have installed Plone): cd ~/Plone/zintance # Default local user installation location For root installation the default location is /usr/local/Plone. Type in command: bin/instance fg Press CTRL+C to stop. Clustered installation (“zeo”)¶ If you have ZEO cluster mode installation you can start individual processes in debug mode: cd ~/Plone/zeocluster bin/zeoserver fg & # Start ZODB database server bin/client1 fg & # Start ZEO front end client 1 (usually port 8080) # bin/client2 fg # For debugging issues it is often enough to start client1 Determining programmatically whether Zope is in debug mode¶ Zope2’s shared global data Globals, keeps track on whether Zope2 is started in debug mode or not.: import Globals if Globals.DevelopmentMode: # Zope is in debug mode Note There is a difference between Zope being in debug mode and the JavaScript and CSS resource registries being in debug mode (although they will automatically be set to debug mode if you start Zope in debug mode).
https://docs.plone.org/develop/plone/getstarted/debug_mode.html
CC-MAIN-2018-05
refinedweb
977
66.57
What the hell is vuex? install I won’t talk about the specific installation of vuex here. The official information is relatively clear,Poke here to enter。 However, two points should be noted: - In a modular packaging system, you must explicitly use the Vue.use()To install vuex, for example: import Vue from 'vue' import Vuex from 'vuex' Vue.use (vuex) // this function must be called to inject vuex - When using the global script tag to refer to vuex, you don’t have to worry about it. Just refer to it directly, but you should pay attention to the sequence of references, as follows: //After Vue, the introduction of vuex will automatically install <script src="/path/to/vue.js"></script> <script src="/path/to/vuex.js"></script> Although script seems to be more automatic, you will understand that modularity is actually our best posture after more contact. Unveiling vuex When we get a tool, what we need to understand at the first time is what problems it can help us solve. For example, hammers, broken eggs, hit the phone, such as apples, can not only eat but also play. What about vuex, if you put Vue.js If he wants to go next door to buy a pack of cigarettes, he can just walk there. Driving a car is a burden. But if he wants to go tens of kilometers to school to pick flowers, then Santana will have to be used. Otherwise, when he goes there, all the flowers may wither. Of course, the analogy is just to tell us the value of vuex. In practical application,What can it do? When do you need to turn its cards? Let’s take a look at the official Code: new Vue({ //State data source data () { return { count: 0 } }, //View view template: ` <div>{{ count }}</div> `, //Actions event methods: { increment () { this.count++ } } }) This is a very simple growth counting function page, andVue.jsIf you have a leg, you should understand. By event increment, implementation countGrow, and then render to the interface. In fact, this way is just like walking to buy cigarettes. It is a short-distance effect. It is easy to understand that the government calls it “one-way data flow”. Unidirectional data flow However, the situation has changed. Now there are two pages a and B, and the following two requirements: - They are required to be able to countControl. - Request a has been revised countB should know immediately after B changes, and a should know immediately after B changes. What should I do? A little bit of development experience, you can easily think of, the data source countSplit it off and manage it in a global variable or global singleton mode, so that it is not easy to get this state on any page. Yeah, that’s the idea behind vuex. That’s what it does. Is there a feeling that vuex, a big name, is harming you? It’s the global model. It’s also possible to do without it. Yes, it can be done, just like you can go to school to see flowers without Santana, but the process is different. The purpose of vuex is to manage the shared state. In order to achieve this goal, it makes a series of rules, such as modifying the data source state, triggering actions and so on. All of them need to follow its rules, so as to make the project structure clearer and easier to maintain. Let’s look at the official description Vuex is a Vue.js State management mode of application development. It uses centralized storage to manage the state of all components of the application, and ensures that the state changes in a predictable way with corresponding rules. There is no instant clearer. When do you turn vuex In fact, after understanding what vuex is going to do, it’s much easier to choose when to turn over the cards. Just like the previous analogy, when you go to the next room to buy a pack of cigarettes and open a Santana, when you look for a parking space, you smoke out. Therefore, we need to measure the short-term and long-term benefits according to the needs of the project. If we don’t plan to develop large single page applications, vuex may still be a burden for you. For some small and medium-sized projects, if you are lazy to walk and feel troublesome to drive, you can ride a bike. The bike sharing here refers to a simple official bike sharingStore modeIs actually a simple global object. The difference between global objects and vuex is relatively easy to understand Vuex differs from simple global objects in the following two aspects: - Vuex’s state storage is responsive. When Vue components read the state from the store, if the state in the store changes, the corresponding components will be updated efficiently. - You can’t directly change the state in the store. The only way to change the state in a store is to explicitlyCommit mutation。 This makes it easy for us to track every state change, so that we can implement some tools to help us better understand our application. Simple example //If you are in a modular build system, make sure you call it at the beginning Vue.use (Vuex) const store = new Vuex.Store({ state: { count: 0 }, mutations: { increment (state) { state.count++ } } }) The core of every vuex application is the store. A store is basically a container that contains most of the information in your applicationState。 Note: if mutations don’t know what it is, it doesn’t matter. It will be explained later. It can be simply understood that it can only modify the data in the state by using the methods in it. store.commit ('increment ') // call methods in mutations console.log(store.state.count) // -> 1 The government also gave specific reasons for the design We submit the mutations instead of changing them directly store.state.count Because we want to track state changes more clearly. This simple convention can make your intention more obvious, so that when you read the code, you can more easily understand the state changes inside the application. In addition, this also gives us the opportunity to implement some debugging tools that can record every state change and save the state snapshot. With it, we can even achieve a debugging experience like time travel. Because the state in store is responsive, the state in calling store in components is simple, and only needs to be returned in the computed property. Triggering a change is only to submit a mutation in the methods of the component. Vuex’s state and getter Similarly, we have learned that vuex, like a global administrator, helps us to manage the shared data of projects in a unified way. What kind of way is it managed? How can we communicate with the administrator in order to access and operate the shared data effectively? One more word The viscera of vuex consists of five parts: state, getter, mutation, action and module. As for these five parts, I will divide them into several chapters to elaborate in detail. In this lecture, I will work with you to thoroughly deal with state and getter. Of course, in practical application, these five parts are not necessary. You can add whatever you need. But generally, no matter how simple vuex is, it will at least consist of state and mutation. Otherwise, you should consider whether vuex is necessary. Finally, warm tips, the document sample code uses the syntax of es2015, if you have not yet understood, firstKnow more about this。 Single state tree Vuex usesSingle state treeAccording to the official description, it may be a little confused, but it doesn’t matter. Here we’ll learn more about what isSingle state tree。 Let’s take a look at the meaning of trees. “ organizational structure As shown in the figure above, the organizational structure of a company belongs to a tree structure. The general manager is the trunk of the tree, and other departments or occupations belong to the branches of the tree. Generally speaking, a company will only have such a tree structure. If there are two equal general managers, there may be conflicts in the management of the company. Who will listen to the people below, right! OK, now let’s take a look at the official narrativeSingle state tree: 1. Use an object (trunk)That’s all (Branch)Application level status. “ 2. Per app (company)Only one store instance object will be included (trunk)。 Single state tree allows us to locate any specific state fragment directly, and can easily get a snapshot of the current application state during debugging. State Let’s go back to the simple store sample code: import Vue from 'vue' import Vuex from 'vuex' Vue.use(Vuex) const store = new Vuex.Store({ state: { count: 0 } }) So how do we present state in Vue components? Because the state storage of vuex is responsive, the easiest way to read the state from the store instance is in theCalculation propertiesReturns a status as follows: //Create a counter component const Counter = { data() { return {} }, template: `<div>{{ count }}</div>`, computed: { count () { return store.state.count } } } whenever store.state.countWhen it changes, it will retrieve the calculated properties and refresh the interface. It’s important to note that if you store.state.countIn data, store.state.countThe change of the interface will not trigger the refresh of the interface. Of course, it cannot be done directly <div>{{ store.state.count }}</div>Because you can’t directly access the store object in the template, you will undoubtedly report an error if you write in this way. This mode depends on the global administrator store. If there are too many modules, it means that every module or page needs to introduce the store as long as it uses the data in this state. This kind of operation is really a bit uncomfortable. Of course, the government certainly does not allow such a crazy operation: Through the store option, vuex provides a mechanism to “inject” the state from the root component into each sub component (to be called) Vue.use (Vuex)): const app = new Vue({ el: '#app', //Provide the store object to the "store" option, //This can inject an instance of the store into all the subcomponents store, //Subcomponents components: { Counter }, template: ` <div class="app"> <counter></counter> </div> ` }) By registering the store option in the root instance, the store instance will be injected into all the subcomponents under the root component, and the subcomponents can be accessed through this. $store. Let’s update the implementation of counter const Counter = { template: `<div>{{ count }}</div>`, computed: { count () { return this.$store.state.count } } } Vuex is easy to use, but don’t abuse it Using vuex doesn’t mean you need to put all the states in vuex. Although putting all the states into vuex makes the state changes more explicit and easier to debug, it also makes the code lengthy and non intuitive. If some states strictly belong to a single component, it’s better to be a local state of the component. You should make trade-offs and decisions based on your application development needs. Getter Sometimes, we will find that the data in the state is not what we want directly, but needs to be processed to meet our needs. For example, in a component, we need to change the date in the state dateConvert to the day of the week to show: computed: { weekDate () { return moment(this.$store.state.date).format('dddd'); } } Note: Here’s themomentIs a third-party date processing class library, need to import before using. If only one component needs to do this, it’s OK, but if many components need to do this transformation, then you need to copy this function in each component. What’s more, once the product manager is in a bad mood and doesn’t want to use the day of the week to show it, he wants to use it directly 2018-10-30 11:12:23If you want to display the date in this way, you have to change the date formatting method in all the components that use it. Even if you extract it separately as a public function, all kinds of import is also troublesome. The most important thing is that it is not well managed. So, at this time, vuex introduced another awesome thing,Getter。 We can think of it as a computed property in the store. Just like calculating a property, the return value of a getter is cached according to its dependency, and is recalculated only when its dependency value changes. Let’s take a look at these two examples and focus on the following notes: const store = new Vuex.Store({ state: { date: new Date() }, getters: { //Getter takes state as its first parameter weekDate: state => { return moment(state.date).format('dddd'); } } }) getters: { //Getter can also receive getters as a second parameter dateLength: (state, getters) => { return getters.weekDate.length; } } Not only that, getter will store.getters Objects are exposed, and you can access these values in the form of attributes: console.log(store.getters.weekDate) We can easily use it in any component: computed: { weekDate () { return this.$store.getters.weekDate } } Now the demand has changed again. The format of weekdate to be displayed in each module is different. Some need to display all the dates, and some need to display the day of the week. What should I do? It’s easy to do, so you can send it to getter, but how? Since getter is cached as part of Vue’s responsive system when accessed through attributes, it cannot be accessed directly store.getters.weekDate('MM Do YY')Because weekdate is not a function, it’s just a property. Now that the attribute can’t pass parameters, what should I do? Let’s try to turn this property into a function. getters: { //Return a function, you can pass parameters weekDate: (state) => (fm) => { return moment(state.date).format(fm ? fm : 'dddd'); } } The usage is as follows: store.getters.weekDate('MM Do YY') Maybe children’s shoes who have read the official documents will wonder why they didn’t explain those auxiliary functions, such as mapState、 mapGetters。 Don’t worry, there will be a special chapter to explain it later, because I found that these auxiliary functions (including the following mapMutationsand mapActions)They are all created to solve the same problem, only in different forms, so it’s better to talk about it together, maybe the effect will be better. Vuex’s station Last lectureVuex’s state and getter, which tells us how to use the state data in the warehouse. Of course, it’s not enough to use it alone. Most application scenarios still have to control these states. How to control them is the key point of this lecture. Only mutation state The only way to change the state in vuex’s store is to submit a mutation.Mutations in vuex are very similar to events: each mutation has a string event type and a callback function. This callback function is where we actually change the state, and it will accept state as the first parameter const store = new Vuex.Store({ state: { count: 1 }, mutations: { //The event type is increment increment (state) { //Change status state.count++ } } }) Note that we can’t directly store.mutations.increment()Vuex specifies that the store.commitTo trigger the corresponding type: store.commit('increment') Chuanshen We can also ask store.commit Pass in additional parameters: mutations: { increment (state, n) { state.count += n } } //Call store.commit('increment', 10) For this extra parameter in the mutation, the official gave it the name of “tall”Load (payload)。 To be honest, I saw this title in the document for the first time“Submit load”I really don’t want to look down. We are often not defeated by these unsophisticated concepts, but by our own inner fear. In most cases, the load is an object that makes it easier to read: mutations: { increment (state, payload) { state.count += payload.amount } } There are two ways to submit: //1. Submit load and type separately store.commit('increment', { amount: 10 }) //2. The whole object is transferred to the mutation function as a load store.commit({ type: 'increment', amount: 10 }) Of course, there is no absolute limit to which way to use. It depends on my own preferences. As far as I am concerned, I prefer to use the second posture, which is more realistic together. Modify the rules It’s easy to modify the state data of the basic type. There are no restrictions, but if you modify the object, you should pay attention to it. For example: const store = new Vuex.Store({ state: { student: { Name: 'Xiao Ming', Sex: 'female' } } }) At this time, if we want to give studentAdd an age age: 18What should we do? Yes, directly sexNow, just add this field. It’s best to do this. But what if we want to change it dynamically? Then you have to follow Vue’s rules. As follows: mutations: { addAge (state) { Vue.set(state.student, 'age', 18) //Or: // state.student = { ...state.student, age: 18 } } } The above are two ways to add attributes to an object. Of course, if you want to modify the specific value of an added object, you can change it directly, such as state.student.age=20That’s it. As for why, we have learned before, because the state in the store is responsive. When we change the state data, the Vue component that monitors the state will also be updated automatically. Therefore, the mutation in vuex also needs to follow these rules as well as using Vue. Use constants This is to replace the name of the mutation event with a constant. // mutation-types.js export const SOME_MUTATION = 'SOME_MUTATION' // store.js import Vuex from 'vuex' import { SOME_MUTATION } from './mutation-types' const store = new Vuex.Store({ state: { ... }, mutations: { //Use the es2015 style computational property naming function to use a constant as the function name [SOME_MUTATION] (state) { // mutate state } } }) Some people may have doubts. What’s the use of doing this? You have to create more type files and import them when you use them. Isn’t it troublesome! Let’s see how the mutation is called store.commit('increment')You can find that here is the method of commit submission incrementIn the form of a string. If the project is small and developed by one person, it’s OK. However, if the project is large and there are more people writing code, it’s troublesome. Because there are many ways to commit, it will be particularly confusing. Moreover, if it is substituted in the form of string, it’s very difficult to check if something goes wrong. Therefore, for large projects with multi person cooperation, it’s better to use the constant form to deal with the mutation. For small projects, it doesn’t matter, just want to be lazy. Must be a synchronous function It’s important to remember,Mutation must be a synchronous function.Why? As mentioned earlier, the reason why we want to change the state data by submitting a mutation is that we want to track the change of state more clearly. If it is asynchronous as follows: mutations: { someMutation (state) { api.callAsyncMethod(() => { state.count++ }) } } We don’t know when the state will change, so we can’t track it. This is contrary to the original design of the station, so it is mandatory that it must be a synchronous function. store.commit('increment') //Any state changes caused by "increment" should be completed at this point. Action under vuex Through the previous lectureVuex’s station, we know how to modify the data of a state, and we can only submit the modification through the mutation. In addition, we also know that the mutation must be a synchronous function. What if the asynchronous function must be used in the requirement? Easy to do, then it’s action’s turn. Brief introduction Action is similar to mutation, except that: 1. Action submits a mutation, not a direct change of state. “ 2. An action can contain any asynchronous operation. Let’s look at a simple example of action const store = new Vuex.Store({ state: { count: 0 }, mutations: { increment (state) { state.count++ } }, actions: { increment (context) { context.commit('increment') } } }) As you can see, the action function takes a contextParameter. Note that this parameter is not general. It has the same methods and properties as the store instance, but they are not the same instance. I will explain why they are different when I learn modules later. So you can use it here context.commitTo submit a mutation, or through context.stateand context.gettersTo get the state and getters. Of course, to simplify the code, we can use theParametric deconstructionIt’s easy to expand commit、 stateAnd so on. As follows: actions: { increment ({ commit }) { commit('increment') } } Distribute action store.dispatch('increment') Station passed store.commitTrigger, then action store.dispatchMethod trigger. At first glance, it seems unnecessary. Isn’t it more convenient for us to distribute the mutation directly? In fact, this is not the case. Remember that mutation must execute this restriction synchronously? Action is not bound! We can perform asynchronous operations inside the action actions: { incrementAsync ({ commit }) { setTimeout(() => { commit('increment') }, 1000) } } It’s the same as the way of distribution of music dispatch: //Distributed as load store.dispatch('incrementAsync', { amount: 10 }) //Distribute as objects store.dispatch({ type: 'incrementAsync', amount: 10 }) Let’s take a more practical shopping cart example, which involves calling asynchronous API and distributing multiple mutations actions: { checkout ({ commit, state }, products) { //Back up the items in the current shopping cart const savedCartItems = [...state.cart.added] //Make a checkout request and then empty the cart optimistically commit(types.CHECKOUT_REQUEST) //The shopping API accepts a success callback and a failure callback shop.buyProducts( products, //Successful operation () => commit(types.CHECKOUT_SUCCESS), //Failed operation () => commit(types.CHECKOUT_FAILURE, savedCartItems) ) } } Note that a series of asynchronous operations are in progress in the example, and the side effects of the action (i.e. state changes) are recorded by submitting a mutation. Combined action Action is usually asynchronous, so how do you know when an action ends? More importantly, how can we combine multiple actions to handle more complex asynchronous processes? First, you need to understand store.dispatchCan handle the promise returned by the handler of the triggered action, and store.dispatchStill return promise: I don’t know what promise isLet’s get to know。 actions: { actionA ({ commit }) { return new Promise((resolve, reject) => { setTimeout(() => { commit('someMutation') resolve() }, 1000) }) } } Call: store.dispatch('actionA').then(() => { // ... }) Of course, it can also be like this: actions: { // ... actionB ({ dispatch, commit }) { return dispatch('actionA').then(() => { commit('someOtherMutation') }) } } We can also use itasync / awaitHow to combine actions: //Suppose getdata() and getotherdata() return promise actions: { async actionA ({ commit }) { commit('gotData', await getData()) }, async actionB ({ dispatch, commit }) { Await dispatch ('actiona ') // wait for actiona to complete commit('gotOtherData', await getOtherData()) } } One store.dispatchMultiple action functions can be triggered in different modules. In this case, the return promise is executed only after all trigger functions have completed. We often encounter this kind of situation in actual projects. For example, you want to handle B event now, but B event needs a kind of resource, and this kind of resource must be obtained through a event. At this time, we need to combine actions to handle these events. Vuex’s little helper I’ve talked about the four carriages of state, getter, mutation and action under vuex. I don’t know if you have understood them. Of course, if you want to really master it, you still need constant practice and hands-on practice. In fact, as long as you master the four carriages, you will have no problem in dealing with some small and medium-sized projects. The ultimate carriage of module is actually to deal with those slightly large and complex projects and avoid store There are too many data in it, so it’s difficult to manage and design. This cart is a little more abstract and not easy to control. Let’s explain it in detail next time. Many of the supporting facilities in Vue have been pursuing simplicity and perfection in the use experience, so this is a very important reason why Vue has won the hearts of the people for a long time. In this lecture, let’s talk about some common auxiliary functions of vuex. mapState Through the previous learning, we know that the easiest way to read the state from the store instance is toCalculation propertiesReturns a state in the. So, what happens when a component needs to get multiple states? Is that right export default { ... computed: { a () { return store.state.a }, b () { return store.state.b }, c () { return store.state.c }, ... } } Of course, this is no problem, but always feel very uncomfortable to write, it looks even more uncomfortable, right! Since it’s so easy for us to feel it, can vuex not feel it, can we bear it? Absolutely not, so mapStateAuxiliary functions have been created to deal with the pain points of this person’s gnashing teeth. //In the separately built version, the auxiliary function is Vuex.mapState import { mapState } from 'vuex' export default { // ... computed: mapState({ //Arrow functions make the code more concise a: state => state.a, b: state => state.b, c: state => state.c, //Pass string parameter 'B' //Equivalent to ` state = > state. B` bAlias: 'b', //To be able to use 'this' to get the local state //Regular functions must be used cInfo (state) { return state.c + this.info } }) } From the above example, we can see that we can directly store all the needed states in the mapStateThere is a unified management, but also can take alias, do additional operations and so on. If the name of the mapped calculation attribute is the same as the name of the child node of the state, we can simplify it and give the mapStatePass a string array: computed: mapState([ //Map this. A to store.state .a 'a', 'b', 'c' ]) because computedThis calculation property receives an object, so you can see from the example code above, mapStateThe function returns an object. If you want to mix it with local calculation properties, you can use ES6 syntax to simplify it computed: { localComputed () { ... }, //Use the object expansion operator to blend this object into an external object ...mapState({ // ... }) } I see mapStateAfter the auxiliary function, the usage of the following auxiliary functions is almost the same. Let’s go on. mapGetters This and mapStateBasically, there is no difference. Just look at the official examples import { mapGetters } from 'vuex' export default { // ... computed: { ...mapGetters([ 'doneTodosCount', 'anotherGetter', // ... ]) } } To get an alias, use the form of an object. The following example means to this.doneCountMap to this.$store.getters.doneTodosCount。 mapGetters({ doneCount: 'doneTodosCount' }) mapMutations Look directly at the sample code: import { mapMutations } from 'vuex' export default { // ... methods: { ...mapMutations([ //Will` this.increment () 'mapped to // `this.$store.commit('increment')` 'increment', //'mapmutations' also supports loads: //Will` this.incrementBy (amount) ` map to // `this.$store.commit('incrementBy', amount)` 'incrementBy' ]), ...mapMutations({ //Will` this.add () 'mapped to // `this.$store.commit('increment')` add: 'increment' }) } } It’s not easy to use. Serial loads can be directly supported. mapActions and mapMutationsAs like as two peas, change your name. import { mapActions } from 'vuex' export default { // ... methods: { ...mapActions([ //Will` this.increment () 'mapped to // `this.$store. dispatch('increment')` 'increment', //Mapactions also supports payload: //Will` this.incrementBy (amount) ` map to // `this.$store. dispatch('incrementBy', amount)` 'incrementBy' ]), ...mapActions({ //Will` this.add () 'mapped to // `this.$store. dispatch('increment')` add: 'increment' }) } } Want to invoke in component, directly this.xxxIt’s over. Vuex administrator module This is the last and most complex lecture of vuex foundation. It may be a bit difficult for beginners to accept if they come according to the official rules, so after thinking about it, I decided to spend more time to explain it with a simple example, and review the previous knowledge by the way. First of all, we need to understand the background of module. We know that vuex uses a single state tree, and all the states of the application are centralized into one object. If the project is relatively large, the corresponding state data will certainly be more. In this way, the store object will become very bloated and difficult to manage. It’s like a company is run by the boss alone. If the small company is OK, if the company is a little bigger, it will be in trouble. At this time, the boss will set up major departments and arrange a supervisor for each department to assign the management tasks. If there is anything to deal with, he only needs to communicate with these supervisors, and then the supervisor will assign the tasks. This will greatly improve the work efficiency and reduce the burden of the boss. In the same way, the module actually assumes the role of department administrator, and the store is the boss. After understanding this level, it will be much easier to do. Next, let’s start to practice step by step. 1、 Preparation Here we use the officialvue-cliTo build a project called “vuex test.”. Of course, you have to install Vue cli first: npm install -g @vue/cli # OR yarn global add @vue/cli After the installation is complete, you can use the following command to create the project: vue create vuex-test You can also use a graphical interface to create: vue ui The specific usage of Vue cli can be viewed officially,Poke here to enter 。 After the project is created, find the directory where the project is installed, and open the console to execute: //First navigate to the project directory cd vuex-test //Then install vuex npm install vuex --save //Run it npm run serve After running, it can be opened’s see the effect. Finally, we find a favorite ide to open this project, which is convenient for viewing and editing. I personally prefer to use webstore, and I recommend it here. 2、 Simple start Default structure diagram for the project Here, we just look srcDirectory, the rest of the time. assembly componentsIt’s not within the scope of this, so we can also ignore resources assetsThere’s nothing to say. If there are pictures or videos, just put them in this folder. We open it App.vueFile, remove the component related code, and write a little simple Vue code. It is amended as follows: <template> <div id="app"> <img src="./assets/logo.png" /> <h1>{{name}}</h1> < button @ Modify name < / button > </div> </template> <script> export default { data() { return { name: 'Lucy' } }, methods: { modifyNameAction() { this.name = "bighone" } } } </script> Now let’s introduce vuex and use it to manage state data, such as here name。 First of all srcCreate a new one in store.jsFile, and write the following familiar Code: import Vue from 'vue'; import Vuex from 'vuex'; Vue.use(Vuex); export default new Vuex.Store({ state: { name: 'Lucy', }, mutations: { setName(state, newName) { state.name = newName; } }, actions: { modifyName({commit}, newName) { commit('setName', newName); } } }); Then, in the main.jsImport from storeAnd global injection: import store from './store'; // ... new Vue({ store, render: h => h(App), }).$mount('#app') Final revision App.vueThe code in is as follows: <script> import {mapState, mapActions} from 'vuex'; export default { computed: { ...mapState(['name']) }, methods: { ...mapActions(['modifyName']), modifyNameAction() { this.modifyName('bighone'); } }, } </script> I think it’s no problem to understand these codes, because these are very basic knowledge points of vuex. Here’s a brief review of the practical operation to deepen the impression. If you don’t understand, it proves that you haven’t mastered the basic knowledge before. 3、 Introduction of module In the foreword, we have explained the basic responsibilities of the module, so how to use it? Vuex allows us to divide the store into large and small objects. Each object also has its own state, getter, mutation and action. This object is called a module, in which sub modules and sub modules can be embedded Now in srcThere’s a folder inside, named moduleAnd then create a new one inside moduleA.jsFile, and write the following code: export default { state: { text: 'moduleA' }, getters: {}, mutations: {}, actions: {} } As above, build another one moduleB.jsThe document is not repeated here. Then open it store.jsFile, import the two modules: import moduleA from './module/moduleA'; import moduleB from './module/moduleB'; export default new Vuex.Store({ modules: { moduleA, moduleB, }, // ... } At this time, two sub modules have been injected into the store moduleA moduleB, we can App.vuePassed in this.$store.state.moduleA.textThis way to directly access the state data in the module. As follows: // ... computed: { ...mapState({ name: state => state.moduleA.text }), }, // ... Therefore, the internal state of the module is local and only belongs to the module itself, so the external state must be accessed through the corresponding module name. howeverNote: Action, mutation and getter in the module can be registered in theGlobal Namespace This enables multiple modules to respond to the same mutation or action. Here, take the response of mutation as an example, add a mutation to ModuleA and a mutation to moduleb, as follows: mutations: { setText(state) { state.text = 'A' } }, Module B is the same as above. Just change the name of the text. There will be no repetition here. And then back App.vueIn the text, it is amended as follows: <script> import {mapState, mapMutations} from 'vuex'; export default { computed: { ...mapState({ name: state => ( state.moduleA.text +'and'+ state.moduleB.text ) }), }, methods: { ...mapMutations(['setText']), modifyNameAction() { this.setText(); } }, } </script> Run it and click Modify, and we will find the textThe values have changed. Of course, as like as two peas of action, you can try it. If there is an intersection of data between modules, we can update the data between modules synchronously in this way. Although it seems very convenient, we must be careful when using it. Once this method is not used well and errors are encountered, it is difficult to check. 4、 Access the root node We already know that the internal state of the module is local and belongs only to the module itself. So what if we want to access the data state of the store root node in the module? It’s very simple. We can get it through the parameter rootstate in the getter and action inside the module. Next, let’s give it to you modelA.jsAdd a bit of code to the file. export default { // ... getters: { //Note: rootstate must be the third parameter detail(state, getters, rootState) { return state.text + '-' + rootState.name; } }, actions: { callAction({state, rootState}) { alert(state.text + '-' + rootState.name); } } } Then modify it App.vue : <script> import {mapActions, mapGetters} from 'vuex'; export default { computed: { ...mapGetters({ name: 'detail' }), }, methods: { ...mapActions(['callAction']), modifyNameAction() { this.callAction(); } }, } </script> Then run it and you will find that the data of the root node has been obtained by us. It should be noted that in getters, rootstate is exposed by the third parameter. In addition, there is a fourth parameter, rootgetters, which is used to obtain the getters information of the root node. I won’t show it here. If you are interested, you can try it. The only thing to emphasize is that you must not mistake the position of parameters. Of course, rootgetters can also be received in action, but in action, because all the data it receives is wrapped in the contextObject, so there is no restriction on the order of unpacking. 5、 Namespace As we already know, action, mutation and getter in the module are registered in the global namespace by default. What if we just want them to work in the current module? By adding namespaced: trueMake it a module with a namespace.When a module is registered, all its getters, actions and mutations will automatically be named according to the registered path of the module. We are here moduleA.jsAdd in namespaced: true。 export default { namespaced: true, // ... } If you run the code at this time, you will find the following error: [vuex] unknown getter: detail Not found in global getter detailBecause its road strength has changed, it no longer belongs to the global, only belongs to module a. So, at this time, if we want to visit it, we must take the road. modify App.vueAs follows: <script> import {mapActions, mapGetters} from 'vuex'; export default { computed: { ...mapGetters({ name: 'moduleA/detail' }), }, methods: { ...mapActions({ call: 'moduleA/callAction' }), modifyNameAction() { this.call(); } }, } </script> The first mock exam is that if a module is enabled, the getter, dispatch and commit in getter and action are localized, and there is no need to add space name prefix in the same module. That is to say, change namespacedProperty does not need to modify any code in the module. So what do we doAccessing global content within a module with a namespaceWhat about it? Through the previous study, we have learned that: If you want to use global state and getter, rootstate and rootgetter will be passed into getter as the third and fourth parameters, and action will also be passed in through the properties of context object. Now if you want to distribute an action or submit a mutation in the global namespace, we just need to { root: true }As the third parameter, it can be passed to dispatch or commit. export default { namespaced: true, // ... actions: { callAction({state, commit, rootState}) { Commit ('setname ','change', {root: true}); alert(state.text + '-' + rootState.name); } } } Now let’s see howRegister global action in module with namespace。 If you need to register global actions in a module with a namespace, you can add root: trueAnd put the definition of this action in the function handler. There is a slight change in the writing. Let’s take a look and revise it moduleA.js, as follows: export default { namespaced: true, // ... actions: { callAction: { root: true, handler (namespacedContext, payload) { let {state, commit} = namespacedContext; commit('setText'); alert(state.text); } } } } In a nutshell, here’s the namespacedContextIt is equivalent to the context object of the current module, payloadIs the parameter passed in when calling, also called load of course. That’s all for the example. Let’s take a lookBinding function with namespace。 about mapState, mapGetters, mapActionsand mapMutationsHow to bind these functions to modules with a namespace has already been written in the above example code. Let’s take a look at other easier ways to write them. Let’s take a look at the previous way. Here we use the official example code to illustrate: computed: { ...mapState({ a: state => state.some.nested.module.a, b: state => state.some.nested.module.b }) }, methods: { ...mapActions([ // -> this['some/nested/module/foo']() 'some/nested/module/foo', // -> this['some/nested/module/bar']() 'some/nested/module/bar' ]) } More elegant writing: computed: { ...mapState('some/nested/module', { a: state => state.a, b: state => state.b }) }, methods: { ...mapActions('some/nested/module', [ 'foo', // -> this.foo() 'bar' // -> this.bar() ]) } Pass the space name string of the module as the first parameter to the above function so that all bindings will automatically use the module as the context. We can also use createNamespacedHelpersCreate helper functions based on a namespace. It returns an object with a new component binding auxiliary function bound to a given namespace value import { createNamespacedHelpers } from 'vuex' const { mapState, mapActions } = createNamespacedHelpers('some/nested/module') export default { computed: { //Search in 'some / nested / module' ...mapState({ a: state => state.a, b: state => state.b }) }, methods: { //Search in 'some / nested / module' ...mapActions([ 'foo', 'bar' ]) } } 6、 Dynamic registration of modules This chapter is quite clear on the official website, so it’s directly moved here. After the store is created, you can use the store.registerModuleMethod dynamic registration module: //Register module ` mymodule` store.registerModule('myModule', { // ... }) //Register nested modules ` nested / mymodule` store.registerModule(['nested', 'myModule'], { // ... }) And then you can go through it store.state.myModuleand store.state.nested.myModuleAccess the status of the module. Module dynamic registration enables other Vue plug-ins to use vuex to manage state by adding new modules to the store. For example, vuex-router-syncPlug in combines Vue router and vuex through dynamic registration module to realize routing state management of application. You can also use it store.unregisterModule(moduleName)To dynamically unload the module. Note that you cannot use this method to unload static modules (that is, modules declared when the store was created). When registering a new module, you are likely to want to keep the past state, for example, from a server-side rendering application. You can go through preserveStateOption to archive it: store.registerModule('a', module, { preserveState: true })。 7、 Module reuse For one thing, reuse will pollute the data state in the module, so just like data in Vue, you can use a function to declare the state. const MyReusableModule = { state () { return { foo: 'bar' } }, //... } This work adoptsCC agreementReprint must indicate the author and the link of this article
https://developpaper.com/vuex/
CC-MAIN-2021-10
refinedweb
6,781
57.27
Howdy, I read the post about "how to make an object move in the direction its pointing" and clicked on the link the directed me to an article about this kind of stuff becuase I also had that problem. So I read the article and noticed that I could pass a radian to the "sin()" and "cos()" functions, than multiply the return by the length of the velocity vector. But in order to make this work for me, I had to first convert the allegro degrees (256) of my object into normal degrees (360), than find the radian of the result and pass the final result to the two functions. This worked, but as I read further on, I noticed that "fsin()" and "fcos()" accepted allegro degrees as their parameters, I tried this and it didn't work.....any suggestions?:-/ ---"No amount of prayer would have produced the computers you use to spread your nonsense." Arthur Kalliokoski I tried this and it didn't work.....any suggestions?:-/ Yes actually, I suggest you post a small (ie <15 lines) bit of code that demonstrates your problem. How is my posting? - iPad POS but as I read further on, I noticed that "fsin()" and "fcos()" accepted allegro degrees as their parameters, I tried this and it didn't work.....any suggestions? What do you mean by "didn't work?" And were you actually using the correct function names, which are fixsin() and fixcos()? Here's the manual entry for fixsin() so you can see how it should be used. By the way, for modern computers with fast FPUs, using the fixed versions may actually be slower than their floating point counterparts. Well, the article said the "fsin()" and "fcos()" are the function names. I compile with no errors but when I run the program some super - duper - oober high number is produced. Also, when I try to declare the angle as "fixed" my compiler returns an error that says "fixed" is invalid and doesn't work. #include <allegro.h> fixed f = fsin( itofix( 192 ) ); // 192 is 270 degress IIRC I tried to pass the angle variable in the "fcos()" and "fsin()" functions using itofix but that didn't work either....help please!..... Show some friggin code! int shipangle; int laserxspeed; int laseryspeed; int laserx,lasery; if(key[KEY_LEFT]) shipangle++; if(key[KEY_RIGHT]) shipangle--; laserxspeed = 8 * fcos(itofix(shipangle)); laseryspeed = 8 * fsin(itofix(shipangle)); laserx += laserxspeed; lasery += laseryspeed; laserxspeed = 8 * fixtof(itofix(shipangle)); laseryspeed = 8 * fixtof(itofix(shipangle)); you have to convert fix back to float/int. alternatively you can just use cos/sin from the standard math library... I had to first convert the allegro degrees (256) of my object into normal degrees (360), than find the radian of the result ... why not just convert directly between Allegro degrees and radians (* π / 128)? Of course, the best course of action is to keep everything in radians and only convert when drawing (* 128 / π). It's by far the easiest, most convenient and readable way. And for Eris's sake, use floats! void and_then_draw_it(void) { rotate_sprite(buffer, shipsprite, shipx, shipy, ftofix(shipangle*128.0/M_PI)); } -- Move to the Democratic People's Republic of Vivendi Universal (formerly known as Sweden) - officially democracy- and privacy-free since 2008-06-18! But if I stick with float point math I have to do conversions everry time I call sin() and cos(). No. Unless you mean the float->double conversion in sin() and cos() (they take and return doubles), but even the overly anal language of C++ does that automagically without complaining. (And, of course, using doubles is always an option as well) sin() and cos() (they take and return doubles) No. Sin() and cos() both take radians, not doubles. (unless you meen the radian AS the double) And how come when I declare a variable with "fixed" my compiler returns an error saying that "fixed" isn't a class and doesn't exist? (unless you meen the radian AS the double) Yes, he meant radians as doubles. radians are a unit of measure, doubles are an encoding of numeric quantity, they are independant of each other. It would be like saying "im going to measure this in meters, not integers". And how come when I declare a variable with "fixed" my compiler returns an error saying that "fixed" isn't a class and doesn't exist? I forget exactly what causes this, are you including <allegro.h> before you use a fixed variable? Dont use fixed numbers, they're slower than just using doubles and only have a precision from ~-32000 to ~32000 -- just use itofix and ftofix when you need to pass them into allegro functions... Also as kazzmir said, radians, degrees, and allegro degrees are just units of measure. You can put radians into a double, a float or a fixed, just as well as you can put allegro degrees, or degrees, or meters, miles, or light years in. A radian is a number between 0 and ~6.2831 (PI*2), a degree is between 0 and 360, and an allegro degree is between 0 and 255 -- they're all the same and you can use basic fractional math to convert between them. For simplicity sake, you could do the following after including math.h (and make sure you include math.h or cos/sin won't work right): #include <math.h> #define DegToRad PI/180.0f #define RadToDeg 180.0f/PI then when you have a variable: double myvar = 180; //180 degrees double length = 10; //10 pixels and you need to pass it into cos or sin you can: //the double "myvar" is in degrees, but cos and sin only take doubles as radians! double dx = cos(myvar * DegToRad) * length; double dy = sin(myvar * DegToRad) * length; edit:To convert from degrees to allegro degrees: #define DegToADeg 256.0f/360.0f #define ADegToDeg 360.0f/256.0f And Similarly to convert allegro degrees to radians: #define ADegToRad PI/128.0f #define RadToADeg 128.0f/PI Ah,ha! Thanks. That's what I was looking for. A simply and fast way to convert....thanks. (Can't believe I didn't think of that);) On the radians / double issue again:Computers don't know about units (meters, degrees, liters, ampères, kelvins, radians and what have you) when doing calculations; all units are usually implicit, and the function takes something as an argument or returns something that is to be interpreted as a real-world number, the documentation usually specifies what unit is expected. For many things, especially in the computing world, there are conventions that pretty much everybody adheres to: for example, the size of a data chunk is given in bytes (not 12-bit binary words, or words of 57 base-7 digits). For other types of quantities though, such as angles, the matter isn't as clear, since there are several systems being used alongside each other; in this case, there are 3 systems involved, each of which has its advantages:- Degrees; a full circle equals 360 degrees. This format is very human-friendly, since most commonly used angles are represented as whole numbers (0, 30, 45, 90, 180). It also allows for easy conversion to and from the "clock" system (the system used by sailors and aircraft pilots to indicate relative directions: "Enemy fighters at 3 o'clock!") - each hour equals 30 degrees. OpenGL uses this format.- Radians; a full circle equals 2 * PI. From the mathematician's point of view, this is the "natural" way of measuring angles, because it defines the angle by the length of an arc on the unit circle. In other words, an angle in radians equals the length of its arc divided by a radius. (It makes more sense when you see it sketched on paper). Also, all the trig functions can be used unchanged for various circle calculations when using radians. Downside is that angles in this format look very un-intuitive, because PI is not a whole number. The C math library (libm) uses radians.- Allegro degrees; a full circle equals 256. Big advantage is that it allows for a number of speed optimizations, especially on older computers. Wrapping an angle into a full-circle-range, which usually requires a modulo operation, can be optimized to a bitwise-and, because 256 is a power-of-2. It is still quite intuitive (half circle is 128, 1/4 is 64, etc.); downside, though, is that it is incompatible with pretty much all other libraries, including libm. The unit chosen is independent from the numeric encoding format you use: int, allegro::fixed, float, double, etc.The issue with allegro's math functions is that they expect fixed-point numbers, but since they are typedef'ed as long ints, C cannot tell the difference, nor convert them correctly for you. My suggestion is to leave the fixed-point math alone altogether, and do everything using floats (or doubles) and radians. The only point where you really can't avoid 256-based fixed-point values is when you pass angles to allegro's drawing functions, in which case you should just take the float-point value and convert it on-the-fly, like this: #include <allegro.h> #include <math.h> // assuming angle to be a float: rotate_sprite(screen, sprite, 100, 100, ftofix(angle * 128.0f / M_PI)); ---Me make music: Triofobie---"We need Tobias and his awesome trombone, too." - Johan Halmén #define DegToADeg 256/360#define ADegToDeg 360/256 Baaaaaaaad idea. 256/360 is an integer division, resulting in an integer. You want 256.0/360.0 or something similar. "Enemy fighters at 3 o'clock!") - each hour equals 15 degrees. Don't you mean 30 degrees? --! 'course. 30 degrees. I'll go edit, hehe. Baaaaaaaad idea. 256/360 is an integer division, resulting in an integer. You want 256.0/360.0 or something similar. Thank you, I'll edit that post and put 256.0f/360.0f -- etc, I've been using VB.NET where constants have to be declared with data types and this would work fine, and I've forgotten about some the annoyances of C... Public Const DegreesToADegrees As Double = 256 / 360 Even 256/360.0f would do. But, even that looks like horrible coding style to me (DegToADeg sounds like a function, not a constant, which should be const float blah = blah anyway ). I'd personally do #define DegToADeg(x) = (x) * 256/360.0f. BAF.zone | SantaHack! #define DegToADeg(x) = (x) * 256/360.0f. What about: const float DEG2ADEG_FAC = 256.0f / 360.0f; inline float DegToADeg(float x) { return x * DEG2ADEG_FAC; } ? That's even better. [edit]Nevermind I'd personally do #define DegToADeg(x) = (x) * 256/360.0f. #define DegToADeg(x) = (x) * (256/360.0f) Makes it easier for a compiler to optimize your code. Some versions of <math.h> define fcos() and fsin() as float versions (rather than double) of sin() and cos() This is why Allegro's fsin() and fcos() have been renamed to fixsin() and fixcos(). It is better to use the new names to avoid confusion (both by you and by your compiler). The old compatability defines may be removed by Allegro 5.0.
https://www.allegro.cc/forums/thread/590290/653787
CC-MAIN-2021-39
refinedweb
1,864
64.2
It seems an custom url rewrite provider did the trick: public class SplashUrlRewriteProvider : FriendlyUrlRewriteProvider { public override bool ConvertToInternal(EPiServer.UrlBuilder url, out object internalObject) { if (url.AppRelativePath == "~/") { url.Path = "/index.htm"; internalObject = PageReference.StartPage; return true; } return base.ConvertToInternal(url, out internalObject); } } I'm trying to implement a typical splash-page functionality while using episerver 5. Whenever I hit the I want to be served a static html page but once I move from there and go to the startpage from within the site I see the "default" start . It's totally okay to map / to the static file and /en/ to the startpage in episerver. With the old url rewriting I used to be able to map e.g. index.html as the preferred default page type and since this was found before default.aspx I would get index.html whenever I go for the / directory. However the new url rewriting is "smarter" and catches my request for /(index.html). I tried various combinations e.g. mapping .htm to ssinc.dll but I'm venturing into unknown territory here. As soon as I enable the wildcard mapping episerver will serve the startpage in episerver. Any ideas on how to solve this? Is there a way to tell the FriendlyUrlRewriteProvider to ignore pages that are "found"?
https://world.optimizely.com/forum/legacy-forums/Episerver-CMS-version-5/Thread-Container/2008/4/Prevent-UrlRewrite-for-existing-files/
CC-MAIN-2021-39
refinedweb
216
60.41
Actually I have a query which returns result containing column(for ex.Address) of type varchar but the domain model for that table containing property of type object(for ex. Address Address).Because of which it trows error which says could not cast string to Comment.I cant figure out how to resolve this issue with dapper .net. Code snippet: IEnumerable<Account> resultList = conn.Query<Account>(@" SELECT * FROM Account WHERE shopId = @ShopId", new { ShopId = shopId }); The Account object is for example. public class Account { public int? Id {get;set;} public string Name {get;set;} public Address Address {get;set;} public string Country {get;set;} public int ShopId {get; set;} } As there is type mismatch between database table column(Address) and domain model property(Address) dapper throws exception.So is there is any way to map that properties though dapper.. Since there is a type mismatch between your POCO and your database, you'll need to provide a mapping between the two. public class Account { public int? Id {get;set;} public string Name {get;set;} public string DBAddress {get;set;} public Address Address { // Propbably optimize here to only create it once. get { return new Address(this.DBAddress); } } public string Country {get;set;} public int ShopId {get; set;} } Something like that - you match the db column to the property DBAddress (You need to provide an alias like SELECT Address as DBAddress instead of *) and provide a get method on your Address object which creates / reuses a type of Address with the contents of the db value. Another option is to use Dapper's Multi-Mapping feature. public class TheAccount { public int? Id { get; set; } public string Name { get; set; } public Address Address { get; set; } public string Country { get; set; } public int ShopId { get; set; } } public class Address { public string Street { get; set; } public string City { get; set; } } public class Class1 { [Test] public void MultiMappingTest() { var conn = new SqlConnection( @"Data Source=.\SQLEXPRESS; Integrated Security=true; Initial Catalog=MyDb"); conn.Open(); const string sql = "select Id = 1, Name = 'John Doe', Country = 'USA', ShopId = 99, " + " Street = '123 Elm Street', City = 'Gotham'"; var result = conn.Query<TheAccount, Address, TheAccount>(sql, (account, address) => { account.Address = address; return account; }, splitOn: "Street").First(); Assert.That(result.Address.Street, Is.Not.Null); Assert.That(result.Country, Is.Not.Null); Assert.That(result.Name, Is.Not.Null); } } The only issue I see with this is that you'll have to list all of the Account fields, followed by the Address fields in your select statement, to allow splitOn to work.
https://dapper-tutorial.net/knowledge-base/17562373/dapper--net---table-column-and-model-property-type-mismatch
CC-MAIN-2021-10
refinedweb
419
54.42
Theme: Creating a Theme Starting pointStarting point We've provided an example theme that you can use as a starting point for your own themes. Additionally, there's also a YouTube tutorial on this topic that should get you started in no time. Ok, let's dive into it: Get the Reaction Example Theme. Theme contentsTheme contents Every theme requires a specific structure to be properly registered as a Reaction theme. register.js (Required) - Registers a Reaction plugin allowing it to be included automatically. import { Reaction } from "/server/api"; Reaction.registerPackage({ // Label that shows up in tooltips and places where the package is accessable for settings label: "My Theme", // Unique name, used for pulling package data out of the database name: "my-theme", // Icon for toolbars icon: "fa fa-bars", // Auto-enable plugin, sets enabled: true in database autoEnable: true, // Settings for plugin settings: {}, // Routes and other registry items related to layout registry: [] }); client/index.less (Required for LESS processing) - Entry point of all client side LESS files. From this file you can import all your custom files and they will be processed and included when the app is built. // Entrypoint for LESS CSS @import "styles/navbar.less"; You may store your CSS anywhere within your plugin. For the example theme we've placed CSS in the directory client/styles. Install themeInstall theme Themes are installed in imports/plugins/custom/. Themes are auto included and their load order is currently based on their order in the custom directory. Keep this in mind if you decide to have multiple themes in the custom directory as they may conflict with each other. PLEASE NOTE: In order for your theme plugin to be loaded the first time, you will need to stop and restart your Reaction instance to trigger the plugin loader. Overriding variables and stylesOverriding variables and styles You can override classes and variables of the default theme simple by defining the classes and variables after importing the base theme. client/styles/navbar.less .my-navbar { display: flex; // All LESS is auto-prefixed justify-content: center; align-items: center; height: 100px; background-color: @reaction-brand; align-content: center; justify-content: center; } .my-navbar a { padding: 0 6px; font-size: 24px; color: @white; // All variables and mixins from the default theme are available to use } In LESS variables are considered constants, and are processed first, from top to bottom of all included LESS files. That means you can override variables after they've already been declared and the last instance takes effect.
https://docs.reactioncommerce.com/docs/next/creating-a-theme
CC-MAIN-2018-30
refinedweb
419
51.78
Problem You want to run some Ruby code (such as a call to a shell command) repeatedly at a certain interval. Solution Create a method that runs a code block, then sleeps until it's time to run the block again: def every_n_seconds(n) loop do before = Time.now yield interval = n-(Time.now-before) sleep(interval) if interval > 0 end end every_n_seconds(5) do puts "At the beep, the time will be #{Time.now.strftime("%X")}…beep!" end # At the beep, the time will be 12:21:28… beep! # At the beep, the time will be 12:21:33… beep! # At the beep, the time will be 12:21:38… beep! # … Discussion There are two main times when you'd want to run some code periodically. The first is when you actually want something to happen at a particular interval: say you're appending your status to a log file every 10 seconds. The other is when you would prefer for something to happen continuously, but putting it in a tight loop would be bad for system performance. In this case, you compromise by putting some slack time in the loop so that your code isn't always running. The implementation of every_n_seconds deducts from the sleep time the time spent running the code block. This ensures that calls to the code block are spaced evenly apart, as close to the desired interval as possible. If you tell every_n_seconds to call a code block every five seconds, but the code block takes four seconds to run, every_n_seconds only sleeps for one second. If the code block takes six seconds to run, every_n_seconds won't sleep at all: it'll come back from a call to the code block, and immediately yield to the block again. If you always want to sleep for a certain interval, no matter how long the code block takes to run, you can simplify the code: def every_n_seconds(n) loop do yield sleep(n) end end In most cases, you don't want every_n_seconds to take over the main loop of your program. Here's a version of every_n_seconds that spawns a separate thread to run your task. If your code block stops the loop by with the break keyword, the thread stops running: def every_n_seconds(n) thread = Thread.new do while true before = Time.now yield interval = n-(Time.now-before) sleep(interval) if interval > 0 end end return thread end In this snippet, I use every_n_seconds to spy on a file, waiting for people to modify it: def monitor_changes(file, resolution=1) last_change = Time.now every_n_seconds(resolution) do check = File.stat(file).ctime if check > last_change yield file last_change = check elsif Time.now - last_change > 60 puts "Nothing's happened for a minute, I'm bored." break end end end That example might give output like this, if someone on the system is working on the file /tmp/foo: thread = monitor_changes("/tmp/foo") { |file| puts "Someone changed #{file}!" } # "Someone changed /tmp/foo!" # "Someone changed /tmp/foo!" # "Nothing's happened for a minute; I'm bored." thread.status # => false See Also
http://flylib.com/books/en/2.44.1/running_a_code_block_periodically.html
CC-MAIN-2018-05
refinedweb
516
72.16
*/ 2. Make child process In-dependent - setsid()Before we see how we gonna make a child process independent let us talk Process group and Session ID. A process group denotes a collection of one or more processes. Process groups are used to control the distribution of signals. A signal directed to a process group is delivered individually to all of the processes that are members of the group.. New process images created by a call to a function of the exec family and fork() inherit the process group membership and the session membership of the parent process image. A process receives signals from the terminal that it is connected to, and each process inherits its parent's controlling tty. A daemon process daemon process should operate independently from other processes. setsid();setsid() system call is used to create a new session containing a single (new) process group, with the current process as both the session leader and the process group leader of that single process group. (setpgrp() is an alternative for this). NOTE: We have to create a child process and use setsid() to make it independent. Trying on a parent process returns error saying EPERM. 3. Change current Running Directory - chdir()A daemon process should run in a known directory. There are many advantages, in fact the opposite has many disadvantages: suppose that our daemon process is started in a user's home directory, it will not be able to find some input and output files. If the home directory is a mounted filesystem then it will even create many issues if the filesystem is accidentally un-mounted. chdir("/server/");The root "/" directory may not be appropriate for every server, it should be chosen carefully depending on the type of the server. 4. Close Inherited Descriptors and Standard I/O Descriptors A child process inherits default standard I/O descriptors and opened file descriptors from a parent process, this may cause the use of resources un-neccessarily. Unnecessary file descriptors should be closed before fork() system call (so that they are not inherited) or close all open descriptors as soon as the child process starts running as shown below. for ( i=getdtablesize(); i>=0; --i) close(i); /* close all descriptors */There are three standard I/O descriptors: - standard input 'stdin' (0), - standard output 'stdout' (1), - standard error 'stderr' (2). int fd; fd = open("/dev/null",O_RDWR, 0); if (fd != -1) { dup2 (fd, STDIN_FILENO); dup2 (fd, STDOUT_FILENO); dup2 (fd, STDERR_FILENO); if (fd > 2) close (fd); }5. Reset File Creation Mask - umask() Most Daemon processes runs as super-user, for security reasons they should protect files that they create. Setting user mask will prevent unsecure file priviliges that may occur on file creation. umask(027);This will restrict file creation mode to 750 (complement of 027). Let us see a sample 'C' code which creates a daemon. #include <stdio.h> #include <stdlib.h> #include <unistd.h> #include <sys/types.h> #include <sys/stat.h> #include <fcntl.h> #define EXIT_SUCCESS 0 #define EXIT_FAILURE 1 static void daemonize(void) { pid_t pid, sid; int fd; /* already a daemon */ if ( getppid() == 1 ) return; /* Fork off the parent process */ pid = fork(); if (pid < 0) { exit(EXIT_FAILURE); } if (pid > 0) { exit(EXIT_SUCCESS); /*Killing the Parent Process*/ } /* At this point we are executing as the child process */ /* Create a new SID for the child process */ sid = setsid(); if (sid < 0) { exit(EXIT_FAILURE); } /* Change the current working directory. */ if ((chdir("/")) < 0) { exit(EXIT_FAILURE); } fd = open("/dev/null",O_RDWR, 0); if (fd != -1) { dup2 (fd, STDIN_FILENO); dup2 (fd, STDOUT_FILENO); dup2 (fd, STDERR_FILENO); if (fd > 2) { close (fd); } } /*resettign File Creation Mask */ umask(027); } int main( int argc, char *argv[] ) { daemonize(); while(1) { /* Now we are a daemon -- do the work for which we were paid */ sleep(10); } return 0; } References thank you. a very nice and easy to understand example. -priya nice articles easy to understand... -Srinivas Thank you; this was very helpful and clear! thank you ... if suppose more child process created then how to handle the terminal exit process (STDIN,STDOUT,STDERR)..how to exit the parent process safely without affecting the created child's.. regards siva Very comprehensive guide. I have been searching, reading so many articles but this is the only one making all of my confuses cleared.
http://codingfreak.blogspot.com/2012/03/daemon-izing-process-in-linux.html
CC-MAIN-2017-34
refinedweb
712
62.58
Using filters to manipulate data¶ Filters let you transform JSON data into YAML data, split a URL to extract the hostname, get the SHA1 hash of a string, add or multiply integers, and much more. You can use the Ansible-specific filters documented here to manipulate your data, or use any of the standard filters shipped with Jinja2 - see the list of built-in filters in the official Jinja2 template documentation. You can also use Python methods to transform data. You can create custom Ansible filters as plugins, though we generally welcome new filters into the ansible-base repo so everyone can use them. Because templating happens on the Ansible controller, not on the target host, filters execute on the controller and transform data locally. Handling undefined variables Defining different values for true/false/null (ternary) - Formatting data: YAML and JSON Combining and selecting data - Selecting from sets or lists (set theory) Calculating numbers (math) Managing network interactions Encrypting and checksumming strings and passwords - - - - Getting Kubernetes resource names Handling undefined variables¶ Filters can help you manage missing or undefined variables by providing defaults or making some variables optional. If you configure Ansible to ignore most undefined variables, you can mark some variables as requiring values with the mandatory filter. Providing default values¶ You can provide default values for variables directly in your templates using the Jinja2 ‘default’ filter. This is often a better approach than failing if a variable is not defined: {{ some_variable | default(5) }} In the above example, if the variable ‘some_variable’ is not defined, Ansible uses the default value 5, rather than raising an “undefined variable” error and failing. If you are working within a role, you can also add a defaults/main.yml to define the default values for variables in your role. Beginning in version 2.8, attempting to access an attribute of an Undefined value in Jinja will return another Undefined value, rather than throwing an error immediately. This means that you can now simply use a default with a value in a nested data structure (in other words, {{ foo.bar.baz | default('DEFAULT') }}) when you do not know if the intermediate values are defined. If you want to use the default value when variables evaluate to false or an empty string you have to set the second parameter to true: {{ lookup('env', 'MY_USER') | default('admin', true) }} Making variables optional¶ By default Ansible requires values for all variables in a templated expression. However, you can make specific variables optional. For example, you might want to use a system default for some items and control the value for others. To make a variable optional, set the default value to the special variable omit: - name: Touch files with an optional mode ansible.builtin.file: dest: "{{ item.path }}" state: touch mode: "{{ item.mode | default(omit) }}" loop: - path: /tmp/foo - path: /tmp/bar - path: /tmp/baz mode: "0444" In this example, the default mode for the files /tmp/foo and /tmp/bar is determined by the umask of the system. Ansible does not send a value for mode. Only the third file, /tmp/baz, receives are chaining though, so be prepared for some trial and error if you do this. Defining mandatory values¶ If you configure Ansible to ignore undefined variables, you may want to define some values as mandatory. By default, Ansible fails if a variable in your playbook or command is undefined. You can configure Ansible to allow undefined variables by setting DEFAULT_UNDEFINED_VAR_BEHAVIOR to false. In that case, you may want to require some variables to be defined. You can do this with: {{ variable | mandatory }} The variable value will be used as is, but the template evaluation will raise an error if it is undefined. Defining different values for true/false/null (ternary)¶ You can create a test, then define one value to use when the test returns true and another when the test returns false (new in version 1.9): {{ (status == 'needs_restart') | ternary('restart', 'continue') }} In addition, you can define a one value to use on true, one value on false and a third value on null (new in version 2.8): {{ enabled | ternary('no shutdown', 'shutdown', omit) }} Managing data types¶ You might need to know, change, or set the data type on a variable. For example, a registered variable might contain a dictionary when your next task needs a list, or a user prompt might return a string when your playbook needs a boolean value. Use the type_debug, dict2items, and items2dict filters to manage data types. You can also use the data type itself to cast a value as a specific data type. Discovering the data type¶ New in version 2.3. If you are unsure of the underlying Python type of a variable, you can use the type_debug filter to display it. This is useful in debugging when you need a particular type of variable: {{ myvar | type_debug }} Transforming dictionaries into lists¶ New in version 2.6. Use the dict2items filter to transform a dictionary into a list of items suitable for looping: {{ dict | dict2items }} Dictionary data (before applying the dict2items filter): tags: Application: payment Environment: dev List data (after applying the dict2items filter): - key: Application value: payment - key: Environment value: dev New in version 2.8. The dict2items filter is the reverse of the items2dict filter. If you want to configure the names of the keys, the dict2items filter accepts 2 keyword arguments. Pass the key_name and value_name arguments to configure the names of the keys in the list output: {{ files | dict2items(key_name='file', value_name='path') }} Dictionary data (before applying the dict2items filter): files: users: /etc/passwd groups: /etc/group List data (after applying the dict2items filter): - file: users path: /etc/passwd - file: groups path: /etc/group Transforming lists into dictionaries¶ New in version 2.7. Use the items2dict filter to transform a list into a dictionary, mapping the content into key: value pairs: {{ tags | items2dict }} List data (before applying the items2dict filter): tags: - key: Application value: payment - key: Environment value: dev Dictionary data (after applying the items2dict filter): Application: payment Environment: dev The items2dict filter is the reverse of the dict2items filter. Not all lists use key to designate keys and value to designate values. For example: fruits: - fruit: apple color: red - fruit: pear color: yellow - fruit: grapefruit color: yellow In this example, you must pass the key_name and value_name arguments to configure the transformation. For example: {{ tags | items2dict(key_name='fruit', value_name='color') }} If you do not pass these arguments, or do not pass the correct values for your list, you will see KeyError: key or KeyError: my_typo. Forcing the data type¶ You can cast values as certain types. For example, if you expect the input “True” from a vars_prompt and you want Ansible to recognize it as a boolean value instead of a string: - debug: msg: test when: some_string_value | bool If you want to perform a mathematical comparison on a fact and you want Ansible to recognize it as an integer instead of a string: - shell: echo "only on Red Hat 6, derivatives, and later" when: ansible_facts['os_family'] == "RedHat" and ansible_facts['lsb']['major_release'] | int >= 6 New in version 1.6. Formatting data: YAML and JSON¶ You can switch a data structure in a template from or to JSON or YAML format, with options for formatting, indenting, and loading data. The basic filters are occasionally useful for debugging: {{ some_variable | to_json }} {{ some_variable | to_yaml }} For human readable output, you can use: {{ some_variable | to_nice_json }} {{ some_variable | to_nice_yaml }} You can change the indentation of either format: {{ some_variable | to_nice_json(indent=2) }} {{ some_variable | to_nice_yaml(indent=8) }} The to_yaml and to_nice_yaml filters use the PyYAML library which has a default 80 symbol string length limit. That causes unexpected line break after 80th symbol (if there is a space after 80th symbol) To avoid such behavior and generate long lines, use the width option. You must use a hardcoded number to define the width, instead of a construction like float("inf"), because the filter does not support proxying Python functions. For example: {{ some_variable | to_yaml(indent=8, width=1337) }} {{ some_variable | to_nice_yaml(indent=8, width=1337) }} The filter does support passing through other YAML parameters. For a full list, see the PyYAML documentation. If you are reading in some already formatted data: {{ some_variable | from_json }} {{ some_variable | from_yaml }} for example: tasks: - name: Register JSON output as a variable ansible.builtin.shell: cat /some/path/to/file.json register: result - name: Set a variable ansible.builtin.set_fact: myvar: "{{ result.stdout | from_json }}" Filter to_json and Unicode support¶ By default to_json and to_nice_json will convert data received to ASCII, so: {{ 'München'| to_json }} will return: 'M\u00fcnchen' To keep Unicode characters, pass the parameter ensure_ascii=False to the filter: {{ 'München'| to_json(ensure_ascii=False) }} 'München' New in version 2.7. To parse multi-document YAML strings, the from_yaml_all filter is provided. The from_yaml_all filter will return a generator of parsed YAML documents. for example: tasks: - name: Register a file content as a variable ansible.builtin.shell: cat /some/path/to/multidoc-file.yaml register: result - name: Print the transformed variable ansible.builtin.debug: msg: '{{ item }}' loop: '{{ result.stdout | from_yaml_all | list }}' Combining and selecting data¶ You can combine data from multiple sources and types, and select values from large data structures, giving you precise control over complex data. Combining items from multiple lists: zip and zip_longest¶ New in version 2.3. To get a list combining the elements of other lists use zip: - name: Give me list combo of two lists ansible.builtin.debug: msg: "{{ [1,2,3,4,5] | zip(['a','b','c','d','e','f']) | list }}" - name: Give me shortest combo of two lists ansible.builtin.debug: msg: "{{ [1,2,3] | zip(['a','b','c','d','e','f']) | list }}" To always exhaust all lists use zip_longest: - name: Give me longest combo of three lists , fill with X ansible.builtin.debug: msg: "{{ [1,2,3] | zip_longest(['a','b','c','d','e','f'], [21, 22, 23], fillvalue='X') | list }}" Similarly to the output of the items2dict filter mentioned above, these filters can be used to construct a dict: {{ dict(keys_list | zip(values_list)) }} List data (before applying the zip filter): keys_list: - one - two values_list: - apple - orange Dictonary data (after applying the zip filter): one: apple two: orange Combining objects and subelements¶ New in version 2.7. The subelements filter produces a product of an object and the subelement values of that object, similar to the subelements lookup. This lets you specify individual subelements to use in a template. For example, this expression: {{ users | subelements('groups', skip_missing=True) }} Data before applying the subelements filter: users: - name: alice authorized: - /tmp/alice/onekey.pub - /tmp/alice/twokey.pub groups: - wheel - docker - name: bob authorized: - /tmp/bob/id_rsa.pub groups: - docker Data after applying the subelements filter: - - name: alice groups: - wheel - docker authorized: - /tmp/alice/onekey.pub - /tmp/alice/twokey.pub - wheel - - name: alice groups: - wheel - docker authorized: - /tmp/alice/onekey.pub - /tmp/alice/twokey.pub - docker - - name: bob authorized: - /tmp/bob/id_rsa.pub groups: - docker - docker You can use the transformed data with loop to iterate over the same subelement for multiple objects: - name: Set authorized ssh key, extracting just that data from 'users' ansible.posix.authorized_key: user: "{{ item.0.name }}" key: "{{ lookup('file', item.1) }}" loop: "{{ users | subelements('authorized') }}" Combining hashes/dictionaries¶ New in version 2.0. The combine filter allows hashes to be merged. For example, the following would override keys in one hash: {{ {'a':1, 'b':2} | combine({'b':3}) }} The resulting hash would be: {'a':1, 'b':3} The filter can also take multiple arguments to merge: {{ a | combine(b, c, d) }} {{ [a, b, c, d] | combine }} In this case, keys in d would override those in c, which would override those in b, and so on. The filter also accepts two optional parameters: recursive and list_merge. - recursive Is a boolean, default to False. Should the combinerecursively merge nested hashes. Note: It does not depend on the value of the hash_behavioursetting in ansible.cfg. - list_merge Is a string, its possible values are replace(default), keep, append, prepend, append_rpor prepend_rp. It modifies the behaviour of combinewhen the hashes to merge contain arrays/lists. default: a: x: default y: default b: default c: default patch: a: y: patch z: patch b: patch If recursive=False (the default), nested hash aren’t merged: {{ default | combine(patch) }} This would result in: a: y: patch z: patch b: patch c: default If recursive=True, recurse into nested hash and merge their keys: {{ default | combine(patch, recursive=True) }} This would result in: a: x: default y: patch z: patch b: patch c: default If list_merge='replace' (the default), arrays from the right hash will “replace” the ones in the left hash: default: a: - default patch: a: - patch {{ default | combine(patch) }} This would result in: a: - patch If list_merge='keep', arrays from the left hash will be kept: {{ default | combine(patch, list_merge='keep') }} This would result in: a: - default If list_merge='append', arrays from the right hash will be appended to the ones in the left hash: {{ default | combine(patch, list_merge='append') }} This would result in: a: - default - patch If list_merge='prepend', arrays from the right hash will be prepended to the ones in the left hash: {{ default | combine(patch, list_merge='prepend') }} This would result in: a: - patch - default If list_merge='append_rp', arrays from the right hash will be appended to the ones in the left hash. Elements of arrays in the left hash that are also in the corresponding array of the right hash will be removed (“rp” stands for “remove present”). Duplicate elements that aren’t in both hashes are kept: default: a: - 1 - 1 - 2 - 3 patch: a: - 3 - 4 - 5 - 5 {{ default | combine(patch, list_merge='append_rp') }} This would result in: a: - 1 - 1 - 2 - 3 - 4 - 5 - 5 If list_merge='prepend_rp', the behavior is similar to the one for append_rp, but elements of arrays in the right hash are prepended: {{ default | combine(patch, list_merge='prepend_rp') }} This would result in: a: - 3 - 4 - 5 - 5 - 1 - 1 - 2 recursive and list_merge can be used together: default: a: a': x: default_value y: default_value list: - default_value b: - 1 - 1 - 2 - 3 patch: a: a': y: patch_value z: patch_value list: - patch_value b: - 3 - 4 - 4 - key: value {{ default | combine(patch, recursive=True, list_merge='append_rp') }} This would result in: a: a': x: default_value y: patch_value z: patch_value list: - default_value - patch_value b: - 1 - 1 - 2 - 3 - 4 - 4 - key: value Selecting values from arrays or hashtables’]. Combining lists¶ This set of filters returns a list of combined lists. permutations¶ To get permutations of a list: - name: Give me largest permutations (order matters) ansible.builtin.debug: msg: "{{ [1,2,3,4,5] | permutations | list }}" - name: Give me permutations of sets of three ansible.builtin.debug: msg: "{{ [1,2,3,4,5] | permutations(3) | list }}" combinations¶ Combinations always require a set size: - name: Give me combinations for sets of two ansible.builtin.debug: msg: "{{ [1,2,3,4,5] | combinations(2) | list }}" Also see the Combining items from multiple lists: zip and zip_longest products¶ The product filter returns the cartesian product of the input iterables. This is roughly equivalent to nested for-loops in a generator expression. For example: - name: Generate multiple hostnames ansible.builtin.debug: msg: "{{ ['foo', 'bar'] | product(['com']) | map('join', '.') | join(',') }}" This would result in: { "msg": "foo.com,bar.com" } Selecting JSON data: JSON queries¶ To select a single element or a data subset from a complex data structure in JSON format (for example, Ansible facts), use the json_query filter. The json_query filter lets you query a complex JSON structure and iterate over it using a loop structure. Note This filter has migrated to the community.general collection. Follow the installation instructions to install that collection. Note This filter is built upon jmespath, and you can use the same syntax. For examples, see jmespath examples. Consider this ansible.builtin.debug: var: item loop: "{{ domain_definition | community.general.json_query('domain.cluster[*].name') }}" To extract all server names: - name: Display all server names ansible.builtin.debug: var: item loop: "{{ domain_definition | community.general.json_query('domain.server[*].name') }}" To extract ports from cluster1: - ansible.builtin.name: Display all ports from cluster1 debug: var: item loop: "{{ domain_definition | community.general.json_query(server_name_cluster1_query) }}" vars: server_name_cluster1_query: "domain.server[?cluster=='cluster1'].port" Note You can use a variable to make the query more readable. To print out the ports from cluster1 in a comma separated string: - name: Display all ports from cluster1 as a string ansible.builtin.debug: msg: "{{ domain_definition | community.general.json_query('domain.server[?cluster==`cluster1`].port') | join(', ') }}" Note In the example above, quoting literals using backticks avoids escaping quotes and maintains readability. You can use YAML single quote escaping: - name: Display all ports from cluster1 ansible.builtin.debug: var: item loop: "{{ domain_definition | community.general.json_query('domain.server[?cluster==''cluster1''].port') }}" Note Escaping single quotes within single quotes in YAML is done by doubling the single quote. To get a hash map with all ports and names of a cluster: - name: Display all server ports and names from cluster1 ansible.builtin.debug: var: item loop: "{{ domain_definition | community.general.json_query(server_name_cluster1_query) }}" vars: server_name_cluster1_query: "domain.server[?cluster=='cluster2'].{name: name, port: port}" Randomizing data¶ When you need a randomly generated value, use one of these filters. Random MAC addresses¶ New in version 2.6. This filter can be used to generate a random MAC address from a string prefix. Note This filter has migrated to the community.general collection. Follow the installation instructions to install that collection. To get a random MAC address from a string prefix starting with ‘52:54:00’: "{{ '52:54:00' | community.general.random_mac }}" # => '52:54:00:ef:1c:03' Note that if anything is wrong with the prefix string, the filter will issue an error. New in version 2.9. As of Ansible version 2.9, you can also initialize the random number generator from a seed to create random-but-idempotent MAC addresses: "{{ '52:54:00' | community.general.random_mac(seed=inventory_hostname) }}" Random items or numbers¶ The random filter in Ansible is an extension of the default Jinja2 random filter, and can be used to return a random item from a sequence of items or to generate a random number based on a range. To get a random item from a list: "{{ ['a','b','c'] | random }}" # => 'c' To get a random number between 0 and a specified number: "{{ 60 | random }} * * * * root /script/from/cron" # => '21 * * * * root /script/from/cron' To get a random number from 0 to 100 but in steps of 10: {{ 101 | random(step=10) }} # => 70 To get a random number from 1 to 100 but in steps of 10: {{ 101 | random(1, 10) }} # => 31 {{ 101 | random(start=1, step=10) }} # => 51 You can initialize the random number generator from a seed to create random-but-idempotent numbers: "{{ 60 | random(seed=inventory_hostname) }} * * * * root /script/from/cron" Shuffling a list¶ The shuffle filter randomizes an existing list, giving a different order every invocation. To get a random list from an existing list: {{ ['a','b','c'] | shuffle }} # => ['c','a','b'] {{ ['a','b','c'] | shuffle }} # => ['b','c','a'] You can initialize the shuffle generator from a seed to generate a random-but-idempotent order: {{ ['a','b','c'] | shuffle(seed=inventory_hostname) }} # => ['b','a','c'] The shuffle filter returns a list whenever possible. If you use it with a non ‘listable’ item, the filter does nothing. Managing list variables¶ You can search for the minimum or maximum value in a list, or flatten a multi-level list.) }} Selecting from sets or lists (set theory)¶ You can select or combine items from sets or lists. New in version 1.4. To get a unique set from a list: # list1: [1, 2, 5, 1, 3, 4, 10] {{ list1 | unique }} # => [1, 2, 5, 3, 4, 10] To get a union of two lists: # list1: [1, 2, 5, 1, 3, 4, 10] # list2: [1, 2, 3, 4, 5, 11, 99] {{ list1 | union(list2) }} # => [1, 2, 5, 1, 3, 4, 10, 11, 99] To get the intersection of 2 lists (unique list of all items in both): # list1: [1, 2, 5, 3, 4, 10] # list2: [1, 2, 3, 4, 5, 11, 99] {{ list1 | intersect(list2) }} # => [1, 2, 5, 3, 4] To get the difference of 2 lists (items in 1 that don’t exist in 2): # list1: [1, 2, 5, 1, 3, 4, 10] # list2: [1, 2, 3, 4, 5, 11, 99] {{ list1 | difference(list2) }} # => [10] To get the symmetric difference of 2 lists (items exclusive to each list): # list1: [1, 2, 5, 1, 3, 4, 10] # list2: [1, 2, 3, 4, 5, 11, 99] {{ list1 | symmetric_difference(list2) }} # => [10, 11, 99] Calculating numbers (math)¶ New in version 1.9. You can calculate logs, powers, and roots of numbers with Ansible filters. Jinja2 provides other mathematical functions like abs() and round().) }} Managing network interactions¶ These filters help you with common network tasks. Note These filters have migrated to the ansible.netcommon collection. Follow the installation instructions to install that collection. IP address filters¶ New in version 1.9. To test if a string is a valid IP address: {{ myvar | ansible.netcommon.ipaddr }} You can also require a specific IP protocol version: {{ myvar | ansible.netcommon.ipv4 }} {{ myvar | ansible.netcommon.ipv6 }} IP address filter can also be used to extract specific information from an IP address. For example, to get the IP address itself from a CIDR, you can use: {{ '192.0.2.1/24' | ansible.netcommon | ansible.netcommon] | ansible.netcommon | ansible.netcommon XPath Support. Network VLAN filters¶ New in version 2.8. Use the vlan_parser filter to transform an unsorted list of VLAN integers into a sorted string list of integers according to IOS-like VLAN list rules. This list has the following properties: Vlans are listed in ascending order. Three or more consecutive VLANs are listed with a dash. The first line of the list can be first_line_len characters long. Subsequent list lines can be other_line_len characters. To sort a VLAN list: {{ [3003, 3004, 3005, 100, 1688, 3002, 3999] | ansible.netcommon.vlan_parser }} This example renders the following sorted list: ['100,1688,3002-3005,3999'] Another example Jinja template: {% set parsed_vlans = vlans | ansible.netcommon.vlan_parser %} switchport trunk allowed vlan {{ parsed_vlans[0] }} {% for i in range (1, parsed_vlans | count) %} switchport trunk allowed vlan add {{ parsed_vlans[i] }} This allows for dynamic generation of VLAN lists on a Cisco IOS tagged interface. You can store an exhaustive raw list of the exact VLANs required for an interface and then compare that to the parsed IOS output that would actually be generated for the configuration. Encrypting and checksumming strings and passwords) }} Manipulating text¶ Several filters work with text, including URLs, file names, and path names. Adding comments to files¶ The comment filter lets you create comments in a file from text in a template, with a variety of comment styles. By default Ansible uses # to start a comment line and adds a blank comment line above and below your comment text. For example the following: {{ "Plain style (default)" | comment }} produces this output: # # Plain style (default) # Ansible offers styles for comments in C ( //...), C block ( /*...*/), Erlang ( %...) and XML ( <!--...-->): {{ "C style" | comment('c') }} {{ "C block style" | comment('cblock') }} {{ "Erlang style" | comment('erlang') }} {{ "XML style" | comment('xml') }} You can define a custom comment character. This filter: {{ "My Special Case" | comment(decoration="! ") }} produces: ! ! My Special Case ! You can fully customize the comment style: {{ "Custom style" | comment('plain', prefix='#######\n#', postfix='#\n#######\n ###\n #') }} That creates the following output: ####### # # Custom style # ####### ### # The filter can also be applied to any Ansible variable. For example to make the output of the ansible_managed variable more readable, we can change the definition in the ansible.cfg file to this: [defaults] ansible_managed = This file is managed by Ansible.%n template: {file} date: %Y-%m-%d %H:%M:%S user: {uid} host: {host} and then use the variable with the comment filter: {{ ansible_managed | comment }} which produces this output: # # This file is managed by Ansible. # # template: /home/ansible/env/dev/ansible_managed/roles/role1/templates/test.j2 # date: 2015-09-10 11:02:58 # user: ansible # host: myhost # Splitting URLs" # } Searching strings with regular expressions') }} # change a multiline string {{ var | regex_replace('^', '#CommentThis#', multiline=True) }} Note If you want to match the whole string and you are using * make sure to always wraparound your regular expression with the start/end anchors. For example ^(.*)$ will always match only one result, while (.*) on some Python versions will match the whole string and an empty string at the end, which means it will make two replacements: # add "https://" prefix to each item in a list GOOD: {{ hosts | map('regex_replace', '^(.*)$', 'https://\\1') | list }} {{ hosts | map('regex_replace', '(.+)', 'https://\\1') | list }} {{ hosts | map('regex_replace', '^', 'https://') | list }} BAD: {{ hosts | map('regex_replace', '(.*)', 'https://\\1') | list }} # append ':80' to each item in a list GOOD: {{ hosts | map('regex_replace', '^(.*)$', '\\1:80') | list }} {{ hosts | map('regex_replace', '(.+)', '\\1:80') | list }} {{ hosts | map('regex_replace', '$', ':80') | list }} BAD: {{ hosts | map('regex_replace', '(.*)', '\\1:80') | list }} Note Prior to ansible 2.0, if “regex_replace” filter was used with variables inside YAML arguments (as opposed to simpler ‘key=value’ arguments), then you needed to escape backreferences (for example, \\1) with 4 backslashes ( \\\\) instead of 2 ( \\). New in version 2.0. To escape special characters within a standard Python regex, use the “regex_escape” filter (using the default re_type=’python’ option): # convert '^f.*o(.*)$' to '\^f\.\*o\(\.\*\)\$' {{ '^f.*o(.*)$' | regex_escape() }} New in version 2.8. To escape special characters within a POSIX basic regex, use the “regex_escape” filter with the re_type=’posix_basic’ option: # convert '^f.*o(.*)$' to '\^f\.\*o(\.\*)\$' {{ '^f.*o(.*)$' | regex_escape('posix_basic') }} Managing file names and path names¶ file name (new in version 2.0): # with path == 'nginx.conf' the return would be ('nginx', '.conf') {{ path | splitext }} The splitext filter returns a string. The individual components can be accessed by using the first and last filters: # with path == 'nginx.conf' the return would be 'nginx' {{ path | splitext | first }} # with path == 'nginx.conf' the return would be 'conf' {{ path | splitext | last }} To join one or more path components: {{ ('/etc', path, 'subdir', file) | path_join }} New in version 2.10. Manipulating strings¶ To add quotes for shell usage: - name: Run a shell command ansible.builtin.shell: echo {{ string_value | quote }} To concatenate a list into a string: {{ list | join(" ") }} To work with Base64 encoded strings: {{ encoded | b64decode }} {{ decoded | string | b64encode }} As of version 2.6, you can define the type of encoding to use, the default is utf-8: {{ encoded | b64decode(encoding='utf-16-le') }} {{ decoded | string | b64encode(encoding='utf-16-le') }} Note The string filter is only required for Python 2 and ensures that text to encode is a unicode string. Without that filter before b64encode the wrong value will be encoded. New in version 2.6. Managing UUIDs¶ To create a namespaced UUIDv5: {{ string | to_uuid(namespace='11111111-2222-3333-4444-555555555555') }} New in version 2.10. To create a namespaced UUIDv5 using the default Ansible namespace ‘361E6D51-FAEC-444A-9079-341386DA8E2E’: {{ string | to_uuid }} New in version 1.9. To make use of one attribute from each item in a list of complex variables, use the Jinja2 map filter: # get a comma-separated list of the mount points (for example, "/,/mnt/stuff") on a host {{ ansible_mounts | map(attribute='mount') | join(',') }} Handling dates and times¶ To get a date object from a string use the to_datetime filter: #, and so on Getting Kubernetes resource names¶ Note These filters have migrated to the community.kubernetes collection. Follow the installation instructions to install that collection. Use the “k8s_config_resource_name” filter to obtain the name of a Kubernetes ConfigMap or Secret, including its hash: {{ configmap_resource_definition | community.kubernetes.k8s_config_resource_name }} This can then be used to reference hashes in Pod specifications: my_secret: kind: Secret name: my_secret_name deployment_resource: kind: Deployment spec: template: spec: containers: - envFrom: - secretRef: name: {{ my_secret | community.kubernetes.k8s_config_resource_name }} New in version 2.8. See also - Intro to playbooks An introduction to playbooks - Conditionals Conditional statements in playbooks - Using Variables All about variables - Loops Looping in playbooks - Roles Playbook organization by roles - Tips and tricks Tips and tricks for playbooks - User Mailing List Have a question? Stop by the google group! - irc.freenode.net #ansible IRC chat channel
https://docs.ansible.com/ansible/latest/user_guide/playbooks_filters.html
CC-MAIN-2021-04
refinedweb
4,656
51.38
#1) After creating a module you should be able to publish the object directly into the module's scope. The normal CreateModule APIs allow you to provide a dictionary which could already contain this object. Unfortunately all the objects members won't be magically available, you'll need to access the members through the object. You could execute some python code such as: import sys mymod = sys.modules[__name__] for x in dir(myobj): setattr(mymod, x, getattr(myobj, x)) which will copy all the members from the object into the module (you could also do the equivalent from C# using Ops.GetAttrNames, Ops.GetAttr, and Ops.SetAttr). #2 - yes, to do this you just need to add a reference to the assembly and import the type from the correct namespace. E.g.: import clr clr.AddReference('System.Windows.Forms') from System.Windows.Forms import Form class MyForm(Form): pass a = MyForm() #3 - There's a couple of ways to do this. You could use the CreateMethod / CreateLambda APIs which will return a strongly typed delegate w/ some code you can later call. You can also use the CompiledCode method and the Execute method will re-run it. The overload that takes an EngineModule allows you to execute the code multiple times with a different set of globals. -----Original Message----- From: users-bounces at lists.ironpython.com [mailto:users-bounces at lists.ironpython.com] On Behalf Of Ori Sent: Wednesday, August 01, 2007 5:08 AM To: users at lists.ironpython.com Subject: [IronPython] Some questions about Iron-Python Hello, I'm new to IronPython and I have a few qusetions: 1. Is there an option to add a context object to the compiled module (so all properties and methods from the object will be accesible) 2. Can a python class inherit from a class located in some referenced .net assembly? 3. Can I save compiled code in memory and use it to evaluate code multiple times? I've seen there is a CompiledCode object but I saw only an Execute method. Thanks, Ori -- View this message in context: Sent from the IronPython mailing list archive at Nabble.com. _______________________________________________ Users mailing list Users at lists.ironpython.com
https://mail.python.org/pipermail/ironpython-users/2007-August/005378.html
CC-MAIN-2018-05
refinedweb
368
65.32