text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringclasses
91 values
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
Mozilla Sweet.js: Extending JavaScript with Macros - | - - - - - - Read later Reading List Mozilla Sweet.js enables developers to enrich JavaScript by adding new syntax to the language through the use of macros. This helps developers to customize the JavaScript syntax for their style, or to expand it by creating a new JavaScript-based DSL useful for their niche domain. Sweet.js provides the possibility to define hygienic macros inspired by Scheme and Rust with the macro keyword. A simple example is replacing the keyword function with a shorter one, def: macro def { case $name:ident $params $body => { function $name $params $body } } Now, functions can be defined using def: def add (a, b) { return a + b; } A more interesting example is introducing the keyword class: macro class { case $className:ident { constructor $constParam $constBody $($methodName:ident $methodParam $methodBody) ... } => { function $className $constParam $constBody $($className.prototype.$methodName = function $methodName $methodParam $methodBody; ) ... } } An example of using class: class Person { constructor(name) { this.name = name; } say(msg) { console.log(this.name + " says: " + msg); } } var bob = new Person("Bob"); bob.say("Macros are sweet!"); More macro examples can be found on Mozilla/Sweet.js wiki in GitHub where the source code can also be found under a BSD license. Sweet.js files containing macros are compiled into pure JavaScript files without any foreign syntax with sjs. require-sweet provides an AMD loader and there is a SweetJS gem for compiling Sweet.js files from Ruby. Sweet.js currently supports declarative macro definitions, but plans are to support imperative definitions in the future, according to Tim Disney from Mozilla Research. That means macros could contain arbitrary JavaScript code that is run at compile time. Robert Barth Re: Ugh by Russell Leggett
https://www.infoq.com/news/2012/10/Mozilla-Sweetjs
CC-MAIN-2017-39
refinedweb
282
57.67
Here is a tidbit that Python programmers should be aware of (credit to). Although the name list suggests linked list (which has O(1) insert/delete), Lists are implemented as resizable vectors, so insertions & deletions can be O(N). Use collections.deque for O(1) insertions and removals. import collections import copy aList = list(xrange(10000)) aDeque = collections.deque(xrange(10000)) def test1(): a = copy.copy(aList) for x in xrange(len(a)): del a[0] def test2(): a = copy.copy(aDeque) for x in xrange(len(a)): del a[0] python -m timeit -s"import test" "test.test1()" 100 loops, best of 3: 11.4 msec per loop python -m timeit -s"import test" "test.test2()" 1000 loops, best of 3: 587 usec per loop
https://davidaneiss.wordpress.com/
CC-MAIN-2017-39
refinedweb
126
69.68
Simple, asynchronous audio playback for Python 3. Project description The simplaudio package provides cross-platform, dependency-free audio playback capability for Python 3 on OSX, Windows, and Linux. MIT Licensed. Installation Installation (make sure the pip command is the right one for your platform and Python version): pip install simpleaudio See documentation for additional installation information. Quick Function Check import simpleaudio.functionchecks as fc fc.LeftRightCheck.run() See documentation for more on function checks. Simple Example import simpleaudio as sa wave_obj = sa.WaveObject.from_wave_file("path/to/file.wav") play_obj = wave_obj.play() play_obj.wait_done() Support For usage and how-to questions, first checkout the tutorial in the documentation. If you’re still stuck, post a question on StackOverflow and tag it ‘pysimpleaudio’. For bug reports, please create an issue on Github . Big Thanks To … Jonas Kalderstam Christophe Gohlke Tom Christie Many others for their contributions, documentation, examples, and more. Project details Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/simpleaudio/1.0.2/
CC-MAIN-2020-05
refinedweb
171
51.75
Odoo Help Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps: CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc. Functional field output before save and Editable field?. In openerp 7 Is it possible to functional field output shows before save the records and Editable functional field? I think yes. Example on_change method:. link text Just add any field for output in your result. def custom_function(self, cr, uid, ids, field_name_from_xml, context=None): # work with input data #... your result in custom_addr custom_addr = 'Barcelona' return { 'value': { 'address': custom_addr # adress is must in your xml declaration in this case } } About This Community Odoo Training Center Access to our E-learning platform and experience all Odoo Apps through learning videos, exercises and Quizz.Test it now
https://www.odoo.com/forum/help-1/question/functional-field-output-before-save-and-editable-field-1232
CC-MAIN-2018-05
refinedweb
132
58.89
- Type: Improvement - Status: Open (View Workflow) - Priority: Major - Resolution: Unresolved - Component/s: blueocean-plugin - Labels:None The api to load steps is paginated. Currently only the first page is loaded. When a user clicks on a node, it should load and display the steps using this pagination (ie keep loading and appending until there are no more). This should be able to generate loads of steps I think:' } - relates to JENKINS-41897 Karaoke: steps and nodes are limited to 100 - need to increase limit - Closed JENKINS-39770 Pipeline visualization not rendered when there is more that 100 nodes - Closed Is this something still looked into? I see the nodes api being called with a limit parameter of 10000, the steps api does not have such parameter, and it maxes out at 100. {code:java} pipeline { agent { label any } stages { stage('200 Steps'){ steps { script { def range = 1..200 range.each { n -> println n } } } } } } {code} Call for nodes API: Call for steps API: Is there a way I can send the limits parameter to the steps API too. Or, I need to change here: Also is there a reason why this limit is not being passed to steps/ endpoint ?
https://issues.jenkins-ci.org/browse/JENKINS-42781?focusedCommentId=377326&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel
CC-MAIN-2020-10
refinedweb
198
64.44
Published: 05/31/2018, Last Updated: 05/31. In the current Unity workflow, you: Let's call this the Classic Unity workflow. There are some inherent drawbacks and performance considerations for this way of doing things. For one, data and processing are tightly coupled. This means that code reuse can happen less frequently as processing is tied to a very specific set of data. On top of this, the classic system is very dependent on reference types. In the Classic GameObject and Components example shown below, the Bullet GameObject is dependent on the Transform, Renderer, Rigidbody, and Collider references. Objects being referenced in these performance-critical scripts exist scattered in heap memory. As a result of this, data is not transformed into a form that can be operated on by the faster SIMD vector units. Figure 1. Classic gameobject and components lists. Accessing data from system memory is far slower than pulling data from a nearby cache. That is where prefetching comes in. Cache prefetching is when computer hardware predicts what data will be accessed next, and then preemptively pulls it from the original, slower memory into faster memory so that it is warmed and ready when it's needed. Using this, hardware gets a nice performance boost on predictive computations. If you are iterating over an array, the hardware prefetch unit can learn to pull swaths of data from system memory into the cache. When it comes time for the processor to operate on the next part of the array, the necessary data is sitting close by in the cache and ready to go. For tightly packed contiguous data, like you'd have in an array, it's easy for the hardware prefetcher to predict and get the right objects. When many different game objects are sparsely allocated in heap memory, it becomes impossible for the prefetcher to do its thing, forcing it to fetch useless data. Figure 2. Scattered memory references between gameobjects, their behaviors, and their components. The illustration above shows the random sporadic nature of this data storage method. With the scenario shown above, every single reference (arrows)—even if cached as a member variable—could potentially pull all the way from system memory. The Classic Unity GameObject scenario can get your game prototyped and running in a very short timeline, but it's hardly ideal for performance-critical simulations and games. To deepen the issue, each of those reference types contain a lot of extra data that might not need to be accessed. These unused members also take up valuable space in processor caches. If only a select few member variables of an existing component are needed, the rest can be considered wasted space, as shown in the Wasted Space illustration below: Figure 3. The items in bold indicate the members that are actually used for the movement operation; the rest is wasted space. To move your GameObject, the script needs to access the position and rotation data members from the Transform component. When your hardware is fetching data from memory, the cache line is filled with much potentially useless data. Wouldn't it be nice if you could simply have an array of only position and rotation members for all of the GameObjects that are supposed to move? This will enable you to perform the generic operation in a fraction of the time. Unity's new entity component system helps eliminate inefficient object referencing. Instead of GameObjects with their own collection of components, let's consider an entity that only contains the data it needs to exist. In the Entity Component System with Jobs Diagram below, notice that the Bullet entity has no Transform or Rigidbody component attached to it. The Bullet entity is just the raw data needed explicitly for your update routine to operate on. With this new system, you can decouple the processing completely from individual object types. Figure 4. Entity component system with jobs diagram. Of course, it's not just movement systems that benefit from this. Another common component in many games are more complex health systems set up across a wide variety of enemies and allies. These systems typically have little to no variation between object types, so they are another great candidate to leverage the new system. An entity is a handle used to index a collection of different data types that represent it (archetypes for ComponentDataGroups). Systems can filter and operate on all components with the required data without any help from the programmer; more on this later. The data is all efficiently organized in tightly packed contiguous arrays and filtered behind the scenes without the need to explicitly couple systems with entity types. The benefits of this system are immense. Not only does it improve access times with cache efficiency; it also allows advanced technologies (auto-vectorization / SIMD) available in modern CPUs that require this kind of data alignment to be used. This gives you performance by default with your games. You can do much more every frame or do the same thing in a much shorter amount of time. You'll also get a huge performance gain from the upcoming Burst compiler feature for free. Figure 5. Note the fragmentation in cache line storage and wasted space generated by the classic system. See image below for data comparison. Figure 6. Compare the memory footprint associated with a single move operation with both accomplishing the same goal. The Burst compiler is the behind-the-scenes performance gain that results from the entity component system having organized your data more efficiently. Essentially, the burst compiler will optimize operations on code depending on the processor capabilities on your player's machine. For instance, instead of doing just 1 float operation at a time, maybe you can do 16, 32, or 64 by filling unused registers. The new compiler technology is employed on Unity's new math namespace and code within the C# job system (described below), relying on the fact that the system knows data has been set up the proper way with the entity component system. The current version for Intel CPUs supports Intel® Streaming SIMD Extensions 4 (Intel® SSE4), Intel® Advanced Vector Extensions 2 (Intel® AVX2), and Intel® Advanced Vector Extensions 512 (Intel® AVX-512) for float and integer. The system also supports different accuracy per method, applied transitively. For example, if you are using a cosine function inside your top-level method with a low accuracy, the whole method will use a low accuracy version of cosine as well. The system also provides for AOT (Ahead-of-Time) compilation with dynamic selection of proper optimized function based on the feature support of the processor currently running the game. Another benefit to this method of compilation is the future-proofing of your game. If a brand-new processor line comes out to market with some amazing new features to be leveraged, Unity can do all of the hard work for you behind the scenes. All it takes is an upgrade to the compiler to reap the benefits. The compiler is package-based and can be upgraded without requiring a Unity editor update. Since the Burst package will be updated at its own cadence, you will be able to take advantage of the latest hardware architectural improvements and features without having to wait for the code to be rolled into the next editor release. Most people who have worked with multi-threaded code and generic tasking systems know that writing thread-safe code is difficult. Race conditions can rear their ugly heads in extremely rare cases. If the programmer hasn't thought of them, the result can be potentially critical bugs. On top of that, context-switching is expensive, so learning how to balance workloads to function as efficiently as possible across cores is difficult. Finally, writing SIMD optimized code or SIMD intrinsics is an esoteric skill, sometimes best left to a compiler. The new Unity C# job system takes care of all of these hard problems for you so that you can use all of the available cores and SIMD vectorization in modern CPUs without the headache. Figure 7. C# job system diagram. Let's look at a simple bullet movement system, for example. Most game programmers have written a manager for some type of GameObject as shown above in the Bullet Manager. Typically, these managers pool a list of GameObjects and update the positions of all active bullets in the scene every frame. This is a good use for the C# job system. Because movement can be treated in isolation, it is well suited to be parallelized. With the C# job system, you can easily pull this functionality out and operate on different chunks of data on different cores in parallel. As the developer, you don't have to worry about managing this work distribution; you only need to focus entirely on your game-specific code. You'll see how to easily do this in a bit. Combing the entity component system and the C# job system gives you a force more powerful than the sum of its parts. Since the entity component system sets up your data in an efficient, tightly packed manor, the job system can split up the data arrays so that they can be efficiently operated on in parallel. Also, you get some major performance benefits from cache locality and coherency. The thin as-needed allocation and arrangement of data increases the chance that the data your job will need will be in shared memory before it's needed. The layout and job system combination beget predictable access patterns that give the hardware cues to make smart decisions behind the scene, giving you great performance. "OK!" You are saying, "This is absolutely amazing, but how do I use this new system?" To help get your feet wet, let's compare and contrast the code involved in a very simple game that uses the following programming systems: Here's how the game works: Test Configuration: The Classic system checks each frame for spacebar input and triggers the AddShips() method. This method finds a random X/Z position between the left and right sides of the screen, sets the rotation of the ship to point downward, and spawns a ship prefab at that location. void Update() { if (Input.GetKeyDown("space")) AddShips(enemyShipIncremement); } void AddShips(int; } } Code sample showing how to add ships using the classic system Figure 8. Classic ship prefab. (source: Unity.com Asset store battleships package). The ship object spawned, along with each of its components, are created in heap memory. The movement script attached accesses the transform component every frame and updates the position, making sure to stay between the bottom and top bounds of the screen. Super simple! using UnityEngine; namespace Shooter.Classic { public class Movement : MonoBehaviour { void Update() { Vector3 pos = transform.position; pos += transform.forward * GameManager.GM.enemySpeed * Time.deltaTime; if (pos.z < GameManager.GM.bottomBound) pos.z = GameManager.GM.topBound; transform.position = pos; } } } Code sample showing move behavior The graphic below shows the profiler tracking 16,500 objects on the screen at once. Not bad, but we can do better! Keep on reading. Figure 9. After some initializations, the profiler is already tracking 16,500 objects on the screen at 30 FPS. Figure 10. Classic performance visualization. Looking at the BehaviorUpdate() method, you can see that it takes 8.67 milliseconds to complete the behavior update for all ships. Also note that this is all happening on the main thread. In the C# job system, that work is split among all available cores. using Unity.Jobs; using UnityEngine; using UnityEngine.Jobs; namespace Shooter.JobSystem { [ComputeJobOptimization] public struct MovementJob : IJobParallelForTransform { public float moveSpeed; public float topBound; public float bottomBound; public float deltaTime; public void Execute(int index, TransformAccess transform) { Vector3 pos = transform.position; pos += moveSpeed * deltaTime * (transform.rotation * new Vector3(0f, 0f, 1f)); if (pos.z < bottomBound) pos.z = topBound; transform.position = pos; } } } Sample code showing job movement implementation using the C# Job System Our new MovementJob script is a struct that implements one of the IJob interface variants. This self-contained structure defines a task, or "job", and the data needed to complete that task. It is this structure that we will schedule with the Job System. For each ship's movement and bounds-checking calculations, you know you need the movement speed, the top bound, bottom bound, and the delta time values. The job has no concept of delta time, so that data must be provided explicitly. The calculation logic itself for the new position is the same as the classic system, although assigning that data back to the original transform must be updated via the TransformAccess parameter since reference types (such as Transform) don't work here. The basic requirements to create a job involve implementing one of the IJob interface variants, such as IJobParallelForTransform in the above example, and implementing the Execute method specific to your job. Once created, this job struct can simply be passed into the Job Scheduler. From there, all of the execution and resulting processing will be completed for you. To learn more about how this job is structured, let's break down the interface it is using: IJob | ParallelFor | Transform. IJob is the basic interface that all IJob variants inherit from. A Parallel For Loop is a parallel pattern that essentially takes a typical single threaded for loop and splits the body of work into chunks based on index ranges to be operated on within different cores. Last but not least, the Transform keyword indicates that the Execute function to implement will contain the TransformAccess parameter to supply movement data to external Transform references. To conceptualize all of these, think of an array of 800 elements that you iterate over in a regular for loop. What if you had an 8-core system and each core could do the work for 100 entities automagically? A-ha! That's exactly what the system will do. Figure 11. Using Jobs speeds up the iteration task significantly. The Transform keyword on the end of the interface name simply gives us the TransformAccess parameter for our Execute method. For now, just know each ship's individual transform data is passed in for each Execute invocation. Now let's look at the AddShips() and Update() method in our game manager to see how this data is set every frame. using UnityEngine; using UnityEngine.Jobs; namespace Shooter.JobSystem { public class GameManager : MonoBehaviour { // ... // GameManager classic members // ... TransformAccessArray transforms; MovementJob moveJob; JobHandle moveHandle; // ... // GameManager code // ... } } Code sample showing required variables to set up and track jobs Right away, you notice that you have some new variables that you need to keep track of: void Update() { moveHandle.Complete(); if (Input.GetKeyDown("space")) AddShips(enemyShipIncremement); moveJob = new MovementJob() { moveSpeed = enemySpeed, topBound = topBound, bottomBound = bottomBound, deltaTime = Time.deltaTime }; moveHandle = moveJob.Schedule(transforms); JobHandle.ScheduleBatchedJobs(); } void AddShips(int amount) { moveHandle.Complete(); transforms.capacity = transforms.length +; transforms.Add(obj.transform); } } Code sample showing C# Job System + Classic Update() and AddShips() implementations Now you need to keep track of our job and make sure that it completes and reschedules with fresh data each frame. The moveHandle.Complete() line above guarantees that the main thread doesn't continue execution until the scheduled job is complete. Using this job handle, the job can be prepared and dispatched again. Once moveHandle.Complete() returns, you can proceed to update our MovementJob with fresh data for the current frame and then schedule the job to run again. While this is a blocking operation, it prevents a job from being scheduled while the old one is still being performed. Also, it prevents us from adding new ships while the ships collection is still being iterated on. In a system with many jobs we may not want to use the Complete() method for that reason. When you schedule MovementJob at the end of Update(), you also pass it the list of all the transforms to be updated from ships, accessed through the TransformAccessArray. When all jobs have completed setup and schedule, you can dispatch all jobs using the JobHandle.ScheduleBatchedJobs() method. The AddShips() method is similar to the previous implementation with a few small exceptions. It double-checks that the job has completed in the event the method is called from somewhere else. That shouldn't happen, but better safe than sorry! Also, it saves off a reference to the newly spawned transforms in the TransformAccessArray member. Let's see how the work distribution and performance look. Figure 12. Using the C# Job System, we can nearly double the number of objects on the screen from the classic system in the same frame time (~33 ms). Figure 13. C# job system + classic Profiler View. Now you can see that the Movement and UpdateBoundingVolumes jobs are taking about 4 ms per frame. Much better! Also note that there are nearly double the number of ships on the screen as the classic system! We can still do better, however. This current method is still limited by a few things: This is where things get just a little bit more complex, but once you understand it you'll know it forever. Let's tackle this by looking at our new enemy ship prefab first: Figure 14. C# job system + Entity Component System ship prefab. You'll probably notice a few new things. One, there are no built-in Unity components attached, aside from the Transform component (which isn't used). This prefab now represents a template that we will use to generate entities rather than a GameObject with components. The idea of a prefab doesn't exactly apply to the new system in the same way you are used to. You can look at it as a convenient container of data for your entity. This could all be done purely in script as well. You also now have a GameObjectEntity.cs script attached to the prefab. This required component signifies that this GameObject will be treated like an entity and use the new entity component system. You see that the object now also contains a RotationComponent, a PositionComponent, and a MoveSpeedComponent. Standard components such as position and rotation are built-in and don't need to be explicitly created, but MoveSpeed does. On top of that, we have a MeshInstanceRendererComponent, which exposes a public member a material reference that supports GPU instancing, which is required for the new entity component system. Let's see how these tie into the new system. using System; using Unity.Entities; namespace Shooter.ECS { [Serializable] public struct MoveSpeed : IComponentData { public float Value; } public class MoveSpeedComponent : ComponentDataWrapper<MoveSpeed> { } } Code sample showing how to set up MoveSpeed data (IComponentData) for the Entity Component System When you open one of these data scripts, you see that each structure inherits from IComponentData. This flags the data as a type to be used and tracked by the entity component system and allows the data to be allocated and packed in a smart way behind the scenes while you get to focus purely on your gameplay code. The ComponentDataWrapper class allows you to expose this data to the inspector window of the prefab it's attached to. You can see that the data you've associated with this prefab represents only the parts of the Transform component required for basic movement (position and rotation) and the movement speed. This is a clue that you won't be using Transform components in this new workflow. Let's now look at the new version of the GameplayManager script: using Unity.Collections; using Unity.Entities; using Unity.Mathematics; using Unity.Transforms; using UnityEngine; namespace Shooter.ECS { public class GameManager : MonoBehaviour { EntityManager manager; void Start() { manager = World.Active.GetOrCreateManager<EntityManager>(); AddShips(enemyShipCount); } void Update() { if (Input.GetKeyDown("space")) AddShips(enemyShipIncremement); } void AddShips(int amount) { NativeArray<Entity> entities = new NativeArray<Entity>(amount, Allocator.Temp); manager.Instantiate(enemyShipPrefab, entities); for (int i = 0; i < amount; i++) { float xVal = Random.Range(leftBound, rightBound); float zVal = Random.Range(0f, 10f); manager.SetComponentData(entities[i], new Position { Value = new float3(xVal, 0f, topBound + zVal) }); manager.SetComponentData(entities[i], new Rotation { Value = new quaternion(0, 1, 0, 0) }); manager.SetComponentData(entities[i], new MoveSpeed { Value = enemySpeed }); } entities.Dispose(); } } } Code sample showing C# Job System + Entity Component System Update() and AddShips() implementations We've made a few changes to enable the entity component system to use the script. Notice you now have an EntityManager variable. You can think of this as your conduit for creating, updating, or destroying entities. You'll also notice the NativeArray<Entity> type constructed with the amount of ships to spawn. The manager's instantiate method takes a GameObject parameter and the NativeArray<Entity> setup that specifies how many entities to instantiate. The GameObject passed in must contain the previously mentioned GameObjectEntity script along with any needed component data. The EntityManager creates entities based off of the data components on the prefab while never actually creating or using any GameObjects. After you create entities, iterate through all of them and set each new instance's starting data. This example sets the starting position, rotation, and movement speed. Once that's done, the new data containers, while secure and powerful, must be freed to prevent memory leaks. The movement system can now take over the show. using Unity.Collections; using Unity.Entities; using Unity.Jobs; using Unity.Mathematics; using Unity.Transforms; using UnityEngine; namespace Shooter.ECS { public class MovementSystem : JobComponentSystem { [ComputeJobOptimization] struct MovementJob : IJobProcessComponentData<Position, Rotation, MoveSpeed> { public float topBound; public float bottomBound; public float deltaTime; public void Execute(ref Position position, [ReadOnly] ref Rotation rotation, [ReadOnly] ref MoveSpeed speed) { float3 value = position.Value; value += deltaTime * speed.Value * math.forward(rotation.Value); if (value.z < bottomBound) value.z = topBound; position.Value = value; } } + Entity Component MovementSystem implementation Here's the meat and potatoes of the demo. Once entities are set up, you can isolate all relevant movement work to your new MovementSystem. Let's cover each new concept from the top of the sample code to the bottom. The MovementSystem class inherits from JobComponentSystem. This base class gives you the callbacks you need to implement, such as OnUpdate(), to keep all of the system-related code self-contained. Instead of having an uber-GameplayManager.cs, you can perform system-specific updates in this neat package. The idea of JobComponentSystem is to keep all data and lifecycle management contained in one place. <ECS/ECS_MovementJobStruct.cs> The MovementJob structure encapsulates all information needed for your job, including the per-instance data, fed in via parameters in the Execute function, and per-job data via member variables that are refreshed via OnUpdate(). Notice that all per-instance data is marked with the [ReadOnly] attribute except the position parameter. That is because in this example we are only updating the position each frame. The rotation and movement speed of each ship entity is fixed for its lifetime. The actual Execute function contains the code that operates on all of the required data. You may be wondering how all of the position, rotation, and movement speed data is being fed into the Execute function invocations. This happens automatically for you behind the scene. The entity component system is smart enough to automatically filter and inject data for all entities that contain the IComponentData types specified as template parameters to IJobProcessComponentData. using Unity.Collections; using Unity.Entities; using Unity.Jobs; using Unity.Mathematics; using Unity.Transforms; using UnityEngine; namespace Shooter.ECS { public class MovementSystem : JobComponentSystem { // ... // Movement Job // ... OnUpdate() method implementation The OnUpdate() method below MovementJob is also new. This is a virtual function provided by JobComponentSystem so that you can more easily organize per-frame setup and scheduling within the same script. All it's doing here is: Voila! Our job is set up and completely self-contained. The OnUpdate() function will not be called until you first Instantiate entities containing this specific group of data components. If you decided to add some asteroids with the same movement behavior, all you would need to do is add those three same Component scripts containing the data types on the representative GameObject that you instantiate. The important thing to know here is that the MovementSystem doesn't care what the entity is that it's operating on. It only cares if the entity contains the types of data it cares about. There are also mechanisms available to help control life cycle. Figure 15. Running at the same frame time of ~33 ms, we can now have 91,000 objects on screen at once using the entity component system. Figure 16. With no dependencies on classic systems, the entity component system can use the available CPU time to track and update more objects. As you can see in the profiler window above, you've now lost the transform update method that was taking up quite a bit of time on the main thread in the C# job system and Classic combo section shown above. This is because we are completely bypassing the TransformArrayAccess conduit we had previously and directly updating position and rotation information in MovementJob and then explicitly constructing our own matrix for rendering. This means there is no need to write back to a traditional Transform component. Oh yeah, and we've forgotten about one tiny detail the Burst compiler. Now, we'll take exactly the same scene, do absolutely nothing to the code beyond keeping the [ComputeJobOptimization] attribute above our job structure to allow the Burst compiler to pick up the job and we'll get all these benefits. Just make sure that the Use Burst Jobs setting is selected in the Jobs dropdown window shown below. Figure 17. The dropdown allowing the use of Burst Jobs. Figure 18. By simply allowing Burst Jobs to optimize jobs with the [ComputeJobOptimization] attribute, we go from 91,000 objects on screen at once up to 150,000 with much higher potential. Figure 19. In this simple example, the total time to complete all MovementJob and UpdateRotTransTransform tasks went from 25 ms down to only 2 ms completion time. We can now see that the bottleneck has shifted from the CPU to the GPU, as the cost of rendering all of these tiny ships on the GPU now outweighs the cost of tracking, updating, and render command generation / dispatch on the CPU side. As you can see from the screenshot, we've got 59,000 more entities on screen at the same exact frame rate. For FREE. That's because the Burst compiler was able to perform some arcane magic on the code in the Execute() function, leveraging new tightly packed data layout and the latest architecture enhancements available on modern CPUs behind the scenes. As mentioned above, this arcane magic actually takes the form of auto-vectorization, optimized scheduling, and better use of instruction-level parallelism to reduce data dependencies and stalls in the pipeline. Take a few days to soak in all of these new concepts and they'll pay dividends on subsequent projects. The power saved through the powerful gains reaped in these new systems are a currency that can be spent or saved. Table 1. Optimizations resulted in significant improvements, such as the number of objects supported on the screen and update costs. If you're targeting a mobile platform and want to significantly reduce the battery consumption factor in player retention, just take the gains and save them. If you're making a high-end desktop experience catering to the PC master race, use those gains to do something special with cutting edge simulation or destruction tech to make your environments more dynamic, interactable, and immersive. Stand on the shoulders of giants and leverage this revolutionary new tech to do something previously claimed impossible in real time then put it on the asset store so I can use it. Thanks for reading. Stay tuned for more samples from Unity—watch this space! Unity Entity Component System Documentation Unity Entity Component System Samples Unity Entity Component System Forums Learning about efficient memory layout
https://software.intel.com/content/www/us/en/develop/articles/get-started-with-the-unity-entity-component-system-ecs-c-sharp-job-system-and-burst-compiler.html?cid=em-elq-43168&utm_source=elq&utm_medium=email&utm_campaign=43168&elq_cid=1717881
CC-MAIN-2020-29
refinedweb
4,666
55.03
Pythonista AirCode Hey guys! ShadowSlayer here. I've noticed that these is no AirCode function in Pythonista... So I made a simple app :). It has:<br> GUI Client (Computer)<br> Simple server (iOS)<br> GUI Client uses wxPython toolkit. Basically, it allows you to edit files on your computer and then save them to Pythonista. If you find any bugs, please report them to my email! [Website][] [Website]: Error 404 @Dann, sorry for this one, I was working on my website, and probably modified the file. It is fixed now i am interested in this, but its offline again... why not use github? @siddbudd, well fixed it again. Now I have link pointing straight to the website. And yeah, I will probably create github for this project thx SS :) I don't yet know how to use it though, but I will try to figure out... the client on Windows runs just fine after installing wxPy, but the server on my iPad seems to have some problem. First of all, its unstoppable, after launching it, I cannot quit it, nothing happens when tapping on the x button, I have to force quit Pythonista. Also cannot get connection to it from the client. @siddbudd, Well, I am working on the code. I plan adding select so you can stop the script any time. And to connect to the server, you have to get it's local IP address (like 192.168.0.xxx), and then in the Connect window enter IP:port (default port is 28899). Good luck with it XD. P. S. Here is example how to find local IP address You have to start server.py on the computer, and then client.py on the iDevice. import socket s = socket.socket() s.bind(("0.0.0.0", 8080)) c, a = s.accept() print(a) c.close() s.close() import socket socket.create_connection(("your-computer-local-ip", 8080)).close()
https://forum.omz-software.com/topic/603/pythonista-aircode/2
CC-MAIN-2018-47
refinedweb
319
77.43
On This Page SMS 2003 does not require any computers to be domain controllers. In fact, it is more secure not to install any site systems on a domain controller, though it is still possible if necessary. Domain membership in SMS is required, but none of the site systems require domain controller functionality or installation in the domain controller. For more information about security considerations for site and hierarchy design, see Scenarios and Procedures for Microsoft Systems Management Server 2003: Security on Microsoft TechNet. Table 1 Site System Requirements Client Access Point To be a client access point (CAP), the site system that you want to configure as a CAP must have at least one NTFS partition available. SMS 2003 does not support CAPs on non-NTFS partitions. Distribution point To use the Background Intelligent Transfer Service (BITS), the site server and distribution point must have Microsoft Internet Information Services (IIS) installed and enabled. You must also enable WebDAV extensions for IIS for Windows Server 2003. IIS is not required if the distribution point will not be BITS-enabled. Management point To be a management point, the system must have IIS installed and enabled, and run at least Windows 2000 SP3. A management point on Windows Server 2003 must also have BITS enabled. If you do not enable IIS and BITS in Windows first, enabling a Windows Server 2003 as a management point fails. The Task Scheduler and Distributed Transaction Coordinator (DTC) services must be enabled. On Windows Server 2003 domain controllers, the Task Scheduler service is disabled by default. Management points require NTFS partitions. Reporting point To be a reporting point, the site system server must have IIS installed and enabled. Microsoft Internet Explorer 5.01 SP2 or later must be installed on any server or client that uses Report Viewer. To use graphs in the reports, Office Web Components (Microsoft Office 2000 SP2 or Microsoft Office XP) must be installed. The reporting point also requires that Active Server Pages be enabled. Note: ASP is not enabled by default on IIS in Windows Server 2003. Server locator point To be a server locator point, the site system server must have IIS installed and enabled. It is strongly recommended that all server computers with an SMS site server role have only NTFS partitions, and no FAT partitions. You should not assign any SMS server role to servers which have non-NTFS partitions. When IIS is required for an SMS site system role, SMS 2003 uses the default website. For more information about system requirements and supported platforms, see the "Getting Started" section in the Microsoft Systems Management Server 2003 Concepts, Planning, and Deployment Guide. Yes. There is no technical restriction from doing so, but perform testing to ensure acceptable performance. For more information on SMS 2003 sites and hierarchy, see "Appendix F: Capacity Planning for SMS Component Servers" inScenarios and Procedures for Microsoft Systems Management Server 2003: Planning and Deployment on Microsoft TechNet Client access points (CAPs) are for SMS 2.0 or SMS 2003 Legacy Clients, and management points are only used by SMS 2003 Advanced Clients. They both function as the primary contact point for clients. Clients retrieve configuration information and report information like inventory and status. The biggest difference is how data is delivered to the client. The site server replicates a set of files down to the CAP. The client then reads and copies down those files from the CAP and processes them. If the CAP is offline, the site server cannot update the CAP, so the clients that access that CAP could potentially be out of date. Advanced Clients request policy from a management point. The management point does not store any data locally. When a client makes a request for policy, the management point retrieves any applicable policies from the SMS site database and transfers those policies to the client. The management point caches policies locally to improve performance. If a client requests a more recent version of the policy, the management point retrieves the newer version. Management points require IIS. CAPs do not require IIS; they perform file transfers through SMB. For more information about SMS site and hierarchy design, see "Appendix E: Designing Your SMS Sites and Hierarchy" in Scenarios and Procedures for Microsoft Systems Management Server 2003: Planning and Deployment on Microsoft TechNet. Each SMS site must have at least one site system enabled as a CAP, even if the site does not contain any Legacy Clients. SMS does not allow you to remove the last CAP from a site unless a new CAP is specified. The statement to the contrary in Chapter 12, "Planning Your SMS Security Strategy," in the Microsoft Systems Management Server 2003 Concepts, Planning, and Deployment Guide, is in error. Instead of eliminating the CAP, you can manually remove rights to the CAP share for non-administrative accounts. For more information, see Scenarios and Procedures for Microsoft Systems Management Server on Microsoft TechNet. This is documented in the SMS 2003 Operations Release Notes. Search on "Site configuration and maintenance." Verify that Microsoft SQL Server™ has named pipes enabled. If you are running advanced security, verify that the management point computer account has been added to the SMS_SiteSystemToSQLConnection_7<site_code> group, and verify that the site database server is running SQL Server 2000 SP3. For more information about troubleshooting connectivity between a management point and the SMS site database, see article 832109 or article 829868 in the Microsoft Knowledge Base. There are several reasons the management point installation might fail. The MPMSI.log file will log management point installation errors. If you have problems installing the management point, search on "return value 3" for errors in this log file. Here is a list of configurations to verify: If all of these configurations are verified, try updating the MDAC version on the management point to 2.8. For more information, see article 820761, "INFO: List of Significant Fixes That Are Included in MDAC 2.8," in the Microsoft Knowledge Base. A proxy management point is used by roaming Advanced Clients to retrieve policy at remote locations. The proxy management point receives inventory data and status messages and sends them to the secondary site server to be forwarded to the parent site, increasing bandwidth usage efficiency for roaming clients. A proxy management point also services the Advanced Clients that are in its roaming boundaries and are assigned to its primary site. A proxy management point can only be installed in a secondary site, not in a primary site. For more information about proxy management points, see "Appendix F: Capacity Planning for SMS Component Servers" in Scenarios and Procedures for Microsoft Systems Management Server 2003: Planning and Deployment on Microsoft TechNet. No. This is not supported. Windows Server 2003 Web Edition does meet the requirements for SMS clients but not SMS site servers. For more information about system requirements and supported platforms, see the "System Requirements" section in What’s New in SMS 2003 Service Pack 1 available from the SMS 2003 Product Documentation page. Each site can only have one default management point at a time. Advanced Clients only communicate with the default management point. If you need additional management points for performance reasons, combine multiple management points of one site into a Network Load Balancing cluster and configure the virtual IP address of the cluster as the default management point for that site. If you configure additional management points but do not combine them into a Network Load Balancing cluster, those additional management points will not be used by the Advanced Clients. There is no automatic failover to additional management points. If the default management point goes offline, the SMS administrator must manually designate a different computer to be the default management point. If you don't have a default management point, Advanced Clients cannot download any policies or report any data to the site. For more information about site configuration, see "Appendix E: Designing Your SMS Sites and Hierarchy," in Scenarios and Procedures for Microsoft Systems Management Server 2003: Planning and Deployment on Microsoft TechNet. The server locator point locates CAPs for Legacy Clients and management points for Advanced Clients. The server locator point is mostly used in client installation. The server locator point: Plan for a server locator point in your SMS hierarchy when any of the following are true: For more information about assigning site system roles, see "Appendix E: Designing Your SMS Sites and Hierarchy" in Scenarios and Procedures for Microsoft Systems Management Server 2003: Planning and Deployment on Microsoft TechNet. Yes, but it probably isn’t necessary. If you are running Capinst.exe, your Legacy Clients access the server locator points at logon time and produce minimal network traffic. Advanced Clients query the server locator point once on installation when running Capinst.exe. They can also query the server locator point for automatic discovery of the assigned site code if the Active Directory schema has not been extended. You can only register one server locator point entry in WINS, so if you have not extended your Active Directory schema, you only have one effective server locator point in a WINS infrastructure. However, you can have up to 1,000 server locator points published in a single Active Directory environment. For more information about server locator points, see "Appendix E: Designing Your SMS Sites and Hierarchy" in Scenarios and Procedures for Microsoft Systems Management Server 2003: Planning and Deploymenton Microsoft TechNet. A protected distribution point has special boundaries that control which Advanced Clients can use that distribution point. If an Advanced Client falls within the protected boundary of a distribution point, and if the package is on that distribution point, then the Advanced Client will only attempt to retrieve the package from that protected distribution point. Advanced Clients that fall outside the boundaries of the protected distribution point can never retrieve packages from that distribution point. Configure protected distribution points when you want to prevent clients from crossing a slow network link to retrieve a package from a distribution point. For more information about assigning distribution points, see "Appendix E: Designing Your SMS Sites and Hierarchy," and "Appendix H: Upgrading to SMS 2003" in Scenarios and Procedures for Microsoft Systems Management Server 2003: Planning and Deploymenton Microsoft TechNet. Advanced Clients and Legacy Clients choose distribution points differently. When a Legacy Client receives an advertisement, it gets a list, from the client access point, of all distribution points containing that package. Unless the Legacy Client has been configured with a preferred distribution point by using the prefserv.exe tool, the Legacy Client will randomly choose any distribution point in the site that has the package. Even if you use prefserv.exe, you are only configuring a preferred distribution point. If the preferred distribution point is unavailable or does not have the requested content, the Legacy Client will randomly select any distribution point in the site. Legacy Clients do not recognize the boundaries of protected distribution points, and they treat them like any other distribution point in the site. Advanced Clients use more complex algorithms when selecting a distribution point. The Advanced Client sends a content location request to the management point. If the Advanced Client is in the boundary of a protected distribution point, then the management point returns only the name of the protected distribution point. If the Advanced Client is in the local roaming boundaries for a site, then the management point returns the list of all available distribution points with that content. The Advanced Client sorts the list in this order: If the Advanced client is in the local roaming boundaries and more than one distribution point is available, the Advanced Client randomly chooses any distribution point from the list. For example, if there are three distribution points in the Active Directory site, the client chooses randomly. If the Advanced Client is in the remote roaming boundaries for a site, it randomly selects any distribution point in the site. Yes. If the host computer is running Microsoft Virtual Server 2005 R2 and SMS 2003 SP2, all site system roles are supported on the guest operating system. If the host computer is running Microsoft Virtual Server 2005 or Microsoft Virtual PC 2004 it can fill any SMS SP2 server role, but SMS server roles are not supported on the guest operating system. SMS 2003 SP2 supports the Legacy Client or Advanced Client running on the guest operating system, provided that the guest operating system meets the operating system and dependency requirements for the particular SMS client. For more information about Virtual Server 2005 and Virtual PC 2004 support, see Microsoft Systems Management Server 2003 Supported Configurations for Service Pack 2 on the Microsoft web site. You can move a local SMS site database to a remote server or move the SMS site database from one remote server to another remote server. Moving a remote SMS site database from a remote computer running SQL Server to the site server back to the site server is not supported. When moving the SMS site database server, SMS moves the SMS Provider to the same server that the SMS site database is moving to. The basic steps involve backing up the SMS site database, restoring it to the new computer running SQL Server, running the SMS Setup Wizard on the primary site server, and then running a site upgrade from the SMS CD. For detailed steps, see the section "Changes to Site Configuration, Hardware, and Infrastructure" in Scenarios and Procedures for Microsoft Systems Management Server on the Microsoft Download site. Yes. SMS 2003 SP2 is supported for use with SQL 2005. Prior versions of SMS do not support the use of SQL 2005. The following list describes the sequence for upgrading an existing SMS 2003 site to SQL 2005: Upgrade Sequence: For more information about system requirements and supported platforms, see the "Server Software Requirements" in the Supported Configurations Guide SMS 2003 SP2 at the Microsoft Download center. Management points, server locator points, and reporting points. Reporting points also require that Active Server Pages be enabled. Distribution points can use BITS to manage downloads to the Advanced Client. BITS enabled distribution points require IIS and WebDAV. Windows Server 2003 does not install IIS by default. If you install IIS on Windows Server 2003, then BITS, ASP, and WebDAV are not enabled by default. For more information about site systems, see "Appendix E: Designing Your SMS Sites and Hierarchy" in Scenarios and Procedures for Microsoft Systems Management Server 2003: Planning and Deployment on Microsoft TechNet. If your site system is running Windows 2000 Server and IIS 5.0, run the IIS Lockdown Wizard with the SMS IISLockd.ini. IIS Lockdown works by turning off unnecessary features, which reduces potential attacks. The IIS Lockdown Wizard includes the URLScan Security tool, which restricts the types of HTTP requests that IIS processes. If your site system is running Windows Server 2003 and IIS 6.0, the IIS Lockdown feature is integrated into IIS. You should still run URLScan 2.5 to apply UrlScan_SMS.ini file. Download the SMS IISLockd.ini and UrlScan_SMS.ini as part of the SMS Toolkit from the Microsoft Download site. For the procedure to apply these templates, see the documentation that comes with the SMS Toolkit. Important Running the IIS Lockdown or URLScan tools without the SMS templates can cause SMS operations to fail. For more information about Internet Information Services security, see Scenarios and Procedures for Microsoft Systems Management Server. Yes. If you run your site systems on Windows Server 2003 SP1, you might need to perform some workarounds to restore full SMS functionality. The following sections of this FAQ provide information about issues that might arise and suggested workarounds you can perform: Resetting the DCOM permissions to pre- Windows Server 2003 SP1 levels Server locator points and reporting points require the same level of DCOM permissions they had prior to Windows Server 2003 SP1. Windows Server 2003 SP1 splits the previous Launch permission into Local Launch and Remote Launch and splits the Activation permission into Local Activation and Remote Activation. In addition, the activation permissions are being moved from the Access Permission ACL to the Launch Permission ACL. For more information about the new COM permissions, see Granular COM Permissions on MSDN. If you upgrade your server locator point to Windows Server 2003 SP1, you must reset the COM permissions so that the Internet Guest Account (IUSR_<servername>) has Local Launch permissions as it did prior to SP1, as shown in the following procedure. To grant Local Launch permission to the Internet Guest Account: If you upgrade your reporting point to Windows Server 2003 SP1, you must reset the COM permissions so that the SMS Reporting Users Group has Local Launch permissions as it did prior to SP1, as shown in the following procedure. To grant Local Launch permission to the SMS Reporting Users Group: If your site server is running Windows Server 2003 SP1 and you want to run the SMS Administrator console on a computer that does not contain the SMS Provider, you must reset the COM permissions so that the user running the SMS Administrators console has remote launch and remote activation on the computer running the SMS Provider. Because everyone running the SMS Admin console should be a member of SMS Administrators, you can also grant the remote launch and remote activation to the SMS Administrators group on the SMS Provider. Additional Configuration Tasks if you Run the Security Configuration Wizard Introduced in Windows Server 2003 SP1, the Security Configuration Wizard helps you create a security policy that you can apply to any server on your network. The wizard recognizes SMS server roles, services, ports, and applications, but might not recognize all of the required configurations. The following section details which configurations are not automatically configured by the Security Configuration Wizard and the additional configurations required to keep SMS functioning properly. Note For more information about the roles and features recognized by the Security Configuration Wizard, view the configuration database while running the wizard. Enable Remote WMI in the Security Configuration Wizard for Remote Site Database Servers When using the Security Configuration Wizard in Windows Server 2003 SP1, the Remote WMI service is not selected by default. The Security Configuration wizard is unable to recognize the SMS Provider. If you run the wizard on the server that has the SMS Provider installed, you must enable the Remote WMI service on the Select Administration and Other Options page of the Security Configuration Wizard. Unless Remote WMI is enabled, the SMS Administrator consoles on the site server and any other remote consoles will fail to connect to the SMS namespace in WMI. Enable the SMS Database Monitor Ports on Remote SMS Site Database Servers. If your SMS site database server is not on the same computer as the SMS site server, the Security Configuration wizard correctly enables the SMS Database Monitor service (SMS_SQL_Monitor_<ServerName>) but it does not enable the ports used by the SMS Database Monitor service. On the Open Ports and Approve Applications page of the wizard, select Ports used by SMS_SQL_MONITOR_<ServerName<. If the SMS site database server is on the same computer as the SMS site server, no ports are required. Enable Remote Administration for IIS and Related Components on BITS-enabled distribution points. When you run the Security Configuration wizard on a BITS-enabled distribution point, you must select Remote administration for IIS and related components on the Installed Options page. If Remote administration for IIS and related components is not enabled, the wizard blocks the SMS Distribution Manager service from creating virtual directories on the distribution point. Deselect the CAP Role if it is not on the Site Server. The Security Configuration Wizard always identifies a site server as having a Client Access Point, whether or not the site server is actually assigned that role. If the CAP role is incorrectly selected, deselect it on the Select Administration and Other Options page of the Security Configuration Wizard. Re-run the Wizard after Changing Site System Roles. If you run the Security Configuration Wizard on a server and then configure a site role on that server, you should re-run the wizard to ensure the site system roles functions properly. Identifying Ports and Services Required If Windows Firewall Is Enabled Windows Server 2003 SP1 also includes the Windows Firewall feature first released in Windows XP SP2. The firewall can interfere with some SMS features. Windows Firewall is not enabled by default on servers. If you enable the Windows Firewall on a Windows Server 2003 SP1 server, either by using Control Panel or by running the Network Security section Security Configuration Wizard, you must verify that the following ports and applications are permitted to pass through the Windows Firewall. Remote Control Port Remote Control Function TCP port 2701 Allows general contact, reboot, and ping TCP port 2702 Remote Control TCP port 2703 Chat TCP port 2704 File Transfer Server 2003 SP1.
http://technet.microsoft.com/en-us/cc998654.aspx
crawl-002
refinedweb
3,500
51.18
Blogs by Author & Date Introduction po... Continue reading → Have you ever had a product design idea? Our engineers created this overview of the product development process to demystify how to bring your product to market. Learn more about DMC's Custom Software and Hardware Development services.... previ... a... This is a brief tutorial on getting started with the Siemens embedded web server in the S7-1200 and S7-1500. Using the concepts explained below, you can create a simple web page or a fully featured HTML5 web app. Getting Started Step 1. Turn on the web server. To do this, navigate to the web server menu in the device configuration page and check the box to enable the web server. Step 2: Download your project to your PLC and browse to its IP add... This past Friday (April 29th), we started and......several hours later....finished our move from 1333 Kingsbury to 2222 Elston. It was pretty crazy as we (and a team of movers) dismantled the place and hauled everything to the new building. I took a video of the aftermath. Even though the place looks totally trashed, It was a really good space for DMC over the past 7 years. Kingsbury was definitely the Millennium Falcon of offices: "She may not look like much, but she's got it where it ... We are all getting new business cards at DMC, so we thought it would be fun to put QR codes on them to make it easier for our smartphone enabled customers to scan our info and add us to their contact database. Thanks to Google it's pretty easy to make your own QR codes using their QR chart API . The only issue is that you have to properly format the data before sending it to google, especially if you want to all of your contact info in the proper vCard or MeCARD format. A little jav... What was the best part of my trip to Charlotte, NC last week? - Watching the Blackhawks Stanley Cup game 6 on a massive screen in the NASCAR hall of fame after the Siemens award ceremony. Very surreal... and totally awesome! Yes, Siemens really knows how to put on a great event. Last week was the 2010 Siemens Automation Summit, held in beautiful Charlotte, NC. The Siemens Automation Summit is a conference that specifically focused on the end users of Siemens automaton products. It is a gre... Encapsulating your data into custom Data Structures will allow you to Dominate (maybe not the world, but at least your PLC) I'm going to conclude my series on the IEC 61131-3 standard by examining the benefits of Data Structures. (If you missed my other posts, you can check out the previous part 1, part 2, part 3) Before we dive into Data Structures, let's review basic data types. Every PLC supports a certain group of standard data types. The list typically includes the followin... If you are a programmer, an OEM, or end user that utilizes IEC 61131-3 compliant PLCs, you should MUST read this article. In the 3rd part of my series on the IEC61131-3 programming standard (you can check out the previous part 1 and part 2), I will explain the huge benefits of structured programming and why Function Blocks are the greatest thing ever....seriously they are awesome and worth using in every PLC program. So what is a Function Block anyway? A Function Block is an encapsulate... hardwa... This is part two of my series on the IEC-61131-3 programming standard. Here's a link to Part 1 for those of you that missed it. The IEC61131-3 standard contains 5 different programming languages. This article will give a brief introduction to each one and some tips on choosing which language is best. The five languages: Ladder Diagram is most popular in the USA. It is based on the graphical presentation of Relay Ladder Logic. Most non-IEC61131-3 compliant PLCs only support ladder lo... DMC received a record number of visitors to our booth at yesterday's job fair at the University of Illinois. In our quest to hire the best of the best, Jon Carson and I met and spoke with some really promising candidates and managed to collect almost 100 resumes. After a few minor adjustments, our ping-pong ball demo performed flawlessly throughout the day. As intended, it drew a lot of attention to our booth and hopefully helped us find our next stellar DMC engineer. In the spirit of coo... Check out the nifty new code syntax highlighter we've added to the blog. This will make it much easier for us to post code snippets. Here's a sample c# function that is automatically formatted. Also note the handy tools on the upper right corner of the code box. Just hover your mouse over the upper right corner and you can view, copy, or print the code. using System; namespace MyNameSpace { class HelloWorld { static void Main(string[] args) ... Nobody loves catchy numeric buzzwords as much as me. I - E - C - Six - Eleven - Thirty - One - Dash - Three.... It just rolls off the tongue! OK I admit, it's a mouthful, but trust me it's worth knowing about. This is a brief intro to IEC61131-3 and the first part in a series of posts that will cover its features and benefits. So what is it? The International Electro Technical Commission (IEC) is a non-profit organization that develops standards for electrical and electronic techno... We are big fans of the .NET Micro Framework. For anyone who hasn't heard of it, it's a super light version of the .NET Framework that runs on resource-constrained devices embedded systems (read more of our thoughts on .Net Micro Framework). It's a great platform, however sometimes we do run into issues. The Micro Framework has lighter versions of some of the core functions built into the standard .NET Framework. The encryption functions built into the standard .NET platform are no... proje... Alle...
https://www.dmcinfo.com/latest-thinking/blog/articletype/authorview/authorid/18/timj
CC-MAIN-2018-09
refinedweb
1,013
74.39
How to set up Django on Plesk As a follow-up to the article about Ruby on Rails and Plesk I’ll try to explain how to organize Django hosting on Plesk. We will use an Ubuntu 14.04 server and Plesk 12.0 for our experiments. I assume that you will get this configuration somehow. Plesk may use Apache and Apache+nginx for serving of websites. In the scope of this article we’ll check how to setup Django apps hosting only for Apache without nginx. Let’s get started. First of all let’s check Python presence in the system. You can do it by issuing the command: python --version An output can be the following: Python 2.7.6 Install a Python package manager (under the “root” user): apt-get install python-pip It is common practice to have a separate virtual environment for each Python application. So, let’s install the “virtualenv” package (under the “root” user): pip install virtualenv We will use Phusion Passenger as an application server for hosting Django projects. So, let’s install it (under the “root” user): apt-get install libapache2-mod-passenger The next step is to create a domain in Plesk for the application. Let it be a subdomain in our case. We’ll have a separate directory for document root and app files: One should take into account that the document root is a subdirectory. In our example django-public is located inside the django-app directory. This is needed for the proper functioning of the application server. To avoid problems, you need to clean up the newly created django-public directory (remove a default site stub). You can do it via file manager, FTP or SSH. It’s more convenient to work via SSH under a particular user (not root), so let’s enable SSH access for the subscription: Now it’s time to log into the system via SSH under a user account (“admindom” in our case) and take some steps. The “django-app” directory will contain your application and a virtual environment for it. Go to the “django-app” directory and create a virtual environment for Python: cd django-app/ virtualenv django-app-venv Activate the new virtual environment to make sure that subsequent commands will be performed in a proper context: source django-app-venv/bin/activate Next step is to install Django framework. The latest stable version of Django can be installed by “pip”: pip install Django You can verify your Django installation using Python interpreter. Type python in the command shell and use the following code: >>> import django >>> print(django.get_version()) 1.7 Now you can upload your application via FTP or SSH. If the application is located at the “app” directory, you need to upload files to “django-app/app” directory. So the “django-app” directory may look like the following: app/ django-app-venv/ One of the possible ways to serve static assets is to copy them to the corresponding directory: cp -R ./django-app/django-app-venv/local/lib/python2.7/site-packages/django/contrib/admin/static \ ./django-app/django-public/ To serve our application via the application server, we need to create passenger_wsgi.py file inside the “django-app” directory with the following content: import sys, os app_name = 'app' env_name = 'django-app-venv' cwd = os.getcwd() sys.path.append(cwd) sys.path.append(cwd + '/' + app_name) INTERP = cwd + '/' + env_name + '/bin/python' if sys.executable != INTERP: os.execl(INTERP, INTERP, *sys.argv) sys.path.insert(0, cwd + '/' + env_name + '/bin') sys.path.insert(0, cwd + '/' + env_name + '/lib/python2.7/site-packages/django') sys.path.insert(0, cwd + '/' + env_name + '/lib/python2.7/site-packages') os.environ.setdefault("DJANGO_SETTINGS_MODULE", app_name + ".settings") from django.core.wsgi import get_wsgi_application application = get_wsgi_application() Variable values in lines 3 and 4 should be replaced with yours. The final step is to check your app via a browser. If there were errors you can see the “Web application could not be started” error page from Phusion Passenger with details. If everything is OK, you’ll see your application: In some cases, app restart is required. Instead of restarting of the whole web server, Passenger provides a simple way to restart app. You need to create directory named “tmp” inside “django-app”: mkdir tmp After that, to restart the app, you can use the following command: touch tmp/restart.txt That’s it. Now you know how to host Django apps using Plesk. How useful was this post? Click on a heart to rate it! Average rating / 5. Vote count: Oh no, sorry about that! Let us know how we can do better below Thanks for your feedback! I have no experience with django at all, but I found the way to run django apps on plesk without any special configuration. I hope this will be useful for someone else: Yes, using of Apache mod_wsgi is another option. Doesn’t work for me… all i get is a Passenger Error Message “Web application could not be started” Any Hints? Please describe your problem on forum – Need to provide the whole output (or screenshot), OS name, Ruby version. Maybe you’re using old Ruby, maybe something else. The configuration steps mentioned here work for multiple domains? Yes, why not? 🙂 I’ve created a blogpost about installing Python 2.7.x on CentOS 6.6 with Plesk 12 for Django. See: Doesn´t work, I have ubuntu with plesk 12.0.18, help me!! please. What exactly doesn’t work? It’s better to describe your problem in details on the forum Which Plesk editions support this configuration? Which Linux distro is recommended for this setup? Any Plesk edition has such support. Starting from Plesk 12.5 it’s possible to install the Phusion Passenger via autoinstaller. As you may see, I suggest Ubuntu 14.04. It’s my personal preference. In general it’s better to use some modern OS version. I need to also setup a deployment script that pulls the latest app code from github.com and i want to use SSH keys. If I create a SSH login, as you have suggested, where does the .shh/ directory go? Directory .ssh is located inside the home directory of the user. E.g. /var/www/vhosts/*main-domain-name.dom*/.ssh Doesn’t work for me. I get the “Web application could not be started”. I followed your tutorial except the part about the serve static assets. I couldn’t quite understand it. Is there an example/test app which I could try to test if everything should work if the app is correct? “Web application could not be started” error is always followed by additional output about the reason of failure. Please provide it. Maybe you have different OS, different version of Python. If so you need to fix paths to Python in passenger_wsgi.py file. Hello Guys, This work with Plesk Automation 11.5 Linux Node? There is nothing specific, so probably yes, it should work. error 504 gateway timeout nginx, some idea for this issue? thanks. this post not works, show error 504 Gateway Time-out nginx, something idea about this error. my vps information is: Product version: 12.0.18 Update #98 Update date: 2017/03/17 06:26 Build date: 2015/10/14 14:00 Build target: Ubuntu 14.04 Revision: 333119 Architecture: 64-bit Wrapper version: 1.1 in advance thanks Hi, I followed your tutorial, but when I want to see my websiteI have this message “No index site defined / uploaded”. on my Plesk, I have put in the “httpdocs” directory, the “django-app” directory. In this directory, I have my project , the “jango-app-venv” and also the passenger_wsgi.py Do I need to change some settings in my Plesk administration ? I have try to change settings “index files”, but I don’t know what I have to write. Thanks for your help
https://www.plesk.com/blog/product-technology/plesk-and-django/
CC-MAIN-2019-43
refinedweb
1,321
59.19
After knowing some function in scala, I've tried to patch the element from list: val aList: List[List[Int]] = List(List(7,2,8,5,2),List(8,7,3,3,3),List(7,1,4,8,8)) List[List[Int]] = List(List(7,2,2,5,2),List(7,7,3,3,3),List(7,1,4,4,4)) def f(xs: List[Int]) = xs match { case x0 :: x1 :: x2 :: x3 :: x4 => List(x0,x1,x2,x3,x4) case 8 :: x1 :: x2 :: x3 :: x4 => List(x1,x1,x2,x3,x4) case x0 :: 8 :: x2 :: x3 :: x4 => List(x0,x0,x2,x3,x4) case x0 :: x1 :: 8 :: x3 :: x4 => List(x0,x1,x1,x3,x4) case x0 :: x1 :: x2 :: 8 :: x4 => List(x0,x1,x2,x2,x4) case x0 :: x1 :: x2 :: x3 :: 8 => List(x0,x1,x2,x3,x3) } aList.flatMap(f) Product with java.io.Serializable scala.collection.GenTraversableOnce The problem is just in the last match pattern: case x0 :: x1 :: x2 :: x3 :: 8 => List(x0,x1,x2,x3,x3) You put 8 in the position of the list tail, so it has to have the type List[Int] (or more generally GenTraversableOnce as the compiler tells you). If you have fixed length inner lists, you should change your patterns to have :: Nil in the end: case 8 :: x1 :: x2 :: x3 :: x4 :: Nil => List(x1,x1,x2,x3,x4) ... case x0 :: x1 :: x2 :: x3 :: 8 :: Nil => List(x0,x1,x2,x3,x3) An alternative is case List(8, x1, x2, x3, x4) => List(x1,x1,x2,x3,x4) ... case List(x0, x1, x2, x3, 8) => List(x0,x1,x2,x3,x3) Also, your first pattern means that the other ones won't be ever reached, it just leave the list as is. If your inner lists are not necessarily fixed-size, you need a more generic solution. Clarify, please, if that's the case. Also, if you want to map List[List[Int]] to List[List[Int]], you should use .map(f) instead of flatMap. I noticed that in your example in the last sub-list you have two 8s replaced by the left 4. If you want to achieve this, you can make your function recursive and add a default case (for when all 8s are replaced). def f(xs: List[Int]) = xs match { case 8 :: x1 :: x2 :: x3 :: x4 :: Nil => f(List(x1,x1,x2,x3,x4)) case x0 :: 8 :: x2 :: x3 :: x4 :: Nil => f(List(x0,x0,x2,x3,x4)) case x0 :: x1 :: 8 :: x3 :: x4 :: Nil => f(List(x0,x1,x1,x3,x4)) case x0 :: x1 :: x2 :: 8 :: x4 :: Nil => f(List(x0,x1,x2,x2,x4)) case x0 :: x1 :: x2 :: x3 :: 8 :: Nil => f(List(x0,x1,x2,x3,x3)) case _ => xs } But even with these fixes this way f will cycle on a list with two 8s in the beginning and some other edge cases. So here is a more generic solution with pattern matching: def f(xs: List[Int]): List[Int] = { // if there are only 8s, there's nothing we can do if (xs.filter(_ != 8).isEmpty) xs else xs match { // 8 is the head => replace it with the right (non-8) neighbour and run recursion case 8 :: x :: tail if x != 8 => x :: f(x :: tail) // 8 is in the middle => replace it with the left (non-8) neighbour and run recursion case x :: 8 :: tail if x != 8 => x :: f(x :: tail) // here tail either starts with 8, or is empty case 8 :: tail => f(8 :: f(tail)) case x :: tail => x :: f(tail) case _ => xs } }
https://codedump.io/share/3wUlnIr5EkYE/1/replacement-some-element-with-next-or-previous-element-in-scala-list
CC-MAIN-2018-22
refinedweb
600
79.13
This forum has migrated to Microsoft Q&A. Visit Microsoft Q&A to post new questions. Dear all, Can somebody tell me how we can add federation with social Identity providers in the new Azure AD? We do not seem to find how we can do this, in the past this was possible by adding a new "access control namespace" but this seems to be deprecated. Thank you in advance for all the information you can provide us. Kind regards, Bart Note: Posts are provided “AS IS” without warranty of any kind, either expressed or implied, including but not limited to the implied warranties of merchantability and/or fitness for a particular purpose.
https://social.technet.microsoft.com/Forums/en-US/f795f55b-9a40-42f1-b592-5e554e3da1c7/how-to-add-federation-with-social-identity-providers-in-azure-ad?forum=ADFS
CC-MAIN-2022-27
refinedweb
113
61.36
RAISE(3P) POSIX Programmer's Manual RAISE(3P) This manual page is part of the POSIX Programmer's Manual. The Linux implementation of this interface may differ (consult the corresponding Linux manual page for details of Linux behavior), or the interface may not be implemented on Linux. raise — send a signal to the executing process #include <signal.h> int raise(int sig); The functionality described on this reference page is aligned with the ISO C standard. Any conflict between the requirements described here and the ISO C standard is unintentional. This volume of POSIX.1‐2008 defers to the ISO C standard. The raise() function shall send the signal sig to the executing thread or process. If a signal handler is called, the raise() function shall not return until after the signal handler does. The effect of the raise() function shall be equivalent to calling: pthread_kill(pthread_self(), sig); Upon successful completion, 0 shall be returned. Otherwise, a non- zero value shall be returned and errno shall be set to indicate the error. The raise() function shall fail if: EINVAL The value of the sig argument is an invalid signal number. The following sections are informative. None. None. The term ``thread'' is an extension to the ISO C standard. None. kill(3p), sigaction(3p) The Base Definitions volume of POSIX.1‐2008, signal RAISE(3P) Pages that refer to this page: signal.h(0p), abort(3p), kill(3p), killpg(3p), pthread_kill(3p), sigaction(3p), signal(3p)
http://man7.org/linux/man-pages/man3/raise.3p.html
CC-MAIN-2018-34
refinedweb
245
58.18
CA1708: Identifiers should differ by more than case Visual Studio 2015 The names of two types, members, parameters, or fully qualified namespaces are identical when they are converted to lowercase. Identifiers for namespaces, types, members, and parameters cannot differ only by case because languages that target the common language runtime are not required to be case-sensitive. For example, Visual Basic is a widely used case-insensitive language. This rule fires on publicly visible members only. Select a name that is unique when it is compared to other identifiers in a case-insensitive manner. Do not suppress a warning from this rule. The library might not be usable in all available languages in the .NET Framework. Show:
https://msdn.microsoft.com/en-us/library/ms182242.aspx
CC-MAIN-2016-30
refinedweb
117
54.93
Hello Neil, Friday, September 11, 2009, 7:26:47 PM, you wrote: i suggest you to import extensible-exceptions package instead - it's available even for ghc 6.8. alternatively, you may import old-exceptions package (or something like this). trying to develop code compatible with both versions of exceptions should be a nightmare :D > Hi, > In my CHP library I need to do some exception catching. I want the > library to work on GHC 6.8 (with base-3 -- this is the current version > in Ubuntu Hardy and Jaunty, for example) and GHC 6.10 (which comes with > base-4). But base-3 and base-4 need different code for exception > catching (whether it's importing Control.OldException or giving a type > to the catch method). > Here's what I currently do -- my Haskell file contains this: > #if __GLASGOW_HASKELL__ >= 609 > import qualified Control.OldException as C > #else > import qualified Control.Exception as C > #endif > My cabal file contains this (it used to say just "base,..." but Hackage > complained at me the other day when I tried to upload that): > Build-Depends: base >= 3 && < 5, ... > This works on two machines: one is 6.8+base-3, the other is > 6.10+base-3&base-4, where cabal seems to use base-4. However, I have > had a bug report (now replicated) which stems from a different > 6.10+base-3&base-4 machine where cabal picks base-3 instead. The real > problem is that the #if is based on GHC version, but really it should be > based on which base-* library is being used. I know the code works with > base-3 (use Control.Exception) and base-4 (use Control.OldException) but > I can't get the build process right to alter the code based on which > base-* library is being used. > Can anyone tell me how to fix this? I don't think that changing to > always use Control.Exception would fix this, because I need to give a > different type for catch in base-3 to base-4, so there's still the > incompatibility to be dealt with. > Thanks, > Neil. > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > -- Best regards, Bulat mailto:Bulat.Ziganshin at gmail.com
http://www.haskell.org/pipermail/haskell-cafe/2009-September/066247.html
CC-MAIN-2014-35
refinedweb
371
76.32
Created on 2018-12-13 00:30 by vstinner, last changed 2019-03-01 17:28 by vstinner. The following code hangs: --- import multiprocessing, time pool = multiprocessing.Pool(1) result = pool.apply_async(time.sleep, (1.0,)) pool.terminate() result.get() --- pool.terminate() terminates workers before time.sleep(1.0) completes, but the pool doesn't mark result as completed with an error. Would it be possible to mark all pending tasks as failed? For example, "raise" a RuntimeError("pool terminated before task completed"). Attached PR 11139 sets RuntimeError("Pool terminated") error in pending results if the pool is terminated. Pablo: since you worked on multiprocessing recently, did you see this bug? I'm not sure about my PR 11139... If someone else wants to work on a fix, ignore my PR ;-)
https://bugs.python.org/issue35478
CC-MAIN-2019-30
refinedweb
131
52.97
If jQuery gave us proper control over the DOM, and React brought components to the limelight, what's next? Svelte by Rich Harris might be an answer to this conundrum. You might remember him from tools such as Bublé and Rollup. Read on to learn what Svelte is about. theguardian.com, based in New York City. My background is in journalism, and my day job is to come up with new ways to use web technologies in the service of storytelling. A key part of that is building the tools that we need to create rich and performant applications on tight deadlines.I'm an interactive editor at On one level, Svelte is a UI framework – if you've heard of tools like React, Vue or Ractive, it tackles the same problems they do. It allows you to build applications in a declarative, component-driven way, rather than creating a hairball of imperative DOM manipulation. But on another way, it's a complete rethink of how we approach the problem: rather than being a piece of software that sits between you and the browser, giving you a set of abstractions to work with, it essentially writes your app for you in the most efficient way possible. The result is faster loading, faster running apps, with next to zero waste. It's less magical than it sounds. You write components in HTML files, which can optionally include <style> and <script> elements to encapsulate CSS and behaviours. Svelte's template syntax takes a few minutes to learn. These component files are converted into modules by Svelte's compiler, using the command line interface or one of the various build tool integrations. These modules contain what you might call 'vanilla JS' – i.e. low-level DOM manipulation specific to your app – meaning there's no data-binding or DOM diffing or any of the other tricks frameworks have to use to render your UI. HelloWorld.html <h1>Hello {{name}}!</h1> app.js import HelloWorld from './HelloWorld.html'; var app = new HelloWorld({ target: document.querySelector('main'), data: { name: 'world' } }); app.set({ name: 'SurviveJS' }); To the end user, the biggest difference is speed. Svelte is lighter and faster than alternative solutions because the browser has a lot less work to do (benchmarks coming soon!). Our TodoMVC implementation is just 3.6 kB gzipped, which is tiny. For the developer, the advantages are more subjective. Svelte has a very simple API and is designed to behave very predictably – for example, the DOM updates synchronously whenever data changes. It has some productivity-boosting features borrowed from Ractive, like scoped styles and computed properties. One of the nice features about Svelte's approach is that it's inherently very easy to adopt incrementally, without a big bang rewrite. Ordinarily if you wanted to move from one framework to another there'd be a transition period during which your app depended on both, which is terrible. Guilt, partly. The JS community has become hyper-aware in the last two or three months about the cost of shipping too much JavaScript – it's not just about the download time, it's also about the parse/eval time, which on mobile has a real impact for a lot of people. As the creator of Ractive, I'd been unwittingly contributing to the problem, just like every framework author. As soon as I had the idea for Svelte – 'what if the framework was actually just a compiler?' – I could barely sleep until I'd written the first proof-of-concept. I don't think I've ever been this excited about one of my projects. There's a huge amount of work still to do – server-side rendering and progressive enhancement, transitions, routing, plus all the documentation and examples that go along with it. We'll be very busy over the next few weeks. I'm particularly keen to explore a couple of areas that Svelte opens up – statically analysing CSS in the context of the markup it's attached to, and WYSIWYG component editors that can create dependency-free widgets and applications. I think there are some tremendous opportunities that only really become practical when you have a zero-runtime framework and template-driven components. What Svelte does is an example of 'ahead-of-time' compilation, or AoT. (It's not the only framework doing AoT – Angular 2 has also embraced similar techniques – though as far as I know Svelte is the first to take the idea this far.) I've seen a huge amount of interest in AoT, and I think there's a lot of undiscovered territory, and not just in UI. I'm excited to see how that develops. A lot of people have asked me if Svelte's techniques could be used with JSX. Unfortunately, other framework authors have reached the same conclusion that I did – because JSX is 'just JavaScript', no compiler could ever have the same guarantees about the shape of your app, meaning there will always need to be some runtime reconciliation process. So I think we might see a resurgence of interest in non-JSX approaches to building apps. If you try to learn web development by reading blog posts about new technologies you will drown in information. Instead, find someone who is a bit further along on their programming journey and befriend them. Just build stuff badly – hack it together any way you can – and ask them for help when you need it. Both of you will become better programmers as a result. Dominic Gannaway – here's the author of Inferno, which is probably the fastest UI framework in the world. Until Svelte overtakes it, at least. Thanks for the interview Rich! It's cool to see how changing the axioms and identifying true problems can lead to new solutions that change the way we think about web development. There is always room for innovation. You can learn more about Svelte at their site. There's also a REPL you can use to try out the syntax. Remember to star the project as well as everyone likes stars.
https://survivejs.com/blog/svelte-interview/
CC-MAIN-2021-04
refinedweb
1,016
63.09
Script Validators Tutorial Your Jira workflows define the business processes associated with working on different issue types. Workflows can be simple or complex, and they can include several or few workflow functions. Depending on what your organisation requires, you may want to use enhanced workflow validators provided by ScriptRunner. These script validators allow you to do more in your workflow, providing extra control or information. All of the ScriptRunner workflows functions are found on the normal Workflows page of Jira Administration (Administration>Issues>Workflows), specifically by editing an existing workflow. Edit an existing workflow and click on the transition you wish to add a validator to, then Validators>Add Validator>Script Validator [ScriptRunner]. What is a Script Validator? A script validator is a powered-up version of a standard Jira validator. Instead of using one of the basic validators that you find in Jira, you can run a script to validate your issue. Like other scripted workflow functions, ScriptRunner includes some built-in options, as well as a custom script, and a simple script option. Remember, a validator checks to see if the user can transition an issue, they don’t prevent the next transition button from appearing (that would be a script condition). When you select Script Validator [ScriptRunner], you see several options on the Select Script page. Built-in Script Validators: ScriptRunner includes two built-in script validators that you can use without copying or pasting any scripts. Require a Comment on Transition: This validator makes sure the user transitioning the issue enters a comment. You could use this validator to get background information on an issue that needs to transition to a Review status. Field(s) Changed Validator: This validator makes sure the user updates a specified field. You can only select fields that appear on the transition screen for the issue. Simple Scripted Validator: This option is for inline scripts that are less complicated than the Custom Script option. Use a simple script that only checks if the transition should run or not run. Custom Script Validator Here you can upload your own script file or paste a script. These scripts are more complicated than the Simple Scripted Validators. If you aren’t quite a Groovy writer yet, you can copy and paste some example scripts found in Adaptavist’s ScriptRunner documentation and use them, or you can create your own. You can copy and paste code from the Adaptavist Library or create your own. Examples of Script Validators Field(s) Changed Great Adventure needs every onboarding issue to show which department a new employee belongs to. To handle this requirement, they’ve been using components for each of the departments. Unfortunately, many users are transitioning the onboarding issues without updating the Component field first, so the issues are needing manual edits for the different departments. To help with this situation, Great Adventure plans to use a script validator to make sure users complete the Component field before they resolve an onboarding issue. Add Field(s) Changed Validator Validators. Click Add Validator. Now you see the Add Validator to Transition page. From the list, select Script Validator [ScriptRunner], and then click Add. On the Select Script page, select Field(s) Changed Validator. Now, you see some options for this built-in script. In the Note field, type a note for your reference. In the Fields menu, select a field that you want to be sure has changed. Great Adventure would use the Component field. Click Preview to see a statement that references what this script validator achieves. Click Update to add the script validator. Once you click Update, you see the Workflow Edit screen with your new validator. The next step is to Publish the workflow and test it. Test Field(s) Changed Validator that uses the workflow you edited. Transition the issue to where you added the scripted validator. When you click that transition and the transition screen appears, try to transition without changing the field you selected. It should not work. Change the field and try and again and it should work. This validator can be a useful reminder and an easy way to be sure that an issue is being updated as needed. If needed, users could still edit the field on the issue after it has transitioned. Custom Script- Require Fix Version In Great Adventure’s Software Development team, they use Fix Versions to track when the team resolves bugs in the products. Similar to the Component issue, many users forget to add the Fix Version when completing the issue. Great Adventure has decided to use a script validator in their Software Development project, so when a user sets the resolution to Fixed, they also have to add the correct Fix Version. To add this validator, you just copy/paste the following code into the box that appears when you select the Custom Script Validator option on the Select Script page. import com.opensymphony.workflow.InvalidInputException if (issue.getResolution() == "Fixed" && ! issue.fixVersions) { throw new InvalidInputException("fixVersions", "Fix Version/s is required when specifying Resolution of 'Fixed'") } Like the built-in validator we discussed previously, this validator requires a transition screen. In this particular case you use a Resolution screen to capture the last bit of information to resolve the issue, but a different use-case might call for a screen on a different transition. One thing to note if you are using sample scripts—there may be items to change as you update your ScriptRunner.
https://scriptrunner.adaptavist.com/6.11.0/jira/tutorials/scripted-validators-tutorial.html
CC-MAIN-2020-50
refinedweb
913
55.13
Step 1: Learn how CDs and CD-Rs work How do you arrange the 1s and 0s? It helps to know that the data is written along a spiral that starts from the center of the CD and spirals outward in a clockwise direction. The length of each bit is a fairly precise value (more on this later), and the pitch of the spiral, or the distance between successive spirals, is also a fairly precise value. Thus, using some math and some guesswork, it is possible to create a mapping from the nth bit in your data to an x,y coordinate. Now we really have to look under the hood of CD data storage to figure out how to tell the CD writer to write a 0 or 1 for the nth bit. Data is organized as a sequence of sectors, each of which is 2352 bytes long. The data within each sector is organized in a particular way depending on what type of CD your are dealing with (data, audio, etc...). The most "raw" type of organization is known as "mode 2." Mode 2 does away with many of the nice things about CDs like error correction, but it gives us the most control over the bits. In a mode 2 sector, the first 12 bytes contain "syncing" data and the next 4 contain specific information about the sector. These bytes cannot be changed at the software level. (Maybe it is possible to write a driver that could change these?) The next 2336 bytes are free to be anything though. If this were all that happened to the data, our job would be easy. Unfortunately, there's a lot more data manipulation before the data actually gets written to the CD. First, the data in each sector is "scrambled" by which we mean it is run through some math function which is supposed to "whiten" the data (i.e. keep the average height of the data on the CD half-way between pit and no-pit). Second, the data is sent through a CIRC encoder, which applies some error correction codes. Finally, the data is sent though an eight-to-fourteen modulator (EFM). This maps each 8-bit byte to a 14-bit sequence. This is to prevent long sequences of 0's (no change in height) which are hard for the CD drive to read. The point is: drawing pictures on CDs is possible, so it should be done. For a more complete (but still at some times cryptic) explanation of CDs, check out the freely available ECMA-130 specification. Thanks if FF is white, 00 is black, and 01 means to go to the next line, FF00FF0100FF00010000000100FF00 represents a small image of the letter A. This only indicates lines of black and white, so what you see here is already being used by the program. P.S. That string of numbers works as follows: FF00FF0100FF00010000000100FF00 which becomes this when splitting bytes apart with spaces: FF 00 FF 01 00 FF 00 01 00 00 00 01 00 FF 00 then, converting 01 to a new line, 00 to #, and FF to a space: and you should be able to read the letter in that. octave:1> img2cd('logo.gif') n128sec = 1 error: invalid row index = 83 error: invalid column index = 142 error: evaluating binary operator `<' near line 176, column 25 error: if: error evaluating conditional expression error: evaluating if command near line 176, column 5 error: evaluating for command near line 174, column 3 error: evaluating for command near line 156, column 1 error: called from `img2cd' in file `/home/pabs/img2cd.m' Any idea how to fix this? octave:1> img = imread('sohv.jpg') octave:2> img2cd(img) Regards JB Talking about a C/C++ ver, this code is easy to read and understand to a c++ programer(noob or pro!). It would be easy to convert it.(open in free SCITE to read it first!) C++ Code simplified (I am a noob at C!) < Start > /* Simple C++ code convertion for matlab code */ /* This is the base of the C code you would need. You would have to xor the data. Note: Looked for C++ BITXOR/XOR function but could not find any! "this sequence of bytes is xored with the bytes of sector to "scramble" the data" --matlab code Then setup the calabration for the type or brand of CD used. Get the center radius and other things. The comments were not clear in the program! There was picture adjustment code to make the picture as it was on the computer screen. "i came up with these fudge factors to attempt to straighten my picture. they work OK for my CD-Rs.they probably won't work for yours" --matlab code Then write the image to 1/0 bits as a file. The matlab code tried to write at 128 sectors( A second?) >>>>> NOTE: Most of the code in the MATLAB is math! The C++ code would be much better to read and less lines! <<<<< */ #include <stdio.h> int main () { FILE * CDfile; /* First lines of the MATLAB code I can't get about the Xor :( */ char code[] = { The Code }; /* State the variable || (Below) open the file on the CD to be writen to. */ CDfile = fopen ( "f:/temp/CDHOLOGRAM.data" , "wb" ); /* Write the data (file) to the CD */ fwrite (code , 1 , sizeof(code) , CDfile ); /* Sorry! I don't know how to write it to the CD (fwrite?). . Close the file (below code). */ fclose (CDfile); return 0; } /* This takes much more time to read, then to load (My website) ! */ < End > I hope that C++ pro will take this on! error: syntax error before '=' token img2cd.c:7:1: warning: "RAND_MAX" redefined In file included from img2cd.c:3: stdlib.h:29:1: warning: this is the location of the previous definition img2cd.c: In function `main': img2cd.c:53: error: `PICTURE_X' undeclared (first use in this function) error: `PICTURE_Y' undeclared (first use in this function) error: `f1tof2Frame' undeclared (first use in this function) GOD! At last... Have I to put something after {(at the end)? static unsigned char picture[PIC_HEIGHT][PIC_WIDTH] = { Don't kill me XD A lot of CD/DVD burners nowadays have the ability. mechanically, at amasci.com or google "hand-drawn holograms", and also some instructables on it. If you can do the math AND burn the curves with this instructable's way, you can Make all the holograms you want. Suggestions: Holographic and/or digital sundial CD! or, CD clock with 3D numbers and hands, using a quartz movement. WIth c being enough points between 0 and 2 x pi, and radius r x= r cos(c) ; y=r sin(c) makes circular fresnel curves, and r determines the depth or projection distance of the image. This then has to be remapped onto the CD using similar math, converting voxel coordinates to pits bit and byte addresses. A second conversion is needed to try to find the tracks and bytes of every hologram pit burnt on the surface of the disc relative to the center, using polar coordinates instead of x,y. "for" all x and y that x2+y2=r2 is the pythagorean theorem way to do the same thing... both maths are useful for triangles and circles also. I hope you know a good geometry teacher if you need help. Don't fear the math, it just does what you can easily do by hand with a coaster and a double pointed circle making compass, when you make "hand drawn holograms". And if you can doodle a video game on graph paper, that's related too, to where your pixels are when you're laser burning your own hologram coaster. ??? Attempted to access img(83,142); index out of bounds because size(img)=(1,9). Error in ==> img2cd at 176 if img(ny(k), nx(k))<128
http://www.instructables.com/id/Burning-visible-images-onto-CD-Rs-with-data-beta/CX43DSNF4AQO8BG
CC-MAIN-2015-22
refinedweb
1,317
71.65
API: 090: Proposed changes in LayoutManager3D and Container3D Hi LG3D developers, In order to implement some SceneManager feature, I'm considering the following API changes: <br /> public class Container3D extends Component3D {<br /> ...<br /> public Object removeChild(int index);<br /> ...<br /> }<br /> And accordingly, <br /> public interface LayoutManager3D {<br /> ...<br /> public Object removeLayoutComponent(Component3D comp);<br /> ...<br /> }<br /> Both methods' return type used to be "void". Now they return the "Constraints" object that represents the position of the removed comp3d in the container (i.e. if you call addChild() with the removed comp3d and the returned constraints, the comp3d is inserted back to the layout where it was). Would it be OK with you folks? A trickiness in this change is that it will break all the LayoutManager3D implementations. However, for those layouts that is not used by SceneManager internally, removeLayoutComponent() doesn't need to return a right value (i.e. it can just return "null"). So, when I change the API, I'll update the removeLayoutComponent() method of all the classes that implements LayoutManager3D to return "null" (including all the incubator apps) so that the CVS gets compiled successfully. For ordinally LG3D apps, no further change shouldn't be required at this moment (I'll make further announcement if it becomes mandatory in future). Please let me know your thoughts by Aug. 7th (Mon). Thanks in advance, hideya Hi Hideya, In BgManager I don;t usu right now removeChild() but in layout a litlle bit ;), for me everything sounds great. I feel drag'n'drop mechanism here a litlle. Radek
https://www.java.net/node/657906
CC-MAIN-2015-18
refinedweb
260
55.84
Paul DiLascia is a freelance software consultant specializing in training and software development in C++ and Windows. He is the author of Windows++: Writing Reusable Code in C++ (Addison-Wesley, 1992). QCould you please comment on a discussion that we've been having in our development department about passing character strings into and out of C++ class objects? One side argues that they should always be passed as LPCSTRs, another argues for using the CString class. I've enclosed a much-simplified class definition (see Figure 1) to illustrate the different methods considered. In reality our classes have a number of CString data members and, rather than having separate functions to set or get a particular item, we use the one function with an ID number to indicate which string we want to access. Ian CleggEngland AAt first I thought I knew the answer to this question off the top of my head-after all, it seems like such a simple, innocent question-but then I began to wonder. Many hours and countless brain cells later, I found myself sucked deeper and deeper into C++, MFC internals, and yucky assembly language mucky-muck. In the end, it turned out I was right, but only through brute force was I able to prove it. After such a grueling ordeal, I quickly realized that the only way I could reward myself, the only possible pleasure to be gained from having endured it, was to inflict the same punishment on my readers. In fact, since CStrings are so important and ubiquitous in MFC, a familiar little class that programmers use every day, I decided to make this month's column a sort of mini-treatise on CStrings-especially since they've changed considerably as of release 4.0. I know it sounds boring but I guarantee you'll be surprised to learn all the things that go on while you're not looking. First off, if you're not using CStrings, shame on you! Come on guys, the year 2000 is almost upon us! No more character arrays, strcpy, strdup, and all that rot. CStrings are easy, lightweight, overwrite-proof, and provide useful functions for manipulating strings. There's even a Format function that works like printf! Not to mention that CStrings go from English to International (ASCII to Unicode) with the flip of a compiler switch. There's just no excuse for writing char[256] any more unless you're dealing with legacy code written in COBOL. Now that I got that off my chest, let's do something about Ian's code. First, it's always better to instantiate CString objects directly as inline class members or on the stack instead of allocating them from the heap. A CString is very small; it contains only one member (m_pchData, a pointer to the actual character data) so a CString is only four bytes, the same as an int. It makes no sense to allocate CStrings individually unless you have some truly bizarre situation at hand. In general, you should think of CString as a primitive type like int or long or double. You wouldn't allocate space to store one int, would you? So the first thing you should do is make m_str an actual CString instead of a pointer to one. class MyClass { CString m_str; // not a pointer . . . }; This entails no extra storage overhead. On the contrary, it uses half the space the previous design used and results in less memory fragmentation. It also simplifies your code greatly because you can get rid of all the new/delete stuff and checks for NULL. CString already contains a lot of code to do all that checking for you. Use it. The second thing you should do is declare your Set/GetCString functions with const CString& (reference to const CString) instead of just plain CString. When you pass an object by value, C++ must make a copy of it on the stack, which requires a function call to the copy constructor CString::CString(const CString&). If you use a reference, C++ just pushes a pointer and there's no copy constructor call. When you use a reference, you need const to tell the compiler that your Set function doesn't modify its argument or, in the case of Get, that the CString returned may not be modified. In general, you can use const Foo& as a way to pass Foo objects more efficiently-as if they were values-provided you don't modify them. Figure 2 shows my modified version of MyClass with const CString& declarations and m_str converted to CString. Now, let's explore the original question: should you declare Get/Set functions with LPCSTR or const CString&? If you take my initial advice to always use CString and never use LPCSTR, then this question never arises. However, LPCSTR is sometimes necessary, and the magic of C++ lets you use LPCSTR interchangeably with CString. Say you have a function, like SetLPCSTR, that expects LPCSTR but you call it with a CString instead. MyClass myobj; CString cs; myobj.SetLPCSTR(cs); // type mismatch? Superficially it looks like a type mismatch, but this code compiles because CString has a conversion operator, CString::operator LPCSTR, that converts the CString to an LPCSTR. All the compiler needs to know is that there's this member function called operator LPCSTR (operator const char*) that returns LPCSTR. The compiler generates code like this: . . . CString cs; myobj.SetLPCSTR(cs.operator LPCSTR()); // OK, types // agree This looks funny because there's a space in the function name, but that's just syntax. Internally, operator LPCSTR is just another member function that returns LPCSTR. SetLPCSTR gets LPCSTR, which is what it expects. What about going the other way? What if you have a Set function that expects const CString& and you try to give it an LPCSTR? LPCSTR lp; myobj.SetCString(lp); // type mismatch? This is a little more tricky. One of the functions defined for CString is CString::CString(LPCSTR), a constructor that creates a CString from an LPCSTR. The compiler notices this and says, "Duh, I can make this compile if I create a temporary variable." LPCSTR lp; CString temp(lp); // create temp myobj.SetCString(temp); // OK, args match Once again, the types match: SetCString gets a CString, which is what it expects. There are two other things I must point out here. First, hidden behind the scenes is a call to the destructor CString::~CString as temp goes out of scope. Second, the temp solution only works if the argument to SetCString is declared either CString or const CString&. If SetCString is declared to take CString& (a non-const reference), the compiler can't use the temp trick. For all it knows, SetCString might modify temp, and there's no way to propagate the change back to lp. However you declare your arguments-CString or LPCSTR-you can still pass the other kind of argument in your code. Which is better? I'm getting there, I promise. So far, I've only showed you what happens for converting function arguments. As you'd expect, the compiler works the same magic on return values. You can write LPCSTR lp; CString cs; lp = myobj.GetCString(); // type mismatch? cs = myobj.GetLPCSTR(); // type mismatch? and C++ works its gris-gris to make your code compile. In the first case, C++ converts the return value from const CString& to LPCSTR by invoking the conversion operator CString::operator LPCSTR. In the second case, the conversion is actually an assignment: C++ invokes CString::operator=(LPCSTR). In all, there are eight cases to consider: four cases for Set and four cases for Get, depending on the type declared versus the type passed or assigned. In addition to the hidden conversions for arguments and return values, you also have to consider what happens inside your Set/Get functions. For example, if you write void MyClass::SetLPCSTR(LPCSTR lpsz) { m_str = lpsz; } the innocent-looking assignment statement actually compiles into a call to CString::operator=(LPCSTR). Likewise, you have to consider what happens for SetCString, GetLPCSTR, and GetCString. Things are really getting out of hand here! In an effort to get a handle on all this madness, I wrote a program, STRTEST.CPP (see Figure 2), that illustrates exactly what happens in each situation. STRTEST contains the improved MyClass with Set/Get functions for CString and LPCSTR and a main function that exercises each of the eight cases I mentioned. It also contains a stripped-down version of CString, with only the functions declared that are relevant to the discussion at hand. All functions are left outline (as opposed to inline) so you can see where the compiler generates function calls. The idea is to compile STRTEST and look at the assembly code generated in the hopes of understanding what's really going on behind the veil of the compiler. This is the brute force investigative technique I mentioned at the outset. It's disgusting to look at, I know, but it's also amusing. Figure 3 shows the abridged assembly output for the main function, with my running commentary. You'd think by now I would just come out and tell you the answer, but I've only described the type conversions generically. The next thing you have to do is look inside CString to see what all these operators and constructors actually do. Fortunately, this is a little more interesting. Consider the conversion operator for LPCSTR. I mentioned earlier that CString contains just one member, m_pchData, a char* that points to the actual character data, such as "Hello, world". Knowing this, you can probably guess how CString:: operator LPCSTR is implemented. // (from afx.inl) inline CString::operator LPCTSTR() const { return m_pchData; // just return ptr to string } Just like a typical Get function, all it does is return a data member. Since it's inline, converting a CString to LPCSTR is very fast. If you write SetLPCSTR(cs); // cs is a CString it gets compiled exactly as if you'd written SetLPCSTR(cs.m_pchData); which you can't do because m_pchData is protected. What about the other operators? Well, when I told you about m_pchData, I didn't tell you everything. It's true that m_pchData points to the underlying character string, but hidden behind the string is a little struct. struct CStringData { long nRefs; // reference count int nDataLength; // length of string int nAllocLength; // length of buffer allocated }; Figure 4 illustrates the situation. When CString allocates space for a new string, it adds a few extra bytes to store this header. CStringData contains vital information about the string. For example, CString::GetStringLength is implemented like this: // (from afx.inl) inline int CString::GetLength() const { return GetData()->nDataLength; } GetData is another inline function: inline CStringData* CString::GetData() const { ASSERT(m_pchData != NULL); return ((CStringData*)m_pchData)-1; } Figure 4 Anatomy of a CString Why did the implementers of MFC put the CStringData information as a hidden block preceding the character data instead of storing it as class members in CString, which would be the obvious thing to do? Because it makes CStrings small and fast. Consider what happens when you copy a CString in either a copy constructor or an assignment from CString to CString. If all the information is stored in the CString, as it was before MFC 4.0, you'd have to copy it along with m_pchData, so there would be more things to copy. Plus, you can't just copy the value of m_pchData, you have to allocate a new buffer and copy the contents with a function like strcpy or memcpy. Starting in release 4.0, MFC uses a different technique called "copy on modify" to copy CStrings. Commercial string libraries have long used this technique; MFC finally caught up. The basic idea is to copy only the pointer at first, and not actually copy the bytes until it becomes necessary. Figure 5 shows how it works. Figure 5 CString Copy on Modify in Action! Say you have a CString, cstr1, with a ref count of 1. Then suppose you make a copy of it. cstr2 = cstr1; Instead of copying all the string information and character bytes, the assignment operator copies the pointer m_pchData and increments CStringData::nRefs. Now cstr1 and cstr2 actually point to the same object in memory, but nRefs is 2 instead of 1. This makes two CStrings, but just one byte array. What happens if the program subsequently alters either cstr1 or cstr2? No problem. Before modifying any CString, MFC checks the ref count. If it's greater than 1, some other CString is pointing to this same m_pchData so MFC can't change it. Instead, MFC allocates a new m_pchData with its own CStringData and copies the bytes. MFC decrements the ref count in the original object and sets the new ref count to 1. A similar thing happens when a CString is destroyed; only when the ref count drops to zero does MFC actually deallocate m_pchData. You can see that this strategy only works because the information about the string-CStringData-is kept with the string itself and CStrings are just pointers to these data/string objects. Figure 6 summarizes what all the relevant CString functions and operators do with regard to copying. Figure 6 CString Functions and Operators CString function/operator Costly? What it does CString::CString(const CString& cs) No Quick copy. Copy value of m_pchData and increment CStringData:::nRefs. CString::CString(LPCSTR lp) Yes Always allocate a new character array and CStringData. Copy bytes from lp. CString::~CString() Deallocate string only if -nRefs <= 0; that is, if this is the only CString using this particular m_pchData. operator LPCSTR() const Inline function just returns m_pchData. No function call. const CString& operator=(const CString& cs) Similar to copy constructor. Copy value of m_pchData and increment nRefs. const CString& operator=(LPCSTR lp); Similar to LPCSTR constructor. Always allocate a new character array and CStringData. Copy bytes from lp. The whole point is that copying CStrings is now very fast since you just copy one pointer. You can pass CStrings around by value without paying a price. A typical application might have many functions with arguments declared CString, and you might pass the same CString by value from function A to function B to function C. Each call requires creating a copy of the CString on the stack. Before MFC 4.0, this would allocate and copy a new string every time! Copy on modify fixes this situation so only m_pchData is copied. As soon as one of the functions or some other part of the code attempts to modify the underlying string, MFC makes a new copy. Remember, this only applies when you pass CStrings by value. If you use const references (const CString&), C++ passes a pointer to the actual CString and doesn't even call the copy constructor. Finally, I'm in a position to answer the question! I could have made you wade through the assembler code, but I have some sympathy and did the dirty work myself. I compiled the results in two tables that summarize what happens in the eight different cases in the main function of STRTEST.CPP (see Figures 7 and 8). And the winner is CString! If you want to maximize performance, you should declare your Set/Get functions using CString, not LPCSTR. class MyClass { CString m_foo; public: void SetFoo(const CString& cs) { m_str = cs; } const CString& GetFoo(); { return m_str; } }; Why? Well, it should be obvious from Figure 8 that CString is the way to go for the Get function. Case 6, where you return LPCSTR and then assign it to a CString, is the one to avoid because it always does an allocation when the CString is assigned to an LPCSTR. The Set function is a little more subtle. At first glance, it seems like case 3 is really bad because it not only creates a temp variable, but the temp variable must be destroyed as well. When you look at it again, you realize that the m_pchData created for temp is immediately copied to m_str inside the Set function and has its ref count bumped up. But, when temp is subsequently destroyed, nothing happens because m_str is now using the same m_pchData that was originally created for temp! In other words, the underlying string is allocated only once and then handed to m_str, where it resides until m_str is destroyed. This is essentially the same overhead as case 4, only the allocation happens inside the Set function in operator=(LPCSTR). Having a Set function that takes LPCSTR doesn't really buy you anything. (There are a few extra pushes and pops associated with creating the temp variable, but that's negligible.) The moral of the story is, use const CString& in all your declarations. This makes sense; m_str is already a CString so why convert it? LPCSTRs will have to be converted one way or another, so let the compiler do it when necessary. If you convert the m_str to LPCSTR in your Set/Get functions, you'll only have to convert back again in the case where you have a CString. Phew Have a question about programming in C or C++? Send it to Paul DiLascia at 72400.2702@compuserve.com
http://www.microsoft.com/msj/archive/S1F0A.aspx
CC-MAIN-2016-36
refinedweb
2,881
64.3
Given that it is a common stimulus in visual science, I was wondering whether someone has already the code to generate such a stimulus. The code below does it. Note that, for the code below, the radius below is the confusing bit. Here’s an explanation: It’s units are “fractions of the stimulus” in this case and the radius is specified to 3xSigma (for the gaussian). SO if you have a stimulus width of 3 deg (so radius of 1.5) then setting the filter radius to be 0.1 means that the radius was actually 0.1*3/2 = 0.15 deg = 3xSigma. So if you want to quote the gaussian filter in terms of its sigma (common practice) it would be radiusXstimSize/2/3 = 0.05 deg in this case. Here’s the code to create it: from psychopy import filters import numpy as np from psychopy import visual, event def makeFilteredNoise(res, radius, shape='gauss'): noise = np.random.random([res, res]) kernel = filters.makeMask(res, shape=shape, radius=radius) filteredNoise = filters.conv2d(kernel, noise) filteredNoise = (filteredNoise-filteredNoise.min())/(filteredNoise.max()-filteredNoise.min())*2-1 return filteredNoise filteredNoise = makeFilteredNoise(256, 0.1) win = visual.Window([400,400], monitor='testMonitor') stim = visual.ImageStim(win, image = filteredNoise, mask=None) stim.draw() win.flip() event.waitKeys() Thanks for the code. This is Gaussian smoothing, isn’t it? I ll try to modify it to obtain a bandpass. Oh sorry yes I misread your request. Bandpass filtering needs to be done in the fourier domain so a bit different. @jon already gave a nice answer for generating static images. If you want a moving image or the possibility to control bandwidth in the orientation domain, you may be interested in the code : This easily generates images or movies and has been already suite extensively used with psychopy. As @jon mentioned, this is al done in the Fourier space. cheers, Laurent Just stumbled onto this code. Thanks Jon! Mysteriously, I can’t seem to get this to work with a units=“pix” window. The numpy array looks reasonable, but drawing an ImageStim where image is set to this seems to draw nothing. Hey there, can I clarify what radius actually does in this makeMask function? (The radius of the noise blobs maybe?). The source code comments aren’t really very intuitive to my simple mind. I was playing around with this value and can’t be certain what this radius is actually doing just by looking at it visually, apart from knowing that increasing it makes the noise blobs coarser. The noise stimulus class can make various kinds of visual noise including bandpass by two different routes.
https://discourse.psychopy.org/t/creating-bandpass-gaussian-white-noise/2125
CC-MAIN-2021-39
refinedweb
445
59.6
RE: Create an .exe at runtime with .NET 2.0 - From: andersch <andersch@xxxxxxxxxxxxxxxxx> - Date: Tue, 13 Feb 2007 15:17:01 -0800 Hi Linda Thank you for your answer. Can you show me a simple code example please? I've searched without sucess for an example how I can compile a new executable with the encrypted file (resource file) and the already compiled winform (.exe). Thanks and Regards, andersch "Linda Liu [MSFT]" wrote: Hi Andersch,. Based on my understanding, you'd like to create an executable file at runtime with .NET 2.0, which in turn lauches an existing simple winform application to decrypt an encrypted file. If I'm off base, please feel free to let me know. The .NET Framework includes a mechanism called the Code Document Object Model (CodeDOM) that enables developers of programs that emit source code to generate source code in multiple programming languages at run time, based on a single model that represents the code to render. The System.CodeDom namespace defines types that can represent the logical structure of source code, independent of a specific programming language. The System.CodeDom.Compiler namespace defines types for generating source code from CodeDOM graphs and managing the compilation of source code in supported languages. FYI, The .NET Framework includes code generators and code compilers for C#, JScript, and Visual Basic. For more information on CodeDOM and how to use it, you may visit the following link. 'Dynamic Source Code Generation and Compilation' Hope this helps. If you have any question,: Create an .exe at runtime with .NET 2.0 - From: "Jeffrey Tan[MSFT]" - RE: Create an .exe at runtime with .NET 2.0 - From: Linda Liu [MSFT] - References: - RE: Create an .exe at runtime with .NET 2.0 - From: Linda Liu [MSFT] - Prev by Date: security using windowsprincipal class - Next by Date: .net resource files question... - Previous by thread: RE: Create an .exe at runtime with .NET 2.0 - Next by thread: RE: Create an .exe at runtime with .NET 2.0 - Index(es):
http://www.tech-archive.net/Archive/DotNet/microsoft.public.dotnet.general/2007-02/msg00367.html
crawl-002
refinedweb
339
67.86
A Hierarchical model for Rugby prediction¶ - @Author: Peadar Coyle - @date: 31/12/15 updated 29/12/2017 I came across the following blog post on Daniel Weitzenfeld’s blog, based on the work of Baio and Blangiardo. In this example, we’re going to reproduce the first model described in the paper using PyMC3. Since I am a rugby fan I decide to apply the results of the paper to the Six Nations Championship, which is a competition between Italy, Ireland, Scotland, England, France and Wales. Motivation¶ Your estimate of the strength of a team depends on your estimates of the other strengths Ireland are a stronger team than Italy for example - but by how much? Source for Results 2014 are Wikipedia. I’ve added the subsequent years, 2015, 2016, 2017. Manually pulled from Wikipedia. - We want to infer a latent parameter - that is the ‘strength’ of a team based only on their scoring intensity, and all we have are their scores and results, we can’t accurately measure the ‘strength’ of a team. - Probabilistic Programming is a brilliant paradigm for modeling these latent parameters - Aim is to build a model for the upcoming Six Nations in 2018. In [1]: !date import numpy as np import pandas as pd try: from StringIO import StringIO except ImportError: from io import StringIO import pymc3 as pm, theano.tensor as tt import matplotlib.pyplot as plt from matplotlib.ticker import StrMethodFormatter import seaborn as sns %matplotlib inline Sat Jan 13 16:41:34 UTC 2018 /opt/cond This is a Rugby prediction exercise. So we’ll input some data. We’ve taken this from Wikipedia and BBC sports. In [2]: try: df_all = pd.read_csv('../data/rugby.csv') except: df_all = pd.read_csv(pm.get_data('rugby.csv')) What do we want to infer?¶ -. - Often we don’t know what the Bayesian Model is explicitly, so we have to ‘estimate’ the Bayesian Model’ - If we can’t solve something, approximate it. - Markov-Chain Monte Carlo (MCMC) instead draws samples from the posterior. - Fortunately, this algorithm can be applied to almost any model. What do we want?¶ - We want to quantify our uncertainty - We want to also use this to generate a model - We want the answers as distributions not point estimates Visualization/EDA¶ We should do some some exploratory data analysis of this dataset. The plots should be fairly self-explantory, we’ll look at things like difference between teams in terms of their scores. In [3]: df_all.describe() Out[3]: In [4]: # Let's look at the tail end of this dataframe df_all.tail() Out[4]: There are a few things here that we don’t need. We don’t need the year for our model. But that is something that could improve a future model. Firstly let us look at differences in scores by year. In [5]: df_all['difference'] = np.abs(df_all['home_score']-df_all['away_score']) In [6]: (df_all.groupby('year')['difference'] .mean() .plot(kind='bar', title='Average magnitude of scores difference Six Nations', yerr=df_all.groupby('year')['difference'].std()) .set_ylabel('Average (abs) point difference')); We can see that the standard error is large. So we can’t say anything about the differences. Let’s look country by country. In [7]: df_all['difference_non_abs']=df_all['home_score']-df_all['away_score'] Let us first loook at a Pivot table with a sum of this, broken down by year. In [8]: df_all.pivot_table('difference_non_abs', 'home_team', 'year') Out[8]: Now let’s first plot this by home team without year. In [9]: (df_all.pivot_table('difference_non_abs', 'home_team') .rename_axis("Home_Team") .plot(kind='bar', rot=0, legend=False) .set_ylabel('Score difference Home team and away team') ); You can see that Italy and Scotland have negative scores on average. You can also see that England, Ireland and Wales have been the strongest teams lately at home. In [10]: (df_all.pivot_table('difference_non_abs', 'away_team') .rename_axis("Away_Team") .plot(kind='bar', rot=0, legend=False) .set_ylabel('Score difference Home team and away team') ); This indicates that Italy, Scotland and France all have poor away from home form. England suffers the least when playing away from home. This aggregate view doesn’t take into account the strength of the teams. Let us look a bit more at a timeseries plot of the average of the score difference over the year. We see some changes in team behaviour, and we also see that Italy is a poor team. In [11]: g = sns.FacetGrid(df_all, col="home_team", col_wrap=2, size=6) g = g.map(plt.plot, "year", "difference_non_abs", marker=".").set_axis_labels("Year", "Score Difference") In [12]: g = sns.FacetGrid(df_all, col="away_team", col_wrap=2, size=6) g = g.map(plt.plot, "year", "difference_non_abs", marker=".").set_axis_labels("Year", "Score Difference") You can see some interesting things here like Wales were good away from home in 2015. In that year they won three games away from home and won by 40 points or so away from home to Italy. So now we’ve got a feel for the data, we can proceed on with describing the model. What assumptions do we know for our ‘generative story’?¶ - We know that the Six Nations in Rugby only has 6 teams - they each play each other once - We have data from the last few years - We also know that in sports scoring is modelled as a Poisson distribution - We consider home advantage to be a strong effect in sports The model.¶: - The parameter home represents the advantage for the team hosting the game and we assume that this effect is constant for all the teams and throughout the season - The scoring intensity is determined jointly by the attack and defense ability of the two teams involved, represented by the parameters att and def, respectively - Conversely, for each t = 1, …, T, the team-specific effects are modelled as exchangeable from a common distribution: - \(att_{t} \; \tilde\;\; Normal(\mu_{att},\tau_{att})\) and \(def_{t} \; \tilde\;\;Normal(\mu_{def},\tau_{def})\) We restrict to only the useful columns for this model. In [13]: df = df_all[['home_team', 'away_team', 'home_score', 'away_score']] In [14]: teams = df.home_team.unique() teams = pd.DataFrame(teams, columns=['team']) teams['i'] = teams.index) observed_home_goals = df.home_score.values observed_away_goals = df.away_score.values home_team = df.i_home.values away_team = df.i_away.values num_teams = len(df.i_home.drop_duplicates()) num_games = len(home_team) g = df.groupby('i_away') att_starting_points = np.log(g.away_score.mean()) g = df.groupby('i_home') def_starting_points = -np.log(g.away_score.mean()) - We did some munging above and adjustments of the data to make it tidier for our model. - The log function to away scores and home scores is a standard trick in the sports analytics literature Building of the model¶ - We now build the model in PyMC3, specifying the global parameters, and the team-specific parameters and the likelihood function In [15]: with pm.Model() as model: # global model parameters home = pm.Flat('home') sd_att = pm.HalfStudentT('sd_att', nu=3, sd=2.5) sd_def = pm.HalfStudentT('sd_def', nu=3, sd=2.5) intercept = pm.Flat('intercept') # team-specific model parameters atts_star = pm.Normal("atts_star", mu=0, sd=sd_att, shape=num_teams) defs_star = pm.Normal("defs_star", mu=0, sd=sd_def, shape=num_teams) atts = pm.Deterministic('atts', atts_star - tt.mean(atts_star)) defs = pm.Deterministic('defs', defs_star - tt.mean(defs_star)) home_theta = tt.exp(intercept + home + atts[home_team] + defs[away_team]) away_theta = tt.exp(intercept + atts[away_team] + defs[home_team]) # likelihood of observed data home_points = pm.Poisson('home_points', mu=home_theta, observed=observed_home_goals) away_points = pm.Poisson('away_points', mu=away_theta, observed=observed_away_goals) - We specified the model and the likelihood function - All this runs on a Theano graph under the hood In [16]: with model: trace = pm.sample(1000, tune=1000, cores=3) pm.traceplot(trace) Auto-assigning NUTS sampler... Initializing NUTS using jitter+adapt_diag... /opt/conda/lib/python3.6/site-packages/pymc3/model.py:384: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`. if not np.issubdtype(var.dtype, float): Multiprocess sampling (3 chains in 3 jobs) NUTS: [defs_star, atts_star, intercept, sd_def_log__, sd_att_log__, home] 100%|██████████| 2000/2000 [00:29<00:00, 68.07it/s] Let us apply good statistical workflow practices and look at the various evaluation metrics to see if our NUTS sampler converged. In [17]: bfmi = pm.bfmi(trace) max_gr = max(np.max(gr_stats) for gr_stats in pm.gelman_rubin(trace).values()) In [18]: (pm.energyplot(trace, legend=False, figsize=(6, 4)) .set_title("BFMI = {}\nGelman-Rubin = {}".format(bfmi, max_gr))); Our model has converged well and the Gelman-Rubin statistic looks good. Let us look at some of the stats, just to verify that our model has returned the correct attributes. We can see that some teams are stronger than others. This is what we would expect with attack In [19]: pm.stats.hpd(trace['atts']) Out[19]: array([[ 0.08968587, 0.24789199], [-0.17169411, 0.00788836], [ 0.02725114, 0.18826695], [-0.20606871, -0.02168828], [-0.44769868, -0.22706935], [ 0.1813111 , 0.34262147]]) In [20]: pm.stats.quantiles(trace['atts'])[50] Out[20]: array([ 0.17179028, -0.08393033, 0.10807495, -0.115689 , -0.33455323, 0.25632191]) Results¶ From the above we can start to understand the different distributions of attacking strength and defensive strength. These are probabilistic estimates and help us better understand the uncertainty in sports analytics In [21]: df_hpd = pd.DataFrame(pm.stats.hpd(trace['atts']), columns=['hpd_low', 'hpd_high'], index=teams.team.values) df_median = pd.DataFrame(pm.stats.quantiles(trace['atts'])_values) This is one of the powerful things about Bayesian modelling, we can have uncertainty quantification of some of our estimates. We’ve got a Bayesian credible interval for the attack strength of different countries. We can see an overlap between Ireland, Wales and England which is what you’d expect since these teams have won in recent years. Italy is well behind everyone else - which is what we’d expect and there’s an overlap between Scotland and France which seems about right. There are probably some effects we’d like to add in here, like weighting more recent results more strongly. However that’d be a much more complicated model. In [22]: labels = teams.team.values pm.forestplot(trace, varnames=['atts'], ylabels=labels, main="Team Offense") Out[22]: <matplotlib.gridspec.GridSpec at 0x7fbac5cefb70> In [23]: pm.forestplot(trace, varnames=['defs'], ylabels=labels, main="Team Defense") Out[23]: <matplotlib.gridspec.GridSpec at 0x7fbad0b11908> Good teams like Ireland and England have a strong negative effect defense. Which is what we expect. We expect our strong teams to have strong positive effects in attack and strong negative effects in defense. This approach that we’re using of looking at parameters and examining them is part of a good statistical workflow. We also think that perhaps our priors could be better specified. However this is beyond the scope of this article. We recommend for a good discussion of ‘statistical workflow’ you visit Robust Statistical Workflow with RStan Let’s do some other plots. So we can see our range for our defensive effect. I’ll print the teams below too just for reference In [24]: teams Out[24]: In [25]: pm.plot_posterior(trace, varnames=['defs']); We know Ireland is defs_2 so let’s talk about that one. We can see that it’s mean is -0.39 which means we expect Ireland to have a strong defense. Which is what we’d expect, Ireland generally even in games it loses doesn’t lose by say 50 points. And we can see that the 95% HPD is between -0.491, and -0.28 In comparison with Italy, we see a strong positive effect 0.58 mean and a HPD of 0.51 and 0.65. This means that we’d expect Italy to concede a lot of points, compared to what it scores. Given that Italy often loses by 30 - 60 points, this seems correct. We see here also that this informs what other priors we could bring into this. We could bring some sort of world ranking as a prior. As of December 2017 the rugby rankings indicate that England is 2nd in the world, Ireland 3rd, Scotland 5th, Wales 7th, France 9th and Italy 14th. We could bring that into a model and it can explain some of the fact that Italy is apart from a lot of the other teams. Now let’s simulate who wins over 1000 seasons. In [26]: with model: pp_trace = pm.sample_posterior_predictive(trace) 100%|██████████| 1000/1000 [00:01<00:00, 823.05it/s] In [28]: home_sim_df = pd.DataFrame({ 'sim_points_{}'.format(i): 3 * home_won for i, home_won in enumerate(pp_trace['home_points'] > pp_trace['away_points']) }) home_sim_df.insert(0, 'team', df['home_team']) away_sim_df = pd.DataFrame({ 'sim_points_{}'.format(i): 3 * away_won for i, away_won in enumerate(pp_trace['home_points'] < pp_trace['away_points']) }) away_sim_df.insert(0, 'team', df['away_team']) In [30]: sim_table = (home_sim_df.groupby('team') .sum() .add(away_sim_df.groupby('team') .sum()) .rank(ascending=False, method='min', axis=0) .reset_index() .melt(id_vars='team', value_name='rank') .groupby('team') ['rank'] .value_counts() .unstack(level='rank') .fillna(0) .div(1000)) In [31]: sim_table Out[31]: In [32]: ax = sim_table.loc[:, 1.0].plot(kind='barh') ax.xaxis.set_major_formatter(StrMethodFormatter('{x:.1%}')); ax.set_xlabel("Probability of finishing with the most points\n(including ties)"); ax.set_ylabel("Team"); We see according to this model that Ireland finishes with the most points about 60% of the time, and England finishes with the most points 45% of the time and Wales finishes with the most points about 10% of the time. (Note that these probabilities do not sum to 100% since there is a non-zero chance of a tie atop the table.) As an Irish rugby fan - I like this model. However it indicates some problems with shrinkage, and bias. Since recent form suggests England will win. Nevertheless the point of this model was to illustrate how a Hierachical model could be applied to a sports analytics problem, and illustrate the power of PyMC3. Covariates¶ We should do some exploration of the variables In [33]: df_trace = pm.trace_to_dataframe(trace) In [34]: teams.team.values Out[34]: array(['Wales', 'France', 'Ireland', 'Scotland', 'Italy', 'England'], dtype=object) In [35]: import seaborn as sns cols = { 'atts_star__0': 'atts_star_wales', 'atts_star__1': 'atts_star_france', 'atts_star__2': 'atts_star_ireland', 'atts_star__3': 'atts_star_scotland', 'atts_star__4': 'atts_star_italy', 'atts_star__5': 'atts_star_england' } df_trace_att = df_trace[list(cols)].rename(columns=cols) _ = sns.pairplot(df_trace_att) We observe that there isn’t a lot of correlation between these covariates, other than the weaker teams like Italy have a more negative distribution of these variables. Nevertheless this is a good method to get some insight into how the variables are behaving.
https://docs.pymc.io/notebooks/rugby_analytics.html
CC-MAIN-2018-47
refinedweb
2,408
59.9
Python renderer that includes a Pythonic Object based interface Let's take a look at how you use pyobjects in a state file. Here's a quick example that ensures the /tmp directory is in the correct state. Nice and Pythonic! By using the "shebang" syntax to switch to the pyobjects renderer we can now write our state data using an object based interface that should feel at home to python developers. You can import any module and do anything that you'd like (with caution, importing sqlalchemy, django or other large frameworks has not been tested yet). Using the pyobjects renderer is exactly the same as using the built-in Python renderer with the exception that pyobjects provides you with an object based interface for generating state data. Pyobjects takes care of creating an object for each of the available states on the minion. Each state is represented by an object that is the CamelCase version of its name (i.e. File, Service, User, etc), and these objects expose all of their available state functions (i.e. File.managed, Service.running, etc). The name of the state is split based upon underscores ( _), then each part is capitalized and finally the parts are joined back together. Some examples: postgres_userbecomes PostgresUser ssh_known_hostsbecomes SshKnownHosts How about something a little more complex. Here we're going to get into the core of how to use pyobjects to write states. The objects that are returned from each of the magic method calls are setup to be used a Python context managers ( with) and when you use them as such all declarations made within the scope will automatically use the enclosing state as a requisite! The above could have also been written use direct requisite statements as. You can use the direct requisite statement for referencing states that are generated outside of the current file. The last thing that direct requisites provide is the ability to select which of the SaltStack requisites you want to use (require, require_in, watch, watch_in, use & use_in) when using the requisite as a context manager. The above example would cause all declarations inside the scope of the context manager to automatically have their watch_in set to Service("my-service"). To include other states use the include() function. It takes one name per state to include. To extend another state use the extend() function on the name when creating a state. Like any Python project that grows you will likely reach a point where you want to create reusability in your state tree and share objects between state files, Map Data (described below) is a perfect example of this. To facilitate this Python's import statement has been augmented to allow for a special case when working with a Salt state tree. If you specify a Salt url ( salt://...) as the target for importing from then the pyobjects renderer will take care of fetching the file for you, parsing it with all of the pyobjects features available and then place the requested objects in the global scope of the template being rendered. This works for all types of import statements; import X, from X import Y, and from X import Y as Z. See the Map Data section for a more practical use. Caveats: In the spirit of the object interface for creating state data pyobjects also provides a simple object interface to the __salt__ object. A function named salt exists in scope for your sls files and will dispatch its attributes to the __salt__ dictionary. The following lines are functionally equivalent: Pyobjects provides shortcut functions for calling pillar.get, grains.get, mine.get & config.get on the __salt__ object. This helps maintain the readability of your state files. Each type of data can be access by a function of the same name: pillar(), grains(), mine() and config(). The following pairs of lines are functionally equivalent: When building complex states or formulas you often need a way of building up a map of data based on grain data. The most common use of this is tracking the package and service name differences between distributions. To build map data using pyobjects we provide a class named Map that you use to build your own classes with inner classes for each set of values for the different grain matches. Note By default, the os_family grain will be used as the target for matching. This can be overridden by specifying a __grain__ attribute. If a __match__ attribute is defined for a given class, then that value will be matched against the targeted grain, otherwise the class name's value will be be matched. Given the above example, the following is true: os_familyof Debian will be assigned the attributes defined in the Debian class. osgrain of Ubuntu will be assigned the attributes defined in the Ubuntu class. os_familygrain of RedHat will be assigned the attributes defined in the RHEL class. That said, sometimes a minion may match more than one class. For instance, in the above example, Ubuntu minions will match both the Debian and Ubuntu classes, since Ubuntu has an os_family grain of Debian an an os grain of Ubuntu. As of the 2017.7.0 release, the order is dictated by the order of declaration, with classes defined later overriding earlier ones. Additionally, 2017.7.0 adds support for explicitly defining the ordering using an optional attribute called priority. Given the above example, os_family matches will be processed first, with os matches processed after. This would have the effect of assigning smbd as the service attribute on Ubuntu minions. If the priority item was not defined, or if the order of the items in the priority tuple were reversed, Ubuntu minions would have a service attribute of samba, since os_family matches would have been processed second. To use this new data you can import it into your state file and then access your attributes. To access the data in the map you simply access the attribute name on the base class that is extending Map. Assuming the above Map was in the file samba/map.sls, you could do the following. salt.renderers.pyobjects. PyobjectsModule(name, attrs)¶ This provides a wrapper for bare imports. salt.renderers.pyobjects. load_states()¶ This loads our states into the salt __context__
https://docs.saltstack.com/en/latest/ref/renderers/all/salt.renderers.pyobjects.html
CC-MAIN-2018-43
refinedweb
1,044
61.16
Objectives for version 1.0: settle for a Version Control System (VCS) and Bug Tracking System (BTS). (done) Objectives for version 1.1: finish splitting every command into its own file (done) finish reviewing new completions (done) decide and enforce a new indentation policy (done) Objectives for version 1.2: drop bash < 3.2 support (done) merge bash-completion-lib's test suite (done) remove global variables $bash205, $bash205b, $bash3 & $bash4. They're cluttering everyone's environment and we can just use $BASH_VERSINFO. (done) ditto remove $default, $filenames etc. (done) replace _get_cword() with _get_comp_words_by_ref() (done) Objectives for version 2.0: drop bash < 4.1 support (done) new directory layout (done) strengthen use of GNU Autotools (add to README, fix make distcheck) (done) Load completions dynamically (done) Objectives for version 3.0: - merge bash-completion-lib. make bash-completion 'nounset'-proof: no errors should be reported when running bash-completion with set -u or set -o nounset active. make bash-completion 'failglob'-proof: no errors should be reported when running bash-completion with shopt -s failglob active. make bash-completion 'nullglob'-proof: no errors should be reported when running bash-completion with shopt -s nullglob active. create namespace by prefixing all functions with bashcomp_, _comp_, comp_, _bc_, ...? ...and ditto for all global variables, both internal ones and ones controlling completion features.
https://wiki.debian.org/Teams/BashCompletion/Proposals/Roadmap
CC-MAIN-2020-45
refinedweb
220
51.24
Help:Preferences The - email". -) - A wikilink to an existing page will be in class 'stub' if the page is in the main namespace, it is not a redirect, and the number of bytes of the wikitext is less than the threshold. -. -. MediaWiki:Prefs-textboxsize Here you can set up your preferred dimensions for the textbox used for editing page text. - Columns and Rows - . General". - MediaWiki:Tog-editsection - An edit link will appear "MediaWiki:Vector-view-edit" tab at the top of the page. -. MediaWiki:Prefs-beta -. Labs features - Enable side-by-side preview - Enable step-by-step publishing Translation options - Assistant languages - comma separated list of language codes. Translation of a message in these languages are shown when you are translating. The default list of languages depends on your language. 7 days; the maximum is 91 -. - Omit diff after performing a rollback - only for administrators.
https://wiki.gentoo.org/wiki/Help:Preferences
CC-MAIN-2018-30
refinedweb
145
58.18
Ordinary webservers had until then simply received filenames and transmitted files - rather simple. The NCSA team looked at several alternatives whereby a web page could be generated on the fly, via code, and settled on the quickest and dirtiest solution: A request for a file arrives. If that file is determined to be a CGI, rather than being sent directly to the client, it is instead executed, provided with the data the client collected from the HTML form (in an encoded format), either via it's stdin or an environment variable. That "CGI Program" does whatever it likes with the data - it is in all respects a normal executable of the web server's host operating system - and then produces output on its stdout channel which is, more or less, sent directly back to the client's web browser. The entire gamut of executable types has been exploited to operate as CGIs at one time or other: C and C++ programs, shell scripts, and PERL scripts are among the most common. As idoru mentions, the first thing in that output is formalized to be the mime type of the following data, the accidental omission of which confuses many first time CGI authors. This is an intelligent design decision by the system's designers, as CGI's may generate not just a new web page, but images, audio, video, or any other kind of data which the web browser can handle, on the fly. CGI's and forms represent a turning point of the World Wide Web as a medium. They quickly became ubiquitous, and the core of that mechanism has become something like the heart of the functioning web. As site designers and developers quickly discovered, the mechanism of the CGI itself, which refers specifically to the interface between the standalone executable and the webserver, was ineffecient to use on a large scale, since processing each page request would require a fork and an exec call, churning OS resources to start and then eventually terminate an (often large) executable process over and over again for each hit, filling process tables and swap space, and in general making poor use of the way Unix allocates resources. Many web application designers would address this problem by going on to create custom webservers which would directly perform the specific kind of automation they required; many of the web's most popular sites are the result of this custom ground-up programming. As general purpose webserver designs matured, however, almost all of them drifted towards a "module" system as a kind of middle ground. Thus, the standalone CGI executable was replaced by a library, integrated either statically at compile time or on demand at runtime. These module interfaces typically included a more sophisticated interface to the webserver's various resources, especially shared memory and persistence. A majority of the automation now operating on the internet works via modules - a significant scalability win. Module-driven automation is not technically a CGI at all, since it does not at any time refer to the Common Gateway Interface; however, the term "CGI" has become synonymous with web-based applications, and is often misapplied to refer to them all. It is also worth mentioning the evolution of the CGI-based script into the incarnation it enjoys today - for instance, on this site. Shell scripts in general and PERL scripts in particular suffered from the efficiency problems of CGI invocation; the repetitive invocation perl's often large runtime interpreter to handle each page request was was a scalability nightmare that was bringing many sites to their knees under load conditions. However, PERL, as well as a number of other scripting languages (some, like PHP, were designed more recently, explicitly for creating CGI's) were too cheap and convienient to give up easily. The solution eventually settled on by the industry has been the script interpreter web server module, containing a single persistent interpreter which handles all the transactions. Only one copy (per server process) need be kept in memory, it is initialized only once, at server start, and it can theoretically afford variable continuity between transactions. Apache's mod_perl is an excellent example. The vast majority of web automation written today is written in a scripting language against this or a similar type of system (PHP, ColdFusion, etc). Once you have taken the query apart, and have put all the names and values in their individual strings, you must then go to each string and decode the percent signs (%2A -> hex code 42 -> '*'). Also, in a "comments" system, the comment can contain nasty little suprises (eg. <img src="">). These profane comments are the reason that you must disallow many types of tags. Always beware the evil query. Opening Note: Please message me if anything in this writeup is confusing or unclear. I'm always willing to answer individual questions, and would like to make this writeup accessible to everyone--callow newbies and seasoned veterans alike. PHP was my second programming language--my first was BASIC. Edsger Dijkstra said, "The teaching of BASIC should be rated as a criminal offense: it mutilates the mind beyond recovery." In practice, PHP is a great language for certain tasks, but understanding CGI is not one of them. When I discovered my second true love, Ruby (my first love having left me to become a lesbian), my first thought was to program web applications with it. However, this wasn't as simple as I thought it would be. Up until that point, my sole experience with web programming had been PHP with Apache. Pretty much every shared webhost under the sun has mod_php and Apache installed by default, making it very easy to write web applications with PHP: You create a ".php" file, upload it to your website, and whatever your PHP script prints out appears on the page. It was when I tried to apply these assumptions to Ruby that the house of cards came tumbling, tumbling down. Let's take a remedial class for a moment. What happens when you type "" into your web browser and hit enter? You'll notice that I emphasized #5. This isn't just because I have a distressing fixation on the number five, though I must admit that I sometimes find myself distracted by the sultry dip of its lower curve, and the sharp, almost offensive ninety-degree angle jutting salaciously out of its--oh, dear. Excuse me. So, the number fi-... The item after number four. 90% of the magic of CGI takes place in this step. Let's drill down, fearless spelunkers of knowledge that we are. Apache is the de facto standard for webservers, a veritable colossus of feature-rich flexibility, though it is recently challenged by cheeky upstarts like lighty, and the mad hatters at OKCupid were motivated to code their own webserver entirely. When there are no scripting languages or URL tomfoolery enabled, Apache simply takes a request URI, finds the corresponding file on its filesystem, and sends the contents to the browser along with a few terse headers. But what about my precious PHP? How does it fit in? Suppose Apache receives a request for /generate_erotic_fiction.php (don't judge me!). At this point, mod_php kicks in. mod_php is an Apache module that tells Apache how to handle PHP files. In contrast to mod_cgi, which can "handle" any executable file, mod_php can only "handle" PHP files. So, why use one over the other? Uh-oh. It turns out that mod_php happily hides a lot of stuff that goes on under the hood. This is a blessing because it makes web scripting with PHP easy and straightforward. It is a curse, however, because it promotes an incomplete understanding of how CGI works. Any scripting language that wants to generate meaningful websites needs to have an equally meaningful environment set up before it does its stuff. mod_cgi lets you command Apache to execute files instead of just reading them. Apache gives the program some environment variables to give it some context about who's asking for erotic fiction. It then uses the first few lines of the program's (plain text) output as its headers, and if everything's in order, gives the rest of the output back to the browser. The big difference between mod_cgi and non-CGI solutions like mod_php is that, when using mod_cgi to run your scripts, you need to massage the environment into ease of use yourself, or with libraries provided by your scripting language. If you're familiar with Linux/Unix computers, you'll be familiar with environment variables, which are special values that exist in the invisible ether of your operating system. Environment variables can be anything, from system-specified details like where to look when executing programs, to user-specified nonsense like what your favorite text editor is. When Apache executes a CGI application, it tweaks the environment first, by setting some contextual environment variables. CGI applications can then access information about the web request--like the URL, the IP address of the client, and so on--just by examining some environment variables. Okay, okay. I've covered all of this nonsense about why it's hard to work with CGI when you're not using mod_php. Now, for the six readers still with me, I'll talk about how to begin working with CGI. You'll be able to follow along best if you have access to a Linux/Unix command line: Try getting a VPS at Slicehost, or running an Ubuntu Linux server on an old computer. The fact is, it's a lot harder to learn the inner workings of CGI on a shared host, because you need fine-grained control over your environment. Simply put, a CGI application is any executable program that outputs headers, then content, most often for webservers. So, let's program an extremely minimal example in C. #include <stdio.h> int main(void) { // Tell Apache that we're serving HTML, // since CGI applications can serve anything (except pork chops). printf( "Content-Type: text/html" ); // Tell Apache to get ready for our content. printf( "\n\n" ); // Output content. printf( "Sup guys?" ); // In Unix world, ending by returning 0 means the program finished successfully. return( 0 ); } Options +ExecCGI Okay, that was pretty low-level, and I won't make fun of you if you skipped over it. Let's approach this with a scripting language, like Perl, Python or Ruby. Actually, things are going to work pretty much the same: Just create a script that outputs a header, the delimiter, and some text. Let's say we're writing a Ruby script that ends in .rb. Now, we need a two-line .htaccess file block: Options +ExecCGI AddHandler cgi-script .rb <FilesMatch "^myprogram$"> ForceType application/x-httpd-cgi </FilesMatch> Jeez, this is tedious! Manually outputting headers, and we haven't even gotten to environment variables yet! Do we really have to reinvent the wheel? No, of course not! I just enjoy making people who want to learn suffer. No, no, no. We started at ground level to get a firm foundation in what it means to program CGI. But the truth is, there are thousands of great libraries and frameworks already out there. For example, Ruby: #!/usr/bin/env ruby require 'cgi' cgi = CGI.new( "html3" ) cgi.out do cgi.html do cgi.head{ cgi.title{"TITLE"} } + cgi.body do cgi.div do "Fo' sho." end end end end Still higher level are frameworks like Camping and Ruby on Rails. Parallels exist for every language imaginable. The choice can be suffocating--pick one blindly and just jump in! So, why did I make you sit through all that bunk about Unix environment variables? Here's one reason: We need to understand performance. If a hundred thousand people are hitting your script every hour, mod_php is going to be much faster than a script running as CGI. Why? Log in or register to write something here or to contact authors. Need help? accounthelp@everything2.com
https://everything2.com/title/Common+Gateway+Interface
CC-MAIN-2018-17
refinedweb
2,006
61.67
Hi I just downloaded Python 1.4.1 and I signed the PythonScriptShell with all the permissions, installed it on my N95 and I get the following error when I import the sensor module ------------------------------------------------------------------------- >>> import sensor Traceback (most recent call last): File "<console>", line 1, in ? File "c:\resource\site.py", line 97, in platsec_import return _original_import(name, globals, locals, fromlist) File "c:\resource\sensor.py", line 20, in ? import _sensor File "c:\resource\site.py", line 114, in platsec_import raise ImportError("Permission denied (error -46). Possible cause: Check that %s.pyd is compiled to have at least the same capabilities as this Python interp reter process."%name) ImportError: Permission denied (error -46). Possible cause: Check that _sensor.p yd is compiled to have at least the same capabilities as this Python interpreter process. ------------------------------------------------------------------------- I did sign the PythonScriptShell but not the PythonForS60 file, was I supposed to sign that one as well? what permissions are required for the sensor? How can I check the permissions a particular sis has assigned? Do I have to install it in a particular drive? Thanks
http://developer.nokia.com/community/discussion/showthread.php/118545-Sensor-module-error
CC-MAIN-2014-35
refinedweb
182
60.21
Check if the end of the Array can be reached from a given position. Introduction In this blog, we will discuss a recursion problem asked frequently in Interviews. The problem is to check if the end of the Array can be reached from a given position.We are given N numbers in an array and a starting position, and we are asked to check if the end of the Array can be reached from a given position on a condition that we can only move either (current index + array[current index]) or (current index - array[current index]). Example: Say 5 elements in an array given are: 4, 1, 3, 2, 5 And the start index given is 1. We have to check if we can reach the end of the array starting from position 1, following all the given conditions. Recursive Approach The approach to Check if the end of the Array can be reached from a given position would be straightforward if we follow the recursive method. Time Complexity = O(n) If the index is negative or greater than the array size, then we definitely can't reach the end, so return false. ⬇ If we have reached the last index, we can reach the end and return true. ⬇ Use recursion to change position and check if now we can get the end of the array. Solving the above given example, PseudoCode Algorithm ___________________________________________________________________ procedure ifEnd(int a[], int n, int start_idx): ___________________________________________________________________ 1. If start_idx<0 or start_idx>n return false; 2. If start_idx==n-1 return true; 3. return ifEnd(a,n,start_idx-a[start_idx]) or ifEnd(a,n,start_idx+a[start_idx]. 4. Declare main function, give user input and return answer. end procedure ___________________________________________________________________ Implementation in C++ #include <iostream> using namespace std; bool ifEnd(int a[], int n, int start_idx){ //if index is negative or greater than size of array then definitely we can't reach the end. if(start_idx<0 || start_idx>n){ return false; } //if we have reached the last index, this means that it is possible to reach the end. if(start_idx==n-1){ return true; } //using recursion to change position and checking if we can reach the end of the array. return ifEnd(a,n,start_idx-a[start_idx]) || ifEnd(a,n,start_idx+a[start_idx]); } int main(){ int n; cin>>n; int a[n]; for(int i=0; i<n; i++){ cin>>a[i]; } int src; cin>>src; bool result=ifEnd(a,n,src); cout<<result; } Output Sample Input: 5 4 1 3 2 5 1 Sample Output: Yes Complexity Analysis Time Complexity: O(n) Analysing Time Complexity: In the worst-case, all elements are traversed once in the recursive call stack. ∴n. Space complexity: O(n). BFS Approach This problem can also be solved using the BFS approach: Algorithm for Breadth-First Search: - The graph G and array visited[] are global, and visited is initialized to 0. - Then u=v, visited[v]=1 - Repeat, for all vertices w, adjacent to u do, if visited[w]=0, add w to queue and visited[w]=1 - If the queue is empty, return and delete the next element u from the queue. Algorithm for Breadth-First Traversal: - BFS(G,n) - For i=1 to n do visited[i]=0, for i=1 to n, do if(visited[i]=0)then BFS(i) visiting order: 2,3,4,5,6,7,8 Visited queue: 1 0 0 0 0 0 0 0 u=1, w={2,3} q: 2 3 remove 2 u=2, w={1,4,5} q: 3 4 5 remove 3 u=3, w={1,6,7} q: 4 5 6 7 remove 4 u=4, w={2,8} q: 5 6 7 8 remove 5 u=5, w={2,8} Already visited u=6, w={3,8} Already visited u=7, w={3,8} Already visited u=8, w={4,5,6,7} Already visited So same algorithm would be used to check if the end of the Array can be reached from a given position with the condition given that we can only move either (current index + array[current index]) or (current index - array[current index]) Till now, I assume you must have got the basic idea of what has been asked in the problem statement. So, I strongly recommend you first give it a try. Please have a look at the algorithm, and then again, you must give it a try. PseudoCode Algorithm using Breadth-first approach ___________________________________________________________________ procedure solve(int a[], int n, int start_idx): ___________________________________________________________________ 1. queue is required in bfs approach because of its FIFO approach. queue<int> q; 2. bool visited[n] = { false }; bool reached = false; 3. q.push(start); 4. Until q becomes empty: 5. int temp = q.front(); q.pop(); 6. if visited[temp] == true: continue 7. visited[temp] = true; 8. if (temp == n - 1) reached = true; break; 9. if (temp + arr[temp] < n) q.push(temp + arr[temp]); 10. if (temp - arr[temp] >= 0) q.push(temp - arr[temp]); 11. Then, if (reached == true) Print yes else print no. end procedure ___________________________________________________________________ Implementation in C++ #include <bits/stdc++.h> using namespace std; void solve(int arr[], int n, int start) { // queue is required in bfs approach because of its FIFO approach //write algorithm for bfs. queue<int> q; // At first all nodes are unvisited, and we have not reach the end. bool visited[n] = { false }; bool reached = false; // Inserting first element in queue q.push(start); // Until queue becomes empty while (!q.empty()) { // Get the top element and delete it int temp = q.front(); q.pop(); // If already visited ignore that index if (visited[temp] == true) continue; visited[temp] = true; //when we have reached the end of the array if (temp == n - 1) { reached = true; break; } // as we can move only temp + arr[temp] or temp - arr[temp], so inserting that in the queue. if (temp + arr[temp] < n) { q.push(temp + arr[temp]); } if (temp - arr[temp] >= 0) { q.push(temp - arr[temp]); } } // If we can reach the end of the array,print yes else print no. if (reached == true) { cout << "Yes"; } else { cout << "No"; } } // Driver Code int main() { int n,s; cin>>n; int arr[n]; for(int i=0; i<n; i++){ cin>>arr[i]; } cin>>s; solve(arr, n, s); return 0; } Output Sample Input: 5 4 1 3 2 5 1 Sample Output: Yes Complexity Analysis Time Complexity: O(n). Analysing Time Complexity: In the worst-case, all elements are traversed once. ∴n. Space complexity: O(n). Frequently Asked Questions - When to use the breadth-first search method? Answer) Whenever we need to search part of the tree as a solution to the problem where depth can vary, we use the breadth-first approach. - What is recursion? Answer) The calling function itself, again and again, is referred to as recursion. To know more about recursion, click here. - When to use recursion? Answer)When we can break the problem into smaller parts, we can use recursion. It also helps in reducing the complexity sometimes. Key Takeaways This article taught us how to Check if the end of the Array can be reached from a given position by approaching the problem using recursion. We discussed its implementation using illustrations, pseudocode, and then proper code. We hope you could take away critical techniques like analyzing problems by walking over the execution of the examples and finding out the recursive pattern followed. Now, we recommend you practice problem sets based on recursion to master your fundamentals. You can get a wide range of questions similar to this on CodeStudio.
https://www.codingninjas.com/codestudio/library/check-if-the-end-of-the-array-can-be-reached-from-a-given-position
CC-MAIN-2022-27
refinedweb
1,265
71.24
Written by Gerald Ramich, Senior Microsoft Premier Field Engineer. This article is for folks who are trying to troubleshoot Microsoft Outlook Connectivity issues to Exchange servers. I’ll look at a wide number of troubleshooting items, including: - RPC through CAS for Mailbox Access: How does it work? - Troubleshooting Common RPC issues - Common Root Causes For Receiving the RPC Dialog Box - Verifying Exchange Client Access Ports - Troubleshooting Address Book Lookup or Check name Failures. - Troubleshooting Kerberos - Troubleshooting Networking Issues - Troubleshooting Connections to the Store - Troubleshooting Outlook 2007/2010 OOF, Free/Busy, OWA/ECP and OAB Links So let’s dive in. 1.0 - RPC through CAS for Mailbox Access: How does it work? After an Outlook Profile is created, Outlook needs to connect to the Mailbox and AD. Note Outlook 2007 and higher will connect to Autodiscover if it deems the profile needs to be changed. - When Outlook launches, it connects to an Endpoint Mapper (EPM) using port 135 - Outlook 2003 and higher only. I am not covering older Outlook versions as these act differently. - Outlook has two other settings: “DS Server” and “Closest GC.” You should note that “Closest GC” is not supported in an Exchange 2010 environment, and “DS Server” is not recommended. Outlook needs to connect to the NSPI directory endpoint on CAS vs. some AD server. Features that break are delegate / multiple mailbox access for older Outlook clients (2003) and support for archived mailboxes. - Outlook queries for Three UUIDs (Universally Unique Identifiers): - MS Exchange RPC Client Access Services (a4f1db00-ca47-1067-b31f-00dd010662da and/or 5261574a-4572-206e-b268-6b199213b4e4) formally assigned to the Exchange Information store in legacy versions of Exchange - MS Exchange Address book Service (1544f5e0-613c-11d1-93df-00c04fd7bd09) for the MS Exchange RFR Interface - MS Exchange Address book Service(f5cc5a18-4264-101a-8c59-08002b2f8426) for the MS Exchange NSP Interface - The MAPI client has knowledge of the needed UUIDs, however it will not know the port number the server is listening on for each of these UUIDs since they are random at startup of the service. - The End Point Mapper returns the listening port number for each of the UUIDs on the CAS server. 2.0 - Troubleshooting Common RPC issues 2.1 - Common Root Causes For Receiving the RPC Dialog Box High network latency - Loss of network connectivity on the client side - Loss of a network path within a network - Exchange server outages and crashes - Active Directory/Domain Controller outages and crashes - High database and/or log disk latencies - High server CPU and/or context switching - Long running MAPI operations A few thoughts to keep in mind when debugging latency issues: - High disk latencies usually affect multiple users, not just single users - If max server latencies are high and cannot be explained by high disk latencies, also check for Jet Log Stalls and high server context switching or CPU usage. - Disconnects and reconnects always start with calls to Logon, so seeing lots of Logon calls from a particular user is a sign of connection problems (in addition to the RPC failures) - Outlook/COM add-ins, VBA code, and MAPI code running on the user’s workstation can cause problems that are intermixed with Outlook requests. Those functions may make expensive calls (like Findrow, SetColumns, SortTable, and IMAIL conversions). While Outlook from time to time makes expensive calls, 3rd party applications are common culprits. 2.1.1 - Troubleshooting the RPC Dialog Box - First, determine if this issue is occurring for a single user or multiple users. Narrow down the client versions and locations if possible. - How long is the pop-up message on the screen? Is the Pop up on screen longer than 5-10 seconds for multiple clients? The pop up can be expected for short periods under even the best conditions. For single client issues it may not be server related. Reference: - Verify TCP Chimney is disabled on Exchange and GCs. This is required. Reference: - If clients are online, check critical folder size as this affects overall performance. The numbers are higher for Exchange 2007/2010, however cached mode is recommended. - Exchange 2007 information: - Outlook users experience poor performance when they work with a folder that contains many items on a server that is running Exchange Server: - Recommended Mailbox Size Limits – Misleading title, it’s really about Item counts. (impacts Outlook 2003 OSTs, newer versions are not affected) - How to troubleshoot the RPC Cancel Request dialog box in Outlook 2003 or in Outlook 2002: - Start Perfwiz on Exchange using all counters for at least 6 hours. - Use an Exchange Performance Troubleshooter (ExTRA) to get a snapshot of performance on Exchange. - Capture a concurrent client and server Netmon trace while reproducing the issue. The only way to track if this is a server or network related issue is to follow any delayed “Response” packets from server to the client. - Use PFDavAdmin to get mailbox item count if necessary. - Exmon can be used to determine various items such as cached or online, how many CPU cycles a user is using, etc. Yes, it has been updated for 2010! Place on Mailbox role…not CAS. · Determine the suspected server from the RPC popup. The specific server will be listed and can include the user’s home Exchange server, an Exchange server the user was referred to for public folders, another user’s home Exchange server in the case of Calendar Details in the F/B UI, shared calendar/shared folder, or in delegate access, and Active Directory Servers. · Collect Exmon ETL data via one of the supported methods. Consistent problems may need only 5 minutes worth of data collection to trace the event. It is important to trace for a period afterwards to allow collection and tracing by the Exchange server. Outlook buffers some monitoring data until its next server communication. Problems that happen sporadically may require multiple hours or days’ worth of collection. ETL file size and server impact is documented in the Frequently Asked Questions. · Open the ETL data file with the Exmon tool. · Verify that the user made RPC calls and those calls were traced. Find the user’s display name in the By User view. If the Exchange Server is Exchange 2003 or higher, verify that the IP address of the client appears as the IP address of the client machine in question. If the user’s display name does not appear in the By User view, an RPC call may have been issued and received by Exchange, but no successful Logon operation from that user was received (and thus could not be attributed to any user). Alternatively find the “” (BLANK) user name in the By User view and look for the user’s IP Address. If the IP Address appears in this list, an RPC was received by the Exchange server, but the Logon call failed. · See if any MAPI operations took longer than 500 milliseconds. Within the By User View, the Max Server Latency will indicate the longest time spent processing a single MAPI operation, but an RPC could contain multiple operations. · If the Max Server Latency is above 500 milliseconds, double click on the user’s name in the By User view. This will cause a reparse of the ETL file (which can take minutes for extremely large files) and will eventually display a detailed view of the user’s MAPI operations. Find the time frame in question (we have accuracy to about 15 milliseconds) in the By Time view. Verify if other operations took a long amount of time that would have been in the same packet or in packets within close range. It is prudent at this point to verify disk latencies are acceptable within the guidelines given in the Exchange Performance Tuning Whitepaper since the overall latency is determined both by the CPU and Store processing, as well as the timeliness of Jet database accesses by the disk subsystem. · If roughly 5000 milliseconds cannot be accounted for, network latency may be involved. Check the By Clientmon view (if you’re using both Exchange 2003 and higher and Outlook 2003 and higher) for high max and/or average latencies. Using the By Clientmon view, find the user in the list and verify the user’s IP Address is in the list of IP Addresses. If the IP Address of the client is not in the user’s IP Address list, it is possible no client monitoring data was received. Check both the local and other average and max latencies. High average latencies could indicate an overall bad network condition. If the average is acceptable, the max latencies could be high on account of a momentary network issue or because of a long running MAPI operation. Remember, these latencies are the total round trip time of the packet including network transit and store latencies. · If latencies are acceptable, check for failed RPC Packets. Failures happen from time to time and do not always indicate a problem, but are a useful step. · Look out for IP Addresses reported in the By Clientmon view (IP addresses that Outlook thinks it is using based on the NIC/VPN) that differ from the IP Addresses in the By User view (IP address as seen by Store). Differences indicate some sort of proxy server or NAT. Client IP Addresses starting with 192.168.X.X are notoriously Wireless routers (but not a requirement nor definitive). These also indicate that the user may be using RPC/HTTP from a remote location. 2.2 - Verifying Exchange Client Access Ports Verify TCP Ports on Exchange Server are listening using the RPCDump –i command. Below is example of what to look for. Note: this is a truncated Output. Search for UUIDs. You will see these twice: once for Outlook Anywhere(RPC/HTTP) and once for regular RPC. A breakdown of example above: Server: LAB-E2K10-CSHT Port: [39627] UUID: [a4f1db00-ca47-1067-b31f-00dd010662da] Accessible: YES Resource Kit tools: RPC Dump. If one or more of these ports are not listening. You can use “Netstat –ano” and compare the ports that are listed in RPCDump to the PID that is listed in Netstat. Verify if another service has this port. TCP 0.0.0.0:39627 0.0.0.0:0 LISTENING 2804 ß MSExchangeRPC TCP 0.0.0.0:63534 0.0.0.0:0 LISTENING 5368 ßMSExchnageAB Restarting the Information store will not re-register a stolen port, A restart is required to register TCP ports. 2.3 – Troubleshooting Address Book Lookup or Check name Failures. Typically this error will resemble something like “The name could not be resolved. The name could not be matched to a name in the address list.” - Determine if the client is connecting to a GC or CAS in one of two ways: hold the Ctrl Key then “right click” the Outlook Icon on the Task Bar and choose Connection Status; and/or look at the Type Directory and look at the server Name. - Capture a trace from the client to see which GC/CAS we are trying to connect to. - If it is a GC, several things on the GC should be checked using DCDiag, NetDiag. - If it is a CAS, RPCDump can also show if F5CC (NT Directory NSPI) is listening - Also Verify the user is showing in the GAL and not hidden. - Verify Kerberos is working. 2.4 – Troubleshooting Kerberos Netmon will show most Kerberos errors. Testing with NTLM in the Outlook profile under the “Security Tab” is also a good option to eliminate Kerberos issues. If Kerberos fails but NTLM auth works, Verify SPNs using SetSPN tool. setspn -L ExchangeServerName SPNs should be registered as follows on Exchange Server: - http/ For Exchange Web Services and the Autodiscover service - exchangeMDB/ For RPC Client Access - exchangeRFR/ For the Address Book service - exchangeAB/ For the Address Book service Note: Load Balancers require the Alternative Service Account and SPN registered to the Load balancer FQDN instead of the individual server names. Note: SPNs could be registered as follows pointing to GCs on Exchange2003/ 2007 servers, this should not be done on Exchange 2010: exchangeAB/<GlobalCatalogServerName> Once SPNs are verified, I recommend this whitepaper: Troubleshooting Kerberos Errors 2.5 - Troubleshooting Networking Issues - Capture concurrent Netmon from client and server(s) affected. - Look for RPC Fault, dropped packets, TCP retransmit. - Devices can cause several connection issues. For example: context 0x0 status 0x1C00001A errors are typically a device issue, as outlined in this MSDN article. - Don’t forget Chimney/TCP Offloading can cause connectivity failures. Check NIC drivers, these need to be up to date. - Firewall between client and servers. It’s necessary to insure all listed ports open for Exchange 2010. - Check out the List of Extended MAPI numeric result codes in this KB article. 2.6 - Troubleshooting Connection to the Store This error will typically show up as “Unable to open your default mail folders. The information store could not be opened.” - The netmon trace should show RPC Fault and then the corresponding error may indicate “Access Denied.”. - There can be several causes for this. In some cases it may be as simple as the “access this computer from network right”. - Dump both AD Permissions and Exchange Permissions for extended rights, as follows: - Get-ADPermissions MailboxAlias | where [($_.ExtendedRights –Like “*-as*”)} |ft User,ExtendedRights,Deny –Auto (This finds all Receive-as / Send-as extended permissions) - Get-MailboxPermission MailboxAlias –User <person checking access for> |ft User,AccessRights, Deny –Auto - On the get-mailboxpermission you could add | where [($_.AccessRights –Like “*full*”)} 3.0 - Troubleshooting Outlook 2007/2010 OOF, Free/Busy, OWA/ECP and OAB Links 3.1 - HTTP troubleshooting - These steps are true for any HTTP application. - IIS Logs and Status Codes are you friends. The following KB article points to common causes for most of the codes, so turn up protocol logging in IIS: HTTP status codes in IIS 7.0 - For HTTP status code definitions, visit the World Wide Web Consortium (W3C) Web site: - High level Codes a. 1XX – Informational b. 2XX - Success c. 3XX - Redirection d. 4XX - Client Error e. 5XX - Server Error 5. Mainly you will have to focus on the 4XX and 5XX codes. 6. 4XX Codes have Sub codes to further describe the issue, as follows: a. 400 – Bad Request i. 400.1 - Invalid Destination Header. ii. 400.2 - Invalid Depth Header. iii. 400.3 - Invalid If Header. iv. 400.4 - Invalid Overwrite Header. v. 400.5 - Invalid Translate Header. vi. 400.6 - Invalid Request Body. vii. 400.7 - Invalid Content Length. viii. 400.8 - Invalid Timeout. ix. 400.9 - Invalid Lock Token. b. 401 – Access Denied (logon issues) i. 401.1 - Logon failed. ii. 401.2 - Logon failed due to server configuration. iii. 401.3 - Unauthorized due to ACL on resource. iv. 401.4 - Authorization failed by filter. v. 401.5 - Authorization failed by ISAPI/CGI application. c. 403 – Forbidden (Access Restrictions) i. 403.1 - Execute access forbidden. ii. 403.2 - Read access forbidden. iii. 403.3 - Write access forbidden. iv. 403.4 - SSL required. v. 403.5 - SSL 128 required. vi. 403.6 - IP address rejected. vii. 403.7 - Client certificate required. viii. 403.8 - Site access denied. ix. 403.9 - Forbidden: Too many clients are trying to connect to the Web server. x. 403.10 - Forbidden: Web server is configured to deny Execute access. xi. 403.11 - Forbidden: Password has been changed. xii. 403.12 - Mapper denied access. xiii. 403.13 - Client certificate revoked. xiv. 403.14 - Directory listing denied. xv. 403.15 - Forbidden: Client access licenses have exceeded limits on the Web server. xvi. 403.16 - Client certificate is untrusted or invalid. xvii. 403.17 - Client certificate has expired or is not yet valid. xviii. 403.18 - Cannot execute requested URL in the current application pool. xix. 403.19 - Cannot execute CGI applications for the client in this application pool. xx. 403.20 - Forbidden: Passport logon failed. xxi. 403.21 - Forbidden: Source access denied. xxii. 403.22 - Forbidden: Infinite depth is denied. d. 404 – Not Found i. 404.0 - Not found. ii. 404.1 - Site Not Found. iii. 404.2 - ISAPI or CGI restriction. iv. 404.3 - MIME type restriction. v. 404.4 - No handler configured. vi. 404.5 - Denied by request filtering configuration. vii. 404.6 - Verb denied. viii. 404.7 - File extension denied. ix. 404.8 - Hidden namespace. x. 404.9 - File attribute hidden. xi. 404.10 - Request header too long. xii. 404.11 - Request contains double escape sequence. xiii. 404.12 - Request contains high-bit characters. xiv. 404.13 - Content length too large. xv. 404.14 - Request URL too long. xvi. 404.15 - Query string too long. xvii. 404.16 - DAV request sent to the static file handler. xviii. 404.17 - Dynamic content mapped to the static file handler via a wildcard MIME mapping. xix. 404.18 - Querystring sequence denied. xx. 404.19 - Denied by filtering rule. e. 405 – Method Not allowed f. 406 – Client browser does not accept Mime Type Request page g. 408 – Request timed out h. 412 – Precondition Failed. 7. The Sub number will be in one of two formats a. 401.1 – Displayed in browser (note: for IIS 7+, the substatus code is only visible from the server console, by default. For security, remote clients are only given the basic 3-digit HTTP status code) b. 401 1 – Displayed IIS logs at end of string, as long as http substatus logging is enabled (it is by default) 8. 5XX Codes have Sub codes to further describe the issue, as follows: a. 500 – Internal Server error i. 500.0 - Module or ISAPI error occurred. ii. 500.11 - Application is shutting down on the Web server. iii. 500.12 - Application is busy restarting on the Web server. iv. 500.13 - Web server is too busy. v. 500.15 - Direct requests for Global.asax are not allowed. vi. 500.19 - Configuration data is invalid. vii. 500.21 - Module not recognized. viii. 500.22 - An ASP.NET httpModules configuration does not apply in Managed Pipeline mode. ix. 500.23 - An ASP.NET httpHandlers configuration does not apply in Managed Pipeline mode. x. 500.24 - An ASP.NET impersonation configuration does not apply in Managed Pipeline mode. xi. 500.50 - A rewrite error occurred during RQ_BEGIN_REQUEST notification handling. A configuration or inbound rule execution error occurred. xii. Note Here is where the distributed rules configuration is read for both inbound and outbound rules. xiii. 500.51 - A rewrite error occurred during GL_PRE_BEGIN_REQUEST notification handling. A global configuration or global rule execution error occurred. xiv. Note Here is where the global rules configuration is read. xv. 500.52 - A rewrite error occurred during RQ_SEND_RESPONSE notification handling. An outbound rule execution occurred. xvi. 500.53 - A rewrite error occurred during RQ_RELEASE_REQUEST_STATE notification handling. An outbound rule execution error occurred. The rule is configured to be executed before the output user cache gets updated. xvii. 500.100 - Internal ASP error b. 501 – Header values specify a configuration that is not implemented c. 502 – Web Server received an invalid response while acting as a gateway or proxy i. 502.1 - CGI application timeout. ii. 502.2 - Bad gateway. d. 503 – Service unavailable i. 503.0 - Application pool unavailable. ii. 503.2 - Concurrent request limit exceeded. 9. The Sub number will be in one of two formats a. 500.0 – Displayed in browser on server console only (see above) b. 500 0 – Displayed IIS logs at end of string, if http substatus logging is enabled (on by default) 10. So what logging can I turn up outside of IIS? a. Diags Logging in Exchange i. MSExchange AutoDiscover ii. MSExchange Availability (EWS/OOF/AS), Calendar and Free / Busy iii. MSExchange Control Panel (Option Page in OWA). Outlook 2010 will use the ECP for some features. iv. MSExchange WebServices (RPC/HTTP, also EWS/OOF/AS) b. Test Command lets i. Test-OutlookWebServices (AutoDiscover) ii. Test-CalendarConnectivity (AS) 1. Only Anonymous and not very useful. 2. Use URL in Apps events iii. Test-ECPConnectivity iv. Test-WebServicesConnectivity(RPC/HTTP access) 1. Note: It does not test Calendar, OOF or ECP! 2. Test-OutlookConnectivity (AutoDiscover, Profile Creation, MAPI or RPC/HTTP access) Fixed, thanks. There is a typo "Exchange 2007/2001, however cached mode is recommended." Great information….thank you I have recently started a site, the information you offer on this web site has helped me greatly. Thanks for all of your time & work. donde lo puedo comprar. estoy interesada 传世私服一条龙制作天龙sf一条龙开服-客服咨询QQ1325876192(企鹅扣扣)-Email:1325876192@qq.com 天堂2sf一条龙
https://blogs.technet.microsoft.com/mspfe/2011/04/12/troubleshooting-microsoft-exchange-outlook-connectivity-issues/
CC-MAIN-2018-47
refinedweb
3,411
58.89
TS TSYS interview questions and answers. We have tried to share some of the manual testing interview questions, selenium interview questions & testing interview questions also, but we are recommending spending some quality time to get comfortable with what might be asked when you go for the TSYSSYS Pune Interview Questions Company Location: Pune, India Attended on: 15.11.2021 - Oops concept in java and how you implement in your project - Abstraction and Encapsulation - Overloading and Overriding - Diff between Hashset and Hashmap - Character literals - How compiler judge between Compile time and Runtime Exception - Code snippets: a) String s = ‘j’+’a’+’v’+’a’; b) try(system.exit(1) catch(sysout(e)) finally(sysout(“we are out”) c) public class A( ( If(true) break; )) Selenium Questions - What is a webdriver? - What is the implicit, explicit wait, and pooling concept? Manual Testing Questions - What will be your manual test case approach if you got a new module to test? - MAVEN lifecycle? Api Testing - What will be the error codes? - Rest Api HTTP, post, etc TSYS Noida Interview Questions Company Location: Noida, India Position: SDET Role Interview Dated: May 8, 2021 Updated on: 22.07.2021 1st round (May 06, 2021) - Test – HackerRank 2nd round (May 08, 2021) – By Shobhit [ Technical Round 1 1 (May 08, 2021) (04:30 pm to 5:15 pm) ] - Exception Handling – try, catch, finally - finally vs finalize? - How do get the links to count inside 1 web page? - How do you handle Alerts? - What is the parent class of Alert - How to take Screenshots? - What are interface and Abstract class? - Why interfaces are used? - Program – Method overloading and Method Overriding? - What is the usage of the final keyword? - What is the use of the final class? Example of any final class in Java. - final int variableName; Will it give any error during compile time? - Why static block is used? API Testing - How to validate the response code is 200, using the RestAssured library? - PUT vs POST? - PUT vs PATCH - Different status codes. Database Testing - Joins - Write a query to get the list of employees whose Salary is more than 10000. - Write a query to get the employee who has the max salary. - Delete and Employee whose name is John - DROP vs DELETE vs TRUNCATE Ans:. - DDL vs DML? Can you list any of them? Ans: DML stands for Data Manipulation Language. DDL statements are used to create databases, schema, constraints, users, tables, etc. DML statement is used to insert, update or delete the records. Java Programs - What will be the output Class A { final int number1; public static void main(String[] args) { System.out.println(number1); } } - int_Array -> 2nd largest in the array int[] int_Array={10,32,3,5,47}; - Write a Program to find whether String is Palindrome or not public static void main(String[] args) { String str = "NitiN"; String rev = ""; int length = str.length(); for (int i = length - 1; i >= 0; i--) rev = rev + str.charAt(i); if (str.equals(rev)) System.out.println(str + " is a palindrome"); else System.out.println(str + " is not a palindrome"); } 3rd round (May 08, 2021) – Gaurav Aggarwal Interview – Technical round 2 [ Round 2 (May 08, 2021)(05:40 pm to 06:40 pm) ] - How do you execute only the failed Test cases? - Any other way than IRetryAnalyzer, is there a way to achieve the above? - Upload a file using AutoIT? - What is AutoIT and how exactly you are using that in your script? - SQL -> GROUP BY, JOINS, - static keyword usage (Method, block, variable) - Why can’t we override the static method Ans: Overloading is the mechanism of binding the method call with the method body dynamically based on the parameters passed to the method call. Static methods are bonded at compile time using static binding. Therefore, we cannot override static methods in Java - OOPS concept in your Selenium project Java Programs - int[] int_Array={1,9,8,19,4,1……………1000 items}; Which 3 consecutive no. are there in the Array has maximum sum? Example: 1,2,3 _ 2,3,4 _ 3,4,5 - Input: “I AM in TSYS” Write a generic way so that I get output as “I AM in TSYS”. Ans: String str = "I AM in TSYS"; System.out.println(str.replaceAll("\\s+", " ").trim()); - What is the output? class Super { public int index = 1; } class App extends Super { public App(int index) { index = index; } public static void main(String args[]) { App myApp = new App(10); System.out.println(myApp.index); } } Ans: Output : 1 [ index=index (both are same) ] super.index = index; Then, Output=10; - What is the output try { print "1" // some code throw IOException } catch(Exception e) { print "2" } catch(IOException IO) { print "3" } finally { print "4" } Ans: IOException can not be resolved to a type Output : 1 3 4 try { System.out.println("1"); // some code throw IOException throw new IOException(); } catch (IOException IO) { System.out.println("3"); } catch (Exception e) { System.out.println("2"); } finally { System.out.println("4"); } TSYS Interview Questions Company Name: TSYS Company Location: Noida, India Updated on: 06.07.2021 - Define STLC? - Difference between test case and test scenario. - Define your role in your company. - How will you create a BDD framework from scratch? - Tell me how you created your organization framework. What are the main components? - TestNG annotations. - If you created any proof of concept at your firm. - Stale exception - How many types of exceptions in selenium you have faced. - Upcasting example that you implemented in your selenium code. - How you implemented overriding, interfaces in your framework. - Example of dynamic and static polymorphism you implemented in your framework. - What are the main components of the BDD framework? - What will you do to optimize your framework to run them in parallel using Selenium Grid/ Browser Stack? - How is the automation of Salesforce Lightning different from classic
https://www.softwaretestingo.com/tsys-interview-questions/
CC-MAIN-2022-40
refinedweb
965
66.84
Recently Browsing 0 members No registered users viewing this page. Similar Content - By TheDcoder Hello everyone, I am working on a project which requires reading a few values from Excel, the catch is that I need it to be very fast... unfortunatley I found out that read operations using the supplied Excel UDF are very slow, more than 150 ms for each operation on average Here is my testing setup that I made: #include <Excel.au3> #include <MsgBoxConstants.au3> Global $iTotalTime = 0 Test() Func Test() Local $oExcel = _Excel_Open() Local $oBook = _Excel_BookAttach("Test.xlsx", "FileName", $oExcel) Local $sSheet = "Sheet1" If @error Then Return MsgBox($MB_ICONERROR, "Excel Failed", "Failed to attach to Excel") Local $iNum For $iRow = 1 To 6 Time() Local $iNum = Number(_Excel_RangeRead($oBook, $sSheet, "A" & $iRow)) If ($iNum = 1) Then ConsoleWrite("Row " & $iRow & " is 1 and value of column B is " & _Excel_RangeRead($oBook, $sSheet, "B" & $iRow)) Else ConsoleWrite("Row " & $iRow & " is not 1") EndIf ConsoleWrite(". Reading took: ") Time() Next ConsoleWrite("The whole operation took " & $iTotalTime & " milliseconds." & @CRLF) EndFunc Func Time() Local Static $hTimer Local Static $bRunning = False If $bRunning Then Local $iTime = Round(TimerDiff($hTimer), 2) $iTotalTime += $iTime ConsoleWrite($iTime & @CRLF) Else $hTimer = TimerInit() EndIf $bRunning = Not $bRunning EndFunc And Test.xlsx in CSV format: 1,-1 -1,1 1,-1 1,1 -1,-1 1,1 Here is the actual xlsx but it should expire in a week: And finally output from my script: Row 1 is 1 and value of column B is -1. Reading took: 276.06 Row 2 is not 1. Reading took: 163.36 Row 3 is 1 and value of column B is -1. Reading took: 302.58 Row 4 is 1 and value of column B is 1. Reading took: 294.65 Row 5 is not 1. Reading took: 152.33 Row 6 is 1 and value of column B is 1. Reading took: 284.92 The whole operation took 1473.9 milliseconds. Taking ~1.5 seconds for reading 6 rows of data is bad for my script, which needs to run as fast as possible . It would be nice if I can bring this down to 100 ms somehow, I am not very experienced working with MS office so I thought about asking you folks for help and advice on how I can optimize my script to squeeze out every bit of performance that I can get from this script Thanks for the help in advance! - By IAMK If you press the "Login" button in the top-left of, it creates a popup in which you press "Login with Twitter ID", which then opens a new window with an "Authorize app" button. None of these 3 buttons have a Name or ID, so how do I click on them, because _IEGetObjByName / _IEGetObjByID will not work. Here are the sources of the 3 buttons: ;Login button. ;<a onclick="showLoginDialog();" href="javascript:void(0);">Login</a> ;Twitter button (Note that I am already signed into Twitter and just need to Authorize it). ;> ;Popup window appears. ;<input class="submit button selected" id="allow" type="submit" value="Authorize app"> I have also tried: Local $oLinks = _IETagNameGetCollection($ie, "a") For $oLink In $oLinks If $oLink.InnerText = "showLoginDialog()" Then _IEAction($oLink, "Click") ExitLoop EndIf Next I've even tried adding "showLoginDialog()" and "javascript:void(0)" to the end of the URL, but as expected, that wouldn't work either. My goal is something like this: #include <IE.au3> Local $ie = _IECreate("") _IELoadWait($ie) Local $originalHandle = $ie ;===Functions========================================================== Func login() ;Source: <a onclick="showLoginDialog();" href="javascript:void(0);">Login</a> _IEAction(ABOVESOURCE, "Click") ;Source: > _IEAction(ABOVESOURCE, "Click") ;New window appears for Twitter sign in, but the URL is locked. ;Source: <input class="submit button selected" id="allow" type="submit" value="Authorize app"> _IEAttach($ie, ABOVEHANDLE) ;How do I get the handle of the new window from above? _IEAction(ABOVESOURCE, "Click") _IEAttach($ie, $originalHandle) EndFunc ;====================================================================== ;===Code=============================================================== login() ;====================================================================== Thank you in advance. - By HardXOR Hello AutoIt community I run into speed problem in my script wich i cant solve myself, problem is with decoding texture loop - for better explanation, you need extract from file pallete (16x 16 RGB color) and picture data (224 * 128 byte), then use correct color for your picture data.... nothing extra hard and also texture is quite small 224*256 it is for my car model viewer/later maybe editor GranTurismo 2 from Playstation 1, so its old dataformat and i cant understand why AutoIt take so long to decode texture when good old Playstation almost 2,5 decades old can do that nearly immediately (when you list through cars in shop or garage) My first atempt was create all trought dllstructure, because its easier approach, but it was soooo slow (40-50s for create textures) then i upgrade my routine via arrays, first 3D arrays later only 1D, next i put decoding colors outside loop but it is still not enough, my last version took cca 15s wich is still unacceptable for car model viewer when you click on one carmodel from listview (1100 cars for whole game) and you must wait 15-16s for model to load.... oh and i forgot mention some cars have more then 1 color (much more... 8-9-10 etc) soloading take 8-9-10 times more time in attachment i post texture file from GranTurismo 2 for one car (contain only 1 color) and also my dll struct version and array version code dll struct version - ± 40 sec (33 without saving) #include <FileConstants.au3> Global $IMDT[256][256][4] LoadTexture("ufs9r.cdp") Func LoadTexture($file) $fileHandle = FileOpen($file, $FO_BINARY) $header = FileRead($fileHandle, 0x20) ConsoleWrite("header> " & $header & @CRLF) $PAL = FileRead($fileHandle, 0x200) ConsoleWrite("PAL> " & $PAL & @CRLF) FileSetPos($fileHandle, 0x43A0, $FILE_BEGIN) $IMD = FileRead($fileHandle, 0x7000) ConsoleWrite("IMD> " & $IMD & @CRLF) $st = DllStructCreate("BYTE[512]") DllStructSetData($st, 1, $PAL) $struct_PAL = DllStructCreate("WORD[16];WORD[16];WORD[16];WORD[16];WORD[16];WORD[16];WORD[16];WORD[16];WORD[16];WORD[16];WORD[16];WORD[16];WORD[16];WORD[16];WORD[16];WORD[16]", DllStructGetPtr($st)) $struct_IMD = DllStructCreate("BYTE[" & 0x7000 & "]") DllStructSetData($struct_IMD, 1, $IMD) $start = TimerInit() For $i = 0 To 15 For $j = 0 To 223 $cn = 0 For $k = 0 To 127 $bt = DllStructGetData($struct_IMD, 1, $j * 128 + $k + 1) $blue = BitShift(DllStructGetData($struct_PAL, $i + 1, BitAND($bt, 0x0F) + 1), 7) $IMDT[$j][$cn][0] = $blue $green = BitShift(DllStructGetData($struct_PAL, $i + 1, BitAND($bt, 0x0F) + 1), 2) $IMDT[$j][$cn][1] = $green $red = BitShift(DllStructGetData($struct_PAL, $i + 1, BitAND($bt, 0x0F) + 1), - 3) $IMDT[$j][$cn][2] = $red If DllStructGetData($struct_PAL, $i + 1, BitAND($bt, 0x0F) + 1) = 0 Then $IMDT[$j][$cn][3] = 0x00 Else $IMDT[$j][$cn][3] = 0xFF EndIf $cn += 1 $blue = BitShift(DllStructGetData($struct_PAL, $i + 1, BitShift($bt, 4) + 1), 7) $IMDT[$j][$cn][0] = $blue $green = BitShift(DllStructGetData($struct_PAL, $i + 1, BitShift($bt, 4) + 1), 2) $IMDT[$j][$cn][1] = $green $red = BitAND(BitShift(DllStructGetData($struct_PAL, $i + 1, BitShift($bt, 4) + 1), - 3), 0xFF) $IMDT[$j][$cn][2] = $red If DllStructGetData($struct_PAL, $i + 1, BitShift($bt, 4) + 1) = 0 Then $IMDT[$j][$cn][3] = 0x00 Else $IMDT[$j][$cn][3] = 0xFF EndIf $cn += 1 Next Next saveTGA($i) Next ConsoleWrite("t " & TimerDiff($start) & @CRLF) ; +- 40 seconds FileClose($fileHandle) EndFunc Func saveTGA( 255 For $j = 0 To 255 For $k = 0 To 3 $data &= hex($IMDT[$i][$j][$k], 2) Next Next Next $binary = FileOpen("test\" & $name & ".tga", BitOR($FO_BINARY, $FO_OVERWRITE, $FO_CREATEPATH)) FileWrite($binary, "0x" & $data) FileClose($binary) EndFunc array version - ± 15 sec (under 10s without saving) #include <FileConstants.au3> LoadTexture2("ufs9r.cdp") Func LoadTexture2($file) $fileHandle = FileOpen($file, $FO_BINARY) $a = TimerInit() Global $header[0x20] For $i = 0 To UBound($header) - 1 $header[$i] = Int(String(FileRead($fileHandle, 1))) ; read 0x20 bytes Next ConsoleWrite("header " & TimerDiff($a) & @CRLF) $a = TimerInit() Global $PAL[0x100] For $i = 0 To UBound($PAL) - 1 $PAL[$i] = Number(FileRead($fileHandle, 2)) ; read 0x200 (16*16) words Next Global $PALcolor[16 * 16 * 4] For $i = 0 To UBound($PAL) - 1 $PALcolor[$i * 4 + 0] = BitShift($PAL[$i], 7) $PALcolor[$i * 4 + 1] = BitShift($PAL[$i], 2) $PALcolor[$i * 4 + 2] = BitShift($PAL[$i], -3) If $PAL[$i] = 0 Then $PALcolor[$i * 4 + 3] = 0x00 Else $PALcolor[$i * 4 + 3] = 0xFF EndIf Next ConsoleWrite("PAL " & TimerDiff($a) & @CRLF) $a = TimerInit() FileSetPos($fileHandle, 0x43A0, $FILE_BEGIN) Global $IMD[0x7000] For $i = 0 To UBound($IMD) - 1 $IMD[$i] = Int(String(FileRead($fileHandle, 1))) ; read 0x7000 bytes Next ConsoleWrite("IMD " & TimerDiff($a) & @CRLF) Global $IMDT[256*256*4] $a = TimerInit() For $i = 0 To 15 For $j = 0 To 223 $cn = 0 For $k = 0 To 127 $byte = $IMD[$j * 128 + $k] ; byte for decode $index = $j * 1024 + $cn * 4 $index2 = $i * 0x40 + BitAND($byte, 0x0F) * 4 $IMDT[$index + 0] = $PALcolor[$index2 + 0] ; blue $IMDT[$index + 1] = $PALcolor[$index2 + 1] ; green $IMDT[$index + 2] = $PALcolor[$index2 + 2] ; red $IMDT[$index + 3] = $PALcolor[$index2 + 3] ; alpha $cn += 1 $index = $j * 1024 + $cn * 4 $index2 = $i * 0x40 + BitShift($byte, 4) * 4 $IMDT[$index + 0] = $PALcolor[$index2 + 0] ; blue $IMDT[$index + 1] = $PALcolor[$index2 + 1] ; green $IMDT[$index + 2] = $PALcolor[$index2 + 2] ; red $IMDT[$index + 3] = $PALcolor[$index2 + 3] ; alpha $cn += 1 Next Next ;~ $b = TimerInit() saveTGA2($i) ;~ ConsoleWrite("save TGA " & TimerDiff($b) & @CRLF) Next ConsoleWrite("full time " & TimerDiff($a) & @CRLF) ; 16 seconds FileClose($fileHandle) EndFunc Func saveTGA2( UBound($IMDT) - 1 $data &= Hex($IMDT[$i], 2) Next $binary = FileOpen("test\" & $name & ".tga", BitOR($FO_BINARY, $FO_OVERWRITE, $FO_CREATEPATH)) FileWrite($binary, "0x" & $data) FileClose($binary) EndFunc if anyone can optimize my code I would be very grateful, or pointing me to better solution, thx ufs9r.cdp - Triblade Hi all, I was pondering over a question with regards to the speeds of reading something and did not see this kind of question in a forum search. The question: What is (technically) faster? Multiple reads from the same 3d array cell, or only once make a 'temp' variable from that cell and read the value from this? I don't know if either has any real impact at all anyway, but just wanted to ask anyway. :-) There may be a difference if the value holds an integer or a string (or something else) but in my case, is a simple integer. To hopefully clarify with a small bit of code: $process = $start - 15 If $xy[$process][3] <> "x" Then If _ArraySearch($open, $process, 1, $open[0][0], 0, 0, 1, 1) <> -1 Then UpdateOpen($xy[$process][5], $closed[0][0]) ElseIf $start > 0 And _ArraySearch($closed, $process, 1, $closed[0][0], 0, 0, 1, 0) = -1 Then Add_open($start, $closed[0][0], $counter, $process) EndIf EndIf You can read from this, that the array $closed[0][0] is being read 3 times. And this goes on further in the code I did not show. My question boils down to this, should I make a 'temp' variable to hold that $closed[0][0] value until the function is done? It may not have a real impact on my small script, but I really am interested in the answer at least. Regards, Tri. - Recommended Posts You need to be a member in order to leave a comment Sign up for a new account in our community. It's easy!Register a new account Already have an account? Sign in here.Sign In Now
https://www.autoitscript.com/forum/topic/195973-_ie-speed/
CC-MAIN-2022-05
refinedweb
1,874
51.31
They replied with a solution called weak linking What is weak linking as quoted from. Here's the basic steps: 1. Use the latest tools with the latest SDK. 2. In your project, set the IPHONEOS_DEPLOYMENT_TARGET build setting to the oldest OS you want to support (say iPhone OS 2.0). And use GCC 4.2 3. If you use frameworks that are not present on that older OS, set the frameworks to be weak imported. Do this using the Linked Libraries list in the General tab of the target info window. if you use Makefile to build the app add this in linker flag LDFLAGS += -weak_framework MessageUI and add this key in Info.plist <key>MinimumOSVersion</key> <string>2.0</string> This is how to test whether framework is available or not #import <MessageUI/MessageUI.h> #include <dlfcn.h> if ( dlsym(RTLD_DEFAULT, "MFMailComposeErrorDomain") != NULL ) { NSLog(@"%@", @"MessageUI framework is available"); NSLog(@"MFMailComposeErrorDomain = %@", MFMailComposeErrorDomain); } else { NSLog(@"%@", @"MessageUI framework is not available"); } 4. For places where you use C-style imports that aren't present on older systems, check whether the import is present before using it. 5. If you use Objective-C methods that aren't present on older systems, use -respondsToSelector: to verify that the methods are present before calling them. if ( [[UIApplication sharedApplication] respondsToSelector:@selector(canOpenURL:)] ) { if ( [[UIApplication sharedApplication] canOpenURL:[NSURL URLWithString:@"tel:996-1010"]] ) { NSLog(@"%@", @"tel URL is supported"); } else { NSLog(@"%@", @"tel URL is not supported"); } } else { NSLog(@"%@", @"-canOpenURL: not available"); } 6. If you use Objective-C classes that aren't present on older systems, you can't just use the class directly. For example: obj = [[NSUndoManager alloc] init]; will cause your application to fail to launch on iPhone OS 2.x, even if you weak link to the framework. Rather, you have to do the following: NSUndoManager undoManager; Class cls; cls = NSClassFromString(@"NSUndoManager"); if (cls != nil) { undoManager = [[[cls alloc] init] autorelease]; NSLog(@"%@", @"NSUndoManager is available"); // This tests whether we have access to NSUndoManager's selectors. [undoManager beginUndoGrouping]; [undoManager endUndoGrouping]; } else { NSLog(@"%@", @"NSUndoManager not available"); } undoManager = nil; 7. Test, test, test! Updated: There is sample source code for the Mail Composer in Developer site 2 comments: >> check whether the import is present before using it. How to do that check, please? Thanks for sharing this article on Software development. It was very nice. Looking for more..................Please continue.
http://iphonesdkdev.blogspot.ru/2009/07/how-to-build-single-iphone-application.html
CC-MAIN-2017-51
refinedweb
390
58.48
make generate-plist cd /usr/ports/devel/lfcbase/ && make install clean pkg install lfcbase Deleted ports which required this port: Number of commits found: 55 devel/lfcbase: upgrade 1.14.0 -> 1.14.2 databases/cego: upgrade 2.45.5 -> 2.45.6 lfcbase: - In configure.ac added check for darwin. This is required, since for File::flush implemention, darwin rather requires a fcntl call with option F_FULLFSYNC instead of fsync ( see OSX man page for fsync ) cego: - Added command line option --fsync to enable physical disk synchronisation for logging and checkpointing. This options slows down database significantly but ensures consistent data in case of an operating system crash Submitted by: Bjoern Lemke <lemke@lemke-it.com> devel/lfcbase: upgrade 1.13.1 -> 1.14.0 - Added File::hasData method to check for available input data from file descriptor ( implemented with POSIX poll function ) Submitted by: Bjoern Lemke <lemke@lemke-it.com> devel/lfcbase: upgrade 1.13.0 -> 1.13.1 - Stability patch in Datetime::asChain methods. The result of localtime is checked for null pointer ( may occur in case of very large long values, for which the instance has been created ) and in this case, an exception is thrown. Submitted by: Bjoern Lemke <lemke@lemke-it.com> devel/lfcbase: update 1.11.9 -> 1.13.0 devel/lfcxml: update 1.2.6 -> 1.2.10 databases/cego: update 2.39.16 -> 2.44.1 databases/cegobridge: update 1.4.0 -> 1.5.0 databases/p5-DBD-cego: update 14.0 -> 1.5.0 - Warning: storage format has changed Export to xml format before upgrade and re-import after the upgrade See UPDATING - recompile all applications linked to libcego - Lots of changes, among them: o improved crash recovery o fixes to SQL expected behaviour o better CDATA handling o fixes primary key handling design issue o changes to serialisation for export/import, XML export/import is still possible Submitted by: Bjoern Lemke <lemke@lemke-it.com> Changel> devel/lfcbase: update 1.11.7 -> 1.11.8 - Extensions made for Chain::toLower and Chain::toUpper methods. To treat multi character strings, a conversion is made to wide characters using mbstowcs libc function. case conversion now is done with towupper / towlower wide character function. Strings are then converted back to multicharacters using wcstombs function. - This allows upper/lower case conversion now for german Umlaute which have actually not been treated> devel/lfcbase: update 1.11.5 -> 1.11.6 - Code cleanup class Net and NetHandler, changed from bind to ::bind for FreeBSD 12, compile problems occured without namespace definition.2 -> 1.10.3 - Value change of NetHandler::SENDLEN from 1024 to 8192 On FreeBSD based systems, the lower value lead to a poor network performance for large mesages, since subsequent send calls seem to slow down the network throughput. On OSX and Windows/MinGW64 based systems, this effect has not been observed but a sendlen of 8192 seems to be no problem also for these systems Submitted by: Bjoern Lemke <lemke@lemke-it.com>> devel/lfcbase: update 1.10.0 -> 1.10.1 - Added File class constructor to support STDIN read mode Submitted by: Bjoern Lemke <lemke@lemke-it.com>> devel/lfcbase: update 1.9.6 -> 1.9.7 - Removed include socketvar.h in Net.cc and NetHandler.cc since compile error occured for FreeBSD 12 Submitted by: Bjoern Lemke <lemke@lemke-it.com>> devel/lfcbase: update 1.8.10 -> 1.8.11 lfcbase: - Added range check to Chain::toInteger method to catch overflow exception cego: - This version brings a complete redesign of low level page handling. Instead of page references identified by fileId and pageid, a database unique pageid is used now This results in a complete reimplemtation of several low level classes like CegoFileHandler, CegoBufferPool, Blob handling, etc. Since pages are references by a single ( 64 bit ) id now, I expect an increased performance behaviour over all database operations. Most code modifications are done, code complies and basic functionally works ( create tableset, create table, insert table ) - First performance analysis indicates a speedup of about 10% for btree creation, so significant speedup for full table scans. - All base checks passed, but there is still a page allocation leak for table drops - Functional tests with SysMT successful completed Submitted by: Bjoern Lemke <lemke@lemke-it.com>, iXsystems, and RootBSD 10 vulnerabilities affecting 139 ports have been reported in the past 14 days * - modified, not new All vulnerabilities Last updated:2019-11-07 12:09:53
https://www.freshports.org/devel/lfcbase/
CC-MAIN-2019-47
refinedweb
743
56.15
Scrollbars are a common feature in modern GUI’s. Most often you’ll see them in the GUI for “Terms and Agreements” where you scroll down hundreds of lines to reach the “I accept” button. The Tkinter Scrollbar is a way for us to bring the scroll feature in our Python software. Most of the time, the reason you’ll want a Tkinter Scrollbar is because there is a large text area in your GUI and you want to conserve space. You don’t want your text taking up all the space in the GUI after all. Tkinter Scrollbar Syntax from tkinter import * scroll = Scrollbar(master, options...) Scrollbar Options List of all relevant options available for the Tkinter Scrollbar. The first thing to know is that you can’t use the scrollbar on every Tkinter widget. There are only certain widgets which have the scroll capability and they are listed below. - List-box Widget - Entrybox Widget - Canvas Widget - Text Widget The requirement is that a widget must have the yscrollcommand and/or xscrollcommand option. We’ll be exploring the Listbox and Canvas widgets here, since they are the most commonly used with the scrollbar. If you have any trouble understanding any of these other widgets, follow the links their respective articles to learn more. Listbox Scrollbar The first thing to do is create the scrollbar. This scrollbar is then used in the creation of the listbox by passing it into the yscrollcommand parameter. If you wanted scroll the listbox in the X direction you would use xscrollcommand instead. We then generated 100 values and inserted them into listbox. The scrollbar won’t trigger unless the size of values is greater than the container they are in. Finally we call the config and assign an additional option called command. The mylist.yview function activates the listbox’s scroll feature. Use xview or xview depending on the orientation of your widgets (Horizontal or vertical). from tkinter import * root = Tk() root.geometry("200x250") mylabel = Label(root, text ='Scrollbars', font = "30") mylabel.pack() myscroll = Scrollbar(root) myscroll.pack(side = RIGHT, fill = Y) mylist = Listbox(root, yscrollcommand = myscroll.set ) for line in range(1, 100): mylist.insert(END, "Number " + str(line)) mylist.pack(side = LEFT, fill = BOTH ) myscroll.config(command = mylist.yview) root.mainloop() We’ve displayed the output of the above code in the form of a GIF, shown below. Be careful when packing the Scrollbar. Since typically Scrollbars are on the right, use the side = RIGHT option and to ensure the scrollbar fills the screen, use the fill = Y (though you may want to alter this in some scenarios) Canvas Scrollbar As a bonus, we’ve created two scroll bars this time, just to show you it’s possible. You likely won’t be using this in an actual GUI. Notice the many expand = True options. Try re-sizing the Tkinter frame with these on, then remove them and try again. The last new feature here is the scroll region for the Canvas. It’s total size is technically 500 by 500, but only 300 by 300 is viewable at any given time. The scroll region should always be larger than the width and height. Ruins the purpose otherwise. from tkinter import * root = Tk() frame=Frame(root,width=300,height=300) frame.pack(expand = True, fill=BOTH) canvas = Canvas(frame,bg='white', width = 300,height = 300, scrollregion=(0,0,500,500)) hbar = Scrollbar(frame,orient = HORIZONTAL) hbar.pack(side = BOTTOM, fill = X) hbar.config(command = canvas.xview) vbar = Scrollbar(frame,orient = VERTICAL) vbar.pack(side = RIGHT, fill = Y) vbar.config(command = canvas.yview) canvas.config(width=300,height=300) canvas.config(xscrollcommand=hbar.set, yscrollcommand=vbar.set) canvas.pack(side=LEFT, expand = True, fill = BOTH) root.mainloop() The above code produces the below GUI. This marks the end of the Python Tkinter Scrollbar article. Any suggestions or contributions for CodersLegacy are more than welcome. Relevant questions regarding the article material can be asked in the comments section below. Learnt about other amazing widgets from the Tkinter Homepage!
https://coderslegacy.com/python/python-tkinter-scrollbar/
CC-MAIN-2021-21
refinedweb
674
58.99
This article is going to be different from the rest of my articles published on Analytics Vidhya – both in terms of content and format. I usually layout my article such that after a read, the reader is left to think about how this article can be implemented on grounds. In this article, I will start with a round of brainstorming around a particular type of business problem and then talk about a sample analytics based solution to these problems. To make use of this article make sure that you follow my instructions carefully. Let’s start with a few business cases: - Retail bank: Optimize primary bank branch allocation for all the customers. This is to make sure that the bank branch allotted to the customer is close to the mailing or permanent address of the customer for his convenience. This might be specially applicable, if we open a new branch and the closest branch for many existing customer changes to this new branch. - Retail Store chain: Send special offers to your loyal customers. But offers could be region specific so same offer cannot be sent to all. Hence, you first need to find the closest store to the customer and then mail the offer which is currently applicable for that store. - Credit card company who sells co-branded cards: You wish to find out all partner stores which are closest to your existing client base and then mail them appropriate offers. - Manufacturing plant: Wish to find out wholesalers near your plant for components required in manufacturing of the product. What is so common in all the problems mentioned above? Each of these problems deal with getting the distance between multiple combination of source and target destinations. Exercise : Think about at-least 2 such cases in your current industry and then at least 2 cases outside your current industry and write them in the comment section below. A common approach I have worked in multiple domains and saw this problem being solved in similar fashion which gives approximate but quick results. Exercise : Can you think of a method to do the same using your currently available data and resources? Here is the approach : You generally have a PIN CODE for both source and destination. Using these PIN CODES, we find the centroid of these regions. Once you have both the centroids, you check their latitude and longitude. You finally calculate the eucledian distance between these two points. We approximate our required distance with this number. Following figure will explain the process better : The two marked areas refers to different PIN CODES and the distance 10 kms is used as an approximate distance between the two points. Exercise : Can you think of challenges with this approach ? Here are a few I can think of : - If the point of interest is far away from the centroid, this approach will give inaccurate results. - Some times the centroid of other PIN CODE can be more closer to the point of interest than its own PIN CODE. But because it falls in area of the distant PIN CODE, we still approximate the point of interest with the centroid of distant PIN CODE. - In cases where we need finer distances than the precision of PIN CODE demarcation, this method will lead nowhere. Imagine a scenario where two branches of a bank and customer address is located in the same PIN CODE. We have no way to find the closest branch. - The distance calculated is a point to point distance and not on road. Imagine a scenario when you have two PIN Codes right next to each other but you have valley between which you need to circle around to reach destination. A manual Approach Say you have two branches and a single customer, how will you make a call between the two branches (which one is closer)? Here is a step by step approach : - You choose the first combination of branch-customer pair. - You feed the two addresses in Google Maps. - You pick the distance/time on road - You fill in the distance in the table with the combinations (2 in this case) - Repeat the same process with the other combination. How to automate this approach? Obviously, this process cannot be done manually for millions of customers and thousands of branches. But this process can be well automated (however, Google API have a few caps on the total number of searches). Here is a simple Python code which can be used to create functions to calculate the distance between two points on Google Map. Exercise : Create a table with a few sources and destinations. Use these functions to find distance and time between those points. Reply “Done without support” if you are able to implement the code without looking at the rest of the solution. Here is how we can read in a table of different source-destination combinations : Notice that we have all types of combinations here. Combination 1 is a combo of two cities. Combo 4 is a combination of two detailed address. Combo 6 is a combination of a city and a monument. Let’s now try to get the distances and time & check if they make sense. All the distance and time calculations in this table look accurate. Exercise : What are the benefits of using this approach over the PIN CODE approach mentioned above? Can you think of a better way to do this task? Here is the complete Code : import googlemaps from datetime import datetime def finddist(source, destination): gmaps = googlemaps.Client(key='XXX') now = datetime.now() directions_result = gmaps.directions(source, destination, mode="driving",departure_time=now) for map1 in directions_result: overall_stats = map1['legs'] for dimensions in overall_stats: distance = dimensions['distance'] return [distance['text']] def findtime(source, destination): gmaps = googlemaps.Client(key='XXX') now = datetime.now() directions_result = gmaps.directions(source, destination, mode="driving",departure_time=now) for map1 in directions_result: overall_stats = map1['legs'] for dimensions in overall_stats: duration = dimensions['duration'] return [duration['text']] import numpy as np import pandas as pd import pylab as pl import os os.chdir(r"C:\Users\Tavish\Desktop") cities = pd.read_csv("cities.csv") cities["distance"] = 0 cities["time"] = 0 for i in range(0,8): source = cities['Source'][i] destination = cities['Destination'][i] cities['distance'][i] = finddist(source,destination) cities['time'][i] = findtime(source,destination) End Notes GoogleMaps API come with a few limitations on the total number of searches. You can have look at the documentation, if you see a use case of this algorithm. Did you find the article useful? Share with us find more use cases of GoogleMaps API usage apart from the one mentioned in this article? Also share with us any links of related video or article to leverage GoogleMaps API. Do let us know your thoughts about this article in the box below. 6 Comments
https://www.analyticsvidhya.com/blog/2015/03/hacking-google-maps-create-distance-features-model-applications/
CC-MAIN-2017-17
refinedweb
1,133
63.19
by Michael S. Kaplan, published on 2013/02/19 07:01 -05:00, original URI: A timely follow-up to PowerShell ISE (or legacy) will do *everything* (and it's really easy to start!) and PowerShell ISE will do *everything* (IF YOU LET IT!).... After all, in that first blog I talked about Internationalization. And World-Readiness. But all I really went on about was the ISE (Integrated Scripting Environment), and how easy it was to get to, no matter how hard Windows 8 might try to make it. So where's the Internationalization? Where's the World-Readiness? Well, there is one great thing the ISE brings to the min. The .NET Framework! And its namespaces, like System.Globalization and System.Globalization.DateTimeFormatInfo and System.Globalization.NumberformatInfo. All you have to do is add a CurrentCulture or a CurrentUICulture, and stir! Of course, it is important to either So first you have to find out if the environment is set up properly when your cmdlet is called. If it is not set up properly, then you will have to figure out how to set it up.... I suppose you will have to answer that, based on your cmdlet's scenario! For example do you need your user interface to be localized? Just set the CurrentUICulture. And vdo you need to format dates and times and number and currency values? Just set the CurrentCulture. Do you need to sort lists of items? Again, just set the CurrentCulture. Easy! Do you need the two settings to be the same? Even easier! Just set CurrentCulture = CurrentUICulture! And is your cmdlet a lower level workhorse that relies oin higher level cmdlets to do the heavy lifttng? That's easiest! Just don't set a bleeding thing, and let the bloody higher level protocols that have all the bleeding answers provide all the bloody answers. They wanted to, anyway. :-) referenced by 2013/02/20 PowerShell ISE will do *everything* (but good luck finding it on Windows 8 without help!) go to newer or older post, or back to index or month or day
http://archives.miloush.net/michkap/archive/2013/02/19/10395086.html
CC-MAIN-2017-13
refinedweb
349
67.96
Air Source Water Chiller and Heat Pump Floor heating and Air-con Unit (with Super Heater) Installation and Instruction Manual CONTENT 1 Preface 2 Safety Precaution (1) Mark notes (2) Icon Notes (3) Warning (4) Attention 3 Specification (1) Appearance and Structure of the unit (2) The data of unit (3) Unit Dimension 4 Installation (1) Application of heat pump (2) Choose a right heat pump unit (3) Installation place (4) Installation method (5) Water loop connection (6) Power supply connection (7) Location of the unit (8) Transit (9) Trial Running 5 Usage (1)The displaying of the wire controllerr (2) Functions associated with the buttons (3) The operation of the wire controller 6 Maintenance (1) Maintenance (2) Ordinary malfunctions and solution 7 Appendix (1) Appendix 1 (2) Appendix 2 (3) Appendix 3 (4) Appendix 4 (5) Appendix 5 (6) Appendix 6 1 2 2 2 3 4 5 5 5 6 8 8 9 9 9 10 10 10 11 11 12 12 13 14 17 17 18 20 20 21 22 22 23 24 Preface In order to provide the customers with high quality, strong reliability and good versatility product, this heat pump is produced by strict design and manufacture standards. This manual includes all the necessary information about installation, debugging, discharging and maintenance. Please read this manual carefully before you open or maintain the unit. The manufacture of this product will not be held responsible if someone is injured or the unit is damaged, as a result of improper installation, debugging, unnecessary maintenance which is not in line with this manual. The unit must be installed by qualified personnel. It is vital that the below instructions are adhered to at all times to keep the warranty. The unit can only be opened or repaired by qualified installer or an authorised dealer. Maintenance and operation must be carried out according to the recommended time and frequency, as stated in this manual. Use genuine standard spare parts only. Failure to comply with these recommendations will invalidate the warranty. Air source water chiller and heat pump is a kind of high efficiency, energy saving and environment friendly equipment, which is mainly used for house warming. It can work with any kind of indoor unit such fan coil, radiator, or floor heating pipe, by provide warm or hot water. One unit of monobloc heat pump can also work with several indoor units. The air source water heat pump unit is designed to have heat recovery by using super heater which can provide hot water for sanitary purpose. This series of heat pump unit owns following features: 1 Advanced controlling The PC microcomputer based controller is available for the users to review or set the running parameters of the heat pump. Centralized controlling system can control several units by PC. 2 Nice appearance The heat pump is designed with beautiful looking. The monobloc one has the water pump included which is very easy for installation. 3 Flexible installation The unit has smart structure with compact body, just simple outdoor installation is needed. 4 Quiet running High quality and efficient compressor, fan and water pump is used to ensure the low noise level with insulation. 5 Good heat exchange rate The heat pump unit use special designed heat exchanger to enhance whole efficiency. 6 Large working range This series of heat pump is designed to work under different working conditions as low as -15 degrees for heating. 1 Safety Precaution To prevent the users and others from the harm of this unit, and avoid damage on the unit or other property, and use the heat pump properly, please read this manual carefully and understand the following information correctly. Mark Notes Meaning Mark A wrong operation may lead to death or heavy injury on people. WARNING A wrong operation may lead to harm on people or loss of material. ATTENTION Icon notes Icon Meaning Prohibition. What is prohibited will be nearby this icon Compulsory implement. The listed action need to be taken. ATTENTION (include WARNING) Please pay attention to what is indicated. 2 Safety Precaution Warning Meaning Installation Professional installer is required. The heat pump must be installed by qualified personals, to avoid improper installation which can lead to water leakage, electrical shock or fire. Please make sure that the unit and power connection Earthing is required have good earthing, otherwise may cause electrical shock. Meaning Operation DO NOT put fingers or others into the fans and evaporator PROHIBITION of the unit, otherwise harm may be occurred. When there is something wrong or strange smell, the power supply need to be shut off to stop the unit. Continue to run Shut off the power may cause electrical short or fire. Move and repair Entrust Meaning When the heat pump need to be moved or installed again, please entrust dealer or qualified person to carry it out. Improper installation will lead to water leakage, electrical shock, injury or fire. It is prohibited to repair the unit by the user himself, otherwise Entrust Prohibit electrical shock or fire may be occur. When the heat pump need to be repaired, please entrust dealer or qualified person to carry it out. Improper movement or repair on the unit will lead to water leakage, electrical shock, injury or fire. 3 Safety Precaution ATTENTION Meaning Installation The unit CANNOT be installed near the flammable gas. Installation Place Once there is any leakage of the gas, fire can be occur. Make sure that the basement of the heat pump is strong enough, to avoid any decline or fall down of the unit Fix the unit Make sure that there is circuit breaker for the unit, lack of Need circuit breaker circuit breaker can lead to electrical shock or fire. Meaning Operation Please check the installation basement in a period (one month), Check the installation basement to avoid any decline or damage on the basement, which may hurt people or damage the unit Please switch off the power for clean or maintenance. Switch off the power It is prohibited to use copper or iron as fuse. The right fuse must be fixed by electrician for the heat pump. Prohibition It is prohibited to spray the flammable gas to the heat pump, as it may cause fire. Prohibition 4 Specification 1 Appearance and structure of the unit Circulating fan (side discharge) Wire controller The maximum cable for the wire controller is 200 metres from the heat pump. Water inlet Water outlet 2. The data of unit Unit Model Cooling Capacity PASRW BTU/h Heating Capacity 030B 020B kW kW BTU/h 5.1 7.1 17400 24200 6.0 8.2 20500 27000 2.5 Cooling Power Input kW 1.8 Heating Power Input kW 1.5 2.0 Running Current(Cooling/Heating) A 7.8/6.5 10.9/8.7 Electrical Heater kW Power Supply Compressor Quantity Compressor Fan Quantity / / 230V~/50Hz 230V~/50Hz 1 1 Rotary Rotary 1 1 Fan Power Input W 120 120 Fan rotate speed RPM 850 850 Noise dB(A) 54 54 Water Pump Input kW 0.2 0.2 Water head m 6 8 Water Connection inch 1 1 Water Flow Volume m 3/h 0.9 1.3 Water Pressure Drop kPa 17 17 Unit Net Dimensions(L/W/H) mm See the drawing of the units Unit Shipping Dimensions(L/W/H) mm Net Weight kg Shipping Weight kg See package label See nameplate See package label Cooling: Ambient temperature:35 /24 ,Inlet/Outlet water temperature:12 /7 Heating: Ambient temperature:7 /6 ,Inlet/Outlet water temperature:30 /35 (Above information just for your reference, Please subject to nameplate on the unit 5 Specification 3 Unit dimension Models: PASRW020B 1120 Needle valve hole 645 Water outlet (1 inch) Water inlet (1inch) Electric wire hole Drainage (1inch) 1160 425 190 1130 Models: PASRW0 3 0B 1120 845 Water outlet (1 inch) Electric wire hole Water inlet (1inch) 1160 6 190 425 1130 Specification Model: PASRW040B Water outlet Water inlet 420 Drainage 1080 7 Installation 1 Application of heat pump 1.1 Only for air-con WATER COUPLING AUTOMATIC AIR VENT CHECK-VALVE FOR WATER FLEXIBLE CONNECTION FOR WATER DIRTY DRAIN WATER THERMOMETER WATER PRESSURE METER WATER FILTER 1.2 Air-con and super heater(for hot water) Water tank Sanitary water WATER COUPLING AUTOMATIC AIR VENT CHECK-VALVE FOR WATER FLEXIBLE CONNECTION FOR WATER DIRTY DRAIN WATER THERMOMETER WATER PRESSURE METER WATER FILTER 8 Installation 2 Choose a right heat pump unit 2.1 Based on the local climate condition, construction features and insulation level, calculate the required cooling(heating) capacity per square meter. 2.2 Conclude the total capacity which will be needed by the construction. 2.3 According to the total capacity needed, choose the right model by consulting the heat. Pump features as below: 1. Heat pump features Cooling only unit chilled water outlet temp. at 5-15 , maximum ambient temp. at 43 . Heating and Cooling unit for cooling chilled water outlet temp. at 5-15 ,maximum ambient temp. at 43 . For heating, warm water inlet temp. at 40-50 , minimum ambient temp. at -10 . 2. Unit application Air source water chiller and heat pump is used for house, office, hotel, and so forth, which need heating or cooling separately, with each area need to be controlled. 3 Installation place The unit can be installed on any place outdoor which can carry heavy machine such as terrace, housetop, ground and so on. The location must have good ventilation. The place is free from heat radiation and other fire flame. A pall is needed in winter to protect the heat pump from snow. There must be not obstacles near the air inlet and outlet of the heat pump. A place which is free from strong air blowing. There must be water channel around the heat pump to drain the condensing water . There must be enough space around the unit for maintenance. 4 Installation method The heat pump can be installed onto the concrete basement by expansion screws, or onto a steel frame with rubber feet which can be placed on the ground or housetop. Make sure that the unit is placed horizontally. 9 Installation 5 Water loop connection Please pay attention to below matters when the water pipe is connected: Try to reduce the resistance to the water from the piping. The piping must be clear and free from dirty and blocks. Water leakage test must be carried out to ensure there is no water leaking. And then the insulation can be made. Attention that the pipe must be tested by pressure separately. DO NOT test it together with the heat pump. There must be expansion tank on the top point of the water loop, and the water level in the tank must be at least 0.5 meter higher than the top point of the water loop. The flow switch is installed inside of the heat pump, check to ensure that the wiring and action of the switch is normal and controlled by the controller. Try to avoid air stayed inside of the water pipe, and there must be air vent on the top point of the water loop. There must be thermometer and pressure meter at the water inlet and outlet, for easy inspection during running. 6 Power supply connection Open the front panel, and open the power supply access. The power supply must go through the wire access and be connected to the power supply terminals in the controlling box. Then connect the 3-signal wire plugs of the wire controller and main controller. If the outside water pump is needed, please insert the power supply wire into the wire access also and connect to the water pump terminals. If an additional auxiliary heater is need to be controlled by the heat pump controller, the relay (or power) of the aux-heater must be connected to the relevant output of the controller. 7 Location of the unit Outlet water ATTENTION Requirement A>500mm Inlet water B>1500mm C>1000mm D>500mm 10 D wall wall C wall Air outlet B Air inlet wall A Maintenance space Installation 8 Transit When the unit need to be hung up during installation, a 8 metres cable is needed, and there must be soft material between the cable and the unit to prevent damage to the heat pump cabinet. (See picture 1) Picture 1 WARNING DO NOT touch the heat exchanger of the heat pump with fingers or other objects 9 Trial Running Inspection before trial running Check the indoor unit, and make sure that the pipe connection is right and the relevant valves are open . Check the water loop, to ensure that the water inside of the expansion tank is enough, the water supply is good, the water loop is full of water and without any air. Also make sure there is good insulation for the water pipe. Check the electrical wiring. Make sure that the power voltage is normal, the screws are fastened, the wiring is made in line with the diagram, and the earthing is connected. Check the heat pump unit including all of the screws and parts of the heat pump to see if they are in good order. When power on, review the indicator on the controller to see if there is any failure indication. The gas gauge can be connected to the check valve to see the high pressure(or low pressure) of the system during trial running. Trial running Start the heat pump by press " or " key on the controller. Check whether the water pump is running, if it runs normally there will be 0.2 MPa on the water pressure meter. When the water pump runs for 1 minutes, the compressor will start. Hear whether there is strange sound from the compressor. If abnormal sound occurs please stop the unit and check the compressor. If the compressor runs well please look for the pressure meter of the refrigerant. Then check whether the power input and running current is in line with the manual. If not please stop and check. Adjust the valves on the water loop, to make sure that the hot(cool) water supply to each door is good and meet the requirement of heating(or cooling). Review whether the outlet water temperature is stable. The parameters of the controller are set by the factory, it is not allowed to change then by user himself. 11 Usage 1.The operation manual of the wire controller 2 C SE CAREDL 1 2 3 4 1 Prg mute clear Sel About buttons Button Prg Meaning mute Pressing this button could returning back to the previous interface. Sel Function1 continually pressing this button could enter into the set interface. Function2 pressing this button could enter into the next interface. Function1 continually pressing this button could start the heating mode. Function2 pressing this button could turn up and increase the value. Function1 continually pressing this button could start the cooling mode. Function2 pressing this button could turn down and increase the value. 2 About icons Ico Ico Meaning Meaning Compressor1 and 2 start up Defrosting Compressor3 and 4 start up Electrical heater start up At least one compressor starts up Warning Water pump starts up Cooling mode Condensate fan starts up Heating mode 12 Usage 2. Functions associated with the buttons Button Unit status Switch off buzzer or alarm relay, if alarm active Press once Manual reset of alarms that are no longer active Press for 5 s Enter parameter programming mode after entering password Press once Return to higher subgroup inside the programming environment until exiting, saving to EEPROM Press once Select higher item inside the programming environmen Switch from standby to heat pump mode (P6= 1) and vice-versa Access direct parameters:selection (as for button on Uc2) Select item inside the programming environment and display direct parameter values/confirm the changes to the parameter Select lower item inside the programming environmen Switch from standby to chiller mode (P6= 1) and vice-versa + Button operation Press once or hold Press for 5 s Press for 5 s Press once Press once or hold Press for 5 s Immediately reset the hour counter (inside the programming environment) Press for 5 s Start manual defrost on both circuits Press for 5 s Display the terminal Info screen Press for 6 s 13 Usage 3.The operation of the wire controller 3.1 Turn on/off On the state of power off, pressing(for 5s) or could start the unit. The screen displays the mode and water inlet temperature On the state of power on pressing(for 5s) or could turn off the unit. For the corresponding modes please refer to the corresponding buttons. The screen shows the water inlet temperature. Press the system will be heating; Press the system will be heating. Water inlet temperature Water inlet temperature CAREDL 2 C SE CAREDL 2 C SE Prg mute 1 2 Continually press clear Sel mute clear Sel On the state of standby CAREDL Prg On the state of power on 2 C SE CAREDL 2 C SE Prg mute 1 2 Continually press clear Sel Prg mute clear Sel 3.2 Check the parameters On the state of power on or power off, the measured temperature of B01-B04 could be checked. Press or to enter into the temperature interface. Press (Up)or Prg to return (Down) to find the needed temperature, then press Sel to check it. Press mute back to the previous interface. B01:Inlet water temp. B02:Outlet water temp. B03:Coil temp. B04:Ambient temp. CAREDL 2 C SE CAREDL 1 2 2 C SE Prg Prg mute clear press mute clear Sel Sel or Press CAREDL 2 C SE CAREDL Prg Press mute Sel The water inlet temperature is 20.8 2 C SE Prg Prg mute mute clear clear Sel back to the former interface 14 Sel Usage 3.3 Temperature setting On the state of power on or power off, you could set the heating or cooling temperature. Press Sel and hold for 5s to enter into the parameter setting interface. Press (Up)or (Down) to choose the needed setting. Press Sel to confirm. Press Sel again to enter into the corresponding parameter setting interface. Pressing or could increase or decrease the value of the parameters. Press Sel conserve settings.Back the previous interface press Prg . mute The setting of parameters will effect the performance and efficiency of the unit. Do not make any change of them when it is not necessary. r01/r02/r03/r04 are permitted to set by the users. For the default values please refer to Parameters Table . The permitted setting are shown as below (take r01 for example) CAREDL 2 C SE CAREDL 1 2 Prg mute mute Continually press clear clear Sel Sel Sel press select "r" Cooling temperature set value CAREDL 2 C SE Prg 2 C SE CAREDL Prg 2 C SE Prg Sel Press mute or mute clear clear Sel Sel Press CAREDL Sel 2 C SE CAREDL 2 C SE Prg Prg Press mute mute clear Sel clear Sel or press Sel to save setting CAREDL 2 C SE Prg Prg mute CAREDL 2 C SE Press mute to return back to Prg the fomer interface Sel mute clear clear Sel 15 Usage 3.4 Malfunction When there is something wrong with the unit, the wire controller will display the error code according to the fault reason. For the detailed meanings of the error codes please refer to the Fault Table. For example: Water inlet fault CAREDL 2 C SE Prg mute clear Sel 16 Maintenance 1 Maintenance Check the water supply and air vent frequently, to avoid lack of water or air in the water loop. Clean the water filter in a certain period to keep good water quality. Lack of water and dirty water can damage the unit. The heat pump will start the water pump per 72 hours when it is not running, to avoid freezing. to keep it good. Please drain the water from the lowest point of the heat exchanger to avoid freezing in winter. Water recharge and full inspection on the heat pump is needed before it is restarted. Please drain out the water in the super heater of the heat pump unit in winter, when the super heater is not used. The water loop of the heat pump MUST be protected from freezing in winter time. Please pay attention to below suggestions. Nonobservance on below suggestion will invalid the warranty for the heat pump. (1) Please do not shut off the power supply to the heat pump in winter. When the air temperature is below 0 , if the inlet water temperature is above 2 and below 4 , the water pump will start for freezing protect, if the inlet water is lower than 2 , the heat pump will run for heating. (2) Use anti-freezing liquid (glycol water) 1) look for below table for the volume of the glycol water 2) the glycol water can be added into the system from the expansion tank of the water loop. Glycol percentage % ambient temp. ( ) 10 20 30 40 50 -3 -8 -14 -22 -33 cooling/heating capacity fluctuation 0.991 0.982 0.972 0.961 0.946 power input fluctuation 0.996 0.992 0.986 0.976 0.966 water flow fluctuation 1.013 1.040 1.074 1.121 1.178 water drop fluctuation 1.070 1.129 1.181 1.263 1.308 Note: if the glycol water is too much, the water flow and water pump will be influenced and the heat exchange rate will be decreased. This table is for reference, please use antifreezing water according to the real condition of the local climate. 17 Maintenance 2.Ordinary malfunctions and solution 1) According to failure code of the controller, we can judge and solute the failure. Display Reason Water inlet temp. Sensor failure E1 The sensor is open or short circuit Check or change the sensor Water outlet temp. Sensor failure E2 The sensor is open or short circuit Check or change the sensor Evaporator sensor failure E3 The sensor is open or short circuit Check or change the sensor Ambient sensor failure E4 The sensor is open or short circuit Check or change the sensor Anti freezing under cooling mode A1 Water flow rate is not enough Check the water flow volume,or water system is jammed or not The antifreezing protection in winter A1 Ambient temp. too low Flow switch failure FL No water/little water in water system. Check the water flow volume, water pump is failure or not Malfunction Resolution Normal working High pressure protect HP1 High pressure switch action Check through each pressure switch and return circuit Low pressure protect LP1 Low pressure switch action Check through each pressure switch and return circuit Exhaust temperature protect tC1 Exhaust temperature is too high Check through each temp. switch and return circuit 18 Maintenance 2) Look over and clear the failure according to below information. Possible causes for the failure Failure Heat pump cannot be started Water pump is running with high noise or 1 Wrong power supply 2 power supply cable loose 3 circuit breaker open 1 shut off the power and check power supply; 2 check power cable and make right connection 3 check for the cause and replace the fuse or circuit breaker 1 2 3 4 1 check the water supply and charge water to the piping; 2 discharge the air in the water loop; 3 open the valves in water loop; 4 clean the water filter. lack of water in the piping much air in the water loop water vavles closed dirt and block on the water filter without water Heat pump capacity is low, compressor do not stop 1 lack of refrigerant; 2 bad insulation on water pipe; 3 low heat exchange rate on air side exchanger; 4 lack of water flow 1 check for the gas leakage and recharge the refrigerant; 2 make good insulation on water pipe; 3 clean the air side heat exchanger; 4 clean the water filter High compressor 1 too much refrigerant 2 low heat exchange rate on air side exhaust exchanger Low pressure 1 lack of gas problem of the system Compressor do not run High noise of compressor Fan do not run 1 discharge the redundant gas 2 clean the air side heat exchanger 1 check the gas leakage and recharge freon; 2 replace filter or capillary; 3 clean the water filter and discharge the air in water loop. 1 check off the power supply; 2 replace compressor contactor; 3 tighten the power cable; 4 check the compressor exhaust temp.; 5 reset the return water temp.; 6 clean the water filter and discharge the air in water loop. 2 block on filter or capillary 3 lack of water flow 1 2 3 4 5 6 power supply failure compressor contactor broken power cable loose protection on compressor wrong setting on return water temp. lack of water flow 1 liquid refrigerant goes into compressor 2 compressor failure 1 bad evaporation, check the cause for bad evaporation and get rid of this; 2 use new compressor; 1 failure on fan relay 2 fan motor broken 1 replace the fan relay; 2 replace fan motor. 1 no gas in the heat pump; 2 heat exchanger broken; 3 compressor failure. 1 check system leakage and recharge refrigerant; 2 find out the cause and replace the heat exchanger; 3 replace compressor. 1 low water flow rate; 2 low setting for the desired water temp.; 1 clean the water filter and discharge the air in water loop. 2 reset the desired water temperature. 1 lack of water in the system; 2 failure on flow switch 1 clean the water filter and discharge the air in water loop. 2 replace the flow switch. The compressor runs but heat pump has not Solutions heating or cooling capacity Low outlet water temperature Low water flow protection 19 Appendix Appendix 1 Install sketch map Especial installation£¨expandable water tank£© Legend explanation 1 main unit 2 fan coil 3 rubber flexible connection 4 thermometer 5 pressure meter 6 filter similar as "Y" 7 check valve 8 ball valve 9 flow meter 10 bypass valve 11 drain 12 filter 13 two-way valve 14 three-way valve 15 automatic ventilation 16 water pump 17 ball valve 18 ball valve 19 the close and expandable water tank 20 automatically filled-water automatic fill-water valve Pressure leakage valve '2/1 Technical request 1.Each connection must be connected tightly and have no leakage. 2.the arrowhead orientation of automatic filled- water must accord with water supply. 3.The pressure of automatic filled-water has been set,and please do not remove screw. Connect supply pipe '2/1 inlet outlet drain Installation request£º 1 The factory only offers main unit (0 and 1)in the legend, and the other modules which are indispensable fittings, are provided by users or installation company. 2 The unit which of code contains the letter "B" ,has water pump inside and need not install water pump outside £¨16£© 3 Automatic ventilation£¨15£©is installed on the top point of the water system¡£ 4 The quantity proportion of two-way valve£¨13£©and three-way valve£¨14£©is referred to the technical regulation, and there is three-way valve installed on the farthest place of water system. 5 The ball valve (17) is used when it is swashed, filled water in the water system and so on. 20 Appendix Appendix 2 The installation explanation of automatic filled-water 1 When automatic filled-water valve is installed ,the arrowhead orientation of inlet water must accord with the orientation of valve ; 2 Automatic filled-water has been adjusted in advance to 1.5bar 3 If readjust the pressure of inlet water, please operate as follows * open the screw cap C * If reduce the pressure of water supply, please unscrew the pressure to adjust the screw(B) * If increase the pressure of water supply, please screw down the pressure to adjust the screw (B) 4 When the system need fill water at first, wrest the handle(A) of filled-water.Then the handle(A) can return(close) when the system is full of water. 5 Automatic filled-water Valve need clean in a periodic time and then you must close the tap, unscrew the plug(D),remove the inside filter net. Please assemble them again after cleaning. NOTICE There are two connections for water pressure meter in the central section of automatic filled-water,where the water pressure meter can be connected directly and display the set pressure. The screw cap(C) must be tweaked after adjusting the filledwater pressure. B C 1/4' 1/2' D A 21 Appendix Appendix 3 The installation explanation of the leakage pressure valve. 1 The action pressure of leakage pressure valve 'is more than 3bar(valve is open), but the pressure can not be adjusted. 2 The valve will open automatically to make sure that the water loop of air-con system is safe when the water pressure in the backwater side is higher than the set pressure. outlet inlet Appendix 4 The way of assistant heat source connection Unit provides the connection of assistant heat source which can not be only for gas-fired boiler, but also for electronic boiler or warm-net pipe for city accordingly. The way to the connection is as follows: 1 water chiller and heat pump+assistant gas-fired boiler water chiller and heat pump inlet inlet outlet cable Three-way valve outlet control wire 2 gas-fired boiler water chiller and heat pump+assistant electronic boiler water chiller and heat pump inlet inlet outlet cable electronic boiler outlet 22 Appendix Appendix5 The unit's parameter Please set according the below table: Par Description Limits R01 Cooling set-point 12 R02 Cooling differential 2 R03 Heating set-point 40 R04 Heating differential 2 23 Unit Appendix Appendix 6 Cable specification 1. Single phase unit Nameplate maximum current No more than 10A 10~16A 16~25A 25 ~32A 32 ~40A 40 ~63A 63~75A 75~101A 101~123A 123~148A 148~186A 186~224A Phase line 2 2 2 2 2 2 2 2 2 2 2 2 MCB Creepage protector 20A 32A 40A 40A 63A 80A 30mA less than 0.1 sec Signal line 30mA less than 0.1 sec 30mA less than 0.1 sec 250A 280A MCB Creepage protector 20A 32A 40A 40A 63A 80A 100A 125A 160A 30mA less than 0.1 sec 30mA less than 0.1 sec 30mA less than 0.1 sec 30mA less than 0.1 sec 30mA less than 0.1 sec 100A 125A 160A 225A n 0.5mm2 2. Three phase unit Nameplate maximum current No more than 10A 10~16A 16~25A 25 ~32A 32 ~40A 40 ~63A 63~75A 75~101A 101~123A 123~148A 148~186A 186~224A Phase line 3 3 3 3 3 3 3 3 3 3 3 3 30mA less than 0.1 sec 30mA less than 0.1 sec 30mA less than 0.1 sec 30mA less than 0.1 sec Signal line n 0.5mm2 225A 30mA less than 0.1 sec 250A 30mA less than 0.1 sec 280A 30mA less than 0.1 sec When the unit will be installed at outdoor, please use the cable which can against UV. 24 Code 20130308-0001
http://manualzz.com/doc/48721845/%E9%87%91%E5%88%9A%E7%83%AD%E6%B3%B5%E8%AF%B4%E6%98%8E%E4%B9%A6%EF%BC%88%E4%B8%AD%E6%80%A7%E8%8B%B1%E6%96%87-r410a-%E9%87%91%E5%88%9A-d%E6%AC%BE-2-4p
CC-MAIN-2018-17
refinedweb
5,270
58.42
You can use Flash to create XML content, without the need to load it first from an external file. You may want to do this to send an XML fragment to an external application. Ive used this process to request information from external systems. The request required an XML document, so I created the content within Flash and used the sendAndLoad method to send the XML packet to the application. Youll learn more about sending information from Flash later in this chapter. There are two ways to create XML content from Flash. Either you can create an XML string and add it to your XML object, or you can use methods such as createElement and createTextNode to generate the structure programmatically. Well start by looking at the first approach. The easiest way to create a simple XML fragment is by creating an XML string, as shown here. When you do this, Flash wont determine whether the XML is well formed or valid. var myXML:XML = new XML("<name>Sas</name>"); In the example, the XML object contains a single XML node, <name> . Where the string is longer, it can be useful to add it to a variable, as shown here: var XMLString:String; XMLString = "<login><name>Sas</name><pass>1234</pass></login>"; var myXML:XML = new XML(XMLString); You can also use the parseXML method to add content to an existing XML object: var XMLString:String; XMLString = "<login><name>Sas</name><pass>1234</pass></login>"; var myXML:XML = new XML(); myXML.parseXML(XMLString); The parseXML method replaces any existing content within the XML tree, which means it isnt a good candidate where you need to preserve the current tree. My preference is for the first approach so that I dont overwrite any XML content by mistake, but either method will achieve the same result. Whichever method you choose, it might be better to set out the XML string so that you can read it more easily. In this example, its easier to read the nodes within the XML fragment compared with the earlier example: XMLString = "<login>"; XMLString += "<name>Sas</name>"; XMLString += "<pass>1234</pass>"; XMLString += "</login>"; The file createXMLstring.fla contains the example shown here. The XMLString variable stores the XML content in string format. The code adds the variable to the myXML object and displays it in the Output window. var XMLString:String; XMLString = "<login>"; XMLString += "<name>Sas</name>"; XMLString += "<pass>1234</pass>"; XMLString += "<browser>IE</browser>"; XMLString += "<os>PC</os>"; XMLString += "</login>"; var myXML:XML = new XML(XMLString); trace(myXML); Figure 4-26 shows the movie when tested . Once youve created the XML object, you can extract the content using the properties discussed earlier. For example, the line that follows creates the output shown in Figure 4-27. trace(myXML.firstChild.firstChild.firstChild.nodeValue); You need to be careful if your XML string includes attributes. Well-formed XML documents use quotation marks around attribute values, and these will cause problems in your code unless you escape them with the backslash character ( \ ). Heres an example. Ive escaped the quotes around the attribute value "first" with backslashes so it reads \"first\" . You can find this example in the resource file createXMLString.fla . var XMLQuotesString:String;Sas</name>"; XMLString += "<pass>1234</pass>"; XMLString += "</login>" var myQuotedXML:XML = new XML(XMLString); trace(myQuotedXML); You can see that its easy to create content using strings of XML information. An alternative method is to create XML content with ActionScript, using methods of the XML class. The methods of the XML class that you can use to create XML content include createElement createTextNode appendChild insertBefore cloneNode removeNode Well look at the cloneNode and removeNode methods later in the chapter when we look at modifying existing XML content. You add a new element by creating it and either appending or inserting it into your document tree. This example shows how you can create an element: var myXML:XML = new XML(); var RootNode:XMLNode = myXML.createElement("login"); When you use the createElement method, it doesnt have a position in the document tree. You will have to use either the appendChild or insertBefore method to place it in the tree. The appendChild method adds the node at the end of the current childNodes collection. The next example uses this method to add a new child to the XML object. In fact, as its the first child of the XML object, were adding the root node of the XML document. myXML.appendChild(RootNode); If you want to use the insertBefore method, the parent node will have to have at least one existing child node within the document tree. This example shows how to use insertBefore : var BrowserNode:XMLNode = myXML.createElement("browser"); var OSNode:XMLNode = myXML.createElement("os"); RootNode.appendChild(OSNode); RootNode.insertBefore(BrowserNode, OSNode); In our earlier XML string example, we worked with the following XML structure: <login> <name>Sas</name> <pass>1234</pass> <browser>IE</browser> <os>PC</os> </login> The next example uses the appendChild method to create the same XML structure. At the end, it traces the document tree in an Output window. You can see the example in the resource file createXMLMethods.fla .(NameNode); RootNode.appendChild(PassNode); RootNode.appendChild(BrowserNode); RootNode.appendChild(OSNode); trace (myXML); Figure 4-28 shows how the movie appears when tested. Note that the child elements are empty because we havent yet added any text elements. I could have achieved the same result by using the insertBefore method. Ive shown this in the example that follows, and it is also available in the createXMLMethods.fla resource file. Youll have to uncomment the relevant lines within the file if you want to test the code.(OSNode); RootNode.insertBefore(BrowserNode, OSNode); RootNode.insertBefore(PassNode, BrowserNode); RootNode.insertBefore(NameNode, PassNode); trace (myXML); If you traced the document tree code, it would appear identical to the output in Figure 4-28. The two methods achieve the same result, but there is one difference. Using insertBefore , I have to start with the last child node in the tree and work my way up through the child nodes. If I use appendChild , I start at the beginning and work my way down to the last child node. You can compare both examples in the resource file createXMLMethods.fla . To complete the document tree, we need to add text to the child nodes. Text nodes are child nodes of the parent element node. In this line, for example, Child text is a child node of the <pElement> node: <pElement>Child text</pElement> In Flash, Id refer to it using one of these two lines: pElementNodeRef.firstChild; pElementNodeRef.childNodes[0]; You use the createTextNode method to add text to an element: var myXML:XML = new XML(); var TextNode:XMLNode = myXML.createTextNode("Some text"); As with element nodes, when you first create a text node it has no position in the document tree. You will need to use the appendChild method to add this node into the XML object. In Flash, a text node is always a child of an element node. You can see this in the following example: var myXML:XML = new XML(); var RootNode:XMLNode = myXML.createElement("login"); var ChildNode:XMLNode = myXML.createElement("name"); var NameTextNode:XMLNode = myXML.createTextNode("Sas"); myXML.appendChild(RootNode); RootNode.appendChild(ChildNode); ChildNode.appendChild(NameTextNode); In the example, weve created a root node and child node and appended them to the document tree. The createTextNode method creates a text node containing Sas and appends it as a child of ChildNode . Figure 4-29 shows how myXML would appear if shown in the Output window. Earlier we created the structure for the login XML document. We generated the elements and the code here illustrates how you could add the text nodes. In this example, Ive used the appendChild method for all nodes. Ive shown the new lines in bold."); var NameTextNode:XMLNode = myXML.createTextNode("Sas Jacobs"); var PassTextNode:XMLNode = myXML.createTextNode("1234"); var BrowserTextNode:XMLNode = myXML.createTextNode("IE"); var OSTextNode:XMLNode = myXML.createTextNode("PC"); myXML.appendChild(RootNode); RootNode.appendChild(NameNode); RootNode.appendChild(PassNode); RootNode.appendChild(BrowserNode); RootNode.appendChild(OSNode); NameNode.appendChild(NameTextNode); PassNode.appendChild(PassTextNode); BrowserNode.appendChild(BrowserTextNode); OSNode.appendChild(OSTextNode); trace (myXML); You can find the example in the resource file createXMLMethodsText.fla . Figure 4-30 shows how this example appears when tested. If you compare the number of lines of code that it took to create this output with the example that used an XML string, youll see that its a much longer way of creating a new document tree. Given that it takes more work, why would you use XML methods to create a new document? Well, you can use the methods shown in this section to work with an existing document tree, so its worthwhile getting a good understanding of how they work. In the next section, Ill look at how you can add attributes to the document tree using ActionScript. Adding attributes to elements within Flash is very easy. You just set the name and value of the attribute as shown here: var myXML:XML = new XML(); var RootNode:XMLNode = myXML.createElement("login"); var ChildNode:XMLNode = myXML.createElement("name"); myXML.appendChild(RootNode); RootNode.appendChild(ChildNode); ChildNode.attributes.type="first"; I could also have written the last line using associative array notation: ChildNode.attributes["type"]="first"; You can see this example in the resource file createXMLMethodsAttributes.fla . Figure 4-31 shows the Output window that displays when testing the movie. In the examples weve worked with so far in this section, we havent included an XML declaration in the document tree. Youll recall that this declaration is optional, but some external applications may require that you include it when you send XML content out of Flash. The next section shows you how to add this declaration to your XML packet. The xmlDecl property allows you to set or read the XML declaration within the XML document tree. It doesnt matter whether youve created the document using an XML string or using XML methods. When you first create the document within Flash, the value of the xmlDecl property is set to undefined . You can add an XML declaration by setting the property as shown here. Notice that Ive escaped the quotation marks with a backslash character. If I dont do this, Ill get an error message in Flash. var myXML:XML = new XML(); myXML.xmlDecl = "<?xml version=\"1.0\" encoding=\"UTF-8\"?>"; I dont need to place the declaration in the document tree as Flash will automatically add it in the appropriate place once Ive set the value. You can see an example of this in the resource file createXMLExtras.fla . Its also shown in Figure 4-32 in the next section. Flash includes the docTypeDecl property so that you can read and set a reference to a DTD. A DTD allows you to validate your XML document, but Flash wont try to perform any validation when it detects a reference to this type of file. Flash cant validate XML documents as it contains a nonvalidating parser. You can set a reference to a DTD by using the following code. As with the XML declaration, I had to escape the quotes in the declaration. var myXML:XML = new XML(); myXML."; Again, Flash automatically places this declaration at the correct position in the document tree. You can see an example in the resource file createXMLExtras.fla . Figure 4-32 shows the content of the document tree from the example file. Youve seen two different approaches to generating XML content within Flash: using an XML string and using XML class methods. While the XML string approach is quicker, it will replace any existing content within an XML object. If youre manipulating an existing XML tree, you will have to rely on the methods of the XML class. There are some limits to these XML class methods. You may have noticed that we havent used Flash methods to add XML processing instructions. Thats because this is not possible in Flash. Unfortunately, it might be required if you need to create an XML document for an external application that includes a reference to a style sheet. To achieve this, youll have to create the XML document using an XML string. Similarly, if you need to include namespaces or schema references in the XML document, youll also have to use an XML string to create the document tree. The next example shows how you can include these elements in your document tree. Youll need to escape the quotes in the style sheet declaration and in the schema declarations within the <login> element. You can see the example in the resource file createXMLOther.fla . var XMLString:String;"; XMLString +="<login>"; XMLString +="<name>Sas</name>"; XMLString +="<pass>1234</pass>"; XMLString +="<browser>IE</browser>"; XMLString +="<os>PC</os>"; XMLString +="</login>"; var myXML:XML = new XML(XMLString); Figure 4-33 shows the output when you trace the contents of myXML . In the preceding section, you learned how to create XML tree structures in two different ways. We used an XML string to add content to the document tree. We also used XML class methods to add elements programmatically. In the example that follows, well use a combination of both approaches to create an XML document within Flash. In this example, well use Flash to create the XML document tree shown here. It is a cut-down version of the address.xml file that youve seen previously. Well start by adding the XML declaration and root node using an XML string. Well add the contacts using methods of the XML class. <?xml version="1.0" encoding="UTF-8"?> <phoneBook> <contact id="1"> <name>Sas Jacobs</name> <address>Some Country</address> <phone>123 456</phone> </contact> <contact id="2"> <name>John Smith</name> <address>Another Country</address> <phone>456 789</phone> </contact> </phoneBook> Create a new Flash document called createAddressXML.fla . Name the first layer actions and add the following code to frame 1. The code creates an XML string and an XML declaration for the XML object myXML . var XMLString:String = "<phoneBook/>"; var myXML:XML = new XML(XMLString); myXML.xmlDecl = "<?xml version=\"1.0\" encoding=\"UTF-8\"?>"; trace (myXML); Test the movie. You should see an Output window similar to the example shown in Figure 4-34. The window displays an XML declaration and an empty root node. Figure 4-34: Displaying the XML declaration and root node Add the following arrays to the actions layer. The arrays contain the content for the XML document. Storing the information in arrays allows us to add content to the XML tree within a loop. var arrNames:Array = new Array("Sas Jacobs","John Smith"); var arrAddress:Array = new Array("Some Country", "Another Country"); var arrPhone:Array = new Array("123 456", "456 789"); Create the <contact> nodes as shown here. Ive added the id attribute to the node and set it to be one more than the value of i . In other words, the id will start at 1 . I also created the variables that Ill need a little later. var ContactNode:XMLNode; var NameNode:XMLNode; var AddressNode:XMLNode; var PhoneNode:XMLNode; var TextNode:XMLNode; for (var i:Number=0; i < arrNames.length; i++) { ContactNode = myXML.createElement("contact"); ContactNode.attributes.id = i + 1; myXML.firstChild.appendChild(ContactNode); } Test the movie. You should see something similar to the screenshot displayed in Figure 4-35. Figure 4-35: Displaying the contact nodes and attributes Modify the for loop as shown in the bold lines in the following code. Weve created the child elements and text elements and appended them to the <contact> nodes. Ive added spaces to make the blocks easier to understand. for (var i:Number=0; i < arrNames.length; i++) { ContactNode = myXML.createElement("contact"); ContactNode.attributes.id = i + 1; myXML.firstChild.appendChild(ContactNode) NameNode = myXML.createElement("name"); AddressNode = myXML.createElement("address"); PhoneNode = myXML.createElement("phone"); TextNode = myXML.createTextNode(arrNames[i]); NameNode.appendChild(TextNode); ContactNode.appendChild(NameNode); TextNode = myXML.createTextNode(arrAddress[i]); AddressNode.appendChild(TextNode); ContactNode.appendChild(AddressNode); TextNode = myXML.createTextNode(arrPhone[i]); PhoneNode.appendChild(TextNode); ContactNode.appendChild(PhoneNode); } Test the movie. You should see something similar to the example shown in Figure 4-36. Weve used Flash to create the XML document shown earlier. You can see the completed example in the resource file createAddressXML.fla . Figure 4-36: Displaying the finished XML file In addition to creating XML document trees within Flash, its important to be able to manipulate existing trees and modify the content that they contain. You might need to do this if youre allowing a user to change the values in your Flash XML application. You need to apply the changes to the document tree, so well look at that in the next section.
https://flylib.com/books/en/1.350.1.40/1/
CC-MAIN-2021-39
refinedweb
2,782
56.76
I understand how to use the JNDIView MBean via the jmx-console to browse the global JNDI and the ENC contexts of EJBs. How can one browse the ENC contexts (i.e. java:comp namespace) of web applications? I have tried searching the web, the wiki and these forums without any luck. Thanks, Ray I don't think that is possible. I think that each ENC is local, and private, to the application. For this reason, I always include a simple jsp in my application that I can use to get at the application's ENC. The jsp is posted at I do not see why the ENC for an EJB would be accessible via the JNDIView but the ENC for a web application would not be. From a J2EE perspective the ENC for an EJB or web application is equally local and private. I think you may posted the wrong link, that contains a JSP for displaying the system properties. Not that I do not appreciate your effort. :-) Oh shoot, I searched for the wrong code! I have two common jsps, one for the system properties, the other for JNDI. I thought I had a version of the JNDI jsp that was all-inclusive, but I can't seem to find it (perhaps on my laptop at home). The simplest version I can find is one that uses servlets, JSPs and the JSTL (I usually port this code to whatever framework I am studying). If interested, I could post it (without comments it's not too long, but I do have to clean it up some).
https://developer.jboss.org/thread/78818
CC-MAIN-2018-17
refinedweb
267
80.41
If we want to find numbers on lines that start with the string “X-” such as: X-DSPAM-Confidence: 0.8475X-DSPAM-Probability: 0.0000 We don’t just want any floating point numbers from any lines. We only to extract numbers from lines that have the above syntax. We can construct the following regular expression to select the lines: ˆX-.*: [0-9.]+ Translating this, we are saying, we want lines that start with “X-” followed by zero or more characters “.*” followed by a colon (“:”) and then a space. After the space we are looking for one or more characters that are either a digit (0-9) or a period “[0-9.]+”. Note that in between the square braces, the period matches an actual period (i.e. it is not a wildcard between the square brackets). This is a very tight expression that will pretty much match only the lines we are interested in as follows: import re hand = open('mbox-short.txt') for line in hands: line = line.rstrip() if re.search('ˆX\S*: [0-9.]+', line) : print line When we run the program, we see the data nicely filtered to show only the lines we are looking for. X-DSPAM-Confidence: 0.8475X-DSPAM-Probability: 0.0000X-DSPAM-Confidence: 0.6178X-DSPAM-Probability: 0.0000 But now we have to solve the problem of extracting the numbers using split. While it would be simple enough to use split, we can use another feature of regular expressions to both search and parse the line at the same time. Parentheses are another special character in regular expressions. When you add parentheses to a regular expression they are ignored when matching the string, but when you are using findall(), parentheses indicate that while you want the whole expression to match, you only are interested in extracting a portion of the substring that matches the regular expression. So we make the following change to our program: import re hand = open('mbox-short.txt')for line in hand: line = line.rstrip() x = re.findall('ˆX\S*: ([0-9.]+)', line) if len(x) > 0 : print x Instead of calling search(), we add parentheses around the part of the regular expression that represents the floating point number to indicate we only want findall() to give us back the floating point number portion of the matching string. The output from this program is as follows: ['0.8475']['0.0000']['0.6178']['0.0000']['0.6961']['0.0000'].. The numbers are still in a list and need to be converted from strings to floating point but we have used the power of regular expressions to both search and extract the information we found interesting. As another example of this technique, if you look at the file there are a number of lines of the form: Details: If we wanted to extract all of the revision numbers (the integer number at the end of these lines) using the same technique as above, we could write the following program: import re hand = open('mbox-short.txt') for line in hand: line = line.rstrip() x = re.findall('ˆDetails:.*rev=([0-9.]+)', line) if len(x) > 0: print x Translating our regular expression, we are looking for lines that start with “Details:’, followed by any any number of characters “.*” followed by “rev=” and then by one or more digits. We want lines that match the entire expression but we digits as possible before extracting those digits. This “greedy” behavior is why we get all five digits for each number. The regular expression library expands in both directions until it counters a non-digit, the beginning, or the end of a line. Now we can use regular expressions to re-do an exercise from earlier in the book where we were interested in the time of day of each mail message. We looked for lines of the form: From stephen.marquard@uct.ac.za Sat Jan 5 09:14:16 2008 And wanted to extract the hour of the day for each line. Previously we did this with two calls to split. First the line was split into words and then we pulled out the fifth word and split it again on the colon character to pull out the two characters we were interested in. While this worked, it actually results in pretty brittle code that is assuming the lines are nicely formatted. If you were to add enough error checking (or a big try/except block) to insure that your program never failed when presented with incorrectly formatted lines, the code would balloon to 10-15 lines of code that was pretty hard to read. We can do this far simpler with the following regular expression: ˆFrom .* [0-9][0-9]: The translation of this regular expression is that we are looking for lines that start with the two digits as follows: ˆFrom .* ([0-9][0-9]): This results in the following program: import rehand = open('mbox-short.txt')for line in hand: line = line.rstrip() x = re.findall('ˆFrom .* ([0-9][0-9]):', line) if len(x) > 0 : print x When the program runs, it produces the following output: ['09']['18']['16']['15'] - 瀏覽次數:975
http://www.opentextbooks.org.hk/zh-hant/ditatopic/6777
CC-MAIN-2021-17
refinedweb
865
62.17
[Warning this is not new stuff - but shouldn't be overlooked if you need to secure sensitive data in your application.] Isn't “Secure String” an oxymoron for .NET? So if we are thinking about securing some sensitive data in say C or C++, it's relatively simple- load it into a char array memory and encrypt it, wiping the memory out after the information has been loaded. char Now try that with .NET! From the Microsoft site: “A String is called immutable because its value cannot be modified once it has been created.“ “A String is called immutable because its value cannot be modified once it has been created.“ So how can you destroy one? Set it to empty? Well simply put, you can't . Once your string is no longer referenced, or worse yet your object containing the string, it's time for the Garbage Collector to come and do its work. The problem is if your object has been around long enough to get into Generation 1 or 2, then it is going to take a bit longer. Hmmm, so in translation if you keep a password, Credit Card, encryption key or some other sensitive text in memory as a string you can't destroy it (think memset for us oldies!). Only the GC can free the memory for you, and you are dependent on HOW it frees that memory. I personally don't know for a fact if it memsets it to blank, or just dereferences the pointer. However I would be willing to bet it is the option that requires the least amount of work and that doesn't bode well for controlling the exposure of our sensitive data. Plainly that proverbially sucks! Enter the “SecureString” class, from the Microsoft site it says: “Represents text that should be kept confidential. The text is encrypted for privacy when being used, and deleted from computer memory when no longer needed.” “Represents text that should be kept confidential. The text is encrypted for privacy when being used, and deleted from computer memory when no longer needed.” Wow doesn't that just sound like the ticket we need! Secure, Encryption, delete from memory – how fantastic! Uh oh, keep reading the remarks: .” SecureString BSTR – oh I feel the COM headache coming back! Actually, it's really not that bad, but it's definitely not a straight swap for a System.String. See some example code below: System.String using System; using System.Text; using System.Runtime.InteropServices; using System.Security.Cryptography; using System.Security; namespace CSharpHacker.Utilities { /// <span class="code-SummaryComment"><summary> </span> A word of caution to the above code: SensitiveDataToString Base64SensitiveDataHash string So even with all those disclaimers running a program that takes input will still a ‘leak information’ for the System.String. Specifically the tricky area is how do you get them into your program in the first place? Read them from a database, WinForm user input, or a web page? Kinda tricky ! If you search the web, there are implementations of secure login controls that build it up character by character, but certainly something to think about. So how Secure is “SecureString”? The answer is “it depends”, but reasonably secure and a heck of a lot better than System.String. A while ago, there was a big storm about tools that can connect to the process and decrypt your SecureStrings. The best rebuttal to this I've seen can be read about in [SecureString Redux]. I definitely recommend reading this. SecureString SecureStrings Now I have to say I've been meaning to write this for some time now! Hopefully this helped raise awareness of the string leakage risks in the .NET language and ways to help minimize the string information leak scenario. Potential enhancements to the helper class would be: HashAlgorithm Gareth This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL) unsafe void ModifyString(string s) { fixed (char* p = s) { for (int i = 0; i < s.Length; i++) p[i] = '\0'; } } General News Suggestion Question Bug Answer Joke Praise Rant Admin Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages.
http://www.codeproject.com/Articles/53965/Who-knows-of-the-NET-Secure-Strings?fid=1558329&df=90&mpp=10&noise=1&prof=True&sort=Position&view=Expanded&spc=None&select=3345312&fr=1
CC-MAIN-2016-40
refinedweb
705
64.91
Post your Comment bubble sort bubble sort write a program in java using bubble sort bubble sort - Java Beginners bubble sort how to write program The bubble-sort algorithm in double... Hi friend, Bubble Sort program : public class...[] = {10,5,3,89,110,120,1,8,2,12}; System.out.println("Values Before the sort:\n bubble sort bubble sort how to calculate the number of passes in bubble sort Need help in constructing bubble sort Need help in constructing bubble sort using a bubble sort, for each... array figured out just couldnt dont know how to plug in the bubble sort...:// Bidirectional Bubble Sort in Java Bidirectional Bubble Sort in Java Introduction : Bidirectional Bubble Sort... of an array using bi-directional bubble sort. Definition: A alternative of bubble Bubble Sort in Java Bubble Sort aka exchange sort in Java is used to sort integer values... are needed. Bubble Sort is a slow and lengthy way to sort elements. Example of Bubble Sort in Java: public class BubbleSort { public static void main(String Bubble Sorting in Java Bubble Sorting in Java  ... are going to sort integer values of an array using bubble sort. Bubble sort is also known as exchange sort. Bubble sort is a simplest sorting Bubble Sorts Java NotesBubble Sorts People like bubble sorts -- could it be the name.... Java has better sort methods in java.util.Arrays.sort... that all students are expected to understand to some extent. Bubble Sort -- fixed buble sort buble sort ascending order by using Bubble sort programm Java BubbleSort Example Heap Sort in Java Heap Sort in Java is used to sort integer values of an array. Like quicksort, insertion sort, bubble sort and other sorting methods, heap sort is used.... Example of Heap Sort in Java: public class eap_Sort{ public static void main Merge Sort Java Merge Sort in Java is used to sort integer values of an array. There are many methods to sort Java like bubble sort, insertion sort, selection sort, etc.... Example of Merge Sort in Java public class mergeSort{ public static void main Quick Sort in Java Quick sort in Java is used to sort integer values of an array... in comparison to other sorting algorithms like bubble sort, insertion sort, heap... into a sorted array. Example of Quick Sort in Java: public class QuickSort write a progam for bubble sort using file read nd write? write a progam for bubble sort using file read nd write? hi, please give the code Sort with this A program is required to ask users to rate the Java programming language... Scanner(System.in); System.out.print("Rate Java(0-10): "); int rate..."); } } } } System.out.print("Invalid! Rate Java within the range(0-10): "); rate=input.nextInt Insertion Sort In Java Insertion Sort In Java  ... sorting algorithm is similar to bubble sort. But insertion sort is more efficient than bubble sort because in insertion sort the elements comparisons are less Odd Even Transposition Sort In Java Odd Even Transposition Sort In Java  ... is based on the Bubble Sort technique of comparing two numbers and swapping... iteration. The comparison is same as bubble sort. Working insertion sort insertion sort write a program in java using insertion sort SEARCH AND SORT SEARCH AND SORT Cam any one provide me the code in java that : Program to search for MAX,MIN and then SORT the set using any of the Divide and conquer method Java: Example - String sort Java: Example - String sort Sorting is a mechanism in which we sort the data in some order. There are so many sorting algorithm are present to sort the string. The example given below is based on Selection Sort. The Selection sort Post your Comment
http://www.roseindia.net/discussion/18533-Bidirectional-Bubble-Sort-in-Java.html
CC-MAIN-2013-20
refinedweb
619
54.52
of querying /usersto get a list of users, or /user/:idto get a particular user, the endpoint will look like /graphqlfor all the requests. - In GraphQL, the data coming back from a response is set by the query library stated and it can be set to only send a few data properties, therefore, queries in GraphQL have better performance. - No need to set method verbs in GraphQL. Keywords such as Query or Mutation will decide what the request will perform. - REST API routes usually are handled by one route handler. In GraphQL you can have a single query trigger multiple mutations and get a compound response from multiple sources. Queries A query is a GraphQL method that allows us to GET data from our API. Even though it may receive parameters to filter, order, or simply search for a particular document a query can not mutate this data. Mutations Mutations are everything that is not what would refer to a GET verb in regular APIs. Updating, creating, or deleting data from our API is done via mutations Subscriptions With the use of web sockets, a subscription refers to a connection between the client and the server. The server is constantly watching for mutations or queries that are attached to a particular subscription, and communicate any changes to the client in real time. Subscriptions are mostly used for real-time widgets/apps. Types and Inputs To make sure our queries and mutations can process the data to query a database, types work much like a model ORM for databases. By setting types up we can define the type of variable our resolvers will return. Similarly, we need to set input types for our resolvers to receive. For example, we will define a couple types and inputs: type User { id: ID name: String! age: Int! address: Address followers: [ID] } type Address { street: String city: String country: String } input UserInput { name: String! age: Int! } type Query { getAllUsers: [User] } type Mutation { createUser(user: UserInput!): ID } Properties can have a custom type as its type apart from the primitive ones, such as: - String - Int - Float - Boolean - ID And they can also be an array of a certain type determined by the brackets, which is shown in the example above. Furthermore, the mandatory status of a property can be set with the !, meaning that the property needs to be present. Resolvers These are the actions that are performed when calling queries and mutations. getAllUsers and createUser are going to be connected to a resolver that will perform the actual calculations and database queries. Creating our Project For this tutorial, we will be creating a Vue.js project using the Vue CLI 3.0, which will bootstrap a project with a folder structure that looks like this: If you need help setting up the project, you can look at this tutorial for the command line interface. We can start serving our application with the command: $ npm run serve Apollo Client Apollo Client brings a tool to front-end development to make GraphQL queries/mutations easier. It acts as an HTTP client that connects to a GraphQL API and provides caching, error handling, and even state management capabilities. For this tutorial, Vue-Apollo will be used, which is the Apollo integration specially designed for Vue.js. Apollo Configuration To start our Apollo configuration a few packages will need to be installed: $ npm install apollo-client apollo-link-http apollo-cache-inmemory vue-apollo graphql graphql-tag Inside a /graphql folder in our project, we will create apollo.js: // apollo.js import Vue from 'vue' import { ApolloClient } from 'apollo-client' import { HttpLink } from 'apollo-link-http' import { InMemoryCache } from 'apollo-cache-inmemory' import VueApollo from 'vue-apollo' const httpLink = new HttpLink({ uri: process.env.VUE_APP_GRAPHQL_ENDPOINT }) // Create the apollo client export const apolloClient = new ApolloClient({ link: httpLink, cache: new InMemoryCache(), connectToDevTools: true }) // Install the Vue plugin Vue.use(VueApollo) export const apolloProvider = new VueApollo({ defaultClient: apolloClient }) HttpLink is an object that requires a uri property, which refers to the GraphQL endpoint from the API being used. Ex: localhost:8081/graphql Then, a new ApolloClient instance needs to be created, where the link, cache instance, and further options can be set. Finally, we wrap our ApolloClient inside a VueApollo instance so we can use its hooks inside our Vue components. Global Error Handling There is a way of handling errors globally inside the configuration file. For that we need to install an npm package called apollo-link-error, which inspects and manages errors from the network: // apollo.js import Vue from 'vue' import { ApolloClient } from 'apollo-client' import { HttpLink } from 'apollo-link-http' import { onError } from "apollo-link-error" import { InMemoryCache } from 'apollo-cache-inmemory' import VueApollo from 'vue-apollo' const httpLink = new HttpLink({ uri: process.env.VUE_APP_GRAPHQL_ENDPOINT }) // Error Handling const errorLink = onError(({ graphQLErrors, networkError }) => { if (graphQLErrors) graphQLErrors.map(({ message, locations, path }) => console.log( `[GraphQL error]: Message: ${message}, Location: ${locations}, Path: ${path}` ) ) if (networkError) console.log(`[Network error]: ${networkError}`) }) // Create the apollo client export const apolloClient = new ApolloClient({ link: errorLink.concat(httpLink), cache: new InMemoryCache(), connectToDevTools: true }) // Install the Vue plugin Vue.use(VueApollo) export const apolloProvider = new VueApollo({ defaultClient: apolloClient }) After importing the onError function from the package, we can implement it as a sort of middleware of Apollo Client. It'll catch any network or GraphQL errors, giving us the chance to manage them globally. The callback gets called with an object with some properties whenever an error has happened: - operation: The operation that triggered the callback because an error was found. - response: The result of the operation. - graphQLErrors: An array of errors from the GraphQL endpoint - networkError: Any error during the execution of the operation or server error. - forward: The next link referenced in the chain. Managing State with Apollo Client A different alternative to using Vuex with Vue projects, and when using the Apollo Client is to use a package called apollo-link-state. It works as a local data management tool that works as if you were querying a server, but it does it locally. Also, it is a great way of managing the cache for our application, thus making Apollo Client an HTTP client and state/cache management tool. For more information, you can check the official documentation for Apollo-link-state. Creating Queries For creating queries we need to set up a string-type tag with the package graphql-tag. For keeping a tidy and structured project, we will create a folder called queries inside the graphql folder. Assuming the server receiving the query is set up properly to interpret this query, for example, we can trigger a resolver called getAllUsers: import gql from 'graphql-tag' export const GET_ALL_USERS_QUERY = gql` query getAllUsers { getAllUsers { // Fields to retrieve name age } } ` The default operation in GraphQL is query, so the query keyword is optional. If a retrieved field has subfields, then at least one of them should be fetched for the query to succeed. Using Mutations Much like queries, we can also use mutations by creating a gql-string. import gql from 'graphql-tag' export const CREATE_USER_MUTATION = gql` mutation createUser($user: UserInput!) { createUser(user: $user) } ` Our createUser mutation expects a UserInput input, and, to be able to use parameters passed by Apollo. We will first define a variable with the $ called user. Then, the outside wrapper will pass the variable to the createUser mutation, as expected by the server. Fragments In order to keep our gql-type strings tidy and readable, we can use fragments to reuse query logic. fragment UserFragment on User { name: String! age: Int! } query getAllUsers { getAllUsers { ...UserFragment } } Using GraphQL in Vue components Inside the main.js file, to configure the Apollo Client, we need to import and attach the client to our instance. // main.js import Vue from 'vue' import { apolloProvider } from './graphql/apollo' Vue.config.productionTip = false /* eslint-disable no-new */ new Vue({ el: '#app', apolloProvider, render: h => h(App) }) Since we have added our ApolloProvider to the Vue instance, we can access the client through the $apollo keyword: // GraphQLTest.vue <template> <div class="graphql-test"> <h3 v-Loading...</h3> <h4 v-{{ getAllUsers }}</h4> </div> </template> <script> import { GET_ALL_USERS_QUERY } from '../graphl/queries/userQueries' export default { name: 'GraphQLTest', data () { return { users: [] } }, async mounted () { this.loading = true this.users = await this.$apollo.query({ query: GET_ALL_USERS_QUERY }) this.loading = false } } </script> If we want to create a user, we can use a mutation: // GraphQLTest.vue <template> <div class="graphql-test"> <input v- <input v- <button @Create User</button> </div> </template> <script> import { CREATE_USER_MUTATION } from '../graphl/queries/userQueries' export default { name: 'GraphQLTest', data() { return { user: { name: null, age: null } } }, methods: { async createUser () { const userCreated = await this.$apollo.mutate({ mutation: CREATE_USER_MUTATION, variables: { user: this.user // this should be the same name as the one the server is expecting } }) // We log the created user ID console.log(userCreated.data.createUser) } } } </script> Using this approach lets us micro-manage when and where our mutations and queries will execute. Now we will see some other ways of handling these methods that Vue Apollo gives us. The Apollo Object Inside our Vue components, we get access to the Apollo object, which can be used to easily manage our queries and subscriptions: <template> <div class="graphql-test"> {{ getAllUsers }} </div> </template> <script> import { GET_ALL_USERS_QUERY } from '../graphl/queries/userQueries' export default { name: 'GraphQL-Test', apollo: { getAllUsers: { query: GET_ALL_USERS_QUERY } } } </script> Refetching Queries When defining a query inside the Apollo object, it is possible to refetch this query when calling a mutation or another query with the refetch method or the refetchQueries property: <template> <div class="graphql-test"> {{ getAllUsers }} </div> </template> <script> import { GET_ALL_USERS_QUERY, CREATE_USER_MUTATION } from '../graphl/queries/userQueries' export default { name: 'GraphQL-Test', apollo: { getAllUsers: { query: GET_ALL_USERS_QUERY } }, methods: { refetch () { this.$apollo.queries.getAllUsers.refetch() }, queryUsers () { const user = { name: Lucas, age: 26 } this.$apollo.mutate({ mutation: CREATE_USER_MUTATION, variables: { user } refetchQueries: [ { query: GET_ALL_USERS_QUERY } ] }) } } } </script> By using the Apollo object, provided to us by Vue-Apollo, we no longer need to actively use the Apollo client way of triggering queries/subscriptions and some useful properties and options become available to us. Apollo Object Properties - query: This is the gqltype string referring to the query that wants to get triggered. - variables: An object that accepts the parameters being passed to a given query. - fetchPolicy: A property that sets the way the query will interact with the cache. The options are cache-and-network, network-only, cache-only, no-cache, standbyand the default is cache-first. - pollInterval: Time in milliseconds that determines how often a query will automatically trigger. Special Options - $error to catch errors in a set handler. - $deep watches deeply for changes in a query. - $skip: disables all queries and subscriptions in a given component. - $skipAllQueries: disables all queries from a component. - $skipAllSubscriptions: to disables all subscriptions in a component. Apollo Components Inspired by the way the Apollo Client is implemented for React (React-Apollo), Vue-Apollo provides us with a few components that we can use out of the box to manage the UI and state of our queries and mutations with a Vue component inside the template. ApolloQuery Simpler way of managing our queries in a more intuitive manner: <ApolloQuery : <template slot- <!-- Loading --> <div v-Query is loading.</div> <!-- Error --> <div v-We got an error!</div> <!-- Result --> <div v-{{ data.getAllUsers }}</div> <!-- No result (if the query succeed but there's no data) --> <div v-else>No result from the server</div> </template> </ApolloQuery> ApolloMutation Very similar to the example above, but we must trigger the mutation with the mutate function call: <ApolloMutation : <template slot- <!-- Loading --> <h4 v-The mutation is loading!</h4> <!-- Mutation Trigger --> <button @Create User</button> <!-- Error --> <p v-An error has occurred!</p> </template> </ApolloMutation> Conclusion GraphQL brings a lot of flexibility to API development, from performance, ease-of-use, and an overall different perspective of what an API should look and behave like. Furthermore, ApolloClient and Vue Apollo delivers a set of tools for better management of our UI, state and operations, even error handling and cache! For more information about GraphQL and Apollo Client you can visit the following:
https://stackabuse.com/building-graphql-apis-with-vue-js-and-apollo-client/
CC-MAIN-2020-05
refinedweb
2,009
54.12
Key Takeaways - Blazor is a new single-page application (SPA) framework form Microsoft. Unlike other SPA frameworks such as Angular or React, Blazor relies on the .NET framework in favor of JavaScript. - The Document Object Model (DOM) is a platform and language agnostic interface that treats an XML or HTML document as a tree structure. It allows for the document's content to be dynamically accessed and updated by programs and scripts. - Blazor uses an abstraction layer between the DOM and the application code, called a RenderTree. It is a lightweight copy of the DOM's state composed by standard C# classes. - The RenderTree can be updated more efficiently than the DOM and reconciles multiple changes into a single DOM update. To maximize effectiveness the RenderTree uses a diffing algorithm to to ensure it only updates the necessary elements in the browser’s DOM. - The process of mapping a DOM into a RenderTree can be controlled with the @key directive. Controlling this process may be necessary in certain scenarios that require the context of different DOM elements to be maintained when a DOM is updated. Blazor is a new single-page application (SPA) framework from Microsoft. Unlike other SPA frameworks such as Angular or React, Blazor relies on the .NET framework in favor of JavaScript. Blazor supports many of the same features found in these frameworks including a robust component development model. The departure from JavaScript, especially when exiting a jQuery world, is a shift in thinking around how components are updated in the browser. Blazor’s component model was built for efficiency and relies on a powerful abstraction layer to maximize performance and ease-of-use. Abstracting the Document Object Model (DOM) sound intimidating and complex, however with modern web application it has become the normal. The primary reason is that updating what has rendered in the browser is a computationally intensive task and DOM abstractions are used to intermediate between the application and browser to reduce how much of the screen is re-rendered. In order to truly understand the impact Blazor’s RenderTree has on the application, we need to first review the basics. Let’s begin with a quick definition of the Document Object Model. A Document Object Model or the DOM is a platform and language-agnostic interface that treats an XML or HTML document as a tree structure. In the DOM’s tree structure, each node is an object that makes up part of the document. This means the DOM is a document structure with a logical tree. … the DOM is a platform and language agnostic interface that treats an XML or HTML document as a tree structure. When a web application is loaded into the browser a JavaScript DOM is created. This tree of objects acts as the interface between JavaScript and the actual document in the browser. When we build dynamic web applications or single page applications (SPAs) with JavaScript we use the DOMs API surface. When we use the DOM for creating, updating and deleting HTML elements, modifying CSS and other attributes, this is known as DOM manipulation. In addition to manipulating the DOM, we can also use it and create and respond to events. In the following code sample, we have a basic web page with two elements, an h1 and p. When the document is loaded by the browser a DOM is created representing the elements from the HTML. We can see in figure 1 a representation of what the DOM looks like as nodes in a tree. <!DOCTYPE html> <html> <body> <h1>Hello World</h1> <p id="beta">This is a sample document.</p> </body> </html> Figure 1: An HTML document is loaded as a tree of nodes; each object represents an element in the DOM. Using JavaScript we can traverse the DOM explicitly by referencing the objects in the tree. Starting with the root node document we can traverse the objects children until we reach a desired object or property. For example, we can get the second child off the body branch by calling document.body.children[1] and then retrieve the innerText value as a property. document.body.children[1].innerText "This is a sample document." An easier way to retrieve the same element is to use a function that will search the DOM for a specific query. Several convenience methods exist to query the DOM by various selectors. For example, we can retrieve the p element by its id beta using the getElementById function. document.getElementById("beta").innerText "This is a sample document." Throughout the history of the web, frameworks have made working with the DOM easier. jQuery is a framework that has an extensive API built around DOM manipulation. In the following example we’ll once again retrieve the text from the p element. Using jQuery’s $ method we can easily find the element by the id attribute and access the text. //jQuery $("#beta").text() "This is a sample document." jQuery’s strength is convenience methods which reduce the amount of code required to find and manipulate objects. However, the biggest drawback to this approach is inefficient handling of updates due to directly changing elements in the DOM. Since direct DOM manipulation is a computationally expensive task, it should be performed with a bit of caution. Since direct DOM manipulation is a computationally expensive task, it should be performed with a bit of caution. It’s common practice in most applications to perform several operations that update the DOM. Using a typical JavaScript or jQuery approach we may remove a node from the tree and replace it with some new content. When elements are updated in this way, elements and their children can often be removed and replaced when no change needed. In the following example, several similar elements are removed using a wildcard selector, n-elements. The elements are then replaced, even if they only needed modification. As we can see in figure 2, many elements are removed and replaced while only two required updates. // 1 = initial state $(“n-elements”).remove() // 2-3 $(“blue”).append(modifiedElement1) // 4 $(”green”).append(modifiedElement2) // 4 $(“orange”).append(modifiedElement3) // 4 Figure 2: 1) The initial state; 2) Like elements are selected for removal 3) Elements and their children are removed from the DOM 4) All elements are replaced with only some receiving changes. In a Blazor application we are not responsible for making changes to the DOM. Instead, Blazor uses an abstraction layer between the DOM and the application code we write. Blazor’s DOM abstraction is called the RenderTree and is a lightweight copy of the DOM’s state. The RenderTree can be updated more efficiently than the DOM and reconciles multiple changes into a single DOM update. To maximize effectiveness the RenderTree uses a diffing algorithm to ensure it only updates the necessary elements in the browser’s DOM. If we multiple updates on elements in a Blazor application within the same scope of work, the DOM will only receive the changes produced by the final difference produced. When we perform work a new copy of the RenderTree is created from the changes either through code or data binding. When the component is ready to re-render the current state is compared to the new state and a diff is produced. Only the difference values are applied to the DOM during the update. Let’s look take a closer look at how the RenderTree can potentially reduce DOM updates. In figure 3 we begin with the initial state with three elements that will receive updates, green, blue, and orange. Figure 3: The initial state of the RenderTree (left) and DOM (right). The elements with the values, green, blue, and orange will be affected by code. In figure 4 we can see work being done over several steps within the same cycle. The items are removed and replaced, with the result swapping only the value of green and blue. Once the lifecycle is complete the differences are reconciled. Figure 4: 1) our current RenderTree 2-4) some elements removed, replaced, and updated 5) The current state and new state are compared to find the difference. Figure 5: The RenderTree difference is used to update only elements that changed during the operation. Creating a RenderTree In a Blazor application, Razor Components (.razor) are actually processed quite differently from traditional Razor Pages or Views, (.cshtml) markup. Razor in the context of MVC or Razor Pages is a one-way process that is rendered server-side as HTML. A component in Blazor takes a different approach - its markup is used to generate a C# class that builds the RenderTree. Let’s take a closer look at the process to see how the RenderTree is created. When a Razor Component is created a .razor file is added to our project, the contents are used to generate a C# class. The generated class inherits from the ComponentBase class which includes the component’s BuildRenderTree method as shown in Figure 6. BuildRenderTree is a method receives a RenderTreeBuilder object and appends the component to the tree by translating our markup into RenderTree objects. Figure 6: The ComponentBase class diagram with the BuildRenderTree method highlighted. Using the Counter component example included in the .NET template we can see how the component’s code becomes a generated class. In the counter component there are significant items we can identify in the resulting RenderTree including: - page routing directive - h1 is a basic HTML element - p, currentCount is a mix of static content and data-bound field - button with an onclick event handler of IncrementCount - code block with C# code @page "/counter" <h1>Counter</h1> <p>Current count: @currentCount</p> <button class="btn btn-primary" @Click me</button> @code { private int currentCount = 0; private void IncrementCount() { currentCount++; } } The code in the counter example is used to generate a class with detailed BuildRenderTree method that describes the objects in the tree. If we examine the generated class, we can see how the significant items were translated into pure C# code: - page directive becomes an attribute tag on the class - Counter is a public class that inherits ComponentBase - AddMarkupContent defines HTML content like the h1 element - Mixed elements such as p, currentCount become separate nodes in the tree defined by their specific content types, OpenElement and AddContent - button includes attribute objects for CSS and the onclick event handler - code within the code block is evaluated as C# code [Route("/counter")] public class Counter : ComponentBase { private int currentCount = 0; protected override void BuildRenderTree(RenderTreeBuilder builder) { builder.AddMarkupContent(0, "<h1>Counter</h1>\r\n\r\n"); builder.OpenElement(1, "p"); builder.AddContent(2, "Current count: "); builder.AddContent(3, this.currentCount); builder.CloseElement(); builder.AddMarkupContent(4, "\r\n\r\n"); builder.OpenElement(5, "button"); builder.AddAttribute(6, "class", "btn btn-primary"); builder.AddAttribute<MouseEventArgs>(7, "onclick", EventCallback.Factory.Create<MouseEventArgs>(this, new Action(this, Counter.IncrementCount))); builder.AddContent(8, "Click me"); builder.CloseElement(); } private void IncrementCount() { this.currentCount++; } } We can see how the markup and code turn into a very structured piece of logic. Every part of the component is represented in the RenderTree so it can be efficiently communicated to the DOM. Included with each item in the render tree is a sequence number, ex: AddContent(num, value). Sequence numbers are included to assist the diffing algorithm and boost efficiency. Having a raw integer gives the system an immediate indicator to determine if a change has happened by evaluating order, presence or absence of and item’s sequence number. For example, if we compare a sequence of objects 1,2,3 with 1,3 then it can be determined that 2 is removed from the DOM. The RenderTree is a powerful utility that is abstracted away from us by clever tooling. As we can see by the previous examples, our components are just standard C# classes. These classes can be built by hand using the ComponentBase class and manually writing the RenderTreeBuilder method. While possible, this would not be advised and is considered bad practice. Manually written RenderTree’s can be problematic if the sequence number is not a static linear number. The diffing algorithm needs complete predictability, otherwise the component may re-render unnecessarily voiding it’s efficiently. Manually written RenderTree’s can be problematic if the sequence number is not a static linear number. Optimizing Component Rendering When we work with list of elements or components in Blazor we should consider how the list of items will behave and the intentions of how the components will be used. Ultimately Blazor's diffing algorithm must decide how the elements or components can be retained and how RenderTree objects should map to them. The diffing algorithm can generally be overlooked, but there are cases where you may want to control the process. - A list rendered (for example, in a @foreach block) which contains a unique identifier. - A list with child elements that may change with inserted, deleted, or re-ordered entries - In cases when re-rendering leads to visible behavior differences, such as lost element focus. The RenderTree mapping process can be controlled with the @key directive attribute. By adding a @key we instruct the diffing algorithm to preserve elements or components related to the key's value. Let’s look at an example where @key is needed and meets the criteria listed above (rules 1-3). An unordered list ul is created. Within each list item li is an h1 displaying the Value of the class Color. Also, within each list item is an input which displays a checkbox element. To simulate work that we might do in a list such as: sorting, inserting, or removing items, a button is added to reverse the list. The button uses an in-line function items = items.Reverse() to reverse the array of items when the button is clicked. <ul class="list-group"> @foreach (var item in items) { <li class="list-group-item"> <h1>@item.Value</h1> <input type="checkbox" /> </li> } </ul> <button @Reverse</button> @code { // class Color {int Id, string Value} IEnumerable<Color> items = new Color[] { new Color {Id = 0, Value = "Green" }, new Color {Id = 1, Value = "Blue" }, new Color {Id = 2, Value = "Orange" }, new Color {Id = 3, Value = "Purple" } }; } When we run the application the list renders with a checkbox for each item. If we select the checkbox in the “Green” list item then reverse the list, then the selected checkbox will remain at the top of the list and is now occupying the “Purple” list item. This is because the diffing algorithm only updated the text in each h1 element. The initial state, and reversed state is shown in Figure 30, note the position of the checkbox remains unchanged. Figure 7: A rendering error is visible as the checkbox fails to move when the array is reversed, and the DOM loses context of the element’s relationship. We can use the @key directive to provide additional data for the RenderTree. The @key will identify how each list item is related to its children. With this extra information the diffing algorithm can preserve the element structure. In our example we’ll assign the item’s Id to the @key and run the application again. @foreach (var item in items) { <li @ <h1>@item.Value</h1> <input type="checkbox" /> </li> } With the @key directive applied the RenderTree will create, move, or delete items in the list and their associated child elements. If we select the checkbox in the “Green” list item then reverse the list, then the selected checkbox will also move because the RenderTree is moving the entire li group of elements within the list, this can be seen in Figure 31. Figure 8: Using the key attribute the elements retain their relationship and the checkbox remains with the appropriate container as the DOM updates. For this example, we had an ideal scenario that met the criteria for needing @key. We were able to fix the visual errors caused by re-rendering the list of items. However, use cases aren’t always this extreme, so it’s important to take careful consideration and understand the implications of applying @key. When @key isn't used, Blazor preserves child element and component instances as much as possible. The advantage to using @key is control over how model instances are mapped to the preserved component instances, instead of the diffing algorithm selecting the mapping. Using @key comes with a slight diffing performance cost, however if elements are preserved by the RenderTree it can result in a net benefit. Conclusion While the RenderTree is abstracted away through the Razor syntax in .razor files, it’s important to understand how it impacts the way we write code. As we saw through example, understanding the RenderTree and how it works is essential when writing components that manage a hierarchy. The @key attribute is essential when working with collections and hierarchy so the RenderTree can be optimized and avoid rendering errors. About the Author Ed Charbeneau is the author of the Free Blazor E-book; Blazor a Beginners Guide, a Microsoft MVP and an international speaker, writer, online influencer, a Developer Advocate for Progress, and expert on all things web development. Charbeneau enjoys geeking out to cool new tech, brainstorming about future technology, and admiring great design. Community comments
https://www.infoq.com/articles/blazor-rendertree-explained/?itm_source=articles_about_dotnet&itm_medium=link&itm_campaign=dotnet
CC-MAIN-2020-34
refinedweb
2,880
54.02
A Sequencer that buffers results and returns them in order of their sequence number. More... #include <sequencers.h> A Sequencer that buffers results and returns them in order of their sequence number. The OrderedSequencer maintains an internal, monotonically incrementing counter for the next sequence number it expects. If it receives a result with a higher sequence number, it will buffer it for later (when the sequence number reaches that of this result). Otherwise, if the sequence numbers match, the result is returned. Implementation note: The OrderedSequencer is implemented with a fixed-size buffer. Let m be the maximum number of jobs in the data loader's queue and s be the current sequence number. Assume m jobs are scheduled in the DataLoader. Any new result is stored at index job.sqn mod m in the OrderedSequencer. Why are we sure sequence numbers of new jobs will not collide with sequence numbers of buffered jobs? The OrderedSequencer will not return from next() until it receives the result with sqn s. This means no new jobs can be scheduled in the DataLoader in the meantime, which enforces that as long as sqn s has not been received, s + m (which would cause a collision in the fixed-size buffer) will not yet be scheduled. Definition at line 63 of file sequencers.h. Constructs the OrderedSequencer with the maximum number of results it will ever hold at one point in time. Definition at line 68 of file sequencers.h.
https://caffe2.ai/doxygen-c/html/structtorch_1_1data_1_1detail_1_1sequencers_1_1_ordered_sequencer.html
CC-MAIN-2019-30
refinedweb
246
64.81
Subversion Offline Solution (SOS) Project description - License: MPL-2.0 - Documentation (official website), Code Repository (at Github) - Buy a coffee for the developer to show your appreciation! List of Abbreviations and Definitions - MPL: *Mozilla Public License* - PyPI: *Python Package Index* - SCM: Source Control Management - SOS: Subversion Offline Solution - in time. commands, - Tracking mode: Only files that match certain file patterns are respected during commit, update (could be augmented, but it’s a non-goal too) - attemüt Latest Changes Version 1.3 release on 2018-02-10: Bug 167 Accidentally crawling file tree and all revisions on status). Migration from older repositories: Add a , [] at the end of each branch info inside .sos/.meta, e.g. modify [0, 1518275599353, "trunk", true, []] to [0, 1518275599353, "trunk", true, [], []] (not the additional trailing , []) Downloads so far: Version 1.2 released on 2018-02-04: - Bug 135, 145 Fixes a bug showing ignored files as deleted - Bug 147 Fixes sos ls problems - Enhancement 113 Usability improvements - Enhancement 122 Complete rework of merge logic and code - Enhancement 124 Uses enum - Enhancement 137 Better usage help page - Enhancement 142, 143 Extended sos config and added local configurations - Enhancement 153 Removed Python 2 leftovers, raised minimum Python version to 3.4 (but 3.3 may also work) - Enhancement 159 Internal metadata updates. Migration from older repositories: Add , {} to .sos/.meta right and status, and also ls (in #101) - Enhancement 86 Renamed command for branch removal to destroy - Feature 8 Added functionality to rename tracking patterns and move files accordingly - Feature 61 Added option to only consider or exclude certain file patterns for relevant operations using --only creates was added as well to not switch to the new branch - commit creates a numbered revision from the current file tree, similar to how SVN does, but revision numbers are only unique per branch, as they aren’t stored in a global namespace. The commit message is strictly optional on purpose (as sos commit serves largely as a CTRL+S replacement) - The first revision (created during execution of sos offline or sos branch) always has the number 0 - Each sos commit increments refers to the latest revision, -2 to the second-latest - You may specify a revision of the current branch by /<revision>, while specifying the latest revision of another branch by <branch>/ (note the position of the slash) - delete destroys and removes a branch. It’s a command, not an option flag as in git branch -d <name> for usability’s sake - add and rm add renames a file tracking pattern and all matching files accordingly; only useful in tracking or picky mode. It supports reordering of literal substrings, but no reordering of glob markers (*, ? etc.), and of adjacent glob markers. Use --soft to avoid files actually being renamed in the file tree. Warning: the --force option flag will be considered for several consecutive, potentially dangerous operations - switch works like checkout in Git for a revision of another branch (or of the current), or update to latest or a specific revision in SVN. Please note that switching to a different revision will in no way fix or remember that revision. The file tree will always be compared to the branch’s latest commit for change detection - update works a bit like pull and merge in Git or update in SVN and replays the specified other (or “remote“‘s) branch’s and/or revision’s changes into the file tree. There are plenty of options to configure what changes are actually integrated, plus interactive integration. This command will not switch the current branch like switch does.`, `--rm` and `-` or `--picky` mode will always combine (build the union of) all tracked file patterns. To revert this, use the `switch --meta` command set sets a boolean flag, a string, or an initial list (semicolon-separated) - sos config unset removes a boolean flag, a string, or an entire list - sos config add adds a string entry to a list, and creates it if necessary - sos config rm removes a string entry from a list. Must be typed exactly as the entry to remove - sos config show lists all defined configuration settings, including storage location/type (global, local, default) - sos config show <parameter> show only one configuration item - sos config show flags|texts|lists show supported relying on modification timestamp only; file size is always checked in both modes. Default: False - track: Flag for always going offline in tracking mode (SVN-style). Default: False - picky: Flag for always going offline in picky mode (Git-styly). Default: False - compress: Flag for compressing versioned artifacts. Default: False - defaultbranch: Name of the initial branch created when going offline. Default: Dynamic per type of VCS in current working directory (e.g. master for Git, trunk for SVN, no name for Fossil) - texttype: List of file patterns that should be recognized as text files that can be merged through textual diff, in addition to what Python’s mimetypes library will detect as a text/... mime. Default: Empty list - bintype: List of file patterns that should be recognized as binary files which cannot be merged textually, overriding potential matches in texttype. Default: Empty list - ignores: List of filename patterns (without folder path) to ignore during repository operations. Any match from the corresponding white list will negate any hit for ignores. Default: See source code, e.g. ["*.bak", "*.py[cdo]]" - ignoresWhitelist: List of filename patterns to be consider even if matched by an entry in the ignores list. Default: Empty list - ignoreDirs: As ignores, but for folder names - not versioned with commits (!). This means that the “what to track” metadata is not part of the changesets. This is a simplification stemming from the main idea that revisions form a linear order of safepoints, and users rarely go back to older revisions - sos update will not warn if local changes are present! This is a noteworthy exception to the failsafe approach taken for most other commands Hints and Tipps - To migrate an offline repository, either use the sos dump <targetname>.sos.zip command, or simple move the .sos folder into an (empty) target folder, and run sos switch trunk --force (or use whatever branch name you’re wanting to recreate). For compressed offline repositories, you may simply tar all files, otherwise you may want to create an compressed archive for transferring the .sos folder - To save space when going offline, use the option sos offline --compress: It may increase commit times by a larger factor (e.g. 10x), but will also reduce the amount of storage needed to version files. To enable this option for all offline repositories, use sos config set compress on - When specifying file patterns including glob markers on the command line, make sure you quote them correctly. On Linux (bash, sh, zsh), but also recommended on Windows, put your patterns into quotes ("), otherwise the shell will replace file patterns by the list of any matching filenames instead of forwarding the pattern literally to SOS - Many commands can be shortened to three, two or even one initial letters, e.g. sos st will run sos status, just like SVN does (but sadly not Git). Using SOS as a proxy to other VCS requires you to specify the form required by those, e.g. sos st works for SVN, but not for Git (sos status, however, would work) - It might in some cases be a good idea to go offline one folder higher up in the file tree than your base working folder to care for potential deletions, moves, or renames - The dirty flag is only relevant in tracking and picky mode (?) TODO investigate - is this true, and if yes, why - Branching larger amounts of binary files may be expensive as all files are copied and/or compressed during sos offline. A workaround is to sos offline only in the folders that are relevant for a specific task Development and Contribution See CONTRIBUTING.md for further information. Release Management - Increase version number in setup.py - Run python3 setup.py clean build test to update the PyPI version number, compile and test the code, and package it into an archive. If you need evelated rights to do so, use sudo -E python.... - Run git add, git commit and git push and let Travis CI and AppVeyor run the tests against different target platforms. If there were no problems, continue: - Don’t forget to tag releases - Run python3 setup.py sdist - Run twine upload dist/*.tar.gz to upload the previously created distribution archive to PyPI. Project details Release history Release notifications Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/sos-vcs/2018.1213.2812/
CC-MAIN-2018-22
refinedweb
1,440
57.5
#include <QGuiApplication> i'm working on a project and i need to include <QGuiApplication>. but i get an error: fatal error: QGuiApplication: No such file or directory i also can't find a .pro file anywhere to add QT += gui. please help! - raven-worx Moderators @user4592357 said in #include <QGuiApplication>: i also can't find a .pro file anywhere to add QT += gui. please help! what does that mean?!?! What are you actually trying to do? i need to use QGuiApplication::clipboard() for which i need to include <QGuiApplication> which i somehow am not able to do - SGaist Lifetime Qt Champion Hi, To add to @raven-worx, without a .pro file how are you managing your project ? CMake ? QBS ? - raven-worx Moderators @user4592357 so you are trying just to use a portion of Qt code in an arbitrary C++ application? Did i get this right? If so it would be way more easier to use the native code working for what you are trying to do, than fiddling around with the Qt integration into your application (if possible at all). they use .make files but i didn't find any such include there What version of Qt are you using? In old versions it's #include <QApplication>and QApplication::clipboard() qt 4.8, and thanks, i used it and it solved my problem. but i'm still curious cause this might happen to me again and i wanna know how can i include <QGuiApplication> in such projects? You can't In Qt5 the gui module was split in 2, gui and widgets. gui includes QGuiApplication and widgets uses QApplication. In Qt4 there is no such distinction, QApplication is the only option.
https://forum.qt.io/topic/82998/include-qguiapplication
CC-MAIN-2017-47
refinedweb
281
74.08
Hello, I am brand new to Dataiku. I will be working with imaging data in the Nifti format. I would like to start by making a very simple Dataiku workflow. So far I have my image, let's called it "a.nii.gz" uploaded to the workflow. The first thing I would like to do is read in this file into python and say either print out the content of it, or perhaps make an image (through matplotlib.pyplot). When I link a python script to the data set "a.nii.gz" it automatically creates a python template for me. When I run the template I get an error, which isn't surprising at all given how I'm not sure what I'm doing. Within the template I get this line: # Read recipe inputs a = dataiku.Dataset("a") a_df = a.get_dataframe() However the data is not a dataframe, it's a nifti .nii.gz file. This can be read in via a line like img = nib.load('a.nii.gz') I tried commenting out the lines pertaining to the dataframe, however I then get errors that these lines are missing. It seems, understandably, that Dataiku is built around processing dataframe data. However my data is not immediately in dataframe format, and I would prefer to avoid it if at all possible. Is there a way to load in data that's essentially of an unknown format which will be later converted via a python script? Or am I going about this the wrong way? Thank you for your detailed steps and description of your setup! And always useful to start with a small test setup initially. For the situation you describe, where you are reading in "non-dataframe" data, I would suggest creating a managed folder instead, to house or point to your image files. Here's an example. From the flow, you can click on + Dataset > Folder to create a new managed folder: Just for testing, you can simply upload a file like "a.ii.gz": Then, you can create a Python recipe from your folder. I would suggest for testing to use the "Edit in Notebook" option so that you can run your code in regular notebook format. The flow will require an "output", so if you just want to test out plotting the image or play around first, you can do this easily in the notebook view. Here's the Python recipe that I created, and am testing in a notebook: import dataiku import pandas as pd, numpy as np from dataiku import pandasutils as pdu # Read recipe inputs folder = dataiku.Folder("nifti") nifti_info = nifti.get_info() # go through all the files in the folder for nifti_file in folder.list_paths_in_partition(): my_nifti_file = nib.load(nifti_info['path'] + nifti_file) With the folder input, the template Python code that you'll see provided for you instead points to the Folder and not a dataset. You can then iterate through all files in the folder, as shown here. And then you can easily read in files of any data type, so you aren't expected to read in a dataframe. In addition, once you start to scale, you can easily change your folder to point to a folder on your filesystem, or to an external file store (i.e. S3), which should make this easy to manage. You can toggle these settings under the Folder "Settings" tab. Let me know if you have any questions about this process. Thanks, Sarina
https://community.dataiku.com/t5/Using-Dataiku-DSS/How-to-read-in-and-view-an-image-basic/td-p/15439
CC-MAIN-2021-31
refinedweb
578
73.47
Larry's WebLog Evolution Platform Developer Build (Build: 5.6.50428.7875)2004-03-12T13:41:00Z2 Years...<p><font face="Tahoma" size="2">On August 26th, I have been working at Microsoft for 2 years. Actually it is kinda hard to believe that I have been here for so long and so short.</font></p> <p><font face="Tahoma" size="2").</font></p> <p><font face="Tahoma" size="2">After two years, I am glad that I made the desicion to go with MSDN team. Our team has been going through some exciting transformation for our customers, the products that will come out will be every exciting and should give our customer better experience.</font></p> <p><font face="Tahoma" size="2">Anyway, I just want to come back to update my status as a college hire. It has been great 2 years!</font></p><div style="clear:both;"></div><img src="" width="1" height="1">MSDNArchive am Back!<P><FONT face=Verdana size=2>Hi, I am back!</FONT></P> <P><FONT face=Verdana size=2>After a long absense from the blog community, I finally have some time on my hand to start writing a new post again.</FONT></P> <P><FONT face=Verdana size=2>If you have not heard, MSDN just launched Microsoft <A href="">Product Feedback Center</A>.</FONT></P> <P><FONT face=Verdana size=2>In the last 5 days, Product Feedback Center has attracted more than 1500 new users. Being one of the two testers on this project, I feel very happy that users really embrace this idea.</FONT></P> <P><FONT face=Verdana size=2>Anyway, I will keep this one short and enjoy July 4th this Sunday. Have a great Independence day, everyone!</FONT></P><div style="clear:both;"></div><img src="" width="1" height="1">MSDNArchive<P><FONT face=Verdana size=2>Hi everyone:</FONT></P> <P><FONT face=Verdana size=2>This is a delayed introduction for myself, let me get it started quickly.</FONT></P> <P><FONT face=Verdana size=2>My name is Larry. I graduated from <A href="">Northwestern University </A>in June 2002, since then I have been working @ MS on the MSDN Test team as a Software Design Engineer in Test for ... one and half years now... (wow, that's a long time!)</FONT></P> <P><FONT face=Verdana size=2).</FONT></P> <P><FONT face=Verdana size=2:</FONT></P> <P><FONT face=Verdana size=2>- Learn new technology and tools faster<BR>- More satisfaction when getting things done (afterall, many people in the world are using the product)<BR>- Easy access to many industry experts and gurus (yeah, I love this part)<BR>- Great learning environment for any inexperienced person</FONT></P> <P><FONT face=Verdana size=2>Anyway, I will stop now. Since I am just taking a little break from work.</FONT></P><div style="clear:both;"></div><img src="" width="1" height="1">MSDNArchive vs. Your Own Unit Tests<P><FONT face=Verdana size=2>Lately, I have been surrounded by NUnit in my daily conversations.</FONT></P> <P><FONT face=Verdana size=2>1. It is probably that I am a tester, I somehow got to write/improve some unit tests;<BR>2. NUnit has quickly become a general practice (standard) among my product team where more developers, PM and testers have been pushing this neat practice;<BR>3. NUnit has grown out of its design role among my test organization (I will explain later in this post)</FONT></P> <P><FONT face=Verdana size=2>First of all, you can find NUnit <A href="">here</A>. L</FONT><FONT face=Verdana size=2>et me introduce NUnit to people who have not used yet.</FONT></P> <P><FONT face=Verdana size=2>A simple test can be created with following few lines of code:</FONT></P> <P><FONT face=Verdana color=#a52a2a size=1>[TestFixture]<BR></FONT><FONT face=Verdana color=#a52a2a size=1>public class SampleTest<BR>{<BR> [Test]<BR> public void MyTest()<BR> {<BR> Assertion.Assert(“This is MyTest“, 1=2);<BR></FONT><FONT face=Verdana color=#a52a2a size=1> }<BR>}</FONT></P> <P><FONT face=Verdana size=2>If you need more information about <A href="">NUnit</A>, please check out their documentation.</FONT></P> <P><FONT face=Verdana size=2>In addition to the NUnit which developers and testers can use for Windows-based applicaiton, NUnit has a version for ASP.NET (hear from my co-workers, I believe that it is still in beta) as well.</FONT></P> <P><FONT face=Verdana size=2>What have made NUnit a such interesting tool is that testers can leverage NUnit to do some cool testing practices.<BR>- NUnit attributes can be extended by writing custom attributes by developers and testers;<BR>- With a great number of unit tests, anyone can write a not-so-difficult .NET application with Reflection to discover NUnit unit tests by their attributes and invoke them one by one.</FONT></P> <P><FONT face=Verdana><FONT size=2>The advantages of using NUnit in normal testing scenarios are:<BR>- Without too much code, testers can acquire coverage number on applications with an instrumentation tool;<BR>- A suite of NUnit unit tests can set the minimum requirement for a code's minimum functional screening; therefore, this ensures that tester will not spend time on a broken build;</FONT></FONT></P> <P><FONT face=Verdana><FONT size=2>I think I will stop here for the day. As of myself, I am still exploring NUnit, I have been using it for a while now; however, with much more possiblity NUnit presents with other great tools, a tester nowadays can achieve many things without too much effort.</FONT></FONT></P><div style="clear:both;"></div><img src="" width="1" height="1">MSDNArchive test & a little bit of something else<P><FONT face=Verdana size=2>Lately, I have been doing a great deal of performance testing. In fact, I am kinda glad that I had the opportunity and time to get good at it. I have only been working at MS for one and half years, a lot of the tasks we do daily here are still kinda new and exciting to me. For example, performance testing is not as easy as some people may think it is. There are so many factors which can affect the testing itself as well as the result. Most importantly, the purpose of a performance test is to measure the scalibility and architecture of an application. Sometimes, I even surprised at the conclusion we came to. All I have to say, “Performance testing is a science!”</FONT></P> <P><FONT face=Verdana size=2>What made the performance testing a science in my own mind is that the tester has to understand what he/she is going after, what kind of steps he/she needs to take to get there, and most importantly, how to interpret so many numbers a tester gets back from running the test. Here is a simple list of things a normal performance testing would need:</FONT></P> <P><FONT face=Verdana size=2>- <FONT color=#a52a2a>C</FONT>lient(s) as computers which would run the test<BR>- <FONT color=#a52a2a>T</FONT>arget as the object a tester want to measure<BR>- <FONT color=#a52a2a>P</FONT>erformance Counters as the things someone need to pay attention to<BR>- <FONT color=#a52a2a>D</FONT>uration as the length of the test which can vary dramatically from few minutes to several days; in some extreme case, even few months<BR>- <FONT color=#a52a2a>W</FONT>armup and CoolDown Time as the time the Target would get ready to be stressed and “Attacked”<BR>- <FONT color=#a52a2a>N</FONT>umbers as the result a tester would get back from running a performance test<BR>- <FONT color=#a52a2a>C</FONT>onslusion as the analysis of those numbers</FONT></P> <P><FONT face=Verdana size=2>If you haven't noticed yet, I was trying to create a cool acronym with the first letters of each term. Have a nice day, all! Hopefully you would find this post interesting.</FONT></P> <P><FONT face=Verdana color=#000000 size=2>P.S. My friend and I have been going to Applebee's lately for its Happy Hour menu, all I have to say is “Man, it is good!”</FONT></P><div style="clear:both;"></div><img src="" width="1" height="1">MSDNArchive first Blog!!!<P><FONT face=Verdana size=2>Exciting!!! I finally have a blog of my own, yeh! All the sudden, I can post my own thoughts and other cool things in my own “publication”. Since I have never felt that I would ever be a good enough writer to publish anything, this is just a such remarkable event.</FONT></P> <P><FONT face=Verdana size=2>Anyway, since this is my first blog. I guess that I will keep it short. For a while, I had a not-so-positive impression about having someone's personal and sometimes random thoughts online for the entire world. I used to think that who would care about someone else's thoughts afterall. However, after reading many people's blogs, I actually like the idea of recording one's own feeling and thoughts at a particular moment some place on World Wide Web. It is such a cool way to communicate and exchange ideas with people who are on the other side world (Probably 12 time zones different than mine!)</FONT></P> <P><FONT face=Verdana size=2>Since I never got used to write a long story about anything, I will take a break now!</FONT></P> <P><FONT face=Verdana size=1>P.S. I will also write some of my posts in Chinese in the future, let's see how popular this idea will go!!!</FONT></P><div style="clear:both;"></div><img src="" width="1" height="1">MSDNArchive
http://blogs.msdn.com/b/larsun/atom.aspx
CC-MAIN-2016-18
refinedweb
1,676
60.75
Lab sessions Tue Nov 05 to Thu Nov 07 Lab written by Julie Zelenski, with modifications by Nick Troccoli Learning Goals This lab is designed to give you a chance to: - study the relationship between C source and its assembly translation - use objdumpand gdbto disassemble and trace assembly code Find an open computer and somebody new to sit with. Share with them anything fun you did over the weekend! Clone the repo by using the command below to create a lab6 directory containing the project files. git clone /afs/ir/class/cs107/repos/lab6/shared lab6 Open the lab checkoff form. Exercises To study C->assembly translation, we recommend you try out Compiler Explorer, a nifty online tool that provides an "interactive compiler". This link and the pre-provided links throughout this lab are pre-configured for myth's version of GCC and compiler flags from the CS107 makefiles. (To manually configure yourself: select language C, compiler version x86-64 gcc 5.4 and enter flags -Og -std=gnu99). In Compiler Explorer, you can enter a C function, see its generated assembly, then tweak the C source and observe how those changes are reflected in the assembly instructions. You could instead make these same observations on myth using gcc and gdb, but Compiler Explorer makes it easier to do those tasks in a convenient environment for exploration. Try it out! 1) Assembly Code Study (70 minutes) This section will get you familiar with various assembly commands, from the mov command from the earlier lecture, as well as commands for arithmetic and logic. Check out a reference for these commands here while you work. Addressing Modes Review the two deref functions below. void deref_one(char *ptr, long index) { *ptr = '\0'; } void deref_two(int *ptr, long index) { *ptr = 0; } Open Compiler Explorer - we've created a pre-made environment already loaded with this code here. Review the generated assembly shown in the right pane. - Take time to read through the movinstruction. What is it doing? How does that connect with the C code it represents? - There is one difference between the assembly instruction sequences of the two functions. What is the difference? Why it is different? - Edit both functions to instead assign ptr[7]to their respective values. How does this change the addressing mode using for the destination operand of the movinstruction? Do both dereffunctions change in the same way? - Edit both functions to now assign ptr[index]to their respective values. How does this change the addressing mode using for the destination operand of the movinstruction? Do both dereffunctions change in the same way? - Change the assignment statement to ptr[0] = ptr[1]. For both functions, the assembly sequence is one instruction longer. Previously, only one movinstruction was needed. This assignment statement requires two movs. Why? (Extra: with this change, you may wonder why the compiler outputs movzblwhen it uses only one byte of what is moved. In this case it would likely be functionally equivalent to just move the one byte needed, but the compiler makes choices that it thinks make program execution more efficient. For example, this may help with something called instruction-level parallelism) Now open the pre-made environment available here for the code below: typedef struct coord { int x; int y; } coord; void deref_three(struct coord *ptr) { ptr->x = ptr->y; } void deref_two(int *ptr, long index) { ptr[0] = ptr[1]; } - Take time to read through the movinstructions. What are they doing? How does that connect with the C code they represent? - The assembly for deref_threeis identical to deref_two, but the C source for the two functions seem to have nothing to do with one another! How is possible that both can generate the same assembly instructions? Signed/Unsigned Arithmetic Let's explore a bit with arithmetic in assembly. The add, subl and imul instructions perform addition, subtraction and multiplication, respectively. Here is the format for these instructions: add src, dst # dst += src sub src, dst # dst -= src imul src, dst # dst *= src The following two functions perform the same arithmetic operation on their arguments, but those arguments differ in type (signedness). int signed_arithmetic(int a, int b) { return (a - b) * a; } unsigned int unsigned_arithmetic(unsigned int a, unsigned int b) { return (a - b) * a; } Open the pre-made environment for the code above. - Both functions generate the same sequence of assembly instructions! The choice of two's complement representation allows the add/sub/imul instruction to work for both unsigned and signed types. Neat! - Edit both functions to add the following line at the start: a >>= b;. This performs a right shift on one of its arguments. When doing a right-shift, does gcc emit an arithmetic ( sar) or logical ( shr) shift? Does it matter whether the argument is signed or unsigned? - For the shift instructions, the shift amount can be either an immediate value, or the byte register %cl (and only that register!). However, the instruction interprets the contents of the %cl register in a slightly interesting way. Check out slides 39-40 of lecture 12 here. Slide 40 in particular mentions information we didn't have time to bring up in lecture. Read over this explanation and discuss the included example with your partner. Then, take a look back at the assembly code you have been working with so far, particularly the a >>= blines. Notice how it uses the %cl register for shifting. What is the largest shift amount that could be specified using the %cl register for an unsigned int? Why might this limit make sense? Load Effective Address The lea instruction is meant to help do calculations with memory addresses, but it also has helpful uses for general arithmetic. Take a few minutes to read over the description of mov vs. lea on the x86-64 Guide page. It packs a lot of computation in one instruction: two adds and a multiply by constant 1, 2, 4 or 8, and is often used by compiler to do an efficient add/multiply combo. int combine(int x, int y) { return x + y; } Open the pre-made environment for the code above. - In the generated assembly, you'll see a leainstead of the expected add. Interesting! Take a minute to work through what that leainstruction is doing to understand why leacan be used for addin this case. What else can it do? Let's find out! - Edit the combinefunction to return x + 2 * y, and then return x + 8 * y - 17, and observe how a single leainstruction can also compute these more complex expressions, such as with multiplication! - Edit the function to now be return x + 47 * yand the result will no longer fit the pattern for an leabecause of the multiplication factor. What sequence of assembly instructions is generated instead? Multiply is one of the more expensive instructions and the compiler prefers cheaper alternatives where possible. Open this pre-made environment for the code below: int scale(int x) { return x * 4; } - The scalefunction multiplies its argument by 4. Look at its generated assembly-- there is no multiply instruction. What has the compiler used instead? (Note: the add of zero does nothing, but is outputted for some reason by the compiler) Edit the scalefunction to return x * 16. What assembly instruction is used now? - It is perhaps unsurprising that the compiler treats multiplication by powers of 2 as a special case given an underlying representation in binary, but it's got a few more tricks up its sleeve than just that! Edit the scalefunction to instead multiply its argument by a small constant that is not a power of 2, e.g. x * 3, x * 7, x * 12, x * 17, .... For each, look at the assembly and see what instructions are used. GCC sure goes out of its way to avoid multiply! - Experiment to find a small integer constant C such that return x * Cis expressed as an actual imulinstruction. - Note: you may encounter another rarer form of imulwith 3 arguments while playing around with multiplication. These arguments are source1, source2, destination, meaning that this form multiplies the sources together and puts the result in the destination. The destination must be a register, the first source a register or memory location, and the second source an immediate value. Division Division is something that is also tedious for modern CPUs; for this reason, GCC avoids division when possible. Let's find out what wizardry it is willing to employ. Open this pre-made environment containing the code below: unsigned int unsigned_division(unsigned int x) { return x / 2; } - In the generated assembly for unsigned_division, there is no divinstruction in sight, which is the instruction to perform division. How then did it implement unsigned division by 2? What is the interpretation of a one operand shrinstruction? - Change the function to divide by a different power of 2 instead (e.g. 4 or 8 or 64). What changes in the generated assembly? Restore the function back to its original return x / 2 and paste in this additional function that does a signed division by 2: int signed_division(int x) { return x / 2; } Whereas addition, subtraction, and multiplication operate equivalently on signed and unsigned integers, it's not quite so with division. If the divisor does not evenly divide the dividend, the quotient must be rounded (the terminology is DIVIDEND / DIVISOR = QUOTIENT). In the case of dividing an odd dividend by 2, the remainder is 1. Discarding the remainder (which is what unsigned divide does by shifting away the lsb) has the effect of rounding down to a lesser value. However, the rule for integer division is that quotient must be rounded toward zero, e.g. dividing 3/2 = 1 and dividing -3/2 = -1. A positive quotient is rounded down, but a negative quotient is rounded up. - What is the bit pattern for 3and then 3 >> 1? What is the bit pattern for -3and then -3 >> 1? (Assume arithmetic right shift) Do you see why the right shift (discard lsb) will round a positive value toward zero and rounds a negative value away from zero? This difference in rounding is the crux of why right-shift alone is insufficient if the result is negative. - Compare the assembly for unsigned_divisionto signed_division. The signed version has a pair of instructions ( shr, add) inserted and uses sarin place of shr. First, consider that last substitution. If the number being shifted is positive, there is no difference, but arithmetic right shift versus logical on a negative number has a profound impact. What is different? Why? - Now let's dig into the shrand addinstructions that were inserted in the signed version. Trace through their operation when the dividend is positive and confirm these instructions made no change in the quotient. But these instructions do have an effect when the dividend is negative. They adjust the negative dividend by a "fixup" amount before the divide (shift). This pushes the dividend to the next larger multiple of the divisor so as to get the proper rounding for a negative quotient. - The necessary fixup amount when dividing by 2 is 1, the fixup for dividing by 4 is 3, for 8 it is 7 and so on. This calculation should be reminiscent of the roundupfunction we studied way back in lab1! Change the signed_divisionfunction to divide by 4 instead. The fixup amount is now 3. The assembly instructions handle the fixup slightly differently than before. It computes the dividend plus fixup and uses a "conditional mov" instruction to select between using the fixed or unmodified dividend based on whether the dividend is negative. This is a new type of movinstruction in addition to the ones we've already seen - we'll discuss this more in a future lecture, but the way to read the testand cmovinstructions is that if the number in %edi is not negative, it moves %edi into %eax. This undoes the fixup if the value is non-negative since fixup is not needed in that case. It's cool to see that assembly conditionally adds this fixup of 3 here to ensure the shift produces the correct result! Optional: further explorations: Division by powers of two get special treatment, sure, but what about constants that are not powers of two? Open this pre-made environment containing the code below: unsigned int unsigned_div10(unsigned int x) { return x / 10; } Look at the C source and generated assembly for this function. The assembly still doesn't use a div instruction, by there is a multiply by a bizarre number: -858993459 (in hex 0xcccccccd). What sorcery is this? This mysterious transformation is effectively multiplying by 1/10 as a substitute for divide by 10. The 1/10 value is being represented as a "fixed point fraction" which is constructed similarly to the way floats are. Enter 0.1 into the float visualizer tool and look to see where that same funny 0xcccccccd number shows up in the significand bits of the float. This technique is known as reciprocal multiplication and gcc will generally convert division by a constant in this way. The math behind this is somewhat complex, but if you'd like to know more, check out Doug Jones' tutorial on reciprocal multiplication. 2) Tools (40 minutes) Deadlisting with objdump As part of the compilation process, the assembler takes in assembly instructions and encodes them into binary machine form. Disassembly is the reverse process that converts binary-encoded instructions back into human-readable assembly text. objdump is a tool that operates on object files (i.e. files containing machine instructions in binary). It can dig out all sorts of information from the object file, but one of the more common uses is as a disassembler. Let's try it out! - Invoking objdump -dextracts the instructions from an object file and outputs the sequence of binary-encoded machine instructions alongside the assembly equivalent. This dump is called a deadlist ("dead" to distinguish from the study of "live" assembly as it executes). Use maketo build the lab programs and then objdump -d codeto get a sample deadlist. - The ./countops.pypython script reports the most heavily used assembly instructions in a given object file. Try out ./countops.py codefor an example. The script uses objdumpto disassemble the file, tallies instructions by opcode, and reports the top 10 most frequent. Use whichto get the path to a system executable (e.g. which emacs) and then use countops.pyon that path. Try this for a few executables. Does the mix of assembly instructions seem to vary much by program? Tip #1: you can save the output of objdump to file so that you can easily view it later by doing objdump -d code > myfile.txt. Tip #2: remember that all literal values in assembly are in hexadecimal! This is a common gotcha when reading assembly code. GDB Commands for Live Assembly Debugging Below we introduce a few of the gdb commands that allow you to work with code at the assembly level. The gdb command disassemble with no argument will print the disassembled instructions for the currently executing function. You can also give an optional argument of what to disassemble, such as a function name or code address. (gdb) disass myfn Dump of assembler code for function myfn: 0x000000000040051b <+0>: movl $0x1,-0x20(%rsp) 0x0000000000400523 <+8>: movl $0x2,-0x1c(%rsp) ... The hex number in the leftmost column is the address in memory for that instruction and in angle brackets is the offset of that instruction relative to the start of the function. You can set a breakpoint at a specific assembly instruction by specifying its address b *address or an offset within a function b *myfn+8. Note that the latter is not 8 instructions into main, but 8 bytes worth of instructions into main. Given the variable-length encoding of instructions, 7 bytes can correspond to one or several instructions. If you try to set a breakpoint on an address without using the * prefix, gdb will try to interpret the address as a function name and not the address of a particular assembly instruction, so be careful! (gdb) b *0x400570 break at specified address (gdb) b *myfn+8 break at instruction 8 bytes into myfn() The gdb commands stepi and nexti allow you to single-step through assembly instructions. These are the assembly-level equivalents of the source-level step and next commands. They can be abbreviated si and ni. (gdb) stepi executes next single instruction (gdb) nexti executes next instruction (step over fn calls) The gdb command info reg will print all integer registers. You can print or set a register's value by name. Within gdb, a register name is prefixed with $ instead of the usual %. (gdb) info reg rax 0x4005c1 4195777 rbx 0x0 0 .... (gdb) p $rax show current value in %rax register (gdb) set $rax = 9 change current value in %rax register The tui (text user interface) splits your session into panes for simultaneously viewing the C source, assembly translation, and/or current register state. The gdb command layout <argument> starts tui mode. The argument specifies which pane you want ( src, asm, regs, split or (gdb) layout split. Reading and Tracing Assembly in GDB Let's try out all these new gdb commands! - Read over the program code.c. - Compile the program and load it into gdb. Disassemble myfn. - Use the disassembly to figure out where arris being stored (Hint: take a look at the first movinstructions - %rspis a pointer to the current top of the stack). How are the values in arrinitialized? What happened to the strlencall on the string constant to init the last array element? - What instructions were emitted to compute the value assigned to count? What does this tell you about the sizeofoperator? Is it an actual function executed by the assembly instructions? - Set a breakpoint at myfnand run the program. Use the gdb command info localsto show the local variables. Compare this list to the declarations in the C source. You'll see some variables are shown with values ("live"), some are <optimized out>, but others don't show up at all. Look at the disassembly to figure out what happened to these entirely missing variables. Step through the function repeating the info localscommand to observe which variables are live at each step. Examine the disassembly to explain why there is no step at which both totaland squaredare live. As a hint: where do the different variables live at the assembly level? What about ones with constant values?6 should be understanding tools for disassembling object files, and using the debugger at the assembly level. The hands-on practice relating C code to its compiled assembly and reverse-engineering from assembly to C is great preparation for your next assignment. Here are some questions to verify your understanding and get you thinking further about these concepts: - Give an example of a C sequence/expression that could be compiled into a variety of different, but equivalent, assembly instruction(s). - What is the difference between sarand shr? - How is the return value of a function communicated from the callee to the caller? - What kind of clues might suggest that an leainstruction is doing an address calculation versus when it is merely arithmetic? - How can you get the value of a variable during execution that gdb reports has been <optimized out>?
https://web.stanford.edu/class/archive/cs/cs107/cs107.1202/lab6/
CC-MAIN-2020-05
refinedweb
3,221
55.34
The ESP32 contains a Serial Peripheral Interface Flash File System (SPIFFS). SPIFFS is a lightweight filesystem created for microcontrollers with a flash chip, which NodeMCU LittleFS Filesystem Uploader in Arduino IDE. At the moment, this is not compatible with Arduino 2.0. If you’re using VS Code with the PlatformIO extension, read the following tutorial instead: Table of Contents - Introducing SPIFFS - Installing the Arduino ESP32 Filesystem Uploader - Uploading Files using the Filesystem Uploader - Testing the Uploader especially useful to: - Create configuration files with settings; - Save data permanently; - Create files to save small amounts of data instead of using a microSD card; - Save HTML and CSS files to build a web server; - Save images, figures, and icons; - And much more. With SPIFFS, you can write the HTML and CSS in separate files on your computer. This makes it really easy and simple to work with files. Let’s install it. Note: at the time of writing this post, the ESP32 Filesystem Uploader plugin is not supported on Arduino 2.0. First, make sure you have the ESP32 add-on for the Arduino IDE. If you don’t, follow the next tutorial: Windows Instructions Follow the next steps to install the filesystem uploader if you’re using Windows: 1) Go to the releases page and click the ESP32FS-1.0.zip file to download. 2) Find your Sketchbook location. In your Arduino IDE, go to File > Preferences and check your Sketchbook location. In my case, it’s in the following path: C:\Users\sarin\Documents\Arduino. 3) Go to the sketchbook location, and create a tools folder. 4) Unzip the downloaded .zip folder. Open it and copy the ESP32FS folder to the tools folder you created in the previous step. You should have a similar folder structure: <Sketchbook-location>/tools/ESP32FS/tool/esp32fs.jar 5) Finally, restart your Arduino IDE. To check if the plugin was successfully installed, open your Arduino IDE. Select your ESP32 board, go to Tools and check that you have the option “ESP32 Sketch Data Upload“. MacOS X Follow the next instructions if you’re using MacOS X. 1) Go to the releases page and click the ESP32FS-1.0.zip file to download. 2) Unpack the files. 3) Create a folder called tools in /Documents/Arduino/. 4) Copy the unpacked ESP32FS folder to the tools directory. You should have a similar folder structure. ~Documents/Arduino/tools/ESP32FS/tool/esp32fs.jar save into the ESP32 filesystem. As an example, create a .txt file with some text called test_example. 5) Then, to upload the files, in the Arduino IDE, you just need to go to Tools > ESP32 Sketch Data Upload. The uploader will overwrite anything you had already saved in the filesystem. Note: in some ESP32 development boards you need to press the on-board BOOT button when you see the “Connecting …….____……” message. The files were successfully uploaded to the ESP32 filesystem when you see the message “SPIFFS Image Uploaded“./RST”). Another way to save data permanently is using the ESP32 Preferences library. It is especially useful to save data as key:value pairs in the flash memory. Check the following tutorial: For more projects with ESP32, check the following resources: 98 You must name the file same as it is displayed in the code, for example, in this code there is written: File file = SPIFFS.open(“/test_example.txt”); So you need to call the file as test_example or change the code to the name you want, of course adding .txt . PD. Do not add the .txt on the file name, just create the file as a text file and name it as you want. The program will understand you are talking about a .txt file. At first, why can’t I try it? thank you for answer. Sarah, I have the same error that Carl describes on July 4, 2019. I am also using the Arduino IDE 1.8.9. It turned out that adding #include “FS.h” in the example solves the problem. Greeting, Dick This comment is not correct, forget it. Greeting, Dick. Hi, great THX from Austria. Solved my problem 🙂 Thanks, I tried all of the other options mentioned and they didn’t work. Yours, however, did work. Many thanks. I’m migrating from Windows to Mac and things are a bit different. I also had to move by library folder into the sketches folder.. This is explained very well, thank you for the tutorial. Will this work with an ESP8266 as well? Hi. Yes. Here’s the tutorial for ESP8266: Regards, Sara I am having zero luck with this. I can upload the SPIFFS file the first time and then thereafter I get: [SPIFFS] data : /Users/davidh/Documents/Arduino/spiffs-check/data [SPIFFS] start : 2686976 [SPIFFS] size : 1472 [SPIFFS] page : 256 [SPIFFS] block : 4096 /test_example.txt [SPIFFS] upload : /var/folders/n1/59y24n0j2z128fchsgxh1f5w0000gn/T/arduino_build_804785/spiffs-check.spiffs.bin [SPIFFS] address: 2686976 [SPIFFS] port : /dev/cu.SLAB_USBtoUART [SPIFFS] speed : 921600 [SPIFFS] mode : dio [SPIFFS] freq : 80m esptool.py v2.6 Serial port /dev/cu.SLAB_USBtoUART.SLAB_USBtoUART: [Errno 16] Resource busy: ‘/dev/cu.SLAB_USBtoUART’ Failed to execute script esptool SPIFFS Upload failed! If I restart the Arduino application it uploads. However running the SPIFFS read test program always comes up with failed to open file for reading. So issues with upload and with reading. Hi David. Can you take a look at this discussion and see if it helps: See Stéphane answer. Regards, Sara I am not having much luck with the web servers using the SPIFF. I am using Arduino V1.8.13. When I try to include the .zip library ESP32FS-1.0 I always get the error message: “TTGO LoRa32-OLED V1, 80MHz, 921600, None” “Specified folder/zip file does not contain a valid library” I am wondering if the LoraESP32 board supports ESP32FS. Thanks Hi Brian. Where are you trying to include the filesystem uploader? It shouldn’t be in the project folder. The data folder should be in the project folder. You’re probably not installing the uploader plugin properly. Regards, Sara 1.) I download ESP32FS-1.0.zip 2.) In Arduino IDE, I go to: Sketch -> Include Library-> Add .zip Library 3.) I choose ESP32FS1.0.zip from my download folder. 4.) I get the following error message at the bottom of the IDE: “Specified folder/zip file does not contain a valid library” with an orange background. The newer versions of Arduino IDE do not have a Tool folder like the previous versions. Should I maybe just unzip the ESP32FS-1.0.zip outside of the Arduino IDE and place the .jar file in a Tool folder that I create myself? Many thanks for your help! Hi Brian. I don’t think it should be installed in that way. But you can try it. I think you may need to install the Arduino version that comes with a .zip file: And then, follow our instructions to install the filesystem. Regards, sara Hello Sara, Thank you very much for this tutorial and it worked well for me, but when I opened the serial monitor, after the sketch had been down loaded, it was blank. However, I realized that the text had been and gone before I opened the monitor and I just pressed the reset button on the ESP32 board and up came the text as you specified. I wonder is this would be of help to the gentleman who seemed to have similar problems. I was so pleased that this worked for me as yesterday evening I had been trying for hours to try and get LittleFS working without success and I spotted your tutorial, like a life belt to a drowning man. Kindest regards Adrian Hi. Yes, that’s normal because the text is printed in the setup(). So, you need to press the RST button to be able to see it. We have a tutorial about LittleFS with the ESP8266 with VSCode + PlatformIO: Regards, Sara Hi Rui and Sara, A slight word of warning/note: The uploader will overwrite anything you had already saved in the filesystem. For me, not a big deal since it was just some test files for an ESP32-based shell/file editor I’ve been working on. But in a production environment where I might be saving config files and runtime data, this would be an issue. You might wish to note this in the text of the tutorial (unless I missed it!!). Steve Yes, you are right. Thanks for the warning. I’ve added a note about that. Regards, Sara For those who struggling on Mac. Make a folder, “tools” inside Arduino directory. Then, move downloaded folder to “tools” folder Finally, you should change the the name of the folder from “ESP32FS 2” to “ESP32Fs”. Now, you may find “ESP32 Sketch Data Upload” at the tools bar. sorry the folder name should be changed to “ESP32FS”, not “ESP32Fs”. saved my day, thank you Hi Rui et al, Great project! I’m quite new in the field, please help me out: I’m getting the “expected unqualified-id before ‘<‘ token ” error while trying to compile the code, line 23 (). Where should I dig? Regards, Th. Hi. that is a syntax error. Double-check that you have copied the complete code and that you haven’t inserted any character by mistake. Regards, Sara Thank you. I looked at many sites for instructions for installing “Sketch Data Upload” plugins. This is the only site that gave the correct folder location for Windows 10. Thanks again. You are always the best source for information about ESP devices. Thanks 🙂 Hi Rui and Sara, Very good your tutorial on ElegantOTA library. Congratulations! I wonder if there is any way or library to do a remote update, that is, I am in one place and ESP32 is in another. Hi. You can check IoTAppStory: Regards, Sara Hello Sara, I appreciate your prompt reply. I will check carefully, Regards, Rubens Am getting this error on esp32 “SPIFFS Not Supported on avr Hi. Make sure you have an ESP32 board selected in Tools > Board. Regards, Sara Thanks. solved I have a SPIFFS based webserver (based in your post on that). I want to and a settings file to store wifi crefentials etc. But if I do that next time I upload the SPIFFS image I loose all settings stored on my settings file. Is there a way to keep the settings? Hi. Yes. I recommend using the Preferences library. We even have a tutorial showing how to save and read Wi-Fi credentials. I think this post is exactly what you are looking for: Regards, Sara I was able to do this to get SPIFFS working, but I was hoping that it could show how to log a temperature and light (LDR) reading or some couple of sensors. if you are going to do a follow-up, the next step would be data logging to SPIFFS Hi Dave. Thanks for the suggestion. We have these data logging tutorials for the SD card (not SPIFFS), but it is similar: – – Regards, Sara I’m stuck ar this step: “3) Unzip the downloaded .zip folder to the Tools folder. You should have a similar folder structure: /Arduino-/tools/ESP32FS/tool/esp32fs.jar” There’s no “tools” folder in Arduino for Mac. Where should it be placed then? Hi. On the OS X create the tools directory in ~/Documents/Arduino/ and unpack the files there. Regards, Sara Thank you Sara. But it didn’t work. In fact, I’ve found only one file within this zip: “/ESP32FS/tool/esp32fs.jar”. I’ve placed it in “~Documents/Arduino//ESP32FS/tool/esp32fs.jar”. I’ve restarted Arduino IDE and no “ESP32 Sketch Data Upload” appears in Tools menu. Is it suitable for Arduino IDE 2.0? Thank you. Regards Hi. Unfortunately, this doesn’t work with Arduino 2.0. Regards, Sara Thank you, Sara. It seems that there won’t be any support to that in Arduino IDE 2.0. Regards Hi, I have installed Arduino 1.8.15. Then installed the ESP32FS downloaded from website and placed under C:\Program Files (x86)\Arduino\tools folder. After that I was able to see “ESP32 Sketch Data Upload” in my tools menu of IDE. But when I click there I get “SPIFFS Error: esptool not found!” My sketch folder is “C:\Users\jinda\OneDrive\Documents\Arduino\hardware\heltec\IRserver3” As I was reading the comments I tried putting the ESP32FS under the tools folder under IRserver3 but it still didn’t work. Any help please? Hi. I’m not sure what is causing that error. See this discussion, it might help: Regards, Sara Thanks a lot Sara. My esptool.exe was in the esptool folder, I took it out to the same location as esptool.py then it worked. Great! I’m glad you solved the issue. Regards, Sara I have Arduino IDE 1.8.16 from the Windows Store, not the version from Arduino website. I cannot locate the “Tools” folder anywhere to copy the plugin into. I am running latest update of Windows 10. Have checked /AppData/Local (no Arduino folders at all here), same for AppData/Roaming Have checked the Sketchbook location and the ArduinoData folder that sits next to that. Nowhere do I have a Tools folder! Has anyone else encountered this or have a solution as to where to copy the plugin for this version of the Arduino IDE? Hi. That version of Arduino IDE, doesn’t have the Tools folder. You need the .ZIP folder version of Arduino from the Arduino website: Regards, Sara Perfect. Reinstalled from Arduino website and now I can have the tools folder, and the Data Uploader appears in the Tools menu in the IDE. Unable to actually do the upload however, as I get “SPIFFS Error: serial port not defined!” popping up as an error. I am using an AiThinker ESP32Cam board, which of course does not have a direct USB connection, and I think this might be the cause of the issue. Have you found a way to upload to SPIFFS on one of these ESP32Cam boards? I’ve never been able to get serial monitor to work with one of these ESP32Cam boards (but have with other ESP32 boards that have direct USB), so I’m wondering if there is a clever way to get Arduino IDE to recognise serial connection that would also enable upload to SPIFFS on an ESP32Cam? Progress – managed to get SPIFFS upload to start but, it only gets as far as “Connecting…..___…..” then “A fatal error occured: Failed to connect to ESP32: Timed out waiting for packet header” then “SPIFFS Upload Failed” I discovered that other ESP32 boards have way more options in the tools menu than the AIThinker ESP32Cam has (like partition scheme for example). So as a test, I swapped from ESP32Cam to Wrover, then back again, and for some reason that allowed the serial port error to go away. Hi. When you see the dots on the debugging window ….___….____ press the on-board RST button and it will upload the spiffs image. To be able to see the Serial Monitor, you must disconnect GPIO 0 from GND after uploading the code. Regards, Sara Thanks Sara – that has been an awesome help. Fixed two issues at once! When my project is complete I’ll send through the details. Have really enjoyed the tutorials, e-books and courses that you and Rui have made available. That’s great! I’m glad you enjoy our tutorials. Regards, Sara Hi, Thank you for the tutorial. I has been many learning from your tutorial. I have problem related ESP32 Sketch Data Upload plugin. Is it available for ubuntu 20.04 arduino ide? Thank you very much. Hendry Hi. I think you can install it in Ubuntu (but I never tried it). Regards, Sara Hi Sara Thanks a lot for those tutos and help you provide for all of us, you website is a reference to me before starting anything with ESP chip. I am under Win10 and Arduino 1.8.13 I follow your instructions and createa tools folder before uncompressing the uploader and it works but: Now I am trying to upload on a ESP32-C3 dev module then got error message : esptool.py v3.1 Serial port COM11 Connecting…. A fatal error occurred: This chip is ESP32-C3 not ESP32. Wrong –chip argument? SPIFFS Upload failed! Of course I check my setup to confirm I select the right chip in tools board selection Do you know if there is an other way to specify chip argument ? Hi. I haven’t experimented with ESP32-C3. From that error, it seems it is not compatible with the current ESP32 board’s add-on. You need a different installation for that board. Go to File > Preferences and copy the following link to the “Additional Boards Manager URLs”: Then, click ok. Then go to Tools > Board > Boards Manager In the Boards Manager, search esp32 by Espressif Systems. Install the latest 2.0.0 version Now, once the installation is complete, you can see the ESP32C3 Dev Module in Tools > Board. Select it and try to upload code again. I don’t know if code for ESP32 is compatible with ESP32-C3. Regardsm Sara Orginal data uploader didn’t work for ESP32-C3 you have to install a forked one for that you can find here : On ESP32 dev kit V1 i made also this at start of sketch : #include “FS.h” I have made a SPIFF manager where you can also transfer files from ESP32 to PC via Serial in binary mode, so your file/image is converted in Hex, copy this output on serial monitor and paste in a Notepad++, in Plugins/Converter of Notepad++ you see HEX to ASCII converter, use it and save result with original extension (so .bmp or .png ecc…), in this mode, if you save screenshots of ESP32 to SPIFF you can also transfer to PC (i am working for save display buffer, but various solutions online not work on my Ili9488 touch screen….need more work) Hello Sara. Your are my ESP32-Hero’s !! For each project i take a look at your website for technical support and knowledge. Mostly I find the anwsers on your website. But now I have a question about this subject SPIFFS. In the example you show Serial.write(file.read()); But how can I read a couple characters or read a line from the file and bring this into a variable or string ? Or did I miss this little part of the story ? Greetings Abraham. Hi. At the moment, we don’t have any tutorials about different ways to handle files. You can use a function like the following, for example, that returns the first line written in a file: String readFile(fs::FS &fs, const char * path){ Serial.printf(“Reading file: %s\r\n”, path); File file = fs.open(path); if(!file || file.isDirectory()){ Serial.println(“- failed to open file for reading”); return String(); } String fileContent; while(file.available()){ fileContent = file.readStringUntil(‘\n’); break; } return fileContent; } Just pass SPIFFS as the first argument and the filepath as a second argument to that function. Regards, Sara Hi, Do you know of any way to upload data files to SPIFFS on ESP32-S2 based boards? “ESP32 Sketch Data Upload” in the Arduino IDE errors out when it finds the ESP32-S2 is being used. Any thoughts or suggestions would be appreciated. Thanks, Bill You need close serial monitor Hello, thanks for the great tutorial. I need to use SPIFFS on an external FLASH chip on a custom board based on an ESP32 pico. How could I do it? And if the uploader mentioned above to put under “Tools” doesn’t support external FLASH (only internal I guess), how should I upload the files? Do I have to create a custom partition? I don’t know how to tell SPIFFS to search from the extended memory. Thank you You mean SD ? is many example and also some external library for manage em No I mean a second FLASH chip that I add to a custom PCB where the ESP32 is mounted on. So I would like to upload with the tool my files to the second FLASH memory in SPIFFS, not the internal one on the ESP32. SPIFFS doesn’t seem to be supported anymore. I am new to this so figuring it out with littleffs is a bit over my head 🙁 Hi. Why do you say that? You can install it and use it. You just need to follow the steps. Regards, Sara there is no tools folder in my arduino folder please help me Hi. Try to unzip the esp32fs.jar file containing the plugin in Sketchbook location > tools > ESP32FS > tool folder. The Sketchbook location folder is defined in the Arduino IDE, by selecting File > Preferences. If the Arduino IDE is open, then Arduino IDE must be closed and restarted, and the Tools menu will then include the SP32 Sketch Data Upload option. Regards, Sara When uploading the sketch data, the serial monitor window should NOT be open. Otherwise, there will be an error message stating that the upload failed. I spent some time googling for the info. Perhaps this post should be amended with such a warning for newbies. Not sure if anyone else encountered the same problem. Thought I just share the info. Otherwise, this was an enjoyable project. Hi Sara, As always, the tutorials on your site are the best! When I first saw Rui’s newsletter email I thought it was going to be for an html based files system. No worries, I found what I needed. However I do want to mention I use a file upload function on my website. It really makes it handy to upload specific files via Wi-Fi, only one at a time, when making small changes to a webpage or a saved variable. Or as in my case, a JSON data file. Hi. Is there a stand alone way tooad files on to the spiff drive and retrieve them without using the IDE or a Web interface. I am looking at how to have someone load their own data on to the SPIFF after I have programmed it for them. Hi. I’m sorry, but I’m not familiar with something that does that. Regards, Sara If you want upload and download from SPIFFS is more simple use a FTP server and a external FTP on PC (for example Total Commander with Local IP set and username/password same of Esp32) how show this example: #include <WiFi.h> #include “SPIFFS.h” #include <SimpleFTPServer.h>/* */ const char* ssid = “WIFI SSID”; const char* password =”WIFI Password”; FtpServer ftpSrv; //set #define FTP_DEBUG in ESP8266FtpServer.h to see ftp verbose on serial void setup(void){ Serial.begin(115200); WiFi.begin(ssid, password); Serial.println(“”); // Wait for connection while (WiFi.status() != WL_CONNECTED) { delay(500); Serial.print(“.”); } Serial.println(“”); Serial.print(“Connected to “); Serial.println(ssid); Serial.print(“IP address: “); Serial.println(WiFi.localIP()); /////FTP Setup, ensure SPIFFS is started before ftp; ///////// if (SPIFFS.begin(true)) { Serial.println(“SPIFFS opened!”); ftpSrv.begin(“Esp32″,”Esp32”); //username, password for ftp. set ports in ESP8266FtpServer.h (default 21, 50009 for PASV) } } void loop(void){ ftpSrv.handleFTP(); //make sure in loop you call handleFTP()!! } HI and thanks. The people I am targetting are very non-computer.. So Telling them to set up their ESP32 for their Wifi is going to be beyond them. Think of it this way. An analogy.. I am making a Photo shower using an ESP32 I want the end users to be able to add their own photos. I want to save costs on not having an SD card so the SPIFFS or the LITLTEFS would be ideal as an SD reader and an SD Card could add about $5 to the values. But an application that can talk to the ESP32 via a serial port would be much easier. Many thanks for replying Dave Dave, if you setup a webserver you can create a webpage to upload files. Search out littlefs file manager and fine tune to your needs. There is limited space so they might need to remove files to make room for new files. On the SPIFFS you can fit only 3-5 Jpg images…you need use a SD for more Great! Thanks for sharing. Regards, Sara SPIFFS is deprecated Replaced by LittleFS. That’s what you should be talking up, not SPIFFS. Hi. SPIFFS is only deprecated for the ESP8266. Regards, Sara
https://randomnerdtutorials.com/install-esp32-filesystem-uploader-arduino-ide/?replytocom=735676
CC-MAIN-2022-27
refinedweb
4,117
75.3
Considering a Port to .NET Core? Use NDepend Editorial Note: I originally wrote this post for the NDepend blog. You can check out the original here, at their site. While you’re there, take a look at the latest version of NDepend, with extensive features around technical debt measurement. An American colloquialism holds, “only two things are certain: death and taxes.” If I had to appropriate that for the software industry, I might say that the two certainties are death and legacy code. Inevitably, you have code that you have had for a while, and you want to do things with it. Software architects typically find themselves tasked with such considerations. Oh, sure, sometimes they get to pick techs and frameworks for greenfield development. Sometimes they get to draw fancy diagrams and lay out plans. But frequently, life charges them with the more mundane task of “figuring out how to make that creaky old application run on an iPhone.” Okay, maybe it’s not quite that silly, but you get the idea. If you earn a living as an architect in the .NET world, you have, no doubt, contemplated the impact of .NET Core on your application portfolio. Even if you have no active plans to migrate, this evolution of .NET should inform your strategic decisions going forward. But if you have use for deploying the framework along with your application or if you want to run on different operating systems, you’re going to need to port that legacy code. I am, by no means, an expert in .NET Core. Instead, my areas of specialty lie in code analysis, developer training, and IT management and strategy consulting. I help dev teams create solutions economically. And because of this, I can recognize the value of NDepend to a port from what I do know about .NET core. Is Your Code Worth Preserving? As I’ve mentioned various times on this blog, part of my consulting practice revolves around codebase assessments. I help clients understand how aspects of their code translate to business outcomes. Often, NDepend plays a key role in such assessments. Clients often ask me the question, “is this code worth preserving?” Obviously, this question invites intense subjectivity. But that doesn’t mean you can’t establish some objective criteria for answering. NDepend’s extensive code quality metrics and code rules help you get started. On top of that, you can use it to identify problematic areas of the codebase and recurring anti-patterns. If your applications sets off red flags left and right, you may want to rethink your strategy. Do you have classes that are thousands of lines? How about methods that are brutally complex? Is the code of even marginal quality? Use the tool to give yourself an idea as to the state of your code. Sometimes simply looking at things from a new angle can give you the jolt that you need to consider a new option. It’s possible that you’d be better off starting from scratch with .NET Core than trying a port. Identifying Dependencies If you’re satisfied that porting your code would be worthwhile, you should first run this API port utility on the prospective codebase. It will furnish you with a high level summary of assembly portability, and a list of non-portable API usage. That gives you a great start, but NDepend can further help you. The utility itself provides a general score and a list. But NDepend specifically deals in helping you visualize and manage your dependencies. Use NDepend to see just how daunting a task you face. Perhaps the actual scope of your dependence is more or less than you think. Alternatively, you might find hidden, transitive dependencies. How Modular Are You? Once you have a feel for the mechanics of the port and dependency management, it’s time to look at a higher level architectural concern. How modular is your application? .NET core, for instance, does not support Webforms. But perhaps you were thinking that you’d prefer to move forward with MVC anyway and were willing to incur the cost of migrating both to .NET Core and to MVC. That’s a fine goal, to be sure, but is it realistic? NDepend can help you visualize properties in your codebase that help you understand. The aforementioned dependencies factor in, certainly. And you will also want to look at cohesion, coupling, and fan-in as well. NDepend can help you paint a picture of whether a partially portable codebase is modular enough to evolve or not. Ad Hoc Queries In the past, I’ve blogged about treating code as data. Powerful visualization tools, informative out of the box rules, and tons of capabilities all make NDepend a wonderful tool. But perhaps its most subtle, and possibly underrated, value proposition comes from it allowing you to make ad hoc queries of your code. Consider the aforementioned Webforms example. Beyond what NDepend gives you immediately, you might want other questions answered. “What percentage of classes in my code base reference a Webforms namespace?” “Which assemblies depend on Webforms?” “What is the average cyclomatic complexity of code behind methods?” CQLinq and the NDepend API grant you a whole lot of power. Using these tools, you can construct answers to questions like these that you might have about your codebase. In fact, you can construct answers to far more questions than I can dream up and feed you as examples. For the architect role, the answers to questions like that furnish true power. The business wants capability, and you find yourself tasked with helping them achieve that. If your respond with, “gosh, I don’t know if this will work,” the counter will likely be “make it work.” If, on the other hand, you respond with “80% of classes in this code depend on something we cannot port,” you’ll encounter far less pushback. Whether for .NET Core or, really, for any other major architectural initiative, NDepend gives you the ammo to make good decisions and good arguments.
https://daedtech.com/considering-port-net-core-use-ndepend/
CC-MAIN-2022-40
refinedweb
1,009
66.33
Hello, I created a script to find the latest uploaded items on ArcGIS Online using ArcGIS API for python. There are no errors but it does not find anything. I search for items uploaded in last 7, 30 or even 100 days. It finds few items if I search from 2009 till now but even then I don't think it shows everything. Did I miss something? Do I need to format query differently? Out of ideas... Code: from arcgis.gis import GIS gis = GIS("", "username", "password", proxy_host = "example", proxy_port = 0000) # how far back does the search go? user input input_days = input("how many days back do you want to search? ") # based on days, that user has entered, calculate the date to start the search from in unix time, in seconds as per query format requirements import time now = time.time() # converting input to integer, then to seconds, then calculating date from which items where uploaded to unix time days = int(input_days) days_in_seconds = days*24*60*60 dateFrom_s = now - days_in_seconds # function to format unix time in seconds to the format required def timeQuery(time_in_seconds): # convert time to milliseconds, then to interger to remove everything after point, then convert to string and add 6x0 return ('000000'+str(int(time_in_seconds*1000))) nowQ = timeQuery(now) beforeQ = timeQuery(dateFrom_s) print(beforeQ) print(nowQ) search_result = gis.content.search(query = "uploaded: [beforeQ TO nowQ]") #search_result = gis.content.search(query = "*") # some date in 2009 in unix time, ms, and six zeros at the front as per query requirement 0000001259692864000 search_result2 = gis.content.search(query = "uploaded: [0000001259692864000 TO 0000001542676158153]") #search_result = gis.content.search(query = "*") print(search_result) print(search_result2) The main issue is that the time value from which you are starting, time.time(), is in a format that needs an additional conversion. str(int(time_in_seconds*1000))) should be str(int(time_in_seconds*1000000))). As ArcGIS Online stores the "uploaded" time in UTC, you probably also want to set your "now" using UTC as well. Your code is using your local time zone. For instance, you might try something like this: Also, the default for max_items with gis.content.search is 10, so if the search finds more than ten items, it will only return the first ten results. If you are expecting more than ten results, then you will want to increase max_items appropriately. Hope that helps!
https://community.esri.com/thread/224718-arcgis-api-for-python-find-latest-items
CC-MAIN-2019-22
refinedweb
388
65.22
Enabling Support for Alternate Languages in SharePoint 2010 In this article we will be seeing how to enable support for alternate languages in SharePoint 2010. The installed languages in the SharePoint server can be supported by the site which is specified by the owner of the site, so that users who navigate to the site can then change the display language of the user interface to any of these alternate languages. For more information on Alternate Languages refer (Copy the link and paste). I have installed the Hindi language pack in my SharePoint 2010 server, in this I am going to enable support for Hindi language (Alternate language). Go to Site Actions => Site Settings => Site Administration => Language Settings. Check the Hindi language in the Alternate language's section. Click on Ok. Now the users have the ability to change the language of the User interface for a website to Hindi language as shown in the following. The same thing can be achieved using SharePoint 2010 object model.. using System; using System.Collections.Generic; using System.Linq; using System.Text; using Microsoft.SharePoint; using System.Xml; using System.Globalization;
https://www.thetechplatform.com/post/enabling-support-for-alternate-languages-in-sharepoint-2010
CC-MAIN-2022-21
refinedweb
188
59.3
Following on from the previous post I had one last thing that I wanted to do. At the moment, when my application moves from one feed item to another, it simply hides the UI displaying the feed item and then shows it again at a later point which is a bit “dull”. It’d be nice if it did some kind of transition. I figured that I’d use the Shader Effect Library from CodePlex for this so I downloaded it and then added a reference to the ShaderEffectLibrary from my project; Now I can make use of an effect on my user control. Picking a ripple effect, I can make sure my XAML can “see” the library; xmlns:effects=”clr-namespace:ShaderEffectLibrary;assembly=ShaderEffectLibrary” and then I can add an effect to my declaration of my user control in my main UI as in; <local:ItemDisplayControl x: <local:ItemDisplayControl.Effect> <effects:RippleEffect x: </local:ItemDisplayControl.Effect> </local:ItemDisplayControl> now I can add some Storyboards to my main UI to manipulate that effect ( this XAML is a snippet rather than the whole UI definition ); <Window.Resources> <Storyboard x: <DoubleAnimation From="0" To="0.5" Storyboard. <DoubleAnimation From="0" To="50" Storyboard. </Storyboard> and I need to rejig my code a little bit so that rather than just displaying the new item, we ripple it as well 🙂 Updated code is; void OnItemDisplayReady(object sender, EventArgs args) { ShowItemDisplayControl(); } void ShowItemDisplayControl() { displayControl.Visibility = Visibility.Visible; SbRipple.Begin(); } void HideItemDisplayControl() { displayControl.Visibility = Visibility.Hidden; } Storyboard SbRipple { get { return (mainGrid.FindResource("sbRipple") as Storyboard); } } and that’s it (for now) – I’m done with it. There are some other things I could do here; - Fix a bug that I know is lurking in there where certain Feed Items don’t display. Trying to nail that down. - Add some UI around only include Positive/Negative sentiment in the searches. but I’ve had fun playing with WPF and Blend and it’s also strangely introduced me to the idea that watching Twitter feeds is very addictive and I already learnt a whole bunch of stuff this morning that I wouldn’t otherwise know so I think I might have to become a bit of a Twitter-watcher (is that a Twitcher?) if not a Tweeter. The final bits for download are here.
https://mtaulty.com/2008/11/19/m_10908/
CC-MAIN-2019-51
refinedweb
386
52.8
My Qt5 program needs to use one enum if the version of the ALSA library which it is dependent on is less than a certain value, and a different enum if the version is greater than or equal to that value. Is it possible for qmake to check for the version of that library and to set a define that I can use to set the proper enum expression? It's possible but unnecessary. Your question is yet another X-Y problem: all you want is to check the version of ALSA library. qmake doesn't figure anywhere in it, right? All you want is: #include <alsa/version.h> #if SND_LIB_VERSION >= 0x010005 // 1.0.5 and later enum { FOO = 42 }; #else // 1.0.4 and earlier enum { FOO = 101010 }; #endif Even better, in modern C++ you can ensure that your code won't bit-rot: int constexpr kFoo() { return (SND_LIB_VERSION >= 0x010005) ? 42 : 101010; }
https://codedump.io/share/jr0mdDmboaYz/1/is-it-possible-to-use-qmake-to-check-the-version-of-a-library
CC-MAIN-2017-30
refinedweb
153
72.36
Name | Synopsis | Description | Options | Examples | Environment Variables | Exit Status | Attributes | See Also | Notes | Warnings The nistbladm command is used to administer NIS+ tables. There are five primary operations that it performs: creating and deleting tables, adding entries to, modifying entries within,M bytes.: Searchable. Specifies that searches can be done on the column's values (see nismatch(1)). Case-insensitive (only makes sense in combination with S). Specifies that searches should ignore case. Crypt. Specifies that the column's values should be encrypted. Binary data (does not make sense in combination with S). If not set, the column's values are expected to be null terminated ASCII strings. XDR encoded data (only makes sense in combination with B). access is specified in the format as defined by the nischmod(1) The following options are supported: Adds entries to a NIS+ table. The difference between the lowercase `a' and the uppercase `A' is in the treatment of preexisting entries. The entry's contents are specified by the column=value pairs on the command line. Values for all columns must be specified when adding entries to a table. Normally, NIS+ reports an error if an attempt is made to add an entry to a table that would overwrite an entry that already exists. This prevents multiple parties from adding duplicate entries and having one of them get overwritten. If you wish to force the add, the uppercase `A' specifies that the entry is to be added, even if it already exists. This is analogous to a modify operation on the entry. Creates a table named tablename in the namespace. The table that is created must have at least one column and at least one column must be searchable. Destroys the table named tablename. The table that is being destroyed must be empty. The table's contents can be deleted with the -R option below. Edits the entry in the table that is specified by indexdname. indexdname must uniquely identify a single entry. It is possible to edit the value in a column that would change the indexed name of an entry. The change (colname=value) may affect other entries in the table if the change results in an entry whose indexed name is different from indexedname and which matches that of another existing entry. In this case, the -e option will fail and an error will be reported. The -E option will force the replacement of the existing entry by the new entry (effectively removing two old entries and adding a new one). A synonym for -E. This option has been superseded by the -E option. Removes entries from a table. The xentry is specified by either a series of column=value pairs on the command line, or an indexed name that is specified as entryname. The difference between the interpretation of the lowercase `r' versus the uppercase `R' is in the treatment of non-unique entry specifications. Normally the NIS+ server will disallow an attempt to remove an entry when the search criterion specified for that entry resolves to more than one entry in the table. However, it is sometimes desirable to remove more than one entry, as when you are attempting to remove all of the entries from a table. In this case, using the uppercase `R' will force the NIS+ server to remove all entries matching the passed search criterion. If that criterion is null and no column values specified, then all entries in the table will be removed. Updates attributes of a table. This allows the concatenation path (-p), separation character (specified with the (-s)), column access rights, and table type string (-t) of a table to be changed. Neither the number of columns, nor the columns that are searchable may be changed.- - -. When creating or updating a table, this option specifies the table's search path. When a nis_list() function is invoked, the user can specify the flag FOLLOW_PATH to tell the client library to continue searching tables in the table's path if the search criteria used does not yield any entries. The path consists of an ordered list of table names, separated by colons. The names in the path must be fully qualified. When creating or updating a table, this option specifies the table's separator character. The separator character is used by niscat(1) when displaying tables on the standard output. Its purpose is to separate column data when the table is in ASCII form. The default value is a space. When updating a table, this option specifies the table's type string. This example creates a table named hobbies in the directory foo.com. of the type hobby_tbl with two searchable columns, name and hobby. The column name has read access for all (that is, owner, group, and world) and modify access for only the owner. The column hobby is readable by all, but not modifiable by anyone. In this example, if the access rights had not been specified, the table's access rights would have come from either the standard defaults or the NIS_DEFAULTS variable (see below). To add entries to this table: In the following example, the common root domain is foo.com (NIS+ requires at least two components to define the root domain) and the concatenation path for the subdomains bar and baz are added: To delete the skiers from our list: Note: The use of the -r option would fail because there are two entries with the value of skiing. To create a table with a column that is named with no flags set, you supply only the name and the equals (=) sign as follows: This example created a table, named notes.foo.com., of type notes_tbl with two columns name and note. The note column is not searchable. When entering data for columns in the form of a value string, it is essential that terminal characters be protected by single or double quotes. These are the characters equals (=), comma (,), left bracket ([), right bracket (]), and space ( ). These characters are parsed by NIS+ within an indexed name. These characters are protected by enclosing the entire value in double quote (") characters as follows: If there is any doubt about how the string will be parsed, it is better to enclose it in quotes. This variable contains a defaults string that will be override the NIS+ standard defaults. If the -D switch is used those values will then override both the NIS_DEFAULTS variable and the standard defaults. If this variable is set, and the NIS+ table name is not fully qualified, each directory specified will be searched until the table is found. See nisdefaults(1). The following exit values are returned: Successful operation. Operation failed. See attributes(5) for descriptions of the following attributes: NIS+(1), niscat(1), nischmod(1), nischown(1), nischttl(1), nisdefaults(1), nismatch(1), nissetup(1M), attributes(5) NIS+ might not be supported in future releases of the Solaris operating system. Tools to aid the migration from NIS+ to LDAP are available in the current Solaris release. For more information, visit. To modify one of the entries, say, for example, from “bob” to “robert”: Notice that “[name=bob],hobbies” is an indexed name, and that the characters `[' (open bracket) and `]' (close bracket) are interpreted by the shell. When typing entry names in the form of NIS+ indexed names, the name must be protected by using single quotes. It is possible to specify a set of defaults such that you cannot read or modify the table object later. Name | Synopsis | Description | Options | Examples | Environment Variables | Exit Status | Attributes | See Also | Notes | Warnings
http://docs.oracle.com/cd/E19253-01/816-5165/6mbb0m9mj/index.html
CC-MAIN-2015-32
refinedweb
1,263
63.09
Directed indexed families and sets # This file defines directed indexed families and directed sets. An indexed family/set is directed iff each pair of elements has a shared upper bound. Main declarations # directed r f: Predicate stating that the indexed family fis r-directed. directed_on r s: Predicate stating that the set sis r-directed. directed_order α: Typeclass extending preorderfor stating that αis ≤-directed. A family of elements of α is directed (with respect to a relation ≼ on α) if there is a member of the family ≼-above any pair in the family. def directed_on {α : Type u} (r : α → α → Prop) (s : set α) : Prop A subset of α is directed if there is an element of the set ≼-above any pair of elements in the set. theorem directed_on_iff_directed {α : Type u} {r : α → α → Prop} {s : set α} : directed_on r s ↔ directed r coe theorem directed_on.directed_coe {α : Type u} {r : α → α → Prop} {s : set α} : directed_on r s → directed r coe Alias of directed_on_iff_directed. theorem directed_on_image {α : Type u} {β : Type v} {r : α → α → Prop} {s : set β} {f : β → α} : directed_on r (f '' s) ↔ directed_on (f ⁻¹'o r) s theorem directed_on.mono {α : Type u} {r : α → α → Prop} {s : set α} (h : directed_on r s) {r' : α → α → Prop} (H : ∀ {a b : α}, r a b → r' a b) : directed_on r' s theorem directed.mono {α : Type u} {r s : α → α → Prop} {ι : Sort u_1} {f : ι → α} (H : ∀ (a b : α), r a b → s a b) (h : directed r f) : theorem directed.mono_comp {α : Type u} {β : Type v} (r : α → α → Prop) {ι : Sort u_1} {rb : β → β → Prop} {g : α → β} {f : ι → α} (hg : ∀ ⦃x y : α⦄, r x y → rb (g x) (g y)) (hf : directed r f) : theorem directed_of_sup {α : Type u} {β : Type v} [semilattice_sup α] {f : α → β} {r : β → β → Prop} (H : ∀ ⦃i j : α⦄, i ≤ j → r (f i) (f j)) : A monotone function on a sup-semilattice is directed. theorem monotone.directed_le {α : Type u} {β : Type v} [semilattice_sup α] [preorder β] {f : α → β} : theorem directed_of_inf {α : Type u} {β : Type v} [semilattice_inf α] {r : β → β → Prop} {f : α → β} (hf : ∀ (a₁ a₂ : α), a₁ ≤ a₂ → r (f a₂) (f a₁)) : An antitone function on an inf-semilattice is directed. @[class] structure directed_order (α : Type u) : Type u A preorder is a directed_order if for any two elements i, j there is an element k such that i ≤ k and j ≤ k. Instances @[instance] def linear_order.to_directed_order (α : Type u_1) [linear_order α] :
https://leanprover-community.github.io/mathlib_docs/order/directed.html
CC-MAIN-2021-43
refinedweb
435
63.12
table of contents NAME¶ al_draw_soft_triangle - Allegro 5 API SYNOPSIS¶ #include <allegro5/allegro_primitives.h> void al_draw_soft_triangle( ALLEGRO_VERTEX* v1, ALLEGRO_VERTEX* v2, ALLEGRO_VERTEX* v3, uintptr_t state, void (*init)(uintptr_t, ALLEGRO_VERTEX*, ALLEGRO_VERTEX*, ALLEGRO_VERTEX*), void (*first)(uintptr_t, int, int, int, int), void (*step)(uintptr_t, int), void (*draw)(uintptr_t, int, int, int)) DESCRIPTION¶ Draws a triangle using the software rasterizer and user supplied pixel functions. For help in understanding what these functions do, see the implementation of the various shading routines in addons/primitives/tri_soft.c. The triangle is drawn in two segments, from top to bottom. The segments are deliniated by the vertically middle vertex of the triangle. One of the two segments may be absent if two vertices are horizontally collinear. Parameters: - • - v1, v2, v3 - The three vertices of the triangle - • - state - A pointer to a user supplied struct, this struct will be passed to all the pixel functions - • - init - Called once per call before any drawing is done. The three points passed to it may be altered by clipping. - • - first - Called twice per call, once per triangle segment. It is passed 4 parameters, the first two are the coordinates of the initial pixel drawn in the segment. The second two are the left minor and the left major steps, respectively. They represent the sizes of two steps taken by the rasterizer as it walks on the left side of the triangle. From then on, each step will either be classified as a minor or a major step, corresponding to the above values. - • - step - Called once per scanline. The last parameter is set to 1 if the step is a minor step, and 0 if it is a major step. - • - draw - Called once per scanline. The function is expected to draw the scanline starting with a point specified by the first two parameters (corresponding to x and y values) going to the right until it reaches the value of the third parameter (the x value of the end point). All coordinates are inclusive. SEE ALSO¶ al_draw_triangle(3alleg5)
https://manpages.debian.org/bullseye/allegro5-doc/al_draw_soft_triangle.3alleg5.en.html
CC-MAIN-2021-43
refinedweb
333
54.22
> From: Peter Reilly [mailto:peterreilly@apache.org] > I do not want to be too negative but... Well, I did ask for input... It's sometimes painful to get negative feedback, but it's usually constructive. > >2) Second, is the XML syntax I used too verbose? > The xml looks very verbose. It is. I tried to add as much semantic as I could. > I would drop description and synopsis. I would make "type" and "required" > be attributes and not elements. So you're not interested in having the Task Overview be auto-generated from all the synopsis elements? We'd still hand maintain that other page? > The use of namespaces is clever but a bit offputting. I see the big > picture - allowing antlib documention to be cross-linked, but I wonder > if this is really necessary. Which one? attr: and elem:? Or the automatic recognition of antlib:* URIs? > Adding in cross-refernces etc, is nice but the cost may be too high. I would actually have thought that reading <ac:for/> was the best way, both because it's concise and has semantic. We're talking of facilitating the creation of many AntLibs with the SVN proposal, and we'd not allow automatic cross-links between AntLibs? > The xdoc format looks a lot easier to understand, although > the cdata stuff does stand out like a sore thumb. I think I need to go back to look at it again. But I'd like to know more fundamentally what we are trying to do with the doc. If the goal is to keep it textual, i.e. add the minimum amount of markup (and thus semantic), to replicate what we have in HTML, then my first attempt was overkill. I would content that if we go that route, just adding a few <div>'s and CSS class'es along with removing all explicit HTML formatting from the existing doc would get that. It's only the matter of designing a good .css and 'tagging' the HTML with CSS class's and id's. I hope I don't sounds too disgruntled ;-) Something that would be interesting would be for interested people to modify the existing verbose ant.xml and come up with the markup they'd think is acceptable. Then I and others can try to XSL that into something. Please keep the feedback coming. --DD --------------------------------------------------------------------- To unsubscribe, e-mail: dev-unsubscribe@ant.apache.org For additional commands, e-mail: dev-help@ant.apache.org
https://mail-archives.eu.apache.org/mod_mbox/ant-dev/200503.mbox/%3CA672996F85AA7240A9DD2A455BD0D4288782EB@HOUEXCH902.Landmark.lgc.com%3E
CC-MAIN-2021-25
refinedweb
413
76.42
] Lajapathy Arun(5) Rion Williams(4) Jean Paul(4) Manpreet Singh(3) Gopi Chand(3) Sharad Gupta(3) Sachin Bhardwaj(3) Shubham Srivastava(3) Mahak Gupta(3) Deepak Dwij(3) Destin joy(3) Vinoth Rajendran(2) Sanjeeb Lenka(2) Scott Lysle(2) Sonia Bhadouria Vishvkarma(2) Mahesh Chand(2) Vijai Anand Ramalingam(2) Nithya Mathan(1) Tahir Naushad(1) Karthik Elumalai(1) Saravanan Ponnusamy(1) Kishore Chowdary(1) Abdul Rasheed Feroz Khan(1) Manikandan Murugesan(1) Prashant Bansal(1) Delpin Susai Raj(1) Kasam Shaikh(1) Priyaranjan K S(1) Mehul Rajput(1) Manoj Kalla(1) Rahul Sahay(1) Rakesh (1) Veena Sarda(1) Matthew Cochran(1) Saad Mahmood(1) Sara Silva(1) Sreekanth Mohan(1) Rajeev Ranjan(1) Lakshmanan Sethu(1) Ashish Kumar(1) Devesh Omar(1) Sagar Pardeshi(1) Abhishek Jaiswal (1) Satendra Singh Bhati(1) Tripti Tiwari(1) Nitin Bhardwaj(1) Anurag Sarkar(1) Sudhir Choudhary(1) Ashwani Tyagi(1) Mayur Dighe(1) Gaurav Gupta(1) Bryian Tan(1) Manish Singh(1) Rahul Ray(1) Vikas Mishra(1) ArunKumar Yalamarthy(1) Mamta M(1) Petar Minev(1) Amit Choudhary(1) Satyapriya Nayak(1) Marcus (1) Deepak Verma(1) Suthish Nair(1) Abhimanyu K Vatsa(1) Nipun Tomar(1) Karthikeyan Anbarasan(1) Zoran Horvat(1) Puran Mehra(1) Resources No resource found”.. Accessing Identity And HttpContext Info Using Dependency Injection In .NET Core Oct 11, 2016. In this article, you will learn how to access Identity and HttpContext Info using Dependency Injection in .NET Core. Node.JS Vs. Java For Enterprise App Development - Basic Modules Oct 04, 2016. There are many factors to consider when choosing a platform for your enterprise applications. Here is detailed info about why developers would choose Node.JS over Java for enterprise app development. Google Map Settings And Google Map From - Marking With Info TextBox May 22, 2016. In this article we will learn to mark the form to place through LATITUDE -LONGITUDE with data of friend table. User Info In Universal Windows Programs Jan 15, 2016. This article explains how to read user information in UWP application. The Newest Fiddle Sibling: .NET Fiddle Nov 17, 2015. This article will cover some of the major features found in .NET Fiddle and set you on the path to coding from the comfort of your browser.. Stateless Path Drawing Based on Incoming Data Jun 26, 2015. In this article we will learn about Stateless Path drawing based on incoming data. Storing Form Values & Images Path in Database & Images in ASP.Net Application Folder Jun 15, 2015. In this article you will learn how to manage an Employee Registration From with Employee Images saved in the database and project folders. Upgrading to SharePoint 2016 May 14, 2015. In this article, we will see the path for upgrading to SharePoint 2016. Code Scouting Mar 31, 2015. This article describes a way to gather information before committing to a particular path. Unleashing Path List Box: Expression Blend (WPF, Windows Store) Jan 07, 2015. In this article you will learn about Unleashing Path List Box, Expression Blend (WPF, Windows Store).. Release Management For Visual Studio 2013, Configuration Guide Jul 31, 2014. This article provides a walkthrough on the configuration of release management for Visual Studio 2013. Upload File Using PHP Jun 06, 2014. In this article we are going to learn the basic steps for uploading a file into the server or a destination path using PHP Get User Profile Display Picture Information in SharePoint 2013 CSOM May 19, 2014. This article explains how to get the user profile display picture info in SharePoint 2013 CSOM.. Website Translator Jan 02, 2014. This article is for all web developers who want to see their website working in several languages (from languages this time I mean: regional and national languages). Setting Path of JDK in Windows OS Nov 06, 2013. In this article you will learn step-by-step how to set the path of JDK in the Windows operating system. Get the Details of Your Windows Operating System in ASP.Net Aug 17, 2013. This article describes how to get the details of your Windows Operating System. Here I will get the information from the Win32_OperatingSystem class. Get All Intranet User Details in ASP.Net Aug 16, 2013. This article describes how to get the details of users present in a intranet. here i will get these info from Win32_UserAccount class. Create Site Collection With New Managed Path and Content Database Jul 15, 2013. In this article we have explore the PowerShell script to create new site collection with new managed path & content database. Powers of HTML 5 Jul 14, 2013. In this article I will demonstrate the powers of HTML 5, like Canvases, Shapes, Arcs and Curves. Error Logging With Caller Info Jul 10, 2013. This article provides a brief introduction to the use of Caller Information as a means for creating an error logging utility class for your C# 5.0/.NET 4.5 applications. SharePoint 2010 - Moving Sites and Solutions Jul 03, 2013. In this article I would like to explore the shortest path available to achieve the scenario. MySQLi Function in PHP: Part 8 May 24, 2013. In this article I describe the PHP MySQLi functions mysqli_get_server_info, mysqli_info, mysqli_get_server_version, mysqli_init and mysqli_insert_id.#. Drawing and Type Tools in Photoshop Mar 29, 2013. In this article you will learn about Drawing and type tools in Photoshop. AnimateMotion Following a Path Mar 06, 2013. In this article I describe the implementation and use of amimateMotion following a Path. Root Draw On Map in iPhone Feb 01, 2013. In this article I explain how to draw a path between two locations in an iPhone. How to Save Data Using File Path in iPhone Jan 15, 2013. In this article I will explain how to save data temporarily using file path. Calendar Function in PHP: Part 1 Dec 20, 2012. In this article I am going to explain the calendar functions in PHP. TextSearch (Release 1.0) - From Multiple Text Documents Dec 19, 2012. TextSearch-1.0 reveals the track to enhance the search functionality of Text Documents belonging to the same path and folder.. XML Pathfinder - A Visual Basic Utility Nov 09, 2012. This article discusses the construction of a simple utility that may be used to locate and evaluate paths within an XML document, and to test queries against those paths. Use of Motion Path in Expression Blend 4 Sep 03, 2012. Today we are going to see animation using the Motion Path Option. How to Design Email-Box Using Expression Blend 4 Aug 09, 2012. Here we are going to design an Email-Box. How to get full path of a file in C# Jul 14, 2012. How to get full path of a file using C# and .NET. How to get directory of a file in C# Jul 14, 2012. How to get the directory name of a file in C# and .NET. LINQ in Windows Store App Jun 25, 2012. In this article we are going to implement LINQ in Windows Store application and taking List of Books class as a data source. HTML 5 Interactive Map Using SVG Path and Polygon Jun 15, 2012. Here you will learn how to work with a HTML 5 Interactive Map using SVG Path, Polygon, kineticJS and jQuery. Getting Information of Selected Text at Runtime in Windows Store App Jun 06, 2012. In this article we will create a Windows Store application that shows information of the selected text of a text box at run time. Implementation of ListBox in Windows Store App May 29, 2012. In this article we will use a List Box, one of the controls in XAML pages. With the help of this control we will implement a list of player information. Working With Drives and Directories in ASP.NET May 24, 2012. In this article we will discuss how to interact with the File System, Drives and Directories and how we get the path details. Dashed Lines and Transforming Shapes in WPF May 03, 2012. In this article, we will discuss how to create dashed lines in Path Geometrics in WPF and how we create Transforming shapes In WPF. Practical Approach of Deleting Files From FTP Path Apr 27, 2012. In this article we are going to see how to delete a file from the FTP path. Path Geometries in WPF Apr 26, 2012. In this article, we discuss the Path Geometries in WPF. Practical Approach of Creating and Removing Directory To/from FTP Path Apr 25, 2012. In this article we are going to see how to create and remove a directory to/from FTP path. Practical Approach of Getting Directory List and Its Details From FTP Path Apr 24, 2012. In this article we are going to see how to fetch directory and file details from the FTP path. Practical Approach of Uploading File to FTP Path Apr 19, 2012. In this article we are going to see how to upload a file to a FTP path and save it. Practical Approach of Downloading File From FTP Path Apr 18, 2012. In this article we are going to see how to download a file from a FTP path and save it to a local folder.. Canvas Clipping Region Using HTML 5 Mar 02, 2012. In this article we are going to understand working with Canvas Clipping Regions Using HTML 5. In this section we can draw a path and then use the clip() method of the canvas context. Creating Various Text Paths Using HTML 5 Feb 14, 2012. This is a simple application for beginners that shows how to create various text paths using HTML 5 and CSS tools.. URL (Uniform Resource Locator) Rewriting Dec 22, 2011. This article demonstrates the complete URL rewriting concept using regular expression and set up the predefined rules. The article also demonstrates the post back issues in ASP.NET while requesting to the virtual path.. Tangrams in WPF Nov 07, 2011. In this article we will see how to use the WPF move and rotate concepts to implement a tangram game Setting folder full rights permission in Windows 7 Oct 21, 2011. Often developers will be receiving access rights error while installing application in Windows 7 and Vista. The error message will be like “Access to the path is denied”. Graphics in Silverlight 5: Part IV Oct 19, 2011. In the fourth part of this series, we shall explore the Path element. Info Path .XSN Inside: Part 1 Oct 02, 2011. In this series, I’ll explain to you how to work programmatically with .xsn file in a very low level manner. Improve your productivity with New PowerCommands Tool on VisualStudio 2010 Sep 23, 2011. Here I am again with new HotShot stuff that will increase your productivity +1 level up. Are you ready to take a tour on this? yes. Set Path For Java in Windows XP Jul 11, 2011. It's a good idea to set the path permanently instead of writing the full path each time. So, keep reading this article to learn how to set the path for Java in Windows XP. Core Java - Start it Now! Jun 30, 2011. This article is for people who want to start learning Java (core Java), including the development tools required to start learning Java is provided herein. Programmatically create Managed Paths in SharePoint 2010 Jun 23, 2011. Managed Paths - We can specify which paths in the URL namespace of a Web application are used for site collections. We can also specify that one or more site collections exists at a specified path. Convert Rows to Columns in SQL Server May 29, 2011. This article will help to convert values in rows to column/fields name or headers. Editing the Path of any Shape in XAML Silverlight May 09, 2011. In this article, you will learn how to edit the path of any Shape.. Disassemble code in Visual Studio instead of ILDSAM disassembler Mar 08, 2011. Managed Paths in SharePoint 2010 using PowerShell Feb 01, 2011. In this article we will be seeing about Define Managed paths in SharePoint 2010. . Combine Command in the Object menu of Microsoft Expression Blend Sep 16, 2010. Combine Command in the Object menu of Microsoft Expression Blend to create interesting effects – Combine command can be used with shapes or paths to make your drawing or artwork more powerful. About Info-path NA File APIs for .NET Aspose are the market leader of .NET APIs for file business formats – natively work with DOCX, XLSX, PPT, PDF, MSG, MPP, images formats and many more!
http://www.c-sharpcorner.com/tags/Info-path
CC-MAIN-2017-51
refinedweb
2,103
65.83
Setting Unscoped Variables Inside CFThread In ColdFusion A few years ago, I looked at the way scoped-variables were treated inside a ColdFusion component when set within the context of a CFThread tag. Since that posting (which was overly complex in retrospect), I've come to love CFScript; and with that love, I've also dropped most of my explicit references to the Variables and Arguments scopes. This has lead to a few "head scratching" moments when setting unscoped variables inside of a CFThread tag. Each CFThread tag shares the Variables scope of its parent page. And, when you reference unscoped variables within a CFThread tag, ColdFusion will look in the Variables scope for those references (after looking in the thread-local and thread-attributes scopes). If you go to update an unscoped, simple variable within a CFThread tag, however, you do need to provide the "variables." scope; otherwise, ColdFusion will set the new value in the thread-local scope. NOTE: I am explicitly referring to "simple" variables as updating properties and indices of Structs and Arrays, respectively, does not suffer from this problem. To demonstrate this, I've create a ColdFusion component that has two private variables, valueA and valueB. These private variables are then both referenced and updated within a CFThread tag: <cfscript> component output = false hint = "I test unscoped variables in a thread." { public void function testThread() { variables.valueA = "initial value A."; variables.valueB = "initial value B."; thread name = "test-thread" action = "run" { // "Update" unscoped value inside CFThread. valueA = "unscoped set inside thread [ #valueA# ]."; // "Update" unscoped value inside CFThread - via other // method that is bound to variables scope. variables.otherMethod(); thread.local = duplicate( local ); } thread action = "join"; // Dump variables and thread scope to see where the "value" // is currently stored. writeDump( var = variables, label = "Variables Scope" ); writeDump( var = cfthread[ "test-thread" ], label = "Thread" ); } private void function otherMethod() { valueB = "unscoped set inside other method [ #valueB# ]."; } } </cfscript> To make the demo even more exciting, the thread body calls a private ColdFusion component methods, which then updates one of the unscoped variables. And, when we run the above ColdFusion component method - testThread() - we get the following page output: As you can see, both unscoped variables were successfully read from the Variables scope; however, when updated, both unscoped variable updates were stored in the thread-local scope. As a metaphor, perhaps it's easiest to think about the CFThread tag as using Prototypal inheritance in which its CFThread prototype is the Component's Variables scope. In prototypal inheritance (think JavaScript), you can read simple values in from your prototype object; but, when you go to set a simple value, the value is stored in your local scope. Prototypal inheritance provides asymmetric access patterns, like CFThread. Anyway, just a minor note to be aware of. Reader Comments If you want to write to a scope outside the thread you'll have to use a backdoor to pass in a reference (just remember ACF's idiotic legacy behaviour of passing arrays by copy instead of by reference): <cfthread action="run" name="LOCAL.myLittleThread" tunnel="#createObject( 'java', 'java.lang.ref.SoftReference' ).init( LOCAL )#"> @Michael, When it comes to CFThread and Arrays, you're actually fighting two different wars at the same time! On the one hand, Adobe ColdFusion passes arrays by value, not by reference. But, you're also dealing with CFThread attributes - and, ColdFusion performs a deep-copy when you pass something in via the thread attribute :) I understand why they do it - it helps with race conditions and concurrency and all that stuff. But, on the other hand, it is very frustrating. What I usually do now is just pass IDs in via the attributes and then reference the variable's scope when I need to perform actions within the thread body. Something like this (pseudo code): In this case, I am performing a deep-copy of the integer, UserID, which is no big deal; then, I use the variables-scoped service object - userService - to get a a thread-local copy of the object I need to do stuff with. So far, that's been workout out fairly well.": Then elsewhere in the component: Of course this does clutter the "this" namespace, and you wouldn't want to change the function related to the thread name on the fly because of race conditions, but it does work. @Robin, The use of "this" is such an interesting case. I think it has to do with the way that "this" is actually implemented. If you dump out the Variables scope, you will notice that "this" is actually a property of the variables scope: "variables.this = [this scope properties]". That said, I wonder if references to the "this" are really just doing this: [implicit variables].this.something ... where "this" is, itself, an "unscoped" variable on the this scope. I vaguely remember a few years ago trying to override the "this" scope from locally within a function: Anyway, the whole scoping in ColdFusion components is a little funky monkey :)
https://www.bennadel.com/blog/2531-setting-unscoped-variables-inside-cfthread-in-coldfusion.htm
CC-MAIN-2021-31
refinedweb
833
60.24
Detect a face with OpenCV Please Note: This Post refers to OpenCV 2.x! The following sample uses the classes videoand common, which are helper classes from the samples/python2 folder. The cascades used in this sample are located in the data folder of your OpenCV download, so you'll probably need to adjust the filename parameter cascade_fn to make this example work: import cv2 from video import create_capture from common import clock, draw_str # You probably need to adjust some of these: video_src = 0 cascade_fn = "haarcascades/haarcascade_frontalface_default.xml" # Create a new CascadeClassifier from given cascade file: cascade = cv2.CascadeClassifier(cascade_fn) cam = create_capture(video_src) while True: ret, img = cam.read() # Do a little preprocessing: img_copy = cv2.resize(img, (img.shape[1]/2, img.shape[0]/2)) gray = cv2.cvtColor(img_copy, cv2.COLOR_BGR2GRAY) gray = cv2.equalizeHist(gray) # Detect the faces (probably research for the options!): rects = cascade.detectMultiScale(gray) # Make a copy as we don't want to draw on the original image: for x, y, width, height in rects: cv2.rectangle(img_copy, (x, y), (x+width, y+height), (255,0,0), 2) cv2.imshow('facedetect', img_copy) if cv2.waitKey(20) == 27: break
https://www.bytefish.de/blog/object_detection.html
CC-MAIN-2020-40
refinedweb
191
50.84
25966/how-can-i-logarithmic-axes-with-matplotlib-in-python I want to plot a graph with one logarithmic axis using matplotlib. I've been reading the docs, but can't figure out the syntax. I know that it's probably something simple like 'scale=linear' in the plot arguments, but I can't seem to get it right Sample program: import pylab import matplotlib.pyplot as plt a = [pow(10, i) for i in range(10)] fig = plt.figure() ax = fig.add_subplot(2, 1, 1) line, = ax.plot(a, color='blue', lw=2) pylab.show() You can use the Axes.set_yscale method. That allows you to change the scale after the Axesobject is created. That would also allow you to build a control to let the user pick the scale if you needed to. The relevant line to add is: ax.set_yscale('log') You can use 'linear' to switch back to a linear scale. Here's what your code would look like: import pylab import matplotlib.pyplot as plt a = [pow(10, i) for i in range(10)] fig = plt.figure() ax = fig.add_subplot(2, 1, 1) line, = ax.plot(a, color='blue', lw=2) ax.set_yscale('log') pylab.show() Good question. I actually was stuck with ...READ MORE Try virtualenv : This helps you create isolated ...READ MORE Hey, Web scraping is a technique to automatically ...READ MORE Hello, Just replace and with , and you're done: try: ...READ MORE You can also use the random library's .. Assuming that your file unique.txt just contains ...READ MORE OR At least 1 upper-case and 1 lower-case letter Minimum 8 characters and Maximum 50 characters Already have an account? Sign in.
https://www.edureka.co/community/25966/how-can-i-logarithmic-axes-with-matplotlib-in-python?show=25972
CC-MAIN-2021-31
refinedweb
283
70.7
Summary The buzz and crackle of mating unrelated new excitements: Experiences with Amazon web services, Project Atom, and a J2ME camera phone application that acts as a bar code scanner to transform all physical goods into mere floor demos. This isn't anything so coherent as an essay, just a loose collection of things that I've found exciting over the past few weeks, akin to a jar of fireflies that may congregate into a single holy shining globe of brilliance, or on the other hand may just as likely wink out without further notice...Smarter indexing == More value for books I already have Amazon allowing me to search the text contents of a gazillion books makes the old books in my own library more useful. If I vaguely recall a passage, a quote, a character, a witty line or somesuch but can't remember the location, I need only to execute a few text searches in Amazon to locate the source and find the exact page number. It's like an instant index. Great for religious debates, standardized tests, arguments with the in-laws, cheating at trivia contests, or just general casual re-education. Here's an example that queries Ed Roman's book Mastering Enterprise JavaBeans to see if he had any mentions of SOAP. Personally I dislike the notion of having to go to the amazon.com site and digging down into specific HTML-based entries to begin such a query, but fortunately for me Amazon seems to sympathize; how long 'til we see MS Office plugins that can take advantage of this feature via a few RESTful requests to Amazon while you're working in Word or Outlook, or perhaps even a few SOAP-based requests to new operations in the Amazon web services? Amazon has taken another step closer to becoming the central commerce engine for mobilized and service-oriented applications everywhere. By aggregating Google and Amazon's offerings, a developer could have intelligent, query-able access to a content repository unlike any previously assembled. If the apps ran on unobtrusive and ubiquitous mobile devices, such a device could become not a replacement for books (so much folly, wasn't it?) but a sort of satellite aid for enriching the reading experience, working along with books. If only there were a pleasantly simple yet extensible way to develop such apps...Project Atom I don't intend to pick sides between RSS 2.0 and Project Atom, because as a product vendor I think it's best if both are successful so that I have more interesting things to support and simplify in tools a layer above the standards. I have been spending a good deal of time playing with Atom lately, though, so I'll mention my experience with it -- meaning no disrespect to RSS 2.0 (things Winer and Ruby tend to get religious, and religion tends to get violent; this caveat is more necessary than some may know). That said... wow! A simple, extensible format is emerging for content publishing and syndication, editing, and archiving that relies on the RESTful approach of using HTTP GET/POST/PUT/DELETE for its operations, replacing the blog world's current mixed bag of RSS/XML-RPC/WebLog APIs/HTML forms. Nicely extensible via reasonable namespace support, easily managed through tools and XPath/XQuery engines, supportive of multiple locales and content types, etc. Based on asynchronous doc/literal web services rather than RPC. All this is to my liking. My purpose here is not to explain what it is or consider its status, though. If you haven't seen it, browse through the wikki. I mention it because it helps me get to something I've been wanting for myself... Offline Queries of XML Schema-compliant content delivered from Amazon, Google, Exchange, and... What I want is my Amazon and Google (and others, but keeping it to just these two for simplicity's sake) services to return data in the form of a content model that conforms to something like Atom, making it easier for tools and client platforms to manage it intelligently. I want to be able to execute an XPath expression or a full XQuery statement in some sort of explorer window (that is, a file window in any OS or device) to grab relevant data. And I want to be able to update that data even while offline, and have it later synchronize (if using Atom, via a POST or PUT) when I reconnect. And I want the data to find me automatically based possibly on some subscription settings, and I want... well, I want a lot of the things that ex-Microsoft and current-BEA visionary Adam Bosworth speaks about when he talks of a Web Services Browser. Atom takes a pretty good stab at suggesting a content model and an HTTP-based API for approaching such things, stopping short of features or modules such as intermittently connected data and services, which of course we could add (or hijack from WS-Whatever) and integrate using Atom's namespace support. And at Macromedia I have been spending a great deal of time helping design a client-side container named Central to manage just this sort of functionality on the client, and see much headway there as we evolve the web browser (more on Central another time). It will take the right remote services, content model, friendly API, and client container to make it all come together. I fully admit that I'm too enthusiastic to think clearly about this just now, but am comforted by the knowledge that my more familiar cynicism will certainly return -- and that's when I'll probably be able to get something real implemented. In terms of actual code using these sorts of services, though, I did pull together something based on these thoughts and the realization that... Camera Phones are Barcode Scanners!A few weeks ago I built my first real J2ME application: a simple app that uses the Mobile Media API to grab a picture of a book's bar code, perform some image recognition to rip the ISBN (you can also just enter the ISBN or UPC code manually if you don't happen to have a camera phone), send the ISBN to Amazon, and receive reviews and pricing info on the book. I provided the option to add the book to an Amazon cart, assuming it's less expensive than the copy I happen to be holding, and can even make the purchase via One-Click from the phone if I so desire. Mostly I just use it for the reviews so far, and I had to do some hacking to get the One-Click and purchase stuff to work (Amazon's RESTful and SOAP services don't expose enough features yet), but I found this to be a good use of the available technology to create a service-based mobile app that's not just a game (no offense to game designers). Works great on my Nokia 3650. Better: I could hook this up to the service endpoint for an open source UPC database I recently located, and theoretically do the same for all types of products, not just books and CD's and DVD's. All the world's for-sale goods become mere floor demos. Sadly, however, my friend Jeremy Allaire has informed me that this idea has been patented, which I should have guessed. I'm only a mobile developer by night and weekend, and haven't the inclination to battle some angry company that intends to market the thing, even if my version ends up being decently strong. Ah well, it was still a terrific experiment and it left me bitten by the mobile bug -- I see many mobile apps in my future. Not sure what my official day job employer will think of this... And, and, and... A couple of fireflies escaped the jar: Location-Based Services such as Zingo are chief among them. I'll recapture that one in some future entry. Meanwhile, even if nothing comes of these random thoughts, I feel great excitement, which is non-trivial in a world in which I was staring at implementing another wave of J2EE specs, the Red Sox broke my heart, and a fierce Autumn storm prematurely defoliated our New England trees. I'm just too mobile to be dragged down by it all -- there's random exciting stuff flying around out there! Have an opinion? Readers have already posted 5 comments about this weblog entry. Why not add yours? If you'd like to be notified whenever Sean Neville adds a new entry to his weblog, subscribe to his RSS feed.
http://www.artima.com/weblogs/viewpost.jsp?thread=18731
CC-MAIN-2014-52
refinedweb
1,453
54.86
29 May 2012 19:44 [Source: ICIS news] WASHINGTON (ICIS)--?xml:namespace> S&P said that its closely watched survey of nationwide housing prices, based on work by economists Karl Case and Robert Shiller, shows that all three major indices of home prices “ended the first quarter of 2012 at new post-crisis lows”. The three major indicators of housing price movements include a ten-city composite, a 20-city average and an overall national composite index of prices for single-family homes. “The S&P/Case-Shiller national home price index, which covers all nine S&P said that its key national composite index of home prices “fell by 2% in the first quarter alone and is down by 35.1% from its second quarter 2006 peak”, the point at which the US housing bubble started to collapse and housing prices went into a nose-dive. David Blitzer, chairman of S&P’s indexes committee, said that “While there has been improvement in some regions, housing prices have not turned”. The housing industry, especially new home construction, is a major downstream consuming sector for US chemicals and plastics manufacturers. The S&P home price survey results come in the wake of other data reports suggesting that the long-depressed US housing industry might at last be in the first stages of a recovery. Last week, the Commerce Department said that new home sales jumped by 3.3% in April from March. And earlier this month, Pending home sales rose in March, prompting the National Association of Realtors (NAR) to declare that “the housing market has clearly turned the corner”. However, with S&P indicating that With home prices in continuing decline, builders may find it difficult to construct new homes and sell them at prices that will cover their costs. Declining home prices also make it more difficult for builders to get project development loans and for would-be buyers to get mortgage financing. In addition, declining real estate prices put more existing homeowners at risk of default as the value of their homes sink below the mortgage debt on their properties. And with home prices in decline, potential buyers of new or existing homes may be discouraged from making a purchase for fear that any property they acquire could begin to lose value
http://www.icis.com/Articles/2012/05/29/9565194/us-home-prices-fall-to-new-lows-in-first-quarter-this-year.html
CC-MAIN-2015-14
refinedweb
385
55.27
Basic DMA Help Neededjonathan.earl_1591236 Aug 18, 2017 1:23 PM I am trying to read a status register at regular, short intervals. To do this I've connected a clock to my DMA drq signal and have set up the status register DMA through the DMA wizard. I am using all internal clocks just to test the DMA but I have not seen any successful memory transfer. Could you please look over my code and let me know of any errors? I have also attached an image of my Top Design. #include <project.h> /* Defines for DMA_1 */ #define DMA_1_BYTES_PER_BURST 1 #define DMA_1_REQUEST_PER_BURST 1 #define DMA_1_SRC_BASE (CYDEV_PERIPH_BASE) #define DMA_1_DST_BASE (CYDEV_SRAM_DATA_MBASE) int main() { uint8 DMA_1_Chan; uint8 DMA_1_TD[1];], 1, DMA_1_TD[0], TD_INC_DST_ADR); CyDmaTdSetAddress(DMA_1_TD[0], LO16((uint32)Status_Reg_1_Status_PTR), LO16((uint32)CYDEV_SRAM_DATA_MBASE)); CyDmaChSetInitialTd(DMA_1_Chan, DMA_1_TD[0]); CyDmaChEnable(DMA_1_Chan, 1); unsigned char* Receive_Data = (unsigned char*) CYDEV_SRAM_DATA_MBASE; int i; CyGlobalIntEnable; /* Enable global interrupts. */ Clock_4_Stop(); Clock_4_Start(); CyDelayUs(1500); Clock_4_Stop(); for(;;) { for(i = 0; i < 200; i++) { Pin_1_Write(1); if((int)Receive_Data[i]) CyDelay(500); CyDelay(100); Pin_1_Write(0); CyDelay(500); } } } - DMA_Problem.png 22.5 K 1. Re: Basic DMA Help Neededuser_1377889 Mar 10, 2016 7:00 AM (in response to jonathan.earl_1591236) You should use uint8 StatusByte; and later CyDmaTdSetAddress(DMA_1_TD[0], LO16((uint32)Status_Reg_1_Status_PTR), LO16((uint32)&StatsByte))); It is easier for us when you post your complete project, so that we all can have a look at all of your settings. To do so, use Creator->File->Create Workspace Bundle (minimal) and attach the resulting file, next time. ;-) Bob 2. Re: Basic DMA Help Neededjonathan.earl_1591236 Mar 10, 2016 7:10 AM (in response to user_1377889) Thanks for the reply Bob. Please see the project archive attached. I made the change to uint8 but that didn't seem to have an effect. And for further information, I'm using the CY8CKIT-059 (uses CY8C5888LTI-LP097). - Self_Test.cyprj_.Archive01.zip 507.2 K 3. Re: Basic DMA Help Neededuser_1377889 Mar 10, 2016 7:51 AM (in response to jonathan.earl_1591236) Corrected... Bob 4. Re: Basic DMA Help Neededjonathan.earl_1591236 Mar 10, 2016 9:34 AM (in response to user_1377889) Fantastic, thank you Bob. 5. Re: Basic DMA Help Neededuser_1377889 Mar 10, 2016 1:35 PM (in response to jonathan.earl_1591236) You are always welcome! Bob 6. Re: Basic DMA Help Neededjonathan.earl_1591236 Mar 10, 2016 1:56 PM (in response to user_1377889) One more question since you're already familiar with my project setup. I've found that my drq signal only captures properly up to ~7 MHz with a BUS_CLK of 75 MHz. A faster drq rate seems to cause the DMA to skip edges. Is there a max drq rate that the DMA can handle relative to the bus clock? I've looked at the PSoC 5LP Architecture TRM but it's not straightforward how quickly it can accept drq triggers. I would like to capture the status register data as fast as possible (at regular intervals). See the updated higher speed project attached. Thanks. - Self_Test.cyprj_.Archive02.zip 507.8 K 7. Re: Basic DMA Help Neededuser_1377889 Mar 10, 2016 11:49 PM (in response to jonathan.earl_1591236) Answering that question could be done best by Cypress directly. At top of this page select "Design Support -> Create a Support Case" and describe your problem. Attach your latest project. Bob 8. Re: Basic DMA Help Neededuser_78878863 Mar 11, 2016 2:19 AM (in response to jonathan.earl_1591236) The TRM explains what the actual time needed for a transfer is. Basically its N+6 cycles for an interspoke transfer (e.g. peripheral to sRAM), and 2N+5 for an intraspoke transfer (e.g. sRAM to sRAM). This might be higher when the CPU blocks the sRAM (which always takes 2 cycles). And the the triggering takes another cycle (since it needs to be synced to the clock). That comes up to the 10 cycles for a transfer that you observe.
https://community.cypress.com/thread/14364
CC-MAIN-2017-51
refinedweb
656
66.44
Where has XHTML gone? Where has XHTML gone? Join the DZone community and get the full member experience.Join For Free When I started working in the web development field, XHTML had recently been introduced and was all the rage. I even included XHTML templates and a copy of the DTD in our enterprise CMS, believing that in some years the publicly hosted DTD would be targeted by millions of users browsers, trying to validate XHTML code and rejecting malformed documents. But now in 2011, where XHTML has gone? What XHTML is XHTML is a specification which defines the XML serialization of HTML: while HTML itself is not a strict language, and ignore most of the malformed tags and nesting structures, XML is much more draconian. In its original versions, XHTML 1 and 2, XHTML was the reformulation of HTML 4 in order to transform HTML documents in valid XML ones, agnostic with respect to the graphic presentation or the media type. For example, XHTML deprecated or invalidated all tags strictly related to presentation issues, like <b> (substituted by <strong>) but also <font>. Ideally, XHTML documents could just be viewed on different medias by specifying a different CSS. An interesting idea of XHTML was also providing different modules, via XML namespaces. You are able to compose different markup languages in a document, in addition to the standard one: a language for forms, one for mathematical formulas, one for vector graphics. Here's an example of XHTML snippet, including a MathML expression. <p>Some random text.</p> <math xmlns=""> <apply> <plus/> <apply> <times/> <ci>a</ci> <apply> <power/> <ci>x</ci> <cn>2</cn> </apply> </apply> <apply> <times/> <ci>b</ci> <ci>x</ci> </apply> <ci>c</ci> </apply> </math> A bit verbose, but comparing to using cryptic LaTeX notation, which must be parsed on the server-side, it's not so ugly. The approach of HTML 5 has been instead to provide support directly into a single specification: the <input> element has been extended to provide user-friendly forms without the need for additional JavaScript libraries; <canvas> can be used for SVG; and so on. The DOM is also manipulable via JavaScript, in an important part of the spec. Another advantage of XHTML may be the use of XML tools for web pages too: in every language you have an XML parser, but an HTML one is more difficult to find or write. The issues XHTML 1.0 dates back to 2000. If it is so powerful, why it has not been widely adopted? I think here's why: XML has a very strict syntax with respect to SGML-derived languages like HTML. If there is a syntax error or a missing closing tag or attribute double quote in even one row of your XHTML document, it won't be interpreted by the browser. Moreover, character encoding and JavaScript access to the DOM is more difficult in XHTML documents: try write a & character or access an element without its XHTML namespace. The Facebook case Facebook includes an XHTML 1.0 strict doctype in each page. However, it serves documents with the text/html HTTP response header, which means browser do not treat the content as XML. XHTML has spreaded in Facebook in the last years: the last time I saw it was as FBML, a language used to extend the capabilities of Facebook applications on the client side. FBML tags are included in an XML namespace and are interpreted via JavaScript to produce an effect (not by the browser by itself). For example, the following snippet produces a friend selector, of course customized for the current user: <form action="" id="testForm" method="post"> <fb:friend-selector <input type="submit" value="test" /> </form> Rather then resorting to custom attributes over standard tags like many JS frameworks do, this declarative approach adds a fb XHTML namespace and makes available a whole new set of tags with extended capabilities. And it uses a well-documented standard, like XHTML. However Facebook is in the process of deprecating FBML (moving to iframe-based applications), and preaches to just use more widely adopted standard: HTML and CSS. The same feature of FBML are now available via a JavaScript SDK and by a bunch of Social Plugins. XHTML5! Just when you thought XHTML may disappear, I have to tell you that XHTML has been evolved to accomodate the HTML 5 specification. You can write (only if you really want, of course) HTML 5 as valid XML. However it seems that XHTML 5 will try to remain backward compatible with HTML, for example by allowing elements like <i> and <font> in certain use cases. As a web developer, will you use XHTML [5] in the future? Resources admits that translating an HTML document in XHTML won't result in a difference, unless you incude other languages. Wikipedia's article on MathML shows you an example of a language that can be used in an XHTML document as a module. Opinions expressed by DZone contributors are their own. {{ parent.title || parent.header.title}} {{ parent.tldr }} {{ parent.linkDescription }}{{ parent.urlSource.name }}
https://dzone.com/articles/where-has-xhtml-gone
CC-MAIN-2020-16
refinedweb
851
59.43
homophony provides zc.testbrowser integration for Django; zc.testbrowser is a lot more robust than the default functional testing client that comes with Django. See the introduction to zc.testbrowser for a better understanding of how powerful it is. First of all, you need to have homophony installed; for your convenience, recent versions should be available from PyPI. Let's say you're working on an application called foobar; the tests for this application are located in foobar/tests.py. Use this as a starting point for foobar/tests.py: from homophony import BrowserTestCase, Browser class FoobarTestCase(BrowserTestCase): def testHome(self): browser = Browser() browser.open('') browser.getControl(name='first_name').value = 'Jim' browser.getForm().submit() self.assertEquals(browser.url, '') self.assertEquals(browser.title, 'Hello Jim') Bear in mind that implementing custom setUp and tearDown methods should involve calling those defined in BrowserTestCase. If you prefer doctests over unit tests (as we do!), use the following as a base for foobar/tests.py: from homophony import DocFileSuite def suite(): return DocFileSuite('tests.txt') And here is an example foobar/tests.txt file: The website welcomes its visitors with a form: >>> browser = Browser() >>> browser.open('') >>> browser.getControl(>> browser.getForm().submit() When a name is given, it echoes back with an informal greeting: >>> browser.title 'Hello Jim' >>> print browser.contents <!DOCTYPE html> ... <h1>Hello Jim</h1> ... And there is a link to go back: >>> browser.getLink('Go back').click() >>> browser.title 'Home' There are some useful helpers on the browser class. You can run XPath queries on HTML documents using queryHTML, like this: >>> browser.queryHTML('//h1') <h1>Hello Jim</h1> When debugging tests, it is sometimes handy to open a browser at a particular point in the test. You can accomplish that by invoking serve: >>> browser.serve() This command will start an HTTP server and open a web browser with live access to your application. Use Ctrl-C to stop the server and continue running tests. There is a known issue that the mini-webserver does not serve static files, so your browser may not be able to access Javascript or CSS used by your app. The browser will persist cookies accross requests, so things like user sessions should work. There is an example Django application in the source distribution. Let's run the tests: wormhole:example admp$ ./manage.py test -v 2 website Creating test database... Creating table auth_permission Creating table auth_group Creating table auth_user Creating table auth_message Creating table django_content_type Creating table django_session Creating table django_site Installing index for auth.Permission model Installing index for auth.Message model ... testHome (example.website.tests.FoobarTestCase) ... ok Doctest: tests.txt ... ok ---------------------------------------------------------------------- Ran 2 tests in 0.102s OK Destroying test database... The -v 2 parameter is there to get the list of tests printed, and is otherwise unnecessary. For learning purposes, try to break the tests and witness the details in the output of the test runner. Custom hooks are installed for urllib to pass all requests for to a subclass of WSGIHandler (which exposes Django applications through WSGI). The real heavy lifting is performed by wsgi_intercept. There is a home page with instructions on how to access the code repository. Send feedback and suggestions to team@shrubberysoft.com.
https://crate.io/packages/homophony/
CC-MAIN-2015-22
refinedweb
534
51.34
current position:Home>15 advanced Python tips for experienced programmers 15 advanced Python tips for experienced programmers 2022-01-30 11:23:30 【Yunyun yyds】 This article will introduce 15 A concise Python skill , Towards simplicity and efficiency , Learning is easy to understand . 1. Sort objects by multiple key values Suppose you want to sort the following dictionary list : people = [ { 'name': 'John', "age": 64 }, { 'name': 'Janet', "age": 34 }, { 'name': 'Ed', "age": 24 }, { 'name': 'Sara', "age": 64 }, { 'name': 'John', "age": 32 }, { 'name': 'Jane', "age": 34 }, { 'name': 'John', "age": 99 }, ] Copy code Not just by name or age , Also sort the two fields at the same time . stay SQL in , It will be such a query : SELECT * FROM people ORDER by name, age Copy code actually , The solution to this problem can be very simple ,Python Guarantee sort Function provides a stable sort order , This also means that more similar items will retain their original order . To achieve sorting by name and age , You can do this : import operator people.sort(key=operator.itemgetter('age')) people.sort(key=operator.itemgetter('name')) Copy code Notice how to reverse the order . First, by age , Then sort by name , Use operator.itemgetter() Get the age and name fields from each dictionary in the list , So you'll get the results you want : [ {'name': 'Ed', 'age': 24}, {'name': 'Jane', 'age': 34}, {'name': 'Janet','age': 34}, {'name': 'John', 'age': 32}, {'name': 'John', 'age': 64}, {'name': 'John', 'age': 99}, {'name': 'Sara', 'age': 64} ] Copy code The name is the main sort item , If the names are the same , Sort by age . therefore , all John All grouped by age . 2. Data categories since 3.7 After version ,Python Start providing data categories . Compared with conventional classes or other alternatives ( For example, multiple values or dictionaries are returned ), It has more advantages : - Data classes require very little code - You can compare data classes , because eq This function can be realized - Data class needs type prompt , Reduces the possibility of errors - You can easily print data classes for debugging , because __repr__ This function can be realized This is an example of a working data class : from dataclasses import dataclass classCard: rank: str suit: str card=Card("Q", "hearts") print(card == card) # True print(card.rank) # 'Q' print(card) Card(rank='Q', suit='hearts') Copy code 3. The list of deduction List derivation can replace annoying loops in list filling , Its basic grammar is [ expression for item in list if conditional ] Copy code Let's look at a very basic example , Fill the list with a sequence of numbers : mylist = [i for i inrange(10)] print(mylist) # [0, 1, 2, 3,4, 5, 6, 7, 8, 9] Copy code Because you can use expressions , So you can also do some mathematical operations : squares = [x**2for x inrange(10)] print(squares) # [0, 1, 4, 9,16, 25, 36, 49, 64, 81] Copy code You can even call external functions : defsome_function(a): return (a +5) /2 my_formula= [some_function(i) for i inrange(10)] print(my_formula) # [2.5, 3.0,3.5, 4.0, 4.5, 5.0, 5.5, 6.0, 6.5, 7.0] Copy code Last , have access to if Function to filter the list . under these circumstances , Only reserved for 2 Value of division : filtered = [i for i inrange(20) if i%2==0] print(filtered) # [0, 2, 4, 6,8, 10, 12, 14, 16, 18] Copy code 4. Check the memory usage of the object Use sys.getsizeof() You can check the memory usage of objects : import sys mylist =range(0, 10000) print(sys.getsizeof(mylist)) # 48 Copy code Why is this huge list only 48 Bytes? , This is because range The class returned by the function is represented as a list . Compared to using an actual list of numbers , The storage efficiency of number sequence is much higher . We can use list derivation to create a list of actual numbers in the same range : import sys myreallist = [x for x inrange(0, 10000)] print(sys.getsizeof(myreallist)) # 87632 Copy code By using sys.getsizeof(), We can learn more about Python And memory usage . 5. Find the most frequent values To find the most frequent value in a list or string : test = [1, 2, 3, 4, 2, 2, 3, 1, 4, 4, 4] print(max(set(test), key = test.count)) # 4 Copy code - max() The maximum value in the list will be returned .key Parameter uses a single parameter function to customize the sorting order , In this case test.count, This function applies to each item on the iterator . - test.count yes list Built in features of . It takes a parameter , And calculate the occurrence times of this parameter . therefore test.count(1) Will return 2, and test.count(4) Will return 4. - set(test) return test All unique values in , therefore {1、2、3、4} Then in this line of code will accept test All unique values of , namely {1、2、3、4}. Next ,max It will be applied list.count Function and returns the maximum value . There is a more effective way : from collections import Counter Counter(test).most_common(1) # [4: 4] Copy code 6. Property package You can use attrs Instead of data classes , choice attrs There are two reasons : - The use of Python Version higher than 3.7 - Want more features Theattrs The software package supports all mainstream applications Python edition , Include CPython 2.7 and PyPy. some attrs You can provide unconventional data classes such as validators and Converters . Let's look at some sample code : object): name =attrib(default='John') surname =attrib(default='Doe') age =attrib(init=False) p =Person() print(p) p=Person('Bill', 'Gates') p.age=60 print(p) # Output: # Person(name='John', surname='Doe',age=NOTHING) # Person(name='Bill', surname='Gates', age=60) Copy codeclassPerson( actually ,attrs The authors of are already using the method of introducing data classes PEP 了 . Data classes are deliberately kept simpler 、 Easier to understand , and attrs Provides all the features you may need . 7. Merge dictionaries (Python3.5+) dict1 = { 'a': 1, 'b': 2 } dict2= { 'b': 3, 'c': 4 } merged= { **dict1, **dict2 } print (merged) # {'a': 1, 'b':3, 'c': 4} Copy code If there are overlapping keys , The keys in the first dictionary will be overwritten . stay Python 3.9 in , Merging dictionaries becomes more concise . above Python 3.9 The merge in can be rewritten as : merged = dict1 | dict2 Copy code 8. Return multiple values Python The function in does not have a dictionary , In the case of lists and classes, you can return multiple variables , It works as follows : defget_user(id): # fetch user from database # .... return name, birthdate name, birthdate =get_user(4) Copy code This is a finite return value , But anything more than 3 The contents of each value should be placed in a ( data ) class . 9. Filtering of list elements filter() Use filter() Function acceptance 2 Parameters : - Function object - Objects that can be iterated Next we define 1 Functions, and then to 1 List to filter . First we create 1 A list , And eliminate less than or equal to 3 The elements of : original_list = [ 1,2,3,4,5]# Definition list # Define filter functions 4 def filter_three(number):5 return number > 3 filtered = filter(filter_three, original_list) filtered_list = list(filtered) filtered_list #[4,5] Copy code We define lists original_list Then we define a parameter that accepts a numerical type number Function of filter_three, When the value of the parameter passed in is greater than 3 Will return to True, On the contrary, it will return False We defined filter object filtered, among filter() The first argument accepted is the function object , The second parameter is the list object, and eventually we will filter Object to list , In the end, I got the experience filter_three After filtration original_list The elements left in . Allied , We can also use list derivation to filter list elements , As an elegant way to generate and modify lists , Here's how to do the same thing using list derivation : original_list = [1,2,3,4,5]2 filtered_list = [ number for number in original_list if number > 3]# In the process of list derivation, conditional judgment is introduced print(filtered_list) #[4,5] Copy code 10. Modify the list map() Use Python The built-in map() Functions allow us to apply a function to every element in an iteratable object . For example, we want to get the square of every element in a list object , You can use map() function , Like the following example : original_list = [1,2,3,4,5] def square( number): return number **2 squares =map(square, original_list) squares_list = list( squares) print(squares_list) #[1,4,9,16,25] Copy code similar filter() Working process of , Now let's see what happened : First, we define lists original_list, And a function that takes a numeric parameter and returns its square value square() And then we define map object squares, similar filter(),map() The first argument accepted is the function object , The second parameter is the list object, and eventually we will map object squares List , You get the result you want . In the same way, we can do the same thing with list derivation : original_list = [1,2,3,4,5] squares_list = [number ** 2for number in original_list] print(squares_list) #[1,4,9, 16,25] Copy code 11. utilize zip() To combine lists In some cases we need to combine two or more lists together , This kind of requirement uses zip() It's very convenient to do it . zip() Function receives multiple lists as arguments , And then we get the one-to-one combination of elements in each position , Like the following example : numbers = [ 1,2,3] letters = [ 'a', 'b', 'c'] combined = zip(numbers,letters) combined_list = list( combined) print(combined_list) for item in zip( numbers,letters ): print(item[0], '\t', item[1]) #[(1,'a'),(2,'b'),(3, 'c')] #1 a #2 b #3 c Copy code 12. Reverse the list Python The list in is an ordered data structure , Because of this , The order of the elements in the list is important , Sometimes we need to flip the order of all the elements in the list , Can pass Python Slice operations in , use ::-1 To quickly implement : original_list = [1,2,3,4,5] reversed_list = original_list[ : : -1] print(' Before turning over : ', original_list) print(' After flipping :', reversed_list) # Before turning over :[ 1,2,3,4,5] # After flipping :[5,4,3,2,1] Copy code 13. Check the existence of elements in the list In some cases, we want to check whether there is an element in the list , You can use it at this time Python Medium in Operator , For example, we have a list of the names of all the winning teams in the game , When we want to find out if a team name has won , It can be like the following example : games = [ 'Yankees ', 'Yankees ', 'Cubs ', 'Blue Jays ', 'Giants '] def isin(item,list_name) : if item in list_name: print(f"{item} is in the list! ") else: print(f"{item} is not in the list! ") isin( 'Blue Jays ' , games) isin( ' Angels', games) #Blue Jays is in the list! #Angels is not in the list! Copy code 14. Flatten nested lists In some cases we come across nested lists , Each element is a different list , In this case, we can use the list derivation to flatten the nested list , As below 2 An example of nesting layers : nested_list = [[1,2,3],[4,5,6],[7,8,9]] flat_list = [i for j in nested_list for i in j] print(flat_list) #[1,2,3,4,5,6,7,8,9] Copy code Add extra : Only two nested lists are considered here , If it's more layers nested , You need to write as many layers as you need for loop , More trouble , In fact, there is a better way , We can use pip install dm-tree To install tree This library is dedicated to flattening nested structures , You can flatten nested lists at any level , Examples are as follows : import tree nested_list_2d = [[1,2,3],[4,5,6],[7,8,9]] nested_list_3d = [[[1,2],[3,4]], [[5,6],[7,8]], [[9,10],[11,12]]] print(tree.flatten(nested_list_2d)) print(tree.flatten(nested_list_3d)) #[1,2,3,4,5,6,7,8,9] #[1,2,3,4,5,6,7,,8, 9, 10, 11,12] Copy code 15. Check uniqueness If you want to see if all the values in the list are unique , have access to Python Medium set The characteristics of data structure , For example, the following example : list1 = [ 1,2,3,4,5] list2 = [1,1,2,3,4] def isunique( 1): if len(l) == len(set(l)) : print( only ! ') eise: print((' Not only —! ') isunique( list1) isunique(list2) # only —! # Not only —! Copy code Look forward to your third company ( give the thumbs-up , Collection , Comment on ), Your support is the driving force of my continuous output , thank . ① Interesting source code and learning experience ② Tool installation package , Install video ③ More convenient than books to read anytime, anywhere 300 This ebook ④ Answer questions professionally author[Yunyun yyds]
https://en.pythonmana.com/2022/01/202201301123281541.html
CC-MAIN-2022-27
refinedweb
2,169
53.04
Object Scoping Place the DSL script so that bare references will resolve to a single object. Nested Function and (to an extent) Function Sequence may provide a nice DSL syntax, but in their basic forms they come with a serious cost: global functions and (worse) global state. Object Scoping alleviates these problems by resolving all bare calls to a single object and this avoids cluttering the global namespace with global functions, allowing you to store any parsing data within this host object. The most common way to do this is to write the DSL script inside a subclass of a builder that defines the functions—this allows the parsing data to be captured in that one object. For more details see chapter 36 of the DSL book | Catalog of DSL patterns |
https://martinfowler.com/dslCatalog/objectScoping.html
CC-MAIN-2017-04
refinedweb
131
62.01
In Java there's keyword named this. It can be used inside methods (and Constructors). The value of this is the reference to the current object. class C3 { int x = 1; C3 me () {return this;} } public class T4 { public static void main(String[] args) { C3 c3 = new C3(); System.out.println( c3.x ); // prints 1 System.out.println( c3.me().x ); // same as above System.out.println( c3.me().me().x ); // same as above } } In the above example, the method “me” returns “this”. So, c3.me() is equivalent to the object c3 itself. Therefore, c3.x and c3.me().x and c3.me().me().x are all the same. One common use of this is to refer to current class's variable this.var name or method this.method name(…). class OneNumber { int n; void setValue (int n) {this.n=n;}; } public class Thatt { public static void main(String[] args) { OneNumber x = new OneNumber(); x.setValue(3); System.out.println( x.n ); } } In the above example, the method “setValue” tries to set class variable “n” to the value of the method's argument also named “n”. Because the name n is already used in the parameter name, so n=n is absurd. The workaround is to use the “this” keyword to refer to the object. So, this.n is the class variable n, and the second n in this.n=n is the method's argument. Another common use of this is to call constructor. class BB { int x; BB (int n) {this.x = n;} BB () {this(1);} } public class AA { public static void main(String[] args) { BB bb = new BB(); System.out.println( bb.x ); } } Another practical example of using “this” is when you need to pass your current object to another method. Example: class B { int n; void setMe (int m) { C h = new C(); h.setValue(this, m); }; } class C { void setValue (B obj, int h) {obj.n=h;}; } public class A { public static void main(String[] args) { B x = new B(); x.setMe(3); System.out.println( x.n ); } } In the above example, B has a member variable n. It has a method setMe. This method calls another class method and passing itself as a object. There is also a “super” keyword used to refer to the parent class. See “super” keyword.
http://xahlee.info/java-a-day/this.html
CC-MAIN-2015-27
refinedweb
386
77.84
Matrix Transforms - PDF for offline use - - Sample Code: - - Related APIs: - Let us know how you feel about this Translation Quality 0/250 last updated: 2017-04 Dive deeper into SkiaSharp transforms with the versatile transform matrix All the transforms applied to the SKCanvas object are consolidated in a single instance of the SKMatrix structure. This is a standard 3-by-3 transform matrix similar to those in all modern 2D graphics systems. As you've seen, you can use transforms in SkiaSharp without knowing about the transform matrix, but the transform matrix is important from a theoretical perspective, and it is crucial when using transforms to modify paths or for handling complex touch input, both of which are demonstrated in this article and the next. The current transform matrix applied to the SKCanvas is available at any time by accessing the read-only TotalMatrix property. You can set a new transform matrix using the SetMatrix method, and you can restore that transform matrix to default values by calling ResetMatrix. The only other SKCanvas member that directly works with the canvas's matrix transform is Concat which concatenates two matrices by multiplying them together. The default transform matrix is the identity matrix and consists of 1's in the diagonal cells and 0's everywhere else: | 1 0 0 | | 0 1 0 | | 0 0 1 | You can create an identity matrix using the static SKMatrix.MakeIdentity method: SKMatrix matrix = SKMatrix.MakeIdentity(); The SKMatrix default constructor does not return an identity matrix. It returns a matrix with all of the cells set to zero. Do not use the SKMatrix constructor unless you plan to set those cells manually. When SkiaSharp renders a graphical object, each point (x, y) is effectively converted to a 1-by-3 matrix with a 1 in the third column: | x y 1 | This 1-by-3 matrix represents a three-dimensional point with the Z coordinate set to 1. There are mathematical reasons (discussed later) why a two-dimensional matrix transform requires working in three dimensions. You can think of this 1-by-3 matrix as representing a point in a 3D coordinate system, but always on the 2D plane where Z equals 1. This 1-by-3 matrix is then multiplied by the transform matrix, and the result is the point rendered on the canvas: | 1 0 0 | | x y 1 | × | 0 1 0 | = | x' y' z' | | 0 0 1 | Using standard matrix multiplication, the converted points are as follows: x' = x y' = y z' = 1 That's the default transform. When the Translate method is called on the SKCanvas object, the tx and ty arguments to the Translate method become the first two cells in the third row of the transform matrix: | 1 0 0 | | 0 1 0 | | tx ty 1 | The multiplication is now as follows: | 1 0 0 | | x y 1 | × | 0 1 0 | = | x' y' z' | | tx ty 1 | Here are the transform formulas: x' = x + tx y' = y + ty Scaling factors have a default value of 1. When you call the Scale method on a new SKCanvas object, the resultant transform matrix contains the sx and sy arguments in the diagonal cells: | sx 0 0 | | x y 1 | × | 0 sy 0 | = | x' y' z' | | 0 0 1 | The transform formulas are as follows: x' = sx · x y' = sy · y The transform matrix after calling Skew contains the two arguments in the matrix cells adjacent to the scaling factors: │ 1 ySkew 0 │ | x y 1 | × │ xSkew 1 0 │ = | x' y' z' | │ 0 0 1 │ The transform formulas are: x' = x + xSkew · y y' = ySkew · x + y For a call to RotateDegrees or RotateRadians for an angle of α, the transform matrix is as follows: │ cos(α) sin(α) 0 │ | x y 1 | × │ –sin(α) cos(α) 0 │ = | x' y' z' | │ 0 0 1 │ Here are the transform formulas: x' = cos(α) · x - sin(α) · y y' = sin(α) · x - cos(α) · y When α is 0 degrees, it's the identity matrix. When α is 180 degrees, the transform matrix is as follows: | –1 0 0 | | 0 –1 0 | | 0 0 1 | A 180 degree rotation is equivalent to flipping an object horizontally and vertically, which is also accomplished by setting scale factors of –1. All these types of transforms are classified as affine transforms. Affine transforms never involve the third column of the matrix, which remains at the default values of 0, 0, and 1. The article Non-Affine Transforms discusses non-affine transforms. Matrix Multiplication One big advantage with using the transform matrix is that composite transforms can be obtained by matrix multiplication, which is often referred to in the SkiaSharp documentation as concatenation. Many of the transform-related methods in SKCanvas refer to "pre-concatenation" or "pre-concat." This refers to the order of multiplication, which is important because matrix multiplication is not commutative. For example, the documentation for the Translate method says that it "Pre-concats the current matrix with the specified translation," while the documentation for the Scale method says that it "Pre-concats the current matrix with the specified scale." This means that the transform specified by the method call is the multiplier (the left-hand operand) and the current transform matrix is the multiplicand (the right-hand operand). Suppose that Translate is called followed by Scale: canvas.Translate(tx, ty); canvas.Scale(sx, sy); The Scale transform is multiplied by the Translate transform for the composite transform matrix: | sx 0 0 | | 1 0 0 | | sx 0 0 | | 0 sy 0 | × | 0 1 0 | = | 0 sy 0 | | 0 0 1 | | tx ty 1 | | tx ty 1 | Scale could be called before Translate like this: canvas.Scale(sx, sy); canvas.Translate(tx, ty); In that case, the order of the multiplication is reversed, and the scaling factors are effectively applied to the translation factors: | 1 0 0 | | sx 0 0 | | sx 0 0 | | 0 1 0 | × | 0 sy 0 | = | 0 sy 0 | | tx ty 1 | | 0 0 1 | | tx·sx ty·sy 1 | Here is the Scale method with a pivot point: canvas.Scale(sx, sy, px, py); This is equivalent to the following translate and scale calls: canvas.Translate(px, py); canvas.Scale(sx, sy); canvas.Translate(–px, –py); The three transform matrices are multiplied in reverse order from how the methods appear in code: | 1 0 0 | | sx 0 0 | | 1 0 0 | | sx 0 0 | | 0 1 0 | × | 0 sy 0 | × | 0 1 0 | = | 0 sy 0 | | –px –py 1 | | 0 0 1 | | px py 1 | | px–px·sx py–py·sy 1 | The SKMatrix Structure The SKMatrix structure defines nine read/write properties of type float corresponding to the nine cells of the transform matrix: │ ScaleX SkewY Persp0 │ │ SkewX ScaleY Persp1 │ │ TransX TransY Persp2 │ SKMatrix also defines a property named Values of type float[]. This property can be used to set or obtain the nine values in one shot in the order ScaleX, SkewX, TransX, SkewY, ScaleY, TransY, Persp0, Persp1, and Persp2. The Persp0, Persp1, and Persp2 cells are discussed in the article, Non-Affine Transforms. If these cells have their default values of 0, 0, and 1, then the transform is multiplied by a coordinate point like this: │ ScaleX SkewY 0 │ | x y 1 | × │ SkewX ScaleY 0 │ = | x' y' z' | │ TransX TransY 1 │ x' = ScaleX · x + SkewX · y + TransX y' = SkewX · x + ScaleY · y + TransY z' = 1 This is the complete two-dimensional affine transform. The affine transform preserves parallel lines, which means that a rectangle is never transformed into anything other than a parallelogram. The SKMatrix structure defines several static methods to create SKMatrix values. These all return SKMatrix values: MakeTranslation MakeScale MakeScalewith a pivot point MakeRotationfor an angle in radians MakeRotationfor an angle in radians with a pivot point MakeRotationDegrees MakeRotationDegreeswith a pivot point MakeSkew SKMatrix also defines several static methods that concatenate two matrices, which means to multiply them. These methods are named Concat, PostConcat, and PreConcat, and there are two versions of each. These methods have no return values; instead, they reference existing SKMatrix values through ref arguments. In the following example, A, B, and R (for "result") are all SKMatrix values. The two Concat methods are called like this: SKMatrix.Concat(ref R, A, B); SKMatrix.Concat(ref R, ref A, ref B); These perform the following multiplication: R = B × A The other methods have only two parameters. The first parameter is modified, and on return from the method call, contains the product of the two matrices. The two PostConcat methods are called like this: SKMatrix.PostConcat(ref A, B); SKMatrix.PostConcat(ref A, ref B); These calls perform the following operation: A = A × B The two PreConcat methods are similar: SKMatrix.PreConcat(ref A, B); SKMatrix.PreConcat(ref A, ref B); These calls perform the following operation: A = B × A The versions of these method calls with all ref arguments are slightly more efficient in calling the underlying implementations, but it might be confusing to someone reading your code and assuming that anything with a ref argument is modified by the method. Moreover, it's often convenient to pass an argument that is a result of one of the Make methods, for example: SKMatrix result; SKMatrix.Concat(result, SKMatrix.MakeTranslation(100, 100), SKMatrix.MakeScale(3, 3)); This creates the following matrix: │ 3 0 0 │ │ 0 3 0 │ │ 100 100 1 │ This is the scale transform multiplied by the translate transform. In this particular case, the SKMatrix structure provides a shortcut with a method named SetScaleTranslate: SKMatrix R = new SKMatrix(); R.SetScaleTranslate(3, 3, 100, 100); This is one of the few times when it's safe to use the SKMatrix constructor. The SetScaleTranslate method sets all nine cells of the matrix. It is also safe to use the SKMatrix constructor with the static Rotate and RotateDegrees methods: SKMatrix R = new SKMatrix(); SKMatrix.Rotate(ref R, radians); SKMatrix.Rotate(ref R, radians, px, py); SKMatrix.RotateDegrees(ref R, degrees); SKMatrix.RotateDegrees(ref R, degrees, px, py); These methods do not concatenate a rotate transform to an existing transform. The methods set all the cells of the matrix. They are functionally identical to the MakeRotation and MakeRotationDegrees methods except that they don't instantiate the SKMatrix value. Suppose you have an SKPath object that you want to display, but you would prefer that it have a somewhat different orientation, or a different center point. You can modify all the coordinates of that path by calling the Transform method of SKPath with an SKMatrix argument. The Path Transform page demonstrates how to do this. The PathTransform class references the HendecagramPath object in a field but uses its constructor to apply a transform to that path: public class PathTransformPage : ContentPage { SKPath transformedPath = HendecagramArrayPage.HendecagramPath; public PathTransformPage() { Title = "Path Transform"; SKCanvasView canvasView = new SKCanvasView(); canvasView.PaintSurface += OnCanvasViewPaintSurface; Content = canvasView; SKMatrix matrix = SKMatrix.MakeScale(3, 3); SKMatrix.PostConcat(ref matrix, SKMatrix.MakeRotationDegrees(360f / 22)); SKMatrix.PostConcat(ref matrix, SKMatrix.MakeTranslation(300, 300)); transformedPath.Transform(matrix); } ... } The HendecagramPath object has a center at (0, 0), and the eleven points of the star extend outward from that center by 100 units in all directions. This means that the path has both positive and negative coordinates. The Path Transform page prefers to work with a star three times as large, and with all positive coordinates. Moreover, it doesn't want one point of the star to point straight up. It wants instead for one point of the star to point straight down. (Because the star has eleven points, it can't have both.) This requires rotating the star by 360 degrees divided by 22. The constructor builds an SKMatrix object from three separate transforms using the PostConcat method with the following pattern, where A, B, and C are instances of SKMatrix: SKMatrix matrix = A; SKMatrix.PostConcat(ref A, B); SKMatrix.PostConcat(ref A, C); This is a series of successive multiplications, so the result is as follows: A × B × C The consecutive multiplications aid in understanding what each transform does. The scale transform increases the size of the path coordinates by a factor of 3, so the coordinates range from –300 to 300. The rotate transform rotates the star around its origin. The translate transform then shifts it by 300 pixels right and down, so all the coordinates become positive. There are other sequences that produce the same matrix. Here's another one: SKMatrix matrix = SKMatrix.MakeRotationDegrees(360f / 22); SKMatrix.PostConcat(ref matrix, SKMatrix.MakeTranslation(100, 100)); SKMatrix.PostConcat(ref matrix, SKMatrix.MakeScale(3, 3)); This rotates the path around its center first, and then translates it 100 pixels to the right and down so all the coordinates are positive. The star is then increased in size relative to its new upper-left corner, which is the point (0, 0). The PaintSurface handler can simply render this path: public class PathTransformPage : ContentPage { ... void OnCanvasViewPaintSurface(object sender, SKPaintSurfaceEventArgs args) { SKImageInfo info = args.Info; SKSurface surface = args.Surface; SKCanvas canvas = surface.Canvas; canvas.Clear(); using (SKPaint paint = new SKPaint()) { paint.Style = SKPaintStyle.Stroke; paint.Color = SKColors.Magenta; paint.StrokeWidth = 5; canvas.DrawPath(transformedPath, paint); } } } It appears in the upper-left corner of the canvas: The constructor of this program applies the matrix to the path with the following call: transformedPath.Transform(matrix); The path does not retain this matrix as a property. Instead, it applies the transform to all of the coordinates of the path. If Transform is called again, the transform is applied again, and the only way you can go back is by applying another matrix that undoes the transform. Fortunately, the SKMatrix structure defines a TryInverse method that obtains the matrix that reverses a given matrix: SKMatrix inverse; bool success = matrix.TryInverse(out inverse); The method is called TryInverse because not all matrices are invertible, but a non-invertible matrix is not likely to be used for a graphics transform. You can also apply a matrix transform to an SKPoint value, an array of points, an SKRect, or even just a single number within your program. The SKMatrix structure supports these operations with a collection of methods that begin with the word Map, such as these: SKPoint transformedPoint = matrix.MapPoint(point); SKPoint transformedPoint = matrix.MapPoint(x, y); SKPoint[] transformedPoints = matrix.MapPoints(pointArray); float transformedValue = matrix.MapRadius(floatValue); SKRect transformedRect = matrix.MapRect(rect); If you use that last method, keep in mind that the SKRect structure is not capable of representing a rotated rectangle. The method only makes sense for an SKMatrix value representing translation and scaling. Interactive Experimentation One way to get a feel for the affine transform is by interactively moving three corners of a bitmap around the screen and seeing what transform results. This is the idea behind the Show Affine Matrix page. This page requires two other classes that are also used in other demonstrations: The TouchPoint class displays a translucent circle that can be dragged around the screen. TouchPoint requires that an SKCanvasView or an element that is a parent of an SKCanvasView have the TouchEffect attached. Set the Capture property to true. In the TouchAction event handler, the program must call the ProcessTouchEvent method in TouchPoint for each TouchPoint instance. The method returns true if the touch event resulted in the touch point moving. Also, the PaintSurface handler must call the Paint method in each TouchPoint instance, passing to it the SKCanvas object. TouchPoint demonstrates a common way that a SkiaSharp visual can be encapsulated in a separate class. The class can define properties for specifying characteristics of the visual, and a method named Paint with an SKCanvas argument can render it. The Center property of TouchPoint indicates the location of the object. This property can be set to initialize the location; the property changes when the user drags the circle around the canvas. The Show Affine Matrix Page also requires the MatrixDisplay class. This class displays the cells of an SKMatrix object. It has two public methods: Measure to obtain the dimensions of the rendered matrix, and Paint to display it. The class contains a MatrixPaint property of type SKPaint that can be replaced for a different font size or color. The ShowAffineMatrixPage.xaml file instantiates the SKCanvasView and attaches a TouchEffect. The ShowAffineMatrixPage.xaml.cs code-behind file creates three TouchPoint objects and then sets them to positions corresponding to three corners of a bitmap that it loads from an embedded resource: public partial class ShowAffineMatrixPage : ContentPage { SKMatrix matrix; SKBitmap bitmap; SKSize bitmapSize; TouchPoint[] touchPoints = new TouchPoint[3]; MatrixDisplay matrixDisplay = new MatrixDisplay(); public ShowAffineMatrix); } touchPoints[0] = new TouchPoint(100, 100); // upper-left corner touchPoints[1] = new TouchPoint(bitmap.Width + 100, 100); // upper-right corner touchPoints[2] = new TouchPoint(100, bitmap.Height + 100); // lower-left corner bitmapSize = new SKSize(bitmap.Width, bitmap.Height); matrix = ComputeMatrix(bitmapSize, touchPoints[0].Center, touchPoints[1].Center, touchPoints[2].Center); } ... } An affine matrix is uniquely defined by three points. The three TouchPoint objects correspond to the upper-left, upper-right, and lower-left corners of the bitmap. Because an affine matrix is only capable of transforming a rectangle into a parallelogram, the fourth point is implied by the other three. The constructor concludes with a call to ComputeMatrix, which calculates the cells of an SKMatrix object from these three points. The TouchAction handler calls the ProcessTouchEvent method of each TouchPoint. The scale value converts from Xamarin.Forms coordinates to pixels: public partial class ShowAffineMatrixPage : ContentPage { ... void OnTouchEffectAction(object sender, TouchActionEventArgs args) { bool touchPointMoved = false; foreach (TouchPoint touchPoint in touchPoints) { float scale = canvasView.CanvasSize.Width / (float)canvasView.Width; SKPoint point = new SKPoint(scale * (float)args.Location.X, scale * (float)args.Location.Y); touchPointMoved |= touchPoint.ProcessTouchEvent(args.Id, args.Type, point); } if (touchPointMoved) { matrix = ComputeMatrix(bitmapSize, touchPoints[0].Center, touchPoints[1].Center, touchPoints[2].Center); canvasView.InvalidateSurface(); } } ... } If any TouchPoint has moved, then the method calls ComputeMatrix again and invalidates the surface. The ComputeMatrix method determines the matrix implied by those three points. The matrix called A transforms a one-pixel square rectangle into a parallelogram based on the three points, while the scale transform called S scales the bitmap to a one-pixel square rectangle. The composite matrix is S × A: public partial class ShowAffineMatrixPage : ContentPage { ... static SKMatrix ComputeMatrix(SKSize size, SKPoint ptUL, SKPoint ptUR, SKPoint ptLL) { // }; SKMatrix result; SKMatrix.Concat(ref result, A, S); return result; } ... } Finally, the PaintSurface method renders the bitmap based on that matrix, displays the matrix at the bottom of the screen, and renders the touch points at the three corners of the bitmap: public partial class ShowAffineMatrixPage : ContentPage { ... void OnCanvasViewPaintSurface(object sender, SKPaintSurfaceEventArgs args) { SKImageInfo info = args.Info; SKSurface surface = args.Surface; SKCanvas canvas = surface.Canvas; canvas.Clear(); // Display the bitmap using the matrix canvas.Save(); canvas.SetMatrix(matrix); canvas.DrawBitmap(bitmap, 0, 0); canvas.Restore(); // Display the matrix in the lower-right corner SKSize matrixSize = matrixDisplay.Measure(matrix); matrixDisplay.Paint(canvas, matrix, new SKPoint(info.Width - matrixSize.Width, info.Height - matrixSize.Height)); // Display the touchpoints foreach (TouchPoint touchPoint in touchPoints) { touchPoint.Paint(canvas); } } } The iOS screen below shows the bitmap when the page is first loaded, while the two other screens show it after some manipulation: Although it seems as if the touch points drag the corners of the bitmap, that's only an illusion. The matrix calculated from the touch points transforms the bitmap so that the corners coincide with the touch points. It is more natural for users to move, resize, and rotate bitmaps not by dragging the corners, but by using one or two fingers directly on the object to drag, pinch, and rotate. This is covered in the next article Touch Manipulation. The Reason for the 3-by-3 Matrix It might be expected that a two-dimensional graphics system would require only a 2-by-2 transform matrix: │ ScaleX SkewY │ | x y | × │ │ = | x' y' | │ SkewX ScaleY │ This works for scaling, rotation, and even skewing, but it is not capable of the most basic of transforms, which is translation. The problem is that the 2-by-2 matrix represents a linear transform in two dimensions. A linear transform preserves some basic arithmetic operations, but one of the implications is that a linear transform never alters the point (0, 0). A linear transform makes translation impossible. In three dimensions, a linear transform matrix looks like this: │ ScaleX SkewYX SkewZX │ | x y z | × │ SkewXY ScaleY SkewZY │ = | x' y' z' | │ SkewXZ SkewYZ ScaleZ │ The cell labeled SkewXY means that the value skews the X coordinate based on values of Y; the cell SkewXZ means that the value skews the X coordinate based on values of Z; and values skew similarly for the other Skew cells. It's possible to restrict this 3D transform matrix to a two-dimensional plane by setting SkewZX and SkewZY to 0, and ScaleZ to 1: │ ScaleX SkewYX 0 │ | x y z | × │ SkewXY ScaleY 0 │ = | x' y' z' | │ SkewXZ SkewYZ 1 │ If the two-dimensional graphics are drawn entirely on the plane in 3D space where Z equals 1, the transform multiplication looks like this: │ ScaleX SkewYX 0 │ | x y 1 | × │ SkewXY ScaleY 0 │ = | x' y' 1 | │ SkewXZ SkewYZ 1 │ Everything stays on the two-dimensional plane where Z equals 1, but the SkewXZ and SkewYZ cells effectively become two-dimensional translation factors. This is how a three-dimensional linear transform serves as a two-dimensional non-linear transform. (By analogy, transforms in 3D graphics are based on a 4-by-4 matrix.) The SKMatrix structure in SkiaSharp defines properties for that third row: │ ScaleX SkewY Persp0 │ | x y 1 | × │ SkewX ScaleY Persp1 │ = | x' y' z` | │ TransX TransY Persp2 │ Non-zero values of Persp0 and Persp1 result in transforms that move objects off the two-dimensional plane where Z equals 1. What happens when those objects are moved back to that plane is covered in the article on Non-Affine Transforms..
https://developer.xamarin.com/guides/xamarin-forms/advanced/skiasharp/transforms/matrix/
CC-MAIN-2017-30
refinedweb
3,639
52.39
Dependency Injection: A Beginner's Guide This 'res:"); } } Then, we can construct a small program which creates a container, maps the types, and then queries for a service. Again a simple compact example but imagine what this would look like in a much larger application:"); } } () Once the above has been installed, let's begin. First, open up VS2010 and create a new MVC 3 Web Application (I've called mine MvcNinjectExample): After this screen you will get a dialog asking you to choose an Empty application or an Internet Application - just choose 'Empty'. Next, once the project has loaded, right-click 'References' in solution explorer and choose âAdd library package reference.. (this is Nuget in action): On the left, choose 'Online' then search for Ninject. In the search results, you should be able to see 'Ninject.Mvc3'. Select this and click the âInstallâ button: This will download Ninject and all the other things it needs in order to work, including the WebActivator library, which gives us a place to create our dependencies. Once everything has installed, look for the 'AppStart_NinjectMVC3.cs' file which has now appeared in your solution and open it: namespace MvcNinjectExample {)); } } }. Next, add a new folder called 'Logging'. Head back to your 'AppStart_NinjectMVC3.cs' file and set up the binding for this class: public static void RegisterServices(IKernel kernel) { kernel.Bind<ILogger>().To<TextFileLogger>(); } Finally, lets create a controller which uses it. Right-click on the 'Controllers' folder in Solution Explorer, select 'Add >' and choose the 'Controller'.
https://stevescodingblog.co.uk/dependency-injection-beginners-guide/
CC-MAIN-2017-17
refinedweb
250
54.32
The calendar control is part of the dot net web controls collection. Its in the namespace System.Web.UI.WebControls. It provides an efficient mechanism for the user to to select dates in the desired format.But one of the functionality that we require most of the time, as part of validation is to disable a range of dates. Mostly we weould like to disable past dates so that user wont select them. This can be done with a little bit of tweaking. I will explain this in my article. The DayRender event : The most important event for us in this scenario, is the OnDayRender event. This event fires when the calendar is first rendered on the screen. This happens as a loop, wherein it continuously renders each day, one a time. We have to write our logic in the OnDayRender event hanlder, to disable rendering when the day matches our criteria. The Code: In my code, I have a calendar control. I want to block it for all past days as well as next 7 days. I set this value in a local variable,nDaysToBlock. As you can see from the code, all the action happens in the DayRender event handler. <code> protected void myDayRenderMethod(object sener, DayRenderEventArgs e) { if (e.Day.Date < (System.DateTime.Now.AddDays(_nDaysToBlock))) e.Day.IsSelectable = false; e.Cell.Font.Strikeout = true; } </code> Here, iam checking if the day being rendered is less than the current day plus number of days to block. i.e, is it less than 7 days from now. In that case, we want to block such a day from being rendered, which is done by making e.Day.IsSelectable as false; That is all there is to it. You have the desired dates disabled. You can change the font to strike out, so as to easily distinguish them. The calendar will look as below at runtime, with selected dates disabled. :<lock aspectratio="t" v:<shape id="_x0000_i1026" style="WIDTH: 246pt; HEIGHT: 192pt" type="#_x0000_t75"><imagedata src=":\DOCUME~1\PRASHA~1.RAY\LOCALS~1\Temp\msohtml1\01\clip_image001.png"><group id="_x0000_s1026" style="WIDTH: 6in; HEIGHT: 252pt" coordsize="7200,4320" coordorigin="2527,847" editas="canvas"><lock aspectratio="t" v:<shape id="_x0000_s1027" style="LEFT: 2527px; WIDTH: 7200px; POSITION: absolute; TOP: 847px; HEIGHT: 4320px" o:<fill o:<path o:<lock v:<wrap type="none"><anchorlock> Hope my code snippet was of help to you. Thanks. This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below. A list of licenses authors might use can be found here clip_image002.jpg .
https://www.codeproject.com/Articles/20003/How-to-Disable-Selected-Dates-dynamically-from-the?pageflow=FixedWidth&fid=448251&df=90&mpp=25&sort=Position&view=Normal&spc=Relaxed&prof=True
CC-MAIN-2021-04
refinedweb
452
66.13
I have worked with Elm last year and honestly it was an great experience. I really liked it and I would like to use Elm in all my new projects. But I also like Meteor and three weeks ago I started using it again. I discovered Meteor in 2015, I gave a talk about it and it still feels like "Wow!" when you start a project. So in this post I will explain how I make the two work together and how to add Tailwindcss for the UI part (because I love Tailwindcss too 😍), from scratch. I will also explain how to link an existing project to Meteor with a Todo app. Prerequisite You only need two things: - Your favourite IDE (VSCode with the Elm extension fits very well) - A browser You don't need to install NPM, Node or Mongo. All this stuff is already packaged in Meteor environment. So to run a NPM command we will use meteor npm xxx, and for Mongo, we will use meteor mongo. So let's begin 😁 Meteor Install Meteor Here are the commands to install Meteor on OSX/Linux and Windows OSX / Linux # OSX/Linux curl | sh # Windows choco install meteor For more information about the installation, please refer to the official documentation here Create a project Let's start by creating an empty project meteor create meteor-elm-app --bare cd meteor-elm-app This --bare option will create an empty project with static-html instead of blaze and without autopublish and insecure. For more information about the command create, follow the link Add typescript Adding Typescript is really simple. Just run these two commands # under meteor-elm-app meteor add typescript meteor npm i -D @types/meteor @types/mocha Create a tsconfig.json under the directory meteor-elm-app { "compilerOptions": { /* Basic Options */ "target": "es2018", "module": "esNext", "lib": ["esnext", "dom"], "allowJs": true, "checkJs": false, "incremental": false, "noEmit": true, /* Strict Type-Checking Options */ "strict": true, "noImplicitAny": true, "strictNullChecks": true, /* Additional Checks */ "noUnusedLocals": true, "noUnusedParameters": true, "noImplicitReturns": false, "noFallthroughCasesInSwitch": false, /* Module Resolution Options */ "baseUrl": ".", "paths": { /* Support absolute ~imports/* with a leading '/' */ "/*": ["*"] }, "moduleResolution": "node", "resolveJsonModule": true, "types": ["node", "mocha"], "esModuleInterop": true, "preserveSymlinks": true, }, "exclude": [ "./.meteor/**", "./packages/**" ] } This configuration comes from the typescript template provided by Meteor. I just removed the support of JSX. Create the file structure We will setup a simple file structure here. For more complex projects, you should follow the guideline provided by Meteor To initialise the file structure, run these commands # under meteor-elm-app mkdir client server imports/api touch client/main.html client/main.ts client/main.css server/main.ts Your project folder should look like this: # under meteor-elm-app ❯ tree -I node_modules . ├── client │ ├── main.css │ ├── main.html │ └── main.ts ├── imports │ └── api ├── package-lock.json ├── package.json ├── server │ └── main.ts └── tsconfig.json 4 directories, 7 files We will update the package.json file to define the main modules in our Meteor app: "meteor": { "mainModule": { "client": "client/main.ts", "server": "server/main.ts" } } At this point your package.json file should be like: { "name": "meteor-elm-app", "private": true, "scripts": { "start": "meteor run" }, "meteor": { "mainModule": { "client": "client/main.ts", "server": "server/main.ts" } }, "dependencies": { "@babel/runtime": "^7.8.3", "meteor-node-stubs": "^1.0.0" } } If you need more informations about this mainModule options, you can read the content of this pull request We now need to add some basic content to the main.html file: <head> <title>meteor-elm-app</title> </head> <body> <div id="main">Elm app will be here</div> </body> Checkpoint Lets check if everything is OK before starting with Elm. Start your meteor server: # under meteor-elm-app meteor Open on your favorite browser You should see this: Elm Install Parcel We will use Parcel to build our Elm application and we will use the result of this build in our Meteor application To install Parcel, run this command meteor npm i -D parcel Create a Meteor package This Meteor package will contain our Elm application and we will use this package inside the Meteor application. We use a Package because it allows us to isolate our Elm application from the rest of the Meteor context. It is also really useful if we want to remove our Elm application or if one we don't want to use Meteor anymore. Let's start by creating some folders: mkdir -p packages/elm-app/{app,dist} The app folder will contain the sources of our Elm application (Elm, TS and CSS files). The dist folder will contain the result of the build made by Parcel. Because we will build with Parcel and not with Meteor, we will create a new file at the root of the meteor-elm-app called .meteorignore #under meteor-elm-app touch .meteorignore Then add this line inside this new file: /packages/elm-app/app/**/* Because we don't want to push the dist and the elm-stuff folders on our repository, we will add them in the .gitignore located under the folder meteor-elm-app dist elm-stuff Now, let's create a package.js file in our package: #under meteor-elm-app/packages/elm-app touch package.js And add the following content in this file: Package.describe({ name: 'elm-app', version: '1.0.0', summary: 'elm app', documentation: 'add your elm app into meteor', }); Package.onUse(function (api) { api.versionsFrom('1.10.2'); api.use('modules'); api.addFiles('dist/elm-app.css', 'client'); api.mainModule('dist/elm-app.js', 'client'); }); Package.describe says that our package: - is called elm-app, - is in version 1.0.0 Package.onUse says that our package: - is implemented to be use with Meteor 1.10.2, - uses the modulespackage, so we will be able to use import {} from '', - will add the dist/elm-app.cssfile in the client when it will be loaded, - have a main js file for this package called dist/elm-app.js. If you are using elm-css and if you don't need specific css classes in your app, you can remove api.addFiles('dist/elm-app.css', 'client'); from the package.js file. For more informations about the Package.js file, see Create the app We will create our Elm application under the folder packages/elm-app/app. meteor npm i -D elm elm-format Elm-format is not mandatory but you should use it with your IDE to format on save and to avoid problem at compile time Then we will initialise our app with the following command: #under meteor-elm-app/packages/elm-app/app meteor npx elm init Validate the creation of the elm.json file and we are good 👍. At this step, your folder should be like this: #under meteor-elm-app ❯ tree -I 'node_modules|.meteor' -a . ├── .gitignore ├── .meteorignore ├── client │ ├── main.css │ ├── main.html │ └── main.ts ├── imports │ └── api ├── package-lock.json ├── package.json ├── packages │ └── elm-app │ ├── app │ │ ├── elm.json │ │ └── src │ ├── dist │ └── package.js ├── server │ └── main.ts └── tsconfig.json 9 directories, 11 files In a first time, we will create a simple Elm application. Create a Main.elm file inside the folder packages/elm-app/app/src with this content: module Main exposing(main) import Browser import Html exposing (Html, text) type alias Model = String main : Program () Model msg main = Browser.element { init = init , view = view , update = update , subscriptions = subscriptions } init: () -> (Model, Cmd msg) init _ = ("Hello from Elm app", Cmd.none) view: Model -> Html msg view model = text model update: msg -> Model -> (Model, Cmd msg) update _ model = (model, Cmd.none) subscriptions : Model -> Sub msg subscriptions _ = Sub.none The CSS main file In the folder meteor-elm-app/packages/elm-app/app, create an empty main.scss SCSS file (or CSS if you prefer) that we will use later to add some style in our Elm application. NB: if you use elm-css and you don't need a stylesheet, skip this step and remove the line api.addFiles('dist/elm-app.css', 'client'); in the package.js file The Package mainModule In the folder meteor-elm-app/packages/elm-app/app, create a file index.ts that will mount our Elm application and export the ports. A simple version could be: import './main.scss' const { Elm } = require('./src/Main.elm') export const init = (configuration: any) => { const app = Elm.Main.init(configuration) return app.ports } But because we want to Type things as much as possible, let's create this index.ts like this: import './main.scss' const { Elm } = require('./src/Main.elm') interface Flags {} export interface Configuration { node: HTMLElement | null, flags: Flags } export interface Ports {} export const init: (configuration: Configuration) => Ports = (configuration) => { const app = Elm.Main.init(configuration) return app.ports } With this definition, when we will need some flags or some ports, we will add the new stuff in our interface and the client will have to implement them. If you are using CSS instead of SCSS then update the file import accordingly Build with Parcel Let's create a build script in our package.json file: "elm:build": "parcel build packages/elm-app/app/index.ts -d packages/elm-app/dist --out-file elm-app.js --no-cache", This script will build our application in a file elm-app.js (and elm-app.css) and put it in the folder packages/elm-app/dist (the one we added in our .gitignore) We can test our script #under meteor-elm-app meteor npm run elm:build If everything is ok, you should see these lines: Add our package to Meteor Now that we have a package, we have to add it in our Meteor configuration. You must have run the previous build command before adding the package because without a dist folder, you will not be able to add it. Execute this command to add the package #under meteor-elm-app meteor add elm-app You should see Post install To avoid to have to compile manually each time someone clone the repository, we will add a postinstall script in the package.json file: "postinstall": "meteor npm run elm:build", Use the Elm application in our Meteor client Now that we have our Elm application, it is time to import it in the client side of our Meteor application In the client/main.ts file, add the following code: import { init } from "meteor/elm-app"; import { Meteor } from 'meteor/meteor'; Meteor.startup(() => { const ports = init({ node: document.getElementById("main"), flags: {} }) }) In this code, we import the init function from the package meteor/elm-app which is the package we have just created (you can see it in the file .meteor/packages). Then we call it to mount our Elm application on the node document.getElementById("main") (the one we have created in the main.html file) Now, if you start your meteor application by running the meteor command, on you should see: But... The typing is not good You should see that your import is underlined in red: To fix that, we will add a declaration file: #under meteor-elm-app mkdir -p types/meteor touch types/meteor/elm-app.d.ts And add the following content declare module 'meteor/elm-app' { export const init: ( configuration: import('/packages/elm-app/app').Configuration, ) => import('/packages/elm-app/app').Ports; } Now each time we will change the definition of the type Flag or the type Port inside our Elm application, we will be sure to know if we have some stuff to fix in the Meteor client 💪. Live Reload Because we don't want to build manually our Elm application each time we make a change, we will setup the live reload We will install some packages to help us #under meteor-elm-app meteor npm i -D concurrently wait-on rimraf Then we will create an new script in our package.json file: "elm:watch": "parcel watch packages/elm-app/app/index.ts -d packages/elm-app/dist --out-file elm-app.js", With elm:watch, parcel will rebuild our app each time we make a change in Elm, TS or SCSS files under the folder packages/elm-app/app. And because parcel watch create a .cache folder, we will add it to the .gitignore file. The content of your .gitignore should be like this: node_modules/ dist elm-stuff .cache Now to run parcel and meteor in parallel, we will update the package.json file. We will rename the script start to meteor:run, and redefine the script start: "meteor:run": "meteor run", "start": "rimraf \"./packages/elm-app/dist/*\" && concurrently -n \"parcel,meteor\" -c \"magenta,green\" \"meteor npm run elm:watch\" \"wait-on ./packages/elm-app/dist/elm-app.js && meteor npm run meteor:run\"", The script start call rimraf to clean the dist folder, then we call concurrently to run two tasks: - the parcelone, that will be log in magentaand its command is meteor npm run elm:watch - the meteorone, that will be log in greenand its command is wait-on ./packages/elm-app/dist/elm-app.js && meteor npm run meteor:run(the wait-oncommand is use to wait the build from Parcel) Now each time we will change our content under packages/elm-app/app, Parcel will rebuild incrementally our application and update the content under the dist folder, so Meteor will detect a change and refresh the main application. You can now start your application by running: #under meteor-elm-app meteor npm start You can make some changes in your Main.elm file and see that everything will automatically refresh in your browser. Tailwindcss Tailwindcss is a npm package, so we will install it like this meteor npm i -D tailwindcss For more informations about Tailwindcss, see We need to initialize Tailwincss: #under meteor-elm-app/packages/elm-app/app meteor npx tailwindcss init This command will generate a file called tailwind.config.js We can now edit the file main.scss inside our app (packages/elm-app/app/main.scss) to use Tailwindcss @tailwind base; @tailwind components; @tailwind utilities; We will configure postcss to use autoprefixer and the tailwind.config.js file. #under meteor-elm-app/packages/elm-app/app touch postcss.config.js And add this content to this file const path = require("path"); module.exports = { plugins: [ require("tailwindcss")(path.join(__dirname, "tailwind.config.js")), require("autoprefixer"), ], }; We can now edit our Main.elm to add a CSS class ( text-green-500) from Tailwindcss: view: Model -> Html msg view model = div [class "text-green-500"] [text model] Then if you (re)start your server, you should see this: Congratulations 🎉! You made your first application with Elm, Meteor and Tailwindcss 👏. The Todos application It is really awesome right? What? You don't want to use Meteor just to expose static file? Hmm ok, let's go with the Todos application Because the goal of this post is not to learn how to code in Elm, we will start with an application I wrote for the occasion. This application is not linked with Meteor yet, there is no ports defined. The goal is to save each Todo in MongoDB and to be able to sync two browser. Update the Main.elm Replace the content of the Main.elm file with this gist We will need to add elm/svg: #under meteor-elm-app/packages/elm-app/app meteor npx elm install elm/svg Then start your application meteor npm start You can try the application, actually we can: - Add a Todo - Switch the status of a Todo - Filter Todos by status We will keep the filtering part in the client, but we want to: - Load Todos from MongoDB - Save new Todos in MongoDB - Switch the status and save it in MongoDB But let's start with the backend Define the Todos collection and methods Under the folder meteor-elm-app/imports/api, create a file todos.ts. In this file we will define what is a Todo, and create the collection: import { Mongo } from "meteor/mongo"; import { Meteor } from "meteor/meteor"; export interface Todo { _id?: string; value: string; status: "checked" | "unchecked"; createdAt: Date; } export const TodosCollection = new Mongo.Collection<Todo>("todos"); Then in the same file, we will add two Meteor methods, one to add a Todo and another to switch the status of Todo with its ID: Meteor.methods({ "todos.addTodo"(value: string) { if (value !## "") { TodosCollection.insert({ value, status: "unchecked", createdAt: new Date(), }); } }, "todos.toggleStatus"(todoId: string) { const todo = TodosCollection.findOne({ _id: todoId }); if (!todo) { throw new Meteor.Error("Todo not found"); } const newStatus = todo.status ### "checked" ? "unchecked" : "checked"; TodosCollection.update({ _id: todoId }, { $set: { status: newStatus } }); }, }); And at the end of the file, we will publish our collection on the server side: if (Meteor.isServer) { Meteor.publish("todos", function todos() { return TodosCollection.find({}, { sort: { createdAt: -1 } }); }); } Finally we need to import this file in the file server/main.ts: import "/imports/api/todos"; The server side in now ready. Add ports to the Elm application We will start by installing elm/json and NoRedInk/elm-json-decode-pipeline to decode our Todos: #under meteor-elm-app/packages/elm-app/app meteor npx elm install elm/json meteor npx elm install NoRedInk/elm-json-decode-pipeline So we will create 3 ports: - addTodo: port addTodo : String -> Cmd msg - toggleStatus: port toggleStatus : String -> Cmd msg - receiveTodos: port receiveTodos : (Decode.Value -> msg) -> Sub msg Let's put these port at the end of our Main.elm file: port module Main exposing(main) import Json.Decode as Decode import Json.Decode.Pipeline exposing (required) ... port addTodo : String -> Cmd msg port toggleStatus : String -> Cmd msg port receiveTodos : (Decode.Value -> msg) -> Sub msg We have to change the type of the Todo.id to use a String because of the id in Mongo: type alias Todo = { id : String , value : String , status : TodoStatus } type Msg = InputChanged String | AddTodo | ToggleStatus String -- ToggleStatus now need a String not a Int | FilterBy Filter We need a new variant ReceiveTodos (List Todo) for Msg to receive todos: type Msg = InputChanged String | AddTodo | ToggleStatus String | FilterBy Filter | ReceiveTodos (List Todo) We also change the update function because we will not update the todos list anymore. We will get the one we will receive from the port receiveTodos update : Msg -> Model -> ( Model, Cmd Msg ) update msg model = case msg of InputChanged value -> ( { model | todo = value }, Cmd.none ) AddTodo -> if String.isEmpty (String.trim model.todo) then ( model, Cmd.none ) else ( { model | todo = "" }, addTodo model.todo ) ToggleStatus todoId -> ( model, toggleStatus todoId ) FilterBy selectedFilter -> ( { model | filter = selectedFilter }, Cmd.none ) ReceiveTodos todos -> ( { model | todos = todos }, Cmd.none ) To finish with the Elm part, we need a subscription and some decoders to receive our Todos: subscriptions : Model -> Sub Msg subscriptions _ = receiveTodos (\value -> Decode.decodeValue decodeTodos value |> Result.withDefault [] |> ReceiveTodos ) decodeTodo : Decode.Decoder Todo decodeTodo = Decode.succeed Todo |> required "id" Decode.string |> required "value" Decode.string |> required "status" decodeStatus decodeStatus : Decode.Decoder TodoStatus decodeStatus = Decode.string |> Decode.andThen (\status -> case status of "checked" -> Decode.succeed Checked _ -> Decode.succeed Unchecked ) decodeTodos : Decode.Decoder (List Todo) decodeTodos = Decode.list decodeTodo If you remember, we have defined an interface Ports in the file meteor-elm-app/packages/elm-app/app/index.ts. It is time to add some definitions: interface Todo { id: string; value: string; status: "checked" | "unchecked"; } export interface Ports { addTodo?: { subscribe: (fn: (todo: string) => void) => void; }; toggleStatus?: { subscribe: (fn: (todoId: string) => void) => void; }; receiveTodos?: { send: (todos: Todo[]) => void; }; } Link ports to Meteor.methods and subscriptions We have some piece of code in Elm and some piece of code in the server side. Now we need to link them together, and we will do that in the file client/main.ts We will need to import our TodosCollection and the Meteor Tracker import { Tracker } from "meteor/tracker"; import { TodosCollection } from "/imports/api/todos"; Then we will subscribe to the output ports: ports.addTodo?.subscribe((todo) => { Meteor.call("todos.addTodo", todo, (err: Error) => { if (err) { // Maybe we should pass this error to Elm console.log("error", err); return; } }); }); ports.toggleStatus?.subscribe((todoId) => { Meteor.call("todos.toggleStatus", todoId, (err: Error) => { if (err) { // Maybe we should pass this error to Elm console.log("error", err); return; } }); }); Here each time addTodo is called from Elm, we add a new Todo with a Meteor.call, same for the toggleStatus. Of course we should manage the error, maybe it could be a good exercice 😁 Finally we need to send todos everytime the collection change. To do that, we use Tracker.autorun that will run the callback when necessary. // We use the Tracker.autorun to send todos each time the fetch result // changes Tracker.autorun(() => { // Maybe one day we will need to manage the subscription const subscription = Meteor.subscribe("todos"); const todos = TodosCollection.find({}, { sort: { createdAt: 1 } }).fetch(); ports.receiveTodos?.send( todos.map((todo) => ({ id: todo._id || "", value: todo.value, status: todo.status, })) ); }); Now you can restart your server, open two browsers on and see that everything is saved and sync 👏. Conclusion I hope you enjoyed this content as much as I enjoyed writing it. Three weeks ago I was sad because I could not use Meteor with Elm, so I started using it with React and Typescript 😳. Today, I dropped React and I use Elm again and it is really pleasant. If you liked this post, do not hesitate to share it on your favorite social networks and if you are interested by this kind of content, you can follow me on twitter @anthonny_q. If you have any feedbacks, comments are open and you can find the sources of the project here. Special thanks to ni-ko-o-kin, as I was very inspired by his post. Big thanks to Yann Danthu for the review of this post 😘. Posted on by: Anthonny Quérouil Hello 👋, I'm a freelance developer🧑💻 living in Nantes who loves to share his knowledge about development and IT. Discussion
https://dev.to/anthonny/how-i-use-meteor-elm-and-tailwindcss-together-3hel
CC-MAIN-2020-34
refinedweb
3,626
56.45
Hi. I've been trying to search pretty much everywhere for a solution to my problem. But i have'nt been able to find it. Im trying to use Sikuli for Visual studio 2017 (C#) So far i got some of it working. I can find images and i can click and send text. But i can't get things like .targetOffset() to work, it does'nt recognize the command. Btw i' m new to programming so examples are appreciated :) This is my current code: using OpenQA.Selenium; using OpenQA. using OpenQA. using Sikuli4Net. using Sikuli4Net. private void button1_ { Pattern Image1 = new Pattern( Pattern Image2 = new Pattern( var driver = new ChromeDriver( } // I want to do something similar to this // Error message: Operator '.' cannot be applied to operand of type 'void' I hope someone can tell me what i'm missing? Question information - Language: - English Edit question - Status: - Solved - For: - Sikuli Edit question - Assignee: - No assignee Edit question - Solved: - 2018-09-26 - Last query: - 2018-09-26 - Last reply: - 2018-08-16 Just a quick followup. I figured it out some time ago. So i just want to share incase anyone else had the same problem. So i switched from Sikuli4Net to SikuliSharp Below is an example of how to set offset in SikuliSharp (C#) private void button1_ { using (var session = Sikuli. { { } } } Though you are using Sikuli4Net (a wrapper for SikuliX), it makes sense to read the docs about SikuliX. start here: http:// sikulix. com targetOffset() is a method of a Pattern object.
https://answers.launchpad.net/sikuli/+question/672323
CC-MAIN-2020-10
refinedweb
252
74.08
To use precompiled headers: #includeThe first time we compile it, KCC reports that it is creating a precompile header: main() { std::cout << "Hello, world" << std::endl; } $ KCC +K0 --pch --pch_dir /foo/bar hello.C "hello.C": creating precompiled header file "/foo/bar/hello.pch"The option +K0 was used to speed up compilation by turning off optimization. The next time we compile it, KCC reports that it is using the precompiled header: $ KCC +K0 --pch --pch_dir /foo/bar hello.C "hello.C": using precompiled header file "/foo/bar/hello.pch"The second time around, the compile is much faster (typically about 2x-4x for this example). You may find that precompiled headers take up too much space. If so, read the detailed description below, which explains the tradeoffs of precompiled headers and how to reorganize your sources to reduce the disk overhead of precompiled headers. Details of Precompiled Headers It is often desirable to avoid recompiling a set of header files, especially when they introduce many lines of code and the primary source files that #include them are relatively small. KCC provides a mechanism for, in effect, taking a snapshot of the state of the compilation at a particular point and writing it to a disk file before completing the compilation; then, when recompiling the same source file or compiling another file with the same set of header files, it can recognize the ``snapshot point,'' verify that the corresponding precompiled header (``PCH'') file is reusable, and read it back in. Under the right circumstances, this can produce a dramatic improvement in compilation time; the trade-off is that PCH files can take a lot of disk space. Automatic Precompiled Header Processing When --pch appears on the command line, automatic precompiled header processing is enabled. This means that KCC will automatically look for a qualifying precompiled header file to read in and/or will create one for use on a subsequent compilation. The PCH file will contain a snapshot of all the code preceding the ``header stop'' point. The header stop point is typically the first token in the primary source file that does not belong to a preprocessing directive, but it can also be specified directly by #pragma hdrstop if that comes first. For example: #include "xxx.h" #include "yyy.h" int i;The header stop point is int (the first non-preprocessor token) and the PCH file will contain a snapshot reflecting the inclusion of xxx.h and yyy.h. If the first non-preprocessor token or the #pragma hdrstop appears within a #if block, the header stop point is the outermost enclosing #if. To illustrate, here's a more complicated example: #include "xxx.h" #ifndef YYY_H #define YYY_H 1 #include "yyy.h" #endif #if TEST int i; #endifHere, the first token that does not belong to a preprocessing directive is again int, but the header stop point is the start of the #if block containing it. The PCH file will reflect the inclusion of xxx.h and conditionally the definition of YYY_H and inclusion of yyy.h; it will not contain the state produced by #if TEST. A PCH file will be produced only if the header stop point and the code preceding it (mainly, the header files themselves) meet certain requirements: // xxx.h class A { // xxx.C #include "xxx.h" int i; }; // yyy.h static // yyy.C #include "yyy.h" int i; When a precompiled header file is produced, it contains, in addition to the snapshot of the compiler state, some information that can be checked to determine under what circumstances it can be reused. This includes: As an illustration, consider two source files: // a.C #include "xxx.h" ... // Start of code // b.C #include "xxx.h" ... // Start of codeWhen a.C is compiled with --pch, a precompiled header file named a.pch is created. Then, when b.C is compiled (or when a.C is recompiled), the prefix section of a.pch is read in for comparison with the current source file. If the command line options are identical, if xxx.h has not been modified, and so forth, then, instead of opening xxx.h and processing it line by line, the front end reads in the rest of a.pch and thereby establishes the state for the rest of the compilation. It may be that more than one PCH file is applicable to a given compilation. If so, the largest (i.e., the one representing the most preprocessing directives from the primary source file) is used. For instance, Consider a primary source file that begins with #include "xxx.h" #include "yyy.h" #include "zzz.h"If there is one PCH file for xxx.h and a second for xxx.h and yyy.h, the latter will be selected (assuming both are applicable to the current compilation). Moreover, after the PCH file for the first two headers is read in and the third is compiled, a new PCH file for all three headers may be created. When a precompiled header file is created, it takes the name of the primary source file, with the suffix replaced by an implementation-specified suffix (see PCH_FILE_SUFFIX, which is set to pch by default). Unless --pch_dir is specified (see below), it is created in the directory of the primary source file. When a precompiled header file is created or used, a message such as "test.C": creating precompiled header file "test.pch"is issued. You may suppress the message by using the command-line option --no_pch_messages. In automatic mode (i.e., when --pch is used) the front end will deem a precompiled header file obsolete and delete it under the following circumstances: Support for precompiled header processing is not available when multiple source files are specified in a single compilation: an error will be issued and the compilation aborted if the command line includes a request for precompiled header processing and specifies more than one primary source file. Manual Precompiled Header Processing Command-line option --create_pch file-name specifies that a precompiled header file of the specified name should be created. Command-line option --use_pch file-name specifies that the indicated precompiled header file should be used for this compilation; if it is invalid (i.e., if its prefix does not match the prefix for the current primary source file), a warning will be issued and the PCH file will not be used. When either of these options is used in conjunction with --pch_dir, the indicated file name (which may be a path name) is tacked on to the directory name, unless the file name is an absolute path name. The --create_pch, --use_pch, and --pch options may not be used together. If more than one of these options is specified, only the last one will apply. Nevertheless, most of the description of automatic PCH processing applies to one or the other of these modes --- header stop points are determined the same way, PCH file applicability is determined the same way, and so forth. Other Ways for Users to Control Precompiled Headers There are several ways in which the you can control and/or tune how precompiled headers are created and used. #include "xxx.h" #include "yyy.h" #pragma hdrstop #include "zzz.h"Here, the precompiled header file will include processing state for xxx.h and yyy.h but not zzz.h. (This is useful if you decide that the information added by what follows the #pragma hdrstop does not justify the creation of another PCH file.) Moreover, when the host system is a Cray, then one of the command-line options --pch, --create_pch, or --use_pch, if it appears at all, must be the first option on the command line; and in addition: In general, it doesn't cost much to write a precompiled header file out even if it does not end up being used, and if it is used it almost always produces a significant speedup in compilation. The problem is that the precompiled header files can be quite large (from a minimum of about 250K bytes to several megabytes or more), and so one probably doesn't want many of them sitting around. Thus, despite the faster recompilations, precompiled header processing is not likely to be justified for an arbitrary set of files with nonuniform initial sequences of preprocessing directives. Rather, the greatest benefit occurs when a number of source files can share the same PCH file. The more sharing, the less disk space is consumed. With sharing, the disadvantage of large precompiled header files can be minimized, without giving up the advantage of a significant speedup in compilation times. Consequently, to take full advantage of header file precompilation, you should expect to reorder the #include sections of their source files and/or to group #include directives within a commonly used header file. The source to KCC's parser provides an example of how this can be done. A common idiom is this: #include "fe_common.h" #pragma hdrstop #include ...where fe_common.h pulls in, directly and indirectly, a few dozen header files; the #pragma hdrstop is inserted to get better sharing with fewer PCH files. The PCH file produced for fe_common.h is a bit over a megabyte in size. Another idiom, used by the source files involved in declaration processing, is this: #include "fe_common.h" #include "decl_hdrs.h" #pragma hdrstop #include ...decl_hdrs.h pulls in another dozen header files, and a second, somewhat larger, PCH file is created. In all, the fifty-odd source files of the parser share just six precompiled header files. If disk space were at a premium, one could decide to make fe_common.h pull in all the header files used --- then, a single PCH file could be used in building the parser. Different environments and different projects will have different needs, but in general, you should be aware that making the best use of the precompiled header support will require some experimentation and probably some minor changes to source code. This file last updated on 27 February 1997.
http://www-d0.fnal.gov/KAI/doc/UserGuide/precompiled-headers.html
crawl-002
refinedweb
1,666
64.2
This file documents the revision history for Perl extension Mojolicious. 1.16 2011-04-15 00:00:00 - Emergency release for a critical security issue that can expose files on your system, everybody should update! 1.15 2011-03-18 00:00:00 - Changed default log level in "production" mode from "error" to "info". - Improved lookup method in Mojo::IOLoop. - Fixed a serious Mojo::DOM bug. (moritz) 1.14 2011-03-17 00:00:00 -. - Fixed typos. 1.13 2011-03-14 00:00:00 - Deprecated Mojo::Client in favor of the much sleeker Mojo::UserAgent. - Made the most common Mojo::IOLoop methods easier to access for the singleton instance. - Fixed typos. 1.12 2011-03-10 00:00:00 - to jQuery to version 1.5.1. - Fixed XSS issue in link_to helper. - Fixed route unescaping bug. - Fixed small Mojo::DOM bug. (yko) - Fixed small documentation bug. - Fixed typos. (kimoto) 1.11 2011-02-18 00:00:00 - 00:00:00 -. - Fixed typos. 1.01 2011-01-06 00:00:00 - 00:00:00 - typos. (punytan) 0.999950 2010-11-30 00:00:00 -. - Fixed typos. 0.999941 2010-11-19 00:00:00 - 00:00:00 - Improved resolver tests. - Fixed IO::Socket::SSL 1.34 compatibility. 0.999939 2010-11-15 00:00:00 -. - Fixed typos. 0.999938 2010-11-09 00:00:00 - Moved all commands into the Mojolicious namespace. - Fixed typo. - Removed OS X resource fork files. 0.999937 2010-11-09 00:00:00 - 00:00:00 - Improved Mojo::Template performance slightly. (kimoto) - Fixed a serious WebSocket bug. - Fixed non-blocking DNS resolver bug. - Fixed connection reset handling in Mojo::IOLoop. 0.999935 2010-11-03 00:00:00 - 00:00:00 - Fixed relaxed HTTP parsing. 0.999933 2010-10-30 00:00:00 - Fixed small connect bug in Mojo::IOLoop. - Fixed WebSocket handshake. 0.999932 2010-10-29 00:00:00 - Deprecated the old plugin hook calling convention and added EXPERIMENTAL hook method to Mojolicious. - Fixed a few small connect bugs in Mojo::IOLoop. - Fixed typos. 0.999931 2010-10-25 00:00:00 - 00:00:00 - config files to Mojolicious::Plugin::JsonConfig. (marcus) - Added reserved route name current. - Simplified transaction pausing by replacing it with an automatism. - Improved RFC3986 compliance of Mojo::Path. (janus) - Improved Mojo::Server::PSGI to preload applications. - Improved FastCGI detection for Dreamhost. (garu) - Improved keep alive timeout handling in Mojo::Client. - Improved documentation. (rhaen) - 00:00:00 - Removed OS X resource fork files. 0.999928 2010-08-15 00:00:00 - Fixed a security problem with CGI environment detection. - Fixed redirect_to without content and render_static bug. - Fixed nested partial rendering bug. (yko) - Fixed multiple small Mojo::DOM bugs. (yko) 0.999927 2010-08-15 00:00:00 - oneliner 00:00:00 - Added version requirement for all optional dependencies. - Improved documentation. - Fixed async client processing. - Fixed small renderer bug. 0.999925 2010-06-07 00:00:00 - perlish documentation. -) - Fixed typos. (jawnsy) 0.999924 2010-03-08 00:00:00 - Added default TLS cert and key to Mojo::IOLoop to make HTTPS testing easier, so "mojo daemon --listen https://*:3000" now just works. - Added request limit support to the daemons. - Added basic authorization and proxy authorization. - 00:00:00 -data. - Fixed typos. 0.999922 2010-02-11 00:00:00 -friendly) - Fixed typos. 0.999921 2010-02-11 00:00:00 - Fixed a small kqueue bug. 0.999920 2010-02-11 00:00:00 - (win32) with tests. - and added tests. 00:00:00 - 00:00:00 - and added tests. (yuki-kimoto) 0.999912 2009-11-24 00:00:00 - Improved ioloop performance. (gbarr) 0.999911 2009-11-14 00:00:00 - 00:00:00 - Fixed url_for without endpoint bug. - Fixed BOM handling in Mojo::JSON. (rsp) - Fixed named redirect_to with arguments. - Improved Mojo::Exception. (yuki-kimoto) 0.999909 2009-11-11 00:00:00 - Cleaned up tutorial. - FIxed renderer exception bug. (yuki-kimoto) 0.999908 2009-11-11 00:00:00 - Fixed bridges/ladders and added tests. 0.999907 2009-11-11 00:00:00 - Fixed another connection close bug in ioloop. - Fixed relaxed placeholder format handling in MojoX::Routes::Pattern. 0.999906 2009-11-11 00:00:00 - Fixed connection close bug in ioloop. 0.999905 2009-11-11 00:00:00 - Fixed routes bug that prevented the root from having formats. 0.999904 2009-11-10 00:00:00 - Cleaned up examples. 0.999903 2009-11-10 00:00:00 - Added ladders to Mojolicious::Lite, they are like bridges but lite. - Added encoding support to renderer. (likhatskiy) - Added dumper helper. - Made tmpdir in Mojo::Asset::File configurable. 0.999902 2009-11-01 00:00:00 - 00:00:00 - callback tests. (melo) - 00:00:00 - 00:00:00 -. - Cleaned up Mojo::Date. - Cleaned up Mojo::Transaction. - 00:00:00 - Fixed typo. 0.991245 2009-07-31 00:00:00 - 00:00:00 - Fixed package. 0.991243 2009-07-28 00:00:00 - 00:00:00 - and added tests. 0.991241 2009-07-20 00:00:00 -. - Added tests. - Allow log level override via environment variable in Mojo::Log. - Code cleanup. 0.991240 2009-07-19 00:00:00 -. - Cleaned up code. 0.991239 2009-07-16 00:00:00 - 00:00:00 - Fixed all shebang lines. 0.991237 2009-07-15 00:00:00 - Renamed process_local to process_app in Mojo::Client, this change is not backward and added more tests. (acajou) - Improved Mojo::Template exception handling. - Cleaned up exception code. - Fixed possible infinite loop in Mojo::Server::FastCGI. - Fixed typos. 0.991236 2009-07-05 00:00:00 - Simplified Mojo::Home. - Moved executable detection to Test::Mojo::Server. - Improved Mojo::Loader::Exception. - Moved persistent_error.t tests to app.t. - Cleaned up code. - Fixed at_least_version. (yuki-kimoto) 0.991235 2009-07-05 00:00:00 - Removed prepare/finalize methods from Mojolicious. - Fixed typos. 0.991234 2009-07-03 00:00:00 - 00:00:00 - Rewrote Mojo::Client::process_local to use the new state machine. - Added Server and X-Powered-By headers. - Fixed external server tests. - Fixed Mojo::Date handling of negative epoch values. 0.991232 2009-06-29 00:00:00 - Fixed tarball. 0.991231 2009-06-29 00:00:00 - Rewrote. 00:00:00 - Added local_address(), local_port(), remote_address() and remote_port() to Mojo::Transaction. - Improved tests. - Fixed some typos. 0.9001 2009-01-28 00:00:00 - Added proper home detection to Mojo itself. (charsbar) - Fixed a bug where errors got cached in the routes dispatcher. (charsbar) - Updated error handling in MojoX::Dispatcher::Static. - Fixed Mojo::Message::Request::cookies() to always return a arrayref. - Fixed url_for to support references. (vti) - Fixed unescaping of captures. (vti) - Fixed typos. (uwe) 0.9 2008-12-01 00:00:00 -. - 00:00:00 - Cleaned up Mojo::Message callbacks and added tests. -. - Cleaned up Mojo::File. (Leon Brocard) -) - Added buffer tests. (Mark Stosberg) 0.8008 2008-11-07 00:00:00 - Fixed multipart parsing for short requests. - Fixed content file storage to specific file. - Fixed lower case appclasses. 0.8007 2008-11-07 00:00:00 - Cleaned up the api some more. - Added param to Mojo::Message. - Added server.t. (Mark Stosberg) - Added documentation. (Mark Stosberg) - Cleaned up Mojo::File api. - Fixed infinite loop in Mojo::File. (Leon Brocard) 0.8006 2008-11-06 00:00:00 -) - Added documentation. (Mark Stosberg) - Fixed typos. (Marcus Ramberg) 0.8.5 2008-11-04 00:00:00 - Fixed version. (Andreas Koenig) - Fixed typos. 0.8.4 2008-11-04 00:00:00 - Improved caching in Mojo::Message. - Added upload and cookie method to Mojo::Message. - Changed uploads behavior in Mojo::Message to bring it in line with cookies. - Added documentation. (Mark Stosberg) 0.8.3 2008-11-03 00:00:00 - Removed filter from Mojo::Base and added warnings. - Added caching to uploads in Mojo::Message. (Mark Stosberg) - Fixed typos. (Robert Hicks) - Added documentation. 0.8.2 2008-11-01 00:00:00 - Removed OS X resource fork files. 0.8.1 2008-11-01 00:00:00 - Made daemon.t developer only. - Fixed typos. 0.8 2008-10-21 00:00:00 - Fixed. 0.7 2008-10-11 00:00:00 - 00:00:00 - Many more bugfixes. 0.5 2008-09-24 00:00:00 - Many small bugfixes. 0.4 2008-09-24 00:00:00 - Moved everything into the Mojo namespace. 0.3 2008-09-24 00:00:00 - Fixed documentation. 0.2 2008-09-24 00:00:00 - First release.
http://web-stage.metacpan.org/changes/release/KRAIH/Mojolicious-1.16
CC-MAIN-2020-34
refinedweb
1,404
64.47
Writes a scene graph to a file. More... #include <Inventor/actions/SoWriteAction.h> Writes a scene graph to a file. This class is used for writing scene graphs in Open Inventor (.iv) format. SoWriteAction traverses the scene graph and uses an instance of SoOutput to write each node. SoOutput methods can be called to specify what file or memory buffer to write to. SoOutput supports both ASCII (default) and binary formats and provides some convenience functions for opening and closing files. See SbFileHelper for more convenience functions. Since Open Inventor 8.1, SoOutput can write compressed data in the lossless Zlib (gzip) format. Both ASCII and binary format files may be compressed. STL Open Inventor can also export geometry to an STL (.stl) format file. See SoSTLWriteAction. SoOutput, SoSTLWriteAction AlternateRep, Simplification Constructor. Returns the type identifier for this specific instance. Implements SoTypedObject.
https://developer.openinventor.com/refmans/latest/RefManCpp/class_so_write_action.html
CC-MAIN-2021-25
refinedweb
142
61.43
Type: Posts; User: jagmit I have a master page which is using the style sheet.. My problem is that my vertical scrollbar does not work and horizontal scrollbar is not showing when i restore down the page. here is my... I have an arraylist which has a list of structures stored in it Now i need to store these structures in a database, where each structure has its own line. public struct BackupSpecEntry { ... ok i got it........ I have a checkbox which when enabled the value of the dropdownlist is entered in database and if not enabled a default value should be enterd in the database.... my code is : if... Darwen thanks, it worked just fine.. vandel212 that worked too.... Thanks guys.. i have a code... protected void Page_Load(object sender, EventArgs e) { if (!IsPostBack) { for (int i = 1; i <= 100; i++) { ... I need to add the current date time in a bigint field in a database... and then display from that only the date in format: october 1, 2009. I am currently thinking of storing the value in string... I have a master page with two contentplaceholders. I have a default page.aspx which uses this masterpage. In the default page one contentholder has a treeview and the other has a gridview. Now i... I have a master page with two contentplaceholders. I have a default page.aspx which uses this masterpage. In the default page one contentholder has a treeview and the other has a gridview. Now i... the code to populate the tree node: using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.UI; using System.Web.UI.WebControls; using... No, I am selecting only one node i.e. the parent node or a child node. But since the treenodes are being populated from the database i dont know how to generate an event when a particular node is... Is there no 1 who knows the answer to this?/? Even how to go about information will do. I have the code to retrieve values from the database and display on the treenode dynamically. Just the selection is a problem. If there is a tutorial of any other information please let me kow I have a treeview control in ASP.NET and C#. Root node This is fixed ---Parent Node 1 Parent node and child are populated from the database directly. ----Child node1 ----Child node2... I have a treeview control in ASP.NET and C#. Root node This is fixed ---Parent Node 1 Parent node and child are populated from the database directly. ----Child... I want to populate my treeview dynamically from my database. In the above example they are providing the node values. I have a database of a company XYZ. It has two tables Departments-(customer support, executives, HR) and Machines-(PC1, PC2, PC3, PC4) The PC I want my tree view to look like XYZ --... it can be through loutus notes but is it possible using outlook express Hi, What is need to do is... There are two departments which are on one email network. I need to calculate the number of emails sent by each user in one department to a user in another...
http://forums.codeguru.com/search.php?s=24eb512286e30918466efc76bfac70eb&searchid=7207285
CC-MAIN-2015-27
refinedweb
536
77.84
I'm working with the bno055 sensor on a Raspberri Pi 4 reading the sensor over UART based on this tutorial: ... cuitpython I have wired as described in the 'Python Computer Wiring - UART' diagram and since I only need the quaternions for the project I intend use it I have simplified the example Python code to the following lines: - Code: Select all | TOGGLE FULL SIZE import time import serial import adafruit_bno055 uart = serial.Serial("/dev/serial0") sensor = adafruit_bno055.BNO055_UART(uart) while True: print("Quaternion: {}".format(sensor.quaternion)) time.sleep(0.05) I do get data in from the sensor but I run into 2 issues: 1.) The data seems rather laggy, as if it buffers all the time and catches up 2.) The script is dropping out at random times always stating 'RuntimeError: UART read error: 7' Here is a traceback error from the shell: Quaternion: (0.998779296875, 0.00128173828125, -0.00311279296875, -0.04974365234375) Traceback (most recent call last): File "/home/pi/Desktop/bno055UARTdevsleeptests.py", line 9, in <module> print("Quaternion: {}".format(sensor.quaternion)) File "/home/pi/.local/lib/python3.7/site-packages/adafruit_bno055.py", line 456, in quaternion return self._quaternion File "/home/pi/.local/lib/python3.7/site-packages/adafruit_bno055.py", line 852, in _quaternion resp = struct.unpack("<hhhh", self._read_register(0x20, 8)) File "/home/pi/.local/lib/python3.7/site-packages/adafruit_bno055.py", line 821, in _read_register raise RuntimeError("UART read error: {}".format(resp[1])) RuntimeError: UART read error: 7 >>> I have made a few screen captures with the sensor both being stationary as well as rotating for impression stored on a google drive folder If you have any troubleshooting advice for getting more stable data and the cause of the read error this would be highly appreciated! In the past I used the old bno055 library and this tutorial and the performance of the sensor was very stable and much faster. Is there anything I should revise BNO055_UART class itself to improve performance? I have to state my Python knowledge is very limited. Thanks kindly, Robin
https://forums.adafruit.com/viewtopic.php?f=19&t=179928&p=877792&sid=0036a55534d884b6e9f8a1e31334c0d3
CC-MAIN-2021-31
refinedweb
340
57.16
In-Depth Visual Studio 2013 came with a new version of Web API. The Web API 2.1 update includes a host of new features, including support for Binary JSON. Learn how to leverage BSON by building a Web API 2.1 service. The Web API framework allows developers to quickly create services using MVC-like conventions. It offers the flexibility of creating REST style services, and serving up data in a variety of formats, including the newly supported Binary JSON (BSON) format. This article will focus on building a Web API 2.1 service utilizing the new BSON Media-Type Formatter. ASP.Net Web API is a relatively new technology from Microsoft, released initially with the Microsoft .NET Framework 4.5. It allows a single Web service to communicate with various clients in various formats such as XML, JSON and OData. (I discussed the introduction of ASP.NET Web API is in a previous article). With Visual Studio 2013, Microsoft released ASP.NET Web API 2 in October 2013, followed by an update to Web API 2.1 in January 2014. Some of the new features in 2.1 include: BSON is a binary-encoded serialization of JSON-like objects (also known as documents) representing data structures and arrays. BSON objects consist of an ordered list of elements. Each element contains Field Name, Type and Value. Field names are strings. Types can be any of the following: The primary advantages of BSON are that it's lightweight with minimal spatial overhead, easy to parse, and efficient for encoding and decoding. To demonstrate, I'll create a sample ASP.NET Web API 2.1 application in ASP.NET MVC 5, using Visual Studio 2013. This solution will be very similar to the one I discussed in my previous article (following the premise of building an inventory app for a used car lot), which will allow for comparison and contrast of old features to new features. I'll call it CarInventory21, to reflect version 2.1 of the ASP.NET Web API. First, I create an ASP.NET Web Application, choosing an Empty template with MVC folders and core references, as seen in Figure 1 and Figure 2. Next, I add a class to the Model folder, calling it Car.cs, shown in Figure 3 and Figure 4. This class contains the following structure, representing each car in inventory (it's been slightly modified from its counterpart in the previous article): public class Car { public Int32 Id { get; set; } public Int32 Year { get; set; } public string Make { get; set; } public string Model { get; set; } public string Color { get; set; } } Next, I add an Empty Web controller by right-clicking the Controllers folder | Add | Controller, as shown in Figure 5. Afterward, when prompted for the type of controller to scaffold, I chose an Empty controller (see Figures 6-8) for the sake of keeping the sample project thin and clutter-free. Like other ASP.NET MVC projects, the controller isn't required to be in the Controllers folder, but it's a good practice and is highly recommended. Unlike typical ASP.NET MVC projects that inherit from the Controller class, the ASP.NET Web API controller inherits the ApiController class. Following the implementation from the previous article, I won't use a database but instead instantiate the records in code. I will also create two action methods within the controller: one to retrieve all car records, and the other to retrieve only a specific record using the ID field. The complete listing of Controllers\CarController.cs is shown in Listing 1. using System.Collections.Generic; using System.Linq; using System.Web.Http; using CarInventory21.Models; //Added manually namespace CarInventory21.Controllers { public class CarController : ApiController { Car[] cars = new Car[] { new Car { Id = 1, Year = 2012, Make = "Cheverolet", Model = "Corvette", Color = "Red" }, new Car { Id = 2, Year = 2011, Make = "Ford", Model = "Mustang GT", Color = "Silver" }, new Car { Id = 3, Year = 2008, Make = "Mercedes-Benz", Model = "C300", Color = "Black" } }; public IEnumerable<Car> GetAllCars() { return cars; } public IHttpActionResult GetCar(int id) { var car = cars.FirstOrDefault((c) => c.Id == id); if (car == null) { return NotFound(); } return Ok(car); } } } Looking at other portions of the project, a WebApiConfig.cs is created in the App_Start folder of the solution by default. As the name suggests, this file is for globally configuring the ASP.NET Web API. Before configuring the solution for BSON, I first need the references for the BSON library. This is done automatically by installing the NuGet package Microsoft ASP.NET Web API 2.1 Client Library, as seen in Figure 8. Installing the NuGet package will also require accepting the license agreement (see Figure 9). Because I want to display only BSON, I will add the following lines at the bottom of the Register method in the App_Start\WebApiConfig.cs file: config.Formatters.Clear(); // Remove all other formatters config.Formatters.Add(new BsonMediaTypeFormatter()); // Enable BSON in the Web service These calls will first clear and remove all other formatters the ASP.NET Web API will use for responding to requests. Then it will add a new instance of the BSON formatter to the empty list, effectively making BSON the only response type available. The complete listing can be seen } } } So far I've built the model and the controller. The only remaining item is a view. For a view, I'll create a simple HTML page. This will serve as a test page and also allow IIS Express to host the ASP.NET Web API. To add a Web page, I right-click on the project and select Add | HTML Page, calling it Index.html. By default, the links to access this new ASP.NET Web API are: Routing is a detailed topic not addressed in this article. However, you can learn more about it here. To help facilitate testing, I modify the newly-created page to include all links. The full listing can be found in Listing 3. <!DOCTYPE html> <html xmlns=""> <head> <title>Test Page</title> </head> <body> <br /> To retrieve all car: <a href="/api/car"> /api/car </a> <br /> <br /> To retrieve car #1: <a href="/api/car/1"> /api/car/1 </a> <br /> To retrieve car #2: <a href="/api/car/2"> /api/car/2 </a> <br /> To retrieve car #3: <a href="/api/car/3"> /api/car/3 </a> <br /> <br /> </body> </html> At this point, the project is ready to run, resulting in the output seen in Figure 11. Clicking on the first link to retrieve all cars will download all cars to the browser in BSON format. Currently there's no code to display the retrieved records, so the browser will simply prompt to save the records to a file. Opening the file in Visual Studio will yield the records in Binary format, as seen in Figure 12. Reading through the file, you'll see the various hex characters decorating the ASCII characters. These are characteristics of the BSON format. At this stage the ASP.NET Web API is ready to be consumed by clients. (I'll discuss the various ways an ASP.NET Web API can be consumed by a client process in upcoming articles.) Lightweight, Flexible Data ASP.NET Web API 2.1 is currently the latest release from Microsoft. It offers a variety of new features, including the new BSON format. It can be utilized like other formats, but offers a lightweight representation of binary data in a format similar to JSON, which gives Web API 2.1 more flexibility while still remaining simple and easy to configure.
https://visualstudiomagazine.com/articles/2014/05/01/implementing-binary-json-in-aspnet-web-api-2_1.aspx?admgarea=features
CC-MAIN-2020-45
refinedweb
1,263
67.04
Roll-API Dart Dart library for Roll-API This icon ↑ indicates if API is working right now What the hell is Roll-API?? It's a super cool API that allows you to roll a real dice and get it's number! Check it out: How to use Import: import 'package:rollapi/rollapi.dart' as roll; Simple way - use getRandomNumber(): print("Rolling a dice..."); roll.getSimpleResult().then((result) => print("Number: $result")); It returns pure Future<int>, and throws an exception if something failed More advance way: use makeRequest(): It returns a Requestobject, which contains UUID and Stream of values: var request = await roll.makeRequest(); request.stateStream.listen((event) { switch (event.key) { case roll.RequestState.queued: print('Wating...'); // .queued value is an ETA DateTime - which can be null if (event.value != null) { var sec = (event.value as DateTime).difference(DateTime.now()); print('Time left: $sec seconds'); } break; case roll.RequestState.failed: print('Request failed :((('); // .failed value is an exception - you can throw it throw event.value; break; case roll.RequestState.finished: print('Finished!!! Number: ${event.value}'); break; default: break; } }); If you want to use another instance, because you want to test your own or official is currently down, you can: import 'package:rollapi/rollapi.dart' as roll; roll.API_BASE_URL = ''; CLI This package also offers really nice CLI, that can, for example, generate you a random password! You can either get it with dart pub global activate rollapi, or from GitHub releases: Then, you can run: $ rollapi roll Rolling the dice... 1 $ rollapi pwd --numbers -l 6 Generating random password... 0% 8% ... 92% DONE! Your password: cwgl0r
https://pub.dev/documentation/rollapi/latest/
CC-MAIN-2021-21
refinedweb
265
60.01
@Request part of Spring MVC module. Also Read: This annotation can be used in class level or method level. If you are using at the class level, that will be set as the common relative path for all the methods inside the class. If you are using at only method level, each will be acting as it’s own relative path from the application’s context root. This tutorial has examples to makes you understand the difference when you are annotating at class level and method level. RequestMapping Annotation Parameters The following are the parameters accepted by this annotation: - consumes : This argument narrows down the acceptable media format from the request. - headers : The headers of the mapped request, narrowing the primary mapping. - method : This argument specifying the HTTP methods can be redirected to that method. More than one can be specified by separating using commas. The HTTP methods that are used are : GET, POST, HEAD, OPTIONS, PUT, PATCH, DELETE, TRACE - name : This is simply a name to the mapping. This can be specified at class level and the method level. If you specify at the both the levels, then a combined name is derived by concatenation with “#” as separator. - params : The parameters of the mapped request, narrowing the primary mapping. - path : This is an alias for the value argument. the path mapping URIs (e.g. “/myPath.do”). Ant-style path patterns are also supported (e.g. “/myPath/*.do”). At the method level, relative paths (e.g. “edit.do”) are supported within the primary mapping expressed at the type level. - produces : This argument narrows down the returned media format to the response. - value : This is the primary mapping used for mapping the web request to that method. Class Level Mapping If you declare @RequestMapping at the class level, the path will be applicable to all the methods in the class. Let’s look at the following example code snippet: @Controller @RequestMapping("/millets") public class SpringMVCController { @RequestMapping(value="/foxtail") public String getFoxtail(){ return "foxtail"; } @RequestMapping(value="/finger") public String getFinger(){ return "finger"; } public String getCommon(){ return "common"; } } In the above code, relative path /millets is enforced to all the methods inside the class. The method getFoxtail() will be invoked when user enter the URI /millets/foxtail. However, the last method getCommon() will not be invoked by any request because this method is not annotated with the @RequestMapping annotation. Method Level Mapping When there is no class level mapping and multiple method level mappings are defined, each method would form an unique relative path. Let’s look at the below example code: @Controller public class SpringMVCController { @RequestMapping(value="/foxtail") public String getFoxtail(){ return "foxtail"; } @RequestMapping(value="/finger") public String getFinger(){ return "finger"; } } In the above code, both the method have their individual relative paths /foxtail and /finger respectively. HTTP Methods Mapping One of the other way to do the Request Mapping is to use the HTTP methods. You can specify in the @RequestMapping annotation that for which HTTP method request the method has to be invoked. Let’s look at the following code: @Controller public class SpringMVCController { @RequestMapping(value="/foxtail",method=RequestMethod.GET) public String getFoxtail(){ return "foxtail"; } @RequestMapping(value="/foxtail",method=RequestMethod.POST) public String getList(){ return "foxtailList"; } @RequestMapping(value="/",method=RequestMethod.GET) public String getMillets(){ return "millets"; } } In the above code, if you look at the first two methods mapping to the same URI, but both have the different HTTP methods. First method will be invoked when HTTP method GET is used and the second method is invoked when HTTP method POST is used. This kind of design is very useful for writing the REST APIs for your project. Mapping using Params Another very useful mapping technique is to use the parameters in the query string to filter the handler mappings. For example, you can restrict the handler mapping by configuring like if the URL query string has this parameter then go to this method or redirect to some other method. Let’s look at this example snippet: @RequestMapping(value="/millets",method=RequestMethod.GET,params="foxtail") public String getFoxtail(){ return "foxtail"; } In the above example code, the request goes to this method only when the following criteria are satisfied: - If the URL is /milletsand - If the HTTP method is GETand - If the query request has the parameter with the name foxtail Summary In this tutorial I have explained how to use the @RequestMapping annotation in your spring mvc application. The various combinations like class level mapping, method level mapping, params matching, HTTP methods matching, headers mapping and multiple path URI mappings. If you have any questions on how to use @RequestMapping annotation and any related spring mvc issues, please feel free to drop a comment in this tutorial for any help. Thank you for reading my tutorial!! Happy Learning!!
https://javabeat.net/requestmapping-spring-mvc/
CC-MAIN-2017-47
refinedweb
798
52.8
Vapor has a nifty build-in feature to derive the working directory of a project. This makes it easy for you to fetch files from your project and serve their content; fx if you want to seed some data in your database, if you are building an initial mock api at the beginning of a project, if you want to ease the process of testing data parsing, or something fourth. In this tutorial we’ll use Vapor’s build in function to derive the working directory and thereby easily fetch files from our project. We’ll then decode the content of our file into a custom Swift object and test that it works as expected1. First we’ll create a new Vapor project in the terminal2 vapor new nobel-laureates-example next we’ll relocate into the project folder and generate our Xcode project. cd nobel-laureates-example/ vapor xcode -y You now have a fully functioning Vapor project; All you have to do is run it. Where to store your files Next step is to decide where to store our files. Our newly created Vapor project came with a number of predefined folders. The app itself goes into Sources, while all test should be in Tests, finally are the Public and Resources folders. (You may not have the Resources by default, but then you can just create it yourself). The Public folder should only be used for files that you want anybody from outside of the app to get access to. Those could be your CSS and Javascript files for your frontend or a document anybody should be able to download. Resources is for files that you do not want the outside to gain access to (but at the same time not super secret files that need encryption, password etc.). We will be storing our files in Resources for this tutorial. Fetching the file Add a sub-folder called Utilities inside Sources. This is for any general methods we’ll be needing during the tutorial. Inside Utilities we’ll add a new Swift file called “Data+fromJSONFile.swift”. First we’ll import Core so we can get easy access to the working directory. Core is a utility package from Vapor that contains tools for byte manipulation, Codable, OS APIs, and debugging. Then we’ll extend Data with a method, which we’ll call fromFile. fromFile(_:,folder:) takes the file name and the relative location of the file as arguments. It then returns the content of the file as Data (Since we are only going to work with json files in this tutorial, we’ll add the relative path to the json folder as a default value). If no file exists with the provided name and location the method will throw an error. import Core extension Data { static func fromFile( _ fileName: String, folder: String = "Resources/json" ) throws -> Data { let directory = DirectoryConfig.detect() let fileURL = URL(fileURLWithPath: directory.workDir) .appendingPathComponent(folder, isDirectory: true) .appendingPathComponent(fileName, isDirectory: false) return try Data(contentsOf: fileURL) } } We use Vapor’s DirectoryConfig.detect() to get the location of the working directory of the project. From here we can use Swift’s URL type initialized by providing the file URL, URL((fileURLWithPath: String). In the end, the URL will look like the following: "/[working-directory]/[folder]/[fileName]" Now that we have the full address of the file, we can fetch it and return it as Data. So let’s test what happens when we provide a correct and a wrong file name, respectively: func testCorrectFileName() throws { let fileName = "femaleNobelLaureates.json" XCTAssertNotNil(try Data.fromFile(fileName)) // Success } func testWrongFileName() throws { let fileName = "femaleNobelLaureate.json" XCTAssertThrowsError(try Data.fromFile(fileName)) // Success } In the first test the correct file name is provided and the content of the file is encode into Data. In the second test there is a spelling error and so the method throws the following error, as expected: "The file “femaleNobelLaureate.json” couldn’t be opened because there is no such file.". Decoding into a custom swift object For the rest of the blog we’ll be working with a list of all female Nobel Laureates in the categories Physics, Chemistry and Physiology or Medicine3. You can find the json file here. Each scientific accomplishment is summed down to a json dictionary that looks like the following: { "id": 19, "fullName": "Donna Strickland", "category": "physics", "year": 2018, "rationale": "for their method of generating high-intensity, ultra-short optical pulses", "isShared": true } We’ll be using Swift’s Codable protocol for easy decoding of the data into an array of our custom object. For that we’ll set up a class called NobelLaureates that has the same six variables as in the json dictionary and add a memberwise initializer. And so, the model is ready. final class NobelLaureates: Codable { var id: Int var fullName: String var category: String var year: Int var rationale: String var isShared: Bool ... // memberwise initializer has been left out here } Next we’ll set up our custom decoder, which we need for decoding the data. We’ll add it in an extension to NobelLaureates along with a private struct that functions as a container for the decoded data. extension NobelLaureates { static func loadFromFile( _ fileName: String = "femaleNobelLaureates.json" ) throws -> [NobelLaureates] { let decoder = JSONDecoder() let laureatesData = try Data.fromFile(fileName) let decodedNobelLaureates = try decoder.decode( LaureatesDecoderObject.self, from: laureatesData ) return decodedNobelLaureates.data } private struct LaureatesDecoderObject: Content { var data: [NobelLaureates] } } Okay, we are now ready to test if the decoder is working. First we’ll test that we have decoded the correct number of recipients of the Nobel price in Physics, Chemistry and Medicine or Physiology, which is 20. func testDecodeLaureatesCount() throws { let testData = try NobelLaureates.decodeFromData() XCTAssertEqual(testData.data.count, 20) // Success } Next we’ll test who was the seventh woman to receive a Nobel in the natural sciences4. Her name was Rosalyn Sussman Yalow. She received the price in 1977 for her work in developing radioimmunoassays of peptide hormones. func testDecodeLaureatesAtIndex() throws { let testData = try NobelLaureates.decodeFromData() XCTAssertEqual(testData.data[6].fullName, "Rosalyn Sussman Yalow") // Success XCTAssertEqual(testData.data[6].year, 1977) // Success } Success, our decoder works as expected! Final notes In this tutorial we have created a convenient method for fetching files in our Vapor project. Next we created a custom decoder to convert the data in our json file to Swift objects. Finally we covered each step with tests to verify that they worked. And a very final note; We used a json file in the tutorial, but the methods presented here can just as well be used to fetch and decode the content of a csv/xlsx/whatever file. Article photo by Fabian Grohs Download the example project here. ↩ If you do not have Vapor installed already, you can follow this tutorial. ↩ The list of female laureates are extracted from here. ↩ Actually, Rosalyn Sussman Yalow was the sixth woman to receive a Nobel in the natural science topics, as Marie Skłodowska Curie receive the price twice. [Physics in 1903 and Chemistry in 1911] ↩
https://engineering.monstar-lab.com/2019/08/15/Fetching-files-easily-in-Vapor-when-you-are-writing-tests
CC-MAIN-2020-45
refinedweb
1,173
54.83
Python Interview Questions and Answers Use os.remove(filename) or os.unlink(filename); The shutil module contains a copyfile() function. Use the popen2 module. For example: import popen2 fromchild, tochild = popen2.popen2("command") tochild.write("input ") tochild.flush() output = fromchild.readline() The select module is commonly used to help with asynchronous I/O on sockets. Yes. Python 2.3 includes the bsddb package which provides an interface to the BerkeleyDB library. Interfaces to disk-based hashes such as DBM and GDBM are also included with standard Python. The standard module random implements a random number generator. Usage is simple: import random random.random() This returns a random floating point number in the range [0, 1). Yes, you can create built-in modules containing functions, variables, exceptions and even new types in C. The highest-level function to do this is PyRun_SimpleString() which takes a single string argument to be executed in the context of the module __main__ and returns 0 for success and -1 when an exception occurred (including SyntaxError)... A tuple is a list that is immutable. A list is mutable i.e. The members can be changed and altered but a tuple is immutable i.e. the members cannot be changed. Other significant difference is of the syntax. A list is defined as list1 = [1,2,5,8,5,3,] list2 = ["Sachin", "Ramesh", "Tendulkar"] A tuple is defined in the following way tup1 = (1,4,2,4,6,7,8) tup2 = ("Sachin","Ramesh", "Tendulkar") You may also like:
http://www.cseworldonline.com/pythoninterview/interview.php
CC-MAIN-2019-04
refinedweb
250
60.31
Blogging on App Engine Interlude: Editing and listing Posted by Nick Johnson | Filed under coding, app-engine, tech, bloggart This is part of a series of articles on writing a blogging system on App Engine. An overview of what we're building is here. A couple of things didn't quite make it into part 2 of the series: Listing and editing posts in the admin interface. This post is a short 'interlude' between the main posts in the series, and briefly covers the changes needed for those features. Editing posts requires surprisingly little work, thanks to our use of the Django forms library. First, we write a decorator function that we can attach to methods that require an optional post ID, loading the relevant BlogPost object for us: def with_post(fun): def decorate(self, post_id=None): post = None if post_id: post = BlogPost.get_by_id(int(post_id)) if not post: self.error(404) return fun(self, post) return decorate Then, we enhance the PostHandler to take an optional post ID argument, using the decorator we just defined. Here's the new get() method: @with_post def get(self, post): self.render_form(PostForm(instance=post)) If no post ID is supplied, post is None, and the form works as it used to. If a post ID is supplied, the post variable contains the post to be edited, and the form pre-fills all the relevant information. The same applies to the post() method. Now all we have to do is add an additional entry for the PostHandler in the webapp mapping: ('/admin/post/(\d+)', PostHandler), Listing posts is extremely simple: First, we refactor the 'render_to_response' method into a BaseHandler, as we suggested in part 2. Then, we create a new AdminHandler. This handler simply fetches a set of posts from the datastore, ordered by publication date, and renders a template listing them. Here's the full code for the AdminHandler, most of which is concerned with providing the Django templates with the correct offsets to use for generating next and previous links and the post count: class AdminHandler(BaseHandler): def get(self): offset = int(self.request.get('start', 0)) count = int(self.request.get('count', 20)) posts = BlogPost.all().order('-published').fetch(count, offset) template_vals = { 'offset': offset, 'count': count, 'last_post': offset + len(posts) - 1, 'prev_offset': max(0, offset - count), 'next_offset': offset + count, 'posts': posts, } self.render_to_response("index.html", template_vals) Finally, Sylvain, from the #appengine IRC channel, pointed out that the blog as it stands doesn't handle unicode gracefully. Fortunately, fixing that is simple - instead of setting the mime type for generated pages to "text/html", we set it to "text/html; charset=utf-8". This simple change is all that's required. You can see the diff for that, along with internationalization improvements to the slugify function, here.Previous Post Next Post
http://blog.notdot.net/2009/10/Blogging-on-App-Engine-Interlude-Editing-and-listing
CC-MAIN-2019-09
refinedweb
470
53.71
Sep 19, 2014 I'm not sure if this post is very on-topic for LW, but we have many folks who understand Haskell and many folks who are interested in Löb's theorem (see e.g. Eliezer's picture proof), so I thought why not post it here? If no one likes it, I can always just move it to my own blog. A few days ago I stumbled across a post by Dan Piponi, claiming to show a Haskell implementation of something similar to Löb's theorem. Unfortunately his code had a couple flaws. It was circular and relied on Haskell's laziness, and it used an assumption that doesn't actually hold in logic (see the second comment by Ashley Yakeley there). So I started to wonder, what would it take to code up an actual proof? Wikipedia spells out the steps very nicely, so it seemed to be just a matter of programming. Well, it turned out to be harder than I thought. One problem is that Haskell has no type-level lambdas, which are the most obvious way (by Curry-Howard) to represent formulas with propositional variables. These are very useful for proving stuff in general, and Löb's theorem uses them to build fixpoints by the diagonal lemma. The other problem is that Haskell is Turing complete, which means it can't really be used for proof checking, because a non-terminating program can be viewed as the proof of any sentence. Several people have told me that Agda or Idris might be better choices in this regard. Ultimately I decided to use Haskell after all, because that way the post will be understandable to a wider audience. It's easy enough to convince yourself by looking at the code that it is in fact total, and transliterate it into a total language if needed. (That way you can also use the nice type-level lambdas and fixpoints, instead of just postulating one particular fixpoint as I did in Haskell.) But the biggest problem for me was that the Web didn't seem to have any good explanations for the thing I wanted to do! At first it seems like modal proofs and Haskell-like languages should be a match made in heaven, but in reality it's full of subtle issues that no one has written down, as far as I know. So I'd like this post to serve as a reference, an example approach that avoids all difficulties and just works. LW user lmm has helped me a lot with understanding the issues involved, and wrote a candidate implementation in Scala. The good folks on /r/haskell were also very helpful, especially Samuel Gélineau who suggested a nice partial implementation in Agda, which I then converted into the Haskell version below. To play with it online, you can copy the whole bunch of code, then go to CompileOnline and paste it in the edit box on the left, replacing what's already there. Then click "Compile & Execute" in the top left. If it compiles without errors, that means everything is right with the world, so you can change something and try again. (I hate people who write about programming and don't make it easy to try out their code!) Here we go: main = return () -- Assumptions data Theorem a logic1 = undefined :: Theorem (a -> b) -> Theorem a -> Theorem b logic2 = undefined :: Theorem (a -> b) -> Theorem (b -> c) -> Theorem (a -> c) logic3 = undefined :: Theorem (a -> b -> c) -> Theorem (a -> b) -> Theorem (a -> c) data Provable a rule1 = undefined :: Theorem a -> Theorem (Provable a) rule2 = undefined :: Theorem (Provable a -> Provable (Provable a)) rule3 = undefined :: Theorem (Provable (a -> b) -> Provable a -> Provable b) data P premise = undefined :: Theorem (Provable P -> P) data Psi psi1 = undefined :: Theorem (Psi -> (Provable Psi -> P)) psi2 = undefined :: Theorem ((Provable Psi -> P) -> Psi) -- Proof step3 :: Theorem (Psi -> Provable Psi -> P) step3 = psi1 step4 :: Theorem (Provable (Psi -> Provable Psi -> P)) step4 = rule1 step3 step5 :: Theorem (Provable Psi -> Provable (Provable Psi -> P)) step5 = logic1 rule3 step4 step6 :: Theorem (Provable (Provable Psi -> P) -> Provable (Provable Psi) -> Provable P) step6 = rule3 step7 :: Theorem (Provable Psi -> Provable (Provable Psi) -> Provable P) step7 = logic2 step5 step6 step8 :: Theorem (Provable Psi -> Provable (Provable Psi)) step8 = rule2 step9 :: Theorem (Provable Psi -> Provable P) step9 = logic3 step7 step8 step10 :: Theorem (Provable Psi -> P) step10 = logic2 step9 premise step11 :: Theorem ((Provable Psi -> P) -> Psi) step11 = psi2 step12 :: Theorem Psi step12 = logic1 step11 step10 step13 :: Theorem (Provable Psi) step13 = rule1 step12 step14 :: Theorem P step14 = logic1 step10 step13 -- All the steps squished together lemma :: Theorem (Provable Psi -> P) lemma = logic2 (logic3 (logic2 (logic1 rule3 (rule1 psi1)) rule3) rule2) premise theorem :: Theorem P theorem = logic1 lemma (rule1 (logic1 psi2 lemma)) To make sense of the code, you should interpret the type constructor Theorem as the symbol ⊢ from the Wikipedia proof, and Provable as the symbol ☐. All the assumptions have value "undefined" because we don't care about their computational content, only their types. The assumptions logic1..3 give just enough propositional logic for the proof to work, while rule1..3 are direct translations of the three rules from Wikipedia. The assumptions psi1 and psi2 describe the specific fixpoint used in the proof, because adding general fixpoint machinery would make the code much more complicated. The types P and Psi, of course, correspond to sentences P and Ψ, and "premise" is the premise of the whole theorem, that is, ⊢(☐P→P). The conclusion ⊢P can be seen in the type of step14. As for the "squished" version, I guess I wrote it just to satisfy my refactoring urge. I don't recommend anyone to try reading that, except maybe to marvel at the complexity :-) EDIT: in addition to the previous Reddit thread, there's now a new Reddit thread about this post.
https://www.lesswrong.com/posts/jshdZw3xofq9wgE7T/a-proof-of-loeb-s-theorem-in-haskell
CC-MAIN-2018-30
refinedweb
968
52.63
import data from Adobe FormsCentral?Asked by virginiapbergo on February 13, 2015 at 05:01 PM Hi Is there a way to import archived data from Formscentral? Reports we don't need to create surveys for that our clients may want to access?Also, to share the data we would need to export a file or create a sub user, correct? We can not send a live link to anyone who is not a sub user?If we wanted to fill out a survey on a tablet or iPad - would we need an app or use the direct website? Thanks! Virginia You will be happy to know Virginia that you can easily import the forms and data from your Adobe account into your JotForm account, all that is needed is one step from you and we handle all the rest. To see how, I would suggest taking a look at the following blog post: Import Both Your Forms and Responses in a Single Step from Adobe FormsCentral I would also recommend taking a look at this blog post as well: 7 Reasons Why JotForm is the Best Adobe FormsCentral Alternative Now in regards to your other question: Reports we don't need to create surveys for that our clients may want to access? I am not able to understand exactly what you want to achieve Virginia, do you want to create reports, but without creating surveys for each or maybe looking to see if our reports can be shared and password protected? In regards to sharing the submitted data I have opened a new thread for you here: and in regards to the question about viewing the jotform on mobile devices, I have moved the question to a new thread here: We do this since we can only assist you with one question per thread, allowing us to avoid any confusion and help you with all issues / questions that you might :)
https://www.jotform.com/answers/515875-Can-we-import-data-from-Adobe-FormsCentral
CC-MAIN-2017-30
refinedweb
321
56.42
Contents - 1 Introduction - 2 Color space Conversion: cv2.cvtColor() - 2.1 Syntax - 2.2 Reading the Sample Image - 2.3 Conversion of BGR to Gray space using cv2.cvtColor() with code cv2.COLOR_BGR2GRAY - 2.4 Conversion of BGR to HSV space using cv2.cvtColor() with code cv2.COLOR_BGR2HSV - 2.5 Conversion of BGR to RGB space using cv2.cvtColor() with code cv2.COLOR_BGR2RGB - 2.6 Conversion of BGR to LAB space using cv2.cvtColor() with code cv2.COLOR_BGR2LAB - 3 Conclusion Introduction OpenCV provides a method named cv2.cvtColor() which is used to convert an image from one color space to another. There are more than 150 color spaces are available in OpenCV. We will discuss the important ones in this article. Following Color spaces we are going to cover in this tutorials – - Grayscale - HSV Color Space - RGB Color Space - LAB Color Space import cv2 Color space Conversion: cv2.cvtColor() cvtColor() function is used to convert colored images to grayscale. Syntax cv2.cvtColor(src, code, dst, dstCn) Parameters: src: It is the image whose color space is to be changed. code: It is the color space conversion code. It is basically an integer code representing the type of the conversion, for example, RGB to Grayscale. Some codes are cv2.COLOR_BGR2GRAY: This code is used to convert BGR colored image to grayscale cv2.COLOR_BGR2HSV : This code is used to change the BGR color space to HSV color space. cv2.COLOR_BGR2RGB : This code is used to change the BGR color space to RGB color space. cv2. cv2.COLOR_BGR2LAB: This code is used to change the BGR color space to LAB color space. dst: It is the output image of the same size and depth as src image. It is an optional parameter. dstCn: It is the number of channels in the destination image. If the parameter is 0 then the number of the channels is derived automatically from src and code. It is an optional parameter. Reading the Sample Image # Reading the sample image img=cv2.imread("image.jpg") #Displaying the sample image window_name='image' cv2.namedWindow(window_name, cv2.WINDOW_NORMAL) cv2.imshow(window_name,img) cv2.waitKey(0) cv2.destroyAllWindows() Conversion of BGR to Gray space using cv2.cvtColor() with code cv2.COLOR_BGR2GRAY #Converting the image into grayscale image = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY ) #Displaying the grayscale image window_name='image_Gray' cv2.namedWindow(window_name, cv2.WINDOW_NORMAL) cv2.imshow(window_name,image) cv2.waitKey(0) cv2.destroyAllWindows() Conversion of BGR to HSV space using cv2.cvtColor() with code cv2.COLOR_BGR2HSV The HSV color space has the following three components. H – Hue ( Dominant Wavelength ). S – Saturation ( shades of the color ). V – Value ( Intensity ). The best thing is that it uses only one channel to describe color (H), making it very intuitive to specify color. #Converting image to HSV Space image_HSV = cv2.cvtColor(img, cv2.COLOR_BGR2HSV ) #Displaying the HSV image window_name='image_HSV' cv2.namedWindow(window_name, cv2.WINDOW_NORMAL) cv2.imshow(window_name,image_HSV) cv2.waitKey(0) cv2.destroyAllWindows() Conversion of BGR to RGB space using cv2.cvtColor() with code cv2.COLOR_BGR2RGB The RGB colorspace has the following properties – - It is an additive colorspace where colors are obtained by a linear combination of Red, Green, and Blue values. - The three channels are correlated by the amount of light hitting the surface. #Converting image to RGB Space image_RGB = cv2.cvtColor(img, cv2.COLOR_BGR2RGB ) #Displaying the RGB Image window_name='image_RGB' cv2.namedWindow(window_name, cv2.WINDOW_NORMAL) cv2.imshow(window_name,image_RGB) cv2.waitKey(0) cv2.destroyAllWindows() Conversion of BGR to LAB space using cv2.cvtColor() with code cv2.COLOR_BGR2LAB The Lab color space has three components. - L – Lightness ( Intensity ). - a – color component ranging from Green to Magenta. - b – color component ranging from Blue to Yellow. The Lab color space is quite different from the RGB color space. In RGB color space the color information is separated into three channels but the same three channels also encode brightness information. On the other hand, in Lab color space, the L channel is independent of color information and encodes brightness only. The other two channels encode color. #Converting image to LAB Space image_LAB = cv2.cvtColor(img, cv2.COLOR_BGR2LAB ) #Displaying the LAB image window_name='image_LAB' cv2.namedWindow(window_name, cv2.WINDOW_NORMAL) cv2.imshow(window_name,image_LAB) cv2.waitKey(0) cv2.destroyAllWindows() Conclusion So in this article, we saw how we can change the color of an image to various color spaces using cvtcolor() function of OpenCV. We converted a sample image into grayscale, HSV, RGB and LAB color space. Reference –
https://machinelearningknowledge.ai/opencv-tutorial-image-colorspace-conversion-using-cv2-cvtcolor/
CC-MAIN-2022-33
refinedweb
738
52.97
The C library function char *ctime(const time_t *timer) returns a string representing the localtime based on the argument timer. The returned string has the following format: Www Mmm dd hh:mm:ss yyyy, where Www is the weekday, Mmm the month in letters, dd the day of the month, hh:mm:ss the time, and yyyy the year. Following is the declaration for ctime() function. char *ctime(const time_t *timer) timer − This is the pointer to a time_t object that contains a calendar time. This function returns a C string containing the date and time information in a human-readable format. The following example shows the usage of ctime() function. #include <stdio.h> #include <time.h> int main () { time_t curtime; time(&curtime); printf("Current time = %s", ctime(&curtime)); return(0); } Let us compile and run the above program that will produce the following result − Current time = Mon Aug 13 08:23:14 2012
https://www.tutorialspoint.com/c_standard_library/c_function_ctime.htm
CC-MAIN-2020-34
refinedweb
154
62.27
#include <MPxToolCommand.h> This is the base class for interactive tool commands. An interactive tool command is a command that can be invoked as a MEL command or from within a user defined context (see MPxContext). Tool commands have the same functionality as MPxCommands, but include several additional methods for use in interactive contexts: setUpInteractive, cancel, finalize, and doFinalize. Class constructor. This constructor should only be called from within a user context that uses this tool. The context is responsible for setting up the internal state variables for the derived tools and for creating a command string for journalling. Class Destructor. The user can override this method to free up any allocated data within the derived MPxToolCommand class. This method should perform a command by setting up internal class data and then calling the redoIt method. The actual action performed by the command should be done in the redoIt method. This is a pure virtual method, and must be overridden in derived classes. Reimplemented from MPxCommand. This method cancels the command. The user should override this method when the original program state needs to be restored. This method is used to create a string representing the command and its arguments. Users should override this method and contruct an MArgList and then pass it to doFinalize for journalling. USE _doFinalize() IN SCRIPT. Call this method with an MArgList representing your command. This method will register the command with the undo manager for journalling.
http://download.autodesk.com/us/maya/2009help/API/class_m_px_tool_command.html
crawl-003
refinedweb
242
57.06
I wanted to take some time to discuss our position on the Contrib projects. Digy and I were a little off topic in the roadmap thread and I brought it up. Digy mentioned he always felt that it was a "nice to have" but not necessarily required for a release. I can't say I disagree. I had some time to look over a lot of the contrib code in Java, and there are quite a few projects that cannot be ported directly, as they rely on 3rd party Java libraries that have no direct .Net equivalent. Porting them could delay releases a long time, which is why I think it hasn't really been kept up to date as it is. I think a requirement for releases should be to have whatever projects that are currently in contrib to be up to date and building, with valid and passing tests (our 2.9.4 Contrib.Analyzers project is missing tests for 9 namespaces). I think it should be as simple as that. I don't think we should worry about adding new projects, unless you feel compelled to do so. My opinion is that if someone wants a Contrib added to the project, someone can port and donate the code to our project, or if they request it, someone can volunteer to port it themselves. People do use our Contrib assemblies, I personally think this is a fair trade-off to only have to maintain what we already have. I would like to know how everyone else feels about it. Thanks, Christopher
https://mail-archives.eu.apache.org/mod_mbox/lucenenet-dev/201111.mbox/%3CCANLkk2zKoujLaaKz4BGmJ5VwfZFqTUHU6UQASHH4nLxdor_O_A@mail.gmail.com%3E
CC-MAIN-2021-39
refinedweb
263
76.86
ISSUE 09 足 FEB 2013 Get printed copies of issues 1足8 plus binder at themagpi.com A Magazine for Raspberry Pi Users Meet Ladyada AAnn iinntteerrvviieew ww wiitthh tthhee FFoouunnddeerr ooff AAddaaffrruuiitt IInndduussttrriieess Raspberry Pi is a trademark of The Raspberry Pi Foundation. This magazine was created using a Raspberry Pi computer. TThhiis IIsssue.... W WeebIIO OPi FFraam mew work BBaacckkupp SSD carrdds VVaallaa & LLeddBoorgg SSccrraattcchh GPIO O RISCO OSS hhttttpp::///www maaggppii..ccoom MagPi team Ash Stone - Chief Editor /Administrator Tim 'meltwater' Cox - Writer /Page Designs /Admin. Chris 'tzj' Stagg - Writer /Photographer /Page Designs Colin Deady - Writer /Page Designs Jason 'Jaseman' Davies - Website /Page Designs Matt '0the0judge0' - Website /Administrator Aaron Shaw - Writer /Page Designs /Graphics Ian McAlpine - Writer /Page Designs /Graphics Sam Marshall - Page Designs /Graphics W. H. Bell - Writer /Page Designs Bryan Butler - Page Desins /Graphics Colin Norris - Graphics Mark Robson - Proof-reading Alex Baker - Proof-reading Richard Wenner - Proof-reading Steve Drew - Proof-reading Chosp - Proof-reading Benjamin Donald-Wilson - Proof-reading Mike Richards - Proof-reading Guest writers Alex Eames Eric PTAK Norman Dunbar Pete Nowosad Ross Taylor Simon Walters Cover: Limor Fried, engineer and founder of Adafruit Industries. 2 Contents 04 An interview with Limor Fried from Adafruit Founder and engineer of Adafruit Industries Limor Fried talks to The MagPi. 08 WebIOPi - Raspberry Pi REST framework Learn how to control the Raspberry Pi's GPIO interface from a web browser. 1 2 Backing up your Raspberry Pi Backup your SD card with optional compression and DVD archiving. 1 5 Win some more Raspberry Pi goodies This month there is an opportunity to win a Gertboard. 1 6 Quick2Wire's I/O interface board for the Raspberry Pi A review of the kit and the assembled board. 1 8 An introduction to RISCOS A basic introduction to the RISCOS operating system, from SD card installation to the desktop. 20 Installing & configuring Arch Linux Learn how to install Arch Linux, a barebones rolling Linux distribution on the Raspberry Pi. 22 An introduction to Vala programming Writing code in a Vala, a high level C# style language. 24 This month's Raspberry Pi events Find out what is on this month. 26 The C Cave - structs, histograms and data analysis Learn how to build more complicated data structures and programs. 32 Scratch Patch - controlling the GPIO interface from Scratch Learn the first steps to GPIO control, allowing more complicated interfacing. 34 The Python Pit - drive your Raspberry Pi with a mobile phone An introduction to webpy, providing mobile phone connections to python projects. 36 Feedback and question time 3 She is an open source hardware advocate, founder of Adafruit and was voted "Entrepreneur of 2012". Who is Limor Fried? [MagPi] First our congratulations on being awarded Entrepreneur magazine’s “Entrepreneur of 201 2” last month. Do you think this is the first sign of mainstream acceptance of “hacking”, in its true definition and the “maker” movement in general? [Limor] Thank you so much for the kind words. The Raspberry Pi community deserves a lot of thanks as well, all of the voting was via the internet and the Pi community really rallied for us! I believe the maker movement is past the "is this a real thing?" stage. About 6 years ago I was invited to a conference about the new maker movement that had just started to happen and a very large company made a point to say Adafruit was not a real company. It's been a challenge every day to prove a great business can support a great cause like open-source. Being awarded Entrepreneur magazine's "Entrepreneur of the year" means there are less barriers for someone starting out now. They don't need to hear something is not possible or not real, they can see there are unlimited opportunities for making and sharing - and running a successful business. [MagPi] We’re getting a little ahead of ourselves. Let’s take a step back. You are the founder and engineer of Adafruit, the New York based company that you formed in 2005 after you graduated from the Massachusetts Institute of Technology (MIT) with a master's 4 degree in Electrical Engineering and Computer Science. What inspired you to start your own company rather than “cut your teeth” with an industry employer? [Limor]. The demand for efforts like the Raspberry Pi has totaly changed Adafruit. Anyone can learn to design electronics, write code and have multiple ways to get the products in the hands of customers. One of the things about running a company is you can take on some projects that at first do not seem to have impact on the bottom line, but you can take the risk. Many of the projects we do at Adafruit would never be approved by a big company solely focused on a few products. We have over 1 ,200 products and some of them are purely experimental. [MagPi] Your nickname is “Ladyada” which I am assuming has some relation to Lady Ada Lovelace, the world’s first computer programmer? Of course the link to Adafruit is more obvious, but what was the inspiration behind the name? Maybe you had a premonition that you would be working with raspberries one day? can see that the Raspberry Pi section is one of the largest. [Limor] When I was younger all I did was play mission; to teach kids programming and making... and it's actually teach everyone, not just kids. We think everyone should be able to use a low-cost educational computer to learn electronics and of course learn a computer language. We struggled with how we would be able to start this endeavor and that's when the Raspberry Pi was announced. It became so popular so quickly that it really kept us on our toes meeting demand. Every single thing we design or curate in the store is tested by me. For the Raspberry Pi we knew it would be important to have the best educational resources in addition to the best support. Six months later, the Pi section is one of our largest and the Pi tutorials are the most viewed on the Adafruit Learning System. around with Linux, installing it on anything I could find and exploring all the things that made it work. In my hacker days, actually I'm still in those days now :) ‌, my nickname was Ada. I was always programming, building, reverse engineering so the Ladyada name has stuck with me from the start. Looking back at all the Linux hacking it seems like I was in training to work on the Raspberry Pi. For the younger folks out there who like spending time hacking away, it really can end up being a fantastic adventure and a career! [MagPi] We have all heard of open source software and it is used every day by millions of people, including 1 million plus Raspberry Pi users. But you are heavily involved in the open source hardware community. Indeed all Adafruit products are made available as open source hardware with free download of schematics, PCB layouts, firmware and software. Arduino is a popular example of open source hardware but can you tell us more about what you think is the future of open source hardware? [Limor] At Adafruit we have a big goal and [Limor], Raspberry Pi and Arduino are great examples. [MagPi] What got you interested in the Raspberry Pi? Looking at I The Adafruit exclusive Pibow case But wait, there's more! We also knew we'd need a great web-based way to teach, so we've invested a lot of time and resources in building our own integrated development environment (IDE), complete with step debugging and visualizer. The Raspberry Pi WebIDE is by far the easiest way to run code on your Raspberry Pi. Just connect your Pi to your local network. Then log on to the WebIDE in your web browser to edit Python, Ruby, JavaScript, or anything and easily send it over to your Pi. Also, your code will be versioned in a local git repository, and pushed remotely out to bitbucket so you can access it from anywhere, and any time. Watch the video at Continued over page... 5 Lastly, there is our Pi product line up. We take the best of Adafruit design and apply it to helping people get the most out of their Pi. Everything is fully documented and supported for makers of all ages... and we are just getting warmed up as they say! [MagPi] Women have always been sparsely represented in the technology industry and unfortunately this is likely to remain while social and educational bias continue to influence young girls. Of the fifteen finalists for “Entrepreneur of 201 2”, you were the only female. But the Arduino introduced electronics and interfacing to artists and creators while the Raspberry Pi is empowering educationalists to engage both boys and girls in fun and interesting ways. My two young daughters each have their own Raspberry Pi. They create their own programs using Scratch or copy Scratch programs that we regularly print in The MagPi and then modify them. You are a great role model for young women today, and I recently discovered 1 1 year old "super awesome Sylvia" (), but what more can be done to encourage girls to study engineering topics? people in the community and elevate them. There's been a lot of progress, and there's still a lot of work to be done. With a $35 Pi on the market all of us can give the gift of engineering to a young kid. You never know what will spark someone on a path of engineering, but I think we have more opportunities, in fact I think we have a million more than we did last year. [MagPi] Our younger readers might be interested in Mosfet. What can you tell us about Mosfet? Maybe Mosfet and Liz’s Mooncake should Skype? [Limor] Mosfet is my black cat of about 1 0 years now. He's grown up at Adafruit and he's on our weekly live engineering show "Ask an Engineer" at. There is something special about cats and engineers, we are not sure what it is exactly, but the best engineers we know have cats - so much so we started "Cats of Engineering" at. In January, MAKE Magazine had their second annual worldwide Maker meet up over Google+ Hangouts. Eben & Liz's cat was able to see Mosfet as we all talked about the 1 million milestone for the Pi and beyond. I'm not sure what will happen with the internet 1 0, 50 and 1 00 years from now, but as far as I can tell, it was meant for cats to communicate with each other :) [MagPi] The Adafruit website has a very Limor, Mosfet, AdaBot and Sylvia [Limor] One of my favorite quotes is "We are what we celebrate" by Dean Kamen. There are many women in tech but they're not celebrated as much or in the same way as their male counterparts. Journalists and conference organizers can put a well deserved spotlight on many amazing women; they are out there. If a young girl growing up does not see women in a particular field, or is not able to say "I want to do that, I want to be like her" then we'll never see more women in that field. It's really up to each of us to find 6 broad range of intriguing products. I’m particularly interested in the 2x1 6 negative RGB display plate for the Raspberry Pi and the MintyBoost looks like it could easily power a Model A Raspberry Pi. Then there’s the whole range of FLORA wearable electronics too. What products currently have the greatest interest among your customers and which products excite you currently? [Limor] The two big product lines are anything Raspberry Pi and our new wearable platform FLORA. People want to learn how to program, make some cool projects and also jump in to the new frontier of wearable electronics. Right now the products that are really exciting to me are the ones that inspire the community to do cool projects and share. I like the Pi based Wi-Fi radios and I really enjoy watching the projects come in that use FLORA in ways we didn't think about. I'm working on a Pi game controller, a touch screen and for FLORA an even smaller version for super-tiny wearable applications. [MagPi] I’m constantly amazed by the projects that folks create with their Raspberry Pi. I either read about them at or I see them on your "Show & Tell" show or we publish them in The MagPi. What projects have you seen built using products that you sell that caused a “Wow” moment? [Limor] One of my favorite projects is one that's in progress. A very prolific maker named Kris in our community is making a Raspberry Pi based snow-blower robot. He's printing out all the robotic parts, controlling servos, vision and more, all with a Pi. It's a functional robot that is built on open-source and we're all learning and sharing together. To think someone could make such a complex project at home in one's spare time, and at a reasonable cost is just "Wow!". [Ed: I saw Kris on the 1 9 January episode of "Show & Tell". His work is impressive.] [MagPi] You talked earlier about the Adafruit Learning System, which contains over 1 00 tutorials and videos including a series for the Raspberry Pi. I personally had no idea just how extensive this educational resource actually is and I will be returning to learn more plus also to download the Raspberry Pi WebIDE. What are your plans to further develop the Adafruit Learning System? [Limor] Thank you, the team has done an amazing job making the best learning system online. Justin, Tyler, Daigo and my team completely took tutorials to the next level for us. The Adafruit Learning System is just one part of the giant task of education for everyone. We're starting with programming and engineering, but we're not going to stop there. We think there's unlimited potential to maximize all of our potential through sharing. We will be adding our badging system, more user features and more ways to seamlessly use and share knowledge. We have PDFs you can print and share, the system works great on devices and we'll be adding more video and more interactivity (sensor logging and more) very soon. [MagPi] I keep reading that 201 3 is the year of the entrepreneur. Maybe this is influenced by the global economy but we certainly live in interesting times. Products such as the Raspberry Pi and Arduino are empowering a new generation and enabling new capabilities plus 3D printers, Hacker Spaces and Maker Faires are all making technology more accessible to everyone in a spirit of community, education and sharing. I won’t ask you what is the next big thing(!) but having been a major influencer in this community and watched it evolve over that past 7 years what trends do you see looking forward? [Limor] You can make any future prediction come true if you're willing to do the hard work, so at Adafruit we try to predict the future by building it. In the next 7 years more people will be able to do anything they can think of, so what does that mean? I think we need to pick some big problems to solve. If a $35 computer can help teach, could it also diagnose and help to heal? Could it help democracy and freedom around the world? Could this type of technology and level of access for all bring us together so we can all move forward? I think so, and that's where I'll be spending my days and nights. Article by Ian McAlpine Photos by John De Cristofaro, Adafruit and its staff. 7 WebIOPi WebIOPi - Raspberry Pi RESTframework Tutorial: Remote controlled robot cam WebIOPi is a REST framework which allows you to control Raspberry Pi’s GPIO from a browser. It’s written in Javascript for the client and in Python for the server. You can fully customize and easily build your own web app. You can even use all the power of WebIOPi directly in your own Python script and register your functions so you can call them from the web app. WebIOPi also includes some other features like software PWM for all GPIO. Installation Installation on the Raspberry Pi is really easy, as it only requires Python. On Raspbian Wheezy, you can use the PiStore to download and install WebIOPi. You can also install it using a terminal or a SSH connection. Check the project page for the latest version, then type: $ wget WebIOPi-0.5.3.tar.gz $ tar xvzf WebIOPi-0.5.3.tar.gz $ cd WebIOPi-0.5.3 $ sudo ./setup.sh * Look in /home/pi/webiopi/examples for Python library usage examples You will have a line for each installed Python version which you can use to launch WebIOPi. It’s time to start WebIOPi, for example with Python 2.X: $ sudo python -m webiopi WebIOPi/Python2/0.5.3 Started http://[IP]:8000/webiopi/ at First Use Open a browser from any device on your network and point it to the given URL: http://[IP]:8000/webiopi/ or you can use localhost if you are connected to your Pi with a keyboard and a display plugged into it. You will then be asked to log in, default user is webiopi and password is raspberry. You should see the default header app: You should see some compile and install messages and finally get a success output with usage instructions: WebIOPi successfully installed * To start WebIOPi with python: sudo python -m webiopi * To start WebIOPi with python3: sudo python3 -m webiopi 8 * To sudo * To sudo start WebIOPi at boot: update-rc.d webiopi defaults start WebIOPi service: /etc/init.d/webiopi start Screenshot from WebIOPi application. With the default header app, you can toggle GPIO functions between input and output, and toggle pin states. Just click on the IN/OUT buttons beside each pin to change their state from input to output. All GPIO can be directly used with the REST API. For instance, to set GPIO 23 as an output, just make an HTTP POST request on /GPIO/23/function/out then to output a logical 1 , make POST on /GPIO/23/value/1 . To retrieve states, make HTTP GET on /GPIO/23/function and /GPIO/23/value. The included Javascript library allows GPIO changes without caring about REST calls. Motor rotation is controlled by IN* and EN*: We can connect +V to Pi’s +5V and IN*/EN* to GPIO pins. +Vmotor will be connected to the battery or a dedicated regulator. You will need at least one 5V regulator to power the Pi with a battery through the micro USB plug. You can use a 7805 with two capacitors to do that: Robot Cam The following parts of this article are about a robotised webcam you can control from any web browser. You will need: With an input voltage higher than 7V, you will have a 5V regulated output voltage for the Pi. - Chassis - Raspberry Pi with WebIOPi - Operational USB Webcam - Operational USB WiFi adapter - L293 H-Bridge - 2 sets of motors, reducers and wheels - Battery and power regulation - Electronic prototyping parts From an electronic point of view, L293 contains an electronic circuit similar to the Skutter’s H-Bridge from the December MagPi issue. L293 adds an enable input that can be used with a PWM signal to limit the speed. It also has two power inputs, one for the logic (+V=5V), and one that suits the motors (+Vmotor<36V). To control the H-Bridge with WebIOPi and create an interface, we have to write a few Python lines for the server side, and some Javascript for the client. Writing the Python Script Start by creating a new Python file with your favourite editor. You will have to import webiopi then instantiate a server. One parameter is required: the port the server has to bind to. You can also change the Continued over page... 9 default login and password. The server will start and run in its own thread until the end of the script. We add a loop to keep the server running. We use the webiopi.runLoop() function for that. It will sleep the main thread until CTRL-C is pressed. We can also pass a function to the loop. import webiopi # Instantiate WebIOPi server # It starts immediately server = webiopi.Server( port=8000, login="cambot", password="cambot") # Run the default loop webiopi.runLoop() # Cleanly stop the server server.stop() The previous script simply starts the WebIOPi server. We can use the default web app to interact with GPIOs, the REST API or the Javascript library. We could directly control HBridge lines from Javascript, but we’ll add go_forward() and stop() REST macros to decrease latency. To continue, we need a GPIO library. We can use RPi.GPIO or the integrated GPIO library, which is a fork from RPi.GPIO. An integrated library removes many sanity checks to give full access from the server and to give more functionality. Right after the import section: # Integrated GPIO lib GPIO = webiopi.GPIO We add variables to ease H-Bridge control: # Left motor L1=9 # L293 L2=10 # L293 LS=11 # L293 # Right R1=23 # R2=24 # RS=25 # 10 GPIOs IN1 on GPIO 9 IN2 on GPIO 10 EN1 on GPIO 11 motor GPIOs L293 IN3 on GPIO 23 L293 IN4 on GPIO 24 L293 EN2 on GPIO 25 Before the server call, we write functions for both left and right motors then wrap them into go_forward() and stop() macros: # Left motor functions def left_stop(): GPIO.output(L1, GPIO.LOW) GPIO.output(L2, GPIO.LOW) def left_forward(): GPIO.output(L1, GPIO.HIGH) GPIO.output(L2, GPIO.LOW) # Right motor functions def right_stop(): GPIO.output(R1, GPIO.LOW) GPIO.output(R2, GPIO.LOW) def right_forward(): GPIO.output(R1, GPIO.HIGH) GPIO.output(R2, GPIO.LOW) # Set the motors speed def set_speed(speed): GPIO.pulseRatio(LS, speed) GPIO.pulseRatio(RS, speed) # Movement functions def go_forward(): left_forward() right_forward() def stop(): left_stop() right_stop() Then, and always before the server call, we initialise GPIO: # Setup GPIOs GPIO.setFunction(LS, GPIO.PWM) GPIO.setFunction(L1, GPIO.OUT) GPIO.setFunction(L2, GPIO.OUT) GPIO.setFunction(RS, GPIO.PWM) GPIO.setFunction(R1, GPIO.OUT) GPIO.setFunction(R2, GPIO.OUT) set_speed(0.5) stop() Finally, we have to register macros on the server to add it to the REST API. This will allow them to be called from the web app: server.addMacro(go_forward) server.addMacro(stop) Article by Eric PTAK This will be continued next time, but if you want to get a head start visit 11 BACKUP YOUR RASPBERRY PI Learn how to backup your Raspberry Pi SD card with optional compression plus splitting files to burn to DVD. In this two part article I shall explain how you can easily backup your Raspberry Pi SD card to a different computer, USB device, CDs or DVDs and also how to restore from these. All of this is simple on a Linux computer. Windows users requiring a backup to CDs or DVDs would be better off using a live Linux distribution such as the excellent Linux Mint. A Few Caveats Why Backup? You could reformat the USB device with NTFS, but if you never intend to use the device with any Windows computers, then you can easily format it with a Linux file system such as EXT4. You should always have one or more backup copies of your SD card. Get into the habit of making a daily backup before it is too late... especially if you frequently edit files or add software. SD cards are much more robust than hard drives, but they can still easily fail with total loss of data. I have had two SD cards become corrupted and completely lock up my Pi. The only recourse was to pull the power lead out and unfortunately in both cases I was left with an SD card where the "/" partition could not be read, so I had no operating system. Luckily I had a backup. It was a backup of a 4GB SD card but I had corrupted my new 8GB one. Fortunately you can restore a 4GB image to a bigger SD card and I will explain how to perform this task later in this article. Windows Users These instructions are intended for Linux only. For Windows users you can make a complete backup image of your SD card using the same Win32DiskImager program that you used to create your Raspbian the SD card. 12 Because Windows is the most common operating system, many vendors supply USB thumb drives and USB hard drives formatted with the FAT32 file system. Note that FAT32 cannot cope with any files that are bigger than 4GB. If your SD card image is larger than this, you will need to split the file into 4GB chunks. Equally, whether on Linux or Windows, if you are using the 32-bit version then once again a file size limitation will be encountered. The maximum size for a single file on a 32-bit operating system is 4GB. If your backup file is larger than this you will have to split it into 4GB chunks. Compressing and splitting backups is covered in detail below. Getting root Access The remainder of this article assumes that you have root access to the computer where you will be saving the backup. To get root access you can either prefix every command with sudo, or you can run the following command to start a root shell: $ sudo sh Find Your SD Card Plug your SD card into your Linux computer, not your Raspberry Pi. If your computer has a built-in card slot, you most likely have the card on /dev/mmcblk0. If you have it in a USB adaptor of some kind, it’s most likely going to be /dev/sdx where ‘x’ is the digit representing the highest device of this /dev/mmcblk0p1 /dev/mmcblk0p2 Start 8192 122880 End 122879 15523839 Blocks Id System 57344 c W95 FAT32 (LBA) 7700480 83 Linux To find out, run the fdisk -l command. The tail end of the output from this command can be seen at the top of this page. I can recognise the card by the size 8GB (or 7948MB as listed above), which is the only device or partition that I have of this size. Also shown are the two partitions on the device and seeing that one of them is small and W95 FAT32 (LBA) and the other is Linux, then I am certain that this is my SD card. In the above, the suffixes 'p1 ' and 'p2' in the Device Boot column indicate the partitions on the card. The card itself is the device named /dev/mmcblk0 and we need the device name, not the partition names. Make a Backup To make a backup switch to your backup directory, where you intend to keep the copy of your SD card, and run the following command, typed all on one line: $ dd if=/dev/mmcblk0 of=Rpi_8gb_backup. img bs=2M The output from the above command will be similar to the following: 3790+0 records in 3790+0 records out 7948206080 bytes (7.9 GB) copied, 1369.42 s, 5.8 MB/s That’s it. The whole 8GB SD card has been (slowly) copied to my computer. How do I know it worked? Type ls -l -h and you will see the new file name and size. Compressing the Backup Once created and checked, the backup image can be compressed to save space. This is a simple matter of using the gzip command : $ gzip 9 Rpi_8gb_backup.img This command attempts to perform maximum compression. This uses a lot of CPU but will result in the smallest output file. You may adjust the trade off between CPU and file sizes by changing the '-9' option to a smaller number. The quickest compression will be obtained with the '-1 ' option at the expense of the file size being larger. The use of gzip in this manner requires that you have enough space to save both the full size image file and the compressed image for as long as the gzip command is running. Once complete, the full sized file will be deleted leaving only a compressed file, named as per the original file name but with an additional '.gz' extension. In this example my compressed image file would be Rpi_8gb_backup.img.gz. Compression on the Fly If you only wish to make a backup or if you don't have enough available space to hold both the full sized image and the compressed image, then you may wish to consider compressing on the fly. Unless otherwise instructed, the dd command defaults its output file to be the console. You can use this feature to pipe the output through gzip and create a compressed image file in a single step. You will not need as much disc space because the full sized image file is never created, only the smaller compressed one. The command to do this is: $ dd if=/dev/mmcblk0 bs=2M | \ gzip 9 > Rpi_8gb_backup.img.gz The trailing "\" must be at the very end of the line, just before you press the Return key. It indicates to the shell that the command is not yet complete and more text will follow. Continued over page... 13 Because you are redirecting the output from gzip to a file, it is your responsibility to supply the file name and the '.gz' extension. You must also include the hyphen after the '-9' in the gzip command. This tells gzip to write its output to the console which, in this case, has been redirected to a file named Rpi_8gb_backup.img.gz. Splitting the Backup As mentioned previously, some file systems cannot cope with files larger than a specific size. FAT32, for example, has a 4GB limit and 32-bit operating systems can also only cope with 4GB for each individual file. If you intend to create or copy your backups with these types of systems or devices, then you must split the image files into appropriately sized chunks. Once again, a pipe comes to the rescue. The split utility in Linux is designed to allow a large file to be split into a number of smaller files, each of a given size. If we wish to create a compressed backup of an SD card, ready for burning onto one or more CDs or DVDs, then the following command will backup the card, compress the backup on the fly and then split the compressed image into a number of 2GB files ready to be burned onto DVDs. $ dd if=/dev/mmcblk0 bs=2M | \ gzip 9 | \ split bytes=2G \ Rpi_8gb_backup.img.gz.part_ Remember to press the Return key immediately after typing the "\" on each line. The split command, similar to gzip, requires a hyphen as the input file name. In the above there is no input file name as we are reading from a pipe, so the hyphen tells split to read the input from the console which, in this case, is the piped output from gzip. The final parameter to split defines the root part of the output file names. Each one will be called 'Rpi_8gb_backup.img.gz.part_xx'. The 'xx' part will be 'aa', 'ab', 'ac' and so on. It is possible that compressing the SD image will make the file small enough to fit within the file system limits of your chosen output device. 14 In this case, you may rename the single part file back to the original name. Restoring Backup Files Having the backup image compressed or split into a number of sections means that restoring the backup takes a little more work. Once again the process is relatively simple and involves piping a number of commands together to form a chain of utilities, with the final output going into the dd command. This then writes the data back to the SD card. The simplest case is when the image file has been compressed but not split. To restore this directly to the SD card, without having to create a full sized uncompressed image, type the following command: $ gunzip Rpi_8gb_backup.img.gz c | \ dd of=/dev/mmcblk0 bs=2M The '-c' in the gunzip command above is required to tell gunzip to write the decompresssed data to the console. The dd command reads its input, unless otherwise specified, from the console so the output from gunzip passes straight into dd and out to the SD card. If the backup is split into chunks, then we need to concatenate these together in the correct order and pipe that to the above command pipeline, as follows: $ cat Rpi_8gb_backup.img.gz.part_* | \ gunzip c | \ dd of=/dev/mmcblk0 bs=2M The list of files to be joined back together again must be specified in the correct order. However, using a wild card to the cat command causes them to be read in alphbetical order, which is exactly how we want them to be. Coming in Part 2 In part 2 of this article, I will show you how you can check that the backup was successful and how you can use the backup file as a pseudo hard disc. I will also show how you can modify the files within it - all without needing to restore the image onto an SD card. Article by Norman Dunbar FEBRUARY COMPETITION Once again The MagPi and PC Supplies Limited are proud to announce yet another chance to win some fantastic Raspberry Pi goodies! This month there are three prizes! The first prize winner will receive one of the new pre-assembled Gertboards! The second and third prize winners will receive a raspberry coloured PCSL case. For a chance to take part in this month's competition visit: Closing date is 20th February 201 3. Winners will be notified in next month's magazine and by email. Good luck! To see the large range of PCSL brand Raspberry Pi accessories visit December's Winners! There were over 540 entries! It was a difficult choice with so many great submissions. The winner of the 51 2Mb Raspberry Pi Model B with 1 A 5V power supply and PCSL Raspberry Pi case is Fiona Parker (Ilkley, UK) . The 2nd and 3rd prize winners of the PCSL Raspberry Pi case are Damien Plunkett (Flagstaff, USA) , and Nico van der Dussen (Pretoria, South Africa) . Congratulations. We will be emailing you soon with details of how to claim your prizes! 15 The Quick2Wire Pi Interface Board is a new interface board for the Raspberry Pi which will go on sale in February 2013. I managed to get hold of one of the beta kits for review and also had a chat with Romilly Cocking, Director of Quick2Wire. So, What’s In The Kit? 1 x printed circuit board (PCB) 1 x multi-coloured ribbon cable 4 x jumpers 1 1 x assorted headers 2 x transistors (FETs) 1 x tantalum capacitor 1 x light emitting diode (LED) 1 x push-button 1 x voltage regulator (3V3) 1 x diode array 1 6 x, printed on the board, shows you what goes where. The only potential 16 pitfalls are the orientation of the diode array and tantalum capacitor. They both have to be the right way round. The instructions explain how to make that happen. It took me about 20 minutes to do all of the resistors (I was fussy about aligning them). Overall, it took 53 minutes from start to finish and it was an enjoyable experience. This is what it looks like finished. Software Installation The next step was to fully update Raspbian and install some software to test the interface. The instructions were excellent and software installation worked flawlessly. The longest part was doing the Raspbian package updates (using sudo apt-get update and sudo apt-get upgrade). “The thing which stands out here, compared to other Python based GPIO drivers, is that as long as you are in the 'gpio' group there’s no need to be a sudo user to run the commands. This is great!” Testing Procedures The following describes the test procedures from the provided manual. Test commands are given for switching the LED on/off and reading the button’s status (pressed/unpressed). These worked perfectly (although the LED on mine is not very bright). Then you are invited to test your 5V FTDI interface (a type of cable you can use to log in directly with no network connection) if you have one. Mine worked perfectly. I was able to get a command line terminal login through the serial port. And that completed the structured test procedures. Everything passed. Testing The Quick2Wire Python Libraries Digging a bit deeper than the test manual instructions, I downloaded the Python Quick2Wire libraries from GitHub and went through two test programs I found there – flashing the led and using the button to control it. These weren’t yet documented. (Quick2Wire numbering scheme) as outputs. That also worked perfectly. Here’s a link to a video guided tour of the Quick2Wire board with test programs in action. We Interrupt This Article... Another area where the Quick2Wire Python libraries stand out (which I haven’t explored yet) is epoll support. This allows you to control things using interrupts instead of continual polling. This means that you can have your program respond to a change, instead of using up loads of processor power continually checking the GPIO ports’ states. This is an important development. Conclusion The Quick2Wire Pi Interface is a very nice board with a lot of promise and what looks like being a very good Python Applications Programming Interface (API) behind it. It’s going to be priced cheaper than you’d be able to. Their web site is at Quick2Wire.com. Article by Alex Eames Alex Eames runs the RasPi.tv blog, where he’s often up to something educational, fun, innovative or just plain silly with a Raspberry Pi. He also wrote the Python port of the Gertboard software. DID YOU KNOW? The Quick2Wire team consists of nine people, spanning a 7 hour time zone. They live in Chicago, London, Bristol and the Pyrenees. They have never all been in one location! 17 Introducing RISC OS on the Raspberry Pi. History The reduced instruction set computing (RISC) operating system (OS) for Acorn RISC machine (ARM) based computers and emulators has been around since 1 987 (originally under the guise of Arthur), almost for as long as the ARM chip itself. The first ARM 2 based computer was eponymously named the Acorn Archimedes after the famous ancient Greek inventor. At the time it represented a revolutionary leap forward on the then ubiquitous 6502 based BBC micro being sold into the home and educational markets. Following on from this the more powerful RISC PC was released in 1 994 based on the StrongARM chip running at 300Mhz, though production ceased around a decade ago. In 1 998 Acorn broke up and Castle Technology bought the rights to RISC OS from Pace Technology and released the Ioynix PC. From 2006 RISC OS Ltd (ROOL) has taken over RISC OS development via a shared source initiative (SSI) and a few variants now exist that run on the RISC PC emulator for Windows and Unix (RPCEmu), the beagle board, the panda board, ARMini, and of particular interest here, the Raspberry Pi. In many ways the Raspberry Pi and RISC OS are ideally suited as partners. Key of course is that the Pi contains at its heart an ARM 1 1 chip. Also however, thanks to its legacy, RISC OS is undemanding on system resources and works efficiently even when CPU power and memory are in short supply. This can be largely attributed to the fact that the majority of the operating system is coded directly in ARM assembler by clever programmers. In addition, 18 although all essential functionality is provided, system extensions and libraries are loadable as modules on an as-required basis. Booting RISC OS boots straight into a windows, icons, menus, pointer (WIMP) desktop with a nice Raspberry Pi background from which applications can be launched. A strip at the bottom of the screen known as the icon bar holds devices at the left and running applications at the right. Clicking on a device icon (e.g. hard drive, SD card, RAM disc) opens a filer window which can be used for browsing and launching different types of file such as BASIC programs, modules and applications. Similarly, left clicking on an application typically opens a window for the user to interact with it, or clicking with the middle button brings up a menu from which configuration options can be set and actions performed. Task windows open up a command line interface (CLI) from which many common tasks can be executed and these can be grouped and packaged into Obey files for convenience. WIMP based applications co-operate through the software interrupt (SWI) based WIMP application programming interface (API) which is documented in the Programmer Reference Manuals (PRMs). These are available from the foundation.riscos.com web site and run to five large volumes effectively constituting the equivalent of the bible for RISCOS application developers. Files can be pinned to the desktop for easy access and wallpapers and screen savers can be easily configured, so anyone familiar with Windows or Unix will quickly feel at home. RISC OS comes with an internet browser called !NetSurf, though at present network access must be provided via an ethernet cable. ARM BBC Basic can be started from a Task Window (press CTRL+F1 2) from the * command prompt by typing Basic. Installation Bundled applications include a text editor !Edit, a drawing program !Draw and a painting application !Paint, however a plethora of third party applications are available including games, music, DTP and art packages. Many of the most used of these are freely installable using the supplied package manager application (!PackMan) which is styled on the lines of Linux Update Manager. RISC OS Open have joined forces with some of the leading software developers in the RISC OS community and sell at a large discount the Nut Pi, a package of flagship RISC OS software specifically for RISC OS Pi. A task window allows memory to be allocated between various parts of the system and user applications, and a number of different screen modes of different pixel counts and number of colours (up to 24 bit colour depth) are supported. To install RISC OS on the Raspberry Pi visit the page and follow the download instructions. The download zip contains a disc image file that can be written to an SD card using the freely available Windows based Win32DiskImage application or the Unix dd tool in the same way as the Raspbian wheezy distribution. My particular interest in RISC OS on the Raspberry Pi is as a host for the Charm set of development tools and demos targetted at the educational and enthusiast sector, for which I am the author. A GPL licensed release is bundled with the distro, however the latest release is available from which is optionally recompilable to utilise the VFP coprocessor for floating point operations. See the article on Charm in the next edition of The MagPi for more information. In summary RISC OS on the Raspberry Pi is easy to install, quick to boot at under 20 seconds, responsive, intuitive and easy to pick up... so why not give it a whirl? Article by Pete Nowosad 19 I n stal l i n g & c o n f i g u ri n g Learn how to install Arch Linux, a barebones rolling LINUX distribution. Many people think of Linux as an operating system, but in fact it's actually just a kernel, the base. To make it into a proper operating system, you need to add a little bit more. As Linux is free and open-source, many people have done this, and each one has done it slightly differently, creating many different 'distributions', or 'distros' for short. The main Raspberry Pi distribution offered at is Raspbian, a version of Debian. However, if you scroll down a bit more, you'll see some others, including one called Arch Linux. So what's the difference between Raspbian and Arch? The main difference is the package managers and the way updates are managed. Debian, and therefore Raspbian, are very strict on package updates. They have to go through testing, so the maintainers can be sure they are stable and work before releasing them to the regular users. This is good because it means software is almost guaranteed to work, but not so good as it means updates take a little while to reach users. Arch Linux is different in this respect, releasing updates as soon as possible. For this reason it is called a 'rolling release' distro, since you only have to install it once and then whenever a package gets an update you can upgrade it there and then. This allows users to get updates more quickly, although it means software is more unstable. In case of trouble, you can simply image the SD card again. The other major difference between the two is that Raspbian comes completely ready, while Arch comes with the bare essentials, allowing users to pick what they want to install. This makes setup harder for newcomers, but this guide should help ease the process. First boot The first boot might take a little while longer, just wait until it's done. Once you get to the login screen, use the user name root and the password root. You will then have a terminal open. You will notice if you try startx, it will not work. That gives you an idea of how little Arch comes with. Don't worry though, we'll get to installing a graphical user interface. Before you begin doing anything, you may want to change the keyboard layout to your country. This is done by typing: loadkeys country code I'm in England, so my country code would be uk. A full list can be found here:. This is only temporary, we will set it permanently later on. Editing language settings The default hostname for the system is alarmpi. I personally don't like this, and would rather it was something else. If you feel the same, you can change the hostname of the system by typing: echo hostname > /etc/hostname where hostname is the new hostname you want. I've used raspberrypi. It will not be effective until after a reboot. Next we will change the language and the timezone. To look at the available timezone areas, type: ls /usr/share/zoneinfo So, if continuous updates and your Pi the way you want it sound good, why not have a go at installing Arch? Choose which area suits you best (for me it would be Europe) and then type: have a low download limit, it may be best to stick with Raspbian. to see the specific timezones. Pick one, and type: You will need an internet connection for your Pi though , so if your internet is particularly slow or you First download the latest image from 20 Then flash the image to the SD card, using Win32DiskImager or dd. More information can be found in Issue 2 of the MagPi. With that done, we can get on with the setup. ls /usr/share/zoneinfo/area ln -s /usr/share/zoneinfo/area/timezone /etc/localtime all on one line. My choice was London, so area would be Europe and timezone London. If you get an error saying “File existsâ€?, type: rm /etc/localtime Then type the previous command again. Now to edit the locale. This is used to determine the country you live in so things like dates can be displayed correctly. To do this, we need to edit some files. Type the following: nano /etc/locale.gen Find your country and remove the '#' symbol in front of it. For example, mine would be en_GB. When you are done, use Ctrl+O to save and Ctrl+X to exit. Then type: locale-gen Now, we need to make another file, so type: nano /etc/locale.conf Edit it to the same language and country code as before. Finally we need to set the console's keymap so it fits the country's keyboard all the time. For this type, nano /etc/vconsole.conf Change KEYMAP to the country code you used with the loadkeys command previously. All the language settings are set now, so if you want you can reboot to see the changes, by typing: reboot Using pacman Arch Linux's package manager is called pacman, and we use that to install packages. To install a package, type: pacman -S <package name> Try it with the package sudo, because we'll need it later. As Arch is a 'rolling release', quite a lot of updates have come out since the image was released, so to upgrade all your packages type: pacman -Syu These should both work fine straight away with the most recent image, though you need an internet connection. Because of how quickly updates come, it's recommended you run a full upgrade regularly, once a week or maybe even once a day to keep on top of it all. Should you want to remove a package, you can do that with: pacman -R <package name> To see a list of installed packages, type: pacman -Q Adding a new user It is vitally important that we make a new user for our system, as logging in as root has security issues. To add a new user, enter: adduser and follow the instructions to add a new user. Next, we need to add them to the sudoers list so they can still install programs, but in a more secure way. If you haven't already, install sudo. To add the user to the sudoers file, type: export EDITOR = nano && visudo This will allow you to edit the sudoers file with the familiar nano editor. Find the line that says root ALL=(ALL) ALL and copy it onto a different line, replacing root with the user name of the new user. If you would like to have it so sudo does not ask for your password, like on the default Raspbian install, put NOPASSWD: in front of the final ALL. Finally, we can change the password for the root account, using the command: passwd root Then be sure to pick a secure password. With that, we're done with the basic setup! Type: logout to log out of root and login as the new user you set up. Setting up a graphical user interface This final part is optional, but if you'd prefer more than just a command line you should do it. To install a graphical interface, simply type all on one line: pacman -S gamin dbus xorg-server xorgxinit xorg-server-utils mesa xf86-videofbdev xf86-video-vesa xfce4 Once the installation is finished, type: cp /etc/skel/.xinitrc ~/.xinitrc Then type: echo "exec startxfce4" >> ~/.xinitrc and finally: startx Your graphical environment should then start. Congrats, you now have a working Arch Linux system on your Raspberry Pi! You may want to edit the config.txt, but this process is the same as Raspbian. Have fun! Article by Alex Kerr 21 Introducing Vala Writing a simple web controller for the LedBorg RPi add-on The Vala language is, as programming languages go, very much a new kid on the block and is still under many programmers' radar. We will use Vala to communicate with LedBorg from over the internet. [Ed: also in this issue we will see how we can perform a similar activity with Python in The Python Pit.] Vala is a C# style language built on the GLib object system providing easy access to the base GNOME libraries and subsystems. The compiler, valac, turns the Vala code into C, and in turn triggers the system's C compiler to produce native code. This means that, unlike C#, there is no .NET framework or virtual machine. Effectively, it is a higher-level way of writing C apps. The project's home page is . Although Vala is based around the GLib and GNOME libraries, it's not only for developing GNOME desktop apps and is also good for writing console-based apps and services. LedBorg protocol. Using LibSoup in Vala, it is easy to set up a light-weight HTTP server that can respond to requests. Testing would be straightforward from any web browser on the network. The server takes GET requests in the following URL format: /?action=SetColour&red=x&green=y&blue=z where x, y, and z are integers between 0 and 2. For the ease of getting started, the program also responds to all requests with a very minimal HTML form containing drop-down selectors for the three colours, and a submit button. The Code To try out this code, you will need to have the following packages installed plus dependencies. Assuming Raspbian/Debian is the running OS: $ sudo aptget install valac \ libsoup2.4dev LedBorg is pre-assembled, it sits on the GPIO pins and has an ultra-bright RGB LED. Each channel - red, green and blue, has three intensities: off, half-brightness, and fullbrightness. After entering the code, below, and saving as LedBorgSimpleServer.vala it can then be compiled with the following command: The LedBorg driver creates the device file /dev/ledborg. This is a great example of the UNIX philosophy at work with devices exposed as files instead of mysterious APIs. We can read and write to /dev/ledborg just like we would with any other file, for example: You may want to enter this command line in a text file and save it as compile.sh - then make it executable by running: echo "202" > /dev/ledborg The three digits control R, G and B, with each set to either 0, 1 or 2, corresponding to the three intensities, eg: '202' makes the LedBorg light up bright purple (red+blue). Network Control I decided that an easy way to control LedBorg remotely was to use the well-known HTTP 22 $ valac pkg libsoup2.4 pkg \ gio2.0 pkg posix thread v \ LedBorgSimpleServer.vala $ chmod +x compile.sh You can re-compile with: $ ./compile.sh The -v flag generates verbose output, giving an idea of what it is doing. If you want to see the generated C code, add the -C flag to get LedBorgSimpleServer.c Run the program with ./LedBorgSimpleServer and navigate to your Pi's IP address in a browser, adding :9999 to specify the port number, eg: 92.1 68.1 .69:9999 . string red = query["red"]; string green = query["green"]; string blue = query["blue"]; /* build our RGB colour string Each 0, 1 or 2: off, half or full brightnesss */ string colour = red + green + blue; The code is missing some robustness: we are not checking for the presence of the red, green and blue GET parameters, nor are we validating their values. Nor is feedback in the HTML sent back to the client, to say whether the operation was successful, what colour has been set, or if the device wasn't found. These additions can be an exercise for the reader! // LedBorgSimpleServer.vala // the namespaces we'll be using using GLib; using Soup; // our main class public class LedBorgSimpleServer : GLib.Object { // define the port number to listen on static const int LISTEN_PORT = 9999; // define the device file to write to static const string DEVICE = "/dev/ledborg"; // the method executed when run public static int main (string[] args) // do colour change do_colour_change(colour); } } // build the html for the client string Red:<select name="red"> <option value="0">Off</option> <option value="1">1/2</option> <option value="2">Full</option> </select> Green:<select name="green"> <option value="0">Off</option> <option value="1">1/2</option> <option value="2">Full</option> </select> Blue:<select name="blue"> <option value="0">Off</option> <option value="1">1/2</option> <option value="2">Full</option> </select> <input type="submit" name="action" value="SetColour" /> </form> </body> </html> """; { // send the html back to the client msg.set_status_full( Soup.KnownStatusCode.OK, "OK"); msg.set_response("text/html", Soup.MemoryUse.COPY, html.data); // set up http server var server = new Soup.Server( Soup.SERVER_PORT, LISTEN_PORT); // handle requests from the client server.add_handler("/", default_handler); } // do the colour change public static void do_colour_change(string colour) { /* Here we use posix file handling to write to the file instead of vala's gio file handling, as we don't want the safety of gio getting in the way when operating in /dev */ // open the file for writing Posix.FILE f = Posix.FILE.open( DEVICE, "w"); // get the running http server server.run(); return 0; } // default http handler public static void default_handler(Soup.Server server, Soup.Message msg, string path, GLib.HashTable<string, string>? query, Soup.ClientContext client) { // action a request if(query != null) { // check parameter to be sure if(query["action"] == "SetColour") { // get RGB from url params // write the colour string to file f.puts(colour); } } Article by Ross Taylor 23 Want to keep up to date with all things Raspberry Pi in your area? Then this new section of the MagPi is for you! We aim to list Raspberry Jam events in your area, providing you with a RPi calendar for the month ahead. Are you in charge of running a Raspberry Pi event? Want to publicise it? Email us at: editor@themagpi.com Preston Raspberry Jam When: Saturday 9th February 201 3 @ 1 0:00am Where: Accrington Academy, Queens Road West, BB5 4FF, UK The meeting will run from 1 0:00am until 5:00pm and is hosted by TechWizZ Tickets and further information are available at New York City Raspberry Jam When: Thursday February 21 st 201 3 @ 7:00pm Where: Two Sigma, 1 6th Floor, 1 00 Avenue of the Americas, New York, NY, USA Bring your projects, ideas and see what others are doing. There will even be a demo of Adafruit's WebIDE. Further information CAS York Hub Meeting When: Wednesday 1 3th February 201 3 @ 4:30pm Where: National STEM Centre, University of York, YO1 0 5DD, UK The meeting will run from 4:30pm until 7:30pm and is entitled "Engaging Pupils with Raspberry Pi". Tickets and further information are available at 24 25 A place of basic low-level programming Tutorial 5 - Structs, header files and data analysis. Welcome back to the C cave. This tutorial includes an example program built from several C source files and compiled with a Makefile. Before continuing, how did you get on with the previous challenge problem? Challenge solution #include #include #include #include <stdio.h> <stdlib.h> <unistd.h> <sys/sysinfo.h> int main() { int i = 0; float ramUsed; char gpCmdOne[250], gpCmdTwo[250], systemCmd[1000]; FILE *cmdPtr = 0, *outPtr = 0; char c, fileName[100], *strPtr = 0; struct sysinfo info; /* A sysinfo struct to hold the status. */ cmdPtr = popen("hostname","r"); /* Run hostname command. */ if(!cmdPtr) return 1; strPtr = &fileName[0]; /* Get a pointer to the string. */ while((c=fgetc(cmdPtr)) != EOF) { /* Reach each character. */ *strPtr = c; /* Set the character value. */ strPtr++; /* Move to the next array position. */ } pclose(cmdPtr); /* Close the hostname file. */ strPtr--; /* Move backwards one array element to overwrite the new line. */ sprintf(strPtr,"-data.txt"); /* Append the suffix. */ printf("%s\n",fileName); outPtr = fopen(fileName,"w"); /* Open the output file. */ if(!outPtr) return 1; /* If the output file cannot be opened return error */ for(i=0;i<60;i++) { sysinfo(&info); /* Get the system information */ ramUsed = info.totalram - info.freeram; ramUsed /= 10240.0; fprintf(outPtr,"%d %f %d\n", i, ramUsed, info.loads[0]); /* Write ram used. */ usleep(500000); /* Sleep for 1/2 a second. */ } fclose(outPtr); /* Close the output file. */ /* Now plot the data */ sprintf(gpCmdOne, "plot \'%s\' using 1:2 title \'%s\'", fileName, "Ram used"); sprintf(gpCmdTwo, ", \'%s\' using 1:3 title \'%s\'\n", fileName, "Load"); /* Create the full command, including the pipe to gnuplot */ sprintf(systemCmd,"echo \"%s%s\" | gnuplot -persist",gpCmdOne,gpCmdTwo); } 26 system(systemCmd); /* Execute the system command. */ return 0; /* Return success to the system. */ The solution includes functions and techniques discussed in previous tutorials. There are more simple ways to form the file name from the host name. C provides a string.h header file which includes the declaration of several useful functions for string operations. The full list of functions can be viewed by typing man string. Strings can be concatenated by using strcat, char fileName[100]="myHost", suffix[10]="-data.txt" strcat(fileName,suffix); /* Append suffix to fileName, result in fileName. */ The host name can be read using fgets rather than fgetc, fgets(fileName,100,cmdPtr); /* Read until EOF or newline or 99 characters. */ where 1 00 is the size of the fileName character array. Lastly, the host name can also be read using the gethostname function from unistd.h, #include <string.h> #include <stdio.h> #include <unistd.h> int main() { char fileName[100], suffix[10]="-data.txt"; gethostname(fileName,100); strcat(fileName, suffix); /* Append the suffix to the fileName */ printf("%s\n",fileName); return 0; } Structs Structs were introduced quickly in the last article in issue 6, to allow the use of system information. Structs occupy a continuous block of memory, similar to FORTRAN common blocks. The syntax of their usage is very similar to C++ classes with public data members. A struct is defined with a name and compound definition of variables. These variables can also be structs. Starting with a simple struct, struct dataPoint { unsigned int timeSeconds; float value; }; The int timeSeconds is defined first in memory and then the float value. The size of a struct in memory is the sum of the sizes of the variables. The definition of a struct should be made before its use and is typically found in a header file, but can also be written in the same file before its usage. To test this simple struct, int main() { /* Declare a variable of "struct dataPoint" type. */ struct dataPoint s; /* Assign the struct s some starting values. */ s.timeSeconds = 60; s.value = 100.0; /* Print the size and memory locations */ printf("sizeof(s) = %ld\n", sizeof(s)); printf("&(s.timeSeconds) = %p\n", &(s.timeSeconds)); printf("&(s.value) = %p\n", &(s.value)); printf("sizeof(unsigned int) = %ld\n", sizeof(unsigned int)); printf("sizeof(float) = %ld\n", sizeof(float)); } return 0; where the program assigns values and prints the memory locations to demonstrate the memory structure. When structs are passed to functions, by default a local copy of the struct is made within the function. This means that when the function finishes, the value of the struct in the function which called it is unchanged. This is the same behaviour as if a basic variable had been passed to the function , Continued over page... 27 void printDataPoint(struct dataPoint dp) { printf("timeSeconds = %d, value = %f\n", dp.timeSeconds, dp.value); } To modify the values of a struct within a function and retain these values, pointers can be used , void clearDataPoint(struct dataPoint *dp) { dp->timeSeconds = 0; dp->value = 0.; } where the dp->timeSeconds syntax is equivalent to (*dp).timeSeconds. Other than the short hand " ->" syntax, the behaviour is exactly the same as that of simple variables discussed in Issue 5. Header files To illustrate the use of header files, the next example defines a histogram data structure and functions. Histograms can be very useful to monitor long term experiments, provide summary figures or be used for data storage themselves. Header files should be included to use functions within standard libraries or functions implemented within other C source files. These files contain the declaration of functions and data structures, but do not contain the implementation of the functions. The implementation is given within .c files, which are complied and built into static or dynamic libraries or directly linked to. Similar to standard header files, additional header files can be written to contain function definitions and data structures. #ifndef HISTOGRAM_H #define HISTOGRAM_H #define MAX_BINS 1000 /* Define a data structure to hold histogram data. */ struct histogram { unsigned int nBins; float xMin; float xMax; float binContents[MAX_BINS]; }; /* Define the struct as a new type, as a shorthand. */ typedef struct histogram Histogram; /* Fill a histogram. */ int fillHist(Histogram *, float value, float weight); /* save a histogram to a file. */ int saveHist(Histogram *, FILE *); #endif is a header file called histogram.h, which defines a struct and declares functions, but does not implement the functions it defines. In the case that the header file is included inside another header file, the header file might be included in a program more than once. To prevent this double declaration, a precompiler ifndef case is used. This case is true the first time the header file is included and false for additional include statements. The define statements define values which are replaced when the precompiler runs, which is just before the compilation of the code. Since dynamic memory allocation has not been discussed yet, a fixed size array is used for the binContents. Lastly typedef is used to simplify the Histogram variable declaration. The header file must be included in a program before the struct or functions are used . #include "histogram.h" #include <stdio.h> #include <stdlib.h> int main() { unsigned int i; Histogram h; /* Create a histogram struct */ initHist(&h,10,0.,10.); /* Initialise the histogram */ for(i=0;i<1000;i++) { /* Generate 1000 random points */ fillHist(&h,10*(float)rand()/RAND_MAX,1.0); /* Histogram each value. */ 28 } } saveHist(&h,"Hist.txt"); /* Save the histogram. */ return 0; This program histograms random numbers, which are generated between zero and one. The program cannot be run without implementing functions defined in the histogram.h header file. This implementation should be given in a .c file, #include "histogram.h" #include <stdio.h> int initHist(Histogram *hist, unsigned int nBins, float xMin, float xMax) { unsigned int i; if((hist->nBins+2) >= MAX_BINS) { printf("Error: too many bins requested.\n"); return 1; /* An error has occured. */ } hist->nBins = nBins; hist->xMin = xMin; hist->xMax = xMax; for(i=0;i<(hist->nBins+2);i++) hist->binContents[i] = 0.; } int fillHist(Histogram *hist, float value, float weight) { unsigned int ibin; float binSize; if(value < hist->xMin) ibin = 0; /* Underflow */ else if(value >= hist->xMax) ibin = hist->nBins+1; /* Overflow */ else { /* Find the appropriate bin. */ ibin = 1; binSize = (hist->xMax - hist->xMin)/hist->nBins; while(value >= (ibin*binSize + hist->xMin) && ibin < hist->nBins && ibin < MAX_BINS) { ibin++; } } if(ibin >= MAX_BINS) { /* Stay within the array */ printf("Error: ibin = %u is out of range\n",ibin); return 1; /* An error has occured. */ } hist->binContents[ibin] += weight; /* Add the weight */ return 0; } int saveHist(Histogram *hist, const char *fileName) { FILE *outputFile = 0; unsigned int ibin; float binSize; outputFile = fopen(fileName, "w"); /* Open the output file. */ if(!outputFile) { /* If the file is not open.*/ printf ("Error: could not open %s for writing\n",fileName); return 1; /* An error has occured. */ } binSize = (hist->xMax - hist->xMin)/hist->nBins; } /* Write the bin centres and their contents to file. */ ibin=0; while (ibin < (hist->nBins+2) && ibin < MAX_BINS) { fprintf(outputFile,"%lf %lf\n", binSize*((double)ibin+0.5) + hist->xMin, hist->binContents[ibin]); ibin++; } fclose(outputFile); /* Close output file. */ return 0; Continued over page... 29 Rather than type gcc several times, a Makefile can be used to build the source files and produce the executable, CC = gcc TARGET = hist OBJECTS = $(patsubst %.c,%.o, $(wildcard *.c)) $(TARGET): $(OBJECTS) @echo "** Linking Executable" $(CC) $(OBJECTS) -o $(TARGET) clean: @rm -f *.o *~ veryclean: clean @rm -f $(TARGET) %.o: %.c @echo "** Compiling C Source" $(CC) -c $< where more information on make is given in issue 7. Put the two .c files in the same directory as the histogram.h and Makefile. Then type make to build the executable. When the program runs it produces a text file which contains the sum of the weights within the underflow, bins and overflow. The format is chosen to allow the histogram to be plotted with gnuplot, gnuplot plot 'Hist.txt' with boxes Similar to the previous tutorial, this plotting command could be added to a program also. The plot can be saved as a png file by typing, set term png set output 'Hist.png' replot after the original plotting command. Challenge problem Use the previous article and this article to histogram the system load for 30 minutes. Add functions to the histogram.h and histogram.c files to calculate the mean and standard deviation of the distribution. The mean of a histogram can be calculated from, xMean = 0.; for (i=1;i<=nBins;i++) { /* Skip underflow and overflow */ xMean += binContents[i]/nBins; } The standard deviation of a histogram is given by, xStdDev = 0.; for(i=1;i<=nBins;i++) { /* Skip underflow and overflow */ xStdDev += pow(xMean - binContents[i],2)/(nBins-1); } if(xStdDev > 0.) xStdDev = sqrt(xStdDev); The solution to the problem will be given next time. 30 Article by W. H. Bell SAFE AND SIMPLE CONNECTION TO YOUR RASPBERRY PI HARDWARE • Interface board • I 2C Port Extender • Analogue board SOFTWARE I 2C, • For GPIO, SPI • Device Libraries • Examples SUPPORT • Articles • Tutorials • Forum Interface board: £1 3.86 | Port Extender: £9.80 | Combo: £22.66 (save £1 .00) Prices include UK VAT but exclude Postage and Packaging: from £2.70 Find out more at quick2wire.com 31 Scratch Controlling GPIO This month's article intends to make it as EASY AS PI to get up and running with GPIO in Scratch and allow your Raspberry Pi to control some lights and respond to switches and sensors. Whilst the Raspberry Pi is a great tool for the creation of software, using languages such as Scratch, Python, C etc., the best way to make it really come alive and to add even more enjoyment to this cheap, credit card sized computer is to start playing around with hardware hacking and physical computing. This involves using the Raspberry Pi to control things like LEDs and respond to switches and sensors. More often than not it also includes knowledge and learning of both hardware and software in a very interesting practical environment – not just coding for the sake of coding but, for example, creating robots and programming them to do cool things! This article is based on a post on the Cymplecy blog by Simon Walters (), a primary school ICT teaching assistant and general Scratch guru! Minimum Requirements – a Raspberry Pi with Raspbian installed and a working internet connection, a breadboard, some Light Emitting Diodes (LEDs), some resistors and some wire connectors. Total cost £5-£10 (not including the Pi). How to get a Rapsberry Pi to control the GPIO Pins from Scratch Your Raspberry Pi needs to be connected to the internet to install the software but internet is not needed to run Scratch GPIO. Copy the text below (starting at sudo and ending at gpio.sh) and paste that into an LX Terminal window and run it to download the installer: sudo wget -O /boot/install_scratch_gpio.sh And then type, and run: 32 sudo /boot/install_scratch_gpio.sh your Pi and run the second instruction) Connecting Components Up Extreme care should be taken when connecting hardware to the GPIO pins as it can damage your Pi – only do this if you're confident of your ability to follow these instructions correctly. At a minimum you should get a breadboard and use some female-male 0.1 leads (available from RS/CPC or your local Maplin). Check out some GPIO pin guides to make sure you know what pins do what. Figure 1 - LED Test Figure 2 - GPIO Test As in Figure 1 above, wire up Pin 1 (3.3V) to (at least) a 330ohm resistor – connect that resistor to the long lead (this is the positive lead) of an LED and then connect the other end of the LED to Pin 6 (Ground). This should cause the LED to light up. If it doesn’t work try reversing your LED, as you probably have the polarities reversed. Once working, you can now move the red (or whatever colour you have used) lead from Pin 1 to Pin 11, as in Figure 2 above. You should now run the special Scratch icon (Scratch GPIO) on your desktop. This is actually a completely normal version of Scratch, it just runs a little Python background program to allow Scratch to talk to the GPIO. If you have any Python programs accessing the GPIO running already this could cause your Pi to crash when opening Scratch GPIO. To avoid this, open an LX Terminal window and run: sudo killall python To test out the GPIO control in Scratch, click File>Open and select blink11 from /home/pi/Scratch. Once the program has loaded, click on the Green Flag and your LED should now blink on for 1 second and off for 2 seconds – see troubleshooting on the Cymplecy blog if this doesn’t happen. What more can I do with Scratch and the GPIO? Next time we will be looking at more exciting ways to use the GPIO with Scratch. Article by Simon Walters and Aaron Shaw 33 I n th i s m on th ' s Pyth on Pi t we sh ow you h ow to con trol you r Raspberry Pi wi th you r sm artph on e by u si n g th e web. py fram ework from www. webpy. org . Th i s arti cl e provi d es an al tern ati ve to u si n g Val a, al so covered i n th i s i ssu e. With web.py you can present web forms on client browsers (we are using Chrome on Android or Safari on iOS) and accept the form content via HTTP Post back to the web server (your Raspberry Pi). The values returned are fed directly to your Python script, which in turn can then execute any commands that you require. Web.py contains an in-built webserver meaning that it is not necessary to install Apache or Lighttpd. By way of an example we will use your smartphone's browser connected via WiFi to your local area network and in turn your Pi to perform basic remote control of LedBorg (). LedBorg is a preassembled LED array capable of outputting 27 colours. However for this tutorial we will just demonstrate the principle with red, green and "black" (all LEDs off). The technique described below works equally well for any other Pi remote control project. Hence replace the LedBorg specific code to meet your own requirements. We can install web.py from the command line via PIP (see The Python Pit in issue 8 for instructions on installing PIP): sudo pip install web.py Next create a directory (pyserver in this example) on your Pi to act as the root of the web server that web.py will start. Inside create two other directories and and save the following files within: pyserver/ledborg.py pyserver/templates/index.html pyserver/static/styles.css We will concentrate on the first two files: ledborg.py contains our python code and index.html the template web page that will be called when the program executes. The stylesheet, styles.css is optional and changes the usual drab grey buttons found on web forms to be coloured and larger, as seen in the screenshot. The styles were generated at. Run the program on your Pi and navigate to the Pi's IP address on your smartphone, appending the port number 8080. This will present you with the web form, enabling commands to be sent to your Python script when you tap the buttons. In the example screenshot the smartphone connects to the Pi via 92.1 68.1 .69:8080 - you can determine your IP address by executing ifconfig at the command line. ledborg.py # web.py: controlling your Pi from a smartphone # using LedBorg () as an example # ColinD 27/12/2012 import web from web import form PYTHON VERSION: 2.7.3rc2 PYGAME VERSION: 1 .9.2a0 O.S.: Debian 7 34 # Define the pages (index) for the site urls = ('/', 'index') render = web.template.render('templates') app = web.application(urls, globals()) # Define the buttons that should be shown my_form = form.Form( form.Button("btn", id="btnR", value="R", form.Button("btn", id="btnG", value="G", form.Button("btn", id="btnO", value="0", ) on the form html="Red", class_="btnRed"), html="Green", class_="btnGreen"), html="足Off足", class_="btnOff") # define what happens when the index page is called class index: # GET us used when the page is first requested def GET(self): form = my_form() return render.index(form, "Raspberry Pi Python Remote Control") # POST is called when a web form is submitted def POST(self): # get the data submitted from the web form userData = web.input() # Determine which colour LedBorg should display if userData.btn == "R": print "RED" lbColour = "200" #Rgb elif userData.btn == "G": print "GREEN" lbColour = "020" # rGb elif userData.btn == "0": lbColour = "000" print "Turn LedBorg Off" else: print "Do nothing else 足 assume something fishy is going on..." # write the colour value to LedBorg (see) LedBorg = open('/dev/ledborg', 'w') LedBorg.write(lbColour) print lbColour del LedBorg # reload the web form ready for the next user input raise web.seeother('/') # run if __name__ == '__main__': app.run() templates/index.html $def with (form, title) <!doctype html> <html> <head> <title>$title</title> <link rel="stylesheet" type="text/css" href="/static/styles.css"> </head> <body> <br /> <form class="form" method="post"> $:form.render() </form> </body> </html> Try adding a Random button to output any one of the 27 possible colours from LedBorg shown in the image at the bottom of the previous page. Web.py supports several other form elements including drop-down lists and checkboxes. Full details can be found at. 35 Feedback & Question Time Great job in making possibly the most useful magazine I have ever read! I'm only 14 years old and your mag has inspired me to start programming. Thanks! J Forster Great job to you all for the 2012 issues. I am one of your 600+ backers with the Kickstarter to get a physical copy of them. I was wondering if you had considered doing a similar 2013 type of fund raiser or offering a subscription service through your website? T Giles I am glad you are making a physical copy of these. I have yet to find a device I want to read soft copies on so this makes you even more awesome. You will go nicely on the shelf next to my physical copies of Linux Format. Yakko TDI Thank you for this amazing magazine . O Bellés As someone who cut his computing teeth on a Sinclair MK14, I think the Pi is amazing. I have read most issues of The MagPi at least twice and am very impressed by the breadth and depth of the articles you have published. As a parent I am very impressed that most of the articles published are not game related. Please keep up the good work guys! J AinhirnWilliams I am one who can remember the first home computer magazines and how important they were in nurturing our programming skills while also acting as an essential catalyst to the creative enthusiasm coming through schools. I can remember typing in line after line of code into a ZX81 only for ‘Ram Pack Wobble’ to leave us having to start all over again. Then there was the computer listing that had been put through full text auto correction and justification! But, computer magazines like The MagPi really are an essential ingredient in sparking young programmers off to achieve greater things. So keep up the good work. D Lockwood [Ed: Yes we are planning a subscription service. Look out for a survey soon on different options] Congratulations on the magazine! As a Raspberry Pi user of a certain “vintage” it’s nice to “tinker” with a cheap, relatively simple, relatively powerful, open source piece of hardware again! P Welsh Your publication is outstanding! The variety of well composed articles each MagPi issue contains are incredibly helpful. Please keep up the good work! T Gomez editor@themagpi.com: Alternatively, send a letter to Creative Commons, 444 Castro Street, Suite 900, Mountain View, California, 94041 , USA. 36
https://issuu.com/themagpi/docs/issue_9_final
CC-MAIN-2017-13
refinedweb
13,505
62.78
not actually to establish a blogging point where individuals can enrich their learns on facilitating and leveraging .NET-related activities most effectively Holy cow, I wrote a book! If you use the Image File Execution Options registry key to force a program to run under the debugger, all the kernel does is insert the debugger in front of the command line. In other words, the CreateProcess function figure out what program is about to be run and checks the Image File Execution Options. If it finds a debugger, then the debugger is prepended to the command line and then CreateProcess resumes as if that were the command line you had passed originally. CreateProcess In particular, it doesn't do anything with the other parameters to the CreateProcess function. If you passed special parameters via the STARTUPINFO structure, those parameters get passed to the debugger. And the PROCESS_INFO that is returned by the CreateProcess function describes the debugger, not the process being debugged. STARTUPINFO PROCESS_INFO Specifically, if you specified the STARTF_USESHOWWINDOW flag and passed, say, SW_HIDE, as the wShowWindow, then the debugger will be hidden. This bites me every so often when I am called upon to debug a program that happens to be launched as hidden. I'll slap the debugger underneath it with Image File Execution Options, run through the scenario, and then... nothing. STARTF_USESHOWWINDOW SW_HIDE wShowWindow And then eventually I realize, "Oh, right, the debugger is hidden." To unstick myself, I fire up a program like Spy to get the window handle of the hidden debugger and fire up a scratch copy of Notepad so I can make it do my bidding and show the window. ntsd -Ggx notepad <F12> Break instruction exception - code 80000003 (first chance) eax=7ffdf000 ebx=00000001 ecx=00000002 edx=00000003 esi=00000004 edi=00000005 eip=7c901230 esp=00a1ffcc ebp=00a1:001> r esp=esp-4 0:001> ed esp 1 0:001> r esp=esp-4 0:001> ed esp 0x00010164 0:001> r esp=esp-4 0:001> ed esp eip 0:001> r eip=user32!showwindow 0:001> g 0:001> q The first two commands push the value SW_SHOWNORMAL (numerical value 1) onto the stack. Then goes the window handle. And then the return address. Move the instruction pointer to user32!ShowWindow and we've simulated the function call ShowWindow(0x00010164, SW_SHOWNORMAL);. Once I let execution resume, *boom* the debugger window appears and I can continue my work. SW_SHOWNORMAL user32!ShowWindow ShowWindow(0x00010164, SW_SHOWNORMAL); I frequently find myself having to do this for API calls (SendMessage, especially) that aren't available on the command line. Isn't there an easier way to call into Win32 (without VBA)? Another handy call is for when the debugger itself is hosed (works in ntsd): resp=@esp-8 ed @esp 0n{TargetPID} g=kernel32!DebugActiveProcessStop Just what is it that makes Notepad such a perfect designated debugger victim? Make that ed @esp @eip 0n{TargetPID} Er, why not just try again. Why go through all that manual command entering into the debugger when all you need to do is run it again without the SW_HIDE? > Just what is it that makes Notepad such a perfect > designated debugger victim? It's single-threaded and straight-forward crud code. "Just what is it that makes Notepad such a perfect designated debugger victim?" My guess: it's lightweight so it starts fast, and it's in the PATH, so it's fast to type the command line. "Why go through all that manual command entering into the debugger when all you need to do is run it again without the SW_HIDE?" This assumes that it's easy to run it again exactly as it was just run. That may not be the case. PingBack from Yes, but, I wonder why not cmd? or the venerable winver? There's _something_ about notepad or just use winspy () and show the window You can also use the .call command if you don't want to fool around with the stack directly. For example: 0:001> .call user32!NtUserShowWindow(0x303e2, 6) Thread is set up for call, 'g' will execute. WARNING: This can have serious side-effects, including deadlocks and corruption of the debuggee. 0:001> g eax=7ffde000 ebx=00000000 ecx=00000000 edx=77c4f06d esi=00000000 edi=00000000 eip=77c02ea8 esp=01c3f7e0 ebp=01c3f80c iopl=0 nv up ei pl zr na pe nc cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00000246 ntdll!DbgBreakPoint: 77c02ea8 cc int 3 KJK::Hyperion: I wondered this, and decided that Notepad's more popular than Winver and has a main window. In fact, it's got most elements of a Windows program (writing a Notepad replacement was a common project in Win32 programming books), and simpler code than Calc or Winmine. You can also change its title, or type something in and scan memory for it. Cmd has the whole console window thing, which it just annoying. Kevin: what symbols are you using? For sending random messages to windows: For modifying properties of random windows: Ehtyar. I admit I'm not real knowledgeable when it comes to Win32 debugging, but I got lost about the time ntsd was invoked. I assume you're connecting the debugger to Notepad, and then issuing commands to manually execute the ShowWindow API call, correct? I've not heard of ntsd before. Does it come with Windows, or is it part of Visual Studio (which I have installed)? > I've not heard of ntsd before. Does it come with Windows, or is it part > of Visual Studio (which I have installed)? It's part of the "Debugging Tools for Windows". Python (with the pywin32 package) works pretty well too: import win32gui, win32con win32gui.ShowWindow(0x5656, win32con.SW_SHOW) I must say that's a rather poor design on the kernel's part, and very easy to cause minor behavior changes when launched under debugger - exactly those who would hide the bug you're trying to find. Also, this trick is non-trivial (= few would ever figure it out without having the accurate magic spell from a great wizard). Alternately, you could run ntsd as a debugging server: ntsd -server tcp:port=1234 -gGW And then connect with windbg: windbg -remote tcp:server=localhost,port=1234 Also works for services that are not allowed to interact with the desktop. And across the network too (of course). Kevin wrote: ^ Symbol not a function in '.call user32!NtUserShowWindow(0x303e2, 6)' Raymond mentioned that you can do this if you have some other function with the same signature as the API that you're trying to call, but that's unlikely when you're debugging Notepad. Raymond wrote: Yaniv Pessach wrote a program that takes a function name and a parameter list on the command line. Personally I use a version of the Callfunc executable from Undocumented Windows 3.1 that I modified to work as a Win32 console application. Sadly I don't have the exact source of the current version I use available - I tried and failed to add pagination to the dump command (I couldn't work out how to wait for a keypress...) I'm usually a lurker... but after seeing this (never thought of it before) I have to remark: That's awesome! Thanks Raymond Harold [And what would be a better design? -Raymond] Perhaps the debugger could use saved winposinfo for itself and pass on the startup info to the child process? > Perhaps the debugger could use saved winposinfo for itself and > pass on the startup info to the child process? The debugger can choose to do whatever it likes. We're talking about the design of Image File Execution Options here. PingBack from PingBack from
http://blogs.msdn.com/oldnewthing/archive/2007/07/02/3652873.aspx
crawl-002
refinedweb
1,294
63.39
I'm trying to configure varnish for Magento2. Question: I'm unable to configure PURGE varnish cache. Here is my build: Varnish settings: I'm using docker-compose. Varnish FROM varnish:6.2 vcl 4.0; I'm trying to configure purge command in a safe way. I'm not sure what is the best practice on how to configure varnish, but Magento recommends the following: acl purge { "nginx"; } And there is following verification: sub vcl_recv { if (req.method == "PURGE") { if (!client.ip ~ purge) { return (synth(405, "Purging not allowed for " + client.ip)); } In my case when I'm trying to purge varnish from console / admin UI - I'm always facing with the following error: <!DOCTYPE html> <html> <head> <title>405 Purging not allowed for 192.168.128.7</title> </head> <body> <h1>Error 405 Purging not allowed for 192.168.128.7</h1> <p>Purging not allowed for 192.168.128.7</p> <h3>Guru Meditation:</h3> <p>XID: 32770</p> <hr> <p>Varnish cache server</p> </body> </html> After the docker restart, this IP address changes. When I'm trying to configure acl purge with container names I'm still facing the same error. However, adding container name to "whitelist" and running curl -k -X PURGE Passes this verification. What is the best and safest way to configure acl purge section?
https://serverfault.com/questions/1026179/varnish-purge-client-ip-docker-compose
CC-MAIN-2021-10
refinedweb
223
68.87
I want to create a non-thread-safe chunk of code for experimentation, and those are the functions that 2 threads are going to call. c = 0 def increment(): c += 1 def decrement(): c -= 1 Single opcodes are thread-safe because of the GIL but nothing else: import time class something(object): def __init__(self,c): self.c=c def inc(self): new = self.c+1 # if the thread is interrupted by another inc() call its result is wrong time.sleep(0.001) # sleep makes the os continue another thread self.c = new x = something(0) import threading for _ in range(10000): threading.Thread(target=x.inc).start() print x.c # ~900 here, instead of 10000 Every resource shared by multiple threads must have a lock.
https://codedump.io/share/ULVtUtEKQ68N/1/is-the--operator-thread-safe-in-python
CC-MAIN-2017-39
refinedweb
127
66.74