text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringclasses
91 values
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
4 ways how MLflow can facilitate your machine learning development As a Data scientist you are constantly working on Machine Learning problems, mostly collaborating with others. To do so successfully, vendors are developing ML workflow tools which can assist teams in managing the lifecycle of Machine Learning. In this insight I focus on one of the leading tools: MLflow and how MLflow can facilitate and improve your ML lifecycle. Why even bother about ML workflow tools? We see that a lot of companies are working on ML projects in teams without having a standardized or centralized solution which can handle or manage ML models. This means that lots of teams are working on different model versions that are stored on different locations. This also means that teams find it hard to compare performance of models and that they lack continuous monitoring. Missing these standards leads to inefficiencies and frustrations. To prevent the aboven ML workflow tools are being developed. In this insight we focus on one of the leading tools: MLflow. What can MLflow bring to the table? MLflow is an open-source solution which can be leveraged from your local notebooks, local scripts and it is even natively integrated in Databricks (out-of-box available). The main advantages of including MLflow in your ML lifecycle is the transparency and standardization it brings to the table when it comes to training, tuning and deploying ML models. - MLflow Tracking: Enables teams to record, have & query a clear overview of the performance of the training runs as all models (& performance metrics) are stored centrally in an “experiment” - MLflow Projects: Enables teams to easily reproduce model versions with its corresponding code on any platform. - MLflow (model) Registry: Enables teams to manage ML model versions, modify lifecycle stages and deploy models to production. - MLflow Models: Packages models in a standard format so that they can e.g. be served as an endpoint through a REST API. Also, MLflow models are compatible with the most important ML libraries. What components can pick the low hanging fruit? Firstly, start to leverage MLflow tracking to centralize all the information on training runs (metrics, parameters, model artifacts) in an experiment. Those experiments are accessible through different machines, so that teams can easily train, tune and compare models during the Model Training phase of the machine learning project. For us, the greatest advantages of having the centralized experiments are: - A user-friendly UI - Centralized environment to log model training runs - The models are compatible with ML’s most important libraries and are logged in MLflow in a standard format, which means that they can be used e.g. in real-time serving through a REST API or in batch in Databricks - Easy to compare performance metrics or model parameters in the UI. Not only available in tables, but also possibility to visualize the differences in graphs. - The experiments are highly scalable, thanks to its smart search option. Even when having lots and lots of models, thanks to the search option, co-workers can look for models through tags, developer name or accuracy levels. - No entry barrier. You can integrate the experiments immediately in existing e.g. python or spark code and start tracking your training runs. Overview of MLFlow process flow: the model registry serves as thé central repository for all data scientists & ML engineers Example of MLflow experiment tracking: every run of the model is versioned & stored with all run parameters & metrics Secondly, enable your team to manage ML model lifecycles easily by using the MLflow model registry. In short, the registry is a central hub, where all the models are stored under a unique model name, model version and model stage. Models can then be easily deployed to production: from the registry UI or through its API. The greatest advantages of leveraging the model registry as a team are: - The UI provides all team members with clear information on what model versions (all versions remain) are used for experiments, for testing or for production. - You could allocate roles in the registry where some could request models to transition from “Staging” to “Production” and others can validate that request. Having the ability to set those stages can be very powerful, as you could just refer to the production model from your code and be certain that you will always refer to the correct model. - A nice recent add-on is that you can now choose to serve your model through a REST API endpoint, which can be done straight from the UI and is directly deployed and served through Azure Databricks MLflow model can be called directly within a Spark UDF function for use In conclusion, both components can improve efficiency and reduce frustration when collaborating in team on various ML projects. How long before your team can benefit from MLflow? Quick. For me, one of the main advantages of working with MLflow is its easiness: just import the package, configure your client & off you go! import mlflow from mlflow.tracking.client import MlflowClient You could get started within a couple of hours, if you have some coding experience and know how to install libraries to e.g. python or spark. During your discovery of MLflow, you will observe that tracking training runs is as simple as running certain predefined lines of code, which can be found in the clear documentation of the MLflow project. Can we leverage MLflow in cloud solutions? Yes! Databricks offers a seamless integration with MLflow, which means that MLflow is a fully managed solution. No need to worry about the right configuration of the platform, you as a Data Science team can immediately start with tracking your experiments from the Databricks notebooks or manage and deploy models from the model registry. As an intensive Databricks user, I really applaud that integration! Another integration that we, at element61, like is the integration with Azure ML. Azure ML is a fully managed cloud solution, which supports the entire machine learning lifecycle thanks to its MLOps. We mostly advice Azure ML to be used at companies where the maturity of machine learning is still growing, because it has automation and visual features which can help citizen data scientists to more easily work on ML projects. To integrate the best of both worlds we propose that Citizen Data Scientists work in Azure ML, while expert Data Scientists can train, tune and compare models at scale using MLflow experiments, while models are managed through their lifecycle towards deployment leveraging the MLflow registry. Models from MLflow are compatible in Azure ML which means that models from the model registry can be stored and deployed as an endpoint in Azure ML. How to get started with MLflow? You can leverage it locally or in the cloud. You will notice that it takes minimal effort before you can start! - For those who want to use MLflow locally - Install the MLflow package by running: pip install mlflow - Add MLflow commands to your existing code & execute your code to start tracking experiments! - To check your experiments, run mlflow ui in command line - To use the model registry, you will need to set up a server using a database-backed store - After successfully setting up the server, you can start leveraging the model registry by accessing it through the mlflow ui - For those who want to use MLflow in the cloud (So simple!) - Create a new Azure Databricks resource (or use an existing one) - Add MLflow commands to your existing code & execute your code to start tracking experiments! - Check your experiments immediately via the top right experiment icon - Check out your model registry directly via the Models icon on the left We love how MLflow allows users to get started with minimal effort. Try it out now and enjoy it yourself! For more information on how to improve your Machine Learning lifecycle, do not hesitate to contact me or one of my colleagues. Building on our partnership with both Databricks; element61 has some other great articles & pages which might enrich above insight. Continue reading or contact us to get started: How to integrate Azure Databricks with Azure Machine Learning for running Big Data Machine Learning jobs? - Read more about Azure Machine Learning
https://www.element61.be/fr/node/31288
CC-MAIN-2021-17
refinedweb
1,368
58.11
#include <iostream> using namespace std; int main() { bool error = false; for (int i=-3; error, i++; error = !error) if (error) cerr << "true "; else cout << "false "; return 0; } I have a C++ exam coming up and I thought i knew everything we had learnt so far. However in the last lecture they gave us some practice questions, one of them was the above code. The lecturer tried to explain it to us but I didn't really get it. The result of the above code is "false true false" and I got confused by how the multiple conditions work in the for loop (seperated by a comma). So I changed the code replacing 'error' and 'i++' with variations of 'true' and 'false'. I found that the result of the condition only depended on the last condition. My question is this; in this code and in any code, if you have multiple conditions is the only one that counts the last one? Are the rest pointless? Thankyou, Isaac
http://www.dreamincode.net/forums/topic/20443-multiple-conditions-in-a-loop/
CC-MAIN-2017-47
refinedweb
166
78.38
Summary: Windows PowerShell MVP, Richard Siddaway, talks about WMI’s missing methods in Windows PowerShell. Microsoft Scripting Guy, Ed Wilson, is here. Today we have a guest blog post by Windows PowerShell MVP, Richard Siddaway. Take it away Richard… Thanks, Ed. WMI has been with us for a long time, and it is one of those technologies that you either love or hate. (Some of us manage both at the same time, but that is another story.) Admins love WMI because it’s so powerful—if I can get a WMI connection to your machine, I can do just about anything to that machine. On the flip side, admins hate WMI because it can be very difficult to use, and finding information can be very difficult. Windows PowerShell has had great WMI support, which was enhanced in Windows PowerShell 3.0 by the introduction of the CIM cmdlets and other WMI enhancements. The new functionality has caused some confusion because the WMI class methods, which we have come to rely on, seem to have disappeared. Before I show you what’s happened to your methods, I need to dive into a little WMI theory. The starting point is that WMI is Microsoft’s implementation of the Common Information Model (CIM). CIM is part of the Web-Based Enterprise Management (WBEM) industry standards that are maintained by the Distributed Management Task Force (DMTF). WBEM is defined as “a set of management and Internet standard technologies developed to unify the management of distributed computing environments.” WBEM standards are designed to be independent of the operating system and hardware. When you start Windows, the WMI repository is created. The repository consists of a number of things: - WMI providers (a DLL that provides access to WMI classes) - WMI namespaces - WMI class definitions - WMI instances This is where some if the confusion arises. WMI classes can be taken to refer to the class definition (comparable to Active Directory schema definitions) and the class instances (compare to an object in Active Directory, such as a user). When we only had the WMI cmdlets in Windows PowerShell 2.0, life was fairly simple in that you do things like this: Get-WmiObject -Class Win32_Volume -Filter “Name = ‘C:\'” | Get-Member -MemberType method And you would see the list of methods: - AddMountPoint - Chkdsk - Defrag - DefragAnalysis - Dismount - Format - Mount - Reset - SetPowerState At this point, I need to remind you that the WMI is COM based, and the WMI cmdlets work over DCOM to local or remote machines. You could access the methods by creating a Windows PowerShell object that represents the class, and call the method directly: $disk = Get-WmiObject -Class Win32_Volume -Filter “Name = ‘C:\'” $disk.DefragAnalysis() The methods that you have seen so far are Instance methods. They only make sense and can only be used in the context of an instance. For example, you can’t perform DefragAnalysis against a non-existent volume! Let’s skip over to Win32_Process for a minute: Get-WmiObject -Class Win32_Process | Get-Member -MemberType method This produces the following list of methods: - AttachDebugger - GetOwner - GetOwnerSid - SetPriority - Terminate Notice that there isn’t a Create method—these are all Instance methods. You have to have an instance of a process before you can terminate it. When you are dealing with instances, you are dealing with the System.Management.ManagementObject .NET class. This is where some of the issues lie because you are working with a .NET wrapper for DCOM-based WMI. Windows PowerShell is .NET based, and you can create just about any .NET object in Windows PowerShell. The Windows PowerShell team recognized that some .NET objects were of great interest to admins, and they provided shortcuts known as “type accelerators.” WMI has a number of type accelerators—in particular, the [wmiclass] type accelerator. If you apply the [wmiclass] accelerator to the Win32_Process class, you get this: $proc = [wmiclass]’\.rootcimv2:Win32_Process’ $proc | Get-Member -MemberType method Name MemberType —- ———- Create Method We now have access to a method that we can use to create instances of the WMI class. The other important point is that we are dealing with a new .NET class: System.Management.ManagementClass. It’s a ManagementClass rather than a ManagementObject. As a side note, the Create method is one of the intrinsic methods of a WMI class. It will always be there even if you can’t access it. Creating a new instance of the Win32_OperatingSysyem class could do interesting things to your machine! In .NET terms, you can think of these as Static methods. You don’t need an instance of the class to utilize the methods. The WMI registry provider is a classic example of using WMI Static methods. Get-WmiObject is a bit of a general work horse. You can use it to work with the instances of a WMI class, and you can use it to investigate WMI classes. The Windows PowerShell 3.0 CIM cmdlets have separated these functions. Get-CimInstance is for working with instances of a WMI class only! Get-CimInstance -Class Win32_Volume -Filter “Name = ‘C:\'” | Get-Member -MemberType method - Clone - Dispose - Equals - GetCimSessionComputerName - GetCimSessionInstanceId - GetHashCode - GetObjectData - GetType - ToString Hang on! This is totally different. These aren’t the methods we want. Remember that the WMI cmdlets work over DCOM to the local or remote machine. DCOM isn’t a firewall-friendly or Internet-friendly protocol. It also relies on an old programming methodology that is being replace by .NET. The CIM cmdlets use DCOM to access the local machine if the –ComputerName parameter isn’t specified. As soon as you utilize the –ComputerName parameter to access the local or a remote machine, the CIM cmdlets switch to using WSMAN through the WinRM service. You can also use a CIM session with the CIM cmdlets. CIM sessions also work over WSMAN by default. (CIM sessions will be the subject of a future post.) WSMAN is the protocol that is used for Windows PowerShell remoting. Windows PowerShell remoting always returns an inert object—one that doesn’t have any methods. This is exactly the situation with the CIM cmdlets—they are designed to work over WSMAN, so they return inert objects. The methods that you saw belong to the .NET class, which has changed from the WMI cmdlets. It is now Microsoft.Management.Infrastructure.CimInstance. So far, it’s been established that you can’t access the Instance methods of a WMI class directly through Get-CimInstance. What you can do is pipe the results from your use of the Get-CimInstance cmdlet to Invoke-CimInstance to trigger the method, for example: Get-CimInstance -ClassName Win32_Process -Filter “Name = ‘notepad.exe'” | Invoke-CimMethod -MethodName Terminate The Invoke-CimMethod cmdlet has an –Arguments parameter, so you can work with methods that need arguments. Get-CimInstance enables you to access instances of a WMI class. How do you investigate WMI classes? This is the function of the Get-CimClass cmdlet. It gives you access to the WMI class metadata (such as properties, methods, and parameters). PS> Get-CimClass -ClassName Win32_Process | fl * CimClassName : Win32_Process Notice that the Create method is listed. Let’s drill down into the method definition: PS> $class.CimClassMethods[“Create”].Parameters Name CimType Qualifiers —- ——- ———- CommandLine String {ID, In, MappingStrings} CurrentDirectory String {ID, In, MappingStrings} ProcessStartupInformation Instance {EmbeddedInstance, ID, In, MappingStrings} ProcessId UInt32 {ID, MappingStrings, Out} The fact that I get the parameter name and expected data type makes life so much easier. It’s worth upgrading to Windows PowerShell 3.0 for this alone. Secondly, Get-CimClass gives you one way to access to the Static methods of a WMI class: Get-CimClass -ClassName Win32_Process | Invoke-CimMethod -MethodName Create -Arguments @{CommandLine=’notepad.exe’} The arguments are presented as a hash table with the parameter names as the keys. This removes the issue that arises with Invoke-WmiMethod, where the expected order of the parameters can change from that in the documentation. One last piece of the puzzle remains: How can we tell if a particular method is a Static (or intrinsic) method as opposed to an Instance method? The information is buried in the qualifiers for the methods. Windows PowerShell MVP, Shay Levy, supplied the following script during the discussions that prompted this post: $class = Get-CimClass -ClassName Win32_Process $class.CimClassMethods | select Name, @{N=’MethodType’; E={if ($_.Qualifiers[‘Static’]){‘Static’}else{‘Instance’} }} Name MethodType —- ———- Create Static Terminate Instance GetOwner Instance GetOwnerSid Instance SetPriority Instance AttachDebugger Instance You can derive the same information by using the WMI cmdlets: $class = Get-WmiObject -List Win32_Process -Amended $class.Methods | select Name, @{N=’MethodType’; E={if ($_.Qualifiers[‘Static’]){‘Static’}else{‘Instance’} }} You need to be aware that using the –Amended parameter drills deep into the WMI repository, and it is an expensive operation in CPU cycles. It will take several seconds to return the data. To return to the original question… Dude, your methods are where they have always been. You just need to access them in a different way. ~Richard Thanks, Richard! You can learn more about using WMI and the new CIM cmdlets with Windows PowerShell in Richard’s book, Windows PowerShell and WMI from Manning Publications. Join me tomorrow when Windows PowerShell MVP, Sean Kearney, will talk about Getting Funky with article Richard, can't wait to read more about CIM sessions. Great article! Just a heads-up, there's a slight correction to be made: there's a reference to "Invoke-CimInstance" instead of "Invoke-CimMethod".
https://blogs.technet.microsoft.com/heyscriptingguy/2013/09/20/hey-dude-where-are-my-methods/
CC-MAIN-2017-26
refinedweb
1,571
56.15
66294Re: pyeval() error Expand Messages - Sep 2, 2012 > This the purpose of pyeval() and py3eval() functions which came withNot exactly. Try to transfer a few hundred KiBs with json.dumps and with my interface and you will see the difference. Self-written dumper is much, much slower as it is not written in C and it was necessary to make it able to get even more information from time to time. That was the real purpose as dumper is trivial and easy to write (though vim.bindeval interface solves custom dumpers problem as well). > a recent patch (by Zyx). By the way, why almost everybody says that it were pyeval functions that were added in first place? Main change is vim.bindeval as you can write code like def return_to_vim(value): if hasattr(vim, 'bindeval'): vim.bindeval('d')['result']=value else: vim.eval('extend(d, {"result": '+dumps(value)+'})') and be compatible with old versions of vim. pyeval() does not allow you to do it so easily (though I can imagine Pyeval function that works in old vims as well, it is more to type). pyeval() is slightly faster, but it is negligible and constant difference. -- You received this message from the "vim_dev" maillist. Do not top-post! Type your reply below the text you are replying to. For more information, visit - << Previous post in topic
https://groups.yahoo.com/neo/groups/vimdev/conversations/messages/66294
CC-MAIN-2018-05
refinedweb
225
76.52
Most C++ programmers are familiar with the pointer to function facility provided by the language; the majority of those that do will be betraying their C heritage. However, few are aware of, or have used, the pointer to member facility. This article introduces C++'s pointer to member and then presents a genuine application of the facility. The problem with most tutorials on a particular language feature is that the examples provided are generally not real world ones - a few demonstration lines of code do not show the programmer how they should apply the facility, just how to use it. The natural outcome of this is that the inexperienced programmer will go out of their way to try to use the facility at the earliest possible opportunity, when it may not be applicable or serve as the best way of implementing a certain functionality. In this description and application I aim to avoid luring the reader into this pitfall. In C and C++ we can declare a pointer to a function as follows: void exampleFn(char *s, int i) { /* ... */ } int (*fnptr)(char *, int); fnptr = &exampleFn; fnptr("string", 1); On the second line we declare a variable that is of type pointer to function with signature (char*, int) and return type void called fnptr. We assign the address of exampleFn to it, and then call exampleFn through it. When assigning to a function pointer variable the supplied function must match the signature (return type and parameter types) exactly. The use of the & preceding exampleFn is optional in the language, we could just have legally written fnptr = exampleFn; Similarly when we call exampleFn through the fnptr variable we are really dereferencing a pointer to the function so we can write (*fptr)("string", 1); to call exampleFn if we choose. However the compiler can deduce that fptr is a pointer to function and will dereference appropriately for us. Because of this the *s are more often than not omitted for clarity. Conventionally we may try to tidy up the syntax using typedefs: typedef void (*FPTR_T)(char *, int); FPTR_T fptr = &exampleFn; fptr("string", 1); This declares a new type name called FPTR_T that points to functions with signature (char *, int) and return type void. fptr is a variable of that type, and the rest follows as before. It is not strictly necessary, but does makes the syntax a lot more readable. This much is common to C and C++: C++ provides us a similar mechanism to point to elements that are within a class. Let's say that we have something like a function pointer, but that points to a member function in a class. How would we call it? Being a function tied to a class we need to provide one extra piece of information: the object on which the member function is to operate. If we tried to use the same syntax as above then we would not be supplying this information. So how do we do it? Say we have the following class: class ExampleClass { public: void exampleMFn(char *s, int i); int data; // Euch! We all know not to have // public data members, don't we? private: void anotherMFn(char *s, int i); int moreData; }; Now, to create and use a pointer to the member function exampleMFn we write the following: // Create the member pointer void (ExampleClass::*mfptr)(char *, int); mfptr = &ExampleClass::exampleMFn; // Use it on an object ExampleClass obj; (obj.*mfptr)("string", 1); // Use it on an object pointer ExampleClass objptr = new ExampleClass; (objptr->*mfptr)("string", 1); delete objptr; The .* and ->* operators associate (or bind) the mfptr member pointer to the object obj and the object pointed to by objptr respectively; the member function call can then proceed as normal. Note that the syntax in both cases includes an asterisk exactly as did the full version of the function pointer syntax. However, with pointers to members the asterisk cannot be omitted. Again, it is common to use typedefs to increase readability: typedef void (ExampleClass::*MFPTR_T)(char *, int); MFPTR_T mfptr = &ExampleClass::exampleMFn; Pointers to members aren't restricted to member functions, although most tutorials focus entirely on them. We can create a pointer to a data member too. For example: int ExampleClass::*miptr = &ExampleClass::data; ExampleClass obj; obj.*miptr = 10; Pointers to member functions honour run-time polymorphism too. If a pointer to virtual member function is bound to an object you are guaranteed that the correct virtual function definition is called for that object. static member functions are treated slightly differently since they are not directly associated with an object. There isn't a pointer to member syntax for them; you use the normal function pointer syntax. It is an error to try to use pointer to member syntax in this case. Pointers to members naturally honour C++'s visibility rules; otherwise they would be a simple way to get access to a class' private data! This means that a non-member function of ExampleClass can only assign pointers to the public parts of the class. For example: MFPTR_T p1 = &ExampleClass::exampleMFn; // OK MFPTR_T p2 = &ExampleClass::anotherMFn; // Error: anotherFn is private However a member function can assign a pointer to its class' non-public members. This is shown in the following (frivolous) definition of exampleMFn: void ExampleClass::exampleMFn(char *s, int i) { void (ExampleClass::*fp)(char *, int); fp = &anotherMFn; // use is OK - note we don't need to fully // qualify the member function name since // it is in the class scope (this->*fp)(s, i); } If a class DerivedClass inherits from ExampleClass then we can assign the address of its members to a MFPT_T variable. However, you can only assign members that exist in the ExampleClass interface to the variable. This again preserves C++'s visibility rules. The above do not really show useful examples of how to use pointers to members, only of how to master the syntax. The next question we should be asking is why would I want to use a pointer to member? The following application section is a good example of a valid use for pointers to members. Be careful that you're not trying to use a pointer to member where a better class hierarchy design would allow you to use virtual functions instead. An indicator of this happening is if the choice of member is based on some form of class type information. Virtual functions are preferable in this kind of situation because the maintenance overhead is smaller (think about what will happen if you add a new class to the program). I present here a framework for creating easily maintainable command line utilities in C++. The implementation of the framework uses pointers to members and should give a good idea of a valid application of this language mechanism. Developing command line utilities is a common task (well, for most UNIX programmers anyway). Such utilities typically have to parse arguments provided on the command line and act upon them accordingly. Because this is such an often performed task C and C++ provide us with a mechanism for accessing the command line arguments in main via the argc and argv parameters. This framework for parsing the contents of argc and argv follows the UNIX style of having an arbitrary number of switches, which can be entered in short or long form. The short form is prefixed by a single dash, the long form by two dashes. Each switch may be followed by a number of arguments. An example command line may be: cmd -s --long1 arg --long2 6 5 --long3 Where cmd is the name of the command line utility, -s is a shortened version of a switch that takes no arguments, --long1 is a long form switch (note the two dashes) that takes one argument (here arg), --long2 is another long form that takes two arguments (here 6 and 5), and --long3 another that takes no arguments. The following code is a canonical example of the framework. It implements a command line utility that accepts a number of switches and requires one unswitched argument that is considered to be an input filename. The implementation is split across three files. main.cpp contains the main function, whilst Application.h and Application.cpp contain the actual application implementation which is in a class called (for the sake of argument) Application. This code uses the STL. Application.h: #ifndef APPLICATION_H #define APPLICATION_H #include <vector> #include <string> class Application { public: Application(int argc, char *argv[]); virtual ~Application(); int go(); private: static const int version = 100; // version no of app (1.00) static const char[] name = "appname"; // name of app // Command switch cunningness struct Switch { typedef void (Application::*handler_t)(int argpos, char *argv[]); std::string lng; // long switch std::string srt; // short switch int nargs; // no. extra arguments following std::string help; // help text for switch handler_t handler; // pointer to handler member Switch(std::string l, std::string s, int n, std::string h, handler_t hd) : lng(l), srt(s), nargs(n), help(h), handler(hd) {} }; // continued on next page std::vector <Switch> switches; // These are the command switch handlers void handle_help(int argpos, char *argv[]); void handle_version(int argpos, char *argv[]); void handle_aardvark(int argpos, char *argv[]); std::string filename; }; #endif main.cpp: #include "Application.h" int main(int argc, char *argv[]) { Application app(argc, argv); return app.go(); } Application.cpp: #include "Application.h" using std::string; using std::vector; Application::Application(int argc, char *argv[]) { // First, we build a list of the switches understood by this program switches.push_back(Switch("help", "h", 0, "provide help", &handle_help)); switches.push_back(Switch("version", "ver", 0, "give version no", &handle_version)); switches.push_back(Switch("aardvark", "a", 1, "command with one parameter", &handle_aardvark)); // Now we parse the command line if (argc <= 1) { handle_version(0, argv); handle_help(0, argv); exit(0); } for (int n = 1; n < argc; n++) { bool done = false; for(vector<Switch>::iterator sw = switches.begin(); !done && sw != switches.end(); sw++){ if(argv[n] == string("-") + sw->srt || argv[n] == string("--") + sw->lng){ done = true; if(n + sw->nargs >= argc) { cerr << "Error in command format (" << argv[n] << " expects " << sw->nargs << " arguments)\n"; exit(1); } // Call appropriate handler through pointer (this->*(sw->handler))(n, argv); n += sw->nargs; } } // This command line utility needs a filename as an unswitched argument. // The following code implements this. if (!done) { filename = argv[n]; } } if (filename == "") { cerr << "No filename specified.\n"; exit(1); } } Application::~Application() { // Clean up in here ... } int Application::go() { // Do something useful ... return 0; } // This member function displays a nicely formatted help text // listing the command line usage of the program. void Application::handle_help(int, char*[]) { // You may wish to change the following descriptive text! cout << "Usage: " << name << " [OPTION]... [FILE]\n" << "Does this that and the other.\n\n" << "OPTIONs are:\n\n"; // Work out column widths for the nicely formatted output unsigned int srtsize = 0; unsigned int lngsize = 0; for(vector<Switch>::iterator n = switches.begin(); n != switches.end(); n++) { if (n->srt.size() > srtsize) srtsize = n->srt.size(); if (n->lng.size() > lngsize) lngsize = n->lng.size(); } srtsize += 2; lngsize += 2; // Produce the nicely formatted output for(vector<Switch>::iterator n = switches.begin(); n != switches.end(); n++) { cout << " -" << n->srt << string(srtsize-n->srt.size(), ' ') << string(" --") + n->lng + " " << string(lngsize-n->lng.size(), ' ') << n->help << endl; } cout << "\nSend bug reports to <pete.goodliffe@pacemicro.com>\n"; exit(0); } void Application::handle_version(int, char*[]) { cout << name << " version " << version/100 << "." << version%100 << " built on " << __DATE__ << "\n"; } void Application::handle_aardvark(int argpos, char *argv[]) { // To get here we are guaranteed that argv contains at least // this switch in argv[argpos] and the argument in argv[argpos+1] string arg = argv[argpos+1]; cout << "--aardvark argument is: " << arg << endl; } This code will compile and create a simple command line 'utility', but you may wish to extend it's capabilities slightly (maybe you won't want the --aardvark facility!) As we can see the above Application class uses an internal vector of Switches to store the table of command line switches it can accept. This table is populated in the constructor and then used to parse the command line (passed in argc and argv). The table of switches is also used to generate the nicely formatted help text in the handle_help member function. We use pointers to members in the Switch structure to store which member function is used to handle a particular command line switch. This requires that each handler conform to a particular signature. Using a framework like this presents us with several benefits. By not hard-coding the command line argument parsing but using this more generic table system we can easily add new switches and extend the functionality of the utility without much extra work. We can now be assured that the help text will always reflect the command line interface. I could have provided the code as a base class to inherit specific applications from. However, I prefer to leave it as a framework to build upon due to the kind of customisations it will need. For example, you may wish to add more unswitched command line arguments, or change the help text output format. This kind of change is better suited to modification than inheritance. The important thing to take away is the table of switches idiom rather than the code itself.
https://accu.org/index.php/journals/495
CC-MAIN-2019-47
refinedweb
2,211
52.29
ZylannMember Content count41 Joined Last visited Community Reputation204 Neutral About Zylann - RankMember [Freetype] Cannot render font to a bitmap Zylann replied to Zylann's topic in Engines and MiddlewareI finally found what was going on: // Load the face FT_Face face; if (FT_New_Memory_Face(m_library, reinterpret_cast<const FT_Byte*>(data), len, 0, &face) != 0) { SN_ERROR("Failed to create Freetype font face from memory"); delete[] data; return false; } delete[] data; // DO NOT DELETE THIS... When FT_New_Memory_Face is used, it doesn't takes ownership or copies the data, so I must keep it alive somewhere until I call FT_Done_Face. I added a field in my Font class to hold this data and delete it in the destructor. Also, it seems I had to include a few more headers so when I initialize the library, Freetype loads appropriate modules to rasterize smoothly. Now everything works fine [Freetype] Cannot render font to a bitmap Zylann posted a topic in Engines and MiddlewareHello, I'm integrating Freetype into my project, but so far I've never been able to render any font into a bitmap. Freetype functions never returns an error, the bitmap I get is just empty, width and height are zero, whatever character I pass. The fon't I'm using is a basic outline TTF. In my project, I split the code in 2 files: FontLoader.cpp and Font.cpp. FontLoader is a class that holds the FT_Library, and Font holds the FT_Face. Here is the part of my code that uses Freetype (unrelated code eluded for clarity): FontLoader.hpp //... #include <core/asset/AssetLoader.hpp> #include <ft2build.h> #include FT_FREETYPE_H namespace freetype { class FontLoader : public sn::AssetLoader { public: //... bool load(std::ifstream & ifs, sn::Asset & asset) const override; //... private: FT_Library m_library; }; } // freetype //... FontLoader.cpp //... #include "Font.hpp" #include "FontLoader.hpp" using namespace sn; namespace freetype { FontLoader::FontLoader(): m_library(nullptr) { // Initialize Freetype if (FT_Init_FreeType(&m_library) != 0) { SN_ERROR("Failed to initialize FreeType library"); } } FontLoader::~FontLoader() { if (m_library != 0) { // Deinitialize Freetype FT_Done_FreeType(m_library); } } //... bool FontLoader::load(std::ifstream & ifs, sn::Asset & asset) const { freetype::Font * font = sn::checked_cast<freetype::Font*>(&asset); SN_ASSERT(font != nullptr, "Invalid asset type"); // Read the whole stream ifs.seekg(0, ifs.end); u32 len = ifs.tellg(); ifs.seekg(0, ifs.beg); char * data = new char[len]; ifs.read(data, len); // Load the face FT_Face face; if (FT_New_Memory_Face(m_library, reinterpret_cast<const FT_Byte*>(data), len, 0, &face) != 0) { SN_ERROR("Failed to create Freetype font face from memory"); delete[] data; return false; } delete[] data; // Select the unicode character map if (FT_Select_Charmap(face, FT_ENCODING_UNICODE) != 0) { SN_ERROR("Failed to select the Unicode character set (Freetype)"); return false; } // Store the loaded font font->setFace(face); return true; } } // namespace freetype Font.hpp //... #include <ft2build.h> #include FT_FREETYPE_H namespace freetype { class Font : public sn::Font, public sn::NonCopyable { //... private: bool generateGlyph(sn::Glyph & out_glyph, sn::u32 unicode, sn::FontFormat format) const; //... bool setCurrentSize(sn::u32 characterSize) const; private: FT_Face m_face; //... }; } // namespace freetype //... Font.cpp #include "Font.hpp" #include FT_GLYPH_H #include FT_OUTLINE_H #include FT_BITMAP_H //... bool Font::generateGlyph(Glyph & out_glyph, sn::u32 unicode, sn::FontFormat format) const { Glyph glyph; if (!setCurrentSize(format.size)) return false; // Load the glyph corresponding the unicode if (FT_Load_Char(m_face, unicode, FT_LOAD_TARGET_NORMAL) != 0) return false; // Retrieve the glyph FT_Glyph glyphDesc; if (FT_Get_Glyph(m_face->glyph, &glyphDesc) != 0) return false; // Apply bold FT_Pos weight = 1 << 6; bool outline = (glyphDesc->format == FT_GLYPH_FORMAT_OUTLINE); if (format.isBold() && outline) { FT_OutlineGlyph outlineGlyph = (FT_OutlineGlyph)glyphDesc; FT_Outline_Embolden(&outlineGlyph->outline, weight); } // Convert the glyph to a bitmap (i.e. rasterize it) if (glyphDesc->format != FT_GLYPH_FORMAT_BITMAP) { if (FT_Glyph_To_Bitmap(&glyphDesc, FT_RENDER_MODE_NORMAL, 0, 1) != 0) { SN_ERROR("Failed to convert glyph to bitmap"); } } FT_BitmapGlyph bitmapGlyph = (FT_BitmapGlyph)glyphDesc; FT_Bitmap& bitmap = bitmapGlyph->bitmap; // Compute the glyph's advance offset glyph.advance = glyphDesc->advance.x >> 16; if (format.isBold()) glyph.advance += weight >> 6; u32 width = bitmap.width; u32 height = bitmap.rows; if (width > 0 && height > 0) { // NEVER ENTERS HERE // Funny conversion stuff //... } else { SN_DLOG("Character " << unicode << " (ascii: " << (char)unicode << ") has an empty bitmap"); } // Delete the FT glyph FT_Done_Glyph(glyphDesc); out_glyph = glyph; return true; } //... bool Font::setCurrentSize(sn::u32 characterSize) const { SN_ASSERT(m_face != nullptr, "Invalid state: Freetype font face is null"); FT_UShort currentSize = m_face->size->metrics.x_ppem; if (currentSize != characterSize) { return FT_Set_Pixel_Sizes(m_face, 0, characterSize) == 0; } else { return true; } } //... EDIT: by stepping into FT_Glyph_To_Bitmap and further, I discovered this: ftobjs.c error = renderer->render( renderer, slot, render_mode, NULL ); if ( !error || FT_ERR_NEQ( error, Cannot_Render_Glyph ) ) // This line is reached break; ftrend1.c renderer->render being a function pointer, I steeped in it too: /* check rendering mode */ #ifndef FT_CONFIG_OPTION_PIC if ( mode != FT_RENDER_MODE_MONO ) { /* raster1 is only capable of producing monochrome bitmaps */ if ( render->clazz == &ft_raster1_renderer_class ) return FT_THROW( Cannot_Render_Glyph ); // THIS LINE IS REACHED } else { /* raster5 is only capable of producing 5-gray-levels bitmaps */ if ( render->clazz == &ft_raster5_renderer_class ) return FT_THROW( Cannot_Render_Glyph ); } #else /* FT_CONFIG_OPTION_PIC */ I want to render with the FT_RENDER_NORMAL mode, but for some reason it seems Freetype can't Swapping header/source in Visual Studio opens wrong file Zylann replied to Zylann's topic in General and Gameplay ProgrammingI use .hpp to differentiate C++ files from C files. The only problem I had was with old compilers a long time ago, but I never had any problems until now. AFAIK there is no really standard extension naming for C++ files, just wide use of common ones (cpp, cxx, hpp, h, hxx...). It seems to me the ".h" naming was hardcoded somewhere in Visual or hidden in a config file maybe, because that's the Microsoft's convention. I thought intellisense would be used and see #include "Control.hpp", but it's not the case. Maybe I could write a macro that handles this better... Swapping header/source in Visual Studio opens wrong file Zylann replied to Zylann's topic in General and Gameplay ProgrammingI use #include "Control.hpp". The shortcut is not a macro, it's built-in. Its name is "EditorContextMenus.CodeWindow.ToggleHeaderCodeFile". I don't want to rename all my files just because Visual Studio does that... Strangely, if I change my header's extension to ".h", it works. But my whole project uses .hpp for C++ headers. Swapping header/source in Visual Studio opens wrong file Zylann posted a topic in General and Gameplay ProgrammingHello, I was not sure where to ask this question: In my project, I have a Control.hpp and Control.cpp files. I use a shortcut to swap between header and source files very often, however with Control.cpp, Visual Studio 2013 leads me to a different Control.h, which is completely unrelated to my project (C:\Program Files (x86)\Windows Kits\8.1\Include\um\Control.h). Actually, when a .cpp file in my C++ project matches a .h header located in this Windows Kits thing, Visual prioritizes the wrong file. This is very annoying, anyone has an idea how to solve this? OpenGL wglSetCurrent() and render to texture fails Zylann replied to Zylann's topic in Graphics and GPU ProgrammingOk, nevermind, I found it. Just forgot to disable depth test before drawing the full-viewport quads on render textures... Now I know RenderTextures have nothing to do with this :) Simple 3D transform in 2D GL_Ortho ? Zylann replied to FlyingSolo's topic in Graphics and GPU ProgrammingMaybe you can pre-transform the vertices of your model before drawing (so it will look like a 3D ellipse), and animate the UVs so you can make it visually rotate. If you want the model itself to rotate, you'll have to use a vertex shader and appropriate perspective projection matrices. How to handle multiple lights? Zylann replied to Zylann's topic in Graphics and GPU ProgrammingOk, so I think I'm going to implement a deferred system. After a few researches I found this post, which confirms the need for that technique. I like the idea to separates passes, it makes shader code easier to maintain and "encapsulate". However, it changes the way they need to be written (if I'm not mistaken?) There seem to be various techniques, generally involving to smash attributes on framebuffers, and them transform them into the final pixel color through fragment shader passes. I wonder what would be the best memory/performance tradeoff? Having a data-based configurable pipeline could be a cool thing to implement too, because I try to be generic when building my engine. OpenGL Modern OpenGL Tutorials Zylann replied to Glass_Knife's topic in Graphics and GPU ProgrammingI don't know how "modern" it is, but anyway, here is more for OpenGL 3.3 on Windows, from scratch (without windowing library) OpenGL wglSetCurrent() and render to texture fails Zylann posted a topic in Graphics and GPU ProgrammingHello, I'm trying to render with OpenGL 3.3 into multiple windows. The technique I'm using at the moment is to use only one context, and switch target windows. On win32, what I did so far is something like this: Initialization: - Create main context with a dummy, invisible window (HWND) - Create one or more visible windows - Set an OpenGL-compatible pixel format on those windows Rendering: - Call wglMakeCurrent() before rendering on any of these windows - Render (OpenGL calls) - After rendering, call SwapBuffer on every window My first test only targets one window, but even then I don't see anything, the screen stays black If I resize the window, I see the clear color appear, but nothing more. However, if I bypass post-processing effects (working with render to textures) during execution with a shortcut, I see my animated scene Once I re-enable effects, I see two static frames flickering rapidly: one with the effects, the other without. Both don't animate. My application worked perfectly fine before I introduced this multi-window technique, and I get no errors so far... I previously tried with shared contexts, but got the same weird result. So... I'm wondering, do I missed something about render to textures? Note: I'm using raw win32 and OpenGL with Glew, no other libraries involved. EDIT: it's "wglMakeCurrent" for the title of the topic, but I don't know how to change it. How to handle multiple lights? Zylann posted a topic in Graphics and GPU ProgrammingHello, I'm currently adding lighting to my game engine, and it's already fine with 1 light (directional, point or spot...). However, how can I handle multiple lights efficiently? I know I can write a shader with uniform arrays where each element is a light struct, or multiple arrays or primitives. Then in a for loop, each light would contribute to the final pixel color. uniform Light u_Lights[32]; //... for(int i = 0; i < 32; ++i) { outputColor = /* light contribution */ } However, if I want to support N lights, would I have to write my shaders so they take N lights in those uniform arrays? If I create 2 lights only, will I have to loop through 32 lights just to avoid branching? Unless branching is OK if N remains constant? Then, what if N can change at runtime? Do I have to recompile every shader that uses lights just to reset the number of lights as a compile-time constant? That's a lot of questions, but they generally refer to the same problem: compile-time VS runtime performance. What would be the best, general purpose approach? OpenGL Nothing visible when doing indexed drawing Zylann replied to Zylann's topic in Graphics and GPU ProgrammingBecause getIndices() returns const std::vector<unsigned int> & OpenGL No matching overloaded function found: mix Zylann replied to Zylann's topic in Graphics and GPU ProgrammingLooking at this doc, I thought it was supported... Ok, so I wrote this instead: mat4 lerpMatrix(mat4 a, mat4 b, float t) { return a + (b - a) * t; } Not sure if it's the best method, but now the shader compiles. OpenGL No matching overloaded function found: mix Zylann posted a topic in Graphics and GPU ProgrammingI'm currently porting Oculus Rift's shaders to GLSL 330, but I'm getting strange errors: ERROR: 0:36: error(#202) No matching overloaded function found: mix ERROR: 0:36: error(#160) Cannot convert from: "const float" to: "highp 4X4 matrix of mat4" ERROR: error(#273) 2 compilation errors. No code generated At this line: mat4 lerpedEyeRot = mix(u_EyeRotationStart, u_EyeRotationEnd, in_TimewarpLerpFactor); I also tried to convert the last mix parameter to a mat4, with no luck I'm using OpenGL 3.3 on Windows 7, my graphic card is an AMD Radeon HD 6670. Here is the vertex shader at time of writing: #version 330 layout (location = 0) in vec3 in_Position; //layout (location = 1) in vec4 in_Color; layout (location = 2) in vec2 in_TexCoord0; // R layout (location = 3) in vec2 in_TexCoord1; // G layout (location = 4) in vec2 in_TexCoord2; // B layout (location = 5) in float in_Vignette; layout (location = 6) in float in_TimewarpLerpFactor; uniform vec2 u_EyeToSourceUVScale; uniform vec2 u_EyeToSourceUVOffset; uniform mat4 u_EyeRotationStart; uniform mat4 u_EyeRotationEnd; out float v_Vignette; smooth out vec2 v_TexCoord0; smooth out vec2 v_TexCoord1; smooth out vec2 v_TexCoord2; vec2 TimewarpTexCoord(vec2 TexCoord, mat4 rotMat) { // Vertex inputs are in TanEyeAngle space for the R,G,B channels (i.e. after chromatic // aberration and distortion). These are now "real world" vectors in direction (x,y,1) // relative to the eye of the HMD. Apply the 3x3 timewarp rotation to these vectors. vec3 transformed = vec3( mul(rotMat, vec4(TexCoord.xy, 1.0, 1.0)).xyz); // Project them back onto the Z=1 plane of the rendered images. vec2 flattened = (transformed.xy / transformed.z); // Scale them into ([0,0.5],[0,1]) or ([0.5,0],[0,1]) UV lookup space (depending on eye) return u_EyeToSourceUVScale * flattened + u_EyeToSourceUVOffset; } void main() { mat4 lerpedEyeRot = mix(u_EyeRotationStart, u_EyeRotationEnd, in_TimewarpLerpFactor); // ERROR HERE v_TexCoord0 = TimewarpTexCoord(in_TexCoord0, lerpedEyeRot); v_TexCoord1 = TimewarpTexCoord(in_TexCoord1, lerpedEyeRot); v_TexCoord2 = TimewarpTexCoord(in_TexCoord2, lerpedEyeRot); gl_Position = vec4(in_Position.xy, 0.5, 1.0); v_Vignette = in_Vignette; // For vignette fade } OpenGL Nothing visible when doing indexed drawing Zylann replied to Zylann's topic in Graphics and GPU ProgrammingOk, I see. I will update my code soon anyways, I did things this way because it was simpler at time of writing. Thank you :)
https://www.gamedev.net/profile/198940-zylann/?tab=topics
CC-MAIN-2018-05
refinedweb
2,320
55.64
XML has become the preferred language of integration. While JSON has its adherents and there are many proprietary text file formats used by various APIs, XML is the lingua franca. At one time, it seemed BPEL4WS would become the standard. While it also has its adherents, it is only one of many XML based approaches. While XML’s ubiquity makes it a preferred language, it is not without overhead. XML data typically includes both the raw data and metadata, which describes the data. An integration platform can take much of the heavy lifting out of XML handling. That’s just one of the many reasons why an integration platform should be used to interface applications, but it is an important one. This blog entry provides a technical overview of the XML handling provided by the Magic xpi Integration Platform. For a detailed “deep dive,” you can consult the documentation on XML Handling. So what can Magic xpi do for you when it comes to XML handling? Magic xpi allows you to build integration flows taking a “drag, drop and configure” approach. When you place the XML Handling component in a flow, Magic xpi automatically opens the Component Properties dialog box for the component and you can specify your XML source type as a file or as a variable. Magic xpi Integration Platform supports a wide variety of XML handling methods that make it much easier to deal with XML in an integration flow. The platform removes the need for any programming and instead allows you to use its built-in methods for XML manipulation. Here is what you will be able to do with the Magic xpi XML Handling component: Use Aliases. One of the first things you can do to simplify working with XML is use aliases. An alias provides an easy way to reference a namespace URI. With the Magic xpi Set Namespace method you can associate an alias with a namespace URI for the root element. The Magic xpi method known as Get Alias likewise retrieves the alias associated with a namespace for the root element. A word about Elements vs. Attributes. When working with XML, information can be contained in either elements or attributes. There are no real rules about when to use elements or attributes. The same information can be stored as either an element or an attribute. In general, however, it is usually best to use attributes for metadata and nested child elements for data. In the following two examples, the same information is presented: <customer loyaltytype="Platinum"> <firstname>Fred</firstname> <lastname>Smith</lastname> </customer> <customer> <loyaltytype>Platinum</loyaltytype> <firstname>Fred</firstname> <lastname>Smith</lastname> </customer> In the first example Loyalty Type is an attribute. In the second example, Loyalty Type is a child element. With Magic xpi, you can easily check to see if an element or attribute exists and count the number of instances that it exists. The Magic xpi Check Exists method returns a true value if an XML element or an XML attribute is located by the XML's element path. The Magic xpi Count method returns the number of occurrences of an XML element or an XML attribute according to its path. Working with XML Elements. XML Elements can be either Alphanumeric text or BLOBs. There are two Magic xpi methods that apply to both types of elements. The Magic xpi Find Element Index method returns the index of an XML element that has a value equal to a specified value. Likewise, the Magic xpi Delete Element method deletes an XML element. Working with Alpha Elements. The Magic xpi Get Element Alpha Value method returns the Alpha value of an XML element or an XML attribute according to its element path. The Magic xpi Insert Element Alpha method inserts an XML element with an Alpha value at a specified location in an XML document. The Magic xpi Modify Element Alpha method modifies the Alpha value of an XML element. Working with Binary Large Objects (BLOB) in XML. A BLOB is a binary large object. These are useful to include in an XML that is being used for integration as they can pass the data in a binary format. The Magic xpi Get Element Blob Value method returns the Blob value of an XML element or an XML attribute according to its element path. The Magic xpi Insert Element Blob method inserts an XML element with a Blob value at a specified location in an XML document. The Magic xpi Modify Element Blob method modifies the Blob value of an XML element. Working with XML Attributes. The Magic xpi Get Attribute Value method returns the value of an XML element or an XML attribute according to its element path. The Magic xpi Insert Attribute method inserts an XML element at a specified location in an XML document or adds an attribute to an existing XML element. The Magic xpi Modify Attribute method modifies the value of an attribute whereas Delete Attribute deletes an XML attribute. A Word About XML Encoding. Magic xpi includes methods that can be used to convert XML content from one character code to another to use that content without affecting the structure or validity of an XML document. Most, but certainly not all, XML uses UTF-8 by default. If you want to access or use special character sets, then you need to specify them. In Magic xpi, the Get XML Encoding method retrieves the encoding of an XML document and the Set XML Encoding method sets the encoding of an XML document that was opened for Write access. Be Sure to Validate. Just as with parking, make sure you always validate your XML. It’s just good practice and saves you from issues down the line. With the Magic xpi Integration Platform, the Validate method returns a list of validation errors. You should remember, however, that any changes made to an XML document will not be seen until the XML Handling step is complete. In my next entry, we’ll discuss XML transformation using the Magic xpi XSLT component.
https://it.toolbox.com/blogs/glennjohnson/integration-basics-xml-handling-081012
CC-MAIN-2018-13
refinedweb
1,014
63.09
21 June 2012 15:57 [Source: ICIS news] MOSCOW (ICIS)--?xml:namespace> In January-May, the country's total output of mineral fertilizers was down by 3.7% year on year at 7.707m tonnes, according to the agency. Nitrogen fertilizers output was up by 0.5% year on year at 3.554m tonnes, while ammonia production was 5.6% down year on year at 5.880m tonnes, the agency said. However, in May 2012, the country's total output of mineral fertilizers was 2.2% up year on year at 1.684m tonnes. Last month, nitrogen fertilizers output was up by 3% year on year at 722,000 tonnes, it said, but ammonia output was still 1.6% down year on year at 1.259
http://www.icis.com/Articles/2012/06/21/9571755/russias-january-may-fertilizer-output-falls-year-on-year.html
CC-MAIN-2014-52
refinedweb
126
79.87
Beginners in the field of data science who are not familiar with programming often have a hard time figuring out where they should start. With hundreds of questions about how to get started with Python for DS on various forums, this post (and video series) is my attempt to settle all those questions. I'm a Python evangelist that started off as a Full Stack Python Developer before moving on to data engineering and then data science. My prior experience with Python and a decent grasp of math helped make the switch to data science more comfortable for me. So, here are the fundamentals to help you with programming in Python. Before we take a deep dive into the essentials, make sure that you have set up your Python environment and know how to use a Jupyter Notebook (optional). A basic Python curriculum can be broken down into 4 essential topics that include: - Data types (int, float, strings) - Compound data structures (lists, tuples, and dictionaries) - Conditionals, loops, and functions - Object-oriented programming and using external libraries Let's go over each one and see what are the fundamentals you should learn. 1. Data Types and Structures The very first step is to understand how Python interprets data. Starting with widely used data types, you should be familiar with integers (int), floats (float), strings (str), and booleans (bool). Here's what you should practice. Type, typecasting, and I/O functions: - Learning the type of data using the type()method. type('Harshit') # output: str - Storing values into variables and input-output functions ( a = 5.67) - Typecasting — converting a particular type of variable/data into another type if possible. For example, converting a string of integers into an integer: astring = "55" print(type(astring)) # output: <class 'str'> astring = int(astring) print(type(astring)) # output: <class 'int64'> But if you try to convert an alphanumeric or alphabet string into an integer, it will throw an error: Once you are familiar with the basic data types and their usage, you should learn about arithmetic operators and expression evaluations (DMAS) and how you can store the result in a variable for further use. answer = 43 + 56 / 14 - 9 * 2 print(answer) # output: 29.0 Strings: Knowing how to deal with textual data and their operators comes in handy when dealing with the string data type. Practice these concepts: - Concatenating strings using + - Splitting and joining the string using the split()and join()method - Changing the case of the string using lower()and upper()methods - Working with substrings of a string Here’s the Notebook that covers all the points discussed. 2. Compound data structures (lists, tuples, and dictionaries) Lists and tuples (compound data types): One of the most commonly used and important data structures in Python are lists. A list is a collection of elements, and the collection can be of the same or varied data types. Understanding lists will eventually pave the way for computing algebraic equations and statistical models on your array of data. Here are the concepts you should be familiar with: - How multiple data types can be stored in a Python list. - Indexing and slicing to access a specific element or sub-list of the list. - Helper methods for sorting, reversing, deleting elements, copying, and appending. - Nested lists — lists containing lists. For example, [1,2,3, [10,11]]. - Addition in a list. alist + alist # output: ['harshit', 2, 5.5, 10, [1, 2, 3], 'harshit', 2, 5.5, 10, [1, 2, 3]] Multiplying the list with a scalar: alist * 2 # output: ['harshit', 2, 5.5, 10, [1, 2, 3], 'harshit', 2, 5.5, 10, [1, 2, 3]] Tuples are an immutable ordered sequence of items. They are similar to lists, but the key difference is that tuples are immutable whereas lists are mutable. Concepts to focus on: - Indexing and slicing (similar to lists). - Nested tuples. - Adding tuples and helper methods like count()and index(). Dictionaries These are another type of collection in Python. While lists are integer indexed, dictionaries are more like addresses. Dictionaries have key-value pairs, and keys are analogous to indexes in lists. To access an element, you need to pass the key in squared brackets. Concepts to focus on: - Iterating through a dictionary (also covered in loops). - Using helper methods like get(), pop(), items(), keys(), update(), and so on. Notebook for the above topics can be found here. 3. Conditionals, Loops, and Functions Conditions and Branching Python uses these boolean variables to assess conditions. Whenever there is a comparison or evaluation, boolean values are the resulting solution. x = True ptint(type(x)) # output: <class bool> print(1 == 2) # output: False The comparison in the image needs to be observed carefully as people confuse the assignment operator ( =) with the comparison operator ( ==). Boolean operators (or, and, not) These are used to evaluate complex assertions together. or— One of the many comparisons should be true for the entire condition to be true. and— All of the comparisons should be true for the entire condition to be true. not— Checks for the opposite of the comparison specified. score = 76 percentile = 83 if score > 75 or percentile > 90: print("Admission successful!") else: print("Try again next year") # output: Try again next year Concepts to learn: if, else, and elifstatements to construct your condition. - Making complex comparisons in one condition. - Keeping indentation in mind while writing nested if/ elsestatements. - Using boolean, in, is, and notoperators. Loops Often you'll need to do a repetitive task, and loops will be your best friend to eliminate the overhead of code redundancy. You’ll often need to iterate through each element of a list or dictionary, and loops come in handy for that. while and for are two types of loops. Focus on: - The range()function and iterating through a sequence using forloops. whileloops age = [12,43,45,10] i = 0 while i < len(age): if age[i] >= 18: print("Adult") else: print("Juvenile") i += 1 # output: # Juvenile # Adult # Adult # Juvenile - Iterating through lists and appending (or any other task with list items) elements in a particular order cubes = [] for i in range(1,10): cubes.append(i ** 3) print(cubes) #output: [1, 8, 27, 64, 125, 216, 343, 512, 729] - Using break, pass, and continuekeywords. List Comprehension A sophisticated and succinct way of creating a list using and iterable followed by a for clause. For example, you can create a list of 9 cubes as shown in the example above using list comprehension. # list comprehension cubes = [n** 3 for n in range(1,10)] print(cubes) # output: [1, 8, 27, 64, 125, 216, 343, 512, 729] Functions While working on a big project, maintaining code becomes a real chore. If your code performs similar tasks many times, a convenient way to manage your code is by using functions. A function is a block of code that performs some operations on input data and gives you the desired output. Using functions makes the code more readable, reduces redundancy, makes the code reusable, and saves time. Python uses indentation to create blocks of code. This is an example of a function: def add_two_numbers(a, b): sum = a + b return sum We define a function using the def keyword followed by the name of the function and arguments (input) within the parentheses, followed by a colon. The body of the function is the indented code block, and the output is returned with the return keyword. You call a function by specifying the name and passing the arguments within the parentheses as per the definition. More examples and details here. 4. Object-Oriented programming and using external libraries We have been using the helper methods for lists, dictionaries, and other data types, but where are these coming from? When we say list or dict, we are actually interacting with a list class object or a dict class object. Printing the type of a dictionary object will show you that it is a class dict object. These are all pre-defined classes in the Python language, and they make our tasks very easy and convenient. Objects are instance of a class and are defined as an encapsulation of variables (data) and functions into a single entity. They have access to the variables (attributes) and methods (functions) from classes. Now the question is, can we create our own custom classes and objects? The answer is YES. Here is how you define a class and an object of it: class Rectangle: def __init__(self, height, width): self.height = height self.width = width def area(self): area = self.height * self.width return area rect1 = Rectangle(12, 10) print(type(rect1)) # output: <class '__main__.Rectangle'> You can then access the attributes and methods using the dot(.) operator. Using External Libraries/Modules One of the main reasons to use Python for data science is the amazing community that develops high-quality packages for different domains and problems. Using external libraries and modules is an integral part of working on projects in Python. These libraries and modules have defined classes, attributes, and methods that we can use to accomplish our tasks. For example, the math library contains many mathematical functions that we can use to carry out our calculations. The libraries are .py files. You should learn to: - Import libraries in your workspace - Using the helpfunction to learn about a library or function - Importing the required function directly. - How to read the documentation of the well-known packages like pandas, numpy, and sklearn and use them in your projects Wrap up That should cover the fundamentals of Python and get you started with data science. There are a few other features, functionalities, and data types that you’ll become familiar with over time as you work on more and more projects. You can go through these concepts in GitHub repo where you’ll find the exercise notebooks as well: Here is 3-part video series based on this post for you to follow along with: Data Science with Harshit You can connect with me on LinkedIn, Twitter, Instagram, and check out my YouTube channel for more in-depth tutorials and interviews. If this tutorial was helpful, you should check out my data science and machine learning courses on Wiplane Academy. They are comprehensive yet compact and helps you build a solid foundation of work to showcase.
https://www.freecodecamp.org/news/python-fundamentals-for-data-science/
CC-MAIN-2021-49
refinedweb
1,717
62.17
Post your Comment The Hashtable Class The Hashtable Class In this section, you will learn about Hashtable and its implementation with the help of example. Hashtable is integrated... for complete list of Hashtable's method. EXAMPLE import java.util.*; public Java collection -Hashtable Java collection -Hashtable What is Hashtable in java collection? Java collection -Hashtable;- The hashtable is used to store value... { public static void main(String [] args){ Map map = new Hashtable J2ME HashTable Example J2ME HashTable Example To use the HashTable, java.util.Hashtable package must be imported into the application. Generally HashTable are used to map the keys to values Java Hashtable Iterator be traversed by the Iterator. Java Hashtable Iterator Example import java.util.*; public class hashtable { public static void main(String[] args) { Hashtable hastab = new Hashtable(); hastab.put("a", "andrews Doubts regarding Hashtable - Java Beginners it possible to create a hashtable like this? java.util.Hashtable hashtable=new...(12,13,10,1)); since we get the key of hashtable from the database. When I tried... then a constructor is no return any value. so your example is occuring a null pointer exception Java Collection : Hashtable Java Collection : Hashtable In this tutorial, we are going to discuss one of concept (Hashtable ) of Collection framework. Hashtable : Hashtable.... When you increase the entries in the Hashtable, the product of the load Java hashtable Java hashtable What is hash-collision in Hashtable and how it is handled in Java Java hashmap, hashtable Java hashmap, hashtable When are you using hashmap and hashtable hashtable - Java Beginners hashtable pls what is a hashtable in java and how can we use it to compine queue and stacks. thnks in advance. Hi , A hash-table... to Roseindia"); Hashtable hash = new Hashtable(); hash.put("amar","amar"); hash.put multiple fields in hashtable Hashtable java prog - Java Interview Questions Hashtable java prog Create a hashtable with some student hall ticket numbers and their results. when we type a hallticket number,it shud display the results? please provide the detail java code for this? thanks in advance Hashtable java prog - Java Interview Questions Hashtable java prog Create a Hashtable with some students hall ticket numbers and their results. when you type a hall tickets number,it shud display... bal; Hashtable table = new Hashtable(); table.put( new Integer(1111),"Selected Media MIDlet Example Media MIDlet Example  ... created an 'item' object for Hashtable and put all 'wav' file in this table with object 'key' and object 'value' which maps the keys to value. In this example we Break a Line for text layout ; In this section, we are providing you an example... is used. The class Hashtable maps keys to values. A string is defined...;new Hashtable(); private static  example example example on Struts framework example example i need ex on struts-hibernate-spring intergration example Struts Spring Hibernate Integration Example J2EE Tutorial - Running RMI Example J2EE Tutorial - Running RMI Example greeter.java import java.rmi....; Hashtable hash1 = new Hashtable(); hash1.put(" Video Player MIDlet Example Video Player MIDlet Example This example is all about how to play the MPEG files on your...;form.setCommandListener(this); items = new Hashtable Collections in Java , LinkedList, Stack and Hashtable. Collections is a group of objects. Objects can... is the example of HashSet collection that displays the methods to add, remove...( "Collection Example!\n" ); int size; // Create a collection HashSet Create JTree using an Object object that works with Hashtable. Program Description: This program... the init() method that uses a Hashtable object to store data .This object is added...;void init(){ Hashtable hash = new Hashtable Example to show Hash table exception in java Example to show Hash table exception in java...;hashtable = new Hashtable(); hashtable.put("Girish"...;Elements in the Hashtable"); Enumeration e  J2ME Enumeration Example J2ME Enumeration Example  ... that can be implement by an object. After going through the example you will be able to run a video file using interface enumeration in J2ME. Interface HashMap in Java is almost equal to HashTable, the only difference is that HashMap allows null... provides a collection-view of the values in the map. Example Map ;Please visit the following links: HashTable Example ArrayList Example HashMAp Example...Map Hi how can we retrieve key - value pair from hashtable Post your Comment
http://roseindia.net/discussion/22808-J2ME-HashTable-Example.html
CC-MAIN-2013-20
refinedweb
706
58.28
Using SQLite in Windows Store Apps - Posted: Nov 14, 2012 at 3:43 PM - 90,346 Views - 52 Comments ![if gt IE 8]> <![endif]> Something went wrong getting user information from Channel 9 Something went wrong getting user information from MSDN Something went wrong getting the Visual Studio Achievements Right click “Save as…” In this episode, Robert shows you how to add SQLite support to your Windows Store apps. SQLite is a free open source library that provides a self-contained transactional SQL database engine. Watch as Robert demonstrates how to setup a project to use SQLite as well as how to query, add, update, and delete items in the database. Check out Robert's blog post to see how to create the sample app he have just finished a new app (currently going through submission) called NewHouse. I used Sqlite for the data source. It's fantastic, especially when combined with RoamingStorage for cross-device apps. Can we create portable class library that refer SQLITE and act as model (MVVM) for both Win8 & WP8 app? How does this work with all async/await statements ? I can see that you are opening/close db each time you take something from it and was wondering how this will work if we will use async/awaits and at some point two different function will try to manipulate on db. kind give us your thoughts around that thanks M. Since SQLite.dll is native dll, existing for x32/x64/arm, you have to make/test/publish 3 different versions of your app - for each platform. Welcome to native hell ;) Can the SQLLite data be encrypted ? @Lex: You will need to build 3 different app packages, one for each architecture and submit each of them to the Store. You don't need different code bases, unless you have architecture specific code, so you should be able to fully test one architecture and then smoke test the others. Robert can I make an invoice to print and reports like pdf with SQLite and can I using a network Metro application with SQLite. Thansk Good tool I use for SQLite Excellent! This is exactly what I was looking for. Thanks Robert! Thanks For This Video Seriously guys, I kind of remember there is a Server and Tools Division inside Microsoft, which happens to have a product called SQL Server, and something like SQL Express, SQL Embeded, SQL Compact, SQL Whatever. Are they all dead? Wecarpool.com: That link () says, "Supported platforms: Windows 2000, XP, Vista, 7." So it doesn't support Windows 8? Best tool for SQLite: SQLite Administrator (Supported Win 2000 Win XP Win Vista) Had ZERO issues using it with Windows 8. I just uploaded the sample to the MSDN Code Samples. You will have to add the references and get the SQLite-net package, but you won't have to write any of the code. Hi Guys, any idea why SQLite3.Open(path, out handle) method can not open any file out side ApplicationData.Current\...? I am trying to unittest my class library which uses sqlite but I want to keep the data for other tests. (the point is that the ApplicationData.Current will remove after each test because my library is not a user interface). and SQLite3.Open(path, out handle) can not open any file in DocumentsLibrary nor DownloadsFolder. Thanks in advance @Clay Shannon: Windows 8 is Windows 7 + RT. It works on Desktop in Win8 no problem Yeah, seriously, the presenter is writing an APP to manage his APPS. Great. That's a real world example of why someone would need a robust FULLY SUPPORTED RDBMS on a device lol. Well Robert, here in the real world, where we have apps written that, as an example, would do the following. 1.) Route sheet with over 50 calls per day/per driver handling sales for a major international Energy drink distributor. That's 70 routes x 50 sales calls/invoices per day. 2.) Sales history for each customer (over 1000 customers per device) for the last 24 months that can be recalled and printed while at the store dealing with the customer 3.) New invoices from the route created that day replicated in real time to the SQL Server at head office 4.) New route sheet for the next day (and 14 day peak into the future for driver planning) 5.) Multiple "Surveys" regarding shelving and presentation for every store on the route. Data is collected while the driver is at the store and immediately replicated to head office for analysis 6.) Real time "chatting" with the drivers with entire dialogue stored on the LOCAL SQL CE DATABASE. blah, what a disaster. Try the above with SQL Lite. If I/We were in the business of writing 99 cent apps, we would be coding for android or IOS. We chose MS because we code APPs for businesses, not 18 year old college students. I am inspired... to code for the android. Unfortunately, I agreed with the client, and moved them over to an iPhone web based app as I saw (and now confirmed) no use for MS in the device market place. what a gong show Hello Robert i'm new to application development this was really nice explanation of sqlite, i got the answers of many doubts i was looking for, however in the MSDN Code Samples provided when i compiled it gave two errors Error 1 The name 'ex' does not exist in the current context C:\Users\Downloads\New folder\Using SQLite in a Windows Store App\C#\SQLiteDemo\SQLite.cs 722 19 SQLiteDemo Error 2 The name 'ex' does not exist in the current context C:\Users\Downloads\New folder\Using SQLite in a Windows Store App\C#\SQLiteDemo\SQLite.cs 767 18 SQLiteDemo is there something that i missed ?? @Mustansir: I just uploaded a new version of this. Download that and follow the Building the Sample instructions. Let me know if that doesn't work. Robert When i try run your Simple SqLite demo application it is getting an exception as displaying 'System.DllNotFoundException' occured, & Unable to load DLL 'sqlite3' What is Team foundation server, Why i have to integrate with it. When i start your sample app it is asking for Team Foundation server Version control "". Please help me, Thanks in advance. @Devendra: Did you follow the Building the Sample instructions? You need to download the SQLite for Windows Runtime extension for Visual Studio. And you can safely ignore the prompt for Team Foundation Server. Robert Hi Robert, Thank you for your tutorial... i am a beginer and i am interested in the development of Winsdows Stores application with MVVM-pattern. I have download your SQLiteDemo and wanted to tested but there is not "start-app". Which template may i select (empty store template)?. I want to develope some store-application with my own page like login-page. Thank, Jibyz @jibyz: You can look at my blog post, where I show how to build this app from scratch. You can also look at Create your first Windows Store app walkthroughs on MSDN. They have them for XAML and for JavaScript apps. Robert Hi Robert, i have retried, but i don't receive the information: and the there was no file added into the Common folder. I just have the StandardStyle.xaml in this folder. Maybe i do something wrong... please can you help me? I have your tutorial step by step... Thanks, Jibyz @jibyz: If you add a new XAML page using the Basic Page item template, you should see that dialog. If you add a new XAML page using the Blank Page item template, you won't see that. So make sure you use the Basic Page template. And let me know if that doesn't work. Robert @rogreen How can I make relationship? I used to do that with the entity framework : public virtual ICollection<KCExample> examples { get; set; } public virtual KCTrade trade { get; set; } But it seems to be unsupported... What's the right syntax? Hi Robert, great video tutorial. Is it possible to create a Windows Store App using SQLite in Visual Basic? I'm having difficulties with the "Imports.SQLite" statement. Thanks @silvioribeiro:Hi Silvio. Bill Burrows has an example of using SQLite with VB. He has a VB front end that talks to the C# code in the SQLite-net NuGet package. Looks pretty straightforward. Robert @rogreen that's exactly what I needed to get started. I'm definitely heading in the right direction now. Thank you for the link and for the great tutorial! Thanks for the video Robert! Is it a good idea to instantiate a new SQLiteConnection each time your interact with the database or could I re-use a single connection over and over again? Does anyone have a best-practice? Thanks. Thanks Robert, it is just what I need for my Windows 8 project. Others had told me to us XML and I was banging my head against the wall. This will let me do exactly what I need for my app. Thanks again Dave Thanks for the video Robert! Can we create portable class library that refer SQLITE and act as model (MVVM) for both Win8 & WP8 app? Perfect! Thank you for this tutorial... Can someone please tell me from where I can locate the database on my Disk so I can modify it using Mozilla Fire Fox? Thanks Thank you for the video I learnt a lot, can I get the source code for the app? @job mwaniki: You can find the sample at. Let me know if you have any questions or comments. Robert Thanks again for the project. Is there a way to automatically sync between devices through cloud and SQLite? What if one device adds data while no internet (offline sync) is available how would it sync back to other devices once internet connection is restored... @Podster: There are some people who have updated the open source Microsoft Sync Framework to work with SQLite on Windows 8. That would be a good place to start. Two good examples are: Robert Why when I change the data and run the app again the data saved are not saved? @Joabe Tavares: Check in the OnLaunched method in the App.xaml.cs file. If there is a call to ResetData, comment it out. ResetData populates the tables the first time you run the app. But if you leave it in there, it will do this every time. Robert Hi Robert, Please can you help. Still on SQLite with Windows Phone. I have found a solution to the type or namespace error with Community However, There are few issues with SQLite and Windows Phone. Please are you able to do a show on Using SQLite with Windows Phone 8. It will be much appreciated. Thanks how i can protect my sqlite file from getting stolen? Hi, Thanks for the video. After you permission, What about windows 8.1 and Visual Studio 2013? @Ghada: There is a Windows 8.1 version of the SQLite for Windows Runtime extension for use with Visual Studio 2013. Check out. Robert X @Robert Thank you very much. Please, how I can get the microsoft visual c++ runtime package for visual studio 2013? @Ghada:You need the C++ 2013 Runtime Package (version 12), which targets .NET Framework 4.5.1. The C++ Runtime Package (version 11) targets .NET Framework 4.5 and is greyed out when you are on Windows 8.1. Thanks again Robert I am very confused. My app required (cloud not local) database that contains images and tables.........,to let people see the updated images and data in the database and enabling them to insert their data. My Question: Is SQLite can help me? @Ghada: As we discussed offline, you should look into Azure Mobile Services for this scenario. Check out. Robert Robert, first thanks for this! I am changing a program from a WPF desktop app to a Windows 8.1 app and I have been killing myself with Data since my previous app used a relation database (SQLCE). My question to you is this (and this is my first time using SQLite): Where does the app store the data, I know it is in local storage, but I noticed when I first start up the app, there is already data populated in it. So what is populating the data? I didn't see anything in the code creating the database. The reason I am asking is I want to script my SQLCe database (which i did through SQLite command line) so I have data in the database I can manipulate and test with. Thanks! @Joe: In the OnLaunched method in App.xaml.cs, you will find a call to ResetData. This populates the database with the sample data. Robert @Rogreen Thanks for pointing that out! Now its time to play with this and see if I can get it to work! According to @Ghada: As we discussed offline, you should look into Azure Mobile Services for this scenario. Check out. Thank you so much. Remove this comment Remove this threadclose
https://channel9.msdn.com/Shows/Visual-Studio-Toolbox/Using-SQLite-in-Windows-Store-Apps?format=flash
CC-MAIN-2015-35
refinedweb
2,196
74.59
Hello. I'm new here and this is my first post ^_^. I started C++..a few hours ago, I had some skills over some other programming languages so it was easy for me to learn the first few steps. I made up my own little code to try and learn and maybe learn more from it. Here is my code: I'd appriciate your help and just one more question.I'd appriciate your help and just one more question.Code:#include <iostream> using namespace std; int main() { char m; int sub; int sub2; cout<<"If you type s subtraction will start: "; cin>>m; if (m == s) { cout<<"Please enter your first number: "; cin>>sub; cout<<"Please enter your second number now: "; cin>>sub2; cout<<"The results are "<<sub - sub2 <<"\n; cin.ignore(); } else { cout<<"You Messed Up"; } cin.get(); } I use Dev C++ and I was wondering if anyone can tell me what is the best compiler to use?. `Thank You
http://cboard.cprogramming.com/cplusplus-programming/60053-what-would-wrong-little-code.html
CC-MAIN-2015-11
refinedweb
163
80.41
5.2 void Method Look at the following program that demonstrate how a method is defined and called. In this program, we have defined a method displayLine that displays a line. Execution of program start from main and when it encounters statment displayLine() control passes to method and after execution of the code of method control comes back to next statement of main method. /** * This program defines and calls a method. */ public class MethodDemo { public static void main(String[] args) { System.out.println("Advantage of methods."); displayLine(); System.out.println("Write once use many times"); displayLine(); } /** * The displayLine method displays a line. */ public static void displayLine() { for (int i = 1; i <= 40; i++) { System.out.print("_"); } System.out.println(" "); } } Output : Advantage of methods. ________________________________________ Write once use many times ________________________________________ Method definition Method definition has two parts, header and body. Following figure explain each of these parts. Calling a Method To call a method, simply type the name of the method followed by a set of parentheses. As we used in above example. displayLine();
http://www.beginwithjava.com/java/methods/void-methods.html
CC-MAIN-2018-30
refinedweb
175
58.79
Psycopg2 concurrency issue Reported by Psycopg website | February 22nd, 2012 @ 09:38 PM Submitted by: Phani I am trying to do share a psycopg2 connection between multiple threads. As was mentioned in the docs, I am doing that by creating new cursor objects from the shared connection, whenever I use it in a new thread. def delete(conn): while True: conn.commit() def test(conn): cur = conn.cursor() thread.start_new_thread(delete,(conn,)) i = 1 while True: cur.execute("INSERT INTO mas(taru,s) values (2,%s)",(i,)) print i i = i +1 conn.commit() After running, I get output like, 1 2 ... 98 99 Traceback (most recent call last): File "postgres_test_send.py", line 44, in cur.execute("INSERT INTO mas(taru,s) values (2,%s)",(i,)) psycopg2.InternalError: SET TRANSACTION ISOLATION LEVEL must be called before any query Daniele Varrazzo February 22nd, 2012 @ 11:03 PM - State changed from new to hold Strange race condition. However, more recent psycopg versions do without "set transaction isolation level". What you report is strange, because these versions used to execute a "begin; set isolevel ..." into the same command, I don't know how it would be possible to sneak a commit between the two. I'd be interested in knowing what would be the behaviour in the recent psycopg version. However I take your example as a pathological example, not really a reasonable behaviour: if you want every insert to be committed you should run the connection in autocommit mode, which will avoid begin, commit and set isolevel. Conversely if you wanted all the insert to be committed together the cnn.commit should be executed after the worker threads have joined. Closing this ticket as we don't plan to keep on supporting version 2.2.2. Feel free to re-open it if you notice a wrong behaviour in 2.4.4. Thank you. Daniele Varrazzo February 24th, 2012 @ 03:31 AM - State changed from hold to resolved I've looked into the problem. The issue has been mostly fixed in 2.4.2: it has gone mostly because SET ISOLATION LEVEL is not used anymore. However, a concurrency problem is still there in 2.4.4: commit() checks for the status outside the critical section, so it can decide to send a commit even if there is no need. The problem is fixed. I've added a script similar to the test you proposed into the test suite. Thank you very.
http://psycopg.lighthouseapp.com/projects/62710/tickets/103
CC-MAIN-2014-35
refinedweb
410
66.54
Archive Why are enums so tedious?). Dear Canonical… Parallel exercise: Palindromes and reversible words. I’ve been looking for projects to test various methods of work-distrubtion, without being yet-another-Fibonacci-series that has been done to death already, and I stumbled on the idea of finding all the reversible words and palindromes within a given range of lengths (3-7 letters). It immediately seemed like a fun challenge: the simplest approach would be to load an entire dictionary in from one of the open-source resources someplace, and simply iterate over each word and find those which still make a valid word when reversed. But just as easily, you could start ignorant of the lexicon and use brute (or genetic) force to find it… Remember, this is an exercise, it’s not really so much about finding the results. One of my variations used ZeroMQ to distribute tasks across 20 free Amazon EC2 servers: there are both Linux and Windows instances available for free (within limits) if you use “c1.micro”. Searching for palindromes and reversible words (“Elle”, “tuba” => “abut”) provides a number of sub-tasks which you can compartmentalize in a variety of different ways. For example: combine letters to generate a word, match word against a dictionary, reverse the word and match that against a dictionary. Stroustrup and the C++ community are right, which proves them wrong. When Bjarnes Stroustroup designed the C++ class concept, he made the default accessibility “private”, so as to encourage encapsulation and data hiding. That’s as far as he went, immediately violating his own principle. The more redeeming features of C++11 C++11 is the now ratified and gradually being implemented C++ standard that finally solidifed last year after far far too many years in design. The process by which C++11 was finally settled on leaves me thinking that the people on the committee are far, far too divorced from day-to-day C++ usage and the needs of the average software developer. I’m not talking about what they finally gave us, but how they reached the decision on what would be included, how it would be phrased, and so forth. Some of the features, especially lambdas, are going to be a real pain in the ass when they begin being commonly deployed, introducing write-once-scratch-head-forever crap into the language, and half-assed fixing solutions with copious obfuscation for lesser issues of the same source problem (virtual ‘override’). But somehow, someway, somefolks got some good stuff into C++11 that I think I will personally benefit from. auto While I may not be happy about the word itself, the “auto” feature is very nice. // Before typedef std::map< std::string, std::set<std::string> > StringSetMap ; StringSetMap ssm ; for ( StringSetMap::iterator it = ssm.begin() ; it != ssm.end() ; ++it ) // After std::map< std::string, std::set<std::string> > ssm ; for ( auto it = ssm.begin() ; it != ssm.end() ; ++it ) // Before Host::Player::State* playerState = (Host::Player::State*)calloc(1, sizeof(Host::Player::State)) ; // After auto playerState = (Host::Player::State*)calloc(1, sizeof(Host::Player::State)) ; range-based for One of the big concerns of the C++ standards committee was breaking pre-existing code. As a result, awful choices like naming auto “auto” were made. range-based for is another case of “I’d rather teach people visual basic than explain C++11 range-based for to them”, but in practice it’s really nice: // Before std::map<unsigned int, std::string> myMap ; // ... for ( std::map<unsigned int, std::string>::iterator it = myMap.begin() ; it != myMap.end() ; ++it ) { const unsigned int i = it->first ; const std::string& str = it->second ; // ... } // After std::vector<unsigned int> myUints ; // ... for ( auto it : myUints ) { const unsigned int i = it.first ; const std::string& str = it.second ; // ... } One thing I’m not clear on, with range-based-for is whether it copes with iterator validity, e.g. what happens (yeah, I know, I could try it, duh) if you do for ( auto it : myMap ) { if ( it.first == 0 ) myMap.erase(it) ; } extended enum definitions In the beginning, there were names. And the Stroustrop said, Let there be name spaces, that separate the code from the library. And he named the standard library “std::” and the rest “::”. And it was about bloody time. Unfortunately, “enum”s slipped thru the cracks. In C and C++ I find to enums be something of a red-headed stepchild, lacking a few really, really important features that programmers always end up resorting to sloppy bad practices to work around. In C++3 (before C++0x/C++11) there was no way to pre-declare them. If you wanted to declare a function prototype that accepted a LocalizationStringID enumeration, that mean’t including the whole bloody list of LocalizationStringIDs too. Compounding this issue is the fact that enums are exposed in the scope they are declared in, so they generally pollute namespaces, so being forced to continually include is a real pain in the butt. It also means you have to remember the special prefix that every enum list uses, because in order to hide stuff, people tend to make their enum names long. But the compiler couldn’t, otherwise, tell what sort of variable was going to be needed for the enum, and there was no way to specify it. Especially when you’re dealing with networking, this is an abject pain in the ass because it’s a variable who’s type you don’t control in a language without reflection there is no way to find out what it has been given, in a situation where you care a great deal about exactly how the data is stored. I’ll get to my other enum issues after I touch on what C++11 did do for enums. // Before // Localization string identifiers. // Try to keep these in-sync with the localization database, please. typedef enum LSTRING_ID { LSTR_NONE // No message, , LSTR_NOTE // NOTE: prefix ... , LSTR_SPILLED_BEER_ON_KEYBOARD // = #6100 as of 11/21/09 ... , LSTR_PREFIX_LINE = -1 // More text to follow. // ^- this may cause the compiler to use signed storage for the enum, // or the compiler might always use signed storage for enums, // or the compiler might never use signed storage. // Either way, it's going to lead to some interesting type-pun errors. }; extern void sendLString(playerid_t /*toPlayer*/, int /*lstringID*/) ; extern void sendLStringID(playerid_t /*toPlayer*/, LSTRING_ID /*lstringID*/); // ... LSTRING_ID x = 90210 ; // Valid but bad. sendLString(pid, LSTR_NONE) ; // Valid but bad practice. sendLString(pid, 99999) ; // Valid but not good. sendLStringID(pid, -100) ; // Valid but not good. sendLStringID(pid, Client::Graphics::RenderType::OpenGL) ; // Valid but OMFG LTC&P MORON! // Last but not least ... sendLStringID(pid, LSTR_SPILLED_BEER_ON_KEYBOARD) ; C++11 cleans up on enums big time. Firstly with enum class, and here, I think hats off to the committee for coming up with a rather nice syntax although they had to fudge it to avoid what I find a dumb caveat of the C++ class definition :) An enum class is a class-like namespace, complete with type safety, that contains an enumeration. Borrowing further from the class definition, it allows you to specify a base class which will be the underlying type used for the enumeration. If omitted, though, it’ll use the good old I-dunno-wtf-type-that-is enum type. enum class LStringIds : signed short { PrefixLine = -1 , None = 0 , Note ... , SpilledBeerOnKeyboard }; sendLStringID(playerID, LStringIds::SpilledBeerOnKeyboard) ; By using this class mechanism, you can also pre-declared enums now: enum class LSTRING_ID : short ; // VS11 Beta doesn't support this as of 2012-03-03, though it says it does. GCC 4.6 supports this, and it reduced game-server compilation time by about 8%. RAR! We can also use nice names for the LString IDs now without worrying about naming clashes. I would like to point out that anyone who knows C++ should have spotted that there is no access type specification, enum class Foo : public unsigned char { public: Fred ... } ; It makes no sense that the names in an enum class be private. But then it also makes no sense that the stuff in a class be private by default. Note: There is also “enum struct” … which as far as I understand is exactly the same, it’s just less likely to make you forget to put “public” at the top of your next real class declaration :) I’m going to write a separate post about my other gripes with enums :) time Not many people know this, but computers are utterly shit at keeping track of time. The hardware involved is a joke, and because of various historical bugs in each operating systems’ time keeping routines, it’s really bloody messy and expensive to get time in meaningful terms, never mind portable. For example, under Windows you have to pratt about with QueryPerformanceCounter, and then you have to make sure to check for time going backwards or leaping forwards, and stuff. C++11 doesn’t fix that, but it helps by providing functions to deal with a lot of this stuff in the standard library via the <chrono> header. Yay! The template origamists really came out and did their thing, there are some really nice features involved there, including the ability to create timing variables that include their quanta in their compile-time type information so that the compiler can do smart stuff like working out that you’re comparing seconds with minutes and what it needs to do to handle that situation… parallelism Visual. Recent Comments
https://kfsone.wordpress.com/2012/03/
CC-MAIN-2016-44
refinedweb
1,581
59.84
On Tue, 2004-09-28 at 13:38, Mike Waychison wrote:> -----BEGIN PGP SIGNED MESSAGE-----> Hash: SHA1> > John McCutchan wrote:> |> | --Why Not dnotify and Why inotify (By Robert Love)--> |> > | * inotify has an event that says "the filesystem that the item you were> | watching is on was unmounted" (this is particularly cool).> > | +++ linux/fs/super.c 2004-09-18 02:24:33.000000000 -0400> | @@ -36,6 +36,7 @@> | #include <linux/writeback.h> /* for the emergency remount stuff */> | #include <linux/idr.h>> | #include <asm/uaccess.h>> | +#include <linux/inotify.h>> |> |> | void get_filesystem(struct file_system_type *fs);> | @@ -204,6 +205,7 @@> |> | if (root) {> | sb->s_root = NULL;> | + inotify_super_block_umount (sb);> | shrink_dcache_parent(root);> | shrink_dcache_anon(&sb->s_anon);> | dput(root);> > This doesn't seem right. generic_shutdown_super is only called when the> last instance of a super is released. If a system were to have a> filesystem mounted in two locations (for instance, by creating a new> namespace), then the umount and ignore would not get propagated when one> is unmounted.> > How about an approach that somehow referenced vfsmounts (without having> a reference count proper)? That way you could queue messages in> umount_tree and do_umount..I was not aware of this subtlety. You are right, we should make sureevents are sent for every unmount, not just the last.John-To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.kernel.orgMore majordomo info at read the FAQ at
https://lkml.org/lkml/2004/9/28/154
CC-MAIN-2014-15
refinedweb
237
57.98
.applications.xmlimporter;12 13 import java.util.*;14 15 /**16 * This is a basic implementation of ObjectMerger.17 * It applies these rules:18 * <ul>19 * <li>Leave the fields of the merging objects unaffected.20 * <li>Move the relations of both to the merged object.21 * <li>Relations are considered duplicates when of same type and22 * with same source and destination.23 * <li>Add objects to the persistent cloud that are not present already.24 * </ul>25 *26 * @author Rob van Maris: Finalist IT Group27 * @since MMBase-1.528 * @version $Id: BasicMerger.java,v 1.5 2005/01/30 16:46:38 nico Exp $29 */30 public class BasicMerger implements ObjectMerger {31 32 /** Initialize this instance. This implementation simply stores33 * the initialization parameters.34 * @param params The initialization parameters, provided as35 * name/value pairs (both String).36 */37 public void init(HashMap params) {38 }39 40 /** Merge a field. This implementation leaves all fields unaffected.41 * @param name The name of the field.42 * (Note: "number" and "owner" are not considered fields in this context,43 * so this method will not be called with these values for name.)44 * @param tmpObj1 The first object to be merged. This will hold45 * the resulting merged object afterwards.46 * @param tmpObj2 The second object. this object must be deleted47 * afterwards.48 */49 public void mergeField(TmpObject tmpObj1, TmpObject tmpObj2, String name) {}50 51 /** Merge relations. This implementation moves all relations of the second52 * object to the merged object.53 * @param tmpObj1 The first object to be merged. This will hold54 * the resulting merged object afterwards.55 * @param tmpObj2 The second object. this object must be deleted56 * afterwards.57 * @param relations1 List of all relations of the first object.58 * @param relations2 List of all relations of the second object.59 */60 public void mergeRelations(TmpObject tmpObj1, TmpObject tmpObj2,61 List relations1, List relations2) {62 63 Iterator i = relations2.iterator();64 while (i.hasNext()) {65 TmpObject relation = (TmpObject) i.next();66 if (tmpObj2.isSourceOf(relation)) {67 relation.setSource(tmpObj1);68 }69 if (tmpObj2.isDestinationOf(relation)) {70 relation.setDestination(tmpObj1);71 }72 }73 }74 75 /** Tests if two relations should be considered duplicates,76 * indicating that one of them can be disposed of.77 * This test will only be called for pairs of relations that78 * have already been verified to be of the same type, and have the79 * same source, destination.80 * This implementation always considers these pairs duplicates,81 * it provides no additional tests.82 * @param relation1 The first relation.83 * @param relation2 The second relation.84 * @return true if these relations should be considered duplicates.85 */86 public boolean areDuplicates(TmpObject relation1, TmpObject relation2) {87 return true;88 }89 90 /** Tests if this object should be added to the persistent cloud91 * when not present already.92 * When this returns false, the object will be deleted from the93 * transaction if no object is found to merge it with.94 * This implementation allows all objects to be added when not present95 * already.96 * @param tmpObj The object.97 * @return true If this object should be added, when not present already.98 */99 public boolean isAllowedToAdd(TmpObject tmpObj) {100 return true;101 }102 103 }104 Java API By Example, From Geeks To Geeks. | Our Blog | Conditions of Use | About Us_ |
http://kickjava.com/src/org/mmbase/applications/xmlimporter/BasicMerger.java.htm
CC-MAIN-2016-44
refinedweb
544
52.87
Osm2pgsql/o5m The o5m import interface for osm2pgsql is used to allow osm2pgsql to read data which have been written in o5m file format. Installation Proceed as described in osm2pgsql#From_source_(generic), with one exception: After having exported the source via svn, download the o5m import interface 2012-10-14. Then unpack the files and move them into osm2pgsql subdirectory. Two of the files are new: parse-o5m.c and parse-o5m.h, two are already existing and must be overwritten: osm2pgsql.c and Makefile.am. The import interface has been successfully tested with osm2pgsql revision 26278. As usual: There is no warranty, to the extent permitted by law. In case the regular source file osm2pgsql.c should have been modified since revision 26278, here the additional lines (marked with +) which will be necessary to get the import interface running: *************** *** 54,59 **** --- 54,60 ---- #include "sprompt.h" #include "parse-xml2.h" #include "parse-primitive.h" + #include "parse-o5m.h" #ifdef BUILD_READER_PBF # include "parse-pbf.h" *************** *** 174,179 **** --- 175,181 ---- #ifdef BUILD_READER_PBF printf(" \t\tpbf - OSM binary format.\n"); #endif + printf(" \t\to5m - OSM binary format.\n"); printf(" -O|--output\t\tOutput backend.\n"); printf(" \t\tpgsql - Output to a PostGIS database. (default)\n"); printf(" \t\tgazetteer - Output to a PostGIS database suitable for gazetteer\n"); *************** *** 520,525 **** --- 522,529 ---- streamFile = &streamFileXML2; } else if (strcmp("primitive", input_reader) == 0) { streamFile = &streamFilePrimitive; + } else if (strcmp("o5m", input_reader) == 0) { + streamFile = &streamFileO5m; #ifdef BUILD_READER_PBF } else if (strcmp("pbf", input_reader) == 0) { streamFile = &streamFilePbf; *************** *** 550,555 **** --- 554,563 ---- /* if input_reader is not forced by -r switch try to auto-detect it by file extension */ if (strcmp("auto", input_reader) == 0) { + if (strcasecmp(".o5m",argv[optind]+strlen(argv[optind])-4) == 0 || + strcasecmp(".o5c",argv[optind]+strlen(argv[optind])-4) == 0) { + streamFile = &streamFileO5m; + } else #ifdef BUILD_READER_PBF if (strcasecmp(".pbf",argv[optind]+strlen(argv[optind])-4) == 0) { streamFile = &streamFilePbf; In Makefile.am you simply would have to add parse-o5m.c and parse-o5m.h to the compile list. Further Information Please see o5m related Wiki pages to get further information about this file format: - o5m - osmconvert – convert between different data formats, clip regions - osmfilter – filter specific OSM objects or specific tags, do offline taginfo Benchmarks Reading .o5m file format is usually a bit faster than reading most other OSM file formats, but you should not expect significantly increased processing speed, because writing to the database will be the bottleneck. Feel free to add your own benchmark results.
http://wiki.openstreetmap.org/wiki/Osm2pgsql/o5m
CC-MAIN-2014-42
refinedweb
407
51.24
In this tutorial we will be creating a very basic Windows Store App using XAML and C#. As this is the very first tutorial of this series, I'll be focusing mainly on project setup and basic workflows and later on, in other upcoming series, I'll be picking up more advance concepts. So, before moving forward, let's talk about the environment setup to execute our app. In order to complete Windows 8 metro style app, our machine need following things: In the left panel, you can select 'Windows Store' template. Once selected, center pane will show you the list of available items for the selected template. Here we are using Blank App, which will not contain any user controls by default. If require, we can add the controls at later point of time. Give the name and location of the project nad press OK button. After clicking on OK, you will see the structure similar to as shown below: Here you will see that, your solution contains lots of file. I'll try to brief about each of these items. These above files are required for all Windows Store apps, which are built using XAML and C#. While using Blank App template, we can replace our blank page with any other page templates, in order to take advantage of layout and other helper classes. Click Yes, to add these files. You will see, all the newly added files under the Common folder. Build your app and now you will be able to see your page in design view. Press F5 and you will be able to see your app running as: At this point of time, there is no button to close the app. So, you can use Alt+F4 to close it, typically we don't close Windows App (what is the reason behind this, we will see in our next article of this series). Now press the Windows key, you will be able to see a new tile added for your new app. Now to run the app, you can click or tap on that directly. Isn't it a good feature ??? Congratulations on building your first Windows store app. App.xaml is one of the most important files as this file is store the things, which you can access across the application. Double click and open the file. You will notice that, it contains a ResourceDictionary, which in turn has a reference of StandardStyles.xaml ResourceDictionary. This StandardStyles.xaml contain lots of style, which give look and feel to our app: <application.resources> <resourcedictionary> <resourcedictionary.mergeddictionaries> <resourcedictionary source="Common/StandardStyles.xaml"> </resourcedictionary> </resourcedictionary.mergeddictionaries> </resourcedictionary></application.resources> Now go to code behind file of App.xaml. It contains a constructor, which calls InitializeComponent() method. This is the auto generated method, whose main purpose is to iniyialize all the elements, which are declared in xaml file. It also contains the suspension and activation methods as: using System; using System.Collections.Generic; using System.IO; using System.Linq; Application template is documented at namespace HelloWorldSample { /// when the application is launched to open a specific file, to display /// search results, and so forth. /// </summary> /// <param name="args" />Details about the launch request and process. protected override(); } /// <summary> /// Invoked when application execution is being suspended. Application state is saved /// without knowing whether the application will be terminated or resumed with the contents /// of memory still intact. /// </summary> /// <param name="sender" />The source of the suspend request. /// <param name="e" />Details about the suspend request. private void OnSuspending(object sender, SuspendingEventArgs e) { var deferral = e.SuspendingOperation.GetDeferral(); //TODO: Save application state and stop any background activity deferral.Complete(); } } } Now moving to MainPage.xaml. This file defines the UI for your app. In code behind of this file, you will notice that, it uses LayoutAwarePage, which extends Page class and provides various mathods for navigation, view management and page management. In MainPage.xaml.cs file, you can add logic and event handlers for your application. The Basic Page template has two mathods, which you can use to save and load the page state. protected override void LoadState(Object navigationParameter, Dictionary<string,string> pageState) { } protected override void SaveState(Dictionary<string,string> pageState) { } Open MainPage.xaml from solution explorer. Now if you want to add more content, just open the xaml and start adding it as: <stackpanel margin="120,80,0,0" grid. <textblock width="Auto" height="29" text="Welcome to my first windows store app"> </textblock></stackpanel> Run your application by pressing F5 and you will see below changes: Hope this material will be a very good start.
http://www.codeproject.com/Articles/472009/Creating-Windows-Store-app-Beginners-tutorial?fid=1795196&df=90&mpp=10&sort=Position&spc=None&tid=4391405
CC-MAIN-2015-32
refinedweb
766
65.62
. 333 thoughts on “Android Lesson One: Getting Started” Hola, This is a very good intro to the OpenGL ES 2.0 on Android. (at least the best I have seen so far on the net!!) Thanks a lot and looking forward to read the next lesson… 🙂 Thanks, Miguel, I appreciate the feedback! The next lesson will be up hopefully soon! I stumbled upon this via the android app store after suspecting the 2 tutorials I did were failing because JellyBean did not support openGLES2.0. It has been 2 days of research and I have done 3 triangle tutorials. All of them failed. Was able to get a colored screen. But that is as far as I could get. The checking of the compile state of the shader I had to learn from a lecture. Which I then implemented to error check my tutorial code. But it didn’t work. This looks very promising and seems to cover everything I have found on my own via lectures and tutorials. AKA: This information looks dense and high quality. -John Mark This was very helpful – thanks for posting it. Anybody looking for an intro to 3D geometry could start with this section of the “Blender 3D: Noob to Pro” WikiBook: Thanks for sharing this. 🙂 Lots of tutorials on OpenGL ES 1.0 on the net, but those on OpenGL ES 2.0 are hard to find … Yours is great, thanks a lot. This was very helpful for me! Thanks for posting it. This is the most concise, detailed and best explained opengl es 2 tutorial for android beginners. I have been googling for days and there is almost nothing out there except the google tutorial that only part works and has warnings and intermittently crashes my phone. The way you describe the camera matrix is so clear and has ended the confusion caused by the google tutorial. Big thanks!! Consider changing the article title because you cannot find this by searching with the expected terms. I only stumbled across it because I saw the demos on Android marketplace. We spoke a bit in email, but thanks again for the great feedback. I have started to promote the site more and add tags, so I hope that this makes it easier to find the tutorials in the future. Nicely done! Any specific reason why your clear color has an alpha of 0.5? (Set in onSurfaceCreated() with GLES20.glClearColor(0.5f, 0.5f, 0.5f, 0.5f)) Never mind, I see you never turn on alpha blending. Yep, there is no specific reason for it since no alpha blending being done. Even if it was turned on I guess it wouldn’t matter unless we were using destination alpha. really helped me a lot Holy shit. This is fucking stupid. Why the fuck would anyone ever design an API so that you have to pass strings into a compiler, and then link things up AT FUCKING RUNTIME. Honestly, are you people fucking assholes. You just want us all to be miserable failures, bang our heads against some shitty wall, don’t you? First of all there are better ways to pass character data through a Java VM. And second, this shit just shouldn’t be happening at runtime. Plain and simple, end of story. Sounds like someone hasn’t ever programmed in Java or OpenGL for that matter.. Why else would you be reading this? Please elaborate on your better way to pass a character data besides a final String to the function glShaderSource(int handle, String src). Inline OpenGL source in a final String is the easiest way I can think of. It happens elsewhere, ever see Linux kernel source? There’s lots of inline-asm. Oh and linking at runtime? Ever hear of DLLs, kernel modules, or drivers? It happens all the time. OpenGL ES is ported to various platforms of ranging capabilities. OpenGL hides the complexities of various hardware accelerators and platforms to provide a unified interface. Thus the API and the inline rendering source code. Stop your whining bitch and just learn, asshole Good points, kangta. It’s also possible to read in the shader source from a text file, and it’s even possible to store GPU-specific compiled binaries (see:) so I’m not sure I understand all the swearing and hate from the originator. 😉 That’s interesting. How do I compile the shader on a computer? or shall I compile it once on the phone when the game is started first and store it afterwards in a binary file cos it’s GPU-specific. I’m going to implement bump mapping and specular lighting, which will make my shaders very large. Precompiled shaders could make the loading times shorter. I don’t think every phone will support this, but for those that do you could just precompile on the phone itself on first launch. Lol. You seem to know what you are talking about. As someone who has done both game and ui programming. I see nothing wrong with openGL’s approach here. Initialization can be clunky and non-optimized. So strings are acceptable. And they even have handy methods for checking if the inlined strings have errors! I wish other openGL tutorials would have told me about that! -John Mark final strings are concatenated at compile time in java Thanks for the tutorial, it’s very very helpful. Annotations: Unfortunely the Download is not complete ready for Eclipse. I had to edit the .classpath file, I inserted the lines, now it works (Eclipse Indigo/Android 3.2): At Lesson One there is something wrong with the html version. The declaration of “fragmentShaderHandle” is missed. The download project do not contains this error. Yours, Lars Hi Lars, The comment did not contain the lines, (WordPress probably interpreted as HTML and filtered them out) but I updated the project, tried importing it as read-only from github and it seems to work now. Let me know if there are still issues. Also not sure what you mean about the HTML version. Let me know, and thanks! 🙂 Thanks for your helpful tutorial the only problem is that “fragmentShaderHandle” definition and codes is missed in the tutorial you can just easily press CTRL+F when you are in browser opening the Lesson 1 tutorial page and then you will see that you use “fragmentShaderHandle” but u hadn’t declare it before Good catch, the steps for the fragment shader were left out, as it’s the same sequence of code as for the vertex shader. I’m guessing, aside from touch or tilt listeners, this code is completely portable so you can plug in the verticies, surface, color, motion varibles, camera data… after it’s been written once. Because I’m new, I have to ask, will the tutorials end with standard code we can customize with geometry and textures from an obj file created with a CAD program? BTW Great tutorial! Thanks for what must have been a tremendous effort. Hi, My name is Austin. Currently, I practice the project of OpenGL ES 2.0 on Android ADT. But It didn’t operate on ADT. So, I have found more information. Today, I find your information about the Android emulator does not support OpenGL ES 2. It is good information to me. So, I am going to buy Galaxy Nexus. At that time, I will make the project. Hi Austin, That’s right, unfortunately the emulator doesn’t support OpenGL ES 2. Most devices should have support for it now. Thanks for the comment! 🙂 Great tutorial, just what i was looking for. Much appreciated =O i just went to your github and you have 7 lessons on there, THANK YOU!!! No problem, hope you find it useful! Seriously, the best tutorial on the Internet for OpenGL ES 2.0. I’m amazed. Please don’t turn off this website. I bookmarked some sites that went down. Hopefully you guys keep this site alive and make more tutorials! Thanks for the kind compliments, Oscar, and no worries, I’m in it for the long haul! Hi, my name is Gun, its been a week i’m trying to figured it out how to do transformation in opengl es 2.0. Well some tutorial maker often did less or much for their “lesson one”. Most of them missed the most commonly operation which is transformation, and they skip it or sometimes they put too much on the camera view thats makes the whole scene of the tutorial very confusing. I wanted to thank to you for providing this tutorial, especially lesson one. its very clean and shows clearly the base we step on. Thank you again. =) Thanks, Gun, I’m glad that it helped out! Before I start posting, first thing’s first, great website! I’ve recently got interested in android phone programming and tried out some 3D modelling and stumbled upon your website, manage to understand how opengl es 2.0 works in general and all that stuff! But you’d guess it, since I’m posting, I must have hit a problem! This tutorial works fine when I deployed it to a real android device, but it crashes immediately on the SDK emulator, I was wondering if you’re aware of why it’s happening, I’ve did my fair share of googling and could come to no conclusion =( I have the lastest emulator/SDK installed, with GPU emulation enabled to support opengl es 2.0, yet this error message pops up the moment I click on the application: ! Sorry! The application LessonOneActivity(process lesson.One) has stopped unexpectedly. Please try again. Force close If there’s anymore information you think would help with pinpointing the problem, I’ll gladly provide it! Thanks! Hi Hobbyist, Thank you for the compliments! For the crash, please change the following line: final boolean supportsEs2 = configurationInfo.reqGlEsVersion >= 0x20000; to : final boolean supportsEs2 = true; This is a bug with the emulator, even if you have GPU emulation turned on! Let me know if this helps. Thanks! hey there, I’m not Hobbyist but tested what you said. You are right. changing that line to final boolean supportsEs2 = true; on the activity of each lesson is enough to avoid the error. Thanks ! ! btw, it would be awesome if you could do a lesson about 3d models that change their shape over time. I had the same thing, GPU enabled, latest api, but that change did not fix it. It would stil stop. Even running the OpenGL ES API Demos would also crash like that. After some digging around it turned out there was a problem with my EGL configuration. For some reason the EGL was not being configured automatically. The log cat would give the following error “java.lang.IllegalArgumentException: No config chosen” as the cause in the trace. Anayway, what I had to do was add a call to set the configuration manually, for example: class MyGLSurfaceView extends GLSurfaceView { public MyGLSurfaceView(Context context){ super(context); setEGLContextClientVersion(2); ———-> setEGLConfigChooser(8, 8, 8, 8, 16, 8); setRenderer(new MyGL20Renderer()); } } i have the same problem: java.lang.IllegalArgumentException: No configs match ConfigSpec. is it because this line -> //final boolean supportsEs2 = configurationInfo.reqGlEsVersion >= 0x20000; was replaced w/ this: final boolean supportsEs2 = true;? in any case could you be more specific and show the code explicitly that allowed the lesson to run in the emulator. seems u can just place the call (setEGLConfigChooser(8, 8, 8, 8, 16, 8);) in the true branch of the if(supportsEs2) statement. if you can provide the code for the function it would be very much appreciated. thanks Hey, Found out about that bug the hard way, after spending some time tweaking the emulator, I decided to delete all the checks in the code, then made a separate app to check for the GK version, what a trek, lol. Turns out that there’s some other incompatibilities with the new emulation too, it conflicts with certain firewalls like ZoneAlarm and will not run unless it’s completely shut-downed. Thanks for the great tutorials again! Hope to make something interesting soon! When LEARNING something new, the reader should not be forced to debug the tutorial code in order to get it to compile. Too many minor issues in this code that can cause those just getting started to get too frustrated to continue. Hi Bob, Care to elaborate? I can’t fix said minor issues if you don’t point them out. Thanks. This is my 4th OpenGL tutorial that has not worked. Perhaps you’ve learned something since you posted this? I really need help. I am feeling really stupid. This is my 3rd day of focusing ONLY on getting a triangle on the screen. -John Mark The debugging is as most as useful as the main content. It’s important to understand it, if you start debugging at the start, in the future is useful if you’re developing a more complex app. Is it possible that admin write the tutorial in native layer (c)? That would be great. But thanks for the posting, really. NDK code? That would be interesting. I should do it sometime. 🙂 +1 I know this might be a one in thousand messages thanking you, but for me its a HUGE help!!!Thanks so much for awesome tutorials bud, keep up… No problem, I appreciate each and every comment. 🙂 This looks like a very good introduction to OpenGL on Android. I did stumble upon a compile error with line … mGLSurfaceView.setEGLContextClientVersion(2); I created the project for API version 7, “setEGLContextClientVersion” appears in API version 8. Just thought I’d mention that. Luckily, it probably doesn’t happen to many developers. So, the code becomes public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); mGLSurfaceView = new GLSurfaceView(this); mGLSurfaceView.setRenderer(new LessonOneRenderer()); setContentView(mGLSurfaceView); } P.S. I do realise that OpenGL ES 2.0 is from version 8+. I am just using 7 at the moment. A good tutorial for OpenGL 1.x users. I wrote out everything from this lesson in Eclipse, yet when I try to run it, I get errors for the overridden methods in LessonOneRenderer saying that they must override a superclass. It must not be recognizing the renderer interface because when I remove the @Override, the methods are simply not called. What is going on here? Stack Overflow Link I believe the errors stem from using Java 1.5 instead of Java 1.6. You can’t use @Override on interface methods in 1.5 IIRC. Try changing your project properties to use Java 1.6. The methods should definitely get called so long as you call gLSurfaceView.setRenderer(new YourRenderer()); from your activity. I tried setting the compliance to 1.6, but still the compiler complains about the @Override, and getting rid of it still ensured that the methods are not called. That is so strange… are you testing on an emulator or device? Could you publish the smallest set of code possible (including activity, manifest, and renderer) to GitHub or StackExchange? I’m curious. Hi. Nice Tutorial. Is there a specific reason that you use the modifier “final” for many of your variables? Even the ones that are in a short-lived scope. Is there an advantage to it? Hi Demios, It’s mainly a style issue — it helps you avoid inadvertently changing the value of a variable where you didn’t want to do so. It can also be used to ensure that you assign a value to a variable through all of your code paths. I might have overused it in these early lessons. 😉 Hai. I just want to say thanks for the tutorial. THANK YOU ! 🙂 No problem, and thank you for stopping by 🙂 I am the Anon above who say thanks. Btw, people should check this site too for good OpenGL ES 2.x tutorial. 🙂 Yes, these are good tutorials as well. The link above is dead i guess. Getting a 404 in firefox hi, I am getting the below error the method setLookAtM is undefined for the type Matrix. when i cam creating anndroid project i have selected android 4.1 and minsdk as 2.3.3 version. Hi Sankar, You’ll probably want to check you imported android.opengl.Matrix and not android.graphics.Matrix. ya it worked fine thank you i have copied the code for lesson one and it working fine. I have understood the code on a high level, but there are some conceptual points that i am not quite clear on. when you defined triagle1vertices date, where is origin located in terms of mobile screen ( is it center of the screen or top right or top left ). same question when you defined camera with setlookatM method thanks sankar Just to add what i have been trying.. i have commented out the code for triangle2 and 3, also commented the code for rotation of trainle1. so what i have finally is a static triangle at the middle of my mobile screen. now i was trying to change the value for below parameters to understand the effect of the camera position on final display of triangle. but not quite understanding the effect. //; sorry for troubling you with so many questions..i have debugged the program and saw that mViewMatrix has values below 1 0 -0 0 -0 1 -0 0 0 0 1 0 0 0 -1.5 1 I would like to know how this matrix is generated sorry the matrix was actually 1 0 -0 0 -0 1 -0 0 0 0 1 0 -1 -1 -1.5 1. i understood the calculation from site. also i have got projection matrix calculation from. so at least i am clear about how view and projection matrices gets generated :). but still i have below questions – In the tutorial we have set the model matrix to identity matrix. is there is any reason for it? can we use any random 4×4 matrix for? thanks, sankar. Hi Sankar, I would have pointed you over to those sites as they have a great description of how the matrices themselves get generated. I’ve learned quite a bit, myself, since I wrote these first programs. 😉 So at a high level, what’s happening is that in OpenGL, normally -1, -1 is the bottom left corner and +1, +1, is the top right corner. These are known as normalized device coordinates. To create the illusion of 3D, we use a projection matrix. The matrix itself doesn’t create the 3D effect; what it does is it maps all of our Z positions onto a special coordinate component known as “W”, and OpenGL will later divide X, Y, Z by their Ws. This PDF goes into more details: This image shows how the same coordinate gets closer to the center of the screen as the W value increases: All a projection matrix will do is something like this. Say you have two source positions of the following: (3, 3, -3) (3, 3, -6) The second point is a little bit further, or more “into the screen” than the first. The projection matrix will convert the coordinates into something like this: (3, 3, -3, 3) (3, 3, -6, 6) That last component is W. Now, OpenGL will divide everything by W, so you get something like this: (1, 1, -1) (0.5, 0.5, -0.5) Remembering that these are in normalized device coordinates, where -1, -1 is the lower left corner of the screen and +1, +1 is the upper right corner. Model and view matrices Alright, so once we have a projection matrix setup, we need to move stuff in and out of this viewspace so that we can actually see it. This is where we use a model and a view matrix. The main purpose of these matrices is to translate and rotate objects so that they end up within our view frustum, and we can see them on the screen. They both have the same function — the view matrix does the same ting that the model matrix does. The difference is that the view matrix applies to all objects at the same time, so it’s like moving around the entire world. The model matrix, on the other hand, is applied to a single object at a time, so we use this to take the definition of the object, and place it somewhere in our world. We can take the same data, render it once at 0, 0, 0, render it a second time at 1, 1, 1, and so on. So, to come back and answer your questions: “In the tutorial we have set the model matrix to identity matrix. is there is any reason for it? can we use any random 4×4 matrix for it?” We reset a matrix to identity because an identity matrix, if multipled with any vector, returns the same vector. It’s like multiplying any number by 1 — we get back the same number. This is to get a matrix that doesn’t have any translations, rotations, or whatever applied to it. We could use any matrix we want here, just as long as we understand what the results on the object will)?” That’s correct. When using attributes, OpenGL assigns a default of 1 to the 4th component if it’s not specified. “?” Yep that’s pretty much how it works. The projection matrix influences how near/far you can see, and the view matrix influences what you see, by moving around and transforming the entire scene. If you wanted everything to be 10x smaller, you could use a view matrix that scaled down all coordinates by 10 before multiplying them with the projection matrix. This comment is a little bit convoluted, but hopefully a few things make a bit more sense. If I were to redo the tutorials I’d take a different approach now, as I think all of the matrix stuff can be unnecessarily confusing in the very beginning. Another way of understanding the matrix code: Column-Major order The numbers are all written in column-major order, which means the array offsets are specified like this: The view matrix Meaning the view matrix actually looks like this: 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, -1.5 0, 0, 0, 1 This matrix is almost an identity matrix, except for that “-1.5”. This will end up translating the Z by that amount, which has the effect of pushing everything -1.5 units into the screen. We could remove this matrix if we applied the same translation to the model matrix, instead. To verify this, try it out with a matrix calculator: Projection matrix As for the projection matrix, here are the rounded values: This will transform coordinates as follows (with default W of 1 in the source coordinates): After division by W, we get this: Notice that the projection matrix just sets up the W, and it’s the actual divide by OpenGL that does the perspective effect. No worries, this is how I learn myself 🙂 Hi Sankar, Looking back at the code, I believe that 0, 0 is the center of the screen. hi, You are amazing.. 🙂 it is so nice of you to take time and answer all of my basic questions. thank you so much. keep up the good work thanks sankar Hi, I am confuesd that how (1, 1, -1, 1) (1, 1, -2, 1) (2, 2, -2, 1) multiply 1.5, 0, 0, 0, 0, 1, 0, 0, 0, 0, -1.2, -2.2, 0, 0, -1, 0 to get –> (1.5, 1, -1, 1) –> (1.5, 1, 0.2, 2) –> (3, 2, 0.2, 2) if you want to get the answer i think the projection matrix should be 1.5, 0, 0, 0, 0, 1, 0, 0, 0, 0, -1.2, -1, 0, 0, -0.2, 0 Hi James, It seemed to work fine with the original matrix when I tried with an online matrix multiplier. Please check you’re not confusing rows with columns; this is common as OpenGL uses Column Major order. That is, m[0] – m[3] refer to the first column, not the first row. Aha, I am confused with rows. Btw, I am a new one to OpenGLES, I want to know how to set the value of view matrix, model matrix, and projection matrix. Is there any rules? Thanks. Hi James, I’m gonna post a new article on this topic. Please keep an eye out. 🙂 Here it is! hi! thank’s for your great tutorial, I’m very new to android and opengl es 🙂 one thing I’d like to ask, I’m trying to draw a cube in android which its width is equal to screen width. I did it but its width didn’t equal to screen width. So If I set GLU.gluPerspective(gl, 45.0f, (float)width/height, 1.0f, 100.0f), camera is on (0,0,0), and I don’t use any glTranslatef or glRotatef, how should I set the vertices (especially the X axis) so the width of the cube will be the same as screen width? thank you so much for your help 🙂 Hi popo, What would be the purpose of this cube? The most straightforward way could be to skip the matrix stuff entirely and just work in normalized device coordinates, which range from -1, -1 on the bottom left corner to 1, 1 on the top right corner. In your shader you’d remove “gl_Position = matrix * a_Position” and instead you’d put “gl_Position = a_Position”. hello, thank’s for the answer. Actually I’m going to draw some cubes on the top of some map tile images (just like google map 3D, but in a simple version). On a certain zoom level, one of my object on that map has its width equals to screen width, so my cube had to have its width equal to screen width too. So I tried to draw that cube but later I found that my cube didn’t have a screen-width width >.< This seems useless to build that app, but I really want to learn how to make it 😀 If you have any advice, I appreciate it so much 🙂 Hi popo, Hmm, one way to do it is to see what kind of projection matrix you’re ending up with. If you’re using frustumM I think you can just take the right, left, top, bottom, and near plane as your coordinates! For example: Setting the Z for all of those to whatever you had set as the near plane, inverted. For example, if your near plane is 1 put -1 as the Z. I think this should work, but you might need to use a very small offset (-0.999) if there are artifacts. P.S. These would be coordinates after view/model takes effect, so this is feeding right into the projection matrix. Also, I haven’t tested this, but if you don’t have the right, left, top, down, because you’re using a helper function, try this: Take the matrix value at m[0], and set X range = 1 / m[0]; Then you can use minus X range to plus X Range as the screen width. Something similar should work for the height at m[5]. Again I haven’t tested this but let me know, as it makes good research material. 😉 hi! finally I don’t use any matrix modification as my mistake was because I got wrong to pick half-height of near plane to be my half-width of near plane. Seems like it’s very simple matter, but maybe it could help somebody someday 😀 I set GLU.gluPerspective(gl, 45.0f, (float)width / (float) height, 1.0f, 100.0f), so basically my near plane will be on -1.0f, and maximal half-width of my near plane was calculated : zNear * tan(fovy / 2) * aspectRatio = 0.230f –> my near plane half-width. and that’s all 😀 thank’s alot for your help and sorry for bothering you with my question >.< Thanks for sharing this! 🙂 Hi, i am playing with the view matrix generated by setlookatM method. i have made the projection matrix as in identity matrix and model matrix is set to identity in your code. so only matrix that affects the vertices is view matrix. when i execute the program i do not see anything on the screen. why is that so? view matrix is 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, -1.5 0, 0, 0, 1 these are vertices defined in th eprogram //}; the view matrix should only push the z coordinate right? thanks sankar Hi Sankar, If you are only using a view matrix, you’ll want to be sure that the final coordinates end up in the range [-1, 1] for X, Y, Z, and W. They will need to be in this range to be visible on the screen. With the -1.5 it seems that they would not be in this range. hi the view matrix in the example is 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, -1.5 0, 0, 0, 1 projection matrix is 1.508 0.000 0.000 0.000 0.000 1.000 0.000 0.000 0.000 0.000 -1.220 0.000 0.000 0.000 -1.000 0.000 final MVP matrix is 1.508 0.000 0.000 0.000 0.000 1.000 0.000 0.000 0.000 0.000 0.278 0.000 0.000 0.000 -1.000 0.000 model matrix is set to identity matrix and i did multiplication from right to left ( V multiplied by P and the result is multiplied by M). hope i am right. now with the above matrix i have multiplied each of the vertices ( arranged column wise) -0.500 0.500 0.000 -0.250 -0.250 0.560 0.000 0.000 0.000 1.000 1.000 1.000 the result coordinates are -0.754 0.754 0.000 -0.250 -0.250 0.560 0.000 0.000 0.000 0.000 0.000 0.000 apparently W in this case is 0 an divide with 0 is not allowed(by the way i can see the triangle on the screen) so how this situation is handled. am i missing something? thanks sankar Hi Sankar, You should really put the Z coordinates as something like -2 so that they end up in the range between -1 and 1 after multiplied with the matrix. I’m surprised that things still show up for you, since clip space is defined as -W to +W. I guess it also says somewhere in the specs that the graphics card isn’t supposed to blow up if W is zero. 😉 I couldn’t find a clear answer myself but would be interested if someone has one. Also it seems a bit odd that the last column of your projection matrix is all zeroes. What are you passing in to frustumM ? hi, I took the values from your code. in fact i haven’t changed any values in any of the functions. i have commented out rotations part of code. i debugged it and those are matrices i see. thanks sankar Interesting, I get a matrix that looks more like this: I think there should be something in that last column… What happens when your code gets to Matrix.frustumM(mProjectionMatrix, 0, left, right, bottom, top, near, far); ? What are the actual values? sorry i messed up last time. this is proj matrix after the frustrumM code 1.508 0.000 0.000 0.000 0.000 1.000 0.000 0.000 0.000 0.000 -1.220 -2.200 0.000 0.000 -1.000 0.000 after multiplication with viewmatrix result is 1.508 0.000 0.000 0.000 0.000 1.000 0.000 0.000 0.000 0.000 -1.220 -0.370 0.000 0.000 -1.000 1.500 sorry for the confusion Ah that makes more sense then! Then it’s probably just a question of making sure that you’re pushing the right amount into the Z axis so that it appears on the screen. You can try with a matrix calculator (here is one:) with that matrix, and see which values give you a Z inbetween -1 and W. It seems like a point of (1, 1, -3, 1) should show up. Hi, Do you know any good tutorials for collision detection in 3D with open gl ES 2.0? thanks I’ve heard good things about and it seems that some Android games are using it! has a forum where you can ask questions and get more help. You could also try for a Java port, so no need to use the NDK! it’s a great tutorial for beginner. Really helped me in getting, how openGl works..I tried to draw a point only when i touch the screen, i tried to play with vertices array, but didn’t get expected result. Can you help with some sample or tutorial? You could try posting a question to StackOverflow with some sample code. Let me know and I’ll see if I can help out, too. First off, thanks so much for this amazing tutorial series. I’m sure it took a lot of hardwork and dedication to explain everything so clearly. The code is also well commented and documented. I’m trying to develop a 3D game in android, probably not a first-person-shooter or anything, but a simple 2D game with 3D models. Will I be able to port the models to android using OpenGLES? I don’t want to go with the standard mobile game frameworks and engines that are available since this is more fun! 🙂 Thanks for the compliments, Plasty! We actually had a couple of community members working on an OBJ loader; will have to see how they’re coming along. I’m so sorry I haven’t said anything to this topic for weeks, but here is a short Code Snippet: Basic Aspects of the .Obj FileFormat can be looked up here() import java.io.BufferedReader; import java.io.IOException; import java.io.InputStream; import java.io.InputStreamReader; import java.util.ArrayList; import java.util.List; import android.content.Context; public class QREOBJFile_Parser { public static float[][] qreparseObject(Context mActivityContext, int resourceId) { final InputStream inputStream = mActivityContext.getResources().openRawResource( resourceId); final InputStreamReader inputStreamReader = new InputStreamReader( inputStream); final BufferedReader bufferedReader = new BufferedReader( inputStreamReader); String nextLine; String[] vector = new String[2]; String texvector; String[] indexdata = new String[2]; List unindexedVectors = new ArrayList(); List unindexedNormals = new ArrayList(); List unindexedtexVectors = new ArrayList(); List Vectorslist= new ArrayList(); List Normalslist= new ArrayList(); List texVectorslist= new ArrayList(); try { while ((nextLine = bufferedReader.readLine()) != null) { if(nextLine.substring(0, 2).equals(“vn”)) { vector = nextLine.substring(3).split(” “); for(int j=0;j<3;j++) { unindexedNormals.add(Float.parseFloat(vector[j]+"f")); } }else if(nextLine.substring(0, 2).equals("vt")) { texvector = nextLine.substring(3); unindexedtexVectors.add(Float.parseFloat(texvector.substring(0,8)+"f")); unindexedtexVectors.add(Float.parseFloat(texvector.substring(9)+"f")); }else if(nextLine.substring(0, 2).equals("v ")) { vector = nextLine.substring(2).split(" "); for(int j=0;j<3;j++) { unindexedVectors.add((float) (Float.parseFloat(vector[j].substring(0,5)+"f"))); } }else if(nextLine.substring(0, 1).equals("f")) { vector = nextLine.substring(2).split(" "); int k; for(k=0;k<3;k++){ indexdata =vector[k].split("/"); for(int l=0;l<3;l++){ Vectorslist.add(unindexedVectors.get((Integer.parseInt(indexdata[0])-1)*3+l)); Normalslist.add(unindexedNormals.get((Integer.parseInt(indexdata[2])-1)*3+l)); } texVectorslist.add(unindexedtexVectors.get((Integer.parseInt(indexdata[1])-1)*2)); texVectorslist.add(unindexedtexVectors.get((Integer.parseInt(indexdata[1])-1)*2+1)); } } } } catch (IOException e) { return null; } System.gc(); final float[][] returndata={toPrimitive(Vectorslist),toPrimitive(texVectorslist),toPrimitive(Normalslist)}; return returndata; } private static final float[] toPrimitive(List floatList){ final float[] floatArray = new float[floatList.size()]; for (int i = 0; i < floatList.size(); i++) { floatArray[i] = floatList.get(i).floatValue(); // Or whatever default you want. } return floatArray; } } Thanks so much for the code Martin. I’ll have to meditate on it for a while though 🙂 Thanks for sharing this, Martin! Do you have a GitHub or similar link as well? I could share that in an upcoming post. We had one other member who was working on an implementation, so we could share them both. I think this tutorial is correct, but I think it’s a bit silly to expect people to write 200+ lines of code and get it to run the first time. That’s why you have GitHub. Download the source if need be. 🙂 Hi, thank you for a great tutorial! I am creating a map using OpenGL, my objects are getting loaded into an ArrayList from a Spatialite database in my activity class. My main question is: how would I get my ArrayList to the renderer? You’ll need to use ByteBuffers and FloatBuffers to get your data into the native heap using syntax like the following: ByteBuffer.allocateDirect(LENGTH_IN_BYTES) .order(ByteOrder.nativeOrder()) .asFloatBuffer() .put(ArrayList.toArray(new Type[])) .position(0); Once your data is there you can follow the techniques in these tutorials to use either glVertexAttribPointer with this vertex array, or upload it to the GPU as a vertex buffer object. Forgot to add, you’ll probably also need to cherrypick and serialize the data out of your array, so a simple put will not work. Instead you can extract all of your data into a float[], short[] or byte[] array, depending on your needs, and then use a FloatBuffer, ShortBuffer, or ByteBuffer as your view. You can then call .put with that array. I must admit this is a little confusing for me. So basically the data in my ArrayList is held as a Geometry, its know that it is a polygon, point or linestring, I need to process each one accordingly to get node coordinates (x and y’s) and create a float array. At the same time as creating the individual float arrays, I need to create a FloatBuffer. I will then need to keep both the float array and FloatBuffer in an array of their own. Is that logic of mine correct? Some thing like: //layer.geometries is a list of geometries in each table ListIterator GeometryList_it = layer.geometries.listIterator(); while (GeometryList_it.hasNext()) { Geometry geometry = GeometryList_it.next(); float[] shapeVerticeData; FloatBuffer mShapeVertices; if (geometry.getType() == SHP_POINT){ float cX = geometry.getX; float cY = geometry.getY; shapeVerticeData = { // X, Y, Z, // R, G, B, A cX, cY, 0.0f, 0.0f, 0.0f, 0.0f, 1.0f}; // Initialize the buffers. mShapeVertices = ByteBuffer.allocateDirect(shapeVerticeData.length * mBytesPerFloat) .order(ByteOrder.nativeOrder()).asFloatBuffer(); mShapeVertices.put(shapeVerticeData).position(0); //Add both to an encompassing array } else if (geometry.getType() == SHP_LINESTRING){ } else if (geometry.getType() == SHP_POLYGON){ } } Hi Hank, Yes, you’ll need to extract node coordinates from your ArrayList and create a float array. You could start out by organizing it like this: One array for all points. One array for all lines. One array for all polygons (they will need to be converted into triangles). So something like this ArrayList pointData; for (Geometry geometry : layer.geometries) { if (geometry.getType() == SHP_POINT) { pointData.add(geometry.getX); pointData.add(geometry.getY); } } FloatBuffer pointBuffer = ByteBuffer.allocateDirect(pointData.size() * 4) .order(ByteOrder.nativeOrder()).asFloatBuffer(); pointBuffer.put(pointData.toArray(new float[pointData.size()])); // You can let pointData go out of scope now, you only need to keep around the FloatBuffer If you want separate colors per point or stuff like that you can add that to the float buffer; otherwise, you could just use a uniform. Using one array works if you draw everything as GL_POINTS, GL_LINES, or GL_TRIANGLES because OpenGL will treat each as disjoint. You’ll probably need to convert line strings as well, breaking into individual lines. Thanks! So just a couple more general questions. What if you wanted to select a line and have it highlight? Can you have that kind of interactivity? Second question, can you have layers? On a map as you zoom in and out different layers should be visible? Maybe that can be controlled in what you do and don’t draw, think I just answered that one. However having an interactive map would make for a nice feeling map. I know I could capture where the user taps on screen then create a buffer of sorts to query what is at that location, highlighting an object though whether it is a point, line or polygon? If lines were broken up that makes it a little more tricky and can I alter the style or color in the buffer later maybe? Yeah, you could for sure. You would just need to have a Java-side meta-data object controlling what’s highlighted or not, and associate it with the offsets for the vertices associated with that object. For example, if you know that floats 10 – 20 correspond to one line loop, and there are two floats per position, then positions 5 – 10 correspond to the same line loop. Then if you want to highlight it, you just need to do this (pseudocode): setUniform(Red); drawArrays(LINES, start at 5, 5 positions); To have layers, you’re exactly right: just control what you draw and don’t draw. you could use more than one vertex array and control the layers at that level. Alternatively you could use meta-data for this as well, and not draw the positions that correspond with a given layer. That could get a bit crazy though and less efficient because you’d have to break down one array into a bunch of draw calls. Better to keep it efficient and group each layer together. For touch feedback it’s a bit trickier. You’d need to map the touched coordinate to the appropriate source object. I haven’t done too much research on this, so would actually be interested in what you come up with here! 😉 Hi Admin, one more small question about the following line. pointBuffer.put(pointData.toArray(new float[pointData.size()])); I am getting the following: “The method toArray(Object[]) in the type ArrayList is not applicable for the arguments (float[])” If I do this: pointBuffer.put(pointData.toArray(new Float[pointData.size()])); I get: “The method put(float) in the type FloatBuffer is not applicable for the arguments (Object[])” Any ideas? I am trying to run the app samples (and they req the gpu emul) but when i add gpu emulation to be trun on the emulator is crashing on start up I lookt it up on google and from what iv being reading it was a bug but it got fixt at v19 and I have v20 i have win7 64 gtx 8800 u happan to run in to this problem and know how to fix it? Let me know if this post helps: thats what i did remove the supported check set the boolean to true the problem is when i set the “GPU Emulation” to true (yes) it crash on start up ;/ Could it be an issue with the emulator image? I was able to get it to work with Arm V7 4.0.3. I don’t know if it works in other images. Unfortunately the support is quite buggy and might not work for all hardware configurations. yep just test it on my laptop and it works fine ,so this like you sayd hardware configuration not supported ;/ thx for the help sorry to bug you with this silly problem 🙂 It’s too bad they haven’t fixed these bugs yet. The Nexus 7 tablet isn’t too expensive at least, so if you’re looking for a hardware solution you could always go for that. 😉 Hi, So I’ve got my data in the buffers, I am going to try having line strings in one buffer to start with hopefully this won’t be a problem. I have created a drawLine method for them in my renderer and it seems to be working without error. So I process as follows: setContentView(mGLView); I have my own CustomGLView that sets the renderer called mGLRenderer then it calls LoadMapData(); Which starts a new thread to show a progress wheel as it processes the geometry records. After it is done it calls mGLView.mGLRenderer.initDrawLayers(); public void initDrawLayers(){ GLES20.glClear(GLES20.GL_DEPTH_BUFFER_BIT | GLES20.GL_COLOR_BUFFER_BIT); ListIterator orgNonAssetCatLayersList_it = default_settings.orgNonAssetCatMappableLayers.listIterator(); while (orgNonAssetCatLayersList_it.hasNext()) { mapLayer MapLayer = orgNonAssetCatLayersList_it.next(); ListIterator mapLayerObjectList_it = MapLayer.objFloatBuffer.listIterator(); ListIterator mapLayerObjectTypeList_it = MapLayer.objTypeArray.listIterator(); while (mapLayerObjectTypeList_it.hasNext()) { switch (mapLayerObjectTypeList_it.next()) { case PointObject: break; case LineStringObject: Matrix.setIdentityM(mModelMatrix, 0); Matrix.rotateM(mModelMatrix, 0, 0, 0.0f, 0.0f, 1.0f); drawLineString(mapLayerObjectList_it.next(), MapLayer.lineStringObjColorBuffer); break; case PolygonObject: break; } } } } private void drawLineString(final FloatBuffer geometryBuffer, final FloatBuffer colorBuffer) { //Log.d(“”,”Drawing”); // Pass in the position information geometryBuffer.position(mPositionOffset); GLES20.glVertexAttribPointer(mPositionHandle, mPositionDataSize, GLES20.GL_FLOAT, false, mFloatStrideBytes, geometryBuffer); GLES20.glEnableVertexAttribArray(mPositionHandle); // Pass in the color information colorBuffer.position(mColorOffset); GLES20.glVertexAttribPointer(mColorHandle, mColorDataSize, GLES20.GL_FLOAT, false, mFloatStrideBytes, color_LINES, 0, geometryBuffer.capacity()); } The bounding rectangle for these lines is approx: Top Left 152.068, 26.458 Bottom Right 152.769 27.565 What do I need to do to view it? Is it the view matrix that I need to manipulate. I will eventually add in zoom, pan, possibly rotation, functionality so I am also interested in what I would need manipulate for these as well. Also I set setRenderMode(GLSurfaceView.RENDERMODE_WHEN_DIRTY); so it is not constantly running, how do you programmically tell the renderer to redraw. Kind Regards Hank Hi Hank, To tell it to redraw, you can call requestRender () on your GlSurfaceView. You are quite right in that you’ll need a view matrix to zoom, pan, rotate and so on. What you can do is something like this: Projection Matrix: stores your overall projection View Matrix: Stores global rotations, pans, whatever. Model Matrix: Used for transforming individual objects in a scene, if needed. You can prebake Projection * View into a matrix, then if you also need a model matrix, just multiply ProjectionView * Model and pass that into the shader. If your models are already stored in global coordinates, then you don’t need a model matrix and can use the ProjectionView everywhere. Regarding threading, please keep in mind that all calls that touch OpenGL *must* occur on the OpenGL thread. You can dispatch anything you need to this thread by calling GlSurfaceView.queueEvent(). Anything which touches Android views needs to happen on the main thread (the default thread that calls onCreate() and so forth, as well as your listener callbacks). Hope this helps! Hey. It seems there is a bug with glGetShaderInfoLog() if the compilation process actually fails. I have seen some suggestions to workarounds, but I did not understand any of them (there is one in the comments of the GoogleCode link I posted) Do you know how to get glGetShaderInfoLog() to actually return some information? Debugging shader code is virtually impossible without compiler output. Yikes. You could try the LIBGDX bindings to see if they work (instructions in this post:) And by the way. Thanks for the great tutorial 🙂 You are welcome, thanks for stopping by 🙂 Hi, so I tried shifting your flat triangle (No.1) to the following: (Basically the same size just a different location) final float[] triangle1VerticesData = { 152.0f, -27.25f, 0.0f, 1.0f, 0.0f, 0.0f, 1.0f, 153f, -27.25f, 0.0f, 0.0f, 0.0f, 1.0f, 1.0f, 152.5f, -26.404508497f, 0.0f, 0.0f, 1.0f, 0.0f, 1.0f}; // Position the eye behind the origin. final float eyeX = 152.5f; final float eyeY = -27.0f; final float eyeZ = 1.5f; // We are looking toward the distance final float lookX = 152.5f; final float lookY = -27.0f; final float lookZ = 0.0f; // Set our up vector. This is where our head would be pointing were we holding the camera. final float upX = 152.5f; final float upY = -26.0f; final float upZ = 0.0f; Matrix.frustumM(mProjectionMatrix, 0, 152.0f, 153f, -27.25f, -26.404508497f, 1.5, 10); Matrix.setIdentityM(mModelMatrix, 0); Matrix.rotateM(mModelMatrix, 0, 0, 152.5f, -27.0f, 1.0f); drawTriangle(mTriangle1Vertices); This does not work it just comes back with a grey screen, I am obviously missing something here, can you see where I am going wrong? Whoops ignore the rotateM x and y should be set to 0.0f there. What I was attempting to do was shift both the triangle and camera to get the same picture from a different spot. Is perhaps the problem in your stride? Which needs to be 7 * Float.SIZE Which I am thinking Float.SIZE could be 4 or 8. (The values are measured in bytes). 4 for 32 bit systems. 8 for 64 bit systems maybe? I am also a bit lost. I’ve done a few openGL examples now and none of them work. This seemed to be the most promising example… But this one just crashed on me. If you have learned anything since posting this, I’d like to know. -John Mark Hi, Nevermind, tried shifting it a smaller amount and found that you don’t alter the Up vector or Perspective as those two are relational to the camera. How can I display my different layer objects with differing colors? One is defined like: final float[] pointObjColor = {1.0f, 0.2f, 0.0f, 1.0f}; final float[] lineStringObjColor = {1.0f, 0.3f, 0.0f, 1.0f}; final float[] polygonObjColor = {1.0f, 0.4f, 0.0f, 1.0f}; another is: final float[] pointObjColor = {0.0f, 1.0f, 0.5f, 1.0f}; final float[] lineStringObjColor = {0.0f, 1.0f, 0.6f, 1.0f}; final float[] polygonObjColor = {0.0f, 1.0f, 0.7f, 1.0f}; On each layer the color is going to be the same for each geometry type. I figured that I would do it this way to save a little memory and it doesn’t need to be defined for each vertex, but obviously it needs to be applied to each vertex, if I’m understanding how the vertex shader works correctly. Any ideas on how I could accomplish this? Kind regards Hi Hank, You could do this by using a uniform. An OpenGL uniform keeps its value until changed. Follow the code in this tutorial to see how to use a uniform. The general gist will be: Fragment shader: uniform vec4 u_Color; … gl_FragColor = u_Color; Java code: Do this once, after building the shaders: colorUniformLocation = GLES20.glGetUniformLocation(programHandle, “u_Color”); … Do this whenever you need to change the color: GLES20.glUniform4f(colorUniformLocation, 0f, 1f, 0f, 1f); Unlike with attributes, all elements of a uniform must be specified, so if you use a vec4, you have to use a corresponding Uniform4f. Hope this helps! Hi, that helps a lot thankyou! I was also curious about two things: – If I am receiving polygons from the database, which at the moment I am just rendering as GL_LINE_LOOPs, if I want to create a fill color, it will be a uniform color still but different from the border, I need to create an extra buffer that holds the indexing of my triangles, then I create a GL_TRIANGLE_STRIP, correct? What would be good practice to form my triangle index dynamically, for I won’t know the number of sides? – My other question is about text, I will be wanting to label certain objects on the map, one of the layers contains road center-line data and I will be wanting to place the road name along it. How would this be accomplished? Regards Hank Hi Hank, Indeed, you’d need to draw the interior differently. For convex shapes and some concave shapes, I think you could get away with drawing it as a triangle fan. Otherwise, you’d need to use an algorithm to prepare triangles out of the polygon. You’d have to do a bit more research on this. It could definitely make things easier to make for you if you store all the vertices once and refer to them with an index buffer. For text, it’s also going to be a bit tricky. There are many different ways of doing this, including overlaying a canvas on top of your OpenGL view and drawing to it using Android’s text methods! You could try to start with. This seems like it will be an interesting project once it’s done! Hi all, I try to create new project width android target is 2.3.3 then I copy the LessonOneActivity.java & LessonOneRenderer.java from the AndroidOpenGLESLesson project(I down from learnopengles). I build and load into real android device 2.3.4 but I just got the error message: “Unfortunately Opengles20demo has stopped” although I tried to set supportsEs2 = true too. I try run on emulator but the result is the same. I saw in the AndroidOpenGLESLesson project there’re many files like the library such as:AndroidGL20.java… and in the libs folder are armeabi & armeabi-v7a… in my project not has. Anyone could help me fix this bug? Thanks and Best regards, Vu Phung Hmm, what happens if you try to run the code from the site by itself? You can avoid using the libs stuff if you’re targeting API 9 or higher. hi Admin.. i just read the lesson one getting started. wow.. really the best tutorial for OpenGL Android. Thanks for the best tutorial.. 🙂 All the best for your future tutorials..:) This is the best source for OpenglES2 tutorials, i updated my knowledge from openGL ES 1.x to 2 in a couple of days by learning from this tutorials and porting my own game, thank you thank you thank you!! This is the best site for OpenGL ES 2.0 that I have found to date….Keep up the awesome work. Thank you so much. Hi y’all, I’m getting a runtime error, actually a caught exception. The shaders fail to compile, and I have used the shaders included with your code as well as one from the android website. Also, I am getting a “called unimplemented Open GL ES API” error. I have tried so may websites and tutorials trying to get a GLSL program working to no avail. I have created working android OpenGLES programs, but as soon as I throw shaders into the mix, my programs break. I’m starting with a straight up copy of your code from github before I create my own based on your tutorial, I just want to make sure it works first. So far, Im still frustrated. :/ Please help! Hello! Admin The firt thing, I want to say thanks to your tutorials. It’s very very nice. I have a question for you. (But it is’n about above subject). I have an image (called img1). + I have a frame I want to combine with them and result is an image (call img3) (img2 have background is a part of img1 and size = frame) following link to image detail: Hmm, will that image always remain the same? I would do it with a paint program. If it should move then one way you can do this is with alpha blending. What you do is you have a black & white image that is the outline of your shape above, with the black areas the parts you want to exclude and the white areas the parts you want to keep. You then blend that with the texture of the apples to get the part that you want to keep, like in the second image. I have a quick intro to multiplicative blending at, and for more info, I’d recommend to search for “Alpha Masking” or “Alpha Blending”. Well, im very new to the entirety of opengl and have been fiddling with it to try and realize an idea I had. Anyway, Im a tad confused on what exactly the up vector does in the display. I cant seem to find anywhere else that makes use of it except in your code. I was wondering if all that the up vector changed was the rotation of the camera, so to speak? Sorry if my question is difficult to understand. hi, i downloaded the source and created android project from existing code. edited final boolean supportsEs2 = true; and ran as android application on emulator. it loaded on my emulator (android api17) and i selected the icon. lesson 1 thru 4 appeared. i tried each one but they each returned … “Unfortunately Opengles20 tutorials has stopped ——————————————— ok” running eclipse 4.2.1 fully updated on vista 32bit. any suggestions on how to get it to work? thanks Hi. Hope you’re still active, and thanks for some great tutorials! But, can you explain a little thing: I’m looking into ray-picking so I can determine when a user clicks on the triangle. To do so I need: modelview matrix and projection matrix. But what is the modelview matrix in this example? As far as I understand each object that is drawn on the screen has its own model matrix, so I should not use that one as the modelview matrix. When I check older versions of openGL the modelview matrix can easily be fetched by calling glGetFloatv (GL_MODELVIEW_MATRIX, matrix); But can’t find a function for doing this in OpenGL ES 2.0? Thanks for any help! In OpenGL ES 2 you keep track of the matrices yourself, so you always have access to them. For this example we have a mViewMatrix and a mModelMatrix, so we could get the modelview matrix by multiplying them together like this: multiplyMM(mModelViewMatrix, 0, mViewMatrix, 0, mModelMatrix, 0); You’re right in that each object has its own model matrix, but since the model matrix is used mainly to put stuff into the “world”, maybe what they mean is that you should use the “view” part of the viewmodel matrix, which represents the camera. I did a bit of object picking and I actually used the view matrix and the projection matrix, and created an inverted one using code like this: multiplyMM(viewProjectionMatrix, 0, projectionMatrix, 0, viewMatrix, 0); invertM(invertedViewProjectionMatrix, 0, viewProjectionMatrix, 0); This will help you convert a 2D touched point into a 3D ray, with code like this: private Ray convertNormalized2DPointToRay(float normalizedX, float normalizedY) { final float[] nearPointNdc = {normalizedX, normalizedY, -1, 1}; final float[] farPointNdc = {normalizedX, normalizedY, 1, 1}; final float[] nearPointWorld = new float[4]; final float[] farPointWorld = new float[4]; multiplyMV(nearPointWorld, 0, invertedViewProjectionMatrix, 0, nearPointNdc, 0); multiplyMV(farPointWorld, 0, invertedViewProjectionMatrix, 0, farPointNdc, 0); divideByW(nearPointWorld); divideByW(farPointWorld); Point nearPointRay = new Point(nearPointWorld[0], nearPointWorld[1], nearPointWorld[2]); Point farPointRay = new Point(farPointWorld[0], farPointWorld[1], farPointWorld[2]); return new Ray(nearPointRay, Geometry.vectorBetween(nearPointRay, farPointRay)); } private void divideByW(float[] vector) { vector[0] /= vector[3]; vector[1] /= vector[3]; vector[2] /= vector[3]; } I need to write up a tutorial to share more info, but hopefully that gives you a bit of a start with the resources that you’re looking at! Hi, my question is: you are sending position of vertices as 3 dimensional vector, but in vertex shader it is defined as 4 dimensional vector. I think the fourth dimension is w, but I don´t know where it is added. best regards With an “attribute” the unspecified parameters are implicit, with the first 3 defaulting to 0 and the 4th defaulting to 1, so in this case W is defaulting to 1. hi, i’m trying to find the source code on the github. is it under the webgl of android breifcase. I found a lesson 1 from webgl but it does not work/seem like its the same from this lesson and I looked through all the folders and I can not find another file for the life of me. Can you help direct me please? For the WebGL you may need a few additional steps to get it to work, this post has a few more details: Hello Admin, It is really a good tutorial to know how the 3D shapes(triangles, cubes) can be drawn over GLSurfaceView. Now I am able to draw and rotate those shapes. I want to apply pinch/zoom effects to those shapes simply with fingers. I studied how to apply zoom effects for images but couldn’t able to do the same for objects over GLSurfaceView. Could you give me some demo example for that… If you want I can post my code over here.. Thanks in advance Hi Jimmy, For a GLSurfaceView, you’ll probably want to adjust the perspective or the view matrix in response to a pinch/zoom; for example, you could use a zoom in to decrease the FOV, and a zoom out to increase it, or you could use zoom in / out to move the camera in and out by adjusting the view matrix. I have no code off hand but I imagine you could use as a base. The best tutorial on openGl for android that I’ve found! Damn it is a long tutorial, but keep them coming! Looking forward for more stuff! That was very informative information about the Android. It something gives brief information on OpenGL. I have learn these OpenGL in C,C++ Programming languages. It was hard for me. But as far I have done more practice , its now easy for me. Nice tutorial ! Suggestion: Next version should include some basic panning, zooming, selecting item, camera view movement……processing event in general That’s what I found weak in your demo …lack of interaction 🙂 But….the demo build nicely ! Good job….thanks! I pretty much copy and pasted the code that you have into my project, but when I try to run it on my actual device it displays a blank white screen, no triangles or anything. I’ve done other opengl tutorials and this problem happens to me everytime. Afterwards, eclipse says “source not found”. Is it because my device doesn’t support openGL es 2.0, or maybe because I forgot to import some jar file? Thank you in advance. Hi Alan, What happens if you download the project from GitHub and import that into Eclipse? Hi, I was wondering about the level of understanding one needs about Shader Languages when doing game programming. Do we need to be fluent with it? If so, do you have any introductory reference online/ebook? P.S. Awesome tutorials, by the way! The best I could find online! Although, there’s one minor thing I would suggest (or maybe it’s just me), though — and that is to add more diagrams. 🙂 Hi Harold, It would really depend on the complexity of the graphics within your game. For mobile-level graphics, I think you can get away with a limited understanding especially if it’s a 2D game! For more complex graphics, you’ll need to know more details. I haven’t checked out any books specifically for shaders; when I was first learning myself I checked out “OpenGL ES 2.0 Programming Guide” which is more of a reference work. You can adapt the shaders there right onto Android. Hi, it’s been a while since my last post, just getting back into things. With your examples you place the 3 dimensions into FloatBuffers. With my data, coordinates are kept as doubles, 2 dimensions only. So the fastest way to get this data out for me is by placing it into a DoubleBuffer with the 2 dimensions. So what I am wondering is, is it possible to actually use DoubleBuffer? I have tried altering the following: private final int mBytesPerFloat = 8; private final int mPositionDataSize = 2; private void drawPolygon(final DoubleBuffer geometryBuffer, final float[] colorArray) { …. GLES20.glDrawArrays(GLES20.GL_LINE_LOOP, 0, geometryBuffer.capacity()/2); } Although it is not throw any errors it is not drawing. And if it is possible to only two dimensions then does the z just default to zero? Hi Admin, I have a FloatBuffer fbb. This holds about 17000 line loops, which is displaying fine, however I have just added a touch event handler to pan the map, only pan no zoom or rotate. onDrawFrame loops through each of the FloatBuffer’s I have found that it pans just fine but it is not what you would call a smooth panning experience. Is there any way to make this more efficient or alternatives to my interactivity that you can see? Would appreciate your feedback! I have placed an example of my code on to stack exchange: Hi Hank, I saw you had some pretty good answers on Stack Overflow! From my experience, if your vertex shaders are simple then the mobile GPUs can handle very high polygon counts, so I wouldn’t expect a slowdown to come from that. However, it’s important that you batch the operations. It seems that you’re drawing things by individual polygon and line, and this will be very slow. I’m not sure how fast line loop drawing is compared to polygon drawing, so that is also something you may need to address later on, by pre-rendering the lines into a texture and drawing a texture, instead. It’s hard to time stuff accurately, OpenGL buffers the calls and even the Android tracer gives inaccurate results, from my experience. What you can do is try removing a bit of code completely and time between runs, see how things change then. Try removing Thread.currentThread().setPriority(Thread.MIN_PRIORITY);, and re-engineer the app to put your data into two vertex buffers: one for the polygons and one for the lines. Calling OpenGL thousands of times per second is most likely the source of poor performance. Draw those in two draw calls. If you need to modify the lists you can do that when needed, at the beginning or at the end of onDrawFrame(). Wow! I remember feeling kind of like EEYUGH when I dusted off an old openGL project and realized that the standard (non ES ) 3.x version of open GL no longer supported immediate rendering. Wow. Instead of flaming the closest person I could find, I looked into it and researched the history of shaders (They were first developed by Pixar.) the graphics pipeline and the awesomeness of delegating all of the heavy lifting to the GPU. My kludgy scene which had about 10K polygons, 2 textures, lighting and fog took about 1.5 seconds to render a frame in immediate mode. It rendered flawlessly with shaders. So, EEYUGH, ever heard of a dll in a linux kernel? Me either – get your facts straight if your’e going to bitch and preach. Why have the overhead of a device driver to send data to a device driver? Why or rather, how can dynamic rendering NOT occur at runtime? Jesus Christ you’re a moron. BTW, the java VM has nothing to do with this. The strings are actually “programs” which are written in a language called GLSL, not java. Open your eyes. Do those strings look like java to you? If you’re so offended by strings, write them as plaintext files and just stream the output. Christ you’re dense. Google GLSL and the rendering pipeline: and see how far your ridiculous flame gets you there. Pull your righteous head out of your ass and do some work. It’s people like you who write shitty code so people like me (and the other eager participants on this site.) have to clean up. Thanks appreciate the feedback. I decided to optimize what I’ve got before hitting LOD’s etc. These two layers (parcel and roads) are not going to be changing. So I’ll use the GL_STATIC_DRAW I have placed all the individual FloatBuffers into one large FloatBuffer for each dataset. There must be something else I need to do to the drawPolygon function, as it is throwing errors. I am now wondering how I go about adjusting my drawPolygon function to suit the batch style? Also where the best place to set GLES20.glBufferData(target, size, data, usage) is? Hank I should have gone into more detail, the polygons and lines are drawing but they are now one continuous polygon and one continuous line. It is a bit of a mess to look at. However at this point it is now very fast and panning the map is smooth. Whether this changes by breaking up them up remains to be seen. Do I need to send GLES an array of start positions or something along those lines for each individual buffer? If you just want to set the data once you can call glBufferData from onSurfaceCreated. Can you find a way to generate the line data so that you can draw the lines individually instead of as line loops/strips? That would let you batch all the lines as well. This may increase the size of your data, but saving on thousands of draw calls may be worth it. If you use index buffers, the increase in size will be limited to the indices which can take up a lot less space than the vertices. To continue to use different colors, you can add the color as an attribute of the vertex data instead of as a uniform. You can also calculate and pass in the matrix uniform once per overall draw instead of per line. No need to calculate it per individual line/polygon unless you want to move them around relative to each other. Yes I believe I can generate the lines that way, I’ll read up on lesson eight before I start 🙂 Just thinking if I have this 17000 polygons some of them have 4 sides a lot of them have more. So if we say at a minimum 17000*4 = 68000 points. If I create an index buffer and these can be a byte or short value, then won’t I hit issues with values over (32,767 * 2)? Yeap if I create a ShortBuffer and try to stick my indexing for each point I get some pretty interesting results. For the polygons there were a total of 250625 points, averaging about 15 points per polygon. Obviously this won’t go into a java short which can only go to 32767 even if it were able to handle unsigned it would only go to 65535. What do you think could resolve this? You can go up to 65535 by casting the short as follows: (short) 65535; It will “just work” even if Java thinks it’s -32768. What if you just use a single vertex buffer for now without using an index buffer? That will reduce the complexity of the code. Later on you can figure out a way to LOD the data to get to or under 65536 unique vertices, or you can render using multiple index/vertex buffers. Great guide, but it is a bit complicated. Can you please explain what does the following line TriangleBuffer.position (mColorOffset); And why is it set to 3? aTriangleBuffer.position (mPositionOffset) and here’s to 0? Perhaps you could have one bit of advice what exactly does GLES20.glEnableVertexAttribArray (mPositionHandle);? The vertex data is laid out like this: position 1, color 1, position 2, color 2, position 3, color 3, etc…. Since the position takes up 3 floats, we want to skip over those 3 floats when we start reading the colors. This is why we set the position to 3 before we read in the colors. GLES20.glEnableVertexAttribArray (mPositionHandle) tells OpenGL to enable using the position data in the shaders. We read in mPositionHandle before with this line: mPositionHandle = GLES20.glGetAttribLocation(programHandle, “a_Position”); We have to enable the attribute array before using it in the shaders. Let me know if you have any more questions! 🙂 Thanks for the great tutorial, am I correct that mColorOffset and mPositionDataSize should be equal? or is it that mPositionDataSize should always be >= mColorOffset ? Thanks Thank you. I have another problem: When you adjust the position of the first triangle: final float[] triangle1VerticesData = { -0.5f, 0.25f, 0.0f // zde 1.0f, 0.0f, 0.0f, 1.0f, 0.5f, 00.25f, 0.0f, / / here 0.0f, 0.0f, 1.0f, 1.0f, 0.0f, 1.0f, 0.0f, / / here 0.0f, 1.0f, 0.0f, 1.0f}; I have a problem with pivot. (triangle1 pivot) Because pivot is outside the triangle. (not in the center) So how can I set the pivot at your location? I tried to do it using Matrix.rotateM(mModelMatrix, 0, angleInDegrees, 0.0f, 0.0f, 1.0f); howto but I did not. Hi stety, Please post a question on stack overflow and add some example code there, then post the link here — I will check it out. 🙂 Hi Admin, Have got the buffers working really well now, so thank you for your help on that. I have a new issue that has to do with zooming in and panning. I see possibly three different ways to zoom. – Projection – make the projection area smaller. – View – eye gets closer to the model. – Model – model is scaled larger. Initially I tried using the first which worked well up until it gets to a certain level then the panning starts jumping. After testing I found that it was because the floating point value of the eye, could not cope with such a small shift in position. I keep my x and y eye values in doubles so it continues to calculate shifting positions, then when calling setLookAtM() I convert them to floats. You’ve seen the image of the map and this is quite a large area. Panning starts to jump noticeably when it gets to a level of about 50m away. I would preferably like to get this to a sub-metre level. Now I was going to try using one of the other two methods, probably scaling the model larger first. I think I might run into the same issues if I use the eye getting closer. If I scale the model larger then the scale of the eye shift shouldn’t have to change… I think. How’s my logic so far? Tried to do a scale test but it didn’t work. Not even sure if it should work. I just tried this test in my eye shift function, to see if it would at least scale. public void setEye(double x, double y){ eyeX -= (x / screen_vs_map_horz_ratio); lookX = eyeX; eyeY += (y / screen_vs_map_vert_ratio); lookY = eyeY; Matrix.scaleM(mModelMatrix, 0, 0.5f, 0.5f, 1f); Matrix.setLookAtM(mViewMatrix, 0, (float)eyeX, (float)eyeY, eyeZ, (float)lookX, (float)lookY, lookZ, upX, upY, upZ); } Surely appreciate your advice on this. Thanks Hank Hi Hank, When working with floating point doing things like eyeX -= and eyeY += will cause the inaccuracies to build up over time, so you may want to work with absolute assignment instead of relative additions and subtractions. When you said it didn’t work, what happened? It didn’t scale or you just don’t see anything anymore? Hard to say from looking at this code snippet. It would all depend on where you’re calling scaleM as well — maybe let’s try another StackOverflow question with some example code? 😉 I already knew about. Pivot does not move. When I want to rotate the triangle at the center of, I place it in the center triangle1VerticesData. A move him through Matrix.translateM. Or mile? it somehow move the pivot?? I do not know why but when I answer always give a brand new post.? Hi Admin, I opened a question on StackOverflow,. I have just posted my code with original zoom by projection and pan by scaled eye movements. My attempts so far have been unsuccessful, it didn’t scale, it didn’t disappear or anything, so I thought I would start with a clean slate for the question. Hi Admin, any luck with looking over my scaling issue? Hi Hank, Nothing jumps out to me as I look at the code. Something you can try is to temporarily disable the zoom code based on touch, and just zoom things yourself based on a variable you change each frame. For example, you could do like this: scaleFactor = 1; In draw: scaleFactor *= 1.001; … draw Just to see what happens. If this way is tricky (you said things move off to the side; that’s because the scale is done from (0,0) and not from the center of the screen) you may want to go back to the way you were doing before that was working, and try to fix the jumping around. Also you can try scaling the view matrix instead, just for fun? I’m curious as to what will happen. Hey , thanks. it is well articulated lesson. No errors But I cant see the triangle. Can tell me why? Thxs in advance 🙂 You can compare against the source here: Sometimes working backwards from working code can help to find the missing spot! Hi Admin, I have the zoom working by scaling the model and translating to the correct point. Panning is just by translating the model. However the same issue occurs with the panning but in the opposite way, the values for are too large for the floating point, at a close zoom. As mentioned I keep all my values in doubles and cast them last second into floats for OpenGL. What I was thinking was I am scaling and translating the model right now, I have stored the difference in accuracy like so. public void pan(double x, double y){ modelXShift = modelXShift + x; modelYShift = modelYShift – y; diffXShift = modelXShift – (float)modelXShift; diffYShift = modelYShift – (float)modelYShift; } Then use the difference to make slight adjustments in one of the following the eye, view or projection. Either shifting the eye, translate the view or translate the projection. However I don’t think this is quite as simple as its logic. The following, only once zoomed in, is jumping around. I tried by using setLookAtM() but it was a worse reaction, one side of the image would disappear. I have tried the following with the projection matrix as well with the same results. Matrix.translateM(mModelMatrix, 0, (float)modelXShift, (float)modelYShift, 0f); Matrix.scaleM(mModelMatrix, 0, (float)mScaleFactor, (float)mScaleFactor, 1.0f); Matrix.translateM(mViewMatrix, 0, (float)diffXShift, (float)diffYShift, 0f); My question is and hopefully explained it clearly enough, how does my logic seem? The fact that my image is jumping around maybe the amount I’m adjusting by or something I’m not adding to the equation. Hi Hank, Off hand I don’t know a simple solution to this problem. There must be a solution since Google Maps etc… have found a solution. Just as a test, what happens if you blew up all of the source data (that is, before it touches any matrix transforms) by say, 1000x, truncated that to a reasonable area visible on the screen and then just scrolled that around as normal without any further zooming in on your part. Would that still jump around? I think the key will be to keep the floating point numbers that pass through your transforms from getting too large or too small. Unfortunately I can’t tell you exactly how to get there, that will take more research on your part. 😉 Perhaps would have some interesting ideas for you. I have one big problem. Nothing gets displayed , neither in the emulator nor in my Galaxy S3 … why would this happen ? I don’t have any errors in Eclipse Later observation : I copied the source code from the LessonOneActivity LessonOneRenderer into Eclipse , modified the Manifest file to use open GL 2.0 , and still the app stops , reason : Unfortunately NAME has stopped. Why is this happening ? I am using API 17 so there shouldn’t be a problem … Hope you are still there to answer me! Log Cat output:) Hi Suflet, Try adding this to your Activity before calling setRenderer: glSurfaceView.setEGLConfigChooser(8 , 8, 8, 8, 16, 0); That might fix the issue. Nope , didn’t do it . Tried it a few hours ago … now i get this :) Does the emulator have “Use host GPU” enabled? What happens if you download the code samples from or and run them on the device (not the emulator) without changing any code? There’s also an APK you can download here: Fixed that too , now this is happening : 08-20 13:29:44.341: E/AndroidRuntime(1355): FATAL EXCEPTION: GLThread 90 08-20 13:29:44.341: E/AndroidRuntime(1355): java.lang.RuntimeException: Error creating vertex shader. 08-20 13:29:44.341: E/AndroidRuntime(1355): at com.example.incercarigl.primaIncercareRenderer.onSurfaceCreated(primaIncercareRenderer.java:127) 08-20 13:29:44.341: E/AndroidRuntime(1355): at android.opengl.GLSurfaceView$GLThread.guardedRun(GLSurfaceView.java:1494) 08-20 13:29:44.341: E/AndroidRuntime(1355): at android.opengl.GLSurfaceView$GLThread.run(GLSurfaceView.java:1240) it’s hard writing 300 lines of code line by line to try and understand it and then to get errors , to much lines of code , i’ve spent the last 4 hours trying to figure out what happened because there is no syntax error , so algorithm error is the main problem… I downloaded the APK works great , as for the example I copied the entire code from the manifest file , the main java file and the renderer file . Same error messages … Hi Suflet, Instead of copying the code, what if you just import the project directly into Eclipse and run it as is, without copying? If the APK works great, then the code is fine — something is probably just getting lost when you copy code. From the above error, it looks like you might not have copied the vertex shader source. I would recommend work with the APK code as a base, get it to run in Eclipse, and then modify that code step-by-step to suit your needs. Then it will be much easier to see when things go wrong, and what caused it. What happens to the downloaded source codes : after installing the apk and try to run it , i get a static image with a red square on top and a blue one on the bottom , but no action , nothing moves. Just static display. That sounds like the air hockey example — that is what you are supposed to see from the early examples. 🙂 Everything sounds good. Sorry , I was a bit looney today , lots o’ stuff on my mind. Yeah I figured that AirHockey1 was supposed to do that , i didn’t get the chance to try other apk’s , and in the earlier comment i was talking about the apk on the market , the one with all the lessons in it. Well , to make the long story short , after ( i lost count how many ) hours , i finally got this lesson working. I didn’t figure out what the problem was , maybe next time it will pop in my mind. But for now all is good. Thanks for the quick replies. Didn’t think you were still around after 2 years from posting this tutorial ! Now up to the next Lesson !!! good tutorial. there is no fragment shader compile code, but anyway its good, got my rotated triagle, We are an Android App developer at Japan. Here, only few material related to Open GL ES 2 exist. On Android environment, in local language we now only one book. Despite in english “Learn Open GL ES 2” is the best one. It is organized according to japanese learning style. We had translated it to native language for internal (office) use. Many people outside the office interested on this material. We intend to put it in our Web Site (107.co.jp). This site is free and open for public access. Certainly, We will mention all the necessary credit information about the source, but no else more! No payment of any kind will be paid to you or your company. We appreciate your comment about this possibility. Congratulation for written premium quality tutorial. We hope continuous success in all future publications. Sincerely, K.K.107 obana@107.co.jp Hello, If you are referring to this website, then the content here is licensed CC-BY-SA (), so you are completely free to translate and make the material available on your own website! You only have to give me attribution and release your translated material also under the CC-BY-SA. If you are referring to the book, that is under copyright by The Pragmatic Bookshelf and the best way to inquire about translation rights is to contact them here: Thanks! Great tutorial – really helps – going to take a look at your book and probably buy it soon! I did have a problem, however, which was the same as Suflet’s. I kept getting the “Error creating vertex shader” problem. The reason for this is that when I copied the code from the website to my Eclipse project file, there were some weird Unicode characters (which weren’t visible to the eye !) which got copied across as well. This didn’t hinder compilation, since the problem lie within the strings, which of course Java wouldn’t care about. However, clearly the shader compiler picked up the problem, which gave the error. There are two solutions to this: (1) completely delete the “final String vertexShader = ‘ … ‘” line and type it by hand, or (2) copy it directly from the downloadable project files. Either way, you don’t end up with those unwanted and invisible Unicode characters! Great tutorial – the best on the net. This happened when copying the code from this blog post? Ouch, that’s not good! Thanks for letting me know about this; maybe the better solution is to switch to GitHub Gists or something similar. Sorry to bother you, but I had a strange issue. I copied the code line for line so that I could understand and get a feel for what’s going on, and then installed the package on my phone and had the strangest behavior. Once the app loaded, I had a black screen. Nothing happened. No animation. However, once I hit the home button, I could, for a second or two, see the three triangles spinning before my phone returned to the home screen. Very strange. Is there any possible explanation for such behavior? Hi Eric, I’m just wondering if that happens with the downloadable APK or with the code from GitHub? Yes, I was select/copy/pasting across from the blog post. I should have picked up the error earlier really because the problem wasn’t restricted to the text inside the strings, but all the code in the copy block. I suspect its probably a tabbing Unicode that’s to blame! Hello Kevin, That’s really a great job you’ve done here. I’m a beginner with openGL and this article really worth the read. I have a question though. I noticed that when rotating the screen the full process of creation is performed : creation of GLSurfaceView, GLRenderer as well as the creation of the objects (Triangle). So basicaly, it’s like it restarts the application nearly from scratch. This is easy to observe by adding traces in onCreate, onPause, OnResume of the Activity and adding other traces in GLRenderer (onSurfaceChanged, onSurfaceCreated). My code does create a pretty complex object, so it takes time (20 seconds) to create it. So if the user rotates the screen I don’t want my object to be recreated from scratch as it means a lot of time which shall not be necessary as the object already exists. So far, I’ve : . declared my object as static to not loose it. Indeed, that works but when I rotate the screen i just does not work at all.(note that I’ve added the piece of code that prevents from recreating the object if it is not null). I also tried to create the Renderer as static but no luck. But it does not work at all. I was wondering what would be the modification to bring to your OpenGLES20Complete example code so that there is no recreation of the object when rotating the screen. Well, that would really help me a lot. The easiest way I can think of is to just add something like this to your AndroidManifest.xml: Your activity can still be restarted so you should still handle anything you need to persist in onPause() / onResume(), it just won’t restart in as many places. Whoops, WordPress filtered out the tags. That “android:configChanges” should be added to your activity tag in AndroidManifest.xml. Ooooh man, what a lovely solution. Thanks so much. I was spending so much time on dealing with the EGL context being lost. There’s a new function with GLSurfaceView (GLSurfaceView.setPreserveEGLContextOnPause(booelan) to prevent it from being destroyed but that assumes that your GPU is capable of dealing with multiple context which does not seem to be the case for mine. A huge THANK YOU Kevin. Glad that adding that stuff to the manifest helped out. 🙂 When you go on to more complex apps, I think GLSurfaceView.setPreserveEGLContextOnPause can still be useful, as it will help you out when onPause() and onResume() are called, i.e. if the user receives a call or presses the home key. In those cases the activity is not destroyed, but you could still lose the GL context so it’s still worth keeping that call around. Yes, that worked on your example. I then tried to implement in my own code but it’s a bit more complex because I have to change my layout depending on the orientation of the screen (portrait or landscape). I noticed that as soon as I load an other layout the EGLcontext is lost. That’s so annoying. I will still investigate a bit. It’s a bit more complicated but you can hide/show views instead of reloading the layout… or you can get fancy and reload the layout, but when you do so in onConfigurationChanged(), you can replace the GLSurfaceView in the new layout with the one you already have instantiated (viewGroup.removeView, viewGroup.addView). In any case, just some ideas. If it’s possible a layout that works in both portrait and landscape will be easier to deal with. Thanks for the answer. I was about to write that I had decided to hide/show views finaly This solution works fine even if I don’t like it a lot. I’ve performed many tests with changing the layout in onConfigurationChange (XML file). I also replaced dynamically the parent of my GLSurfaceView view as suggested (remove/add) but then … I noticed that the simple fact of replacing the layout with SetContent just made the EGL context being lost. This is weird. I believe that is because changing the layout removes the old GLSurfaceView from the hierarchy, which causes it to destroy its GL thread which destroys the EGL context. To keep the same context, you have to keep the same GLSurfaceView. Hi Kevin. What you suggest is pretty convincing. I don’t have my dev environment with me during the weekend (family time) but I’ll give it a go on monday and let you know. I can’t remember if I was loading the new layout before or after removing the GLSurfaceView’s parent. I’m just sure that I was keeping the same GLSurfaceView. What I will try in sequence : 1) remove GLSurfaceView’s parent. 2) Load the new layout (setcontent) 3) Add the view to its new parent. Nice Tutorial But When i run my application in emulator error Unfortunately, the Android emulator does not support OpenGL ES 2 Please Help Me How to run OpenGL Application my emulator Please let me know if this post helps: Just performed the test as follow in onConfigurationChanged : 1) removeLSurfaceView’s parent 2) SetContent(new xml layout) 3) find the target LinearLayout per ID 4) add GLSurfaceView to the target LinearLayout Unfortunately, the context is lost. I confirm that GLSurfaceView was kept the same. If you wish I can post the little modified source somewhere. But that’s what I discovered. It seems like SetContent(layout) destroys every existing EGL context. Indeed, thinking back on the implementation I remember now that the GL context gets destroyed as soon as you remove the GLSurfaceView from the view hierarchy, even if you use the same GLSurfaceView later. So, I think the only other options may be to use a shared EGL context (not sure how well these work on Android), or to use one XML for both portrait and landscape. You might be able to do that by toggling visibility between GONE and VISIBLE for your other UI elements. Hey, Can you explain why when i change the eyeZ to 1.0f the triangle is not showing up? Like this // Position the eye behind the origin. final float eyeX = 0.0f; final float eyeY = 0.0f; final float eyeZ = 1.0f; Whats wrong with 1.0f? I’m guessing that this is because the projection matrix is setting the near plane to 1.0f, so the stuff that gets drawn ends up 1.0f distance away from the eye and gets clipped by the near plane. Try changing the numbers for the projection matrix and you’ll probably see a difference. This is much more advance than of the book. The method you use is very very different. BTW can you tell me why float[16]? Hi Skadush, The float[16] is a raw buffer to hold the data for a 4×4 matrix, for example, the projection matrix. This post is much older than the book, so it is more unrefined. 😉 So ah…… I should follow the book? Because its much more advance? and I havent seen all the content of the book yet but does the book contains all the tutorial in here as well with better coding atleast? I definitely recommend to follow the book as it goes a lot more in-depth, explains things in more detail, and assumes less knowledge on the part of the reader. Once you’ve read the book you’ll have more of a knowledge base to continue your education on this site and on other sites. Hello, I am following your tutorial and it begin to make more sense when i read it a couple times. However, I try to make the triangle move to the right but it does not work. Can you please tell me how to do that using this existing code skeleton? Thank you! Hi Jacky, for the first triangle that gets drawn like this: // Draw the triangle facing straight on. Matrix.setIdentityM(mModelMatrix, 0); Matrix.rotateM(mModelMatrix, 0, angleInDegrees, 0.0f, 0.0f, 1.0f); drawTriangle(mTriangle1Vertices); You could move it to the right by translating it, like this: Matrix.setIdentityM(mModelMatrix, 0); Matrix.translateM(mModelMatrix, 0, 1.0f, 0.0f, 0.0f); Matrix.rotateM(mModelMatrix, 0, angleInDegrees, 0.0f, 0.0f, 1.0f); drawTriangle(mTriangle1Vertices); You can read up more about transformation matrices here: and WAAAIT!.. XD What is the difference of attribute and uniform?? Cant I just use uniform? Is uniform only for constants? An attribute is something that you would set for each vertex, while a uniform is something that can apply to many vertices and that you’re only able to change between draw calls. So, while a uniform isn’t only for constants (you can still change them between draw calls), it can make sense to use them in that way. This tutorial is a work of art, and is very, very helpful, thank you a lot! But I still don’t understand some things like: why to multiply matrices to transform a object. In my unsuitable concept, to move an object, I should sum a number to it’s position. So, I not understand, the “because” of the use of matrices for this and why multiply it instead. Another thing I don’t understand is why fragment shader is a programmable step since it does a so single thing. Well, again, thank you for this nice tutorial! 🙂 When it comes to translation it’s actually kind of a hack. 😉 By multiplying a matrix with a translation matrix, what actually happens is that the translation amount is added to the X, Y, and Z parts of the matrix. You can learn more here: However, normally we’re doing a lot more than just translation. We’re also projecting to the screen, and we might be rotating and doing other transformations as well. That’s why transformation code will usually premultiply these matrices together, and just pass the final constructed matrix into the shaders. In OpenGL ES 1.0, the fragment shader wasn’t programmable. This made it easier to do simple renderings like the one we do in the lesson. On the other hand, it was a lot more complicated to do more advanced effects. In OpenGL ES 2.0, having a programmable fragment shader adds more initial overhead to simple scenes like this, but the flexibility really pays off later on. Another thing I didn’t understand is why set colors on vertex? I always thing about colors on pixels not on vertex Hi Jair, indeed colours are set on pixels in the fragment shader. By setting a colour on each vertex, the fragment shader can interpolate the colours between the 3 neighbouring vertices for the current fragment. In a real app or game, it might be combining the colours from several different textures and lighting effects before applying to each fragment. Hi First thank you very much for the great tutorial. I need a little help with orthogonal projection. This is what I do in the onDrawFrame method onDrawFrame() { setPerspectiveProjection(); // code took from onSurfaceChanged method // trangle drawing code in the original example … // set matrix to orthogonal Matrix.orthoM(mProjectionMatrix, 0, 0, screenWidth, 0, screenHeight, 0, 2); // Draw the triangle facing straight on. Matrix.setIdentityM(mModelMatrix, 0); drawTriangle(mTriangle1Vertices); } But I cannot see the trangle drawn in 2D // reset the perspective project Hi! I have a question. Why doesn’t use GLES20.glDisableVertexAttribArray function after draw? Hi, really good lesson. I followed the link to download the source code but I can’t see it on the website. I’m prob being blind but can you please tell me which link on repository will give me the source code? Many thanks. I do not mean to be rude, and please forgive my noobieness, but i also was caught on this and without this comment, it would have rendered my experience on this site pretty much useless. Sorry if i already should have known but it said for beginners, and im starting here from scratch into openGL es. Very helpful website. I am a newbie and please forgive me for asking this simplistic question, it’s stuck in my head. The triangles have been defined to be in the xy plane with z = 0.0 And the near and far clipping planes are z = 1.0 and z = 10.0 respectively. So the triangles lie outside the view frustum, right? I do not understand how they are visible. What makes me even more curious is that the triangles vanish when I set their z coordinates to 1.0, which should be included in the view frustum. I searched and searched on Google but I could not find the answer or the correct keywords to use. Somebody please answer, or this curiosity will kill me. Hi S, The key here is that the triangle coordinates get multiplied by the view matrix before the projection matrix, and the view matrix moves the triangles into the right range. It might be easier to understand it if you debug your program, copy the actual values into a calculator, like this one: Enter the values like this 0 4 8 12 1 5 9 13 2 6 10 14 3 7 11 15 With the number corresponding to that position in the array. Then you can multiply the matrices, or matrix * vector, just as you see it in the code, and also compare the results to what you see in the code. Hey 🙂 It doesn’t work on my samsung galaxy s3 :/ it only shows a black screen. Do you what the problem is? Does it happen with the downloadable app? You could try it here: Hello, Firstly, thanks for the android it’s very helpful. I just have a question. In the code, your triangle data is x, y, z then r, g, b, a and it repeats like that. The way I’ve set up my code has all of the coordinates and then the colors, also, it is in 2D. So my triangle data is something like {x, y, x, y, x, y, r, g, b, a, r, g, b, a, r, g, b, a} Because of this change, I don’t know how to implement your drawTriangle method, the variables for the offsets and such have to change but I don’t know how to set it up. Any help is appreciated. Hi Ogen, You can do this too, it’s just a matter of changing the calls to glVertexAttribPointer. For example, you can do it like this: /** Offset of the position data. */ private final int mPositionOffset = 0; /** Offset of the color data. */ private final int mColorOffset = 3; // FIXME: Instead of 3, this should be the start position of your colour data. // Pass in the position information aTriangleBuffer.position(mPositionOffset); GLES20.glVertexAttribPointer(mPositionHandle, mPositionDataSize, GLES20.GL_FLOAT, false, 0, aTriangleBuffer); // NOTE: A stride of 0 since the data is packed., 0, aTriangleBuffer); // NOTE: A stride of 0 since the data is packed. GLES20.glEnableVertexAttribArray(mColorHandle); I didn’t compile and try this out, but I believe that this should work. 🙂 Thank you SO much, that worked like a charm! Wait a minute, it’s not T.T I though it was… What do you set mPositionDataSize and mColorDataSize to? I set them to 2 and 4 respectively. Is that right? Yeah that would make sense if it was X, Y, and R, G, B, A Here is the code for my draw method. protected void draw(GL10 gl) { // Add program to OpenGL ES environment GLES20.glUseProgram(mProgram); // get handles mPositionHandle = GLES20.glGetAttribLocation(mProgram, “vPosition”); mColorHandle = GLES20.glGetUniformLocation(mProgram, “vColor”); InGameActivity.checkGlError(“glGetAttribLocation”); InGameActivity.checkGlError(“glGetUniformLocation”); vertexBuffer.position(0); GLES20.glVertexAttribPointer(mPositionHandle, 2, GLES20.GL_FLOAT, false, 0, vertexBuffer); // NOTE: A stride of 0 since the data is packed. GLES20.glEnableVertexAttribArray(mPositionHandle); // Pass in the color information vertexBuffer.position(6); GLES20.glVertexAttribPointer(mColorHandle, 4, GLES20.GL_FLOAT, false, 0, vertexBuffer); // NOTE: A stride of 0 since the data is packed. GLES20.glEnableVertexAttribArray(mColorHandle); GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, 3); } Hi Ogen, I don’t see anything wrong in particular with that code — would you be able to paste a self-contained sample to a GitHub Gist? That way I could compile it on my side and see. Okay I made one, here it is: Thanks for taking the time 🙂 Hi Ogen, Np, I will make some time to check it out tomorrow. Hi Ogen, I was able to get it working. I’ll give you some hints to try and hit the solution on your own (as you’ll learn much more that way ;): 1) To read in array data in the vertex shader, declare it as an attribute, and bind it with glVertexAttribPointer. Uniforms are only for “constant” data that is shared across many vertices. 2) To access color from the fragment shader, output it as a varying from the vertex shader and read it as a varying in the fragment shader. So the main changes you need to make are with the shaders. If you’re still stuck after a couple of days, don’t hesitate to get back in touch and I’ll send over the solution. At the risk of being overly self-promoting, I also wrote a book that is just for beginners, since I was once in the same shoes as you: OpenGL ES 2 for Android: A Quick-Start Guide. Hope this helps out! Hay there! I’ve been working on OpenGL for 3 days now with no luck. I’ve done 4 tutorials so far and watched lectures. Feel like I am getting closer. I need some contacts who are also serious about OpenGL. From your git-hub post, I am taking you as serious. My email is J4M4I5M7 AT hotmail.com -John Mark Thank you so much for looking intro to OPENGL 2.0, I changed my shaders to this and it’s still not working: private final String vertexShaderCode = // This matrix member variable provides a hook to manipulate // the coordinates of the objects that use this vertex shader “uniform mat4 uMVPMatrix;” + “attribute vec4 a_Position;” + “attribute vec4 a_Color;” + “varying vec4 v_Color;” + “void main() {” + ” v_Color = a_Color;” + // the matrix must be included as a modifier of gl_Position ” gl_Position = uMVPMatrix * a_Position;” + “}”; private final String fragmentShaderCode = “precision mediump float;” + “varying vec4 v_Color;” + “void main() {” + ” gl_FragColor = v_Color;” + “}”; From my point of view, it looks like it’s right. I have outputted the color as a varying from the vertex shader and I’ve read it in as a varying from the fragment shader. Nevermind I just made a silly mistake, I got it working. Thanks for not doing it all for me, I did actually learn more! Nice 🙂 Glad that it worked. Hello, Sorry to bother you again but I’m ready to publish the opengl es 2.0 app I’ve been working on (thanks to your help) but I just have one last problem. I have an activity where some triangles are being drawn etc and at one point, I want to quit the activity and then later, I want to start it again fresh. I am calling the finish() method and it exits the activity but then when I start it again, some of my triangles are still there eventhough I called finish(). I get the feeling that it must be the mGLView and the renderer and the GLSurfaceView objects that didn’t get deleted or something. How can I just get rid the entire activity? I have tried using System.exit(0) but I read that it is bad practice and it was causing a lot of buggy behavior so I decided to just try to finish the activity normally but I’ve been really struggling. Any help with this final hurdle would be greatly appreciated. Also, I don’t want you to think I’m handing this problem on to you, I am actively trying to find a solution but nothing is working. Hi Ogen, Is this when pressing the home button or the back button? Normally you should never need to call finish() manually, and especially not System.exit(0). Is it possible that you’re using static variables to hold onto stuff and thus that stuff doesn’t get cleared? I would avoid statics except when used as constant data, or as some sort of in-memory cache. I think I found the line in your code that’s doing it, actually. Are there any static variables that you keep concatenating to, over and over? I’m not sure cause I didn’t trace it, but check it out. 😉 Hi, All the of triangle data is not static, but my main activity does have a lot of static variables. I have a bunch of Timers, Tasks, ArrayList that are all static. I ‘didn’t know static variables stick around. I’ll try to make them all non-static and see if it works. Thanks for the reply Also, its not when pressing the home button OR the back button. When the game finishes, an alert dialog comes out that has a button called “Play Again” and “Back to main menu”. I want to kill the activity when they click “Back to main menu” The reason I have so many static variables is because of the openGL stuff. I have this class: public static class MyGLRenderer implements GLSurfaceView.Renderer It’s static so everything inside it needs to be static otherwise I get an error. Am I allowed to remove the static from MyGLRenderer? Or does it need to be static. Hello, I got it working, I just removed everything that is static and now my finish() and recreate() (to restart the game) are working. Thanks. You don’t mind if I put you in the acknowledgements for my app right. I’d still be stuck on the shaders if it weren’t for you 🙂 Here is a link to my app. Thanks for all the help. Hi Ogen, This looks pretty neat! I’ll link to it in my next roundup and help to promote. I had a few observations I wanted to share with you: 1) It wasn’t obvious to me at first that I had to move the corners of the triangle. It might be helpful to have a short tutorial the first time you start the game just showing how to move the corners to dodge the enemy triangles. 2) At first I wasn’t able to play because my Google Play Services was out of date. If it isn’t mandatory (it seems to be for the leaderboard?) it might be worth making the UI flow for this optional. 3) There’s a performance issue right now where the framerate starts to slow down after 15 seconds or so. I’m not sure why this is. I would trace over the onDrawFrame and what it’s doing in there to check nothing in there is slowing it down too much. 4) I saw that you disabled the back button, probably to avoid restarting the game when the user returns to the app. One way you can have your cake and eat it too is by using onSaveInstanceState / onCreate to save and restore the game state. 5) The “Rate” option wasn’t working for me. This is pretty neat, and it probably required you to solve several different problems, like detecting and reacting to the touch events, calculating triangle overlap, and so on. Thanks for the advice, I’ll try to fix all the stuff you said. The main reason I disabled the back button was because I felt that people could abuse the system by just pausing it when it gets to a hard position, then restarting it again. I wanted one game to happen in one hit, similar to flappy bird where you can’t go back. As for the rate button, It works for me and I just copied code from a stackoverflow question that starts up the playstore so I don’t know how to debug that because I never get any errors. The main thing that bothers me is the quality of the game how it degrades over time like you said. This is my onDrawFrame(). public void onDrawFrame(GL10 gl) { // Redraw background color GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT); mTriangle.draw(); if (!stopDrawing) { Triangle current; for (int i = 0; i < numActiveTriangles; i++) { current = fallingTrianglesArray[i]; if (current != null) { calculateHits(current); if (!current.fell) { current.draw(); } if (current.isGarrisonGoldTriangle()) { mTriangle.draw(); } } } } else { triangleThatHitYou.draw(); } } And this is after I optimized it a lot so it is probably still doing too much work or something. The calculateHits method is pretty long. And it does a lot of work which is probably the cause. Hi Ogen, Is calculateHits ignoring the fallen triangles? It might be worth keeping all the triangle data in one vertex buffer, and simply appending to that data when you add new triangles, and swapping them out when you remove. You can preallocate a buffer big enough for say, 200 triangles, or whatever is the upper limit of triangles you expect to have on the screen at the same time. This reduces your draw calls to 1 for all of the triangles. To get an idea of how this could be implemented, check out the code for the Particles chapter here: Wow I never thought of that. That’s a really good idea and it’ll make the app run much more smoothly. Currently, every triangle object has a boolean called hasFallen. And the draw method is only drawing triangles from an arraylist that contains non-fallen triangles. So I am only taking the triangles on the screen into account. I’m going to try to put all of the coordinates of the on screen triangle into one buffer and draw it. Thanks for the tip 🙂 Hi, I’m new to OpenGL and Eclipse, but want to learn fast. I downloaded the sample project but when run and press any lesson button it says “Unfortuantely, Learn OpenGL ES Tutorials has stopped.”. I don’t know what did I wrong, I just open the project with Eclipse and run. Any ideas? Hi Kervin, If this happens only in the emulator you could try the tips on this page:. What is the error you see in your logcat? Thnx for the tutorial. How can I change triangle to a square or a star? Hi Hami, The simplest way would be by drawing additional triangles. For example, you can draw a square out of 2 triangles, and a star out of 5 triangles that slightly overlap. I would recommend to experiment and try it! 🙂 Thank you for these helpful tutorials! Really great, especially when the Android OpenGL reference lists all the methods but not what they do, and you have to hope that khronos’ c++ documentation is sufficient for the java version. 0_o. Might pay to update the page – some emulators can support OpenGL ES 2.0 now. Also, would you pretty please be able to draw a diagram of the app lifecycle, and when the OpenGL context is destroyed and recreated? Every time I lock my phone and unlock it, the VBOs are no longer valid. It would be helpful to have an image to show exactly when to expect this so I can reload all GL content. Cheers, Andrew Hi Andrew, normally everything will get destroyed in GLSurfaceView’s onPause() method, but you can avoid this by calling GLSurfaceView.setPreserveEGLContextOnPause(boolean preserveOnPause). This doesn’t work on all devices but should work on most newer ones. Is there any easy way to check if the context was preserved? At the moment, all I can think of is creating something trivial, like a 2×2 texture, and checking if it can be bound in onResume(). Thanks again for this website. I can’t say enough about how much it helped me to understand OpenGL. Hi Andrew, If your onSurfaceCreated() gets called again, then you know you lost the context. 😉 In addition to the preserve EGL call, if you’re only using GLSurfaceView and don’t have a complicated layout with other views or fragments, then you can ask Android not to restart your activity on rotation with this line in your AndroidManifest.xml: android:configChanges=”keyboard|keyboardHidden|orientation|screenLayout|uiMode|screenSize|smallestScreenSize” This is what AdMob is using, for the same reason of not wanting to restart the activity. You can remove any of those if you want Android to restart your activity when they change. Thank you for the kind compliments, glad that the site helped. 🙂 Great tutorial. One question. The opengl.org page on fragment shaders has no mention to gl_FragColor… how come? For OpenGL ES stuff you can use the resources here directly for 2.0 and 3.0/3.1: Why we need allocatedirect() for attributes and isn’t neccesary for uniforms? Hi Diego, A uniform is usually a small amount of data represented by 3 or 4 floats, or up to 16 in the case of a matrix, so it can be sent directly to OpenGL through one of the method calls. An attribute, on the other hand, represents an array which can be hundreds of thousands of elements or larger, so for this reason, a pointer to the data is passed to OpenGL rather than the data itself. AllocateDirect allows OpenGL to safely access this data by making sure that the VM won’t move the data or perform other shenanigans that the native code would not expect. Very nice Tutorial Easy to understand if you have basic knowledge of rendering pipeline. Best comment I’ve ever read in this website. Cheer! Raymond, take a look on this (this is how I get the surfaceview) GLSurfaceView glmapsurface = (GLSurfaceView) findViewById (R.id.map_gl); glmapsurface.setEGLContextClientVersion(2); // <- THIS IS THE IMPORTANT LINE glmapsurface.setZOrderMediaOverlay(true); glmapsurface.setRenderer(glmapview.mRenderer); Also don't forget to put " in the manifest. I’m learning so much from your book. Thank you. This doesn’t work on current Android emulators Nevermind, this is an emulator problem Did OpenGL es working on Android Studio? The only issue I’ve had coding OGL in Android Studio is that the plugin places some kind of comment in the first line of the glsl. I get an error when I compile, unless I delete that line. Other than that, I haven’t seen any issues. I test to a connected device. I don’t use the emulator, so I can’t comment on that. Is: // Bind attributes GLES20.glBindAttribLocation(programHandle, 0, “a_Position”); GLES20.glBindAttribLocation(programHandle, 1, “a_Color”); Combinded with: mPositionHandle = GLES20.glGetAttribLocation(programHandle, “a_Position”); mColorHandle = GLES20.glGetAttribLocation(programHandle, “a_Color”); A redundancy? I Thought you used one or the other. glBindAttribLocation or glGetAttribLocation I used Android Studio. Created a blank activity, and after the project was built, created another class file. Deleted the contents of both files except for the first line. Then went to GitHub and copied the contents of both LessonOneActivity.java and LessonOneRenderer.java, except the first line of each, and pasted them into may files of the same names. Compiled and ran on an emulator, first try. Thank you for a great tutorial Ok so how do i create a 2d and 3d air hockey game using Android Studio and *NOT* use Eclipse since it is now unsupported. Migrating from Eclipse to Android Studio is very confusing to me. Codewise it’s basically the same but IDE-wise some steps will be different. If you’re building a new project from scratch, Android Studio is easy to get up and running with.–mobile-22958 Thanks for the great tutorial. I have used desktop OpenGL a few years ago. What comes to my mind is this OpenGL ES is very low level, for a very simple and basic task, many things MUST be implemented manually which personally I do not like it. I prefer a little more high level interface (such min3D) to do my things. Anyway, for those who will refer to this page and read your helpful tutorial, the following link may be helpful too: Thanks a lot for this amazing tutorial. I have just 1 question. I tried multiplying the mViewMatrix with mProjectionMatrix and then multiplying the result with mRotationMatrix and it still worked. Now I see that you’ve multiplied the rotation matrix with view matrix and the result with projection matrix. I tried it your way and it worked too. But from what I understand of 3d mathematics is that matrix multiplication is commutative and they both should not give the same result. P.S.- I can’t thank you enough for the way you’ve explained everything. It cleared almost all of my doubts. Hi Utkarsh, this might help: Hi guys I just wanted to make the page longer, thanks. Hi Admin, This is nice Demo for beginner. I read sankar ,Hank and your reply… now i want to draw star instead of triangle what can i do? which site is helpful for this creation? Hi, First ! thank you for sharing your work. I am new in opengl and android. And when i tried to run the lesson7 in found out that one package was missing. import com.learnopengles.android.R; It is possible to get it ? thanks. Has anybody got a working OBJ loader that I can use? I am looking to build a simple game but I realise I am going to need more complex shapes in order to do this. I am following Kevin Brothalers book on Opengl ES and have built a basic Air Hockey table. This book does not cover an OBJ loader which maybe a suggestion for the second ed! Regards John Большое спасибо за прекрасный урок! Таких мало в интернете. Здоровья Вам и всего самого хорошего! I have error: cannot resolve symbol air_hockey_surface. Please help Is the tutorial up-to-date? As far as OpenGL ES 2 goes? Yes, more or less. The code in the repo also now supports Android Studio thanks to contributors. Thanks for this tutorial. I adapted much of your code to make an app called fork20. This comment keeps getting discarded as being spam so I removed the website which is fork20 dot xyz. It’s a free app with no ads, so I suppose this is technically spam, but I wanted to show you what I made with your generous help. Sorry if that’s a bad thing. Apologies for the delay — email notifications were completely borked on my side. Just approved, and thank you for the shout out.
http://www.learnopengles.com/android-lesson-one-getting-started/
CC-MAIN-2017-51
refinedweb
20,320
73.68
Products and Services Downloads Store Support Education Partners About Oracle Technology Network When memory mapping a file, it is possible to run out of address space in 32-bit VM. This happens even if the file is mapped in smallish chunks and those ByteBuffers are no longer reachable. The reason is that GC never kicks in to free the buffers: === import java.io.RandomAccessFile; import java.nio.ByteBuffer; import java.nio.channels.FileChannel; import static java.nio.channels.FileChannel.MapMode.READ_ONLY; public class Test { public static void main(String[] args) throws Exception { String name = args[0]; FileChannel channel = new RandomAccessFile(name, "r").getChannel(); long size = channel.size(); System.out.println("File " + name + " is " + size + " bytes large"); long ofs = 0; int chunksize = 512 * 1024 * 1024; while (ofs < size) { int n = (int)Math.min(chunksize, size - ofs); ByteBuffer buffer = channel.map(READ_ONLY, ofs, n); System.out.println("Mapped " + n + " bytes at offset " + ofs); ofs += n; } channel.close(); } } === java -showversion -verbose:gc Test large.iso java version "1.6.0-beta2" Java(TM) SE Runtime Environment (build 1.6.0-beta2-b81) Java HotSpot(TM) Client VM (build 1.6.0-beta2-b81, mixed mode) File large.iso is 6603407360 bytes large Mapped 536870912 bytes at offset 0 Mapped 536870912 bytes at offset 536870912 Mapped 536870912 bytes at offset 1073741824 Mapped 536870912 bytes at offset 1610612736 Mapped 536870912 bytes at offset 2147483648 Mapped 536870912 bytes at offset 2684354560 Mapped 536870912 bytes at offset 3221225472 Exception in thread "main" java.io.IOException: Not enough space at sun.nio.ch.FileChannelImpl.map0(Native Method) at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:742) at Test.main(Test.java:18) EVALUATION Rather than simply forcing gc any time the map fails with an IOExcepion, we look for the instances in the native call's return value where the failure was due to lack of memory (e.g. ENOMEM), throw an OutOfMemoryError and retry only for those cases. If we fail again, we give up and throw the expected IOException. WORK AROUND ByteBuffer buffer; try { buffer = channel.map(READ_ONLY, ofs, n); } catch (java.io.IOException e) { System.gc(); System.runFinalization(); buffer = channel.map(READ_ONLY, ofs, n); } *** (#1 of 1): [ UNSAVED ] ###@###.### SUGGESTED FIX sun.nio.ch.FileChannelImpl.map() should call System.gc() if the native map0() call fails, similar to what java.nio.Bits.reserveMemory() does. EVALUATION Reasonable solution to help with resource issues that can arise when mapped byte buffers aren't unmapped in a timely manner.
http://bugs.sun.com/view_bug.do?bug_id=6417205
CC-MAIN-2013-20
refinedweb
410
52.97
When you see such statement, one thing which comes in your mind about purpose of new keyword is that it creates an instance, isn't it?Class objC= new Class();When this is certainly the main purpose, there are other behind the scene tasks our well known new keyword is responsible for, lets understand what: 1- Verifies the size needed for object allocation on managed heap.2- If sufficient space is available then allocate the object, where application root pointer is pointing.3- If required space is not available then, it triggers the GC (Garbage collection) lives in System.GC namespace, which does the heap cleanup and then after reclamation of needed space, the object will be allocated.So, new is not just instance creation, its more that that. By the way, IL instruction for new is newobj. .
http://www.c-sharpcorner.com/Blogs/11190/purpose-of-new-keyword.aspx
CC-MAIN-2015-48
refinedweb
138
60.95
I’m opening this topic on behalf of @imranhsayed and @smit.soni22 because they want to work on a Frontity package that will add support for Contact Form 7 in Frontity. It’s probably worth noting here that Frontity is different to other React frameworks like Gatsby or NextJS. In those frameworks, the steps involved to integrate a WP plugin like CF7 will usually be something like: - Add a contact page in your theme/app. - In that page, create new React components for the form. - When the form is submitted, fetch manually the CF7 endpoint. - In the settings of your app, add the ID of the form you want to point to. That’s not the way things work on Frontity and the purpose of this topic is to teach how to create a package. It’s totally our fault that this information is not available yet in our docs but I hope people can check this topic for reference. If you want to work on another Frontity package, please open a new topic and we can discuss the best possible implementation. Frontity’s design goal is to make things work out of the box with WordPress. Ideally, this package will allow any contact form you create on any WordPress post or page to work without any problem, just like it does in any PHP theme. How to structure your project You can develop a package inside a Frontity project. This is not only possible, but recommended. You can easily publish the package to npm once it’s ready from its own folder: cd packages/my-awesome-package npm publish That way, anyone that wants to contribute to the package can do so by just cloning the Frontity project, doing npx frontity dev and making the necessary changes. If you’re package is not a theme, you should not have your theme installed as a local package. Move it instead to node_modules. The only local package should be the one you’re developing. That will avoid confusion and will help with future updates. The package.json dependencies of your Frontity project should look like something like this: "dependencies": { "@frontity/core": "^1.2.1", "@frontity/html2react": "^1.1.11", "@frontity/mars-theme": "^1.2.0", // <- your theme is on node_modules "@frontity/tiny-router": "^1.0.14", "@frontity/wp-source": "^1.3.1", "frontity": "^1.3.1", "my-awesome-package": "./packages/my-awesome-package" } The package.json of your package is the one inside your package’s folder ( /packages/my-awesome-package). It should have the name and information you want to be used when publishing the package to npm. How does Contact Form 7 work - After you install the CF7 WP plugin, there’s a new custom post type in your WP dashboard called “Contact Form”. - You can create any number of contact forms in your dashboard and you receive a shortcode for each one. - When you include the shortcode in a post or page, a <form>appears in the content of that post/page. - Besides that, a new JS file and a new CSS file is added to your theme. - When a user sends a form, the JS code captures the submission and sends the info to a new REST API endpoint exposed by CF7: - The id of the URL is the form (custom post type) and the info sent is what the <form>has. For example: _wpcf7: 1922 _wpcf7_version: 5.1.4 _wpcf7_locale: en_US _wpcf7_unit_tag: wpcf7-f1922-p1925-o1 _wpcf7_container_post: 1925 name: Name email: [email protected] subject: The subject... message: The message... - I believe this is just form-data, taken directly from the form. Content-type is multipart/form-data. Maybe it works with application/jsonas well. - The fields starting with _wpcf7are hidden fields already present on the <form>of the content. - As far as I know, there’s no need for authentication when sending the form. - The response of the REST API endpoint contains both errors or the thank-you message, for example: Validation error: into: "#wpcf7-f1922-p1925-o1" invalidFields: 0: idref: null into: "span.wpcf7-form-control-wrap.email" message: "The e-mail address entered is invalid." message: "One or more fields have an error. Please check and try again." status: "validation_failed" Successfully submitted: into: "#wpcf7-f1922-p1925-o1" message: "Thank you for your message. It has been sent." status: "mail_sent" - All these validation/success messages can be edited in the WP Dashboard. Implenentation proposal This is my initial proposal for a “zero-configuration” package: Use html2react to process any CF7 <form> // packages/contact-form-7/src/index.js export default { libraries: { html2react: { processors: [contactForm7] } } } By the way, our recommendation was to add processors in init, but we have realized you can do it directly in libraries, so we think it’s cleaner. This was the equivalent: // packages/contact-form-7/src/index.js export default { actions: { cf7: { init: ({ libraries }) => { libraries.html2react.processors.push(cf7Form); } } } } The processor can look something like this: const cf7Form = { name: "cf7Form", test: node => node.component === "form" && node.props.className === "wpcf7-form", process: node => { node.component = Form; return node; } }; Now, we have the whole form in React, with our <Form> component. Use html2react to capture <inputs> The same way we captured <form>, we can capture all the inputs: const cf7Inputs = { name: "cf7Inputs", test: node => node.component === "input" && /wpcf7-form-control/.test(node.props.className), process: node => { node.component = Input; return node; } }; Inputs can use internal state ( useState) to retain and modify their value although it may be a better idea to store them in the state, using an object in state.cf7.forms[id].values. Add that processor to the array: // packages/contact-form-7/src/index.js export default { ... libraries: { html2react: { processors: [cf7Form, cf7Inputs] } } } Use the <Form> component to submit the form - Capture the onSubmitand trigger an action with the event.targetdata. - That action (for example, actions.cf7.sendForm) will fetch the endpoint with the relevant data. - After receiving the data, it can use state.cf7.forms[id]to add relevant information to the state, for example, a flag to control if there’s any error or if it succeed, and the messages received from the REST API. For example: export default { ... actions: { cf7: { sendForm: ({ state }) => async data => { const res = await fetch(`{data}/feedback`); const body = await res.json(); // Populate state with the errors, or thank-you message... state.cf7.forms[data.id].message = body.message; if (body.mail_sent) { state.cf7.forms[data.id].status = "sent"; state.cf7.forms[data.id].message = body.message; } else if (body.validation_failed) { state.cf7.forms[data.id].status = "failed"; // Populate errors from the response so React components // can see them and re-render appropriately... state.cf7.forms[data.id].validationErrors = { email: "The e-mail address entered is invalid." }; } } } } - If the form was successfully submitted, it can show the thank-you message. How to get the ID of the form in the <Form> component The id is inside a hidden field with this structure. It’s the first children of the first children: <form action="..." method="post" class="wpcf7-form"> <div style="display: none;"> <input type="hidden" name="_wpcf7" value="1922"> In the html2react processor you can easily iterate over the children, than extract the id and pass it as a prop the the <Form> component: const contactForm7 = { name: "contactForm7", test: node => node.component === "form" && node.props.class === "wpcf7-form", process: node => { // Get the id from the hidden field. // It's the first children of the first children: const id = node.children[0].children[0].props.value; // Pass the id as a new prop to <Form>. node.props.id = id; // Change from <form> to our React component <Form>. node.component = Form; return node; } }; Then, you can use that id to access state and show either the success message, the form ( children) or the form with the failure message: const Form = ({ state, id, children }) => { const form = state.cf7.forms[id]; return form.status === "sent" ? ( <div>Success: {state.cf7.forms[data.id].message}</div> ) : ( <> {children} {form.status === "failed" && <div>Something wrong: {state.cf7.forms[data.id].message}</div>} </> } How to pass down the ID of the form to the children <Input> components You can use React context for that. export const FormIdContext = React.createContext(null); const Form = ({ state, id, children }) => { const form = state.cf7.forms[id]; return ( <FormIdContext.Provider value={id}> ... <FormIdContext.Provider /> ); } Then use the context in the <Input> components: import { FormIdContext } from "./Form"; const Input = ({ state, actions, name }) => { const id = React.useContext(FormIdContext); const onChange = event => { actions.cf7.changeInputValue(id, name, event.target.value); }; return ( <input onChange={onChange} ... /> ); } Show invalidation errors You can create styled components with the CSS styles of CF7 and use the more html2react processors to insert them. After receiving in the <Form>: invalidFields: 0: idref: null into: "span.wpcf7-form-control-wrap.email" message: "The e-mail address entered is invalid." You can add that message to the state, using the id of the form: state.cf7.forms[id].validationErrors = { email: "The e-mail address entered is invalid." } Then add yet another processor for wpcf7-form-control-wrap with access to the id (via context) and name (it’s in its class) and turn it red if it finds an error. const cf7Spans = { name: "cf7Spans", test: node => node.component === "span" && /wpcf7-form-control-wrap/.test(node.props.className), process: node => { node.component = Span; return node; } }; const Span = ({ className, state, children }) => { // Get the name from the class const name = className.split(" ")[1]; // Get id from context const id = useContext(FormIdContext); // Get error from the state. const error = state.cf7.forms[id].validationErrors[name]; return ( <span css={css`background-color: ${error ? red : white};`}> {children} </span> ); }; Please bear in mind that this is just an implementation proposal, not the final one. I haven’t tested any code. If you find problems or something is not clear enough, please reply to this topic and we can discuss it further and find the best solution together. Installation Any form created with CF7 and included with a shortcode in any post/page should work out of the box by simply installing the package. The only steps involved would be: - Install the package from npm: npm install contact-form-7 - Add it to your frontity.settings.jsfile: export default { ... packages: [ "contact-form-7", // <-- add it to your packages "your-theme", "@frontity/tiny-router", "@frontity/html2react", ... ] }; That’s it. As long as the theme is using html2react, it should work. Additional features: reCaptcha CF7 has support for reCapatcha v3. That’s something worth exploring in the Frontity package as well, although maybe it will be better managed by a separate Frontity package that can add reCaptcha to any page of the site (not only to CF7). Additional features: expose actions to send forms programatically The methods used by this package can be exposed in actions so people who want to send a form programatically from other parts of their theme can do so by simply doing: actions.cf7.sendForm({ id: X, fields: { ... } // name, surname... whatever form with id X needs. }); Additional note: fetch vs axios It’s worth noting here that for this type of package that you should not use an external fetch library, like axios or superagent. Axios is 12.6kB and SuperAgent is 20.4kB. If each Frontity package adds its own favourite fetch library people will end up with many fetch libraries and many extra Kbs in their apps. You should stick to fetch, which is included in Frontity by default and it weights 0Kbs, because it’s a native library. Do not use window.fetch because that’s won’t work on Node. Import it from the frontity package: import { fetch } from "frontity"; const getFromSomeAPI = async (resource) => { const response = await fetch("" + resource); const body = await response.json(); return body; }; It’s explained in more detail here:
https://community.frontity.org/t/how-to-create-a-frontity-package-for-contact-form-7/623
CC-MAIN-2019-43
refinedweb
1,961
58.48
Basics¶ This chapter introduces some core concepts of mypy, including function annotations, the typing module and library stubs. Read it carefully, as the rest of documentation may not make much sense otherwise. Function signatures¶ A function without a type annotation is considered dynamically typed: def greeting(name): return 'Hello, {}'.format(name) You can declare the signature of a function using the Python 3 annotation syntax (Python 2 is discussed later in Type checking Python 2 code). This makes the the function statically typed, and that causes type checker report type errors within the function. Here’s a version of the above function that is statically typed and will be type checked: def greeting(name: str) -> str: return 'Hello, {}'.format(name) If a function does not explicitly return a value we give the return type as None. Using a None result in a statically typed context results in a type check error: def p() -> None: print('hello') a = p() # Type check error: p has None return value Arguments with default values can be annotated as follows: def greeting(name: str, prefix: str = 'Mr.') -> str: return 'Hello, {} {}'.format(name, prefix) Mixing dynamic and static typing¶ Mixing dynamic and static typing within a single file is often useful. For example, if you are migrating existing Python code to static typing, it may be easiest to do this incrementally, such as by migrating a few functions at a time. Also, when prototyping a new feature, you may decide to first implement the relevant code using dynamic typing and only add type signatures later, when the code is more stable. def f(): 1 + 'x' # No static type error (dynamically typed) def g() -> None: 1 + 'x' # Type check error (statically typed) Note The earlier stages of mypy, known as the semantic analysis, may report errors even for dynamically typed functions. However, you should not rely on this, as this may change in the future. The typing module¶ The typing module contains many definitions that are useful in statically typed code. You typically use from ... import to import them (we’ll explain Iterable later in this document): from typing import Iterable def greet_all(names: Iterable[str]) -> None: for name in names: print('Hello, {}'.format(name)) For brevity, we often omit the typing import in code examples, but you should always include it in modules that contain statically typed code. The presence or absence of the typing module does not affect whether your code is type checked; it is only required when you use one or more special features it defines. Type checking programs¶ You can type check a program by using the mypy tool, which is basically a linter – it checks your program for errors without actually running it: $ mypy program.py All errors reported by mypy are essentially warnings that you are free to ignore, if you so wish. The next chapter explains how to download and install mypy: Getting started. More command line options are documented in The mypy command line. Note Depending on how mypy is configured, you may have to explicitly use the Python 3 interpreter to run mypy. The mypy tool is an ordinary mypy (and so also Python) program. For example: $ python3 -m mypy program.py Library stubs and the Typeshed repo¶ In order to type check code that uses library modules such as those included in the Python standard library, you need to have library stubs. A library stub defines a skeleton of the public interface of the library, including classes, variables and functions and their types, but dummy function bodies. For example, consider this code: x = chr(4) Without a library stub, the type checker would have no way of inferring the type of x and checking that the argument to chr has a valid type. Mypy incorporates the typeshed project, which contains library stubs for the Python builtins and the standard library. The stub for the builtins contains a definition like this for chr: def chr(code: int) -> str: ... In stub files we don’t care about the function bodies, so we use an ellipsis instead. That ... is three literal dots! Mypy complains if it can’t find a stub (or a real module) for a library module that you import. You can create a stub easily; here is an overview: Write a stub file for the library and store it as a .pyifile in the same directory as the library module. Alternatively, put your stubs ( .pyifiles) in a directory reserved for stubs (e.g., myproject/stubs). In this case you have to set the environment variable MYPYPATHto refer to the directory. For example: $ export MYPYPATH=~/work/myproject/stubs Use the normal Python file name conventions for modules, e.g. csv.pyi for module csv. Use a subdirectory with __init__.pyi for packages. If a directory contains both a .py and a .pyi file for the same module, the .pyi file takes precedence. This way you can easily add annotations for a module even if you don’t want to modify the source code. This can be useful, for example, if you use 3rd party open source libraries in your program (and there are no stubs in typeshed yet). That’s it! Now you can access the module in mypy programs and type check code that uses the library. If you write a stub for a library module, consider making it available for other programmers that use mypy by contributing it back to the typeshed repo. There is more information about creating stubs in the mypy wiki. The following sections explain the kinds of type annotations you can use in your programs and stub files. Note You may be tempted to point MYPYPATH to the standard library or to the site-packages directory where your 3rd party packages are installed. This is almost always a bad idea – you will likely get tons of error messages about code you didn’t write and that mypy can’t analyze all that well yet, and in the worst case scenario mypy may crash due to some construct in a 3rd party package that it didn’t expect.
http://mypy.readthedocs.io/en/latest/basics.html
CC-MAIN-2017-13
refinedweb
1,016
61.77
At Ignite 2019, WinUI 3.0 Alpha was released. The details are as following links: From the link, you can get a vsix file to project templates for WinUI 3.0. After installing it, you can find many templates for WinUI. There is a Microsoft.WinUI version 3.0.0-alpha.19101.0 package reference. Under Microsoft.WinUI.Controls namespace, there are already a lot of controls, we can use them.(But there are a few limitation in alpha version.) It works almost all fine.(Alpha version has a few limitation, for ex: transparency, etc...) One of great features of WinUI 3.0 I think is to support XAML Islands. XAML Islands is working currently on Windows 10 1903. However, WinUI 3.0 is going to support Windows 10 Creators Update or above. After end of Windows 7 support, we can use all UWP controls on our Win32 apps(exclude running on Windows 8.1). I'm really looking forward to be released it. I just looked a project template for Blank App(WinUI UWP) in this article. It was already worked fine. I think it is great milestone for Windows Developers, if you feel it is interesting, then please check update info of Windows UI Library from the repository. Happy developing Windows app! You must be a registered user to add a comment. If you've already registered, sign in. Otherwise, register and sign in.
https://techcommunity.microsoft.com/t5/windows-dev-appconsult/let-s-look-winui-3-0-alpha/ba-p/982830
CC-MAIN-2021-49
refinedweb
236
71.1
Every few months, a nearby user group does a code kata du jour. Everyone pairs up, and uses common practices like TDD and red-green-refactor. A couple weeks ago, I was partnered with someone who hadn’t done this before, so I had the opportunity to talk him through it. Assuming you’re in the same boat (maybe you’re going to a user group where you’ll be doing a kata for the first time), you might be wondering what to expect when you get there. Let’s define some terms first So what exactly is unit testing, TDD, red-green-whatever, and the other nonsense I just mentioned? (Everyone’s experience-level is different, so feel free to skip anything you’re already familiar with.) What is a Unit Test? When you create a program, you should be thinking about how you’ll know it’s right when it’s done. How will you make sure it’s doing what it’s supposed to do, *before *it goes to production? You could test it manually. That’s what we usually do with coursework in school… run it with one input and see what happens. Run it with other inputs and see what happens. Walk through the code trying to figure out what’s going on. But as your program grows from a few methods to thousands of methods across hundreds of classes, that quickly becomes impossible. So we put the computer to work, testing itself. These automated tests come in all kinds of flavors – from testing one piece, to testing the entire system end to end, testing just the UI, load testing, etc. The most basic kind of automated test is a unit test. You test a single piece of functionality in a single method. You completely control the environment around that method, so that your test doesn’t inadvertently hit any external resources like the database, a network drive, or even the local disk. If it’s possible for the database to go down, or the network to become inaccessible, or to accidentally try writing to an unauthorized location on disk, your test could fail for a reason outside of your control, and that’s no good. There’s another concept called programming to the interface that helps avoid these pitfalls, but that’s too much for this post. Unit tests and isolation go hand-in-hand. You test one method with specific, known inputs, and check that the output is exactly what you thought it would be. Let’s run through an example… You’ve been tasked with making a calculator, and the first feature is a “Divide” capability. The method takes two numbers, divides them, and returns the quotient. Ground-breaking, right? You might start with a method like the following (all my examples are in C#, but hopefully it’s obvious what this is doing even if you’re not familiar with C#), then run it and manually test some numbers. Whatever you throw at it seems to work okay, so you call it a day. Well, thanks for reading, have a good one! public class Calculator { public decimal Divide(decimal dividend, decimal divisor) { return dividend / divisor; } } …… oh wait, skip 6 months ahead, you’re on to other projects, and the users have decided they need to multiply too (so demanding). Also, in the meantime, adding and subtracting has been implemented by someone else, and now the code is intermittently spewing unexpected results and blowing up at runtime. Time to manually run numbers through again. Better yet, manually run numbers through all the methods, every time someone builds it! (That’s ridiculous. Don’t do it manually.) The point is, the “Divide” method is now failing and no one knows exactly why, or even when it might have started. Your users like to add and subtract, but they rarely divide. All your fine work, unappreciated. :p That’s a silly example, but on a larger scale, we run into it all the time when we’re developing large applications that dozens of programmers have contributed to over a span of years. It’s time to start** automating your tests**. You could add a second class full of methods that instantiate the Calculator, run some numbers through it, and return a value indicating whether or not the expected answer and actual answer match up. The following two “test” methods return true if the answers match; false otherwise. public class CalculatorTests { Calculator c = new Calculator(); public bool Is6DividedBy3EqualTo2() { var quotient = c.Divide(6, 3); if (quotient == 2) return true; else return false; } public bool Is9DividedBy2EqualTo4Point5() { var quotient = c.Divide(9, 2); if (quotient == 4.5m) return true; else return false; } } You could even move that class into its own project, and use the magic of reflection to run all the tests in your test class, check the return values, and display a list of tests that failed. Here’s a console application that does just that, but it may be hard to follow if you’re unfamiliar with C#, and certainly isn’t the way I’d recommended running the tests, so don’t spend too much time on it. :) public class Program { public static void Main() { var cTests = new CalculatorTests(); var failedTests = new List<string>(); // using reflection, run every test method and record the names of those methods that fail foreach (var m in typeof (CalculatorTests).GetMethods() .Where(m => m.DeclaringType != typeof (object))) { if (Convert.ToBoolean(m.Invoke(cTests, null)) != true) failedTests.Add(m.Name); } // display all the failed tests, or a message that everything passed if (failedTests.Any()) Console.WriteLine("Failed Tests: \r\n\r\n{0}", string.Join("\r\n", failedTests)); else Console.WriteLine("All tests passed!"); Console.ReadLine(); } } In the second screen shot, I’ve changed my inputs to produce invalid results, so the tests fail. So, this is better than manual, but still ridiculous. Instead, you’d use mature testing tools like NUnit and xUnit (.NET languages like C#, F#, VB.NET), JUnit (Java), Minitest (Ruby), etc. There are frameworks for different kinds of tests in nearly every language, and they make running the tests easy (individually or all at once), and tell you loads more about what specifically may have gone wrong. I’ll stop there for now. Hopefully that gives you at least a starting idea of what a unit test is. Just remember… small, isolated, tests one thing at a time, tightly controlled. (Find the code on .NET Fiddle) What is TDD and Red-Green-Refactor? In the last section, we wrote the “Divide” method first, and then wrote the tests to validate it much later. This is common in legacy code, that was written without any tests originally. TDD, or test-driven development, flips this around. We write the tests *before *writing the code. In essence, the tests should drive the program, stating what the code should do. Let’s define our method again, but just enough to make the code compile. It doesn’t take the inputs into account at all, and certainly isn’t implemented correctly. public decimal Divide(decimal dividend, decimal divisor) { return 0; } Now we write a test that states exactly what we expect this code to do. I’m going to switch over to NUnit syntax now, which should still be easy enough to follow, but is more likely to match what you might see when you’re doing your own testing. NUnit is available in Visual Studio via NuGet. Of course, being programmers, we obsess over every detail, including how we name our tests. There are different opinions, and you can view several here and here, but I’ll follow a naming convention that specifies what we’re testing, the expected result, and when that result should occur (aka MethodName_ExpectedBehavior_StateUnderTest in the second article linked above). [TestFixture] public class CalculatorTests { Calculator c; [SetUp] public void Setup() { c = new Calculator(); } [Test] public void Divide_Returns2_WhenDividing6By3() { var quotient = c.Divide(6, 3); Assert.IsTrue(quotient == 2); } [Test] public void Divide_Returns4_5_WhenDividing9By2() { var quotient = c.Divide(9, 2); Assert.IsTrue(quotient == 4.5m); } } Feel free to leave a comment below if you want clarification on anything so far. A couple quick notes regarding the above code… The method marked with the “SetUp” attribute runs before every single test. By creating a new instance of Calculator before each test, we isolate our tests from one another (remember, isolation is good – we don’t want one test modifying some values in the lone instance of Calculator, and then the next test failing due to those changed values). Also, the methods aren’t returning a value anymore. The “Assert” class and its methods capture the result of the test and report it to us. Most testing libraries will have similar methods built-in. The above can actually be further shortened in NUnit, using the “TestCase” attribute to combine similar tests. It’s not pertinent to a discussion of TDD, but I’ll include it here in case you’re interested. The test method has been updated to accept parameters, which we pass in when we run the tests. [TestCase(6, 3, 2, Description = "6 / 3 = 2")] [TestCase(9, 2, 4.5, Description = "9 / 2 = 4.5")] public void Divide_ReturnsExpectedQuotient(decimal dividend, decimal divisor, decimal expectedQuotient) { var actualQuotient = c.Divide(dividend, divisor); Assert.AreEqual(expectedQuotient, actualQuotient); } The term “Red-Green-Refactor” is closely tied to TDD. When we first run our unit tests, the tests are going to fail. The “Divide” method is returning 0 in all cases, so the initial state of our tests is red. Notice how NUnit tells what the expected and actual values were, and provides a stack trace and some other useful info. We can now fix the original “Divide” method, changing it to return dividend / divisor again; now the tests pass (and are green): Alright, now a new requirement comes in from the users. If the quotient is negative, make it positive before returning it. It’s weird, but luckily you’re the type who goes with the flow. Let’s write another test to indicate the expected behavior (-10 / 5 should return 2, not -2) and watch the first two test cases fail: [TestCase(-10, 5, 2, Description = "-10 / 5 = 2")] [TestCase( 10, -5, 2, Description = "10 / -5 = 2")] [TestCase(-10, -5, 2, Description = "10 / 5 = 2")] public void Divide_ReturnsPositiveQuotient_WhenInput(decimal dividend, decimal divisor, decimal expectedQuotient) { var actualQuotient = c.Divide(dividend, divisor); Assert.GreaterOrEqual(actualQuotient, 0); } Now we fix the code to make the tests pass (green) again. public decimal Divide(decimal dividend, decimal divisor) { // make negative dividends positive if (dividend < 0) dividend = -dividend; // make negative divisors positive if (divisor < 0) divisor = -divisor; return dividend / divisor; } The refactoring part can come in at any point where our tests are passing. When we’re confident that our code is working as it should (because our tests pass), we’re free to refactor the code as we see fit. Replace the above code with the .NET-provided Math.Abs function and run the tests again. They pass, so the changes didn’t break anything. public decimal Divide(decimal dividend, decimal divisor) { return Math.Abs(dividend / divisor); } Things continue this way, with you writing tests to state what the program should do, they’ll most likely fail, and then you fix up the code to make the tests pass. I’ll go through one more. Now someone comes along and says, “Wow, looks what happens if I divide by 0!” And some kittens get swallowed into a swirling vortex. Cute ones. Really unfortunate. The users decide they don’t want to throw an exception.. they want to return 0 when the divisor is 0, no matter what the dividend is. We need another test. [TestCase(5, Description = "5 / 0 = 0")] [TestCase(0, Description = "0 / 0 = 0")] [TestCase(-5, Description = "-5 / 0 = 0")] [Test] public void Divide_ReturnsZero_WhenDivisorIsZero(decimal input) { var actualQuotient = c.Divide(input, 0); Assert.AreEqual(0, actualQuotient); } Check out the “message” above. The new tests failed (I collapsed the passing tests), because the original method threw a DivideByZeroException. Time to go green again. We *could *catch that particular exception, but (in the .NET world at least) exceptions are expensive and it’s better to prevent one if possible. After all, if you can anticipate a condition and code around it, it’s not all that exceptional. public decimal Divide(decimal dividend, decimal divisor) { if (divisor == 0) return 0; return Math.Abs(dividend / divisor); } Run the tests again… good to go! Now if you wanted, you could go back and change the code, maybe catch the DivideByZeroException instead, and the tests will still pass, letting you know you haven’t broken anything. Clear as mud? If you need clarification, leave a comment below! (Find the code in this section on .NET Fiddle.) What is Pair Programming? Pair programming is pretty much what it sounds like. Two heads are better than one. If you’ve ever asked a fellow programmer for help and then you both sat together figuring a problem out, you’ve pair programmed. There are two ways people see this: - Some people see this as a waste of resources – two salaries with half the output - Other people see this as collaboration – instead of two people getting stuck on individual tasks or getting distracted by the latest cat videos, they can bounce ideas off each other and keep driving ahead Some advocates take it to the extreme, and do nothing *but *pair programming, all day every day. I’ve never tried it, so I can’t say much about it. Except that I hope no one gets paired with a Wally. How does this all fit with Code Katas? When you pair up during a code kata, you learn from each other, and discuss problems as they arise. You’ll basically “ping-pong” back and forth, following a pattern like the following: - You write a unit test that fails (red), then pass control of the keyboard to your partner. - She modifies the code to make your test pass (green), then writes a unit test of her own (to indicate what the program should do next), which also fails (red again). She passes control back to you. - You write more code to make her test pass (green), then write the next (failing) test (red!). And so on… This continues, until time is up. That’s right, there’s usually a time limit. It could be a half-hour, or maybe the length of the user group meeting. You see, you aren’t really aiming to *complete *the kata, though with some shorter ones you certainly will. You’re more focused on following a disciplined process: - What’s the next thing our program should be able to do? - What kind of test can we write, to reflect the next requirement to test for? - Test fails. How can we modify the code, to make the test pass? (the requirement’s been met) - Test passes. How can we refactor the code to make it more efficient? (make sure the tests still pass) - Repeat Can we run through a sample kata? Sure. Glad you asked. A popular (and short) one is the Fizz Buzz kata: (there are plenty more at cyber-dojo.org) list out the requirements, if the kata doesn’t do so already: - Print the numbers from 1 to 100. - If the number is a multiple of 3, print “Fizz” instead. - If the number is a multiple of 5, print “Buzz” instead. - If the number is a multiple of 3 *and *5, print “FizzBuzz” instead. Now we have 4 distinct steps, and we can begin writing tests for these. Step 1: Return the same number We need a method that accepts a number, and (for now) spits it back out. Let’s start with a basic method that accepts a number and outputs an empty string, so we can compile. public class FizzBuzz { public string FizzyOutput(int input) { return ""; } } Let’s just test 1 and 2 to start. Of course it fails, because we always return an empty string: [TestFixture] public class FizzBuzzTests { private FizzBuzz fizzBuzz; [SetUp] public void Setup() { fizzBuzz = new FizzBuzz(); } [Test] public void FizzyOutput_OutputsOne_WhenInputIsOne() { var output = fizzBuzz.FizzyOutput(1); Assert.AreEqual("1", output); } [Test] public void FizzyOutput_OutputsTwo_WhenInputIsTwo() { var output = fizzBuzz.FizzyOutput(2); Assert.AreEqual("2", output); } } Now you’d pass the keyboard to your pair to fix the code and make the test pass. public string FizzyOutput(int input) { return input.ToString(); } Step 2: Return “Fizz” for multiples of 3 The second requirement is to print “Fizz” for multiples of 3. Your partner could create multiple tests, or with a tool like NUnit, use the TestCase attribute. Run the new test and watch it fail. After all, we’re still returning the same number no matter what. [TestCase(3)] [TestCase(6)] [TestCase(9)] [Test] public void FizzyOutput_OutputsFizz_WhenInputIsMultipleOfThree(int input) { var output = fizzBuzz.FizzyOutput(input); Assert.AreEqual("Fizz", output); } To get this to pass, you could do something silly, like return “Fizz” for exactly the specified inputs. Or take a more practical approach that handles any multiple of 3. Always run the tests again when you’re done, to make sure they pass. public string FizzyOutput(int input) { if (input % 3 == 0) return "Fizz"; return input.ToString(); } Step 3: Return “Buzz” for multiples of 5 Now you write the next test, indicating that multiples of 5 return “Buzz”, and pass the keyboard again. [TestCase(5)] [TestCase(10)] public void FizzyOutput_OutputsBuzz_WhenInputIsMultipleOfFive(int input) { var output = fizzBuzz.FizzyOutput(input); Assert.AreEqual("Buzz", output); } The test fails, and your pair fixes it in a very similar manner to the previous requirement. Step 4: Return “FizzBuzz” for multiples of 15 One last requirement, and your partner writes a test for it.. multiples of 3 *and *5: [TestCase(15)] [TestCase(30)] [TestCase(45)] public void FizzyOutput_OutputsFizzBuzz_WhenInputIsMultipleOfThreeAndFive(int input) { var output = fizzBuzz.FizzyOutput(input); Assert.AreEqual("FizzBuzz", output); } Your turn to finish up the kata, and you do it by checking for multiples of 15: public string FizzyOutput(int input) { if (input % 15 == 0) return "FizzBuzz"; if (input % 3 == 0) return "Fizz"; if (input % 5 == 0) return "Buzz"; return input.ToString(); } Run the tests one more time and verify they all pass. Here’s the complete set of tests we’re running. Although I think the above tests are sufficient, you could test for every value just to be sure: [TestCase(1, "1")] [TestCase(2, "2")] [TestCase(3, "Fizz")] [TestCase(4, "4")] [TestCase(5, "Buzz")] // .... [TestCase(14, "14")] [TestCase(15, "FizzBuzz")] [TestCase(16, "16")] // ... [TestCase(95, "Buzz")] [TestCase(96, "Fizz")] [TestCase(97, "97")] [TestCase(98, "98")] [TestCase(99, "Fizz")] [TestCase(100, "Buzz")] public void FizzyOutput_OutputsExpectedValues(int input, string expectedOutput) { var actualOutput = fizzBuzz.FizzyOutput(input); Assert.AreEqual(expectedOutput, actualOutput); } You can find the code for this FizzBuzz example on .NET Fiddle. Final Thoughts At this point, we could refactor again, since the tests are all passing. I don’t think there’s much left to do though. Do you see anything I missed? Typos? I used C# because I’m most familiar with it, but you could pair up with someone who knows a different language. I’ve paired up a couple times to do katas in Ruby – it never hurts to learn something new (and meet someone new!), and to see how other programmers/languages approach testing. There’s no pressure of business requirements or deadlines… just learning from others and sharing knowledge. And maybe having a good laugh when the time is up and you realize you’ve come up with a hideous solution (not that *that *ever happens). Thanks for reading this far… I hope you learned something new or interesting. Thoughts? Do you have questions or something to add? Let me know in the comments below!
https://grantwinney.com/an-intro-to-code-katas-tdd-and-red-green-refactor/
CC-MAIN-2019-18
refinedweb
3,298
64.2
Free for PREMIUM members Submit Learn how to a build a cloud-first strategyRegister Now We value your feedback. Take our survey and automatically be enter to win anyone of the following: Yeti Cooler, Amazon eGift Card, and Movie eGift Card! Simon It's located in the System.Collections namespace. Alternatively, there is the ArrayList from the same namespace. but it would depend exactly on Why you want a replacement ?? What extra features you want ?? Dim c As New Collection c.Add("value", "key") I agree with arif_eqbal...if you need a replacement, then what are your requirements. None of the collection classes that come with .Net work quite like the Collection class did: The Collection class allows you to retrieve by Index or Key. The ArrayList only allows access by Index. The HashTable and SortedList only allow access by Key. ~IM Even I was tempted earlier to tell him that the Collection Class is still available and pretty much the same But when I re-read his question I thought he knows its there but does not want to use them. That's why I wanted to know WHY, What extra he wants
https://www.experts-exchange.com/questions/21211025/collection-class-of-vb6-in-net.html
CC-MAIN-2017-51
refinedweb
194
75.81
29 April 2011 10:18 [Source: ICIS news] SINGAPORE (ICIS)--German specialty chemicals maker Bayer said on Friday it plans to invest €15bn ($22bn) through 2013 primarily for research and development (R&D) to promote technological innovation. Bayer chairman Marijn Dekkers made the announcement at the company’s annual stockholders' meeting in ?xml:namespace> Two-thirds of the investment will be used for R&D expenses, while the remaining balance will go towards capital expenditures, said Dekkers. “The company plans to keep its research and development expenditures in 2011 level with the record €3.1bn spent in 2010,” he said. Notwithstanding a 4.3% year-on-year fall in net profit last year to €1.3bn, Bayer has proposed a higher dividend of €1.50 for the year from €1.40 in 2009, the company said. The figure is equivalent to a total dividend payment of €1.24bn, it added. In the first quarter of 2011, Bayer posted an 8.4% year-on-year increase in net profit to €684m, with sales rising 13.2% to €9.4
http://www.icis.com/Articles/2011/04/29/9456095/germanys-bayer-plans-15bn-investment-through-2013.html
CC-MAIN-2014-35
refinedweb
178
60.82
What’s Coming in Ext JS 4.1 Update (12/23): Ext JS 4.1 Beta 1 is now available The primary focus of the upcoming Ext JS 4.1 release is performance. We have been hard at work to improve across the board, but we have concentrated on two main areas: rendering and layout. While the majority of our time has been dedicated to this effort, there are many other exciting developments to share. Chief among these are some major improvements to grid and border layout and a preview of the new Neptune theme. Learn the latest about Ext JS and HTML5/JavaScript for three intensive days with 60+ sessions, 3+ parties and more at SenchaCon 2013. Register today! Performance The necessary precursor to improving performance is measuring it. To successfully and permanently increase performance, however, measurement has to become part of the regular build and test process. This was the first piece we put in place for Ext JS 4.1. Going beyond the use of profilers like dynaTrace, we created some simple measurement tools to use on a continuous basis. We use these tools to track key metrics on every build. The performance metrics we track correspond to the page life cycle: Load, Initialize, Render and Layout. Load An Ext JS application comes to life when its “onReady” function is called. Prior to that time, many things have to take place. When we say “page load”, we can mean many different things, but for the sake of simplicity, here we define page load as the time period that starts at the execution of the first line of “ext-all.js” and ends just before any onReady functions are called. This primarily includes time spent executing all of the Ext.define statements that populate the Ext namespace, and also the detection of when the page’s DOM is ready. Initialize When the onReady functions are called, the application takes over. Applications perform whatever custom initialization they may need, but at some point they will create components and containers to display. In some applications, there will be literally hundreds of components created. Each of these components and containers has to be constructed, initialized and wired together. In Ext JS 4, many more things are components compared with previous releases. Consider the header of a panel. The Header component is actually a container that contains a basic component for the title and (optionally) a set of Tool components: all managed by an hbox layout. This means you can add components to the panel’s header quite easily. It also means that there are more components and containers created in Ext JS 4 given the same panel configuration. Looking at the Themes example in Ext JS 3, there were 148 components in 50 containers. That same configuration in Ext JS 4 generates 271 components in 97 containers. This makes optimization of this area essential. Render The next step is the conversion of these initialized components and containers into HTML. In previous versions of Ext JS, rendering was a mixture of calls to the createElement API and setting innerHTML. In Ext JS 4.0, each component’s primary element was created using createElement and its internal structure was produced using an XTemplate instance referred to as the “renderTpl”. If a component happened to be a container such as a Panel, additional elements were created using createElement and the child components repeated the process as they rendered into the panel’s body element. At each step, special component methods were called and events were fired to allow derived classes and applications to extend this process. In version Ext JS 4.1 we have optimized component rendering, so components are rendered in bulk. Instead of alternating calls to createElement and innerHTML, bulk rendering creates the entire component tree as HTML and then adds it to the DOM with a single write to innerHTML. To support this change we added a new method to components called “beforeRender”. There has always been the “beforerender” event, but derived classes typically had to choose between overriding the “render” or “onRender” method if they needed to do any work just before their primary element was created. They could do what they needed and then call the base version of the method which would then create the element. The general flow of rendering in 4.0 vs 4.1 is shown in Figure 1. In both cases, the process starts at a particular component and descends the component tree. Layout Once the DOM has all the necessary elements, the final step is to determine the size and position of any elements that need special handling. Or in other words: the final step is to lay out the components. This process is the most complex and time consuming. It represented just over half the total time in loading the Themes example in 4.0.7. The challenge with layouts comes from how browsers handle requests for style information (such as margins, width and height), especially if these are being changed along the way. The first rule of performance is that CSS calculations are expensive. Because of this, browsers cache these results. When JavaScript comes along and sets a width or height, however, the browser has to invalidate some or all of this cache. How much of the cache this affects is a function of what was changed and the cleverness of the browser’s CSS engine. The next request for style information will typically then trigger a “reflow” to refresh the cache. In general, one could say “write + read = reflow”. Given that reflows are expensive, an obvious way to increase performance is to reduce the number of reflows that occur during a layout. In Ext JS 4.0, an hbox layout, for example, buffered all of its calculations and wrote those results only after it had read all that it needed from each component. If the hbox needed to know the size of a component, it had to measure the component’s element (read), but before it could do that, the layout of that component had to do its work first. In other words, the component’s layout performed some calculations (reads) and then stored the results to the DOM (writes). The hbox then measured the component’s element (read). What started out as a sequence of reads followed by a sequence of writes often became a highly interleaved set of reads and writes, which, of course, resulted in a large number of reflows. To eliminate these reflows the child layouts needed a way – external to the DOM – to report their results to their owner. Layouts in Ext JS 4.1 have been refactored to use a layout context object to share results while avoiding the write/read to the DOM (and its associated reflow). This change, while largely internal, breaks custom layouts. While we believe this is a fairly rare practice, it is something to be aware of when upgrading. From 4.0.7 to 4.1 PR1 All of these optimizations produced some very significant gains. One of the key examples used to benchmark the performance of Ext JS is the Themes example. The performance difference of 4.1 PR1 compared to 4.0.7 in this example is shown in Figure 2, as tested on IE8. Next Steps While 4.1 is clearly a big improvement over 4.0, it is not yet as fast across the board as 3.4. This is not the last word on optimizing performance. In fact, we have many other performance optimizations planned for 4.x that just could not fit in this release. Our goal right now is to stabilize and ship a final release of Ext JS 4.1 as quickly as possible. We will then be hard at work to accelerate getting those additional gains delivered in subsequent releases. Other Goodies As promised, this release is not purely about performance. We demonstrated the new Neptune theme at SenchaCon this year, and we are very pleased that a Neptune preview will be part of this release. Much to our delight, the Calendar example will be returning as well. The list could go on with the many other improvements, but let’s dive in to some of the more exciting changes. Grid By popular demand, we went back and investigated other solutions to the buffered scrolling and “infinite” scrolling mechanisms in Ext JS 4.0. We wanted to see if we could solve our technical problems without resorting to so-called “virtual scrolling”, and we are happy to report that, in fact, we can. In 4.1, grids of (almost) every kind now use native scrolling. This vastly improves the user experience because things like acceleration, momentum and friction all work as well for grid as they do for any other scrolling content. Another welcome improvement is that this also means that scrolling is done by pixels and not whole rows. This is also true for “infinite” grids, even when the rows are variable height. The only situation where virtual scrolling is still used is on the locked half of a locking grid. Since it has no scrollbar, native scrolling is not an option there. Lastly, though not part of grid per se, metadata handling is now supported by Store.. We are pleased to say that all of these limitations have been removed in Ext JS 4.1. XTemplate Internally, Ext JS uses the XTemplate class for many things. It is a critically important part of the framework but was missing one important feature: it could not efficiently append to an array for a subsequent join operation. When we started work on bulk rendering, we decided that both DomHelper and XTemplate needed to collaborate on markup production by pushing their output onto a shared array. We then discovered that the internals of XTemplate could not be surgically modified to support this, which allowed us to reconsider just how this piece needed to work. Some long-standing challenges and issues with XTemplate: - It only supported the most basic control structures: “for” and “if”. - The code generated from the template was somewhere between very hard and impossible to debug. As a result, errors in the template text were very difficult to track down. - The template text was compiled at XTemplate construction time, which was undesirable because many XTemplate instances were never actually used. - Executing the compiled code for a template was not as fast as it could be because it contained many internal function calls and string concatenations. In 4.1, XTemplates are now compiled the first time they are used. This makes construction of an XTemplate nearly free. Further, the compiled code is now a single function that can be stepped into using the debugger, and it looks very much like the original template. With this approach, many things became simple to support. Like “else” and “else if” statements and “switch” statements. Even literal code insertion (similar to JSP or ASP) was now a trivial extension. var tpl = new Ext.XTemplate( '<tpl for="orders">', 'Order {id} is ', '<tpl if="total > 100">', 'large', '<tpl elseif="total > 25">', 'medium', '<tpl elseif="total > 0">', 'small', '<tpl else>', '{% continue; %}', '</tpl>', 'Items:', … '</tpl>'); The “<tpl for>” statement generates a proper “for” loop while the “<tpl if>”, “<tpl elseif>” and “<tpl else>” generate the obvious “if” and “else” blocks. The new “{% x }” syntax is used similar to “{[ x ]}”. The body of both of these is treated as arbitrary code. In the “{[ x ]}” expression, x is an expression that produces a value that is placed into the output. In the “{ x %}” case, “x” is simply inserted into the function. In this case, it will continue the for loop when reached. Overrides In Ext JS, it has long been a common practice to share bug fixes and enhancements in the form of an “override”. In the past, these had to be manually managed as special entities. They operated on existing classes, whereas just about all other code in Ext JS 4.0 uses class names as strings. For example, to derive from Ext.panel.Panel: Ext.define('My.app.Panel', { extend: 'Ext.panel.Panel', method: function () { this.callParent(); } }); But to apply an override to the same Panel class (in Ext JS 4.0), the shape changes completely: Ext.panel.Panel.override({ method: function () { this.callOverridden(); // not possible before 4.x } }); And this code will fail if the Panel class is not loaded already. The inheritance use case would not fail, but would instead inform the loader/builder that Ext.panel.Panel was required. Overrides are now first-class citizens. They can be named and loaded when needed. In fact, writing an override is just about identical to writing a derived class. Ext.define('My.app.PanelPatch', { override: 'Ext.panel.Panel', method: function () { this.callParent(); } }); Not only does this support the classic uses for override in a managed way, but overrides can actually become tools in your designs similar to a mixin. Where a mixin is always part of the class (like the base class), overrides can be bolted on later and only if desired or needed. Conclusion We hope you get a chance to download and try out the new features and improvements as we approach the final release of Ext JS 4.1. We are looking forward to getting feedback from everyone on how this release has benefited you, and where we should look at further improvements. There are 70 responses. Add yours. Zach Gardner3 years ago Performance and native grid scrolling are two of the biggest parts of 4.1 I’m most looking forward to. The Themes Example Performance chart didn’t specify which browser was being used to achieve those results, nor how different browsers take on the same example. Is there any data on that, specifically for IE 8/9? Michael Camden3 years ago @Zach Gardner - According to the article the figure represents data from IE8. I’m also excited for this, performance has been a major issue in 4.0.x releases. Hoping this update will resolve it. Don Griffin Sencha Employee3 years ago The performance tests were made using IE8 running on a low-end laptop (Core i3 2.1 GHz, Windows 7 x64). We internally measure other browsers, and have discussed how to make more of that kind of data available. In relative terms, the improvements are pretty similar on other browsers though. James Crow3 years ago We’re still on ExtJS 3.1.1 and planning our next upgrade however I’m keen to understand if this 4.1 is now faster than 3.1.1? Understand from the above that it is not quite as fast as 3.4 yet but we’re not on that version yet. If this can be provided it would greatly help planning our upgrade. Thanks. Don Griffin Sencha Employee3 years ago @James Crow, The comparison is most likely very similar to 3.4, but we have not measured it specifically. ldonofrio3 years ago The override thing it’s really a must have. Thanks ykey3 years ago I am confused by the Neptune preview line. Will the new theme be in 4.1 or not? Stefaan Vandenbussche3 years ago These are very interesting improvements. However I need to comment on your “low-end” platform. Core i-3 might be low-end for the consumer market, but it’s mid-range in the enterprise market. I would recommend you also test on a Centrino Core Duo + Vista which is still more likely to be encountered in enterprises. Another thing I was anticipating a lot was support for RTL, and I thought it was going to be part of 4.1. Is it? Thanks. Rick3 years ago Do we have a definitive date for the 4.1 general availability release. Patrick3 years ago Is the latest version available for download or has nothing changed since original 4.1 preview release? ahmed3 years ago what is about right to left support? Crysfel3 years ago Finally!! if-else support for the XTemplate, this is awesome guys, good job! looking forward for the final release Marc3 years ago Great, elseif support!!! Ali3 years ago How can an *improved* version of Ext 4.1 still be slower than Ext 3.4 ? Rodolfo De Nadai3 years ago Great work guys! I’m can wait to test! Pancho Gonzalez3 years ago The biggest problem of ExtJS4 is it doesn’t really work on iOS (iPad)... so if you create a website using ExtJS 4 you need to recode it also using Sencha Touch, instead of having a functional (but not touch optimized) site on the iPad by using only ExtJS4. Coding a site two times is not funny, you can share some code between them but it a shame ExtJS4 doesn’t support iPad. Les3 years ago The new way to create patches is interesting, but I think it will be difficult to use in some cases. Let’s say I wanted to patch the Ext.draw.Sprite class, and this class is required by the SVG and VML engines. How would I ensure that these engines require the patched Sprite class? Do I have to patch the engines, so they require the new patches? Dmitry Pashkevich3 years ago I am now really concerned about your phrase on custom layouts that might break in 4.1. Please provide more detail! What do you mean by custom layouts and how can the new things affect it? If I implement my custom layout logic in doComponentLayout() method where I manually arrange the elements, am I safe or no? Also, what does that mean: ” metadata handling is now supported by Store.” ? Daniel3 years ago We’re still using ExtJS 3.4 with our application. Would the ‘Sandbox’ be updated to support ExtJS 4.1? Les3 years ago Don, You mentioned in another post that “It is possible for a class to even “require” its own overrides”. Can you explain in more detail how this could be useful? Gus3 years ago I hope that you, or someone reading these posts, can help me out. I am looking for a component(s) that will allow visitor to our application to create and modify web-pages in a browser. Go Daddy and other hosting services allow their users to easily create and edit web-pages in a browser. Thank you in advance, Gus MrSparks3 years ago Thanks for the update guys. Has createElement been completely removed from the framework in favour of innerHTML now? Fingers crossed the answer is YES tangix3 years ago Yeah, a clarification of the override example would be great! Does the override directive mean that the base class is overridden (Ext.panel.Panel) or do I need to change all controls to instead use “My.app.PanelPatch”? I really hope that My.app.PanelPatch is only a “helper” class, otherwise the override method is pretty much unusable and Sencha has missed how most people (guessing from forums where tons of overrides are available, can’t be a surprise to Sencha). The move from objects to strings for classes has really made development a pain-in-the-b**t as you find yourself constantly chasing typos and the useless errors don’t help isolate where the ficking error is… Don Griffin Sencha Employee3 years ago @tangix, The role of the override name (My.app.PanelPatch) is to enable the override to be used in a “requires” or “uses” statement. That way you can state dependencies on overrides and the loader will load them and the build tools will include them in builds only when needed. Once an override is processed, it modifies (overrides) the target class named in the “override” property (Ext.panel.Panel). The application continues to use the same class (Ext.panel.Panel) but that class is now adjusted in someway. There is no other use for the name of the override itself (My.app.PanelPatch), though there is an object generated in that namespace with that name, so the name must be truly unique. Hopefully that clears things up. svdb3 years ago Hi, Will RTL be supported in 4.1? Thanks. SV Don Griffin Sencha Employee3 years ago @Stefaan Vandenbussche, Thanks for the feedback and specifics. This was the lowest end machine we could by new, but we are considering adding older machines. Good thing for eBay tangix3 years ago @Don, thanks - that makes perfect sense. Phew Don Griffin Sencha Employee3 years ago @Les, The patches would be required at a higher-level, possibly at the app layer. This is not unlike overrides in the past. The overridden code can not reach out for overrides that it does not know about, so the app (html, jsp, asp) had to load the needed override files. If your company has its own value-add layer of components that you use to build apps, sometimes an override could be required at that lower-level. Don Griffin Sencha Employee3 years ago @Dmitry Pashkevich, The inner workings of layouts had to change quite a lot for layouts to coordinate their activities outside the DOM. This mechanism is implemented in the Ext.layout.Context and Ext.layout.ContextItem classes. The Context class has a solid first draft of docs in the PR download. The doComponentLayout method is not used in the same way as it was in 4.0 from the perspective of a component author overriding the method. Users can still call doComponentLayout to update the layout, but how that is accomplished internally is different in 4.1. To layout a component in 4.0, we have the concept of a “component layout” (Ext.layout.component.Component). This object was used by doComponentLayout to perform the actual layout. Obviously, if you were implementing your own component in 4.0, you could skip this and just do the work in doComponentLayout. In 4.1, the component layout object is required. The reason is that all of the active layout objects in a component tree are executed together and iteratively (please see the aforementioned docs) in a basic “solver” process. This includes component layouts and container layouts. For more discussion on this, the forum would be a better place to sort through specifics. Don Griffin Sencha Employee3 years ago @Les, The usefulness of a class requiring overrides of itself would be for large classes. Internally Element comes to mind. We are considering using this technique to better break-up the class into cohesive units (like we do now manually at a file level). Don Griffin Sencha Employee3 years ago @MrSparks, We still have some few places that use createElement. And of course we still support DomHelper methods that wrap createElement for convenience. But I would say we have moved 95% of all our rendering into the core markup production phase (which then goes into innerHTML). Don Griffin Sencha Employee3 years ago @ykey, There is still some internal discussion on the status of Neptune in the 4.1 release, but it boils down to the effort required to support modern browsers vs IE. It is likely (though not yet decided) that “Neptune preview” will mean “only supported on modern browsers”. Don Griffin Sencha Employee3 years ago Sorry, but RTL and ARIA were de-scoped from 4.1 to get the release out sooner. Don Griffin Sencha Employee3 years ago @Daniel, Yes, sandbox mode will be supported in 4.1 Marc3 years ago Where is the download for the 4.1 preview release? I can’t find it anywhere, so I guess it’s invite-only or something? Given that the article concluded with “We hope you get a chance to download and try out the new features and improvements…” I figured that we could download it already? Marc3 years ago Sorry, my bad. The “Ext JS 4.1 Performance Preview” link has a download link. I clicked on the link below it thinking that the first one would just be an article about its performance. The second “update” link only links to 4.0.7, though. Don Griffin Sencha Employee3 years ago @Marc, Thanks for the info on the link. I’ll see what I can do about that. The article with the download is The download for 4.1 PR1 is Don Griffin Sencha Employee3 years ago @Dmitry Pashkevich, Stores in v3 supported a “metachange” event which was lacking in 4.0. See Jay Garcia3 years ago Just full of win. :D Thanks for the hard work guys! zombeerose3 years ago @Don Thanks for the clarification about the overrides config. I knew I was missing something! Excited and anxious for a stable release Thomas3 years ago Any updates on nested stores and models? I’m currently working with trees and find it a PITA that my tree store model is applied to all nodes (which doesn’t fit many cases in reality)...Would love to see an improved model/store approach when those models are nested. Dawesi3 years ago Keep the communication coming… thanks for putting metadata back in this makes me very happy. also noticed the article used camelCase for the events… I’m assuming this was an editor oversight? (I thought all events where lowercase)... @ahmed “right to left”, it is support for languages that are written right to left, as english (and many other languages) is written left to right. They are aligned right amoungst other things. (a overly simplistic answer) Can’t wait to use this baby… Barry3 years ago These speed improvements are great! you really notice the difference in render times on IE at the moment. Looking forward to trying out the better grid scrolling too. Any update on getting a proper Router & Controller actions? I’m using PathJS for this at the moment but would love to see a native implementation that allows action names such as ‘new’ and ‘delete’ or ‘destroy’. Currently naming controller functions like that conflicts with javascript keywords or Ext events. MrSparks3 years ago @Don, With regards to the remaining 5% that’s using create createElement. Can this be moved / do you plan to move this to innerHTML? There’s a great blog detailing the performance penalty in createElement. Google “Benchmark - W3C DOM vs. innerHTML”. Don Griffin3 years ago @Thomas, There is some investigation underway on nested stores and models, but that is not scheduled for 4.1. This is a common use case, so I hope we can make improvements there. Don Griffin3 years ago @Dawesi, The camelCase is the beforeRender method (like afterRender). The event name is beforerender. Unless I am missing the name to which you are referring. Don Griffin3 years ago @MrSparks, The remaining uses of createElement are “harder” to convert, so we haven’t pursued them. In some cases we now render them in the “detached body” (Ext.getDetachedBody). Then we move the elements to the body on first show. The drag proxy stuff is one example of this. Don Griffin3 years ago @Barry, We have been looking at MFC improvements in Touch 2 and these would (eventually) be merged with Ext JS. Not sure when that will happen, but you could track the latest progress on MVC by watching Touch 2. Don Griffin3 years ago @MrSparks, If you are referring to that ,was a big supporting argument for pursuing this approach. Great article! Artur Bodera3 years ago Native scrolling: Does this mean that current 4.0.7 bugs with broken scrollbars (i.e.after panel switching, hiding) are going to be fixed? Multiple panels in a single (border layout) region: 1) how does collapse/expand work? 2) Can I have multiple panels expanded or collapsed in the same region ? 3) Can I configure if multiple or single panel can be expanded at the same time? 4) Will it support panel hiding? (because I want to use it with “sliding panel UI layout”, not yet natively implemented in Ext 4) MrSparks3 years ago @Don, When 4.0 first dropped I did a little digging and found this article. I suspected that createElement might be the cause of the IE issues, however my limited JavaScript/programming knowledge meant I couldn’t prove it either way. Can Sencha seriously consider removing the remaining 5% createElement bottleneck moving forward, it would be very beneficial to the overall performance That’s the article Michael L3 years ago @Don Griffin quirksmode article is a bit outdated… createElement with document.createDocumentFragment works pretty well in my case, since appending to document fragment object does not cause page reflow. Did You Guys did some tests, please share if so? Jeff Norris3 years ago is there a timeframe that we can be expecting this release. Im excited for the changes and cant wait to see how it helps my app out. I would like to have a timeframe because i have some bugs that i think this will fix and i can delay looking at those bugs if i can tell my PM when this release is coming kredytomaniak3 years ago Great work guys! I can’t wait to test it, thanks. krishnaM3 years ago @Don: any word on the 4.1 release? We are in the middle of a project go-live and the IE7/IE8 performance issue has become a MAJOR bottleneck. Is there a way we can get access to the beta release before Christmas? Appreciate your help and inputs, Rich028183 years ago @krishnaM Suggest you look into Chrome Frame if you truly need performance on IE7-8 Michael Camden3 years ago @Rich02818 IE7/8 performance is a concern of ours as well, and if we had the means to install Chrome tab on our clients computers, wouldn’t we just install Chrome? krishnaM3 years ago @Rich02818: Unfortunately, in most corporate environments, asking customers to allow their users to download a plug-in or the Chrome Frame is not a realistic solution. Rich028183 years ago It is claimed that Chrome Frame can be installed without admin privileges. Given that we have no reason to expect acceptable performance of ExtJS 4.x in old IE versions, I consider it worth investigating…. MrSparks3 years ago @Rich02818, most corporate environments prevent downloads by users, they also may have strict internet usage policies. Typically violating these is gross misconduct. Mark Rhodes3 years ago I was really interested in the new “to png” feature of the charts - I’ve been trying to figure out how to do that for a while (particularly in VML).. However, looking the source it look like you’re just sending the rendered chart data over to some sencha server ()! This is completely unacceptable from a security point of view on my project and I’m sure others will be in the same boat. Why isn’t it mentioned in the docs that this is how it works? PS: Why is down so often!! Do the other servers covering have the same issue? If so that’s a pretty bad show too. firefly3 years ago I am also very really concerned about your phrase on custom layouts that might break in 4.1. You ‘believe this is a fairly rare practice’, but we are doing some custom layouts here ! Winters3 years ago Good Job for 4.1.0 Beta 1. Thank you again for publishing it before Christmas! Dennis3 years ago Ext.define(‘My.app.PanelPatch’, { override: ‘Ext.panel.Panel’, Breaks the loader if you have this class before loading Ext.panel.Panel. Then Ext.create will throw [Ext.create] ‘Ext.panel.Panel’ is a singleton and cannot be instantiated Michael Camden3 years ago @Dennis I know it’s not specified but did you try to add a ‘requires’ property to your definition? Ext.define(‘My.app.PanelPatch’, { requires: ‘Ext.panel.Panel’, override: ‘Ext.panel.Panel’ }); Little redundant, but it may work. Dennis3 years ago @Camden, that doesnt work. And even if it did, it would still have the problem of requiring me to load the panel up front, which I am not really interested in doing. I wanted to use this for loading localizations, and then just expect them to work whenever the appropiate class was actually loaded. Hermes3 years ago Why Extjs MVC app do not work with multiple applications ? for example: Ext.applicationBundle({ name: ‘Mainapp’, bundles:[‘App1’,‘App2’], controllers:[‘Myappcontrol’] }) Ext.create(‘Ext.app.Application’,{ name:‘App1’, controlles:[‘app1controller1’,‘app1controller2’] }) these is good!! because i develop my application by modules, and modules can be developed like application. Dennis3 years ago For multiapplication, Check out Hermes3 years ago That is ok !. Hermes3 years ago. Comments are Gravatar enabled. Your email address will not be shown.Commenting is not available in this channel entry.
http://www.sencha.com/blog/whats-new-in-ext-js-4-1/
CC-MAIN-2014-41
refinedweb
5,435
65.52
# sizeinfo.tcl -- # Display window size info while resizing. # Version : 0.0.1 # Author : Mark G. Saye # Email : markgsaye@gmail.com # Copyright : Copyright (C) 2003 # Date : February 19, 2003 # ====================================================================== namespace eval sizeinfo {} package require Tk package provide sizeinfo 0.0.1 # ====================================================================== # create -- proc sizeinfo::create {W} { toplevel $W.sizeinfo -bd 0 wm withdraw $W.sizeinfo update idletasks wm transient $W.sizeinfo $W wm overrideredirect $W.sizeinfo 1 label $W.sizeinfo.label -relief raised -bd 2 pack $W.sizeinfo.label } # ====================================================================== # destroy -- proc sizeinfo::destroy {W} { ::destroy $W.sizeinfo } # ====================================================================== # refresh -- proc sizeinfo::show {W w h x y} { variable $W upvar 0 $W _ if { [info exists _(after)] } { after cancel $_(after) } if { ![winfo exists $W.sizeinfo] } { create $W } set label $W.sizeinfo.label $label configure -text "$w x $h + $x + $y" set x0 [expr {$x + ($w / 2)}] set y0 [expr {$y + ($h / 2)}] set _w [winfo reqwidth $label] set _h [winfo reqheight $label] set _x [expr {$x0 - ($_w / 2)}] set _y [expr {$y0 - ($_h / 2)}] wm geometry $W.sizeinfo ${_w}x${_h}+${_x}+${_y} if { ![winfo ismapped $W.sizeinfo] } { wm deiconify $W.sizeinfo update idletasks } set _(after) [after 1000 [list sizeinfo::destroy $W]] } # ====================================================================== proc sizeinfo::sizeinfo {W} { if { [string equal $W .] } { set w "" } { set w $W } bind $W <Configure> [list sizeinfo::show $w %w %h %x %y] } # ====================================================================== # Demo code if { [info exists argv0] && [string equal [info script] $argv0] } { toplevel .t sizeinfo::sizeinfo . sizeinfo::sizeinfo .t } # ====================================================================== MSW / 20. Feb 2003I think it is not satisfying the initial question. The problem was that the user should be able to do the resize continously and as well continously see the updated geometry info. Your package is nice, for it mimics a feature I like about some window managers :) But the asker also pointed out that the Configure Event only is triggered when the resizing is finished. I have looked a bit into the sources of fvwm1 which copies the resizing from the wm window manager (not our wm :), and it uses a MotionNotify Event (that is an X Event I believe, I'm no X diver), which our bind does not accept as argument. Yes, there is Motion, but it's kinda hard to get a motion when you are OUTSIDE the widget (as you are typically resizing with the BORDER of a window which is not part of the widget!) Fire up a wish and run the following to see what I mean (and resize . some after it) % bind . <Enter> { puts -nonewline stderr "\nEntering . " } % bind . <Leave> { puts -nonewline stderr "\nLeaving . " } % bind . <Motion> { puts -nonewline stderr "." }So bindings don't qualify (you need them continously - and you don't get them), and I did not find the right thing which could be handled by wm protocol, although there might be something.Still, I like sizeinfo :) - Oh and I know that my solution (sizepanel) is not satisfying the question either, but it offers a continous resizing with simultaneous information about the dimensions. MGS - I'm not sure I really even understand the question. What do you mean by 'continuously'? I figured that showing the updated geometry, as the window is being resized, was what the OP was after. Anyway, wouldn't the resizing be done by a ConfigureNotify event, rather than a MotionNotify event? (Tk's bind doesn't handle that either.) I, like you, am no Xlib expert, though. Glad you like sizeinfo. MSW (for ConfigureNotify vs. MotionNotify) could well be that I misread or misremember, don't have the my notes here atm:)(for what the original poster (imho) was after): Imagine you start resizing a toplevel window. Let's say your window manager handles resizing by clicking on one of the corners of the border around the window, and then dragging the window "into shape". While you do that, each time it changes by one pixel, there is a little window which pops up at a well defined place (say upper left screen corner or middle of the window being resized) which updates information about the dimenions of the window while it is being resized.So you'd start resizing a 200x200 window. You'd click on the upper left corner and drag one pixel diagonal into the window, the window would shrink to 199x199, and the little label (wherever it may be) would now immediately display 199x199, you drag one more pixel, it goes 198x198 etc. etc. Then you are done with the resize, and release the mousebutton, and it's only then where the tk event <Configure> fires.My understanding is that the OP wanted to emulate the behaviour some window managers (like e.g. fvwm with my settings - dunno if it always does it, it's been ages since I configured it) show and which I explained above. The point is the immediate update of information as in contrast to the "delayed" update of information tcl can provide only when reacting on it - that's why I chose to take the stick rather and make tcl act as it can do that immediately. MGS - Ah, I'm with you now. We seem to be talking about the same thing. I guess this is window manager dependent, 'coz with my sizeinfo package above, the info does update continuously. i.e. <Configure> events are being received while the window is being resized. FWIW, I'm using Linux/KDE 3.1 with the default kwin window manager. Maybe some window managers have an "opaque-move" option, or something similar. MSW It won't work (continuously) neither with fvwm2 @ netbsd nor with openwin @ sunos 2.8/2.9. It's a pity that it's window manager dependant - on the other hand resizing a window is window manager dependant, too, eh ? ... EB: OpaqueResize is the first feature I disable in a WM. Here is a way to mimic it through a resize handle: proc opaque_resize_handle {W} { # replace here with an image or whatever set handle [frame $W.resize_handle -background red -cursor sizing] place $handle -relx 1.0 -rely 1.0 -width 10 -height 10 -anchor se # bindings to resize W bind $handle <ButtonPress-1> { bind %W <Motion> \ [string map [list WIDTH [winfo width [winfo parent %W]] \ HEIGHT [winfo height [winfo parent %W]]] \ { wm geometry \ [winfo parent %W] \ [expr {WIDTH+(%%X - %X)}]x[expr {HEIGHT+(%%Y - %Y)}] }] } bind $handle <ButtonRelease-1> { bind %W <Motion> {} } }MSW: Neat!MGS - See also Resize control and widget:resizeHandle
http://wiki.tcl.tk/8423
CC-MAIN-2017-09
refinedweb
1,061
64.61
Results 1 to 2 of 2 - Join Date - Aug 2008 - 1 - Thanks - 0 - Thanked 0 Times in 0 Posts C++ coding help -- Having issues with an if/else statement /sigh I'm having issues with the following code: Code: #include <iostream> using namespace std; int main (int argc, char * const argv[]) { string next; string res1; string res2; string res3; string res4; string res5; string res6; string res7; cout << "Welcome to NAMEGAME" << endl; cout << "Your lit cigarette is the only light in your cramped, \nsqualid room. You’re sitting up-right in your bed, \nstaring into the darkness. Your room has only three \nwalls, the forth wall is made entirely of the door. \nLight spills in from the sides of the door, slightly \nilluminating your ceiling, floor, and walls. \nYou face towards your door." << endl; cout << "Type 'Next' to continue reading." << endl; cin >> next; if (next == "Colon") { cout << "lol, colon" << endl; goto LINE3; } else { cout << "\n"; goto LINE3; } LINE3:cout << "Ash drops from your cigarette to your bed, and you \nwipe it off with your hand. It’s burned down to the \nbutt, and you flick it onto the floor. You search \nyour pocket for another. You brush aside your LIGHTER \nand find your pack of Marlboros. It feels suspiciously \nlight, and indeed, on further inspection it is empty. \nYou curse and throw it to the floor. You dig in your \nPILLOW CASE for some money, and take out your \nlast few bills and coins adding up to around $9.50. \n(Enough for a pack, and you don’t feel like counting all the little coins). \nYou shove the money into your short’s pocket. \nFloor 7 Landing lies SOUTH." << endl; restart:std::getline(std::cin,res1); if (res1 == "SOUTH" || res1 == "South" || res1 == "south" || res1 == "GO SOUTH" || res1 == "Go SOUTH" || res1 == "go south" || res1 == "GO south" || res1 == "Go South" || res1 == "go SOUTH" || res1 == "S" || res1 == "s") { cout << "You went SOUTH, arriving at the Floor 7 landing. DESCRIPTION1. OPTIONS1." << endl; } else { cout << "I'm sorry, could you try re-phrasing that?\n" << endl; goto restart; } return 0; } Okay, I managed to fix it by instead of using cin << I used std::getline(std::cin,STRING) I could still use help in finding a way to make it null rather than typing Next. Last edited by MrNotkewl; 08-20-2008 at 08:58 AM. - Join Date - May 2002 - Location - Virginia, USA - 621 - Thanks - 0 - Thanked 6 Times in 6 Posts Do you mean something like: Code: std::cout << "Hit <enter>" << std::endl; std::cin.get();
http://www.codingforums.com/computer-programming/147029-c-coding-help-having-issues-if-else-statement.html
CC-MAIN-2017-09
refinedweb
423
77.77
We'll devote less space to SCSI than IDE because IDE drives dominate the PC platform, but we will try to hit the high points of SCSI. SCSI (Small Computer Systems Interface) is a general-purpose I/O bus that is used in PCs primarily for connecting hard disks and other storage devices, and secondarily for connecting a variety of devices, including scanners, printers, and other external peripherals. Although common in the Apple Macintosh world, SCSI has remained a niche product in PCs, limited primarily to network servers, high-performance workstations, and other applications where the higher performance and flexibility of SCSI are enough to offset the lower cost of ATA. SCSI is confusing because of the proliferation of terms, many of which refer to similar things in different ways or to different things in similar ways. There are actually three SCSI standards, each of which refers not to any particular implementation, but to the document that defines that level. The SCSI standard was adopted in 1986 and is now obsolete. Originally called simply SCSI, but now officially SCSI-1, this standard defines a high-level method of communicating between devices, an Initiator (normally a computer), and a Target (normally a disk drive or other peripheral). SCSI-1 permits data to be transferred in asynchronous mode (unclocked mode) or synchronous mode (clocked mode), although commands and messages are always transferred in asynchronous mode. SCSI-1 uses the low-density 50-pin connector for both internal and external connections. The external low-density 50-pin connector is also referred to as the Centronics SCSI connector. SCSI-1 is a single comprehensive document that defines all physical and protocol layers, and is published as ANSI X3.131-1986. SCSI-2 was adopted in 1994, and many current SCSI devices are SCSI-2 compliant. SCSI-2 updated the SCSI-1 standard to include faster data rates and to more tightly define message and command structures for improved compatibility between SCSI devices. SCSI-2 devices use various connectors, depending on the width and speed of the implementation. SCSI-2 is a single comprehensive document that defines all physical and protocol layers, and is published as ANSI X3.131-1994. The monolithic documents that describe SCSI-1 and SCSI-2 became too unwieldy for the greatly expanded SCSI-3 specification, so beginning with the SCSI-3 specification the document was separated into multiple layered components, each defined by an individual standards document. Together, these individual documents comprise the SCSI-3 standard, which is now officially referred to as simply SCSI. For more information about SCSI standards, visit the SCSI Trade Association (). SCSI implementations are characterized by their width (bits transferred per clock cycle), clock rate, and overall throughput, which is the product of those two figures. Bus width determines how much data is transferred per clock cycle, and may be either of the following: Narrow SCSI transfers one byte per clock cycle, using a one-byte wide data bus on a 50-pin parallel interface, which is defined by SCSI-1. Wide SCSI transfers two bytes per clock cycle, using a two-byte wide data bus on a 68-pin parallel interface, which is defined by the SCSI-3 SPI document. Although SCSI-3 allows bus widths greater than two bytes, all current Wide SCSI implementations use two bytes. The signaling rate (or clock rate), properly denominated in MegaTransfers/Second (MT/s) but more commonly stated in MHz, specifies how frequently transfers occur. Various SCSI implementations use signaling rates of 5 MHz, 10 MHz, 20 MHz, 40 MHz, and 80 MHz, which are given the following names: SCSI when used without qualification to describe a transfer rate refers to the 5 MT/s transfer rate defined in SCSI-1. Because SCSI-1 supports only narrow (8-bit) transfers, SCSI-1 transfers 5 MB/s (5 MT/s x 1 byte/transfer). Fast SCSI describes the 10 MT/s transfer rate defined in SCSI-2. Used with a narrow interface (called Fast Narrow SCSI or simply Fast SCSI), transfers 10 MB/s (10 MT/s x 1 byte/transfer). Used with a wide interface, called Fast Wide SCSI, transfers 20 MB/s (10 MT/s x 2 bytes/transfer). Ultra SCSI , also called Fast-20 SCSI, describes the 20 MT/s transfer rate defined in an extension to the SCSI-3 SPI document (ANSI standard X3T10/1071D revision 6). Used with a narrow interface (called Narrow Ultra SCSI or simply Ultra SCSI), transfers 20 MB/s (20 MT/s x 1 byte/transfer). Used with a wide interface (called Wide Ultra SCSI), transfers 40 MB/s (20 MT/s x 2 bytes/transfer). Ultra2 SCSI , also called Fast-40 SCSI, describes the 40 MT/s transfer rate defined in SCSI-3 SPI-2. Used with a narrow interface (called Narrow Ultra2 SCSI or simply Ultra2 SCSI), transfers 40 MB/s (40 MT/s x 1 byte/transfer). Used with a wide interface (called Wide Ultra2 SCSI or U2W SCSI), transfers 80 MB/s (40 MT/s x 2 bytes/transfer). Ultra3 SCSI , also called Fast-80DT SCSI or Ultra160 SCSI, describes the 80 MT/s transfer rate defined in SCSI-3 SPI-3. Fast-80DT actually uses a 40 MHz clock, but is double-pumped, which is to say that it makes two transfers during each clock cycle. Only wide interfaces are defined for speeds higher than Ultra2 SCSI, which means that Ultra3 SCSI transfers 160 MB/s (80 MT/s x 2 bytes/transfer). Ultra320 SCSI , also called Fast-160DT SCSI, describes the 160 MT/s transfer rate defined in SCSI-3 SPI-4. Fast-160DT uses a double-pumped 80 MHz clock, and transfers 320 MB/s (160 MT/s x 2 bytes/transfer). The fastest current SCSI hard drives transfer less than 80 MB/s, which means that it requires at least two hard drives to saturate Ultra160 SCSI. Accordingly, few desktop systems or workstations require anything faster than Ultra160 SCSI. Ultra320 SCSI is used almost exclusively on midrange or larger servers. In addition to being differentiated by bus width and signaling speed, SCSI devices may be one of two general types, which are incompatible with each other: Single-ended SCSI (SE SCSI) devices use unbalanced transmission (one wire per signal), which minimizes the number of wires required in the connecting cable, but also limits maximum bus length and maximum data rates. Until recently, all PC-class SCSI devices were SE, but SE SCSI devices are now obsolescent. Differential SCSI devices use balanced transmission (two wires per signal, plus and minus), which reduces the effects of noise on the SCSI channel. This requires a more expensive cable with additional wires, but extends the maximum allowable bus length and allows increased data rates. Originally, differential SCSI was used only on large computers, where the greater bus length of differential SCSI allows connecting mainframes and minicomputers to external disk farms. In modified form, differential SCSI is now commonplace on PCs. Two forms of differential SCSI exist: High-Voltage Differential SCSI (HVD SCSI) was originally called simply Differential SCSI before the advent of Low-Voltage Differential SCSI, described next. HVD SCSI is very seldom used in the PC environment. Low-Voltage Differential SCSI (LVD SCSI) devices use differential transmission, but at lower voltage than HVD SCSI devices. LVD is where the action is in high-performance PC SCSI drives now, and where it is likely to remain for the foreseeable future. Although they are technically unrelated, LVD and U2W were often used as synonyms because most U2W hard drives use LVD transmission. However, Ultra160 devices have become common, and they also use LVD. Table 13-9 summarizes implementations of SCSI you may encounter. For Narrow SCSI implementations, the word "Narrow" in the name is optional, and is assumed unless Wide is specified. The Clock column lists the signaling rate in MT/s. The DTR column lists the total data transfer rate, which is the product of the signaling rate and the bus width in bytes. The Devices column lists the maximum number of SCSI devices that may be connected to the SCSI bus, including the host adapter. The maximum number of devices supported on any Narrow SCSI bus is 8, and on a Wide SCSI bus is 16. Because a longer bus results in signal degradation, the number of devices supported is sometimes determined by the length of the bus. For example, Wide Ultra SCSI supports up to eight devices on a 1.5-meter (~ 4.9-foot) bus, but only four devices (host adapter plus three drives) on a bus twice that length. SCSI devices use a variety of connectors. Until recently, there was little standardization, and no way to judge the SCSI standard of a device by looking at its connector. For example, current U2W devices use the 68-pin high-density connector, but that connector has also been used by old Digital Equipment Corporation (DEC) machines for single-ended devices. By convention, all SCSI devices have female connectors and all SCSI cables have male connectors. This rule is generally followed by modern SCSI devices intended for use on PCs, although it is frequently violated by very old PC devices and by devices intended for use outside the PC environment. Mainstream SCSI devices use the following cables and connectors: Some scanners, external Zip drives, and other Narrow SCSI devices use the DB25 SCSI connector, also called the Apple-Style SCSI connector. Unfortunately, this is the same connector used on PCs for parallel ports, which makes it easy to confuse the purpose of the connector on the PC. Devices are linked using a straight-through DB25M-to-DB25M cable. The 50-pin Centronics SCSI connector is also called the Low-density 50-pin SCSI connector or the SCSI-1 connector and resembles a standard Centronics printer connector. Male SCSI-1 connectors are used on external cables for SCSI-1 devices, and by internal ribbon cables for both SCSI-1 and SCSI-2 devices. The Micro DB50 SCSI connector is also called the Mini DB50 SCSI connector, the 50-pin High-density SCSI connector, or the SCSI-2 connector. Male SCSI-2 connectors are used on external cables for SCSI-2 devices. The Micro DB68 SCSI connector is also called the Mini DB68 SCSI connector, the 68-pin High-density SCSI connector, or the SCSI-3 connector. Male SCSI-3 connectors are used on external cables and internal ribbon cables for SCSI-3 devices. The Ultra Micro DB68 SCSI connector is also called the Very high-density condensed 68-pin SCSI connector or the VHDCI SCSI connector, and is also often incorrectly called the SCSI-4 connector or the SCSI-5 connector. The VHDCI SCSI connector is used by Ultra160 SCSI devices. The SCA interface, originally used primarily in large IBM computers, uses a standard 80-pin connector that provides power, configuration settings (such as SCSI ID), and termination of the SCSI bus. SCA was designed to allow hot-swappable drives to connect directly to the SCSI bus via an SCA backplane connector, without requiring separate power or interface cables. SCA interface drives can be connected to a standard 50- or 68-pin connector on a PC SCSI host adapter by using an SCA-to-SCSI adapter, which is readily available from most computer stores and mail-order sources. SCA devices are seldom used in PC-class hardware except in servers with hot-swappable drives. Narrow (8-bit) SCSI transfer modes use narrow (50-pin) cables. Officially, a narrow cable is called a SCSI A cable, but it may also be called a SCSI-1 cable or a 50-pin SCSI cable. An A cable may use any of several connectors, including standard-density 50-pin internal, high-density 50-pin internal, DD-50 50-pin external, Centronics 50-pin external, and high-density 50-pin external. Narrow SCSI uses 50 signals, each carried on one of the 50 wires in the SCSI A cable, with the 50 wires organized into 25 pairs. For SE SCSI, each pair includes a signal wire and a signal return (ground) wire. Figure 13-5 shows a SCSI A cable with an internal 50-pin connector. Table 13-10 lists the pinouts for SCSI A cables and connectors. A "#" following a signal name indicates that signal is active-low. In an A cable SCSI bus, (reserved) lines should be left open in SCSI devices, may be grounded at any point, and are grounded in the terminator. All A cables use the same signals on the same conductor in the cable, but the pinouts to the connectors vary by connector type. In the table, "External" refers to a SCSI A cable that uses an external shielded connector. "Internal" refers to an unshielded internal header-pin connector. Wide (16-bit) SCSI transfer modes use wide (68-pin) cables. Officially, a wide cable is called a SCSI P cable, but it may also be called a SCSI-2 cable or a 68-pin SCSI cable. A P cable may use any of several connectors, most commonly high-density 68-pin internal, high-density 68-pin external, and VHDCI 68-pin external. Wide SCSI uses 68 signals, each carried on one of the 68 wires in the SCSI P cable, with the 68 wires organized into 34 pairs. For SE SCSI, each pair includes a signal wire and a signal return (ground) wire. Figure 13-6 shows a SCSI P cable with an internal 68-pin high-density connector. Note the twisted pairs in the cable segment at top. Table 13-11 lists the pinouts for SCSI P cables and connectors. A "#" following a signal name indicates that signal is active-low. In a P cable, (reserved) lines are left open in SCSI devices and terminators. Although conductor numbers do not map directly to pin numbers, all P cable connectors use the same pinouts. LVD SCSI transfer modes use a wide (68-pin) cable of special design and construction, which is labeled and referred to as a SCSI LVD cable. An LVD cable uses the same high-density 68-pin external and VHDCI 68-pin external connectors as a P cable. However, all LVD connectors, internal or external, must be shielded, so the high-density 68-pin internal connector is not supported for LVD. Table 13-12 lists the pinouts for SCSI LVD cables and connectors. Because LVD uses differential signaling rather than the signal/ground method used by SE implementations, each LVD signal is actually a plus and minus signal pair, carried on a twisted pair within the cable. So, for example, whereas in SE SCSI conductors 2 and 1 carry the DB(12)# (active-low) signal and its "signal return" (ground), in LVD SCSI those same conductors carry the DB(12)- (negative) and DB(12)+ (positive) signal pair, respectively. LVD adds one signal not used by earlier variants. The DIFFSENS signal (conductor 31 in LVD Wide, and conductor 21 on LVD Narrow) is used to control differential signaling.
http://etutorials.org/Misc/pc+hardware/Chapter+13.+Hard+Disk+Interfaces/13.3+SCSI/
CC-MAIN-2018-39
refinedweb
2,506
50.77
On Aug 6, 2007, at 12:06, Xell Zhang wrote: > I have compiled and installed SVN 1.4.4 and want to configure it to > run by using apache2. (On Ubuntu 7.04) > I followed the book <Version Control with Subversion> and apache2 > successfully started up with svn mods. My configuration in > httpd.conf is like below: > <Location /svn> > DAV svn > SVNParentPath /var/svnroot > SVNListParentPath on > AuthType Basic > AuthName "Subversion repository" > AuthUserFile /etc/svn-auth-file > Require valid-user > </Location> > > My apache is started as apache:apache user so I change the owner > of /var/svnroot from root to apache.apache. > Then I use: > svn import dummy.test -m "test" > to create the very first test. But after I input the password I got: > svn: PROPFIND request failed on '/svn/test' > svn: Could not open the requested SVN filesystem > > In apache2 error logs I found: > [Tue Aug 07 01:00:02 2007] [error] [client 127.0.0.1] (20014) > Internal error: Can't open file '/var/svnroot/test/format': No such > file or directory > [Tue Aug 07 01:00:02 2007] [error] [client 127.0.0.1] Could not > fetch resource information. [500, #0] > [Tue Aug 07 01:00:02 2007] [error] [client 127.0.0.1 ] Could not > open the requested SVN filesystem [500, #2] > [Tue Aug 07 01:00:02 2007] [error] [client 127.0.0.1] Could not > open the requested SVN filesystem [500, #2] > > Then I use: > sudo svnadmin create /var/svnroot/dummy > It seems successful to create a repository because in my browser > when I visit "" I can get a page of > which content is like: > Revision 0: /Powered by Subversion version 1.4.4 (r25188). > > So I don't know why I cannot import a file. I google this for a > long time but cannot find useful information... > Thanks for helping! When you use "SVNParentPath /var/svnroot" you are informing Subversion that you would like to host multiple repositories under that directory. Each repository must be created with "svnadmin create" before it can be used. You created a repository called "dummy" and were able to access it just fine. You should be able to import into it too. You were not able to import into the "test" repository because it doesn't sound like you ever "svnadmin create"d a test repository. If you would prefer to have just a single repository, then you would use "SVNPath /var/svnroot" and you would "svnadmin create" just the single repository /var/svnroot. --------------------------------------------------------------------- To unsubscribe, e-mail: users-unsubscribe@subversion.tigris.org For additional commands, e-mail: users-help@subversion.tigris.org Received on Mon Aug 6 19:18:24 2007 This is an archived mail posted to the Subversion Users mailing list.
https://svn.haxx.se/users/archive-2007-08/0127.shtml
CC-MAIN-2019-22
refinedweb
456
64.51
In this part about zippers with two holes, we discuss the relationship between zippers and (database) transactions of various isolation modes. We show the updating enumerator and the corresponding zipper that maximally preserve sharing and can walk terms with directed loops. We demonstrate that a zipper can convert a (sequential) map to a fold. First we need to determine what a zipper with two holes -- two updating cursors into the same data structure -- actually mean. Here are a few plausible semantics: 1. We traverse and update the data structure with one zipper first. After we're finished, we `open' another cursor on the result and do another updating traversal. 2. We treat the term as if each subterm were mutable. Each cursor may navigate to and mutate any subterm. 3. Two cursors operate on essentially separate copies of the same data structure. The movement and updates by one cursor have no effect on the other. This semantics of complete isolation and treating a data structure _as if_ it existed in two separate copies has been already implemented in the first part. It is mentioned here for completeness only: normally we do not wish to fork different versions of data, so some interaction between updating traversals is required. Semantics 1 corresponds to a so-called serializable isolation mode. Two transactions essentially run sequentially. The challenge here is to permit two transactions run in parallel but still assure the serializable semantics. Semantics 2 corresponds to a so-called `Dirty Read' (ANSI Read Uncommitted) isolation mode -- that is, no isolation at all. Updates by one transaction are instantly visible to the others. There are points in-between: In Committed Read, updates by one transaction are not visible to the others until that transaction commits. In Repeatable Read, transaction 1 will see the updates of the committed transaction 2 only in the parts of the structure that transaction 1 hasn't visited yet. That is, one can't affect the past. Obviously, the serializable isolation mode permits the least concurrency -- it is a challenge even to permit any concurrency at all. The dirty read mode allows the most concurrency -- often at a peril to the programmer. Indeed, imagine what happens when we modify a subterm with one cursor and truncate the term at the root with the other cursor. Where is the first cursor to return to? Database manuals are filled with horror stories of one transaction wishing to modify rows that a concurrently running transaction has already deleted (see PostgreSQL 8.0 documentation, Chapter 12.2 and especially Chapter 12.4). As it turns out, zippers can support all these isolations modes, even at a finer granularity: subtransactions. We can indeed traverse a term _as if_ all of its subterms were mutable -- although none are. We can use either the `push' mode -- one cursor broadcasts its updates to the others -- or the pull mode. A cursor may chose to broadcast accumulated updates (rather than every update) -- at arbitrary checkpoints of its own choosing. A cursor may examine updates made by the other cursor and perhaps disregard some of them -- or apply in a different order, after its own updates. We are using terms zipper, cursor and transaction somewhat interchangeably -- for a reason that any update made through a zipper is `instantly' reversible. If we remember the value of the zipper at the beginning or other suitable point, we can in no time return to the state of the data at that point. Also, delimited continuations let us view a single-thread system as cooperatively multi-threading one. The implementation of some database management systems does share similarities with zipper-like traversals. For example, in Illustra (a richer, short-lived brother of PostgreSQL), updates are done functionally rather than destructively. Old versions of data were preserved, so to permit the implementation of isolation modes, roll-backs and time travel. Garbage collection -- vacuuming -- though was not automatic. As before, all our zippers are generic with respect to the data structure: we do not need to derive data constructors that depend on the term in question. Also, we can handle terms that are general graphs with directed loops, rather than trees. The design space is quite large and there is no hope of covering it all in this message. Therefore, in this message we show an updating traversal function that underlies all the zippers, and a dumb zipper, on which we build smarter one- and two-hole zippers. The two-hole zipper in this message is not very good -- but it is the most explicit and the easiest to explain (and besides, this code was written first, a couple of weeks ago). The other messages, if any, will show one-hole zippers with smarts built-in. We can the realize two-hole zippers of various isolation modes, including the most challenging ones. The smarter zippers indeed use multi-prompt delimited continuations in non-trivial ways, with one delimited continuation capturing a _part_ of another. A zipper is as good as the underlying traversal function. We start therefore with an improved term enumerator that maximally preserves sharing. If no updates were done, the result of the traversal is the original term itself -- rather than its copy. Furthermore, this property holds for each subterm. The new traversal function lets us operate on subterms in pre-order, in-order, or post-order. More importantly, it lets us effectively span a `canonical' spanning tree on a term, so each node can be unambiguously identified. We do not require equality on (sub)terms. > {-# OPTIONS -fglasgow-exts #-} > module Zipper2 where [imports as before. Some code elided to keep the message under 20k] We will be using the same term as in Part 1: > data Term = Var String | A Term Term | L String Term > data Direction = Down | DownRight | Up | Next deriving (Eq, Show) Instead of directions Down and DownRight we could've defined a single direction `Down Int', with `Down 0' meaning descent to the first child subterm, `Down 1' meaning descent to the second child subterm etc. The direction `Next' means to proceed in the `natural' (depth-first) order. The following function does an updating traversal of `term' in an arbitrary monad. The user-supplied function `tf' receives the current term and the direction how the current term was reached. That direction is never `Next'. For a composite term, `tf' will normally be called several times: once with the direction `Down' or `DownRight' (when we first descended into that term), and once or more in the direction `Up' (after we have finished walking each of the term's children). The function `tf' returns a pair: Maybe newTerm and the new Direction. The newTerm replaces the current term; the new Direction tells where to go afterwards. > traverse :: (Monad m) => > (Direction -> Term -> m (Maybe Term, Direction)) -> Term -> m Term > traverse tf term = traverse' id Down term >>= maybeM term id > where traverse' next_dir init_dir term = > do > (term', direction) <- tf init_dir term > let new_term = maybe term id term' > select (next_dir direction) new_term >>= maybeM term' Just > select Up t = return Nothing > select Next t@(Var _) = return Nothing > select dir t@(L v t1) | dir == Next || dir == Down = > do > t' <- traverse' id Down t1 >>= (return . fmap (L v)) > traverse' (next Up) Up (maybe t id t') >>= maybeM t' Just > select DownRight t@(A t1 t2) = > do > t' <- traverse' id DownRight t2 >>= > (return . fmap (\t2'->(A t1 t2'))) > traverse' (next Up) Up (maybe t id t') >>= maybeM t' Just > select dir t@(A t1 t2) | dir == Next || dir == Down = > do > t' <- traverse' id Down t1 >>= > (return . fmap (\t1'->(A t1' t2))) > traverse' (next DownRight) Up (maybe t id t') >>= > maybeM t' Just > > next next_dir dir = if dir == Next then next_dir else dir > maybeM onn onj v = return $ maybe onn onj v As you can see, if we descend to a term `A t1 t2' which `tf' leaves unmodified, traverse `t1' and `t2', again with no updates, check `A t1 t2' again and decline any modifications -- the result is the original subterm `A t1 t2' itself, rather than its copy. When `tf' does return a new term, we have to rebuild some subterms, but only to the minimal extent necessary. So, the new term will share as much as possible with the original one. Our sample terms are the P2 numeral term1, which prints as \f.\x.((f \f.(f \f.\x.x)) ((f \f.\x.x) x)) and a tree spanning infinitely in depth and in breadth: > term2 = L "f" (A (A f (A term2 f)) (A term2 f)) where f = Var "f" If interpreted in lambda-calculus, this term satisfies the equality term2 f = f (term2 f) (term2 f) which makes it sort of a wide Y combinator, with recursion implemented by sharing. It is not a good idea to print term2. We can traverse term1 in the Identity monad > testt1 = runIdentity (traverse (\_ term -> return (Nothing,Next)) term1) or, more illustratively, in the IO monad, so we can print the encountered subterms: > testt2 = traverse tf term1 > where tf dir term = do print dir; print term; return (Nothing,Next) We can also modify the term: > testt4 = runIdentity (traverse tf term1) > where tf _ (L "x" (Var "x")) = return (Just (L "y" (Var "y")),Up) > tf _ _ = return (Nothing,Next) the result is term1 with all occurrences of \x.x replaced with \y.y. The infinite term2 is harder to traverse (and avoid looping). Zipper is far better for that. In general, we observe that the enumerator like `traverse' is better for context-insensitive and bulk transformations. OTH, Zipper is better for making few and highly-context-sensitive updates. Zipper is an updating cursor into the data structure: > data Zipper r m term dir = > Zipper dir term (CC r m (Maybe term, dir) -> CC r m (Zipper r m term dir)) > | ZipDone term The `ZipDone' alternative represents the `End-of-File'. As in part 1, Zipper is a recursive data type representing the delimited continuation of traversal. Unlike Gerard Huet's zipper, our zipper is completely polymorphic with respect to the term and the traversal direction. Our zipper is a derivative of the traversal function rather than of a data structure itself: > zip'term :: (Monad m, Monad (CC r m)) => > ((dir -> term -> CC r m (Maybe term, dir)) -> term -> CC r m term) > -> term -> CC r m (Zipper r m term dir) > zip'term enumerator term = > promptP (\p -> enumerator (tf p) term >>= (return . ZipDone)) > where tf p dir term = shiftP p (\k -> return (Zipper dir term k)) We can use zipper to do the full traversal, printing out subterms: > lprint x = liftIO $ print x > > zip'through (ZipDone term) = lprint "Done" >> lprint term > zip'through (Zipper dir term k) = do lprint dir; lprint term > nz <- k (return (Nothing,Next)) > zip'through nz > tz1 :: IO () = runCC (zip'term traverse term1 >>= zip'through) More illustrative however are partial traversals with updates. The term term2 makes especially good example as its full traversal is impractical. > tz3 :: IO () > = runCC ( > do > zipper <- zip'term traverse term2 > let max_depth = 5 > t <- traverse_replace max_depth zipper 0 > lprint "Final"; lprint t) > where > traverse_replace max_depth (Zipper dir term k) depth = > do > let new_depth = update_depth dir depth > let loop z = traverse_replace max_depth z new_depth > if new_depth <= max_depth then k (return (Nothing, Next)) >>= loop > else case term of > L "f" _ -> k (return (Just (L "f" (Var "f")),Up)) >>= > loop > _ -> k (return (Nothing, Next)) >>= loop > traverse_replace max_depth (ZipDone term) depth = return term > > update_depth Up = (+ (-1)) > update_depth _ = (+ 1) In test tz3, we truncate term2 at depth 5 -- that is, we replace \f.something forms that occur deeper than 5 levels with just \f.f. The result \f.((f (\f.((f (\f.f f)) (\f.f f)) f)) (\f.((f (\f.f f)) (\f.f f)) f)) is a finite term, the result of a `partial unrolling'. This is the example of a highly context-sensitive update: we have to keep track of the context (the subterm depth) and update only particular subterms that occur at particular depths. The example tz3 -- especially the function traverse_replace -- illustrates using zipper to implement `fold' over a data structure. We explicitly carry the state of the traversal, the current depth. In contrast, the original enumerator, `traverse', did _not_ permit the function `tf' to pass any state among its invocations. It appears then that a zipper lets us convert a map to a fold! That seems paradoxical -- indeed, map can be evaluated in parallel -- until we realize that only a specific kind of map can be converted to fold. Namely, a monadic (aka, sequential, or serialized) map. Such a map already has the state being passed from one invocation of the mapping function to the other. This state is the `stack' -- an implicit argument to every function. Delimited continuations have made this threaded-through argument explicit and let us piggy-back our own state on it. As an example of such a state is the current location in the term. We take note of the values of tf's dir argument to compute the path of the current node -- the sequence of Down or DownRight moves from the root to the current node. We define a `higher-order' zipper: > data ZipperD r m term dir = ZD{ zd_z:: Zipper r m term dir, > zd_path :: [dir] } > zd_term ZD{ zd_z = Zipper _ term _ } = term > zd_dir ZD{ zd_z = Zipper dir _ _ } = dir the constructor function > zipd'term enumerator term = > zip'term enumerator term >>= (\z -> return $ ZD z []) and the function to move and update the cursor, keeping track of the path: > zipd'move dir nt ZD{zd_z = Zipper _ _ k, zd_path = path} = > do z1 <- k (return (nt,dir)) > return $ > case (z1,path) of > (Zipper Up _ _,(_:rpath)) -> ZD z1 rpath > (Zipper dir _ _,_) -> ZD z1 (dir:path) > (ZipDone _,[]) -> ZD z1 [] and the `end-of-file' function > zipd'result ZD{zd_z = ZipDone term} = Just term > zipd'result _ = Nothing Now we can traverse the term and print the path to each node during the depth-first traversal. > tz4 :: IO () = runCC ( > do > zd <- zipd'term traverse term1 > let loop zd = maybe (traverse zd) final (zipd'result zd) > final term = do lprint "Finished"; lprint term > traverse zd at ZD{zd_z = Zipper dir term _, zd_path = path} = > do lprint $ (show dir) ++ "->" ++ (show path) > lprint term > zipd'move Next Nothing zd >>= loop > loop zd) Here's an excerpt from tz4 output: "DownRight->[DownRight,Down,Down,Down]" \f.(f \f.\x.x) "Down->[Down,DownRight,Down,Down,Down]" (f \f.\x.x) "Down->[Down,Down,DownRight,Down,Down,Down]" f "Up->[Down,DownRight,Down,Down,Down]" (f \f.\x.x) "DownRight->[DownRight,Down,DownRight,Down,Down,Down]" \f.\x.x In the next messages, we show how to push the path accumulation into the zipper itself. For now, we will use ZipperD to implement a two-hole zipper. This is not the best zipper, but it is more explicit and easier to explain. Our Zipper2 is a push-mode zipper with a particular isolation mode. We have two cursors. Updates made with cursor1 are immediately propagated to cursor2. Updates made with cursor2 are invisible to cursor1. The user of Zipper2 had better move cursor2 slower than cursor1: cursor1 can modify cursor2's past and thus cause a temporal paradox. Zipper2 is just a pair of ZipperDs: > data Zipper2 r m term dir = Zipper2 (ZipperD r m term dir) > (ZipperD r m term dir) > make'zip2 term = do z1 <- zipd'term traverse term > z2 <- zipd'term traverse term > return $ Zipper2 z1 z2 As we know, two different zippers traversing the same data structure are completely isolated. We wish to break the isolation, to let one zipper see updates of the other. The key insight is that a zipper sees its own updates. Therefore, to let z1 push its updates to z2, the cursor z1, having received a new term from the user, not only updates its own data structure. It also moves z2 to the position that matches z1's position, pushes the update, and then returns z2 to its original place. If z1 modifies a subterm in z2's future, this procedure is safe. The path accumulation is the key: each cursor always knows where it is. Cursor z1 also knows where z2 is. So z1 can move z2 arbitrarily through the tree, provided it returns the cursor where it found it. Cursor z1 moves z2 in an optimal way, rather than in the depth-first traversal way. This all works provided that the user treats Zipper2 as an abstract data type and does not, for example, references z1 and z2 individually. In the next messages, we relax this requirement, and so permit two seemingly independent and independently created cursors to communicate. The following code implements the simple idea of Zipper2: > zip2'move1 Nothing dir (Zipper2 z1 z2) = > do > lprint $ "move z1, no updates, " ++ (show dir) ++ "; path " ++ > (show $ zd_path z1) > lprint $ zd_term z1 > z1 <- zipd'move dir Nothing z1 > return $ Zipper2 z1 z2 > > zip2'move1 (Just nt) dir (Zipper2 z1 z2) = > do > lprint $ "move z1, update, " ++ (show dir); lprint nt > z2 <- update_z2 z1 z2 nt -- z1 updating z2 > z1 <- zipd'move dir (Just nt) z1 -- before updating itself > return $ Zipper2 z1 z2 > > zip2'move2 nt dir (Zipper2 z1 z2) = <elided> > > update_z2 z1 z2 nt = move_to_path target_path z2 >>= > zipd'move Up (Just nt) >>= move_to_path orig_path >>= > restore_state (zd_dir z2) > where orig_path = zd_path z2 ; target_path = zd_path z1 > > move_to_path target_path z = > if target_path == zd_path z then return z > else if lsp > ltp then zipd'move Up Nothing z >>= > move_to_path target_path > else <elided> > where sp = zd_path z > lsp = length sp > ltp = length target_path > > restore_state dir z = > case (dir, zd_dir z) of > (Up,Up) -> return z > (Up,_) -> zipd'move Down Nothing z >>= zipd'move Up Nothing > <elided> and we can show an example: > tt2 :: IO () = runCC ( > make'zip2 term1 >>= > zip2'move1 Nothing Next >>= > zip2'move2 Nothing Next >>= > zip2'move1 Nothing Next >>= > zip2'move1 Nothing Next >>= > zip2'move1 (Just (A (Var "y") (Var "y"))) Up >>= > zip2'move2 Nothing Next >>= > zip2'move2 Nothing Next >>= > zip2'move2 Nothing Up >>= > zip2'move2 Nothing Next >>= > zip2'move2 (Just (A (Var "z") (Var "z"))) Up >>= > zip2'through1 >>= zip2'through2 >> return ()) We use cursor 1 to navigate to a subterm and update it; we use cursor 2 to navigate and update. The final result of cursor 1 is \f.\x.((y y) ((f \f.\x.x) x)) and of cursor 2 is \f.\x.((y y) (z z)) The result of cursor 1 reflects only its own updates. The result of cursor 2 two reflects the updates by cursor 1 and the updates made by cursor 2 itself. It appears as if we first updated the term with cursor 1 and only afterwards with cursor 2. The printed trace however demonstrates that updates do proceed concurrently, so to speak. We move both cursors first. We make the update with cursor 1. We navigate with cursor 2 -- which already sees the update. We make another update. We move cursor 1, which does _not_ see the update by cursor 2.
http://www.haskell.org/pipermail/haskell/2005-May/015844.html
crawl-002
refinedweb
3,170
58.72
DEBSOURCES Skip Quicknav Patches / ruby-dbi /0.4.5-3 lib/dbi.rb | 6 0 + 6 - 0 ! lib/dbi/columninfo.rb | 6 0 + 6 - 0 ! 2 files changed, 12 deletions(-) remove usage of rubygems test/dbi/tc_dbi.rb | 8 0 + 8 - 0 ! 1 file changed, 8 deletions(-) skip available drivers test DBI drivers were split out into their own packages specifically so that one doesn't have to install all of them for DBI to work. bin/dbi | 518 0 + 518 - 0 ! bin/test_broken_dbi | 37 0 + 37 - 0 ! 2 files changed, 555 deletions(-) remove executables * dbi: too primitive to be worth the namespace in /usr/bin * test_broken_dbi: irrelevant for a Debian package build/Rakefile.dbi.rb | 2 1 + 1 - 0 ! lib/dbi.rb | 27 11 + 16 - 0 ! lib/dbi/columninfo.rb | 4 2 + 2 - 0 ! lib/dbi/utils/date.rb | 3 2 + 1 - 0 ! lib/dbi/utils/time.rb | 3 2 + 1 - 0 ! lib/dbi/utils/timestamp.rb | 4 3 + 1 - 0 ! test/dbi/tc_date.rb | 2 1 + 1 - 0 ! test/dbi/tc_time.rb | 2 1 + 1 - 0 ! test/dbi/tc_timestamp.rb | 2 1 + 1 - 0 ! 9 files changed, 24 insertions(+), 25 deletions(-) update to deprecated-3.0.0 Update to use "deprecated" version 3.0.0 to get rid of "already initialized constant Deprecate" warning. rubygems defines its own Deprecate module in rubygems/deprecate.rb. This causes a warning when deprecate 2.0.1 is required because it aliases the Deprecated module to Deprecate. Version 3.0.0 no longer does this aliasing and gets rid of the warning. Updating to 3.0.0 requires some changes in how methods are deprecated, and how set_action is called. Based on lib/dbi/columninfo.rb | 2 1 + 1 - 0 ! lib/dbi/row.rb | 40 20 + 20 - 0 ! 2 files changed, 21 insertions(+), 21 deletions(-) ruby-1.9 compatibility Fixes merged from: * Fix clone and dup for ruby 1.9 * Can't add keys to a hash during iteration lib/dbi/row.rb | 2 1 + 1 - 0 ! test/dbi/tc_row.rb | 2 1 + 1 - 0 ! test/dbi/tc_types.rb | 2 1 + 1 - 0 ! 3 files changed, 3 insertions(+), 3 deletions(-) add support for ruby2.0 Bug-Debian: lib/dbi/columninfo.rb | 2 1 + 1 - 0 ! 1 file changed, 1 insertion(+), 1 deletion(-) fix raise method lookup in ruby 2.1 Use complete namespace for the raise function call to prevent get caught by the 'method_missing' method. Bug-Debian:
https://sources.debian.org/patches/ruby-dbi/0.4.5-3/
CC-MAIN-2019-43
refinedweb
406
63.46
Claudenw commented on issue #83: WIP: Initial bloom filter code contribution URL: Are you suggesting then that a Bloom filter should accept an iterator or stream of int and perform comparisons against that as well as the standard Bloom filter match calculation? On Thu, Oct 24, 2019 at 1:52 PM Alex Herbert <notifications@github.com> wrote: > If the 225,000 bits of a bloom filter are 3519 longs then you have 3519 > ((filter & target) == target) operations with possible early exit: > > long[] filter = ... > long[] target = ... > for (int i = 0; i < filter.length; i++) { > if ((filter[i] & target[i]) == target[i]) { > return NO_MATCH; > } > } > return MATCH; > > If the expected number of bits turned on is 16 this seems wasteful, but > I've not worked through the probability distribution of bits and the > expected number of loop executions. > > The key point is that we can skip all the zeros in the target. Since this: > > if ((filter[i] & target[i]) == target[i]) > > is always true when the target is zero. It is only when the target is not > zero that a result is of interest. > > If you know that you only have 16 bits to compare you just isolate those: > > // Dynamically computed bits of the hash function > IntStream bits = ... > > boolean noMatch = bits.filter(bit -> { > long value = filter[bit / 64]; > // Check the bit is not set > return (value & (1L << (bit & 63))) == 0; > }).findAny().isPresent(); > > I've used the streams API here but you could just compute each hash bit in > a loop: > > for (int i = 0; i < 16; i++) { > // Dynamically computed bits of the hash function > int bit = computeHash(i); > long value = filter[bit / 64]; > // Check the bit is not set > if (value & (1L << (bit & 63))) == 0) { > return NO_MATCH; > } > } > return MATCH; > > There are more operations per loop with the shifts used to isolate the > correct bit but only 16 possible loops. > > — > You are receiving this because you were mentioned. > Reply to this email directly, view it on GitHub > <>, > or unsubscribe > <> > . > -- I like: Like Like - The likeliest place on the
http://mail-archives.us.apache.org/mod_mbox/commons-issues/201910.mbox/%3C157193034296.16437.12559795266228296594.gitbox@gitbox.apache.org%3E
CC-MAIN-2020-24
refinedweb
334
58.62
Learn All About Data Types In C++ With Examples. In this Complete C++ Training Tutorials, we will discuss data types in C++ in this tutorial. We have already seen identifiers that are used to identify various entities in C++ by name. Apart from the identifiers, we also know that the variable store’s information or data. In order to associate data with the variable, we also need to know what data will we associate exactly i.e. whether variables are storing only alphabets or numbers or both. In other words, we need to restrict the data or information that is to be stored in a variable. This is exactly where the data type comes into the picture. We can say that data types are used to tell the variable what type of data it should store. Based on the data type assigned to a variable, the operating system allocates memory and decides what type of data is to be stored in the variable. What You Will Learn: Types Of Data C++ supports two types of data to be used with its programs. - Primitive/Standard data types - User-defined data types. Given below is the pictorial representation of the data types in C++. Primitive or Standard Data Types Primitive data types are the built-in types, that C++ language provides. We can directly use them to declare entities like variables, constants, etc. Alternatively, we can also call them as pre-defined data types or standard data types. Following are the various primitive data types that C++ supports with their corresponding keywords: - Integer => int - Character => char - Floating Point =>float - Double Floating Point => double - Boolean => bool - Void or Valueless type => void - Wide Character => wchar_t User-defined Data Types In C++ we can also define our own data types like a class or a structure. These are called user-defined types. Various user-defined data types in C++ are listed below: - Typedef - Enumeration - Class or object - Structure Out of these types, the class data type is used exclusively with object-oriented programming in C++. Primitive Data Types The following table shows all the primitive data types supported by C++ along with its various characteristics. Data Type Modifiers Primitive data types that store different values use entities called data type modifiers to modify the length of the value that they can hold. Accordingly, the following types of data modifiers are present in C++: - Signed - Unsigned - Short - Long The range of the data that is represented by each modifier depends on the compiler that we are using. The program below produces the various sizes of different data types. #include<iostream> using namespace std; int main() { cout<<"Primitive datatypes sizes: "<<endl; cout << " short int: " << sizeof(short int) << " bytes" << endl; cout << " unsigned short int: " << sizeof(unsigned short int) << " bytes" << endl; cout << " int: " << sizeof(int) << " bytes" << endl; cout << " unsigned int: " << sizeof(unsigned int) << " bytes" << endl; cout << " long int: " << sizeof(long int) << " bytes" << endl; cout << " unsigned long int: " << sizeof(unsigned long int) << " bytes" << endl; cout << " long long int: " << sizeof(long long int) << " bytes" << endl; cout << " unsigned long long int: " << sizeof(unsigned long long int) << " bytes" << endl; cout << " char: " << sizeof(char) << " byte" << endl; cout << " signed char: " << sizeof(signed char) << " byte" << endl; cout << " unsigned char: " << sizeof(unsigned char) << " byte" << endl; cout << " float: " << sizeof(float) << " bytes" <<endl; cout << " double: " << sizeof(double) << " bytes" << endl; cout << " long double: " << sizeof(long double) << " bytes" << endl; cout << " wchar_t: " << sizeof(wchar_t) << " bytes" <<endl; return 0; } Output: Primitive datatypes sizes: short int: 2 bytes unsigned short int: 2 bytes int: 4 bytes unsigned int: 4 bytes long int: 8 bytes unsigned long int: 8 bytes long long int: 8 bytes unsigned long long int: 8 bytes char: 1 byte signed char: 1 byte unsigned char: 1 byte float: 4 bytes double: 8 bytes long double: 16 bytes wchar_t: 4 bytes Screenshot for this output is given below. As we see, using the size of the operator, we can get the maximum size of data that each data type supports. All these data types and their corresponding sizes can be tabularized as below. This is all about primitive data types in C++. User-defined Data Types These data types as the name itself suggest are defined by the user itself. As they are user-defined, they can be customized as per the requirements of the program. Typedef By using the typedef declaration, we create an alias or another name for the data type. Then we can use this alias to declare more variables. For Example, consider the following declaration in C++: typedef int age; Through this declaration, we have created an alias age for the int data type. Hence, if we want to declare anything similar, then we can use the alias instead of the standard data type as shown below: age num_of_years; Note that alias is just another name for the standard data type, it can be used in a similar way like the standard data types. Enumeration The enumeration in C++ is a user-defined data type which consists of a set of values with corresponding integral constants for each value. For Example, we can declare the days of the week as an enumerated data type as shown below: enum daysOfWeek {Sunday, Monday, Tuesday, Wednesday, Thursday, Friday, Saturday}; By default, integral constants for each of the enum value starts with zero. So ‘Sunday’ has value 0, ‘Monday’ has 1 and so on. However, we can also change the default values from the start of in-between as follows: enum daysOfWeek {Sunday, Monday, Tuesday=5, Wednesday, Thursday, Friday, Saturday}; Here, Sunday will have value 0, Monday will have value 1, and Tuesday will have value 5 that we have assigned. After Tuesday, remaining values will have 6, 7, and so on in continuation with the previous value (in this case 5). Let us make use of this enum that we declared earlier in the following program: #include<iostream> using namespace std; enum daysOfWeek {Sunday, Monday, Tuesday, Wednesday, Thursday, Friday, Saturday}; int main() { daysOfWeek today; today = Thursday; cout<<"This is day "<<today<<" of the week"; return 0; } Output: This is day 4 of the week Screenshot for the same is given below The above program is self-explanatory. We have defined the enum and then we create its type variable to output the day of the week. Class In C++, we can define yet another user-defined type named “Class”. Class is nothing but a collection of objects. Class acts as a blueprint for an object and using the class definition we can design various real-time problems. For Example, consider a class named “Student” which will be defined as follows: class student{ char* name; int age; public: void printDetails() { cout<<”Name: “<<name; cout<<”Age: “<<age; } }; Once we have defined this class, we can use the class name to declare variables of type class. These variables of type class are nothing but objects. So we declare an object of type student as follows: student s1; s1.printDetails(); As shown above, we can also access the members of this class which are public. We will see the classes and objects in detail when we cover object-oriented programming in C++. Structure A structure in C++ is similar to that in C>. In fact, the concept of structure in C++ is directly picked up from C language. As a class, the structure is also a collection of variables of different data types. But class has both variables and methods that operate on these variables or members as we call them. Structures, on the other hand, have only variables as its members. We can define a structure person as follows using the struct keyword: struct employee{ Char name[50]; Float salary; Int empId; }; Once the structure is defined we can declare a variable of type struct as follows: Employee emp; Then we can access the members of the structure using the structure variable and member access operator (dot Operator). Conclusion We will learn more about structure and class and the differences between them once we start with the object-oriented programming in C++. In our upcoming tutorial, we will explore C++ variables and its other aspects. => Check The In-Depth C++ Training Tutorials Here
https://www.softwaretestinghelp.com/data-types-in-cpp/
CC-MAIN-2021-17
refinedweb
1,360
56.49
Hey everyone 👋🏻, today we are going to make a discord bot 🤖 which will send gifs according to the user in just 30 lines of code! The way this bot will work is, if you write .gif happy then the bot will send a random happy gif. What are we going to use to build this mini-project: - JavaScript - NPM Packages: - Discord.js - DOTENV - node-fetch Okay so let's get started 🎉!! Steps : - We have to go to the discord developer portal and create a new application. - Then you have to create a new application ☝🏻. (the blue button on top-right corner). - Give a name to your application. - Then on the left hand side, click on bot👇🏻 . - After clicking on bot, now click on Add Boton the right hand side, and after this step you will have a screen like this 👇🏻. - Now the Token is something which you have to keep a secret and not reveal anywhere or to anyone. - If you reveal it by mistake, no worries just regenerate it, but make sure you don't or else someone can take over your bot. - Now we have to decide what permissions does our bot need, and after deciding this, just head to OAuth2 section on the right hand side of your screen. - You will have a screen when many check boxes, and you have to click on the checkbox which says bot👇🏻. - Then click on the permission you have to give to the bot. - After that click on the link and copy it, after that paste it into a new tab and authorize it to add it to a new server. Now we just have to code it! Before explaining the code, let me explain you the folder structure 👇🏻. - There is a folder called srcin which we have a main file called bot.jsin which we are going to code our bot. - Okay so you can see that there are two files and a folder, named as package-lock.json, package.jsonand node_modulesrespectively, they are basically of node packages and their information. - There is also a .envfile but we will discuss about it later in this blog. - Okay so we have to use 3 packages to make a discord bot, they are as follows: - discord.js ( npm i discord.js) - dotenv ( npm i dotenv) - node-fetch ( npm i node-fetch) - Now using this image as my reference, I am going to explain the code. As you can see ☝🏻, there are only 30 lines of code! How amazing it that? Your own discord bot 🤖 in just 30 lines of code! Okay so the first and the third line of code are the import statements which can also be written as : import discord from 'discord.js;' The second line of code is basically us initializing the client/user, which in this case will be our bot and the users themselves . and the fourth line is importing the env package and configuring it, so basically .env files stores all your secrets, like your discord bot's token or your API Key, these things will not be uploaded on GitHub using the git ignore file. Okay so in JavaScript there is this thing called addEventListner which helps us to react to certain events, like if a user clicks on something or double-tap on something a particular function should run. In the same way here in discord.js addEventListner is more or less replaced by .on function. All of the .on functions are called in regards to the client so we have to write client.on('event', callBackFunction). On line number 6 you can see that I have written a function which is This basically means that, whenever the user is ready and logged in the console should log <Name of the Bot> is up and ready! and name of the bot is fetched by this inbuilt property known as .user.tag , which is to be called in regards to the client . Now we have to make our bot login to the server. And for that we have another inbuilt method/function called So we can write : client.login(process.env.TOKEN) Now you might wonder what is this process.env.TOKEN, this is the way we call variables from our .env file. So let me show what is stored inside .env file. Here in this file, we have to put our bot token inside a pair of single or double quotes and our tenor API key (you can generate it from here) For example if you want to call the tenor api key inside your bot.js file, you just have to write process.env.TENOR. And you can make a try-catch block around the client.login() function, so if any error occurs, we can catch it and show it on the console. So as of now, we have our boiler plate code ready with us, which is 👇🏻: Let's code the main functionality of the bot now. Now all the code discussed below will be in the reference to 👇🏻 this image. Now let's understand the above code step-by-step: - Creating an add event listener to react when the user sends message: - Here the parameter msgwill contain the message which user has sent. - Let's add a prefix to our bot, so it will only react if we write .gif. - Just to be a little safe, I am going to write the main functionality inside a try-catchblock. msg.contenthelps us to fetch the content inside the msg. In leman's term, it is like .innerTextin JavaScript. - Here when the user will write .gifthe code inside the ifstatement will be executed. - Now let's get user's queries. - Now if a user writes .gif batmanthen this will be considered as a string and a problem arises here, which is how do we separate the bot command and the user's query. - We do that by an inbuilt function called .split(), which will help us to separate the whole string into two different values stored in an array, for example: if I write .gif batmanthen .split()will make an array : ['.gif', 'batman']. - Let's see it's code. - We are going to compare the first index of querywhich will be .gifto the string .gif. - Let's discuss about the API and Fetching it. - I am using node-fetch to fetch the API. - The base of the API is - And in order to take query from the user and give the key as your API Key we have to make this URL dynamic. - We can do that by using template literals.{query[1]}&key=${process.env.TENOR} - And now the code looks like this. - And the query has to be the second value (First Index) in the array. - Let's fetch the API now. - We just have to put asyncin front of the callback function as you can see in the above image on line number 10. asyncwill make your function, asynchronous and then we will use awaitto wait for the response from the API. - Now here we will have a problem, which is we will only receive one GIF every time. - Now the API will return 20 GIFs and we have to pick a random one (on line 17). - So to do this, we will make a random variable which will choose one GIF. - Now the final code looks like 👇🏻 - Let's run this. - Just open the terminal, change the directory to the home directory and inside srcfolder, then write node bot.js. Thank you for reading the whole blog 🎉!! If you liked it do share it with your developer friends and feel free to comment and give suggestions. Discussion (2) I guess the screenshots make the blog more appealing, and people tend to read more stuff when they see images in it! It looks cool 😆 Thanks for sharing! Just curious: why you insert code as screen shots (pictures)? DEV’s markdown supports code highlight and it looks nice; yet, one can easily copy it to play with.
https://practicaldev-herokuapp-com.global.ssl.fastly.net/basecampxd/make-a-discord-bot-in-just-30-lines-of-code-jbj
CC-MAIN-2021-39
refinedweb
1,337
82.44
What is CSV? CSV stands for comma separated values. It is a spreadsheet format where each column is separated by a comma and each row by a newline. Here is an example CSV file: Name,Age,Country John,48,United States Daniel,67,Germany Qi,25,China You can download this file and view it in Excel, Google Docs, or even directly in a text editor. This same data saved in Excel format uses 4591 bytes and is supported by less applications. A CSV file can be imported into a database or parsed with a programming language. This flexibility makes CSV the most common output format requested by clients for their scraped data. Here is an example showing how to parse a CSV file with Python: import csv filename = 'example.csv' reader = csv.reader(open(filename)) for row in reader: # display the value at the last column in this row print row[-1]
https://webscraping.com/blog/What-is-CSV/
CC-MAIN-2019-18
refinedweb
153
65.73
On the “Learn to Code” Movement and Its Lies A hobbyist’s take on the fallacies of the coding hysteria The fixation on learning to code has become an international phenomenon, as of recent. All the large news publications are posting articles espousing the merits of computer programming literacy and its pivotal role in the future, where those who write code will be the ones writing the future. The latter is not entirely wrong, but the movement that seeks to supposedly educate children on matters of programming and prepare them for the digital future, is very flawed. I will go in detail on this. Non-profit organizations like Code.org, backed by large corporations and small firms alike, are rallying for “computer science” to become a core subject, alongside the archetypal staples of the liberal arts. The Chicago school district recently announced that they would be adding “computer science” as part of the core curriculum. U.S. representative Tony Cárdenas proposed a new piece of legislation, entitled the “America Can Code Act” (or, quite laughably the 416d65726963612043616e20436f646520 Act of 2013, complete with the awkward trailing whitespace at the end and lack of a specified character encoding, though ostensibly ASCII). The examples are numerous, and the hysteria all happened at quite an alarming rate. Going from complete apathy, the public was suddenly swooning from the need for this generation to learn computer programming. With the imminence of a completely computerized society, programming languages would become more important than natural languages, and we can’t let the compulsory schooling system to leave these kids unwashed and illiterate! Vendors even started creating idealized electronic toys to cash in on the sentiments. No further exposition is required. Let’s tackle what’s actually going on. On the face of it, the idea couldn’t be more well-intentioned, seemingly. Why would anyone decry teaching children computer programming? A scrooge, a technophobe and perhaps a hopelessly cynical COBOL programmer out of the times who’s off his rocker and nearing the grave, either way. In reality I am a member of this generation and my opposition to the movement is not out of ignorance of technology, elitism or obscurantism. I merely want to voice my concerns and point out that the initiative is not as altruistic as it may appear. It is plagued with misconceptions, inanity and quite contrary to altruism: bitter pragmatism under a rainbow cloak of education and youth preparedness. To begin with, right off the bat, the movement rarely mentions “programming”, but code. Code. “Learn to code.” “Anybody can learn to code.” “Coding is ${X}, coding is ${Y).” The reality is that coding is only a small part in the grand scheme of computer science and computation, with the movement claiming to teach the former. This is deceiving at best and a devaluation at worst. There is already a widespread misconception as to the nature of computer science, with most people being unaware that it is a vast field which overlaps with many aspects of science and engineering, including in large part discrete mathematics. Type theory, automata theory, data structures, complexity theory, algorithms… that is computer science. Writing lines of Java is not. The movement and the educators jumping on aboard will only serve to perpetuate this. But, this is just semantics, you might say. Why get so uptight over a triviality like that? Well, in comparison to my other arguments which will follow later, yes. It is a triviality. But really, is it? You’re basically spreading an outright fallacy down to the institutional level. The same institution that is supposed to educate. There is nothing wrong with children writing code, but calling it “computer science” is dishonest and gives them false notions. In any event, this can be brushed off to an extent. The second concern involves the same point: the campaign’s maniacal infatuation with “code”. All of their advertisements and materials focus on describing and worshiping this mystic and powerful essence known as “code”, as if it contains some secret of the universe. They sugar coat all the time and adulate it as something so grandiose. Long ago, Alan Turing discovered that any algorithm can be expressed computationally with only five basic instructions. Any language that supports these instructions is Turing complete, and thus it can be used to perform every single computation that is theoretically possible. The theory of computation is indeed a fascinating one, although Code.org’s enchantment fits the profile of a juvenile, more than a thinker. Yet, Turing machines are not the only theoretical device to express algorithmic principles. Herein lies the fallacy: the media campaign presented by this movement assumes that all languages are procedural and ALGOL-like, as is evident in the previously linked video. This actually leads me to a central question: Just what does “learning to code” mean, anyway? Nothing. It’s an ambiguous and nonsensical statement. It makes about as much sense as “learning a [natural] language”. What language? Human languages are different. So are programming languages. Japanese is not English. Python is not Forth. The inherent bias to programming languages in the typical imperative idioms of ALGOL and FORTRAN is misleading, and also again dishonest with the supposed focus on teaching “computer science”. But more importantly, “learning to code” means nil. This is Java (taken from here): public class PrimeSieve { public static void main(String[] args) { int N = Integer.parseInt(args[0]); // initially assume all integers are prime boolean[] isPrime = new boolean[N + 1]; for (int i = 2; i <= N; i++) { isPrime[i] = true; } // mark non-primes <= N using Sieve of Eratosthenes for (int i = 2; i*i <= N; i++) { // if i is prime, then mark multiples of i as nonprime // suffices to consider mutiples i, i+1, ..., N/i if (isPrime[i]) { for (int j = i; i*j <= N; j++) { isPrime[i*j] = false; } } } // count primes int primes = 0; for (int i = 2; i <= N; i++) { if (isPrime[i]) primes++; } System.out.println("The number of primes <= " + N + " is " + primes); } } This is Erlang (taken from here): -module(primes). -export([sieve/1]). -include_lib("eunit/include/eunit.hrl"). sieve([]) -> []; sieve([H|T]) -> List = lists:filter(fun(N) -> N rem H /= 0 end, T), [H|sieve(List)]; sieve(N) -> sieve(lists:seq(2,N)). This is APL (taken from here): (~R∊R∘.×R)/R←1↓ιR These programs all implement the Sieve of Eratosthenes for finding prime numbers up to a given limit. Java is a famous object-oriented programming language (based on Kristen Nygaard’s definition of OOP). Erlang is a primarily functional programming language with a high focus on fault tolerance and concurrency. APL is an array-based/vector programming language. The point is to illustrate the many different paradigms, nuances and constructs of different programming languages and that they’re not all alike. Learning to code entails far much more than typing instructions. But speaking of typing instructions, it is apparent that the “learn to code” initiative is largely focused not on understanding and education, but like any other subject taught in a run-of-the-mill public school, on rote memorization. They rarely mention “programming”, as I already noted. It’s all about the code. About the procedures and symbols you blindly learn and type on a screen. The movement envisions to produce 9-to-5 code monkeys who can write loops and conditionals, but who do not have any passion or true understanding of their craft. Even within programming, which is only a subset of computer science, coding is not the only activity. Methodology, work environment, language theory, design patterns and logic play a large part as well. Giving kids a brief tour of a procedural language does not give them much insight into anything, with real-world software being much more complex than the lines of code. There’s build systems, version control, frameworks, libraries and many other components behind just the software development part. In this regard, after building a workforce of automatons, the movement’s backers will have cheap and easily disposable labor at their hands. You could always retract that companies need skilled laborers to raise the bar and innovate. This is true, yet as computing grows more ubiquitous, there will also be a large need for basic workers who do what basically amounts to oil changing in the software world. The ones who innovative will largely be guided by their passion and desire to autodidact, rather than school. These people would have gotten into computer science in the first place, regardless of whether formal education explicitly teaches it or not. Let us divulge from the technical details for a moment. When Code.org first began, it maintained an extensive page of testimonials about the importance of learning to code, in an effort to motivate and persuade. They are still available at this location. What strikes out is how laughable the entire mosaic really is. You have politicians, athletes, musicians and business moguls all having blurbs about coding and computer science under their names. But where are the actual computer scientists and programmers? Out of about 100 quotes, I could only find 3 by legitimate computer scientists (Peter Denning, Mehran Sahami and Ed Lazowska). The amount of programmers is harder to gauge, as there are a lot of entrepreneurs who might have (or had) enough foundational knowledge to hack prototypes, but not necessarily to be serious programmers. Nonetheless, they are still a painfully small minority. The main counterargument is that most average folk would want to see regular people do the pitching and that their message describing the beauty of computer science would reach a wider audience. To which I respond that I don’t know how much politicians and big entrepreneurs can be considered “average folk”, but that also who else could describe the beauty of computation and computer science (which it really is beautiful, I don’t deny that) better than… you know… actual computer scientists? In fact, having celebrities actually seems to help reinforce the opposite sentiment. Computer scientists, programmers and other technologists are already stereotyped to a degree as being awkward and socially inept. By isolating them even in a campaign where their word is most relevant, you’re hardly helping with that. So, let’s recap the optimistic dream of everyone learning to code (whatever that might entail). Then realize that, as you would expect, most of the initiative’s partners are large proprietary software vendors. But what’s wrong with this? After all, it’s a given and it should be good to see that the big businesses care. Not until it registers that in fact, proprietary software is completely antithetical to education. By definition. This is something RMS and others have been talking about for ages. Coding is great and all, but what use is there when the only platform available to you is a completely locked down, DRM-packed device? None. Whatever software you build is essentially under the mercy and control of the vendor. Yet this is exactly how it works in the age of smartphones and app stores. The issue of software freedom, is likely more important than coding. Many people will accuse you of being a crackpot for this and are rather hostile towards views promoting free software, but they are more relevant now than ever in the age of mass surveillance, vendor lock-in and cloud computing. Our entire lives are heading for digitization, which means that whoever controls our computing controls our lives. Some will object that free software is irrelevant if you’re not a programmer. This is untrue. Free software guarantees essential freedoms that ensure openness, redistribution and positive benefit to the community. Even if the user cannot program, they can make use of and support modified copies made by developers. By eschewing proprietary software, the user is taking an ethical and social stand. One first needs a good environment to code in. Relying on closed standards signifies dependence, gives power to the vendor and is morally wrong. RMS, in his speeches, has frequently raised a scenario along these lines: Imagine a classroom where children are being taught fundamentals of computer science and imperative programming. The teacher is demonstrating a feature in a text editor program and is trying to explain it in the context of software engineering. A student raises their hand and asks “How exactly does this feature work? What does it look like, internally?”, to which the teacher has no choice to respond, but “I’m sorry, we’re not authorized to know this. The developers use a proprietary license and do not allow the source code to be examined.” Proprietary software offers no educational value, besides reverse engineering exercises. Yet since the movement is backed by the top vendors and companies, it is probably a given that they will force their products in classes. Much like how Microsoft forced their then freshly new Java clone called C# on the University of Waterloo in 2002. It’s probably a safe bet to assume these kids won’t be taught about the tyrannical power proprietary software holds over their lives, about GNU, the *BSD projects and other initiatives which aim to fight this. Indeed, their tutorial resources seem to hint at this. Quite humorously, they also recommend W3Schools for learning HTML. The movement also has a rather pronounced social justice slant. You could write it off as just them trying to empower female and minority students, but it appears they are fond of affirmative action. Disparity between the sexes is a well known issue in computing, though the movement’s take on it will probably cause more problems in the long run than it solves. Finally, worth mentioning is that it is dubious how effective coding as a core subject will be on students in public schools. As with lots of other subjects, schoolteachers are not necessarily of high calibre, and often are most certainly not academics. They serve basic roles of merely transcribing lessons to their student audience, following a strict routine, and then preparing them for standardized testing. The compulsory schooling system is not known for providing engagement, but for being abominably dull and bureaucratic. One could retort that it all depends on the individual teacher. To a certain degree, yes. However in large part, they must follow whatever routine is prescribed to them while minimizing deviation. Aptitude is another story. In the same way a math teacher is not necessarily a mathematician, a “computer science” teacher is not necessarily a computer scientist (or even a programmer, for that matter). Following a curriculum is hopeless, and the skills inherited from rote memorization of state-approved material will likely be mediocre. But how could something as exciting as programming ever be dull, boring and not engaging? Surely, programming is the ultimate art. You see your code come to life right before your eyes. Except, one should not underestimate how adept compulsory schooling is sucking the life and joy out of learning. When the entire system is focused on bureaucracy and grading, it’s hard to believe that they will handle coding as idealistically as portrayed in the “learn to code” movement’s media campaign. But, even more damagingly: what leads one to believe that children will be properly taught and be able to comprehend coding, if they lack even basic reading and writing skills? Unsurprisingly, Code.org gives large attention to the job market. But just what worth does a code monkey who cannot read natural language on a proficient level have? It appears as though U.S. politicians and educators have been overtaken by hysteria. Failing and abysmal results in core subjects are very prevalent throughout the whole country, and it seems they’re trying to patch over their mistakes by focusing on an ever growing market: computer programmers and software developers. This is a final act of desperation. To deflect from their failures, they try to prematurely shove a new, very extensive (and admittedly popular) subject with the hopes of providing an illusory perception of advancement in the public education system. It should come as no surprise, then, that none other than the city of Chicago, are among the first to introduce “computer science” into their curriculum. The same Chicago, the school district of which, is well known as having quite less than stellar performance. Indeed, it looks as if they’re trying to take a sprint before they can even walk. Knowing the capacity of public schooling’s incompetence, one might not be off in thinking that coding will become the next reviled subject, up there with mathematics. Even if it isn’t reviled, it might still be considered as something elite, difficult to approach or even undesirable. Students will grow to despise or be apathetic towards coding. First impressions play a large role and frankly, if my first introduction to coding was anything remotely similar to Visual Studio and C#, I’d be driven away, too. In the end, programming is something that is best learned through autodidacticism, or self-education. Like all great crafts, if you try to force it upon people, you only end up diluting it and its culture along with that. The software ecosystem is very well developed and fact of the matter is, anything an average user needs a solution for, there almost certainly exists a solution. In case one needs something more niche, then they will naturally stumble upon it and pick up programming, regardless of outside intervention. Programming truly does involve self-discovery. At most, the common user might find benefit in simple things like task automation, but once again these things are trivial to pick up, especially with the abundance of simple and abstracted frameworks, on top of already high-level and dynamic programming languages. Churning out arbitrary lines of code does not offer much insight into the software development process, as I already mentioned previously. Serious programming often turns out to be a general exercise in computing knowledge. In fact, as Jeff Atwood pointed out, it is a fallacy to assume that coding is the be-all-end-all, as the movement’s propaganda seems to insist. Code is a means to an end, but there is much more than that in getting things done, and exalting to such a high degree is deceiving. This is not to say you can’t learn something about computer science and programming from formal education. Of course you can. But once again, what is being offered here is mere coding. Given the way the entire movement has presented themselves and the general naivete of the school districts and politicians adopting coding in the core curriculum, I am very skeptical that anything noteworthy will accomplished. Face it. Programmers throughout the ages have owed most of their skills on being self-taught. This is unlikely to change. It’s simply how programming works. In general, laymen might find much more practical benefit from system administration (which also involves shell scripting, primarily used as a form of task automation) than plain coding. Learning to code really isn’t an emergency, I’m afraid. There are bigger priorities, and if coding for you is one of them, you’ll know and you’ll take action. Don’t dilute this fine craft. If you are going to support compulsory programming education, then at least be aware of the caveats and don’t present things as a crisis in need of a magical rainbow solution. How Chomsky was right.
https://medium.com/be-seeing-you-and-so-on/1982aab8aa8e
CC-MAIN-2014-15
refinedweb
3,251
54.22
On Tue, Feb 4, 2014 at 12:11 AM, Alvaro Herrera <alvhe...@2ndquadrant.com> wrote: > I have run into some issues, though: > > 1. certain types, particularly timestamp/timestamptz but really this > could happen for any type, have unusual typmod output behavior. For > those one cannot just use the schema-qualified catalog names and then > append the typmod at the end; because what you end up is something like > pg_catalog.timestamptz(4) with time zone > because, for whatever reason, the "with time zone" is part of typmod > output. But this doesn't work at all for input. I'm not sure how to > solve this. Advertising How about doing whatever pg_dump does? > 2. I have been having object definitions be emitted complete; in > particular, sequences have OWNED BY clauses when they have an owning > column. But this doesn't work with a SERIAL column, because we get > output like this: > > alvherre=# CREATE TABLE public.hijo (b serial); > NOTICE: expanded: CREATE SEQUENCE public.hijo_b_seq INCREMENT BY 1 MINVALUE > 1 MAXVALUE 9223372036854775807 START WITH 1 CACHE 1 NO CYCLE OWNED BY > public.hijo.b > NOTICE: expanded: CREATE TABLE public.hijo (b pg_catalog.int4 DEFAULT > nextval('hijo_b_seq'::regclass) NOT NULL ) > > which is all nice, except that the sequence is using the column name as > owner before the column has been created in the first place. Both these > command will, of course, fail, because both depend on the other to have > been executed first. The tie is easy to break in this case: just don't > emit the OWNED BY clause .. but it makes me a bit nervous to be > hardcoding the decision of parts that might depend on others. OTOH > pg_dump already knows how to split objects in constituent parts as > necessary; maybe it's not so bad. Well, the sequence can't depend on a table column that doesn't exist yet, so if it's in effect doing what you've shown there, it's "cheating" by virtue of knowing that nobody can observe the intermediate state. Strictly speaking, there's nothing "wrong" with emitting those commands just as you have them there; they won't run, but if what you want to do is log what's happened rather than replay it, that's OK. Producing output that is actually executable is a strictly harder problem than producing output that accurately describes what happened. As you say, pg_dump already splits things and getting executable output out of this facility will require the same kinds of tricks here. This gets back to my worry about maintaining two or three copies of the code that solve many of the same problems in quite different ways... > 3. It is possible to coerce ruleutils.c to emit always-qualified names > by using PushOverrideSearchPath() facility; but actually this doesn't > always work, because some places in namespace.c believe that > PG_CATALOG_NAMESPACE is always visible and so certain objects are not > qualified. In particular, text columns using default collation will be > emitted as having collation "default" and not pg_catalog.default as I > would have initially expected. Right now it doesn't seem like this is a > problem, but it is unusual. We have a quote_all_identifiers flag. We could have a schema_qualify_all_identifiers flag, too. Then again, why is the behavior of schema-qualifying absolutely everything even desirable? -- Robert Haas EnterpriseDB: The Enterprise PostgreSQL Company -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription:
https://www.mail-archive.com/pgsql-hackers@postgresql.org/msg232392.html
CC-MAIN-2018-26
refinedweb
572
56.05
Binary Analysis with Jupyter and Radare2 Last Updated: 2019-03-15 10:57:26 UTC by Remco Verhoef (Version: this. If you combine Radare2 together with Jupyter, you'll have an interactive way of working with your binaries. You'll be able to execute individual steps, change them and re-execute, helping you with your analyses flow. What I really like about working with radare2 from within a notebook, is that all steps are being documented, registered and could be changed and re-run easily. Combining Radare2 possibilities with all that come with Jupyter is powerful beyond imagination. There's a docker image that can be used, surprisingly nl5887/radare2-notebook which contains jupyter-notebook with radare2 build on top. Cutter, the gui frontend of radare2 has also Jupyter support built-in, which can be used also. To start the image run the following, which wil start jupyter while exposing ports 8888 (jupyter) and 6006 (tensorboard). docker run -p 8888:8888 -p 6006:6006 -v $(pwd)/notebooks/:/home/jovyan/ nl5887/radare2-notebook The output will show the url that needs to be used to connect to the notebook. This url contains a token that is being used to authenticate to Jupyter. Let's start with a simple notebook, that will extract (potentially) interesting IOCs out of a linux malware binary. Notebooks consists of different cell types, which could be markdown or code. We'll use a Python kernel with Jupyter, though many other languages are supported. Every code block is created as a separate cell. try: # if using jupyter within cutter, use the following. This will use the current active binary. import cutter # we'll assign cutter to variable r2 to be consistent with r2pipe r2 = cutter except ModuleNotFoundError as exc: # using r2pipe to open a binary import r2pipe r2 = r2pipe.open("/home/jovyan/radare2/malware/vv") Now we've created a r2pipe session with binary, we'll start basic analyses. We can use Jupyter magic commands, like %time to get information about timings etc. %time r2.cmd('aaa') The binary has been analysed, now we can output information about the binary. print(r2.cmd('i')) If you append the character j to the command, radare2 will output as json. The code below will parse the json information, pretty print it and extract the arch out of the structure. from pprint import pprint r = json.loads(r2.cmd('ij')) pprint(r) print(r.get('bin').get('arch')) This is all we need to know to build a simple IOC extractor, this cell will walk through all found string references and check it against some matchers. If it identifies ip addresses, urls, ansi output or email addresses, they'll be outputted. import r2pipe import json import struct import re import base64 from pprint import pprint, pformat IP_MATCHER = re.compile("(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}(?:[:]\d+)?)") URL_MATCHER = re.compile('(?:(?:https?|ftp|file)://|www\.|ftp\.)[-A-Z0-9+&@#/%=~_|$?!:,.]*[A-Z0-9+&@#/%=~_|$]', re.IGNORECASE) EMAIL_MATCHER = re.compile('([A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,4})', re.IGNORECASE) def regex_matcher(matcher): return lambda st: matcher.findall(st) def contains_matcher(s): return lambda st: [st] if s in st else [] matchers = [regex_matcher(IP_MATCHER), regex_matcher(URL_MATCHER), regex_matcher(EMAIL_MATCHER), contains_matcher('\\e['), contains_matcher('HTTP')] def print_s(s, r): print('0x{:08x} 0x{:08x} {:10} {:4} {:10} {}'.format(s.get('paddr'), s.get('vaddr'), s.get('type'), s.get('length'), s.get('section'), r)) strings = json.loads(r2.cmd('izj')) for s in strings: try: st = base64.b64decode(s.get('string')).decode(s.get('type')) for matcher in matchers: matches = matcher(st) for match in matches: print_s (s, match) except ValueError as e: # print(e) continue except LookupError as e: # print(e) continue Giving this output: 0x0010c3be 0x0050c3be ascii 15 .rodata \e[01;32mresumed 0x0010c3f0 0x0050c3f0 ascii 49 .rodata \e[01;33mpaused\e[0m, press \e[01;35mr\e[0m to resume 0x0010c4e0 0x0050c4e0 ascii 71 .rodata \e[1;32m * \e[0m\e[1;37mPOOL #%-7zu\e[0m\e[1;%dm%s\e[0m variant \e[1;37m%s\e[0m 0x0010c528 0x0050c528 ascii 60 .rodata \e[1;32m * \e[0m\e[1;37m%-13s\e[0m\e[1;36m%s/%s\e[0m\e[1;37m %s\e[0m 0x0010c568 0x0050c568 ascii 41 .rodata \e[1;32m * \e[0m\e[1;37m%-13slibuv/%s %s\e[0m 0x0010f8b0 0x0050f8b0 ascii 5 .rodata \e[0m\n 0x0010f8b6 0x0050f8b6 ascii 7 .rodata \e[0;31m 0x0010f8be 0x0050f8be ascii 7 .rodata \e[0;33m 0x0010f8c6 0x0050f8c6 ascii 7 .rodata \e[1;37m 0x0010f8ce 0x0050f8ce ascii 5 .rodata \e[90m 0x0011031d 0x0051031d ascii 7 .rodata \e[1;30m 0x00110388 0x00510388 ascii 61 .rodata \e[1;37muse pool \e[0m\e[1;36m%s:%d \e[0m\e[1;32m%s\e[0m \e[1;30m%s 0x001103c8 0x005103c8 ascii 81 .rodata \e[01;31mrejected\e[0m (%ld/%ld) diff \e[01;37m%u\e[0m \e[31m"%s"\e[0m \e[01;30m(%lu ms) 0x00110450 0x00510450 ascii 67 .rodata \e[01;32maccepted\e[0m (%ld/%ld) diff \e[01;37m%u\e[0m \e[01;30m(%lu ms) 0x001104c0 0x005104c0 ascii 78 .rodata \e[1;35mnew job\e[0m from \e[1;37m%s:%d\e[0m diff \e[1;37m%d\e[0m algo \e[1;37m%s\e[0m 0x001106c4 0x005106c4 ascii 8 .rodata \e[1;31m- 0x001106cd 0x005106cd ascii 7 .rodata \e[1;31m 0x0011076e 0x0051076e ascii 15 .rodata \e[1;31mnone\e[0m 0x0011077e 0x0051077e ascii 16 .rodata \e[1;32mintel\e[0m 0x0011078f 0x0051078f ascii 16 .rodata \e[1;32mryzen\e[0m 0x001107a0 0x005107a0 ascii 93 .rodata \e[1;32m * \e[0m\e[1;37m%-13s\e[0m\e[1;36m%d\e[0m\e[1;37m, %s, av=%d, %sdonate=%d%%\e[0m\e[1;37m%s\e[0m 0x00110828 0x00510828 ascii 73 .rodata \e[1;32m * \e[0m\e[1;37m%-13s\e[0m\e[1;36m%d\e[0m\e[1;37m, %s, %sdonate=%d%%\e[0m 0x00110878 0x00510878 ascii 37 .rodata \e[1;32m * \e[0m\e[1;37m%-13sauto:%s\e[0m 0x001108a0 0x005108a0 ascii 32 .rodata \e[1;32m * \e[0m\e[1;37m%-13s%s\e[0m 0x001108c8 0x005108c8 ascii 49 .rodata \e[1;32m * \e[0m\e[1;37m%-13s%s (%d)\e[0m %sx64 %sAES 0x00110900 0x00510900 ascii 45 .rodata \e[1;32m * \e[0m\e[1;37m%-13s%.1f MB/%.1f MB\e[0m 0x00110930 0x00510930 ascii 127 .rodata \e[1;32m * \e[0m\e[1;37mCOMMANDS \e[0m\e[1;35mh\e[0m\e[1;37mashrate, \e[0m\e[1;35mp\e[0m\e[1;37mause, \e[0m\e[1;35mr\e[0m\e[1;37mesume\e[0m 0x001124d0 0x005124d0 ascii 96 .rodata \e[1;37mspeed\e[0m 10s/60s/15m \e[1;36m%s\e[0m\e[0;36m %s %s \e[0m\e[1;36mH/s\e[0m max \e[1;36m%s H/s\e[0m 0x001131c8 0x005131c8 ascii 7 .rodata \e[1;33m 0x00113230 0x00513230 ascii 110 .rodata \e[1;32mREADY (CPU)\e[0m threads \e[1;36m%zu(%zu)\e[0m huge pages %s%zu/%zu %1.0f%%\e[0m memory \e[1;36m%zu.0 MB\e[0m This is just a basic example of what you can do with radare2 together with Jupyter. You can find the complete notebook here, Github supports notebooks also, giving a nice view of it. Please share your ideas, comments and/or insights, with me via social media, @remco_verhoef or email, remco.verhoef at dutchsec dot com. Remco Verhoef (@remco_verhoef) ISC Handler - Founder of DutchSec PGP Key
https://isc.sans.edu/diary.html?date=2019-03-15
CC-MAIN-2019-13
refinedweb
1,275
68.16
Dear all, <?xml:namespace prefix = o Please find attached a sample excel file containing a long data table (worksheet 2). The worksheet has a defined header. The first 7 rows should be printed out on every page needed to show the table content. There is also a footer that provides additional information (page number, department, date…). If I save the workbook as pdf, there are two issues. - The content of the table interleaves the footer area for all pages > 1 - The bookmark “detail” is linked to the last generated page instead to the first page, but in the code it is set to Cell[0,0] Please find also the resulting pdf document as well as the sample code attached. Thanks in advance Erik
https://forum.aspose.com/t/page-break-converting-long-data-tables-to-pdf/67542
CC-MAIN-2022-33
refinedweb
124
70.84
Step 2: Assemble the pan and tilt system I chose to put another DC motor and a servo on it as a pan and tilt system that could be used to aim whatever you wanted. The servo is controlled by the Arduino and the panning motor is controlled by a DPDT switch that I bought at radio shack for around two dollars. To control the servo I wrote some code in the Arduino software environment that reads the voltage drop off of a potentiometer and converts that to the angle that the servo should be moved to. To implement this on the Arduino you hook the servo data wire to one of the digital output pins on the Arduino and the plus voltage wire to 5V and the ground wire to ground. For the potentiometer you need to connect the outer two leads to +5V and the other to ground. The middle lead from the potentiometer should then be connected to an analog input. The potentiometer then acts as a voltage divider having possible values of 0V to +5. When the Arduino reads the analog input it reads it from 0 to 1023. To get an angle to run the servo at I divided the value that the Arduino was reading by 5.68 to get a scale of roughly 0-180. Heres the code that I used to control the tilt servo from a potentiometer: #include <Servo.h> } If you need help working with the Arduino like I did then I highly suggest going to Its a fantastic open source website that is really helpful. So after testing the control of the servo and the switch I needed a place to put them. I ended up using a piece of scrap wood cut to about the same length as Ard-e and screwing it into the back board with a piece of aluminum bent at a 90 degree angle. I then installed the DPDT switch and the potentiometer into the controller. It was a tight squeeze and I had to drill another hole in the top of it to run wires out of but overall it worked out pretty nicely. I also ended up soldering wires onto the existing controller circuitry to power the worm gear box. I really probably should have used another servo for the panning but the hobby store I went to only had one of the ten dollars ones and the motor can turn 360 degrees unlike the servo. The motor is a little too slow though. Now on to testing.
http://www.instructables.com/id/Ard-e-The-robot-with-an-Arduino-as-a-brain/step2/Assemble-the-pan-and-tilt-system/
CC-MAIN-2015-06
refinedweb
426
75.44
Python NumPy Searching is a method in python and numpy searching is a library of Python programming language that adds support for large, multi-dimensional arrays and matrices that add support for large, multi-dimensional arrays and matrices. Along with it, it adds a large collection of high-level mathematical functions to operate on the arrays In this article, we will learn about the techniques we use for Python NumPy searching. A NumPy array stores similar types of elements in a continuous structure. We have seen so many times that there is a need to look at the maximum and minimum elements of the arrays at a dynamic run time. NumPy provides us with a set of functions that enables us to search for the specific elements having certain conditions applied to them. How to Search NumPy Arrays - argmax() function: With this function, it becomes easy to fetch and display the index of the maximum element present in the array structure. By this, the index of the largest elements is a result value from the argmax() function. - NumPy nanargmax() function: With the nanargmax() function, one can easily deal with the NAN or NULL values present in the array. It does not get treated differently. There will be no effect on the functioning of the search values of the NAN values. The syntax will be: numpy.nanargmax() In the example given, the array elements contain a NULL value passed using the numpy.NaN function. Now we use nanargmax() function to search NumPy arrays and find the maximum value from the array elements without letting the NAN elements affect the search. import numpy as np x = np.array([[40, 10, 20,np.nan,-1,0,10],[1,2,3,4,np.nan,0,-1]]) y = np.nanargmax(x) print(x) print("Max element's index:", y) The output will be: [[40. 10. 20. nan -1. 0. 10.] [ 1. 2. 3. 4. nan 0. -1.]] Max element's index: 0 - NumPy argmin() function : With the argmin() function, we can easily search NumPy arrays and fetch the index of the smallest elements that are present in the array at a border scale. It searches for the smallest value present in the array structure and returns the index of the same value. Therefore, with the index, it becomes easy to get the smallest element present in the array. The syntax will be: numpy.argmin() function import numpy as np x = np.array([[40, 10, 20,11,-1,0,10],[1,2,3,4,5,0,-1]]) y = np.argmin(x) print(x) print("Min element's index:", y) The output will be: [[40 10 20 11 -1 0 10] [ 1 2 3 4 5 0 -1]] Min element's index: 4 > In the example given below, there are two indexes that cover the lowest element that is [-1]. The argmin() function returns the index of the first occurrence of the smallest elements from the array values. - NumPy where() function: This function easily searches NumPy arrays for the index values of any element. It matches the condition passed as the parameter of the function. - NumPy nanargmin() function: This function helps you to search NumPy arrays. It gets easy to find the index of the smallest value present in the array elements. The user doesnt have to worry about the NAN values present in them. The NULL values have a zero effect on the search of the elements.
https://www.developerhelps.com/python-numpy-searching/
CC-MAIN-2022-05
refinedweb
571
62.78
I. - Python 2.7 : Download here and install to the default location. (C:\Python27) - Install PIP - Download get-pip.py - Open CMD/Python IDLE and run the following command python get-pip.py - Add pip to your environment variables - Right click on start icon, click on System, then click on Advanced System Settings - Click on Environment Variables - Under System Variables, select path and click edit - Add a new path C:\Python27\Scripts - Open CMD and type pip install numpy. This will install the latest numpy on your pc. - Open Python IDLE, and run the following to check if numpy is working properly. import numpy print numpy.__version__ - Download OpenCV from here - Once downloaded, extract the files and open the folder, goto build/python/2.7/x86 and copy cv2.pyd to C:\Python27\Lib\site-packages - To check if OpenCV is working properly do the same as in step 3 import cv2 print cv2.__version__
http://prtk.in/blog/how-to-install-opencv-python-on-windows/
CC-MAIN-2020-16
refinedweb
155
74.69
Find Minimum cost to connect all cities in C++ In this article, we are going to learn how to find the minimum cost to connect all cities in C++ programming. The problem is that we are given a number of cities there is a specific cost of connecting one city to others. So we need to connect the cities in a manner so that we can achieve the minimum cost. To solve the given problem, we can consider the city as nodes of a graph and the connecting roads as edges of a graph. We need to find the calculate the minimum cost of connecting the cities so this would be a weighted graph, and we need to find the minimum spanning tree of the graph. A minimum spanning tree is a subset of a graph that connects all the vertices without forming a cycle and with minimum cost. As shown in the diagram C++ program to find the minimum spanning tree Below is our code for the task: #include<bits/stdc++.h> using namespace std; // this function finds the minimum // index in the graph int findminIndex(int*weight,bool*visited,int n){ int minIndex = -1; for(int i = 0 ; i < n; i++){ if(!visited[i] && (minIndex == -1 || weight[i] < weight[minIndex])){ minIndex = i; } } return minIndex; } // this function calculates and returns // cost of minimum spanning tree void prims(int**edges , int n){ int*weight = new int[n]; bool*visited = new bool[n]; int*parent = new int[n]; for(int i = 0 ; i < n ; i++){ weight[i] = INT_MAX; visited[i] = false; } parent[0] = -1; weight[0] = 0; for(int i = 0 ; i<n ;i++){ int minIndex = findminIndex(weight,visited,n); visited[minIndex] = true; for(int j = 0 ; j < n ; j++){ if(edges[minIndex][j] != 0 && !visited[j]){ if(edges[minIndex][j] < weight[j]){ weight[j] = edges[minIndex][j]; parent[j] = minIndex; } } } } int ans = 0; for(int i = 1 ; i < n ; i++){ if(parent[i] < i){ ans += weight[i]; }else{ ans += weight[i]; } } cout<<ans; } int main(){ // taking number of vertices and // number of edges as input int n; int e; cin>>n>>e; // making a 2 d array and assigning // '0' value to all the indexes int**edges = new int*[n]; for(int i = 0 ; i < n ; i++){ edges[i] = new int[n]; for(int j = 0 ; j < n ; j++){ edges[i][j] = 0; } } // taking input for graph i.e //which 2 nodes are connected for(int i = 0 ; i < e ; i++){ int f,s,weight; cin>>f>>s>>weight; edges[f][s] = weight; edges[s][f] = weight; } cout<<endl; prims(edges,n); for(int i = 0 ; i < n ; i++){ delete[]edges[i]; } delete []edges; } // sample input for graph /*5 7 0 1 4 0 2 8 1 3 6 1 2 2 2 3 3 2 4 9 3 4 5 */ Function implementation 1. Prims -: This function follows the prims algorithm and calculates the cost of the minimum spanning tree. 2. findminIndex-: This function returns the minimum index of the graph. Implementation Firstly we have taken a graph as input in a 2D array, indexes with zero have no edge between them, and indexes with 1 has an edge between them. As shown in the diagram. Now we call function prims(edges,n). In this function first, we have created a certain number of arrays like 1.int weight[] which will store the weight of the edge 2.bool visited[] this will keep a track of the nodes which have been visited as in minimum spanning tree we don’t need cyclic graph so we will visit a node one time only. 3.int parent[] this will store the parent of a node, the parent node is from where the edge is starting. For example, if we are connecting “0 to 1” so “0” will be the parent and “1” will be the child. Then we will initialize visited as false as we haven’t visited any of the nodes. And weight as maximum int. Then we run a loop which calls findminIndex(int*weight,bool*visited, int n) this gives the current minimum index of the node which hasn’t visited yet. Then we compare the weight of the node with and update the weight and parent if the latest weight is smaller. For example from “0 to 2” the weight stored is “8” and “0” is the parent of “2” but from “1 to 2” the weight is “2” so we will update the weight as 2 and mark the parent of “2” as “1”. At last, we will traverse the path of the minimum spanning tree and add all the weight to get the minimum weight. Now its time to run our code and see the output. In our example we can able to see the output like you can see below: 14
https://www.codespeedy.com/find-minimum-cost-to-connect-all-cities-in-c/
CC-MAIN-2021-10
refinedweb
806
63.32
Data cleaning is an important part of data manipulation and analysis. We need to clean data with any null values, unknown characters, etc. Data cleaning is a time taking process which cannot be neglected because when we are preparing data for the machine learning model the data should be cleaned otherwise we won’t be able to generate useful insights. Or predictions. We can apply different functions on the pandas dataframe which can help us in cleaning the data which in turn cleans the data, remove junk values, etc. But before that, we need to perform data analysis and know what all we need to do, what are the junk values, what are the datatypes of different columns in order to perform different operations for different datatypes. But what if we can automate this cleaning process? It can save a lot of time. Register for Data & Analytics Conclave>> Datacleaner is an open-source python library which is used for automating the process of data cleaning. It is built using Pandas Dataframe and scikit-learn data preprocessing features. The contributors are actively updating it with new features. Some of the current features are: - Dropping columns with null values - Replacing null values with a mean(numerical data) and median(categorical data) - Encoding non-numerical values with numerical equivalents. In this article, we will see how datacleaner automates the process of data cleaning to save time and effort. Implementation: We will start by installing datacleaner using pip install datacleaner. - Importing required libraries We will be loading a dataset using pandas so we need to import pandas and for data cleaning, we will import autoclean function from datacleaner. from datacleaner import autoclean import pandas as pd The dataset we are using in this article is a car design dataset that contains different attributes like ‘price’, ‘make’, ‘length’, etc. of different automobile companies. In this data, we will see that there are some junk values and some data is missing. df = pd.read_csv('car_design.csv') df.shape # Shape of the dataset df.isnull().sum() #Checking Null Values Here we can see that most of the columns contain null values. Now let us see the dataset. print(df) Here we can see that other than null values the data also contains some junk values as ‘?’. Now let us use autoclean and clean this data in just a single line of code. clean_df = autoclean(df) clean_df.shape The shape remains the same as we have not dropped any column. Now let us see the null values. It replaced all the null values with mean and median respectively. Now let us see what happened to junk values. print(clean_df) Here we can see that it also replaced all the junk values with the mean and median of that column respectively. Conclusion: In this article, we saw how we can clean data using data cleaner in just a single line of code. Autoclean removed all the junk values, missing values and cleaned the data so that it can be further used for machine learning models..
https://analyticsindiamag.com/tutorial-on-datacleaner-python-tool-to-speed-up-data-cleaning-process/
CC-MAIN-2021-43
refinedweb
506
63.39
Every name has linkage, which determines how the compiler and linker can use the name. Linkage has two aspects: scope and language. Scope linkage dictates which scopes have access to an entity. Language linkage dictates an entity's properties that depend on programming language. Scope linkage can be one of the following: A name with internal linkage can be referred to from a different scope within the same source file. At namespace scope (that is, outside of functions and classes), static declarations have internal linkage, as do const declarations that are not also extern. Data members of anonymous unions have internal linkage. Names in an unnamed namespace have internal linkage. A name with external linkage can be referred to from a different scope, possibly in a different source file. Functions and objects declared with the extern specifier have external linkage, as do entities declared at namespace scope that do not have internal linkage. A name with no linkage can be referred to only from within the scope where it is declared. Local declarations that are not extern have no linkage. Every function, function type, and object has a language linkage, which is specified as a simple character string. By default, the linkage is "C++". The only other standard language linkage is "C". All other language linkages and the properties associated with different language linkages are implementation-defined. You can specify the language linkage for a single declaration (not a definition) or for a series of declarations and definitions. When you specify linkage for a series of declarations and definitions, you must enclose the series in curly braces. A language linkage declaration does not define a scope within the curly braces. For example: extern "C" void cfunction(int); extern "C++" { void cppfunc(int); void cppfunc(double); } Language linkage is part of a function's type, so typedef declarations keep track of the language linkage. When assigning a function to a function pointer, the function and pointer must have the same linkage. In the following example, funcptr is a function pointer with "C" linkage (note the need for curly braces because it is a definition, not a declaration). You can assign a "C" function to funcptr, but not a "C++" function, even though the rest of the function type matches. extern "C" { void (*funcptr)(int); } funcptr = cfunction; // OK funcptr = cppfunc; // Error C does not support function overloading, so there can be at most one function with "C" linkage of a given name. Even if you declare a C function in two different namespaces, both declarations refer to the same function, for which there must be a single definition. Typically, "C" linkage is used for external functions that are written in C (such as those in the C standard library), but that you want to call from a C++ program. "C++" linkage is used for native C++ code. Sometimes, though, you want to write a function in C++ that can be called from C; in that case, you should declare the C++ function with "C" linkage. An implementation might support other language linkages. It is up to the implementation to define the properties of each language: how parameters are passed to functions, how values are returned from functions, whether and how function names are altered, and so on. In many C++ implementations, a function with "C++" linkage has a "mangled" name, that is, the external name encodes the function name and the types of all its arguments. So the function strlen(const char*) might have an external name of strlen_ _FCcP, making it hard to call the function from a C program, which does not know about C++ name-mangling rules. Using "C" linkage, the compiler might not mangle the name, exporting the function under the plain name of strlen, which can be called easily from C.
http://etutorials.org/Programming/Programming+Cpp/Chapter+2.+Declarations/2.4+Linkage/
crawl-001
refinedweb
634
60.75
PreloadAssetManager – An Intelligent Preloading Queue - Compatibility: Flash Player 6 and later (ActionScript 2.0) (FLV preloading requires Flash Player 7) - File Size added to published SWF: About 3Kb Join Club GreenSock to get updates and a lot more NOTE: PreloadAssetManager has been sunset and is no longer actively enhanced or supported. Feel free to use it if you want, though. DESCRIPTION). By default, it will initially only load enough of each asset to determine the size (bytes) of each asset so that it can accurately report the percentLoaded_num, getBytesLoaded() and getBytesTotal(), then it loops back through from the beginning and finishes all the preloading. If you're not going to use a preloader status bar that polls these methods/properties, you can just set the trackProgress_boolean property to false to skip that initial delay. OBJECTIVES - Provide an easy way to sequentially preload assets. - Determine the _width and _height of any preloaded SWFs or image assets as well as the duration of any FLV assets. - For easy status reporting, provide percentLoaded_num, getBytesLoaded() and getBytesTotal() information for the entire group of assets in any PreloadAssetManager instance. - Allow for any asset to be prioritized in the loading queue at any time (for example, if the user clicks on something that requires an asset that hasn't loaded yet). - Allow the entire queue to be paused and resumed at any time (if, for example, you need to perform some other bandwidth-intensive action) - Build in the ability to call any function when either a particular asset has finished preloading or when a group of assets have finished preloading and allow the developer to pass any number of arguments/parameters to that function. - If an asset cannot be loaded, implement a timeout procedure so things don't get hung up indefinitely. USAGE - Description: Constructor. If you pass in an assetUrls_array, it will automatically call the start() function and begin preloading. You can pause() if you want. - Arguments: - assetUrls_array: [optional] An array of urls that should be preloaded - onComplete_func: [optional] A reference to a function that you'd like to call as soon as all of the assets in this PreloadAssetManager instance have been preloaded. - onCompleteArguments_array: [optional] An array of arguments to pass the onComplete_func function. - trackProgress_boolean: [optional] true by default. If you're NOT going to use a preloader status bar that polls the precentLoaded_num, getBytesLoaded() or getBytesTotal(), set this value to false to skip that initial delay and speed things up. - Description: Adds an asset to the PreloadAssetManager's queue. - Arguments: - url_str: The url of the asset that needs to be preloaded - onComplete_func: [optional] A reference to a function that you'd like to call as soon as this asset has been preloaded. - onCompleteArguments_array: [optional] An array of argument values to pass the onComplete_func function. - Description: Adds an asset to the PreloadAssetManager's queue. - Arguments: - assetUrls_array: An array of URLs that need to be preloaded - onComplete_func: [optional] A reference to a function that you'd like to call as soon as all of the assets in this PreloadAssetManager have been preloaded. - onCompleteArguments_array: [optional] An array of argument values to pass the onComplete_func function. - start_boolean: [optional] false by default. If true, the PreloadAssetManager will be forced to start preloading as soon as these assets have been added. - Description: Starts preloading the assets (same as resume() if things are paused). - Description: Resumes all preloading. - Description: Pauses preloading in ALL PreloadAssetManagers. - Description: A STATIC method that allows you to prioritize a particular asset (bump it up to the top of the queue). You can also use the non-static prioritizeAsset() method or the PreloadAsset.prioritize() method to perform a similar action. - Arguments: - url_str: The URL of the asset that should be prioritized. - Description: A STATIC method that allows you to find the PreloadAsset instance with a particular URL. This is useful if you want to find the _width or _height or duration of an asset but only know the URL. - Arguments: - url_str: The URL of the asset you'd like to find. EXAMPLES - import gs.dataTransfer.PreloadAssetManager; - var preloader_obj = new PreloadAssetManager(["myFile1.swf","myFile2.swf"], onFinish); - function onFinish(pl_obj:PreloadAssetManager):Void { - trace("Finished preloading all " + pl_obj.assets_array.length + " assets!"); - var a = pl_obj.assets_array; - for (var i = 0; i <a.length; i++) { - if (a[i].fileType_str == "flv") { - trace("--Asset: "+a[i].url_str+" had a duration of: "+a[i].duration); - } else { - trace("--Asset: "+a[i].url_str+" had a width of: "+a[i]._width+", and a height of "+a[i]._height); - } - } - } - import gs.dataTransfer.PreloadAssetManager; - import gs.dataTransfer.PreloadAsset; - var preloader_obj = new PreloadAssetManager(); - var pl1_obj = preloader_obj.addAsset("myFile1.swf", onPreload); - var pl2_obj = preloader_obj.addAsset("myFile2.swf", onPreload); - preloader_obj.start(); - function onPreload(pl_obj:PreloadAsset):Void { - trace("finished preloading: "+pl_obj.url_str+", _width: "+pl_obj._width+", _height: "+pl_obj._height); - } preloader progress bars) like so: - import gs.dataTransfer.PreloadAssetManager; - var preloader_obj = new PreloadAssetManager(["myFile1.swf","myFile2.swf"]); - this.onEnterFrame = function() { - myPreloader_mc.bar_mc._xscale = preloader_obj.percentLoaded_num; - if (preloader_obj.percentLoaded_num == 100) { - gotoAndPlay("start"); - } - } 7th, 2007 at 8:42 am Great Class! How do I resolve the scope in an onFinish function? coverMC = target.createEmptyMovieClip(”cover”,target.getNextHighestDepth()); var preload_obj = new PreloadAsset(coverImage, onFinish); function onFinish(asset_obj:PreloadAsset):Void { trace(”Finished preloading: “+ asset_obj.url_str+” and its width is: “+asset_obj._width+” and its _height is: “+asset_obj._height); trace(this); // here I want to load in coverMC } how do I load the preloadAsset into coverMC when onFinish is called? on February 7th, 2007 at 4:53 pm If you want to control the scope of the onFinish call, I’d recommend using the mx.utils.Delegate class that comes with Flash. Then to load your asset into coverMC, you could use something like loadMovie() So your edited code would be: import mx.utils.Delegate; coverMC = target.createEmptyMovieClip(â€coverâ€,target.getNextHighestDepth()); var preload_obj = new PreloadAsset(coverImage, Delegate.create(this, onFinish)); function onFinish(asset_obj:PreloadAsset):Void { trace(â€Finished preloading: “+ asset_obj.url_str+†and its width is: “+asset_obj._width+†and its _height is: “+asset_obj._height); trace(this); coverMC.loadMovie(asset_obj.url_str); } on April 4th, 2007 at 9:59 am Preloading works super fine! Thank you very much for this class. I am preloading a bunch of FLVs. Once they get at 100%, how do I access them using a NetStream object (or other) ? (I need to implement something that initates the playback of the first one, when it reaches the end I must start the second one, so on…) on April 4th, 2007 at 10:41 am Leolea, this class simply preloads your assets into your browser’s cache; it is not meant to be used to play back and manage your FLVs once they’re preloaded. There’s nothing special that you need to do in order to access the preloaded FLVs – just call them as you normally would either using a NetStream object of your own or an FLVPlayback component or whatever. The user’s browser will be smart enough to used the cached versions instead of going out to the web and downloading them again. on April 25th, 2007 at 11:43 am Hmmm. A little customization might be necessary for streaming FLV handling. An ideal FLV preloader might wait until the currently-streaming FLV is fully cached before loading an FLV in the background. If the user requests a new FLV, it would ideally halt the bg-downloading stream and resume it after the requested FLV was fully cached. That assumes it’s possible to halt (and resume) a stream? If the user requests the background-loading FLV, it’d need to be swapped into the playback area without re-starting that netstream from the beginning. Has anyone out there tried something like this? on April 25th, 2007 at 11:59 am Gabriel, the issues you bring up are exactly why I built this class. It’s very easy to pause preloading and resume it anytime you want (see the pause() and resume() functions) so that you can manage the user’s bandwidth efficiently. You can also prioritize any asset anytime based on what the user clicks on. So if asset #2 is currently preloading, but the user clicks on something that requires asset #6 (which hasn’t been preloaded yet), you can prioritize it immediately (see the prioritize() function). on April 25th, 2007 at 3:11 pm I definately like thought of being able to prioritize, pause and resume the loading of assets. After a little investigating, it seems like Flash can’t resume downloading a half-cached FLV? It just starts the stream over from scratch. This can make pausing and resuming the queue a frustrating experience–if the currently-playing clip ends before the preloading clip is finished, the preloading clip is flushed and then starts loading again from the beginning. It’s not a flaw in your preloader, so much as it is a difficulty with FLV caching. Ideally, we should be able to pause the video stream itself (pause buffering), then resume it later (resume buffering). If we could drop the currently-preloading clip into the video object without re-starting the stream (allowing it continue buffering from where it is), but still start playing from the beginning, we’d be set. on August 10th, 2007 at 5:02 pm Hi! This class has made my life much easier… I have a couple of questions, though: 1. Is there a way to get the width/height of the FLV, alongside the duration? 2. Do you anticipate adding an XML option to this class? For example, the ability to load ANY kind of file… images, xml, swfs, flvs, etc. Thanks, great job! -nick on August 14th, 2007 at 12:32 am Nicolas, I just added the ability for the width, height, videodatarate, framerate, audiodatarate, and basically all of the available meta data in an FLV to be read in when it preloads. The sample is updated too. Keep in mind that not all FLVs have this meta data available – it depends on the software you used to encode the FLV. Sorenson Squeeze seems to do a good job, but several other software packages omit most of the meta data in which case this class obviously won’t be able to grab it. Enjoy! on September 25th, 2007 at 10:17 pm Great class. Implemented it in a project we just did requiring preloading a number of FLVs (user has to disconnect their internet connection in the middle of a set of videos). Worked like a charm. Thanks. on October 5th, 2007 at 4:35 pm Is there a way to destroy the PreloadAssetManager object or will simply deleting the var kill any preloads that may currently exist? Thank you! on October 5th, 2007 at 4:46 pm Krom, you can call the destroy() method on your PreloadAssetManager instance. Or if you want to pause ALL preloading that’s being handled by any/all PreloadAssetManagers, just call the static PreloadAssetManager.pauseAll() method. on October 29th, 2007 at 3:55 pm Please keep in mind that PreloadAssetManager is meant to preload assets, not buffer video. So if you have a 10 minute FLV and want to buffer 2 minutes of it before playing it, use a NetStream and its bufferTime property instead of PreloadAssetManager. My class simply leverages the browser’s cache to do its magic, but in order to work properly, it needs to fully download an asset. If you download 20% of an FLV using PreloadAssetManager and then try to play that on the screen, Flash will request the file from the browser which will in turn check its cache and since it’s not fully there, it’ll make the request from the server and start the load over again. Not ideal by any means. on November 7th, 2007 at 4:01 pm Very useful. Is there any way to get the percentLoaded_num of an individual asset? I want to build separate progress bars for each asset. on November 7th, 2007 at 4:26 pm Mark, You can check the progress of any asset after you’ve added it to the PreloadAssetManager. For example, to check the status of a file named “myURL.swf”, do something like: var asset_obj = PreloadAssetManager.getAsset(”myURL.swf”); var percentLoaded_num = (asset_obj.getBytesLoaded() / asset_obj.getBytesTotal()) * 100; on December 4th, 2007 at 2:53 pm I’m just now starting to learn some of the more advanced functions of actionscript and I’m pretty sure that I’ve got the hang of this class but I have a question that will hopefully clear some things up for me. Say I’ve got a movieclip that loads a random jpg from a list as a “background” so, in order to make sure that the background comes up without a hitch, I have to have all of the backgrounds loaded. My question is whether or not I should use loadMovie to get the jpg into the movieclip or some other function now that the jpg has already been preloaded. Also, should I be referring to the PreloadAssetManager’s array_asset for the src of the image now or just stick with the original url? I may just be missing something but, I’ll preload the backgrounds and when I go to load the image, I notice in the bandwidth profiler that the swf has requested the file again even though the file should be cached. Whatever help you can provide would be greatly appreciated. on December 4th, 2007 at 3:04 pm Yes, you’ve got it right, Stephen – you need to use loadMovie() (or its equivalent) to load your background images into place. PreloadAssetManager is ONLY meant to get the assets into your browser’s cache, nothing more. And that also explains why Flash’s bandwidth profiler is requesting the file again. That’s to be expected, and that’s exactly what the browser does too, but since it checks its cache FIRST before hitting the server again, it finds the asset(s) and loads them from there for much snappier performance. Again, PreloadAssetManager leverages the BROWSER’S cache but Flash’s bandwidth profiler doesn’t. on January 3rd, 2008 at 6:03 pm Thank Goodness. Spent the whole day looking for a sollution. This one works. YOU ROCK! on January 26th, 2008 at 11:29 am awesome, i was starting to implement a simple version of this and happened across yours, does everything i want and does it very well :) will be sure to let you know of the project url when it’s done! thank you for saving my time :) on January 28th, 2008 at 5:30 pm Hi, I would like to know if this preloading asset exist for AS3 ? if it doesn’t, will u think about converting this asset for AS3 ? Thanks for ur time if you could answer on my email. it would be great. on January 30th, 2008 at 11:47 am I’ll probably port this to AS3 when I get a project that requires it. I’m not entirely sure when that’ll be though. on March 11th, 2008 at 10:48 am Jack, thanks for the code. I got it to work (took a while because I’m novice and never worked with custom classes before). I’ll be spreading the good word about your blog and generous sharing. Thanks again. on April 7th, 2008 at 3:45 pm Buddy you are a genius. This is by far the best preloader I’ve come across. on May 9th, 2008 at 7:36 pm Hi, RE AS3 integration. I guess we can use this on a preload page to cache as3 assets. ie preload.html _as2 preload assets (AS3.swfs) _onComplete ( getURL( “next.html” ) next.html _AS3 assets yes? Thanks i love this class. I train my students to use it in managing their flash frameworks. Oyster on January 25th, 2009 at 1:37 pm for some who were looking for a AS3 solution, I just found this it looks pretty good, I have not dug into it seriously yet. Hope you do not mind me posting an alternative Jack, but it seemed from your comments that you were not likely to be doing an AS3 solution anytime soon.
http://blog.greensock.com/preloadassetmanageras2/
crawl-002
refinedweb
2,694
64.3
LabN December 2006 GMPLS - Communication of Alarm. This document updates RFC 3473, "Generalized Multi-Protocol Label Switching (GMPLS) Signaling Resource ReserVation Protocol-Traffic Engineering (RSVP-TE) Extensions", through the addition of new, optional protocol elements. It does not change, and is fully backward compatible with, the procedures specified in RFC 3473. Table of Contents 1. Introduction ....................................................3 1.1. Background .................................................3 2. Alarm Information Communication .................................4 3. GMPLS-RSVP Details ..............................................5 3.1. ALARM_SPEC Objects .........................................5 3.1.1. IF_ID ALARM_SPEC (and ERROR_SPEC) TLVs ..............5 3.1.2. Procedures ..........................................9 3.1.3. Error Codes and Values .............................10 3.1.4. Backwards Compatibility ............................11 5.1. New RSVP Object ...........................................15 5.2. New Interface ID Types ....................................16 5.3. New Registry for Admin-Status Object Bit Fields ...........16 5.4. New RSVP Error Code .......................................16 6. References .....................................................17 6.1. Normative References ......................................17 6.2. Informative References ....................................17 7. Acknowledgments ................................................18 8. Contributors ...................................................18 1. Introduction GMPLS signaling provides mechanisms that can be used to control the reporting of alarms associated with a label switched path a time. This makes it hard to correlate all of the problems that may be associated with a single LSP and to allow an operator examining the status of an LSP to view a full list of current problems. This situation is exacerbated 198 (assigned by IANA in the form 11bbbbbb, per Section 3.1.4). o Class = 198, C-Type = 1 Reserved. (C-Type value defined for ERROR_SPEC, but is not defined for use with ALARM_SPEC.) o Class = 198, C-Type = 2 Reserved. (C-Type value defined for ERROR_SPEC, but is not defined for use with ALARM_SPEC.) - IPv4 IF_ID ALARM_SPEC object: Class = 198, C-Type = 3 Definition same as IPv, and TLVs received with this field set to zero MUST be ignored.: - bits This field is reserved. It MUST be set to zero on generation, MUST be ignored on receipt, and MUST be forwarded unchanged and unexamined by transit nodes. Impact: - bits Indicates the impact of the alarm indicated in the TLV. See [M.20] for a general discussion on classification of failures. The following values are defined in this document. The details of the semantics may be found in [M.20]. Value Definition ----- --------------------- 0 Unspecified impact 1 Non-Service Affecting (Data traffic not interrupted) 2 Service Affecting (Data traffic is interrupted) | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ Global Timestamp: 32 bits An unsigned fixed-point integer that indicates the number of seconds since 00:00:00 UT on 1 January 1970 according to the clock on the node that originates this TLV. This time MAY include leap seconds if they are used by the local clock and SHOULD contain the same time value as used by the node when the alarm is reported through other systems (such as within the Management Plane) if global time is used in those reports. extensions defined in this document SHOULD store any alarm information from received ALARM_SPEC objects for future impacted LSP. In all cases, appropriate Error Node Address, Error Code, and Error Values MUST be set (see below for a discussion on Error Code and Error Values). As the InPlace and NotGuilty flags only have meaning in ERROR_SPEC objects, they SHOULD NOT be set. TLVs SHOULD be included in the ALARM_SPEC object to identify the interface, if any, associated with the alarm. The TLVs defined in [RFC3471] for identifying interfaces in the IF_ID ERROR_SPEC object [RFC3473] SHOULD be used for this purpose, but note that TLVs type 4 and 5 (component interfaces) are deprecated by [RFC4201] to indicate the number of times an alarm has been repeated at the reporting node. ALARM_SPEC objects received from other nodes are not impacted by the addition of local ALARM_SPEC objects, i.e., they continue to be processed as described above. The choice of which alarm or alarms to advertise and which to omit is a local policy matter, and may be Path and Resv states. Failure to follow the above directives, in particular the ones labeled "SHOULD" and "SHOULD NOT", may result in the alarm information not being properly or fully communicated. 31 and is referred to as "Alarms". The values used in the Error Values field when the Error Code is "Alarms" are the same as the values defined in the IANAItuProbableCause Textual Convention of IANA-ITU-ALARM-TC-MIB in the Alarm MIB [RFC3877]. Note that these values are managed by IANA; see. 3.1.4. Backwards Compatibility The support of ALARM_SPEC objects is OPTIONAL. Non-supporting nodes will (according to the rules defined in [RFC2205])) and there is local alarm information present, it SHOULD add the that, [RFC4208] defines how GMPLS may be used in an overlay model to provide a user-to-network interface (UNI). [RFC4208]. In particular, the following observations apply. -. - Similarly, an egress core-node MAY choose not to request alarm reporting on Path messages that it sends downstream to the overlay network. 3.5. Relationship to GMPLS E-NNI GMPLS may be used at the external network-to-network interface (E-NNI); see [ASON-APPL]. At this interface, restrictions may be applied to the information that is signaled between an egress and an ingress [ASON-APPL]. In particular, the following observations apply. - An ingress or egress core-node MAY filter internal core network alarms. This may be to protect information about the internal network or to indicate that the core network is performing or has completed recovery actions for this LSP. -. - Similarly, an egress/ingress core-node MAY choose not administered assignment of new values for namespaces defined in this document and reviewed in this section. 5.1. New RSVP Object IANA made the following assignments in the "Class Names, Class Numbers, and Class Types" section of the "RSVP PARAMETERS" registry located at. A new class named ALARM_SPEC (198) was created in the 11bbbbbb range with following values o Class = 198, C-Type = 1 RFC 4783 Reserved. (C-Type value defined for ERROR_SPEC, but is not defined for use with ALARM_SPEC.) o Class = 198, C-Type = 2 RFC 4783 Reserved. (C-Type value defined for ERROR_SPEC, but is not defined for use with ALARM_SPEC.) - IPv4 IF_ID ALARM_SPEC object: Class = 198, C-Type = 3 RFC 4783 Definition same as IPv4 IF_ID ERROR_SPEC [RFC3473]. - IPv6 IF_ID ALARM_SPEC object: Class = 198, C-Type = 4 RFC 4783 Definition same as IPv6 IF_ID ERROR_SPEC [RFC3473]. The ALARM_SPEC object uses the Error Code and Error Values from the ERROR_SPEC object. 5.2. New Interface ID Types IANA made the following assignments in the "Interface_ID Types" section of the "GMPLS Signaling Parameters" registry located at. 512 8 REFERENCE_COUNT RFC 4783 513 8 SEVERITY RFC 4783 514 8 GLOBAL_TIMESTAMP RFC 4783 515 8 LOCAL_TIMESTAMP RFC 4783 516 variable ERROR_STRING RFC 4783 5.3. New Registry for Admin-Status Object Bit Fields IANA created a new section titled "Administrative Status Information Flags" in the "GMPLS Signaling Parameters" registry located at and made the following assignments: Value Name Reference ----------- -------------------------------- ----------------- 0x80000000 Reflect (R) [RFC3473/RFC3471] 0x00000010 Inhibit Alarm Communication (I) RFC 4783 0x00000004 Testing (T) [RFC3473/RFC3471] 0x00000002 Administratively down (A) [RFC3473/RFC3471] 0x00000001 Deletion in progress (D) [RFC3473/RFC3471] 5.4. New RSVP Error Code IANA made the following assignments in the "Error Codes and Values" section of the "RSVP PARAMETERS" registry located at. 31 Alarms RFC 4783 6. References3877] Chisholm, S. and D. Romascanu, "Alarm Management Information Base (MIB)", RFC 3877, September 2004. [M.3100] ITU Recommendation M.3100, "Generic Network Information Model", 1995. 6.2. Informative References [RFC4201] Kompella, K., Rekhter, Y., and L. Berger, "Link Bundling in MPLS Traffic Engineering (TE)", RFC 4201, October 2005. [M.20] ITU-T, "MAINTENANCE PHILOSOPHY FOR TELECOMMUNICATION NETWORKS", Recommendation M.20, October 1992. [GR833] Bellcore, "Network Maintenance: Network Element and Transport Surveillance Messages" (GR-833-CORE), Issue 3, February 1999. . [ASON-APPL] Papadimitriou, D., et al., "Generalized MPLS (GMPLS) RSVP-TE signaling usage in support of Automatically Switched Optical Network (ASON)", Work in Progress, July 2005. 7. Acknowledgments Valuable comments and input were received from a number of people, including Wes Doonan, Bert Wijnen for the DISMAN reference, and Tom Petch for getting the DISMAN WG interactions started. We also thank David Black, Lars Eggert, Russ Housley, Dan Romascanu, and Magnus Westerlund for their valuable comments. 8. Contributors Contributors are listed in alphabetical order: Deborah Brungard AT&T Labs, Room MT D1-3C22 200 Laurel Avenue Middletown, NJ 07748, USA Phone: (732) 420-1573 EMail: dbrungard@att.com Igor Bryskin Adrian Farrel Movaz Networks, Inc. Old Dog Consulting 7926 Jones Branch Drive Suite 615 McLean VA, 22102, USA Phone: +44 (0) 1978 860944 EMail: ibryskin@movaz.com EMail: adrian@olddog.co.uk Dimitri Papadimitriou (Alcatel) Arun Satyanarayana Francis Wellesplein 1 Cisco Systems, Inc B-2018 Antwerpen, Belgium 170 West Tasman Dr. San Jose, CA 95134 USA Phone: +32 3 240-8491 Phone: +1 408 853-3206 EMail: dimitri.papadimitriou@alcatel.be EMail: asatyana@cisco.com Editor's Address Lou Berger LabN Consulting, L.L.C. Phone: +1 301-468-9228 EMail: lberger@labn.
http://pike.lysator.liu.se/docs/ietf/rfc/47/rfc4783.xml
CC-MAIN-2021-49
refinedweb
1,495
54.63
Urrr I've done a little .NET and a little web services work - mostly using my favorite programming methodology and language. The methodology is R&D. The language C&P. (I'll detail both the method and the language specifications below). But now I want to delve a little further into building some web services. Nothing complicated. Pass a little data, return a little data - should be fairly simple. Hey, you there in the peanut gallery, stop your snickering... I'm building them on a Microsoft Platform - IIS, SQL, etc. My first mistake was that I visited Microsoft's primers to this technology and now I am confused!!! I realized years ago that most technical things are simply not that complicated. Networking, for instance. Stuff (data) flows through pipes (transport media) to get to some other place. A plumbing analogy works well to get people started in networking. Now, now… you Cisco guys need to stop right there! Don't start describing Catalyst switching technologies, don't even venture into the OSI model, and do not start to describe routing protocols. You would be missing the point. And yes, I know that you don't need a really big cable to pass more information (bandwidth). This dilbert cartoon seems to get it.. Snicker - subtle irony for those who read my blog in their native tongue.... Try this one, I promise it is relevant... What I am saying is that to the uninitiated, a simple plumbing analogy works fairly well. A city map, etc. Network addressing can be explained using a phone number (area code-prefix=network - suffix=host) analogy because people understand that. These ideas can all help someone understand something they do not yet understand. It helps to make analogous connections between what is known to what is unknown. For some reason, some network tutorials start by describing mac addressing, physical transport, blah, blah, blah. My kids understand a street map but the differences between layer 2 and layer 3 switching bore them. Just between the two of us - it bores me too. Back to our situation and where I could use your help. I visited the Microsoft Web Services page - believing that they would have a primer to get developers started. The very first sentence caused me to shudder. "This is the home for developers building Web services and writing applications that consume Web services." Sure, I understand web services consumption. I've connected to web services and consumed them (yummy). But lets think about the uninitiated…. Consuming needs an explanation at or before its time of use. If this is the starting page, you need to describe a technology specific term after a conceptual explanation is used. At list provide one of those hold your mouse of explanations for this word. The IT industry's love affair with jargon is pretty well the stuff of laughable legend but the jokes over folks. Like Lindsay Buckingham sang, "It's not that funny is it" Get the glossary going! Make educational information incremental. As you delve into each of the links at the web services site above, you are inundated with abstract endpoints, concrete endpoints, namespaces, XLT, WDSI, etc. - all before a non-jargonized conceptualized explanation is provided or any incremental examples provided. In fact, many of the examples are lengthy - convoluted applications that demonstrate almost nothing and that require waayyyy too much setup to get started. I used to think I was relatively intelligent. They have successfully taken my ego and stomped into near oblivion. After reading a couple pages I was curled into a fetal position, hugging my daughter's teddy bear..."mama...mama..." Their explanations are a lot like giving a toddler the task of identifying the parts of a sentence when they have not yet learned to spell. They don't know what the characters "D O G" mean, let alone that they are a noun. Don't start describing subject/predicate. I think this is partially why some technology professionals believe they are underpaid. If they can weed through these poorly written technology primers with even a cursory understanding, they must be geniuses and their mental exertion itself should be compensated. Maybe the vendors, like Microconvolutedsoft are hoping to prop everyone's esteem. Once again, I've worked with the technology and still find their explanations jargonized and confusing. "Mongo's head hurt!! Mongo need nap!!" A Better Example So I went searching around for the incremental explanation of a variety of technologies and found this site. Sure, a lot of these guys write for MS but apparently, they also wanted an understandable site. In particular, their Quickstart tutorials do a good job of explaining, incrementally, what is going on. They get both kudos and a hearty thank you. Those things you have picked up through struggle, sweat, and mental anguish, you can bypass and get to the meat. Each sample incrementally builds on what you did prior, so you can easily see the impact of each added piece. Pretty nice. This is one of the first sites I found in my search. In the world of XML and webservices, what sites and tutorials have you found that de-jargonize the learning experience? Or, does it aggravate you that mere mortals might end up learning the technology is just not that complicated? Either way, let me know. Oh yeah, my programming methodology (R&D) and language (C&P). R&D = Rip off and duplicate Don't smirk, you do it too. C&P = Cut & Paste Hey, listen, there are really smart guys out there who have done it before. Remember, me no smart...me no get it.
https://it.toolbox.com/blogs/matthewmoran/of-tutorials-explanations-and-primers-me-no-smartme-no-get-it-080405
CC-MAIN-2018-51
refinedweb
943
67.04
On Wed, Nov 25, 1998 at 02:49:04AM +0100, Richard Braakman wrote: > Julian Gilbey wrote: > > If you use the 'release --to master' command, you will be asked > > whether you want to close the bugs mentioned in the latest changelog > > after the new version has been uploaded. > > Please don't. This closes them far too early. I've seen enough packages > get rejected from Incoming. Don't close the bugs until your package > is actually in the archive. I thought there was going to be a dinstall feature that closes bugs exactly when the new packet is installed in the archive. Wasn't that the whole point of the crazy regexp that was posted here a few months ago? If I want to close bugs manually, it's no easier to run "release" than to just e-mail close notifications to all my bugs. (What/where is the "release" command, anyway, and how can people get away with such blatant namespace pollution?) Have fun, Avery
https://lists.debian.org/debian-devel/1998/11/msg02092.html
CC-MAIN-2017-26
refinedweb
165
70.84
A fresh maintenance release version 0.9.7 of Rcpp went onto CRAN and into Debian earlier today. This release contains two contributed fixes. The first, suggested by Darren Cook via the rcpp-devel mailing list, corrects how we had set up exceptions specifications, reflecting a bit of Java-think on our part. The idiom is generally discouraged in C++, and we now conform. The second came in two excellent patches by R Core member Martyn Plummer which finally get us compilation on Solaris. This is much appreciated as our hands were tied here for lack of access to such a box. Otherwise, two new examples and a new unit test were added. The complete NEWS entry is below; more details are in the ChangeLog file in the package and on the Rcpp Changelog page. 0.9.7 2011-09-29 o Applied two patches kindly provided by Martyn Plummer which provide support for compilation on Solaris using the SunPro compiler o Minor code reorganisation in which exception specifiers are removed; this effectively only implements a run-time (rather than compile-time) check and is generally seen as a somewhat depreated C++ idiom. Thanks to Darren Cook for alerting us to this issue. o New example 'OpenMPandInline.r' in the OpenMP/ directory, showing how easily use OpenMP by modifying the RcppPlugin output o New example 'ifelseLooped.r' showing Rcpp can accelerate loops that may be difficult to vectorise due to dependencies o New example directory examples/Misc/ regrouping the new example as well as the fibonacci example added in Rcpp 0.9.6 o New Rcpp-FAQ example warning of lossy conversion from 64-bit long integer types into a 53-bit mantissa which has no clear fix yet. o New unit test for accessing a non-exported function from a namespace Thanks to CRANberries, you can also look at a diff to the previous release 0.9.6....
http://www.r-bloggers.com/rcpp-0-9-7/
CC-MAIN-2016-07
refinedweb
319
52.6
There is no support for ANY of WPF/SL design namespace members at ALL: d:DesignHeight, d:DesignWidth, d:DataContext, d:DesignInstance, d:DesignData Xaml designer report "Invalid markup" on any usage. How to build "fast & fluid" apps design without design-level attributes support, without providing sample data for databound controls? Are you serious, folks? Thanks for your feedback! Could you please clarify if this is for which project type and what I could be missing in the steps below? I tried the following and it works: C# Grid application Build Add new UserControl If you look at the XAML now, you will see the use of d:DesginHeight and d:DesignWidth. If you also open GroupedDetailPage.xaml, you will see the use of sample data in there. Thanks! Unni Unni Ravindranathan, Program Manager, Microsoft Expression This posting is provided "AS IS" with no warranties, and confers no rights.
https://social.msdn.microsoft.com/Forums/en-US/949dd23e-6c8c-4cff-9851-15cdc2f82bda/xmlnsdhttpschemasmicrosoftcomexpressionblend2008?forum=toolsforwinapps
CC-MAIN-2019-47
refinedweb
149
64.3
官网教程 dart是一个单线程的语言,没有多线程 but not const. Here’s an example of creating and setting a final variable: final name = 'Bob'; // Without a type annotation // name = 'Alice'; // Uncommenting this causes an error final String nickname = 'Bobby';. // Note: [] creates an empty list. // const [] creates an empty, immutable list (EIL). var foo = const []; // foo is currently an EIL. final bar = const []; // bar will always be an EIL. const baz = const []; // baz is a compile-time constant EIL. // You can change the value of a non-final, non-const variable, // even if it used to have a const value. foo = []; // You can't change the value of a final or const variable. // bar = []; // Unhandled exception. // baz = []; // Unhandled exception. For more information on using const to create constant values, see Lists, Maps, and Classes. ----------------------------------------------------------- Default parameter values Your function can use = to define default values for both named and positional parameters. The default values must be compile-time constants. If no default value is provided, the default value is null. void doStuff( {List<int> list = const [1, 2, 3], Map<String, String> gifts = const { 'first': 'paper', 'second': 'cotton', 'third': 'leather' }}) { print('list: $list'); print('gifts: $gifts'); } ------------------------------------------------------------ The main() function Every app must have a top-level main() function, which serves as the entrypoint to the app. The main() function returns void and has an optional List<String>parameter for arguments. // Run the app like this: dart args.dart 1 test void main(List<String> arguments) { print(arguments); assert(arguments.length == 2); assert(int.parse(arguments[0]) == 1); assert(arguments[1] == 'test'); } ------------------------------------------------------------ Operators if null ?? cascade .. Arithmetic operators / Divide ~/ Divide, returning an integer result print(5 / 2 == 2.5); // Result is a double print(5 ~/ 2 == 2); // Result is an int Type test operators as Typecast is True if the object has the specified type is! False if the object has the specified type Assignment operators As you’ve already seen, you can assign values using the = operator. To assign only if the assigned-to variable is null, use the ??= operator. // Assign value to a a = value; // Assign value to b if b is null; otherwise, b stays the same b ??= value; Conditional expressions condition ? expr1 : expr2 If condition is true, evaluates expr1 (and returns its value); otherwise, evaluates and returns the value of expr2. expr1 ?? expr2 If expr1 is non-null, returns its value; otherwise, evaluates and returns the value of expr2. Cascade notation (..) Cascades ( ..) allow you to make a sequence of operations on the same object. In addition to function calls, you can also access fields on that same object. This often saves you the step of creating a temporary variable and allows you to write more fluid code. Consider the following code: querySelector('#confirm') // Get an object. ..text = 'Confirm' // Use its members. ..classes.add('important') ..onClick.listen((e) => window.alert('Confirmed!')); The first method call, querySelector(), returns a selector object. The code that follows the cascade notation operates on this selector object, ignoring any subsequent values that might be returned. The previous example is equivalent to: var button = querySelector('#confirm'); button.text = 'Confirm'; button.classes.add('important'); button.onClick.listen((e) => window.alert('Confirmed!')); You can also nest your cascades. For example: final addressBook = (new AddressBookBuilder() ..name = 'jenny' ..email = 'jenny@example.com' ..phone = (new PhoneNumberBuilder() ..number = '415-555-0100' ..label = 'home') .build()) .build(); Be careful to construct your cascade on a function that returns an actual object. For example, the following code fails: var sb = new StringBuffer(); sb.write('foo') ..write('bar'); // Error: method 'write' isn't defined for 'void'. The sb.write() call returns void, and you can’t construct a cascade on void. Note: Strictly speaking, the “double dot” notation for cascades is not an operator. It’s just part of the Dart syntax. Other operators ?. Conditional member access Like ., but the leftmost operand can be null; example: foo?.bar selects property bar from expression foo unless foo is null (in which case the value of foo?.bar is null) ------------------------------------------------------------ Exceptions Your Dart code can throw and catch exceptions. Exceptions are errors indicating that something unexpected happened. If the exception isn’t caught, the isolate that raised the exception is suspended, and typically the isolate and its program are terminated. In contrast to Java, all of Dart’s exceptions are unchecked exceptions. Methods do not declare which exceptions they might throw, and you are not required to catch any exceptions. Dart provides Exception and Error types, as well as numerous predefined subtypes. You can, of course, define your own exceptions. However, Dart programs can throw any non-null object—not just Exception and Error objects—as an exception. Throw Here’s an example of throwing, or raising, an exception: throw new FormatException('Expected at least 1 section'); You can also throw arbitrary objects: throw 'Out of llamas!'; Because throwing an exception is an expression, you can throw exceptions in => statements, as well as anywhere else that allows expressions: void distanceTo(Point other) => throw new UnimplementedError(); ------------------------------------------------------------ Classes Dart is an object-oriented language with classes and mixin-based inheritance. Every object is an instance of a class, and all classes descend from Object. Mixin-based inheritance means that although every class (except for Object) has exactly one superclass, a class body can be reused in multiple class hierarchies. To create an object, you can use the new keyword with a constructor for a class. Constructor names can be either ClassName or ClassName.identifier. For example: var jsonData = jsonDecode('{"x":1, "y":2}'); // Create a Point using Point(). var p1 = new Point(2, 2); // Create a Point using Point.fromJson(). var p2 = new Point.fromJson(jsonData); Dart 2 note: You can omit the new before the constructor. Example: p1 = Point(2, 2).. Some classes provide constant constructors. To create a compile-time constant using a constant constructor, use const instead of new: var p = const ImmutablePoint(2, 2); Dart 2 note: You can omit the const before the constructor. Example: p = ImmutablePoint(2, 2). Constructing two identical compile-time constants results in a single, canonical instance: var a = const ImmutablePoint(1, 1); var b = const ImmutablePoint(1, 1); assert(identical(a, b)); // They are the same instance! To get an object’s type at runtime, you can use Object’s runtimeType property, which returns a Type object. print('The type of a is ${a.runtimeType}');. For details, see Getters and setters. class Point { num x; num y; } void main() { var point = new (plus, optionally, an additional identifier as described in Named constructors). only). class Employee extends Person { Employee() : super.fromJson(getDefaultData()); // ··· } Note: When using super() in a constructor’s initialization list, put it last. For more information, see the Dart usage guide.) { if (_cache.containsKey(name)) { return _cache[name]; } else { final logger = new Logger._internal(name); _cache[name] = logger; return logger; } } Logger._internal(this.name); void log(String msg) { if (!mute) print(msg); } } Note: Factory constructors have no access to this. To invoke a factory constructor, you use the new keyword: var logger = new Logger('UI'); logger.log('Button clicked'); Methods Methods are functions that provide behavior for an object. = new... } } Calling an abstract method results in a runtime error. Overridable operators You can override the operators shown in the following table. For example, if you define a Vector class, you might define a + method to add two vectors. Abstract classes Use the abstract modifier to define an abstract class—a class that can’t be instantiated. Abstract classes are useful for defining interfaces, often with some implementation. If you want your abstract class to appear to be instantiable, define a factory constructor.(new Person('Kathy'))); print(greetBob(new Impostor())); } Here’s an example of specifying that a class implements multiple interfaces: class Point implements Comparable, Location { // ··· } } // ··· } To narrow the type of a method parameter or instance variable in code that is type safe, you can use the covariant keyword.. For more information, see the informal nosuchMethod forwarding specification.. For more information, see the Dart Language Specification. Adding features to a class:; } } To implement a mixin, create a class that extends Object, declares no constructors, and has no calls to super. For example: abstract class Musical { bool canPlayPiano = false; bool canCompose = false; bool canConduct = false; void entertainMe() { if (canPlayPiano) { print('Playing piano'); } else if (canConduct) { print('Waving hands'); } else { print('Humming to self'); } } } Note: As of 1.13, two restrictions on mixins have been lifted from the Dart VM: - Mixins allow extending from a class other than Object. - Mixins can call super(). These “super mixins” are not yet supported in dart2js and require the --supermixin flag in dartanalyzer. For more information, see the article Mixins in Dart. Class variables and methods Use the static keyword to implement class-wide variables and methods. Static variables Static variables (class variables) are useful for class-wide state and constants: class Queue { static const int initialCapacity = 16; // ··· } void main() { assert(Queue.initialCapacity == 16); } Static variables aren’t initialized until they’re used. Note: This page follows the style guide recommendation of preferring lowerCamelCase for constant names. Static methods Static methods (class methods) do not operate on an instance, and thus do not have access to this. For example: import 'dart:math'; class Point { num x, y; Point(this.x, this.y); static num distanceBetween(Point a, Point b) { var dx = a.x - b.x; var dy = a.y - b.y; return sqrt(dx * dx + dy * dy); } } void main() { var a = new Point(2, 2); var b = new. ------------------------------------------------------------ Generics If you look at the API documentation for the basic array type, List, you’ll see that the type is actually List<E>. The <…> notation marks List as a generic (or parameterized) type—a type that has formal type parameters. By convention,. Using collection literals List and map literals can be parameterized. Parameterized literals are just like the literals you’ve already seen, except that you add <type> (for lists) or <keyType,valueType> (for maps) before the opening bracket. Here is example of using typed literals: var names = = new List<String>(); names.addAll(['Seth', 'Kathy', 'Lars']); var nameSet = new Set<String>.from(names); The following code creates a map that has integer keys and values of type View: var views = new Map<int, View>(); Generic collections and the types they contain Dart generic types are reified, which means that they carry their type information around at runtime. For example, you can test the type of a collection: var names = new. // T must be SomeBaseClass or one of its descendants. class Foo<T extends SomeBaseClass> { // ··· } class Extender extends SomeBaseClass { // ··· } It’s OK to use SomeBaseClass or any of its subclasses as generic argument: var someBaseClassFoo = new Foo<SomeBaseClass>(); var extenderFoo = new Foo<Extender>(); It’s also OK to specify no generic argument: var foo = new Foo(); Specifying any non- SomeBaseClass type results in an error: var foo = new Foo<Object>();). For more information about generics, see Using Generic Methods. ------------------------------------------------------------ Libraries and visibility The import and library directives can help you create a modular and shareable code base. Libraries not only provide APIs, but are a unit of privacy: identifiers that start with an underscore (_) are visible only inside the library. Every Dart app is a library, even if it doesn’t use a library directive. Libraries can be distributed using packages. See Pub Package and Asset Manager for information about pub, a package manager included in the SDK. Using libraries Use import to specify how a namespace from one library is used in the scope of another library. For example, Dart web apps generally use the dart:html library, which they can import like this: import 'dart:html'; The only required argument to import is a URI specifying the library. For built-in libraries, the URI has the special dart: scheme. For other libraries, you can use a file system path or the package: scheme. The package: scheme specifies libraries provided by a package manager such as the pub tool. For example: import 'package:test/test.dart'; Note: URI stands for uniform resource identifier. URLs (uniform resource locators) are a common kind of URI. Specifying a library prefix If you import two libraries that have conflicting identifiers, then you can specify a prefix for one or both libraries. For example, if library1 and library2 both have an Element class, then you might have code like this: import 'package:lib1/lib1.dart'; import 'package:lib2/lib2.dart' as lib2; // Uses Element from lib1. Element element1 = new Element(); // Uses Element from lib2. lib2.Element element2 = new lib2.Element(); Importing only part of a library If you want to use only part of a library, you can selectively import the library. For example: // Import only foo. import 'package:lib1/lib1.dart' show foo; // Import all names EXCEPT foo. import 'package:lib2/lib2.dart' hide foo; Lazily loading a library Deferred loading (also called lazy loading) allows an application to load a library on demand, if and when it’s needed. Here are some cases when you might use deferred loading: - To reduce an app’s initial startup time. - To perform A/B testing—trying out alternative implementations of an algorithm, for example. - To load rarely used functionality, such as optional screens and dialogs. To lazily load a library, you must first import it using deferred as. import 'package:greetings/hello.dart' deferred as hello; When you need the library, invoke loadLibrary() using the library’s identifier. Future greet() async { await hello.loadLibrary(); hello.printGreeting(); } In the preceding code, the await keyword pauses execution until the library is loaded. For more information about async and await, see asynchrony support. You can invoke loadLibrary() multiple times on a library without problems. The library is loaded only once. Keep in mind the following when you use deferred loading: - A deferred library’s constants aren’t constants in the importing file. Remember, these constants don’t exist until the deferred library is loaded. - You can’t use types from a deferred library in the importing file. Instead, consider moving interface types to a library imported by both the deferred library and the importing file. - Dart implicitly inserts loadLibrary()into the namespace that you define using deferred as namespace. The loadLibrary()function returns a Future. Implementing libraries See Create Library Packages for advice on how to implement a library package, including: - How to organize library source code. - How to use the exportdirective. - When to use the partdirective. ------------------------------------------------------------ Asynchrony support. Handling Futures When you need the result of a completed Future, you have two options: - Use asyncand await. - Use the Future API, as described in the library tour. Code that uses async and await is asynchronous, but it looks a lot like synchronous code. For example, here’s some code that uses await to wait for the result of an asynchronous function: await lookUpVersion(); To use await, code must be in an async function—a function marked as async: Future checkVersion() async { var version = await lookUpVersion(); // Do something with version } Note: Although an async function might perform time-consuming operations, it doesn’t wait for those operations. Instead, the async function executes only until it encounters its first await expression (details). Then it returns a Future object, resuming execution only after the await expression completes. Use try, catch, and finally to handle errors and cleanup in code that uses await: try { version = await lookUpVersion(); } catch (e) { // React to inability to look up the version } You can use await multiple times in an async function. For example, the following code waits three times for the results of functions: var entrypoint = await findEntrypoint(); var exitCode = await runExecutable(entrypoint, args); await flushThenExit(exitCode); In await expression, the value of expression is usually a Future; if it isn’t, then the value is automatically wrapped in a Future. This Future object indicates a promise to return an object. The value of await expression is that returned object. The await expression makes execution pause until that object is available. If you get a compile-time error when using await, make sure await is in an async function. For example, to use await in your app’s main() function, the body of main() must be marked as async: Future main() async { checkVersion(); print('In main: version is ${await lookUpVersion()}'); } Declaring async functions An async function is a function whose body is marked with the async modifier. Adding the async keyword to a function makes it return a Future. For example, consider this synchronous function, which returns a String: String lookUpVersion() => '1.0.0'; If you change it to be an async function—for example, because a future implementation will be time consuming—the returned value is a Future: Future<String> lookUpVersion() async => '1.0.0'; Note that the function’s body doesn’t need to use the Future API. Dart creates the Future object if necessary. If your function doesn’t return a useful value, make its return type Future<void>. Handling Streams When you need to get values from a Stream, you have two options: - Use asyncand an asynchronous for loop ( await for). - Use the Stream API, as described in the library tour. Note: Before using await for, be sure that it makes the code clearer and that you really do want to wait for all of the stream’s results. For example, you usually should not use await for for UI event listeners, because UI frameworks send endless streams of events. An asynchronous for loop has the following form: await for (varOrType identifier in expression) { // Executes each time the stream emits a value. } The value of expression must have type Stream. Execution proceeds as follows: - Wait until the stream emits a value. - Execute the body of the for loop, with the variable set to that emitted value. - Repeat 1 and 2 until the stream is closed. To stop listening to the stream, you can use a break or return statement, which breaks out of the for loop and unsubscribes from the stream. If you get a compile-time error when implementing an asynchronous for loop, make sure the await for is in an async function. For example, to use an asynchronous for loop in your app’s main() function, the body of main() must be marked as async: Future main() async { // ... await for (var request in requestServer) { handleRequest(request); } // ... } For more information about asynchronous programming, in general, see the dart:async section of the library tour. Also see the articles Dart Language Asynchrony Support: Phase 1 and Dart Language Asynchrony Support: Phase 2, and the Dart language specification. ------------------------------------------------------------ Generators When you need to lazily produce a sequence of values, consider using a generator function. Dart has built-in support for two kinds of generator functions: To implement a synchronous generator function, mark the function body as sync*, and use yield statements to deliver values: Iterable<int> naturalsTo(int n) sync* { int k = 0; while (k < n) yield k++; } To implement an asynchronous generator function, mark the function body as async*, and use yield statements to deliver values: Stream<int> asynchronousNaturalsTo(int n) async* { int k = 0; while (k < n) yield k++; } If your generator is recursive, you can improve its performance by using yield*: Iterable<int> naturalsDownFrom(int n) sync* { if (n > 0) { yield n; yield* naturalsDownFrom(n - 1); } } For more information about generators, see the article Dart Language Asynchrony Support: Phase 2. Callable classes To allow your Dart class to be called like a function, implement the call() method. Isolates Most computers, even on. For more information, see the dart:isolate library documentation.. Consider the following code, which doesn’t use a typedef: class SortedCollection { Function compare; SortedCollection(int f(Object a, Object b)) { compare = f; } } // Initial, broken implementation. int sort(Object a, Object b) => 0; void main() { SortedCollection coll = new SortedCollection(sort); // All we know is that compare is a function, // but what type of function? assert(coll.compare is Function); } Type information is lost when assigning f to compare. The type of f is (Object, Object) → int (where → means returns), yet the type of compare is Function. If we change the code to use explicit names and retain type information, both developers and tools can use that information. typedef int Compare(Object a, Object b); class SortedCollection { Compare compare; SortedCollection(this.compare); } // Initial, broken implementation. int sort(Object a, Object b) => 0; void main() { SortedCollection coll = new SortedCollection(sort); assert(coll.compare is Function); assert(coll.compare is Compare); } Note: Currently, typedefs are restricted to function types. We expect this to change. Because typedefs are simply aliases, they offer a way to check the type of any function. For example: typedef int Compare<T>(T a, T b); int sort(int a, int b) => a - b; void main() { assert(sort is Compare<int>); // True! } New function type syntax: Dart 1.24 introduced a new form of function types, the generic function type alias. You might use this feature if you pass around generic methods, define fields that are function types, or define arguments with generic function types. Here’s an example of using the new syntax: typedef F = List<T> Function<T>(T); Metadata Use metadata to give additional information about your code. A metadata annotation begins with the character @, followed by either a reference to a compile-time constant (such as deprecated) or a call to a constant constructor. Two annotations are available to all Dart code: @deprecated and @override. For examples of using @override, see Extending a class. Here’s an example of using the @deprecated annotation: class Television { /// _Deprecated: Use [turnOn] instead._ @deprecated void activate() { turnOn(); } /// Turns the TV's power on. void turnOn() { // ··· } } You can define your own metadata annotations. Here’s an example of defining a @todo annotation that takes two arguments: library todo; class Todo { final String who; final String what; const Todo(this.who, this.what); } And here’s an example of using that @todo annotation: import 'todo.dart'; @Todo('seth', 'make this do something') void doSomething() { print('do something'); } Metadata can appear before a library, class, typedef, type parameter, constructor, factory, function, field, parameter, or variable declaration and before an import or export directive. You can retrieve metadata at runtime using reflection. ------------------------------------------------------------ Dart supports single-line comments, multi-line comments, and documentation comments. Single-line comments A single-line comment begins with //. Everything between // and the end of line is ignored by the Dart compiler. void main() { // TODO: refactor into an AbstractLlamaGreetingFactory? print('Welcome to my Llama farm!'); } Multi-line comments A multi-line comment begins with /* and ends with */. Everything between /* and */ is ignored by the Dart compiler (unless the comment is a documentation comment; see the next section). Multi-line comments can nest. void main() { /* * This is a lot of work. Consider raising chickens. Llama larry = new Llama(); larry.feed(); larry.exercise(); larry.clean(); */ } Documentation comments Documentation comments are multi-line or single-line comments that begin with /// or /**. Using /// on consecutive lines has the same effect as a multi-line doc comment. Inside a documentation comment, the Dart compiler ignores all text unless it is enclosed in brackets. Using brackets, you can refer to classes, methods, fields, top-level variables, functions, and parameters. The names in brackets are resolved in the lexical scope of the documented program element. Here is an example of documentation comments with references to other classes and arguments: /// A domesticated South American camelid (Lama glama). /// /// Andean cultures have used llamas as meat and pack /// animals since pre-Hispanic times. class Llama { String name; /// Feeds your llama [Food]. /// /// The typical llama eats one bale of hay per week. void feed(Food food) { // ... } /// Exercises your llama with an [activity] for /// [timeLimit] minutes. void exercise(Activity activity, int timeLimit) { // ... } } In the generated documentation, [Food] becomes a link to the API docs for the Food class. To parse Dart code and generate HTML documentation, you can use the SDK’s documentation generation tool. For an example of generated documentation, see the Dart API documentation. For advice on how to structure your comments, see Guidelines for Dart Doc Comments. Summary This page summarized the commonly used features in the Dart language. More features are being implemented, but we expect that they won’t break existing code. For more information, see the Dart Language Specification and Effective Dart. To learn more about Dart’s core libraries, see A Tour of the Dart Libraries.
https://my.oschina.net/u/4388296/blog/4005807
CC-MAIN-2020-40
refinedweb
4,044
56.86
Created on 2018-02-04 23:13 by cheryl.sabella, last changed 2018-07-28 13:04 by steve.dower. This issue is now closed. There is a description of the annotation in the Glossary, but with 'function annotation' maybe we could rename 'function annotation' by 'annotation' This is a rather small change, so probably it would be easier to discuss it in a PR. PRs are for discussing proposed text/code. Discussion of whether or not to do this belongs here. (I have no opinion on this issue myself.) I wanted to say implicitly that I like the idea, and that we should figure out details in a PR. But of course if someone is against this, then we should wait with a PR. I'm all for adding a bunch of terms related to type hints/annotations to the Glossary. I just made a PR to start working on the right wording. Guido, now that we are working on this, perhaps you can list what other terms related to type hints/annotations you were thinking for addition. Perhaps you can start with the more important terms from PEP 484 (and perhaps also PEP 483, and then PEP 526 and PEP 544). E.g. start with terms from the ToC of those PEPs. Great, I'll look into them. It will take me some time to make a list of the new terms and write proper descriptions. Perhaps we could deliver the updates to the glossary by waves so people can make benefit of it? As in multiple PR being merged while retaining this issue open until all required terms are added. I'm okay with multiple PRs, but do beware that each PR requires a core dev to open a browser window etc., so try to group them a bit. But no need to wait for the perfect list! New changeset f2290fb19a9b1a5fbeef0971016f72683e8cd1ad by Ivan Levkivskyi (Andrés Delfino) in branch 'master': bpo-32769: Write annotation entry for glossary (GH-6657) Maybe we can consider backporting this to 3.7? Andrés, if you think it makes sense, you can cherry-pick the commit and open a PR against 3.7 branch. Yes, Ivan, I was thinking about that. I think it makes sense, and I'll be glad to do it. Shouldn't we have this on 3.6 also? Yes, all this also applies to 3.6. Before we backport this to 3.7 and 3.6, let's iterate on the wording a bit. I don't think the distinction between annotations and type hints is that annotations are materialized at runtime while type hints aren't. I think syntactically they are the same, but annotations are a slightly more general concept because they may be used for other purposes than to indicate the type of a variable (or argument, attribute etc.). So IMO in def f(): x: int 'int' is both an annotation and a type hint, OTOH in x: 'spam eggs ham' we have an annotation that is not a type hint. > ... but annotations are a slightly more general concept because they may be used for other purposes than to indicate the type of a variable ... Yes, I agree. I see a conflict here in that annotations can be used for other purpouses, for example variable annotations, they are heavily intended for type hinting (PEP 526 says "This PEP aims at adding syntax to Python for annotating the types of variables (including class variables and instance variables), instead of expressing them through comments") I think it's correct to separate annotations from type hints, and say: annotations can be used for type hinting. I can make a new PR based on master having this in mind, and then cherry picking it for 3.7/3.6. Guido, do you really want to stress at this point for Python 3.8 that annotations do not have an intended purpose? That is clearly not true. 1. PEP 526 was created purely with types in mind and was inspired by PEP 484's lack of this functionality. 2. PEP 563 defines a future import which will become the default in a future release (currently marked as 4.0). This default stops evaluating annotations at runtime, which again was inspired by the typing use case. 3. Dataclasses in PEP 557 are designed to leverage types in annotations. A few quotes from that PEP: > A class decorator is provided which inspects a class definition for variables with type annotations as defined in PEP 526, "Syntax for Variable Annotations". > One main design goal of Data Classes is to support static type checkers. The use of PEP 526 syntax is one example of this, but so is the design of the fields() function and the @dataclass decorator. 4. PEP 560 added support for `__class_getitem__` and `__mro_entries__` in core purely for the purpose of generic types. 5. For the above reasons, PEP 563 states that non-typing usage of annotations is deprecated: What I think Guido might mean is that some type annotations are not strictly speaking type hints. For example, `dataclasses.InitVar`, is not really a type, it is just a way to indicate how constructor should be constructed. I could see similar potential features in future (like `typing.Final` discussed recently). Even `typing.ClassVar` I would say is not a type but an access qualifier. Also for me the two terms: annotations and hints are a bit orthogonal, first is a syntax, while second is semantics. I think Guido is right that we should say something like (approximately) `annotation is a syntax to express type hints and other related metadata` or similar. The current formulation seems a bit too restrictive. I actually intended to say that annotations continue to be usable for non-typing purposes (beyond ClassVar/InitVar). It may be deprecated but the usage still exists. This is a glossary, not a manifesto. I'm fine with adding that the main use of annotations is for type hints of course. And I was mostly reacting to the fact that the text that was just committed seemed to imply that 'annotation' refers to something stored at runtime in __annotations__ while 'type hint' was a syntactic form (and this is borne out by the omission of *local* type hints from the section on annotations). Yes, local annotations are important and should be mentioned (maybe even with an example). Hopefully I address your comments with the last PR update. > It may be deprecated but the usage still exists. This is a glossary, not a manifesto. Agreed. However, based on the current wording users will argue that Python documentation itself is stressing the lack of an intended purpose for annotations, equaling the typing use case with all other use cases (including future ones). This isn't our intent. I think it would be clearer to change the current wording from: > A metadata value associated with a variable, a class attribute or a > function or method parameter or return value, that has no specific > purpouse (i.e. it's up to the user to use it as they see fit). > (...) > Annotations can be used to specify :term:`type hints <type hint>`. to: > A metadata value associated with a variable, a class attribute or a > function or method parameter or return value, that stores a > :term:`type hint`. > (...) > Annotations can be used for other purposes unrelated to typing. This > usage is deprecated, see :pep:`563` for details. The `type hint` phrasing is already there, we just need to delete the word "global" that currently appears before "variable". Note that saying that annotations in class attributes and variables have no assigned meaning contradicts PEP 526. Your phrasing still makes it sound like an annotation is a runtime concept -- I think of it as a syntactic construct. (Otherwise fine.) Guido, could you point out what parts make it sound that way to you so I can change them? (1) The word "stores" in this paragraph: A metadata value associated with a global variable, a class attribute or a function or method parameter or return value, that stores a type hint. (2) The description of how annotations are stored in __annotations__ in the following paragraph. (3) The omission of local variables from the lists of things in those paragraphs that can be annotated (those are the only category whose annotation is not stored at runtime). I'm sorry, because of your comment, I believe you haven't read the last update on the PR. Could you take a look at it? New changeset 6e33f810c9e3a549c9379f24cf1d1752c29195f0 by Ivan Levkivskyi (Andrés Delfino) in branch 'master': bpo-32769: A new take on annotations/type hinting glossary entries (GH-6829) New changeset e69657df244135a232117f692640e0568b04e999 by Guido van Rossum (Andrés Delfino) in branch '3.7': [3.7] bpo-32769: A new take on annotations/type hinting glossary entries (GH-6829) (#7127) New changeset 717204ffcccfe04a34b6c4a52f0e844fde3cdebe by Ivan Levkivskyi (Andrés Delfino) in branch '3.6': [3.6] bpo-32769: A new take on annotations/type hinting glossary entries (GH-6829) (GH-7128)
https://bugs.python.org/issue32769
CC-MAIN-2019-43
refinedweb
1,503
62.98
: extern "language_name" declaration ; extern "language_name" { declaration ; declaration ; ... }: extern "C" { void f(); // C linkage extern "C++" { void g(); // C++ linkage extern "C" void h(); // C linkage void g2(); // C++ linkage } extern "C++" void k();// C++ linkage void m(); // C linkage }: extern "C" { #include "header.h" }: #ifdef __cplusplus extern "C" { #endif ... /* body of header */ #ifdef __cplusplus } /* closing brace for extern "C" */ #endif Adding C++ features to C structs Suppose you want to make it easier to use a C library in your C++ code. And suppose that instead of using C-style access you might want to add member functions, maybe virtual functions, possibly derive from the class, and so on. How can you accomplish this transformation and ensure the C library functions can still recognize the struct? Consider the uses of the C struct buf in the following example: struct buf { char* data; unsigned count; }; void buf_clear(struct buf*); int buf_print(struct buf*); /* return status, 0 means fail */ int buf_append(struct buf*, const char*, unsigned count); /* same return */ You want to turn this struct into a C++ class and make it easier to use with the following changes: extern "C" { #include "buf.h" } class mybuf { // first attempt -- will it work? public: mybuf() : data(0), count(0) { } void clear() { buf_clear((buf*)this); } bool print() { return buf_print((buf*)this); } bool append(const char* p, unsigned c) { return buf_append((buf*)this, p, c); } private: char* data; unsigned count; }; The interface to the class mybuf looks more like C++ code, and can be more easily integrated into an Object-Oriented style of programming -- if it works. What happens when the member functions pass the this pointer to the buf functions? Does the C++ class layout match the C layout? Does the this pointer point to the data member, as a pointer to buf does? What if you add virtual functions to mybuf? The C++ standard makes no promises about the compatibility of buf and class mybuf. This code, without virtual functions, might work, but you can't count on it. If you add virtual functions, the code will fail using compilers that add extra data (such as pointers to virtual tables) at the beginning of a class. The portable solution is to leave struct buf strictly alone, even though you would like to protect the data members and provide access only through member functions. You can guarantee C and C++ compatibility only if you leave the declaration unchanged. You can derive a C++ class mybuf from the C struct buf, and pass pointers to the buf base class to the mybuf functions. If a pointer to mybuf doesn't point to the beginning of the buf data, the C++ compiler will adjust it automatically when converting a mybuf* to a buf*. The layout of mybuf might vary among C++ compilers, but the C++ source code that manipulates mybuf and buf objects will work everywhere. The following example shows a portable way to add C++ and Object-Oriented features to a C struct. extern "C" { #include "buf.h" } class mybuf : public buf { // a portable solution public: mybuf() : data(0), count(0) { } void clear() { buf_clear(this); } bool print() { return buf_print(this); } bool append(const char* p, unsigned c) { return buf_append(this, p, c); } }; C++ code can freely create and use mybuf objects, passing them to C code that expects buf objects, and everything will work together. Of course, if you add data to mybuf, the C code won't know anything about it. That's a general design consideration for class hierarchies. You also have to take care to create and delete buf and mybuf objects consistently. It is safest to let C code delete (free) an object if it was created by C code, and not allow C code to delete a mybuf object. If you declare a C++ function to have C linkage, it can be called from a function compiled by the C compiler. A function declared to have C linkage can use all the features of C++, but its parameters and return type must be accessible from C if you want to call it from C code. For example, if a function is declared to take a reference to an IOstream class as a parameter, there is no (portable) way to explain the parameter type to a C compiler. The C language does not have references or templates or classes with C++ features. Here is an example of a C++ function with C linkage: #include <iostream> extern "C" int print(int i, double d) { std::cout << "i = " << i << ", d = " << d; } You can declare function print in a header file that is shared by C and C++ code: #ifdef __cplusplus extern "C" #endif int print(int i, double d); You can declare at most one function of an overloaded set as extern "C" because only one C function can have a given name. If you need to access overloaded functions from C, you can write C++ wrapper functions with different names as the following example demonstrates: int g(int); double g(double); extern "C" int g_int(int i) { return g(i); } extern "C" double g_double(double d) { return g(d); } Here is the example C header for the wrapper functions: int g_int(int); double g_double(double); You also need wrapper functions to call template functions because template functions cannot be declared as extern "C": template<class T> T foo(T t) { ... } extern "C" int foo_of_int(int t) { return foo(t); } extern "C" char* foo_of_charp(char* p) { return foo(p); } C++ code can still call the the overloaded functions and the template functions. C code must use the wrapper functions. Can you access a C++ class from C code? Can you declare a C struct that looks like a C++ class and somehow call member functions? The answer is yes, although to maintain portability you must add some complexity. Also, any modifications to the definition of the C++ class you are accessing requires that you review your C code. Suppose you have a C++ class such as the following: class M { public: virtual int foo(int); // ... private: int i, j; }; You cannot declare class M in your C code. The best you can do is to pass around pointers to class M objects, similar to the way you deal with FILE objects in C Standard I/O. You can write extern "C" functions in C++ that access class M objects and call them from C code. Here is a C++ function designed to call the member function foo: extern "C" int call_M_foo(M* m, int i) { return m->foo(i); } Here is an example of C code that uses class M: struct M; /* you can supply only an incomplete declaration */ int call_M_foo(struct M*, int); /* declare the wrapper function */ int f(struct M* p, int j) /* now you can call M::foo */ { return call_M_foo(p, j); } You can use C Standard I/O, from the standard C header <stdio.h>, in C++ programs because C Standard I/O is part of C++. Any considerations about mixing IOstream and Standard I/O in the same program therefore do not depend on whether the program contains C code specifically. The issues are the same for purely C++ programs that use both Standard I/O and IOstreams. Sun C and C++ use the same C runtime libraries, as noted in the section about compatible compilers. Using Sun compilers, you can therefore use Standard I/O functions freely in both C and C++ code in the same program. The C++ standard says you can mix Standard I/O functions and IOstream functions on the same target "stream", such as the standard input and output streams. But C++ implementations vary in their compliance. Some systems require that you call the sync_with_stdio() function explicitly before doing any I/O. Implementations also vary in the efficiency of I/O when you mix I/O styles on the same stream or file. In the worst case, you get a system call per character input or output. If the program does a lot of I/O, the performance might be unacceptable. The safest course is to stick with Standard I/O or IOstream styles on any given file or standard stream. Using Standard I/O on one file or stream and IOstream on a different file or stream does not cause any problems. A pointer to a function must specify whether it points to a C function or to a C++ function, because it is possible that C and C++ functions use different calling conventions. Otherwise, the compiler does not know which kind of function-calling code to generate. Most systems do not have have different calling conventions for C and C++, but C++ allows for the possibility. You therefore must be careful about declaring pointers to functions, to ensure that the types match. Consider the following example: typedef int (*pfun)(int); // line 1 extern "C" void foo(pfun); // line 2 extern "C" int g(int) // line 3 ... foo( g ); // Error! // line 5 Line 1 declares pfun to point to a C++ function, because it lacks a linkage specifier. Line 2 therefore declares foo to be a C function that takes a pointer to a C++ function. Line 5 attempts to call foo with a pointer to g, a C function, a type mis-match. Be sure to match the linkage of a pointer-to-function with the functions to which it will point. In the following corrected example, all declarations are inside extern "C" brackets, ensuring that the types match. extern "C" { typedef int (*pfun)(int); void foo(pfun); int g(int); } foo( g ); // now OK Pointers to functions have one other subtlety that occasionally traps programmers. A linkage specification applies to all the parameter types and to the return type of a function. If you use the elaborated declaration of a pointer-to-function in a function parameter, a linkage specification on the function applies to the pointer-to-function as well. If you declare a pointer-to-function using a typedef, the linkage specification of that typedef is not affected by using it in a function declaration. For example, consider this code: typedef int (*pfn)(int); extern "C" void foo(pfn p) { ... } // definition extern "C" void foo( int (*)(int) ); // declaration The first two lines might appear in a program file, and the third line might appear in a header where you don't want to expose the name of the private typedef. Although you intended for the declaration of foo and its definition to match, they do not. The definition of foo takes a pointer to a C++ function, but the declaration of foo takes a pointer to a C function. The code declares a pair of overloaded functions. To avoid this problem, use typedefs consistently in declarations, or enclose the typedefs in appropriate linkage specifications. For example, assuming you wanted foo to take a pointer to a C function, you could write the definition of foo this way: extern "C" { typedef int (*pfn)(int); void foo(pfn p) { ... } } Propagating Exceptions What happens if you call a C++ function from a C function, and the C++ function throws an exception? The C++ standard is somewhat vague about whether you can expect exceptions to behave properly, and on some systems you have to take special precautions. Generally, you must consult the user manuals to determine whether the code will work properly. No special precautions are necessary with Sun C++. The exception mechanism in Sun C++ does not affect the way functions are called. If a C function is active when a C++ exception is thrown, the C function is passed over in the process of handling the exception. Mixing Exceptions with set_jmp and long_jmp The best advice is not to use long_jmp in programs that contain C++ code. The C++ exception mechanism and C++ rules about destroying objects that go out of scope are likely to be violated by a long_jmp, with unpredictable results. Some compilers integrate exceptions and long_jmp, allowing them to work together, but you should not depend on such behavior. Sun C++ uses the same set_jmp and long_jmp as the C compiler. Many C++ experts believe that long_jmp should not be integrated with exceptions, due to the difficulty of specifying exactly how it should behave. If you use long_jmp in C code that you are mixing with C++, ensure that a long_jmp does not cross over an active C++ function. If you cannot ensure that, see if you can compile that C++ code with exceptions disabled. You still might have problems if the destructors of local objects are bypassed. At one time, most C++ compilers required that function main be compiled by the C++ compiler. That requirement is not common today, and Sun C++ does not require it. If your C++ compiler needs to compile the main function but you cannot do so for some reason, you can change the name of the C main function and call it from a wrapper version of C++ main. For example, change the name of the C main function to C_main, and write this C++ code: extern "C" int C_main(int, char**); // not needed for Sun C++ int main(int argc, char** argv) { return C_main(argc, argv); } Of course, C_main must be declared in the C code to return an int, and it will have to return an int value. As noted above, you do not need to go to this trouble with Sun C++. Even if your program is primarily C code but makes use of C++ libraries, you need to link C++ runtime support libraries provided with the C++ compiler into your program. The easiest and best way to do that is to use the C++ compiler driver to do the linking. The C++ compiler driver knows what libraries to link, and the order in which to link them. The specific libraries can depend on the options used when compiling the C++ code. Suppose you have C program files main.o, f1.o, and f2.o, and you use a C++ library helper.a. With Sun C++, you would issue the command CC -o myprog main.o f1.o f2.o helper.a The necessary C++ runtime libraries like libCrun and libCstd are linked automatically. The documentation for helper.a might require that you use additional link-time options. If you can't use the C++ compiler for some reason, you can use the -dryrun option of the CC command to get the list of commands the compiler issues, and capture them into a shell script. Since the exact commands depend on command-line options, you should review the output from -dryrun with any change of the command line. Steve Clamage has been at Sun since 1994. He is currently technical lead for the C++ compiler and the Sun ONE Studio Compiler Collection. He has been chair of the ANSI C++ Committee since 1995.
http://developers.sun.com/solaris/articles/mixing.html
crawl-002
refinedweb
2,484
67.38
A class for thread synchronization and mutual exclusion. More... #include <yarp/os/Semaphore.h> A class for thread synchronization and mutual exclusion. A semaphore has an internal counter. Multiple threads can safely increment or decrement that counter. If one thread attempts to decrement the counter below zero, it must wait for another thread to first increment it. This is a useful primitive for regulating thread interaction. Definition at line 28 of file Semaphore.h. Constructor. Sets the initial value of the counter. Definition at line 15 of file Semaphore.cpp. Destructor. Definition at line 20 of file Semaphore.cpp. Decrement the counter, unless that would require waiting. If the counter would decrement below zero, this method simply returns without doing anything. Definition at line 35 of file Semaphore.cpp. Increment the counter. If another thread is waiting to decrement the counter, it is woken up. Definition at line 39 of file Semaphore.cpp. Decrement the counter, even if we must wait to do that. If the counter would decrement below zero, the calling thread must stop and wait for another thread to call Semaphore::post on this semaphore. Definition at line 27 of file Semaphore.cpp. Try to decrement the counter, even if we must wait - but don't wait forever. This method wiill wait for at most timeoutInSeconds seconds (this can be fractional). If the counter has not been decremented within that interval, the method will return false. Definition at line 31 of file Semaphore.cpp. Definition at line 79 of file Semaphore.h.
http://yarp.it/classyarp_1_1os_1_1Semaphore.html
CC-MAIN-2017-43
refinedweb
255
69.68
This tutorial shows how to put the ESP32 in deep sleep mode and wake it up using different wake up sources using MicroPython firmware. We’ll cover timer wake up and external wake up. The ESP32 can also be awaken from deep sleep using the touch pins by defining a threshold. Although this feature is already implemented in MicroPython, it is not working properly at the time of writing this tutorial. We’ll update this article as soon as it is working. Deep sleep with the ESP8266 works a bit different than with the ESP32. So, we’ve created a dedicated article to deep sleep with ESP8266 using MicroPython. If you have an ESP8266, we recommend reading our MicroPython ESP8266 Deep Sleep and Wake Up Sources Guide.) Learn more about MicroPython: MicroPython Programming with ESP32 and ESP8266 eBook Introducing Deep Sleep Having your ESP32/ESP8266 running on active mode with batteries it’s not ideal, since the power from batteries will drain very quickly. If you put your ESP32 in deep sleep mode, it will reduce the power consumption and your batteries will last longer. Having the ESP32 in deep sleep mode means cutting with the activities that consume more power while operating but leave just enough activity to wake up the processor when something interesting happens. When operating in deep sleep mode, the ESP32 have a current consumption on the μA range. With a custom and carefully designed board you can get a minimal comsumption of only 5 μA. However, if you use a full-feature ESP32 development board with built-in programmer, on-board LEDs, and so on (like the ESP32 DOIT board) you won’t be able to achieve such a low power state, but you can still save power. Wake Up Sources After putting the ESP32 into deep sleep mode, there are several ways to wake it up: - You can use the timer: waking up the ESP32 after predefined periods of time. - You can use an external wake up: this means the ESP32 can wake up when a change in the state of a pin occurs. - You can use the touch pins: implemented, but not working as expected at the time of writing, so we won’t cover this at the moment; - You can use the ULP co-processor to wake up: we haven’t tested this feature yet. Timer Wake Up The ESP32 can go into deep sleep mode, and then wake up at predefined periods of time. This feature is especially useful if you are running projects that require time stamping or daily tasks, while maintaining low power consumption. To put the ESP32 in deep sleep mode for a predetermined number of seconds, you just have to use the deepsleep() function from the machine module. This function accepts as arguments, the sleep time in milliseconds as follows: machine.deepsleep(sleep_time_ms) Let’s look at a simple example to see how it works. In the following code, the ESP32 is in deep sleep mode for 10 seconds, then it wakes up, blinks an LED, and goes back to sleep. # Complete project details at from machine import deepsleep from machine import Pin from time import sleep led = Pin (2, Pin.OUT) sleep(10000) How the Code Works First, import the necessary libraries: import machine from machine import Pin from time import sleep Create a Pin object that refers to GPIO 2 called led. This refers to the on-board LED. led = Pin (2, Pin.OUT) The following commands blink the LED. led.value(1) sleep(1) led.value(0) sleep(1) In this case, we’re blinking an LED for demonstration purposes, but the idea is to add your main code in this section of the script. Before going to sleep, we add a delay of 5 seconds and print a message to indicate that it’s going to sleep. sleep(5) print('Im awake, but Im going to sleep') It’s important to add a 5 seconds delay before going to sleep when we are developing the scripts. When you want to upload a new code to the board, it needs to be awake. So, if you don’t have the delay, it will be difficult to catch it awake to upload new code later on. After having the final code, you can delete that delay. Finally, put the ESP32 in deep sleep for 10 seconds (10 000 milliseconds). machine.deepsleep(10000) After 10 seconds, the ESP32 wakes up and runs the code from the start, similarly to when you press the EN/RST button. Demonstration Copy the code provided to the main.py file of your ESP32. Upload the new code and press the EN/RST button after uploading. The ESP32 should blink the on-board LED and print a message. Then, it goes to sleep. This is repeated over and over again. If you don’t know how to upload the script follow this tutorial if you’re using Thonny IDE, or this one if you’re using uPyCraft IDE. External Wake Up The ESP32 can also be awaken from sleep when there is a change on the state of a pin. There are two possibilities of external wake up with the ESP32: ext0 and ext1. The ext0 mode allows you to use one GPIO as a wake up source. The ext1 mode allows you to set more than one GPIO as a wake up source at the same time. Only RTC GPIOs can be used as a wake up source. The RTC GPIOs are highlighted with an orange rectangle box in the next diagram. Learn more about the ESP32 GPIOs: ESP32 Pinout Reference: Which GPIO pins should you use? External wake up – ext0 To illustrate how to use the external wake up ext0, we’ll use one pushbutton as a wake up source. The ESP32 awakes when you press the pushbutton. For this example, you need the following components: You can use the preceding links or go directly to MakerAdvisor.com/tools to find all the parts for your projects at the best price! Schematic Diagram Wire the circuit by following the next schematic diagram. In this example, we’re using GPIO14 to wake up the ESP32, but you can use any other RTC GPIO. Script The following script shows how ext0 works: it uses one GPIO as an external wake up source. # Complete project details at import esp32 from machine import Pin from machine import deepsleep from time import sleep wake1 = Pin(14, mode = Pin.IN) #level parameter can be: esp32.WAKEUP_ANY_HIGH or esp32.WAKEUP_ALL_LOW esp32.wake_on_ext0(pin = wake1, level = esp32.WAKEUP_ANY_HIGH) #your main code goes here to perform a task print('Im awake. Going to sleep in 10 seconds') sleep(10) print('Going to sleep now') deepsleep() How the code works First, you need to import the necessary modules. You need to import the esp32 module that contains the methods to set a pin as a wake up source. After importing the necessary modules, define a wake up pin. In this case we’re using GPIO14 and we call it wake1. This GPIO should be set as an input (Pin.IN). wake1 = Pin(14, mode = Pin.IN) Then, set ext0 as a wake up source using the wake_on_ext0() method as follows: esp32.wake_on_ext0(pin = wake1, level = esp32.WAKEUP_ANY_HIGH) The wake_on_ext0() method accepts as arguments the pin and the level: - pin: an object of type Pin (the GPIO that acts as a wake up source) - level: defines the state of the GPIO that wakes up the ESP32. The level can be one of the following parameters: - WAKEUP_ANY_HIGH - WAKEUP_ALL_LOW In this case, we’re using the WAKEUP_ANY_HIGH method that wakes up the ESP32 when the GPIO goes HIGH. Your main code to execute a task should go after defining the wake up source and right before going to sleep. We add a 10 second delay before going to sleep. To put the ESP32 into deep sleep, you just need to use the deepsleep() method as follows: machine.deepsleep() Demonstration Upload the code to the ESP32 main.py file. Press the EN/RESET button to run the code. The ESP32 should go to sleep. Now, you can press the button to wake it up from deep sleep. External wake up – ext1 External wake up ext1 works very similar to ext0 but it allows you to set more than one GPIO as a wake up source. To demonstrate how it works we’ll use two pushbuttons wired to different GPIOs. For this example, you need the following components: Schematic Diagram Wire the circuit by following the next schematic diagram. In this case, we’re using GPIO14 and GPIO12. You can use any other suitable GPIOs, but these need to be RTC GPIOs, otherwise this method doesn’t work. Script The following script shows how ext1 works: it uses two GPIOs as an external wake up source, but you can use more if you want. # Complete project details at import esp32 from machine import deepsleep from machine import Pin from time import sleep wake1 = Pin(14, mode = Pin.IN) wake2 = Pin(12, mode = Pin.IN) #level parameter can be: esp32.WAKEUP_ANY_HIGH or esp32.WAKEUP_ALL_LOW esp32.wake_on_ext1(pins = (wake1, wake2), level = esp32.WAKEUP_ANY_HIGH) #your main code goes here to perform a task print('Im awake. Going to sleep in 10 seconds') sleep(10) print('Going to sleep now') deepsleep() How the code works The code is similar to the ext0 example, but it uses the wake_on_ext1() method instead. esp32.wake_on_ext1(pins = (wake1, wake2), level = esp32.WAKEUP_ANY_HIGH) The wake_on_ext1() method accepts as arguments the pins and the level: - pins: a tupple or list with objects of type Pin (the GPIOs that act as a wake up source) - level: defines the state of the GPIOs that will wake up the ESP32. The level can be one of the following parameters: - WAKEUP_ANY_HIGH - WAKEUP_ALL_LOW After defining the wake up source, you can put the ESP32 in deep sleep: machine.deepsleep() Wrapping Up We hope you’ve found this tutorial about deep sleep with the ESP32 using MicroPython useful. We’ve covered timer wake up and external wake up. There is also the possibility to wake up the ESP32 using the touch pins. Although this is already available in the latest MicroPython firmware, it is not working as expected, so we didn’t include it in the tutorial. In the meanwhile, you can use ESP32 Touch Wake Up with Arduino IDE. If you want to learn more about programming the ESP32 and ESP8266 boards with MicroPython, take a look our eBook: MicroPython Programming with ESP32 and ESP8266. We have other tutorials related with deep sleep that you might be interested: - Low Power Weather Station Datalogger (MicroPython) - ESP32 Deep Sleep and Wake Up Sources (Arduino IDE) - ESP8266 Deep Sleep (Arduino IDE) - ESP8266 Deep Sleep and Wake Up Sources (MicroPython) Thanks for reading. 3 thoughts on “MicroPython: ESP32 Deep Sleep and Wake Up Sources” Hello, Thanks for a great tutorial. I followed it and noticed my ESP8266 NodeMCU v3 is waking for a random number ot times and after that stops sending messages (mqtt). In a debug.log which I created I see the last entry just before machine.deepsleep(10000) command. And strange thing is that in theory it should be put into sleep but I can connect it using rshell so it is waken up but not rinning the code (sending mqtt). Have no idea what is wrong :/ No real difference in DeepSleep between the ESP32 and 8266 except that the reset pin (RST) needs to be physically connected to GPIO16 (D2 on the board). What irritates me is that on the ESP32 the PowerLED doesn’t turn off in DeepSleep which renders it pretty much useless for long term battery operation. If anyone knows how to turn it off I’d love to know. Set the led pin value to 0 to turn it off, then hold the state. Example: run this at the beginning of your code esp32 built in led pin==2 led = machine.Pin(2, machine.Pin.OUT, None) run this at the end of your code before running deep sleep led.value(0) # turn light off led = machine.Pin(2, machine.Pin.OUT, machine.Pin.PULL_HOLD)
https://randomnerdtutorials.com/micropython-esp32-deep-sleep-wake-up-sources/
CC-MAIN-2022-21
refinedweb
2,040
71.34
When an object is retrieved from the datastore by JDO 3 types of "fetch groups" to consider The fetch group in use for a class is controled via the FetchPlan interface. To get a handle on the current FetchPlan we do FetchPlan fp = pm.getFetchPlan(); JDO provides an initial fetch group, comprising the fields that will be retrieved when an object is retrieved if the user does nothing to define the required behaviour. By default the default fetch group comprises all fields of the following types :- If you wish to change the Default Fetch Group for a class you can update the Meta-Data for the class as follows (for XML) <class name="MyClass"> ... <field name="fieldX" default- </class> or using annotations @Persistent(defaultFetchGroup="true") SomeType fieldX; When a PersistenceManager is created it starts with a FetchPlan of the "default" fetch group. That is, if we call Collection fetchGroups = fp.getGroups(); this will have one group, called "default". At runtime, if you have been using other fetch groups and want to revert back to the default fetch group at any time you simply do fp.setGroup(FetchPlan.DEFAULT); As mentioned above, JDO2 allows specification of users own fetch groups. These are specified in the MetaData of the class. For example, if we have the following class class MyClass { String name; HashSet coll; MyOtherClass other; } and we want to have the other field loaded whenever we load objects of this class, we define our MetaData as <package name="mydomain"> <class name="MyClass"> <field name="name"> <column length="100" jdbc- </field> <field name="coll" persistence- <collection element- <join/> </field> <field name="other" persistence- <fetch-group <field name="other"/> </fetch-group> </class> </package> or using annotations @PersistenceCapable @FetchGroup(name="otherfield", members={@Persistent(name="other")}) public class MyClass { ... } So we have defined a fetch group called "otherfield" that just includes the field with name other. We can then use this at runtime in our persistence code. PersistenceManager pm = pmf.getPersistenceManager(); pm.getFetchPlan().addGroup("otherfield"); ... (load MyClass object) By default the FetchPlan will include the default fetch group. We have changed this above by adding the fetch group "otherfield", so when we retrieve an object using this Persistence JDO 1.0 The FetchPlan applies not just to calls to PersistenceManager.getObjectById(), but also to PersistenceManager.newQuery(), PersistenceManager.getExtent(), PersistenceManager.detachCopy and much more besides. To read more about named fetch-groups and how to use it with attach/detach you can look at our Tutorial on DAO Layer design. The mechanism above provides static fetch groups defined in XML or annotations. That is great when you know in advance what fields you want to fetch. In some situations you may want to define your fields to fetch at run time. This became standard in JDO2.2 (was previously a DataNucleus extension). It operates as follows import org.datanucleus.FetchGroup; // Create a FetchGroup on the PMF called "TestGroup" for MyClass FetchGroup grp = myPMF.getFetchGroup(MyClass.class, "TestGroup"); grp.addMember("field1").addMember("field2"); // Add this group to the fetch plan (using its name) fp.addGroup("TestGroup"); So we use the DataNucleus PMF as a way of creating a FetchGroup, and then register that FetchGroup with the PMF for use by all PMs. We then enable our FetchGroup for use in the FetchPlan by using its group name (as we do for a static group). The FetchGroup allows you to add/remove the fields necessary so you have full API control over the fields to be fetched. The basic fetch group defines which fields are to be fetched. It doesn't explicitly define how far down an object graph is to be fetched. JDO2 provides two ways of controlling this. The first is to set the maxFetchDepth for the FetchPlan. This value specifies how far out from the root object the related objects will be fetched. A positive value means that this number of relationships will be traversed from the root object. A value of -1 means that no limit will be placed on the fetching traversal. The default is 1. Let's take an example public class MyClass1 { MyClass2 field1; ... } public class MyClass2 { MyClass3 field2; ... } public class MyClass3 { MyClass4 field3; ... } and we want to detach field1 of instances of MyClass1, down 2 levels - so detaching the initial "field1" MyClass2 object, and its "field2" MyClass3 instance. So we define our fetch-groups like this <class name="MyClass1"> ... <fetch-group <field name="field1"/> </fetch-group> </class> <class name="MyClass2"> ... <fetch-group <field name="field2"/> </fetch-group> </class> and we then define the maxFetchDepth as 2, like this pm.getFetchPlan().setMaxFetchDepth(2); A further refinement to this global fetch depth setting is to control the fetching of recursive fields. This is performed via a MetaData setting "recursion-depth". A value of 1 means that only 1 level of objects will be fetched. A value of -1 means there is no limit on the amount of recursion. The default is 1. Let's take an example public class Directory { Collection children; ... } <class name="Directory"> <field name="children"> <collection element- </field> <fetch-group <field name="children" recursion- </fetch-group> ... </class> So when we fetch a Directory, it will fetch 2 levels of the children field, hence fetching the children and the grandchildren. A FetchPlan can also be used for defining the fetching policy when using queries. This can be set using pm.getFetchPlan().setFetchSize(value); The default is FetchPlan.FETCH_SIZE_OPTIMAL which leaves it to DataNucleus to optimise the fetching of instances. A positive value defines the number of instances to be fetched. Using FetchPlan.FETCH_SIZE_GREEDY means that all instances will be fetched immediately.
http://www.datanucleus.org/products/accessplatform_4_0/jdo/fetchgroup.html
CC-MAIN-2016-22
refinedweb
931
65.73
We want to make a hello world program that would do the following: - Allocate a console. - Set the console title to "HelloWorldProgram" . - Get and save the standard output handle. - Calculate the length of the "Hello World! \r\n" message. - Print the "Hello World! \r\n" message. - Set the console cursor position to (0, 15). - Wait 2000 milliseconds (2 seconds). - Free the console allocated earlier. - Exit, returning 0. Hello World - The Code Okay, it's time to write the hello world program. The comments within the code do a lot of the explanation of the code. So here's the code: ;; When symbols are not defined within our program, we need to use 'extern', to tell NASM that those will be assigned when the program is linked. ;; These are the symbols for the Win32 API import functions we will use. extern GetStdHandle extern WriteFile extern AllocConsole extern FreeConsole extern SetConsoleTitleA extern SetConsoleCursorPosition extern Sleep extern ExitProcess ;; Now, we need a symbol import table, so that we can import Win32 API functions from their DLLs. ;; Note, though, that some functions have ANSI and unicode versions; for those, a name suffix is ;; required (ie "<function_name>A" for ANSI, and "<function_name>W" for unicode; SetConsoleTitleA ;; is an example of one). import GetStdHandle kernel32.dll import WriteFile kernel32.dll import AllocConsole kernel32.dll import FreeConsole kernel32.dll import SetConsoleTitleA kernel32.dll import SetConsoleCursorPosition kernel32.dll import Sleep kernel32.dll import ExitProcess kernel32.dll ;; Here, we tell NASM to put the following stuff into the code section of the program. ;; The 'use32' tells NASM to use 32-bit code, and not 16-bit code. section .text use32 ;; The '..start:' special symbol tells NASM (and, later on, the linker) that this is ;; where the program entry point is. This is where the instruction pointer will point ;; to, when the program starts running. ..start: ;; Since this is a Windows subsystem program, we need to allocate a console, ;; in order to use one. ;; Note how we use 'AllocConsole' as if it was a variable. 'AllocConsole', to ;; NASM, means the address of the AllocConsole "variable" ; but since the ;; pointer to the AllocConsole() Win32 API function is stored in that ;; variable, we need to call the address from that variable. ;; So it's "call the code at the address: whatever's at the address AllocConsole" . call [AllocConsole] ;; Here, we push the address of 'the_title' to the stack. push dword the_title ;; And we call the SetConsoleTitleA() Win32 API function. call [SetConsoleTitleA] ;; We push -11 (yes, that's legal, it basically means 0 - 11, in two's complement), ;; which is the Windows constant for STD_OUTPUT_HANDLE, to the stack. push dword -11 ;; Then we call the GetStdHandle. call [GetStdHandle] ;; The Win32 API functions return the result in the EAX register. ;; Therefore, to save the return, we need to save the value in EAX. ;; Here, we move the value from EAX to [hStdOut] ("to the memory location ;; at the address hStdOut"). mov dword [hStdOut], eax ;; We move the address of msg_len to EAX. mov eax, msg_len ;; Then we subtract the address of msg from EAX, to get the ;; size of the msg variable, since msg_len comes right after msg. sub eax, msg ;; Since there's a trailing 0, the actual text is really 1 byte less. ;; So we decrement (or subtract 1 from) EAX. dec eax ;; Then we save that result in the msg_len variable. mov dword [msg_len], eax ;; WriteFile() has 5 parameters. ;; When we call a function in assembly language, we push the parameters ;; to the stack in backwards order, so that it's easier for the function ;; we're calling to access these parameters, because for that function ;; the parameters will actually be in the correct order, because of ;; the way the Intel procedure stack works. ;; The fifth parameter is usually 0. push dword 0 ;; The fourth parameter is the address of the variable where we want ;; the actual number of bytes written (or read, for ReadFile()) saved. push dword nBytes ;; The third parameter is the number of bytes to write (or read, for ReadFile()). push dword [msg_len] ;; The second parameter is the pointer to the buffer where ;; the text to write (or read, for ReadFile()), is located. push dword msg ;; The first parameter is the handle to the file we want to write to ;; (or read from, for the ReadFile() function). push dword [hStdOut] ;; Then we call the Win32 API WriteFile() function. call [WriteFile] ;; It's time to set the console cursor position. ;; We want to set the high-order part of EAX to ;; the new Y coordinate of the console cursor ;; and the low-order part of EAX to the new ;; X coordinate of the console cursor. ;; Set the low-order part of EAX to 15. mov ax, 15 ;; Shift the bits in EAX left, so that ;; the high-order part of EAX is 15, now. shl eax, 16 ;; EAX is 32 bits in size, so it would make sense to ;; shift things by 16 bits. ;; Note that AX is not 15 anymore, because the 15 ;; has been shifted. AX should be 0, at this point. ;; Set the low-order part of EAX to 0. mov ax, 0 ;; I know it's kind of silly to set AX to 0 if it ;; should already be 0, but we do that anyway. ;; The second parameter to the SetConsoleCursorPosition() function ;; is a COORD structure (that we just made) for the new ;; position of the console cursor. push eax ;; The first parameter is the standard output handle of the console ;; of which we want to set the cursor. push dword [hStdOut] ;; Then we call the Win32 API SetConsoleCursorPosition() function. ;; It's the same thing as pushing the EIP and jumping to the ;; function, but we can't directly access EIP, so we have to ;; use the CALL instruction. call [SetConsoleCursorPosition] ;; Sleep() is a Win32 API function that suspends the execution of ;; the current code for a number of milliseconds that we specify. ;; So we specify 2000 milliseconds (2 seconds). push dword 2000 ;; And we call the Sleep() function. call [Sleep] ;; When we're done using the console, we need to free it, ;; if we were the ones who allocated it. ;; Same applies for other resources, such as file handles ;; and memory pointers; like if we open a file, we need to ;; close the handle after we're done using the file. call [FreeConsole] ;; XOR reg, reg is a way to clear reg, so that it's 0. xor eax, eax ;; We pass EAX (which is 0) to ExitProcess(). push eax ;; Then we call the ExitProcess() Win32 API function. call [ExitProcess] ;; Now we tell NASM that this next stuff is supposed to go ;; into the data section. section .data ;; We define the_title, and initialize it to "HelloWorldProgram" the_title db "HelloWorldProgram", 0 ;; Now we define msg, and initialize that to "Hello World! \r\n" msg db "Hello World! ", 13, 10, 0 ;; Note that 13 means "\r" and 10 means "\n" ;; 13 is the ASCII code for carriage return (CR) ;; and 10 is the ASCII code for line feed (LF) ;; CRLF is the character combination used for ;; new lines (or at least under DOS/Windows). ;; Since msg_len has to come right after msg, ;; in order for us to get correct results in ;; the above code, we have to define it right ;; after msg and in the data section. ;; We can initialize it to whatever we want, ;; since it will be changed, later on, anyway. ;; I decided to initialize it to 0. msg_len dd 0 ;; Here we tell NASM that the following is for the bss section. section .bss ;; We reserve 1 double-word for hStdOut. hStdOut resd 1 ;; And we reserve 1 double-word for nBytes. nBytes resd 1 Assembling And Linking - The Command-Line Now time to open a command-prompt window. Opening Command Prompt If you're using Notepad++, open the .asm file with it, go to the 'Run' menu, and choose 'Open current dir cmd' . Else, open command prompt and change to the directory where you have the .asm file saved. If you don't want to do that, open notepad (the one that comes with Windows), or a similar text editor, and type: cmdThen save that file into the same directory (folder) that contains the .asm file. For the filename, type something like "cmd_dir.bat" (preferably with the quotes), and make sure you save it as a text document/file (if you're using an editor such as WordPad). Then go to that folder and open the cmd_dir.bat that you just saved. Building The Project You should now have a command prompt window open to the directory with the .asm file. (In the command prompt window:) To assemble a file, you type: "\nasm\nasm -fobj the_file.asm" (without the quotes, replace the_file.asm with the filename of the .asm file). To link a file or files, type: "\alink\alink -oPE the_files" (without the quotes, replace the_files with a space-separated list of files to link for the current project). If you want to specify an output filename for linking, use: "\alink\alink -oPE the_files -o the_file.exe" , instead, and instead of the_file.exe use the filename you want ALINK to save the PE as. If you installed NASM or ALINK to a different directory, let's say to "F:\files", then add the directory name right before the command: "F:\files\alink\alink -oPE ..." For our hello world example, I saved the .asm file as "hello.asm" . Then I used these commands to assemble and link the program: \nasm\nasm -fobj hello.asm \alink\alink -oPE hello.obj And "hello.obj" with "hello.exe" appeared in the folder where "hello.asm" was saved. The Output Here's somewhat what the output should look like: Well, it's the end of the tutorial. Hopefully you learned at least something. I'm planning on making another tutorial sometime soon, so you can watch out for that. Here's the link to my next tutorial, if you want to go on to that: Local Variables and Functions Tools: NASM - Netwide Assembler ALINK (Linker) Notepad++ (Source Code Text Editor) References: PE/COFF Specification NASM Manual Some Common Instructions Intel Architecture Software Developer's Manual volume 1 Intel Architecture Software Developer's Manual volume 2 Intel Architecture Software Developer's Manual volume 3 First Tutorial: Part 1 Previous Tutorial: Part 2 Next Tutorial: Local Variables and Functions Edited by RhetoricalRuvim, 03 February 2012 - 11:55 AM.
http://forum.codecall.net/topic/65224-intro-to-win32-assembly-using-nasm-part-3/
CC-MAIN-2016-36
refinedweb
1,739
73.58
Access your node.js console remotely. Console.io allows you to access the console of your running node.js instances and even execute code remotely. In order to see another console in the dashboard, run sample2.js (8082) as well. Logs will start showing in real time. Errors will be displayed in red. There will be a prompt with the name of the host, where you can start running scripts. Notice that if you want to see something in the output you are going to have to either: //this will print 4.return 2 + 2;//this will print undefined.2 + 2;//this will print 4 and then undefined.console.log2 + 2; The script will be printed right before the result, so that other dashboards looking at the console, see the script and can make something out of the output If you would like to see the logs in your own front end, you pass an instance of socket.io to the "server" module. require'console.io/server'hookio; From the website perspective it's pretty easy to receive messages. You start by subscribing to the 'web' namespace. You can see how it works in the \public\index.html file. You will find that integrating console.io to your node.js apps is ridiculously simple. nccconnectendpoint: ""name: "marketplace"; npm install console.io-client Add this once in every node.js process: var ncc = require'console.io-client';nccconnectoptions callback; endpoint: url to the dashboard. name: unique name of this particular node.js process. disableExec: disable the remote execution of code.
https://www.npmjs.com/package/console.io
CC-MAIN-2016-22
refinedweb
259
70.9
string – Working with text¶ The string module dates from the earliest versions of Python. In version 2.0, many of the functions previously implemented only in the module were moved to methods of str and unicode objects. Legacy versions of those functions are still available, but their use is deprecated and they will be dropped in Python 3.0. The string module still contains several useful constants and classes for working with string and unicode objects, and this discussion will concentrate on them. Constants¶ The constants in the string module can be used to specify categories of characters such as ascii_letters and digits. Some of the constants, such as lowercase, are locale-dependent so the value changes to reflect the language settings of the user. Others, such as hexdigits, do not change when the locale changes. import string for n in dir(string): if n.startswith('_'): continue v = getattr(string, n) if isinstance(v, basestring): print '%s=%s' % (n, repr(v)) print Most of the names for the constants are self-explanatory. $ python string_constants.py ascii_letters='abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ' ascii_lowercase='abcdefghijklmnopqrstuvwxyz' ascii_uppercase='ABCDEFGHIJKLMNOPQRSTUVWXYZ' digits='0123456789' hexdigits='0123456789abcdefABCDEF' letters='abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ' lowercase='abcdefghijklmnopqrstuvwxyz' octdigits='01234567' printable='0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!"#$%&\'()*+,-./:;<=>?@[\\]^_`{|}~ \t\n\r\x0b\x0c' punctuation='!"#$%&\'()*+,-./:;<=>?@[\\]^_`{|}~' uppercase='ABCDEFGHIJKLMNOPQRSTUVWXYZ' whitespace='\t\n\x0b\x0c\r ' Functions¶ There are two functions not moving from the string module. capwords() capitalizes all of the words in a string. import string s = 'The quick brown fox jumped over the lazy dog.' print s print string.capwords(s) The results are the same as if you called split(), capitalized the words in the resulting list, then called join() to combine the results. $ python string_capwords.py The quick brown fox jumped over the lazy dog. The Quick Brown Fox Jumped Over The Lazy Dog. The other function creates translation tables that can be used with the translate() method to change one set of characters to another. import string leet = string.maketrans('abegiloprstz', '463611092572') s = 'The quick brown fox jumped over the lazy dog.' print s print s.translate(leet) In this example, some letters are replaced by their l33t number alternatives. $ python string_maketrans.py The quick brown fox jumped over the lazy dog. Th3 qu1ck 620wn f0x jum93d 0v32 7h3 142y d06. Templates¶ String templates were added in Python 2.4 as part of PEP 292 and are intended as an alternative to the built-in interpolation syntax. With string.Template interpolation, variables are identified by name prefixed with $ (e.g., $var) or, if necessary to set them off from surrounding text, they can also be wrapped with curly braces (e.g., ${var}). This example compares a simple template with a similar string interpolation setup. import string values = { 'var':'foo' } t = string.Template(""" $var $$ ${var}iable """) print 'TEMPLATE:', t.substitute(values) s = """ %(var)s %% %(var)siable """ print 'INTERPLOATION:', s % values As you see, in both cases the trigger character ($ or %) is escaped by repeating it twice. $ python string_template.py TEMPLATE: foo $ fooiable INTERPLOATION: foo % fooiable One key difference between templates and standard string interpolation is that the type of the arguments is not taken into account. The values are converted to strings, and the strings are inserted into the result. No formatting options are available. For example, there is no way to control the number of digits used to represent a floating point value. A benefit, though, is that by using the safe_substitute() method, it is possible to avoid exceptions if not all of the values needed by the template are provided as arguments. import string values = { 'var':'foo' } t = string.Template("$var is here but $missing is not provided") try: print 'TEMPLATE:', t.substitute(values) except KeyError, err: print 'ERROR:', str(err) print 'TEMPLATE:', t.safe_substitute(values) Since there is no value for missing in the values dictionary, a KeyError is raised by substitute(). Instead of raising the error, safe_substitute() catches it and leaves the variable expression alone in the text. $ python string_template_missing.py TEMPLATE: ERROR: 'missing' TEMPLATE: foo is here but $missing is not provided. import string class MyTemplate(string.Template): delimiter = '%' idpattern = '[a-z]+_[a-z]+' t = MyTemplate('%% %with_underscore %notunderscored') d = { 'with_underscore':'replaced', 'notunderscored':'not replaced', } print t.safe_substitute(d) In this example, variable ids must include an underscore somewhere in the middle, so %notunderscored is not replaced by anything. $ python string_template_advanced.py % replaced %notunderscored For more complex changes, you can override the pattern attribute and define an entirely new regular expression. The pattern provided must contain four named groups for capturing the escaped delimiter, the named variable, a braced version of the variable name, and invalid delimiter patterns. Let’s look at the default pattern: import string t = string.Template('$var') print t.pattern.pattern Since t.pattern is a compiled regular expression, we have to access its pattern attribute to see the actual string. $ python string_template_defaultpattern.py \$(?: (?P<escaped>\$) | # Escape sequence of two delimiters (?P<named>[_a-z][_a-z0-9]*) | # delimiter and a Python identifier {(?P<braced>[_a-z][_a-z0-9]*)} | # delimiter and a braced identifier (?P<invalid>) # Other ill-formed delimiter exprs ) If we wanted to create a new type of template using, for example, {{var}} as the variable syntax, we could use a pattern like this: import re import string class MyTemplate(string.Template): delimiter = '{{' pattern = r''' \{\{(?: (?P<escaped>\{\{)| (?P<named>[_a-z][_a-z0-9]*)\}\}| (?P<braced>[_a-z][_a-z0-9]*)\}\}| (?P<invalid>) ) ''' t = MyTemplate(''' {{{{ {{var}} ''') print 'MATCHES:', t.pattern.findall(t.template) print 'SUBSTITUTED:', t.safe_substitute(var='replacement') We still have to provide both the named and braced patterns, even though they are the same. Here’s the output: $ python string_template_newsyntax.py MATCHES: [('{{', '', '', ''), ('', 'var', '', '')] SUBSTITUTED: {{ replacement Deprecated Functions¶ For information on the deprecated functions moved to the string and unicode classes, refer to String Methods in the manual. See also
https://pymotw.com/2/string/
CC-MAIN-2016-44
refinedweb
962
58.18
Keys and Rooms — Day 100 (Python) Today the rooms start locked (except for room 0). You can walk back and forth between rooms freely. Return true if and only if you can enter every room. Example 1: Input: [[1],[2],[3],[]] Output: true Explanation: We start in room 0, and pick up key 1. We then go to room 1, and pick up key 2. We then go to room 2, and pick up key 3. We then go to room 3. Since we were able to go to every room, we return true. Example 2: Input: [[1,3],[3,0,1],[2],[0]] Output: false Explanation: We can't enter the room with number 2. Note: 1 <= rooms.length <= 1000 0 <= rooms[i].length <= 1000 - The number of keys in all rooms combined is at most 3000. We have a list of rooms. Each index in the list represents room numbers. Within each room, we have few keys. Except for room 0, we would require keys to enter any other room. We start from room 0, collect the keys present in room 0. Let us keep all the keys that we collect in a queue. We visit the next rooms by taking keys present in our queue. Each time we visit a new room, let us keep all the keys present in this room in the queue. What about edge cases when Room 1 has the key for Room 2 and Room 2 has the key for Room 1. In such a case, we will have a circular relation. To avoid circular relations, we can make use of a visited list that is of the same size as the number of rooms. The visited list is initialized as False. This reminds us of BFS algorithm. Let us use BFS to solve this problem. class CanVisitAllRoomFinder: def canVisitAllRooms(self, rooms: List[List[int]]) -> bool: key_queue = [] visited = [False] * len(rooms) for keys in rooms[0]: key_queue.append(keys) visited[0] = True while(key_queue): out = key_queue.pop(0) if visited[out] != True: for keys in rooms[out]: key_queue.append(keys) visited[out] = True return True if all(visited) else False Complexity analysis. Time Complexity The time complexity for this algorithm is O(N + K), where N is the number of rooms and K is the number of Keys. Space Complexity The space complexity is O(N) since we are using a list to store the visited rooms list.
https://atharayil.medium.com/keys-and-rooms-day-100-python-9af3df2b5691?responsesOpen=true&source=user_profile---------0----------------------------
CC-MAIN-2021-39
refinedweb
406
85.28
. Each); You never know how your technology is going to be used.. Regular expressions are a very powerful tool for validating or parsing a string. I don't claim to be an expert in the use of Regular expressions by any means. But I have used them with great success on a number of occasions. They are very useful. I have used them to validate a special URL that is used to create hyperlinks to forms within a smart client application, or to parse the syntax of a Powerbuilder DataWindow to name just two. In ASP.NET there is a Regular Expression validator control. This control lets you validate input based on Regular Expressions. The nice thing about this control is the hard work is done for you for a variety of different strings. For example there is a sample expression for internet email addresses. \w+([-+.]\w+)*@\w+([-.]\w+)*\.\w+([-.]\w+)* and one for an Internet URL http://([\w-]+\.)+[\w-]+(/[\w- ./?%&=]*)? If you want to use Regex in .NET (C# or VB.NET) getting the expression right is the tough part, as you can see from the expressions above. The rest is pretty easy. To help you out there is a tool written by Eric Gunnerson called Regular Expression Workbench that I have found very useful on many occasions. You can down load it from gotdotnet. This wonderful tool will help you format your expression, interpret or execute your expression on a sample string, even tell you how efficient the operation is. For those of you who don't really know how to use Regular Expressions here is a simple example to get you started. Using this example you can test a string for swear words. First: to use Regular Expressions you must use the System.Text.RegularExpressions namespace. Then create a regular expression and call the match method to search your string. using System.Text.RegularExpressions; Regex r; //Declare Regular expression r = new Regex("hell"); //Create expression Match m = r.Match(this.textBox1.Text); //Call match method if (m.Success) //Check to see if a match was made { MessageBox.Show("You can't say that."); } else { MessageBox.Show("nothing found."); } The above example works to find the word hell but it will also find it inside the word hello. You can add the escape character /b to denote word boundaries. For example if your expression is /bhell/b it will only match on the word hell. /b has two uses in a regular expression it's a word boundary. When used within a [ ] character class it refers to the backspace character. Fine so you can now validate a string to keep users from entering a swear word. "But Dave there are many words I need to check for.", you say. Not to worry this can be done in the same expression using a pipe. If you separate each word you want to find with a pipe Match( ) will look for all the words listed. See the example below. using System.Text.RegularExpressions;Regex r; //Declare Regular expression r = new Regex("\\bhell\\b|\\bfrig\\b"); //Create expression Match m = r.Match(this.textBox1.Text); //Call match if (m.Success) //Check to see if a match was made { MessageBox.Show("You can't say that."); } else { MessageBox.Show("nothing found."); } While Sun is very complimentary about Microsoft's strengths on the smart client, Sun's answer seems to be....put HTML on the client. One of their reasons is that “The skill-set requirements and the complexity of HTML are much lower than for .NET”. Are you kidding me? Last time I built a Windows Forms application I didn't have to test it on 5 different browsers. Don't even get me started on CSS and javascript. The metrics that I see suggest that it is no less than twice the development/testing effort for a Web Form over a Win Form application. That's conservative and sometimes goes up to 3-4 times. Sun also fails to address the need for the occassionally connected/offline access application.
https://blogs.objectsharp.com/cs/blogs/deb/?page=172
CC-MAIN-2020-40
refinedweb
678
67.04
WHY (on php app, seems that nothing was sent, but on Apache LOG, there were a call...). Could by any bad configuration on Netbeans Project? I tried to change many different things, but nothing works! Following is part of the code - maybe somebody could see a thing I could not (please, help me!!!): 1. public class HttpHandler extends Thread { 2. 3. 4. String url=""; 5. protected StringBuffer buff=new StringBuffer(); 6. String rep=null; 7. public String sendData=null; 8. HttpListener hListener = null; 9. 10. public HttpHandler(HttpListener h){ 11. hListener = h; 12. } 13. 14. synchronized public void run(){ 15. try{ 16. HttpConnection hc = (HttpConnection)Connector.open(url); 17. hc.setRequestMethod(HttpConnection.POST); 18. hc.setRequestProperty("User-Agent", "Profile/MIDP-1.0 Configuration/CLDC-1.0"); 19. //hc.setRequestProperty("User-Agent", "Profile/MIDP-2.0 Configuration/CLDC-1.1"); 20. hc.setRequestProperty("Content-Type", "application/x-www-form-urlencoded"); 21. DataOutputStream dos=hc.openDataOutputStream(); 22. dos.write(sendData.getBytes()); 23. dos.flush(); 24. dos.close(); 25. 26. if (hc.getResponseCode()==hc.HTTP_OK) { 27. // You have successfully connected. 28. DataInputStream dis = hc.openDataInputStream(); 29. // Now Process your request. 30. int ch = dis.read(); 31. while (ch!=-1){ 32. buff.append((char)ch); 33. ch = dis.read(); 34. } 35. rep=buff.toString(); 36. hListener.processRequest(rep); 37. dis.close(); 38. } 39. }catch(SecurityException se){ 40. se.printStackTrace(); 41. } 42. catch(IOException ioe){ 43. System.out.println(ioe.getMessage()); 44. } 45. } 46. } I did not close the output stream, and problem still occurs... I wondering if I am sending data in right way. See, the sendData has the value like: "login=01180809090&password=909090&app_version=1.0&ip_orig=&language=ptBR" It is right sending that way? Note:These values were get on form. If the server page is designed to GET the information, and not have it POSTed to it then you should put your information on the URL. Your ip_orig does not have a value which may be causing some type of server parsing issue. You might try putting a '?' at the start of your data string so it would begin like "?login=0..". -Shawn Thanks Shawn, I 've try to use GET and POST, without success. I even have put "?" on data string (at its beggining), and put a value on ip_orig variable, and nothing... PHP program is trying to get data by $_REQUEST, and no one value could be read... Do you think that could be any server problem? Is there any way to see, on the server, what information has been received? Try accessing the URL you have listed from a desktop browser. When I do I'm getting a file not found. Looks like your server is not set up correctly. -Shawn Oh, sorry about that Shawn - for secutiry purposes, that URL was just an example... But the correct URL is beeing accessed with no problems. I even have an HTML page for tests, and it could call the PHP and pass the parameters correcty. If you wanna see it, please, PM me. I could even send you the project: if you could help me, I will be very grateful ! One last thought, try commenting out both line 23 & 24: 23. dos.flush(); 24. dos.close(); Maybe your phone/provider is now sending the data in chunked format and the server may not like that, or the carriers gateway may not like it. If that doesn't solve it I'm all out of thoughts. Good luck -Shawn Some HttpConnection implementations doesn't handle HTTP/1.1 301 Moved Permanently or similar non ok return codes, could be something like that. Is the response code HTTP_OK? Yes, the response code is HTTP_OK: the code is passing through the IF. But, the code is not passing by next WHILE, cause ch has the value -1 if (hc.getResponseCode()==hc.HTTP_OK) { // You have successfully connected. DataInputStream dis = hc.openDataInputStream(); // Now Process your request. int ch = dis.read(); while (ch!=-1){ buff.append((char)ch); ch = dis.read(); } I mean, the code int ch = dis.read(); is returning -1 Don't close your output stream. Save that till after you have your input then close both. Also don't write out all of your data in one call unless you know that it will never be more than a few hundred bytes. If you start trying to send binary images or video files you need to break up your data writes to the server into smaller 512, 1k or 2k chunks so the carrier doesn't drop the connection. -Shawn
https://www.java.net/node/696101
CC-MAIN-2015-18
refinedweb
759
69.28
StatixStatix Statix is an Elixir client for StatsD-compatible servers. It is focused on speed without sacrificing simplicity, completeness, or correctness. What makes Statix the fastest library around: [1] In contrast with process-based clients, Statix has lower memory consumption and higher throughput – Statix v1.0.0 does about 876640 counter increments per flush: It is possible to measure it yourself. for _ <- 1..10_000 do Task.start(fn -> for _ <- 1..10_000 do StatixSample.increment("sample", 1) end end) end Make sure you have StatsD server running to get more realistic results. InstallationInstallation Add Statix as a dependency to your mix.exs file: def application() do [applications: [:statix]] end defp deps() do [{:statix, ">= 0.0.0"}] end Then run mix deps.get in your shell to fetch the dependencies. UsageUsage A module that uses Statix becomes a socket connection: defmodule MyApp.Statix do use Statix end Before using connection the connect/0 function needs to be invoked. In general, this function is called when your application starts (for example, in its start/2 callback): def start(_type, _args) do :ok = MyApp.Statix.connect() # ... end Once the Statix connection is open, its increment/1,2, decrement/1,2, gauge/2, set/2, timing/2, and measure/2 functions can be used to push metrics to the StatsD-compatible server. SamplingSampling Sampling is supported via the :sample_rate option: MyApp.Statix.increment("page_view", 1, sample_rate: 0.5) The UDP packet will only be sent to the server about half of the time, but the resulting value will be adjusted on the server according to the given sample rate. TagsTags Tags are a way of adding dimensions to metrics: MyApp.Statix.gauge("memory", 1, tags: ["region:east"]) ConfigurationConfiguration Statix can be configured globally with: config :statix, prefix: "sample", host: "stats.tld", port: 8181 and on a per connection basis as well: config :statix, MyApp.Statix, port: 8811 The defaults are: - prefix: nil - host: "127.0.0.1" - port: 8125 Note: by default, configuration is evaluated once, at compile time. If you plan using other configuration at runtime, you must specify the :runtime_config option: defmodule MyApp.Statix do use Statix, runtime_config: true end LicenseLicense This software is licensed under the ISC license.
https://libraries.io/hex/statix
CC-MAIN-2020-16
refinedweb
368
59.3
I will fix the keysig thing tonight. I will take out the mode option. A user requested it several years ago but I dont think its necessary and it complicated the code. Eventually I would like to see all dialogs written in scheme rather than c. We could script it now without the use of any gtk widgets by having menu items for each key in subdirectories like key->set initial->d major. Then possibly a popup can ask if you want all staffs. Then these keychanges can be recordable actions. What do you think? Jeremiah Sent from my Samsung smartphone on AT&T Richard Shann <address@hidden> wrote: >I have checked the screenshot stuff, working for gtk2 but disabled for >gtk3. >It turns out that the gtk3 code cannot be easily back ported to gtk2, so >I have both versions in screenshot.c >The gtk3 version is surrounded by #if 0 ... #endif as I have not been >able to compile it, and it is almost sure not to work even if it >compiles as it is just an initial hack of the gnome-screenshot code from >gtk3. >Can you check that this branch still compiles for gtk3 please? There are >only two issues with running the gtk3 branch version on gtk2 that I have >found so far > * Initial Keysignature dialog broken > * Horizontal scroll bar jammed wide open >I haven't looked at the DenemoGraphic stuff yet. > >Richard > > >On Thu, 2011-11-24 at 11:16 -0600, Jeremiah Benham wrote: >> Sure, >> >> > > > >> >> > >> >> > >> >> > >> >> > _______________________________________________ >> >> > Denemo-devel mailing list >> >> > address@hidden >> >> > >> >> >> >> >> >> >> >> _______________________________________________ >> >> Denemo-devel mailing list >> >> address@hidden >> >> >> > >> > > >
http://lists.gnu.org/archive/html/denemo-devel/2011-11/msg00061.html
CC-MAIN-2013-48
refinedweb
265
71.44
strpattern_match_invoke_action() Get the action of an invoke associated with a pattern match. Synopsis: #include <strpattern.h> const char* strpattern_match_invoke_action(const strpattern_match *match, int index, int *err) Since: BlackBerry 10.0.0 Arguments: - match The match containing the invoke whose action is returned. - index The index of the invoke associated with the match. - err STRPATTERN_EOK if there is no error. Library:libstrpattern (For the qcc command, use the -l strpattern option to link against this library) Description:. Returns: A NULL-terminated string with the action. NULL if no action is set for the invoke or on error. Ownership is retained by the callee. Last modified: 2014-05-14 Got questions about leaving a comment? Get answers from our Disqus FAQ.comments powered by Disqus
http://developer.blackberry.com/native/reference/core/com.qnx.doc.strpattern.lib_ref/topic/strpattern_match_invoke_action.html
CC-MAIN-2017-22
refinedweb
123
54.08
view raw def min_diff(arry_): max_ =0 temp_ =0 for i in arry_: nonlocal max_ nonlocal temp_ if i > max_: nonlocal max_ nonlocal temp_ temp_ = max_ max_ =i return max_-temp_ max_ temp_ SyntaxError: no binding for nonlocal 'max_' found nonlocal can only applied in functions that have a nested scope. You only get a nested scope when you define your function inside of another function. Python doesn't have block scopes; the for loop doesn't create a new scope, so you don't need to use nonlocal in a loop. Your variables are available throughout the rest of the function. Just drop the nonlocal statements altogether: def min_diff(arry_): max_ = 0 temp_ = 0 for i in arry_: if i > max_: temp_ = max_ max_ = i return max_ - temp_ In Python, only functions, class definitions and comprehensions (list, set, and dict comprehensions as well as generator expressions) get their own scope, and only functions can act as a parent scope for closures (nonlocal variables). There is also a bug in your code; if you pass in a list where the first value is also the maximum value in the list, temp_ is set to 0 and then never changes. You won't ever find the second-highest value in that case, because only for the very first i will if i > max_: be true. You'd also need to test if i is greater than temp_ in that case: def min_diff(arry_): max_ = 0 temp_ = 0 for i in arry_: if i > max_: temp_ = max_ max_ = i elif i > temp_: temp_ = i return max_ - temp_ As a side note: you don't need to use trailing underscores in your local variables. Of all the local names used, only max_ would potentially shadow the built-in max() function, but since you don't use that function at all, using max_ instead of max in your function is not actually a requirement. I'd personally drop all the trailing _ underscores from all the names in the function. I'd also use different names; perhaps highest and secondhighest.
https://codedump.io/share/IAsSSCP0I5pP/1/trying-to-use-variables-outside-of-a-for-loop-gives-a-syntaxerror-no-binding-for-nonlocal-39max39-found
CC-MAIN-2017-22
refinedweb
346
57.84
Any application that uses Entity Framework’s spatial data type support to target SQL Server requires the ‘CLR Types for SQL Server’ to be available on the machine the application runs on. This also applies to applications that use SQL Server spatial data types directly, without using Entity Framework. Deployment Issues When developing your application the CLR Types for SQL Server are usually installed system-wide, since they are included in Visual Studio. Issues arise when you try to deploy to a machine that does not have the CLR Types for SQL Server installed. Initially you will get the following InvalidOperationException. Spatial types and functions are not available for this provider because the assembly ‘Microsoft.SqlServer.Types’ version 10 or higher could not be found. If you were to find and deploy the Microsoft.SqlServer.Types assembly you’ll then get the following DllNotFoundException. Unable to load DLL ‘SqlServerSpatial110.dll’: The specified module could not be found. (Exception from HRESULT: 0x8007007E) The Solution If you have control over the server you can just install the CLR Types for SQL Server. The SQL Server 2012 SP1 version of the CLR Types can be downloaded here. SQLSysClrTypes.msi is the installer you want and there is an x86 (32 bit) and x64 (64 bit) version depending on the architecture of the machine you are deploying to. However, installing extra software on the target machine is not always an option – especially if you are deploying to a machine you don’t own (such as Windows Azure Web Sites). Fortunately the required assemblies can be deployed along with your application. - Step 1: Install the Microsoft.SqlServer.Types NuGet package. PM> Install-Package Microsoft.SqlServer.Types Step 2: Ensure the appropriate version of the native SqlServerSpatial110.dll assembly is copied to the output directory and deployed with your application. Steps on how to do this are included in a ReadMe.txt file that will open in Visual Studio when you install the package. What the Microsoft.SqlServer.Types Package Does The Microsoft.SqlServer.Types package gives you the two assemblies that need to be deployed along with your application: - Microsoft.SqlServer.Types.dll – This is a .NET assembly that is added as a reference to your project. This assembly will be automatically copied to the output directory of you application and deployed for you. - SqlServerSpatial110.dll – This is a native assembly so it cannot be added as a project reference. Instead, installing the NuGet package will add the x86 and x64 version of this assembly as items in your project under a SqlServerTypes folder. You will need to ensure the appropriate version of this assembly is loaded at runtime. Steps on how to do this are included in a ReadMe file that will open in Visual Studio when you install the package. Join the conversationAdd Comment This is quite awesome. Since there is no pure CLR solution, I assume you can't use the GeoSpatial types in mono? @Justin Dearing – Correct, at least not the SQL Server implementations. It's now common to deploy both the x86 and the x64 binaries. Then load the right one at runtime using a statement like this: Native.LoadLibrary(Path.Combine(IntPtr.Size > 4 ? "x64" : "x32", "Sqlite.Interop.dll")); I'd like to see this package implement this pattern. This has the advantage of working with "Any CPU". It eliminates the need for the developer to decide which item he wants to keep as part of his project. @Brannon – Good idea, I'm working on an update now that uses this approach. It's not working for me. I followed the directions but I still get an error when I deploy and run from the WebServer. It works fine on my local dev machine, which has SQL 2012 installed. On the WebServer, which does NOT have SQL installed, the SqlServerSpatial11.dll file is in the "bin" folder as expected but is not detected by the application. To fix this problem, I had to install the SQLSysClrTypes.msi from the SQL Server Feature Pack. I hoped this NuGet package would let me avoid that. Shortened exception message: System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. —> System.DllNotFoundException: Unable to load DLL 'SqlServerSpatial110.dll': The specified module could not be found. (Exception from HRESULT: 0x8007007E) at Microsoft.SqlServer.Types.GLNativeMethods.GeodeticIsValid(GeoMarshalData g, Double eccentricity, Boolean forceKatmai, Boolean& result, Boolean& isSmallerThanAHemisphere) at Microsoft.SqlServer.Types.GLNativeMethods.GeodeticIsValid(GeoData& g, Double eccentricity, Boolean forceKatmai) at Microsoft.SqlServer.Types.SqlGeography.IsValidExpensive(Boolean forceKatmai) at Microsoft.SqlServer.Types.SqlGeography..ctor(GeoData g, Int32 srid) at Microsoft.SqlServer.Types.SqlGeography.GeographyFromText(OpenGisType type, SqlChars taggedText, Int32 srid) —.Data.Entity.SqlServer.SqlSpatialServices.GeographyPolygonFromText(String polygonText, Int32 srid) at PRIME.MappingController.Get_Map_Data(MapDataRequest request) at lambda_method(Closure , ControllerBase , Object[] ).<BeginInvokeSynchronousActionMethod>b__36(IAsyncResult asyncResult, ActionInvocation innerInvokeState) at System.Web.Mvc.Async.AsyncResultWrapper.WrappedAsyncResult`2.CallEndDelegate(IAsyncResult asyncResult) —truncated— @Jeremy – Same here. Deploying the 2 DLLs (SqlServerSpatial110.dll and Microsoft.SqlServer.Types.dll) to the bin folder of my app did not fix this issue. The strange thing is, sometimes SqlServerSpatial110 DOES get detected, but usually not. It's sporadic. I might need to install the CLR types on the web servers and be done with it… George Follow-up: My "intermittent" issue was caused by web farm nodes being out of sync. One node already had geo types installed, the others didn't. I installed the types on all nodes and we're good now. It looks like the SqlServerSpatial110 dll has dependencies on a number of other dlls that are not guaranteed to exist. George @Jeremy & @George – I just pushed an update to the package (11.0.1) that uses a different approach to load the native assemblies explicitly from code (without needing to copy them to the output directory manually). That should solve (or help you diagnose) issues with resolving the native assemblies. Thanks @Rowan for the update. For now I think I'm fine installing the types anyway, but this package will be handy in the future! George @Rowan and @Jeremy: I pulled down the latest package that Rowan mentions, and I can confirm that now all necessary DLLs get deployed correctly. I had a production deployment and did not want to take down any nodes in order to install the types (I would have had to deal with taking down nodes, wait to drain active connections, coordinate with IT group, etc). So I went with the latest NuGet package and we're in business! Thanks Rowan. George did Global.asax SqlServer Types.Utilities.Load Native Assemblies (Server.MapPath ("~ / bin")); did not work, where am I going wrong? @Wans – Are you seeing an error? Any chance you can post more details? We had an issue with the transition from SQL Server 2008 R2 to SQL Server 2012. Even though we added the Nuget Package to our MVC application we kept getting the "DataReader.GetFieldType(…) returned null" Exception. The cause of the issue can be read here: technet.microsoft.com/…/ms143179.aspx TLDR: The SqlClient loads by default the Microsoft.SqlServer.Types Version 10.0 and with SQL Server 2012 V. 11 is needed. This problem can be solved by adding the following to the web.config. <dependentAssembly> <assemblyIdentity name="Microsoft.SqlServer.Types" publicKeyToken="89845dcd8080cc91" culture="neutral" /> <bindingRedirect oldVersion="10.0.0.0" newVersion="11.0.0.0" /> </dependentAssembly> It would be great if the Nuget package could be updated to automatically add this to the web.config. @Arne Klein – Feel free to open an issue for us to make this change entityframework.codeplex.com/…/Create everything is working correcty in my local compile however when i publish to azure. I am having the same issue as @Wan Environment is Azure Mobile Service .net backend public class WebApiApplication : System.Web.HttpApplication { protected void Application_Start() { SqlServerTypes.Utilities.LoadNativeAssemblies(Server.MapPath("~/bin")); WebApiConfig.Register(); } } the dll's are copy to output director : copy allways yet after publishing I receive this message in the azure log Could not load assembly 'D:homesitewwwrootbinSqlServerTypesx86SqlServerSpatial110.dll'. Error received: 'Could not load file or assembly '' or one of its dependencies. The module was expected to contain an assembly manifest.'. Could not load assembly 'D:homesitewwwrootbinSqlServerTypesx86msvcr100.dll'. Error received: 'Could not load file or assembly '' or one of its dependencies. The module was expected to contain an assembly manifest.'. Could not load assembly 'D:homesitewwwrootbinSqlServerTypesx64SqlServerSpatial110.dll'. Error received: 'Could not load file or assembly '' or one of its dependencies. The module was expected to contain an assembly manifest.'. Could not load assembly 'D:homesitewwwrootbinSqlServerTypesx64msvcr100.dll'. Error received: 'Could not load file or assembly '' or one of its dependencies. The module was expected to contain an assembly manifest.'. @Carroll Lee – I've opened a new work item for us to track the issue you are seeing – entityframework.codeplex.com/…/2311. For the life of me I cannot get this to work for Azure based solutions. Azure SQL is using version 10 of Microsoft.SqlServer.Types and this imports version 11+ into my ASP.Net solution AND the class library that it uses. I have the above code in place, however "SqlServerTypes.Utilities.LoadNativeAssemblies(AppDomain.CurrentDomain.BaseDirectory);" in my class library causes "System.Exception: Error loading msvcr100.dll (ErrorCode: 126)" at SqlServerTypesLoader.cs Line: 43 which is the loader that gets added to my solution. Alternatively, I tried to call the httpContext version of the assembly by putting this in my class library instead – "SqlServerTypes.Utilities.LoadNativeAssemblies(HttpContext.Current.Server.MapPath("~/bin"));" however that still produces the identical problem below… thoughts? assemblyGAC_MSILMicrosoft.SqlServer.Types10.0.0.0__89845dcd8080cc91Microsoft.SqlServer.Types.dll'. Type B originates from 'Microsoft.SqlServer.Types, Version=11.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91' in the context 'Default' at location 'C:WINDOWSassemblyGAC_MSILMicrosoft.SqlServer.Types11.0.0.0__89845dcd8080cc91Microsoft.SqlServer.Types.dll'. Arne Klein's solution below worked for me: <dependentAssembly> <assemblyIdentity name="Microsoft.SqlServer.Types" publicKeyToken="89845dcd8080cc91" culture="neutral" /> <bindingRedirect oldVersion="10.0.0.0" newVersion="11.0.0.0" /> </dependentAssembly> @Chris – Glad you found a solution 🙂 After adding this to my VS2013 Web API V2 Project using EF6, I always get the file is in use when building my solution after a run. I run Visual Studio as Administrator because I need IIS Express to have access to port 80 so Fiddler can redirect the client requests to me for debugging. I think its the Admin privs that causes the file in use error (from what I have found minimal references on the internet for this). If I remove the package I no longer have this problem. I really want to use this with EF to replace the complicated Stores Procedures for calculating distances in queries. Any advice? Currently I'm having to manually stop the IIS Express site before building and its getting tiresome. @Dale – Perhaps try changing the 'Copy to Output Directory' setting on the files in the SqlServerTypes folder to 'Copy if Newer' rather than 'Copy Always'. That may resolve the issue for you. Can we have an update to this package to use a .targets file (see link) so we don't need to include assemblies in our solution. npe.codeplex.com/…/462174 @Steve Hipwell – Could you open a request here entityframework.codeplex.com/…/Create. I'm not promising we'll do it right away, but I agree it would be good to do. We're using the NuGet package for this, however Microsoft Code Analysis rules fail, specifically CA2101 and CA1060. Any plans to make this right? @Brent Pabst – Can you open an issue on our CodePlex project and we'll look at correcting the violations entityframework.codeplex.com Does this package work with SqlServer2014 (v12) on azure? Well, i just installed the package and it hosed my application. I'm getting this error and have no idea what to do about it: Error Could not load file or assembly 'Microsoft.SqlServer.Types, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91' or one of its dependencies. Is there a package available for version 12? @jrummell – no v12 package yet. Feel free to open a request for one and we'll add it to our queue of work – entityframework.codeplex.com
https://blogs.msdn.microsoft.com/adonet/2013/12/09/microsoft-sqlserver-types-nuget-package-spatial-on-azure/
CC-MAIN-2018-13
refinedweb
2,027
51.24
A tool that generates github wiki compatible API documentation from your project's jsdocs and adds them to your wiki. That is wicked! npm install -g wicked wikiin your project Features settings npm install -g wicked wicked Steps 4 - 5 can be repeated everytime you want to re-generate API docs for your project. wicked does not overwrite other pages you created in your wiki so keep running wicked all you need. More specifically wicked only removes old *.API.md files from your wiki and updates the links _Sidebar.md without affecting any other links in the sidebar. See an example of API docs added by wicked in its own wiki. usage: wicked <wicked-options> -- <jsdoc-options>Generates wiki API docs for the gihub project in the current directory.Both options are optional, jsdoc-options get passed to [jsdoc]().Note: overriding the jsdoc destination (-d, --destination) is not possible since wicked will write files to a temp dirOPTIONS:--noclean don't remove the temp directory into which wiki is checked out when finished--nocommit don't commit the updated wiki automatically nor remove the temp directory-t, --toc causes wicked to generate a table of contents on top of each wiki page-l, --loglevel level at which to log: silly|verbose|info|warn|error|silent -- default: info-h, --help Print this help message.EXAMPLES:Generate with default options:wickedGenerate and include table of contents:wicked --tocOverride [jsdocconf.json]():wicked -- --configure ./myconf.jsonOverride loglevel and jsoc configuration and don't remove temp directory:wicked --loglevel silly --noclean -- --configure ./myconf.json Since wicked is using jsdoc under the hood, it is helpful to review its documentation. I highly recommend this page explaining how to specify @param types among other useful specs. In order to avoid all functions being attached to the global namespace resulting in one API page per function, I namespaced functions in wicked with @namespace and @memberof working together. As an example the Internal namespace is defined here and used by all the lib functions like this one. Feel free to study the commenting style used in wicked itself and compare with the wiki pages it produced. In order to make your wicked API pages appear properly styled, please install the chrome extension or bookmarklet. Generates jsdoc wiki pages for project of current working directory and updates github wiki with them. It is assumed that this is run from the root of the project whose wiki should be generated. Additionally the currently checked out branch will be used when generating blob urls to link source examples. However the github remote and branch can also be set via environment vars as explained in the documentation of jsdoc-githubify which is used by wicked under the hood. MIT
https://www.npmjs.com/package/wicked
CC-MAIN-2017-22
refinedweb
456
52.8
Code. Collaborate. Organize. No Limits. Try it Today. I'm coding a database application using WTL, and so far I've been using a standard list view for viewing data, and separate dialogs for editing. There are lots of different grids out there, but I didn't like the ones available for WTL very much. Not that they are bad or anything, they just didn't fit my needs. And instead of patching and fixing any existing ones to fit my need, it would probably be faster to make my own. A little note about the demos. The Northwind demo requires a SQL server at localhost with the northwind database installed, and no password for the sa account. You can modify the connection string in MainFrm.h and recompile. The solutions are made with Visual Studio.NET 2003, so they won't open in older versions, but creating a new empty project and adding the files should be ok. SetRedraw DateTimePicker _variant_t _bstr_t There are lots of others things that I probably will add to this grid, but for now this is enough for my needs. I welcome any comments and suggestions you might have for improvements. Let me show you how to create the most basic grid. Here I start with a standard WTL AppWizard application, and replace the main forms view with a grid. After creating the grid I set the style to allow a context menu to be shown if you click the right mouse button in the grid. CGridCtrl m); m_view.SetExtendedGridStyle(GS_EX_CONTEXTMENU); // ... } m_view.AddColumn("Last Name",140,CGridCtrl::EDIT_TEXT); m_view.AddColumn("First Name",140,CGridCtrl::EDIT_TEXT); Now we will add another column where you can enter the sex of a person. Here you will see that we use more arguments to the AddColumn function. The first new parameter is alignment, and the last one is what data type this column uses. The default is VT_BSTR which we used on the first two columns. Now we will store the sex information as integer, so we use VT_I4. AddColumn VT_BSTR VT_I4 We also tell the grid, what should be possible to choose from the combobox, we will add to the grid. m_view.AddColumn("Sex",100,CGridCtrl::EDIT_DROPDOWNLIST, CGridCtrl::CENTER,VT_I4); m_view.AddColumnLookup("Sex",1,"Male"); m_view.AddColumnLookup("Sex",2,"Female"); We should now have a grid where we can enter a person's last name, first name, and sex. You add rows by calling AddRow and SetItem. In this example, I use SetRedraw before and after adding the row. For just one row, this wouldn't be necessary, but for many rows this is a must. AddRow SetItem m_view.SetRedraw(FALSE); long nItem = m_view.AddRow(); m_view.SetItem(nItem,"Last Name","Henden"); m_view.SetItem(nItem,"First Name","Bjørnar"); m_view.SetItem(nItem,"Sex",1); m_view.SetRedraw(TRUE); For catching events, I created a class CListener that you inherit from. This class is not only used for events, but also to query information about cell background color. CListener class CListener { public: virtual bool OnRowChanging(UINT uID,long nRow); virtual void OnRowChanged(UINT uID,long nRow); virtual void OnEdit(UINT uID,long nRow); virtual bool OnDeleteRow(UINT uID,long nRow); virtual void OnNewRow(UINT uID,long nRow); virtual void OnModified(UINT uID,LPCTSTR pszColumn,_variant_t vtValue); virtual void OnRowActivate(UINT uID,long nRow); virtual COLORREF GetCellColor(UINT uID,long nRow,LPCTSTR pszColumn); virtual bool OnValidate(UINT uID); }; The following example will show you how to be notified when a row is deleted from the grid, and set a new background color for rows that are modified. First inherit CMainFrame from CGridCtrl::Clistener, and then override the functions you want. CMainFrame CGridCtrl::Clistener class CMainFrame : public CFrameWindowImpl<CMainFrame>, ..., public CGridCtrl::CListener LRESULT CMainFrame::OnCreate(UINT /*uMsg*/, WPARAM /*wParam*/, LPARAM /*lParam*/, BOOL& /*bHandled*/) { // ... m_view.SetListener(this); // ... } virtual bool OnDeleteRow(UINT uID,long nRow) { CString str; str.Format("Do you want to delete row %d?",nRow); // Returning false will abort the delete return IDYES==AtlMessageBox(m_hWnd,(LPCTSTR)str, IDR_MAINFRAME,MB_YESNO|MB_ICONQUESTION); } virtual COLORREF GetCellColor(UINT uID,long nRow,LPCTSTR pszColumn) { _variant_t vt = m_view.GetItem(nRow,"Sex"); if(!m_view.IsNull(vt)) { if((long)vt==1) // Blue-ish for males return RGB(192,192,255); else // And red-ish for females return RGB(255,192,192); } // Return (COLORREF)-1 to use default colors return (COLORREF)-1; } Here is a brief function reference. I will only list the function names, and a short description of what it does. void AddColumn() Adds a column to the grid. You can't add columns if there are rows in the grid. The last parameter to this function is the name of the column, which can be used as arguments to other functions. If this is omitted, the column title is used as name. void AddColumnLookup() Adds a lookup value to a column. This doesn't have to be columns that use dropdownlists, but could be any column. long AddRow() Adds a row to the grid and returns the row number inserted. Use this return value to set individual cell values on this row. void ClearModified() When cells are edited they set the row status to modified. Call this function to reset the specified row to, not modified. Specify -1 for all rows. void DeleteAllColumns() Deletes all columns. void DeleteAllItems() Deletes all rows. void EnsureVisible() Display the row in the visible area of the grid. long GetColumnCount() Returns the number of columns in the grid. _variant_t GetEditItem() When in edit mode, call this to get the current value of one of the cells being edited. _variant_t GetItem() Returns the value of the row and cell. bool GetModified() Returns true if the row is modified. Use -1 for all rows. true long GetRowCount() Returns the number of rows in the grid. long GetSelectedRow() Returns the number of the selected row, and -1 if no row is selected. bool IsNull() A static function that can be used to check if a _variant_t is null. Returns true for VT_NULL and VT_EMPTY. VT_NULL VT_EMPTY BOOL PreTranslateMessage(MSG* pMsg) This should be called from your main frame PreTranslateMessage function. Without it, tab between cells will not work. If you use this grid in a modal dialog, tabs will also not work, since PreTranslateMessage can't be called. Open the dialog modeless instead, and disable the parent window. PreTranslateMessage void SetColumnFocus() Set focus to a column. Only matters when in edit mode. Nice if validation for a field fails, and you want to focus the missing value. void SetItem() Set the value of a cell. void SetListener() Must be set to a class inherited from CGridCtrl::CListener. CGridCtrl::CListener void SetNull() Static function that can be used to set the value of a _variant_t to
http://www.codeproject.com/Articles/4253/Another-WTL-Grid?fid=15662&df=90&mpp=10&sort=Position&spc=None&select=1469083&tid=873646
CC-MAIN-2014-15
refinedweb
1,118
58.28
Weeble, Try to use the full arguments of insert(i, x), instead of using list slices. Every time you create a slice, Python copies the list into a new memory location with the sliced copy. That's probably a big performance impact there if done recursively. My 2cp, Xav On Fri, Mar 19, 2010 at 10:13 AM, Weeble <clockworksaint at gmail.com> wrote: > I am loading a dictionary from a text file and constructing a trie > data structure in memory. However, it takes longer than I'm happy with > - about 12 seconds on my computer. I profiled it, came up with some > clever ideas to cut down on the work (such as by exploiting the fact > that the dictionary is sorted) and was only able to shave a small > fraction of the time off. However, then I tried calling gc.disable() > before loading the trie and it halved the running time! I was > surprised. Is that normal? I thought that the cost of garbage > collection would be in some way proportional to the amount of garbage > created, but I didn't think I was creating any: as far as I can tell > the only objects deallocated during the load are strings, which could > not be participating in cycles. > > I have spent a lot of time in C#, where the standard advice is not to > mess about with the garbage collector because you'll probably just > make things worse. Does that advice apply in Python? Is it a bad idea > to call gc.disable() before loading the trie and then enable it again > afterwards? Does the fact that the garbage collector is taking so much > time indicate I'm doing something particularly bad here? Is there some > way to give the garbage collector a hint that the whole trie is going > to be long-lived and get it promoted straight to generation 2 rather > than scanning it over and over? > > $ python > Python 2.6.4 (r264:75706, Dec 7 2009, 18:43:55) > [GCC 4.4.1] on linux2 > Type "help", "copyright", "credits" or "license" for more information. > >>> > $ time python -c "import > trie;t=trie.Trie();t.load(open('sowpods.txt'))" > > real 0m12.523s > user 0m12.380s > sys 0m0.140s > $ time python -c "import > trie;t=trie.Trie();t.load(open('sowpods.txt'))" > > real 0m12.592s > user 0m12.480s > sys 0m0.110s > $ time python -c "import gc;gc.disable();import > trie;t=trie.Trie();t.load(open('sowpods.txt'))" > > real 0m6.176s > user 0m5.980s > sys 0m0.190s > $ time python -c "import gc;gc.disable();import > trie;t=trie.Trie();t.load(open('sowpods.txt'))" > > real 0m6.331s > user 0m5.530s > sys 0m0.170s > > > === trie.py === > > class Trie(object): > __slots__=("root", "active") > def __init__(self): > self.root=[] > self.active=False > def insert(self, word): > if len(word) == 0: > self.active=True > else: > head = word[0] > for ch, child in reversed(self.root): > if ch == head: > child.insert(word[1:]) > return > child = Trie() > self.root.append((head, child)) > child.insert(word[1:]) > def seek(self, word): > if len(word) == 0: > return self > head = word[0] > for ch, child in self.root: > if ch == head: > return child.seek(word[1:]) > return EMPTY_TRIE > def load(self, file): > for line in file: > self.insert(line.strip().lower()) > def empty(self): > return (not self.root) and not self.active > def endings(self, prefix=""): > if self.active: > yield prefix > for ch, child in self.root: > for ending in child.endings(prefix+ch): > yield ending > > EMPTY_TRIE = Trie() > -- > > -------------- next part -------------- An HTML attachment was scrubbed... URL: <>
https://mail.python.org/pipermail/python-list/2010-March/571644.html
CC-MAIN-2016-44
refinedweb
588
70.6
How to Construct a Basic FOR Loop in the C Language The core of most modern programs, including those in the C language, is the loop. A loop gives a program the ability to repeat a group of statements, sometimes for a given count or duration, or, often, until a certain condition is met. The C language gives you many ways to create loops in your code, but the most common is the for loop. A for loop has three parts: The setup The exit condition for which the loop finishes The part that loops, which is the statements that are repeated In the C language, the for loop can handle these conditions in one handy statement, which makes it easy to understand, despite how complex it looks. There was once a time when teachers would punish students by making them write some life lesson, say “I shall refrain from calling my friends names,” on the chalkboard 100 times. The following program does the same thing on a computer screen in less than one second: #include <stdio.h> int main() { int c; for(c=0;c<100;c=c+1) { puts("I shall refrain from calling my friends names."); } return(0); } When you save the source code to disk, compile it, and run it, you get this: I shall refrain from calling my friends names. I shall refrain from calling my friends names. I shall refrain from calling my friends names. And so on, for 100 lines. Here’s how it works: The for keyword is followed by a set of parentheses. Inside the parentheses are three separate items that configure the loop. Consider the preceding for loop: for(c=0;c<100;c=c+1) The c variable is already defined as an int (integer). It’s used by the for loop to control how many times the loop — the statements belonging to for — is repeated. First comes the setup: c=0 The variable c is assigned the value 0. The for statement does this first, before the loop is ever repeated, and then only once. Note that starting at 0 rather than 1 is a traditional C language thing. Zero is the “first” number. Get used to that. Next comes the exit condition: c<100 The loop repeats itself as long as the value of variable c is less than 100. Finally, here’s the “do this” part of the loop: c=c+1 Each time the loop is repeated, the for statement executes this statement. It must be a real C language statement, one that you hope somehow manipulates the variable that’s set up in the first step. Here, the value of variable c is increased, or incremented, by one. The loop itself consists of the statements following for. These are enclosed in braces: for(c=0;c<100;c=c+1) { puts("I shall refrain from calling my friends names."); } Or, since there is only one statement after for, you can eliminate the braces: for(c=0;c<100;c=c+1) puts("I shall refrain from calling my friends names.");
https://www.dummies.com/programming/c/how-to-construct-a-basic-for-loop-in-the-c-language/
CC-MAIN-2019-26
refinedweb
516
78.99
Database gateway and ORM for Dart Trestle is the database package used in Bridge. It was created with extensibility and clean API in mind. Providing a unified interface to work with different databases across multiple setups for maximum reusability and agility. The package is divided into two parts – the Gateway and the ORM. The Gateway is the common abstraction that the different database drivers implement, and the ORM uses the Gateway to talk to the database. The Gateway has both a Schema Builder and a Query Builder, accessible from the common Gateway class. One of the more controversial features of Trestle are the so called Predicate Expressions. They are callback-style lambda functions that are translated into SQL constraints. So we can say where((user) => user.age > 20), which then gets parsed into something like WHERE "age" > 20. An it works with pretty complex functions! As soon as you create a predicate that's too complex, the runtime will tell you in time, so that you can straighten things out. Just know that Trestle doesn't get all rows and then run the constraint, even though that's what it looks like. To get started, choose what database implementation you want to use (you can easily change your mind later). In this example, we use the InMemoryDriver. It doesn't need schema and it doesn't need any configuration. import 'package:trestle/gateway.dart'; main() async { // The database implementation Driver driver = new InMemoryDriver(); // The gateway takes the driver as a constructor argument Gateway gateway = new Gateway(driver); // Next, connect! await gateway.connect(); // ... Do some work // Disconnect when you're done await gateway.disconnect(); } Later, if we want, we can just swap out the driver and call it a day. // Driver driver = new InMemoryDriver(); // Driver driver = new SqliteDriver('storage/production.db'); // Driver driver = new MySqlDriver(username: 'myuser', password: '123', database: 'mydatabase'); Driver driver = new PostgresqlDriver(username: 'myuser', password: '123', database: 'mydatabase'); Think of the gateway as the actual database in SQL. It contains the tables, which can be accessed and modified using a few simple methods. To create a new table we use the create method on the Gateway class. This method takes two parameters: the name of the table to be created, and a callback containing the Schema Builder. It looks like this: await gateway.create('users', (Schema schema) { schema.id(); // shortcut for an auto incrementing integer primary key schema.string('email').unique().nullable(false); schema.string('username').unique().nullable(false); schema.string('password', 60); schema.timestamps(); // adds created_at and updated_at timestamps (used by the ORM) }); This method returns a Future (much like everything else in Trestle), and should probably be await-ed. Altering a table is almost identical to creating one, except we use the alter method instead: await gateway.alter('users', (Schema schema) { schema.drop('username'); schema.string('first_name'); schema.string('last_name'); }); Deleting (or dropping) a table could not be simpler: await gateway.drop('users'); When we're satisfied with the columns of our table, we can start a query by calling the table method. This starts up the Query Builder, providing a fluent API to construct queries. The builder is stateless, so we can save intermediate queries in variables and fork them later: // Full query Stream allUsersOfDrinkingAge = gateway.table('users') .where((user) => user.age > 18).get(); // At least in Sweden... // Intermediate query Query uniqueAddresses = gateway.table('addresses').distinct(); // Continued query Stream allUniqueAddressesInSweden = uniqueAddresses .where((address) => address.country == 'SWE').get(); // A function extending an intermediate query Query allUniqueAddressesIn(String country) { return uniqueAddresses .where((address) => address.country == country); } // An aggregate query int count = await allUniqueAddressesIn('USA').count(); There's a bunch of stuff you can do. Experiment with the query builder and report any bugs! 🐛 You can think of migrations as version control for your database. It's an automated way to ensure that everyone on your team is using the same table schema. Each migration extends the Migration abstract class, enforcing the implementation of a run method, as well as a rollback method. The run method makes a change to the database schema (using the familiar syntax). The rollback method reverses that change. For example, creating a table in run, and dropping it in rollback. By storing a Set<Type> (where the types are subtypes of Migration), we can ensure that each migration is run in order. And if we need to change something, we can roll back and re-migrate. class CreateUsersTable extends Migration { Future run(Gateway gateway) { gateway.create('users', (Schema schema) { schema.id(); schema.string('email'); // ... }); } Future rollback(Gateway gateway) { gateway.drop('users'); } } final migrations = [ CreateUsersTable, // more migrations CreateAddressesTable, DropUsernameColumnInUsersTable, ].toSet(); // Somewhere in a command line utility or something gateway.migrate(migrations); // Somewhere else – remember to import the same migrations set gateway.rollback(migrations); Trestle's primary feature is to provide an ORM for the Bridge Framework. One of the key features of Bridge is the WebSocket transport system Tether. So it was important that Trestle would be able to map rows to plain Dart objects, that could be shared with the client. So instead of embracing the full Active Record style, we had to move the database interaction from the data structures to a Repository class. However, using a plain object without any intrusive annotations is kind of brittle. So we can optionally extend a Model class and use annotations if we don't care that we're coupling ourselves to Trestle. It works like this: // Create a data structure class Parent { int id; String email; String firstName; String lastName; String password; int age; } // Or a value object class Parent { // Override the table name with a constant "table" on // any of these types of models static const String table = 'my_own_table_name'; final int id; final String email; final String firstName; final String lastName; final String password; final int age; const Parent(this.id, this.email, this.firstName, this.lastName, this.password, this.age); } // Or create a full model class Parent extends Model { @field String email; @field String firstName; @field String lastName; @field String password; @field int age; // Relationships are very expressive. Here, all Child models // whose table rows has a key "parent_id" matching this model's // "id" field, are eager loaded to this List. @hasMany List<Child> children; // You can also lazy load the children by setting the property // type to Stream<Child>, or (if you want to perform queries on // the children) to RepositoryQuery<Child>. } class Child extends Model { // Single relationships can be annotated as either `Child` (eager) // or `Future<Child>` (lazy). @belongsTo Parent parent; @belongsTo Future<Parent> parent; } // Instantiate the repository with a gateway as an argument and the model as a type argument. final parents = new Repository<Parent>(gateway); // You're done! The repository works like `gateway.table('parents')` would, // but it returns `Parent` objects instead of maps. Parent parent = await parents.find(1); // The relationships are mapped automatically. Child child = parent.children.first; print(child.parent == parent); // true print(parent.child == child); // true We can use this class to implement some query scopes or filters: class UsersRepository extends Repository<User> { RepositoryQuery<User> get ofDrinkingAge => where((user) => user.age > 20); } // And use it like so: users.ofDrinkingAge.count(); As (soon to be) mentioned in the Bridge docs, Trestle is automatically set up for you, so we can use dependency injection to get immediate access to a repository: // An example in the context of the HTTP router – not a part of Trestle router.get('/users/count', (Repository<User> users) async { return 'There are ${await users.count()} users registered'; });.10.0 You can install packages from the command line: with pub: $ pub get Alternatively, your editor might support pub get. Check the docs for your editor to learn more. Now in your Dart code, you can use: import 'package:trestle/trestle.dart';
https://pub.dev/packages/trestle/versions/0.10.0
CC-MAIN-2019-30
refinedweb
1,288
57.27
WordPress 5.0 update was one of the biggest updates in WordPress history. It changed the way WordPress was before that. Blocks were introduced in this update which is very helpful for content creation and best alternatives of page builders. You can also create your own custom Gutenberg blocks as per your needs. This tutorial is meant to be a simple guide to help you create custom Gutenberg block easily. The official block documentation on WordPress is just another piece of crap for beginners as well as experienced programmers. Here is an example of AnglePicker Component from WordPress block documentation. There is no preview or image of the component, you need to manually use it in the code and test how it actually looks. The documentation is full of these annoying things. I have tried to simplify the example for beginners to understand the concept of Gutenberg development. What are Custom Gutenberg Blocks Before diving deep into WordPress blocks we need to know what WordPress blocks are? Blocks exist from WordPress 5.0 to enable smooth visual editing as compare to previous TinyMCE editor. At first, I thought that Gutenberg was not easy to get comfortable as compared to TinyMCE editor. I started hating it, but all this changed in less than a week. Gutenberg is far better than the previous WordPress editor, but it takes some time to get comfortable with it. All elements in the Gutenberg editor including Paragraph, heading and images are blocks. In reality, blocks are nothing but simple HTML tags enclosed by predefined comments (key-value pair comments as defined by WordPress). Example: For a simple paragraph, the HTML will be: <!-- wp:paragraph --> <p>Some Element of P tag</p> <!-- /wp:paragraph --> For Heading the HTML will be: <!-- wp:heading --> <h2>Some Text of h2 tag</h2> <!-- /wp:heading --> Gutenberg blocks are an easy solution and best alternative to page builders. There are a number of plugins which extends the power of Gutenberg by providing more useful blocks. Why You need to Create Custom Gutenberg Blocks Blocks can be very helpful in structuring your content. If you need custom post type structure then it is a good idea to create your own custom block. It is super easy to create custom Gutenberg block. For example, I needed custom code block with an optional title and output so I created a custom code block for this site. There are several blocks examples which you can create to extend the functionality of Gutenberg editor like tabs, accordion, click to tweet, newsletter block, call to action block and so on. How To Create Custom Gutenberg Blocks Creating simple Gutenberg blocks is super easy, you just need some basic understanding of React.js and PHP. If you don’t know both then no problem we will explain each and every line of code briefly. It is recommended to set up a local development environment for this tutorial. You can use flywheel by local for easy setup. - Create an empty folder inside the plugin directory of your local development environment. 2. Create a file named plugin.php with the following content. <?php /** * Plugin Name: HolyCoders Tutorial Plugin * Author: Digvijay Singh * Version: 1.0.0 * Description: Awesome Plugin to create beautiful blocks. */ function addMessageBlock() { wp_enqueue_script( 'Message Block', plugin_dir_url(__FILE__) . 'message-block.js', array('wp-blocks','wp-editor'), true ); } add_action('enqueue_block_editor_assets', 'addMessageBlock'); Blocks are contained inside plugins so we have created a plugin and defined to add our block (enqueued message-block.js). We have added an array of two more script dependencies named wp-blocks and wp-editor. The script wp-blocks handles the block registration as we will see later and wp-editor provides useful prebuilt components. You can read more about wp_enqueue_script()function here. 3. Now add a file message-block.js in the same directory. wp.blocks.registerBlockType('hcblock/messagebox', { title: 'Message Box', icon: 'warning', category: 'common', attributes: { message: {type: "string"}, backgroundColor: {type: "string"} }, edit: function(props) { return React.createElement("div", null, React.createElement("input", { type: "text", placeholder: "Enter Your Message Here", value: props.attributes.message, onChange: function onChange(e) { return props.setAttributes({ message: e.target.value }); } }), React.createElement(wp.components.ColorPicker, { value: props.attributes.backgroundColor, onChangeComplete: function onChangeComplete(color) { props.setAttributes({ backgroundColor: color.hex }); } })); }, save: function(props) { return wp.element.createElement("div", { style: { backgroundColor: props.attributes.backgroundColor } }, props.attributes.message); } }) 4.) Activate the Plugin. 5.) Find and Use the Block from Gutenberg editor. 6.) Publish the post and check its output. In-depth explanation of message-block.js Code: Remember that we passed wp-blocks from plugins.php file (line 13) to our block while registering it, we used it in message-block.js file to register the block. In the very first line, we register the block. The first parameter of registerBlockType function is a unique name for the block, the general convention is namespace/block-name (all in small case). wp.blocks.registerBlockType('hcblock/messagebox', {}) Note: A block name can only contain lowercase alphanumeric characters and dashes, and must begin with a letter. The second parameter in the registerBlockType function is the object of key-value pair that defines the configuration of the block. wp.blocks.registerBlockType('hcblock/messagebox', { title: '', //Title to display in Gutenberg editor icon: '', // (Optional) icon of block description: '', // (Optional) short description about block category: '', // Category which block belongs to attributes: {}, edit: function(props) {}, save: function(props) {} }) All names are self-explanatory so I will explain attributes, edit and save function which handles all operation of render and save the data. Attributes: It contains the key-value pairs of data that we want to deal with. Here we are creating Message Box so let us take the message and boxBackgroundColor attribute for simplicity. It will look like this in the code. attributes: { message: {type: 'string', default: 'This is default Message'}, boxBackgroundColor: {type: 'string', default: '#EEEEEE'} } Edit function: It is a function which describes the render on the backend side (what we see on Gutenberg editor). We define a function which receives props and returns React element (Javascript) to render on the backend side. Note you can use this online tool to generate React code for the corresponding HTML. Save Function: It is a function which describes what to render on the frontend (what user see). It should return the instance of WordPress element and should only depend only on attributes and no outside data or logic. Here is the complete documentation of edit and save function. You can view all the parameters and their types on the official WordPress blocks documentation. Create Custom Gutenberg Blocks Using ‘create-Guten-block’ It will be a nightmare to write such code without babel and Webpack because we can’t use the babel online tool each time to write and paste code. The easy solution to this is create-guten-blocks. It is a toolkit for building WordPress blocks without any complex configuration. It has Webpack, babel and other necessary tools preconfigured. If you have used React.js create-react-app then it is similar to that, the only difference is that is build for WordPress block development. You should go with this method If you are just getting started or you don’t want to configure everything on your own (quite complex). If you ever feel that the configuration is blocking you from certain features then you can eject to custom setup at any time. Create WordPress blocks Using Plugins The easiest way to create Gutenberg blocks. If you hate coding (you shouldn’t) then you must opt this way to easily create useful Gutenberg blocks. This was my first choice to create custom Gutenberg blocks for this blog because I haven’t worked on PHP. But I didn’t want to increase the load of one extra plugin (this is a complete lie, it was programmer inside me that forced me to start everything from scratch). There are three plugins which you can use to create Gutenberg blocks easily with very little coding. LazyBlocks The best plugin to create Gutenberg blocks for Free. This is the only plugin which is completely free and opensource. This plugin is just a piece of art. This should be your first choice if you want to create Gutenberg blocks easily. Here are some beautiful features of this plugin: - More than 20 Control Components - Well Documented - Custom Templates for Posts and pages - PHP and Handlebars support for displaying content - Conditional display content - Import and export of blocks using JSON BlockLab This is another plugin to create Gutenberg blocks. Most of its components are available in the free version but if you want to create some complex block then you need to buy its premium version because some very useful features like Repeater fields are not free. The features that it offers in premium version are already free in lazy blocks. ACF – Advanced Custom Field The most popular plugin to create custom fields. They introduced the feature to create custom Gutenberg blocks in version 5.8.0. ACF Pro is required to create custom blocks. It will not be a wise decision to use a sword for sewing purpose. The same thing can be related here, Advanced Custom Fields plugin is used for more complex tasks. If your website heavily depends on custom content then you should go for ACF. Conclusion These were some of the best ways to create custom Gutenberg blocks. Sadly there are not many resources available for Gutenberg development as of now, because it is relatively new and not everyone is ready to accept the Gutenberg update (heavy content websites). WordPress block documentation is the only way to seek help if you get stuck. Some sections are really well explained while some are crap and require proper explanation and documentation. I will try to create more tutorials on Gutenberg development. If you have any confusion or question feel free to ask in comments. You may also like: Best managed WordPress Hosting.
https://holycoders.com/create-custom-gutenberg-block/
CC-MAIN-2020-34
refinedweb
1,655
57.27
Sometimes, people will argue that C++ scales better than C for development teams and large projects because it has classes. While code organization is certainly helpful, Linus Torvalds will argue it is unnecessary because there are other ways of achieving code organization – such as prefixing function names with a “namespace.” JS++ is influenced a lot by C++ and Bjarne Stroustrup’s philosophies. While most people point to classes as the reason for C++’s success and scalability, there is a more subtle reason it scales so well: readability. The C++ STL provides a level of abstraction without sacrificing performance. Stroustrup said he wanted std::vector to be as fast as C arrays. You can implement these details yourself, but why not just use std::vector? While working on the JS++ Standard Library, Stroustrup’s philosophies have once again come into play. As an example, think about converting a JavaScript number from decimal (base 10) to hexadecimal (base 16). Can you think – off the top of your head – how it would be done? There is a certain pleasure derived when you can read through someone else’s code as clearly and effortlessly as you might be able to read a book. We enforce these standards internally at Onux, and I’m going to reveal how it influences the design of JS++. Getting back to the example, this is how it’s done in JavaScript: var x = 97; x.toString(16); I’ve argued that we need to deprecate JavaScript’s Number.prototype.toString(base). Instead, we are proposing a toBase(unsigned int base) method to convert from base 10 (decimal) to a specified arbitrary base. Consider the readability of the following functions: x.toString(16); x.toBase(16); Both functions do the same thing, but I’ve argued that no programmer will ever have to look up what toBase(16) means versus toString(16). The beauty of a compiled language is that we have full control over optimization. Of course, in JavaScript, you can do this: function toBase(number, base) { return number.toString(base); } All else being equal, and you care about performance, wouldn’t you much rather have this? x.toString(base); In JS++, we can perform this exact optimization in a process known as “function inlining”. Here’s an example from Wikipedia. In other words, even though JS++’s toBase(16) is clearer than toString(16), there is no loss of performance because toBase just gets compiled to toString with no additional function calls; the toBase method basically doesn’t even exist in the final generated code. We can take this one step further. Converting from base 10 (decimal) to base 16 (hexadecimal) is quite common. Why not provide a Standard Library function to do this? char x = `a`; x.toHex(); In this case, toHex() is just an alias for toBase(16), which is in turn an alias for toString(16). With each layer of abstraction, we gain more clarity and readability in our code. This makes it easy for others to work on our code, it makes it easy to scale development for teams, and – most of all – each layer of abstraction results in no performance loss. In fact, toHex() just gets compiled to toString(16). Think for a moment. You’re tired, groggy, and unmotivated. Would you rather read code that looks like toString(16) or toHex()? This readability gain has only been focused on 1-3 lines of code so far. What we want to achieve with JS++ will expand across your entire code base. When I meet programmers and they say they have problems with “spaghetti code”, I almost always guess (correctly) that they’re working at a JavaScript house. Classes – by themselves – won’t stop developers from writing unreadable code. We need abstraction… without sacrificing performance. This is what’s coming with the JS++ Standard Library. Here’s to a better future!
https://www.onux.com/jspp/blog/scaling-jspp-abstraction-performance-and-readability/
CC-MAIN-2020-34
refinedweb
648
65.01
Hi there, I've tested Pycharm beta version which has already a professional IDE features list. i have some question about features : 1) intention actions for functions : for example, you type : def main(): print("hello world") myfun(1,2,3) if __name__=='__main__': main() if you put cursor on myfun and press Alt+Enter, there are no intention action for creating a function with this name. It seems that this kind of intention actions only exists for classes (missing method). 2) Can you put specific parameter in Run > Edit Configurations in order to open pop-up when execute a command in order to set a script parameter ? for example : in menu Run > Edit Configurations, you add script parameters : Script : .../p1.py Script parameters : --flows=flows.xml --toxml=flow1 (here i want to set --toxml parameter value each time i run my script (flow1, flow32, flow 76) via a popup window) this feature exists in eclipse (see variable ${string_prompt}, ${file_prompt},... (for example :) 3) for refactoring purposes, an extract function similar to extract method could be interesting. what do you think ? regards, volov Hello vo, I've tested Pycharm beta version which has already a professional IDE Indeed, right now we don't offer any quickfixes for unresolved unqualified references. This will likely be implemented in a future version: No, there is no such feature in the IntelliJ Platform (and, consequently, in PyCharm) at the moment. You can file a YouTrack issue. In fact the "Extract method" refactoring works just fine for functions outside of classes. The name might be confusing, but we prefer to stick to the canon here. -- Dmitry Jemerov Development Lead JetBrains, Inc. "Develop with Pleasure!" Hello Dmitry, thanks a lot for your feedbacks. A) first, happy to learn about future features like quickfixes for unresolved unqualified references ! B) for the ${prompt_string}, i will take a look on how to create a track as i think more in python than in java, you could use script with required parameters that could vary (you have the feeling of the Up key in shell, replace a script argument then press Enter : i think that this one could be longer in PyCharm to setUp without this prompt and even with prompt it will be longer than shell command line + history. But IDE leverages when you could mix this feature with Run in debug mode :-) . note : i've noticed that Run in debug mode opens a Debug tab at the bottom of the window but when your script exits with an error, you're stick to this tab with "no content" or "disconnected" msgs. A cool thing would be to focus on Console tab when an error occurs. C) for the extract function, i've tested on a simple script and it works as you said. On another script, a little popup appears with msg like : "Cannot perform refactoring when execution flow is interrupted". As this script requires script parameters to work (filenames to work on), i wonder if this is possible to fix this point in order to refactor as my script works fine when it gets its required parameters. Do not know how to fix it. regards, volov. Hello vo, We try to minimize focus switches not controlled by the developer because they can disrupt the development flow. We do highlight tabs which contain new content with special icons. Most likely this depends not on the parameters, but on the structure of the code block you're trying to extract. If you show me a sample code fragment, I'll try to explain what happens in that specific case. -- Dmitry Jemerov Development Lead JetBrains, Inc. "Develop with Pleasure!" Hello Dmitry, here is a piece of code where i want to use extract_method <from here> to <to here> marks (added in the source) : Pycharm open a little popup at the beginning of the selection zone with this following message : "Cannot perform refactoring when execution flow isinterrupted". I've installed the too big Pydev IDE to check if it fails too. In fact, it succeeds to create a function that i've called update_tree (see below) : i don't know why this message pops up when i'm trying to refactor in pycharm. regards, volov. Thanks a lot for reporting this, I've created an issue: Feel free to watch it and get notified once it is fixed. Regards, Oleg Hi Oleg, thanks a lot for your quick feedback ! i'll watch this issue. regards, volov It is already fixed :-) Regards, Oleg
https://intellij-support.jetbrains.com/hc/en-us/community/posts/205801089-Pycharm-fast-review
CC-MAIN-2019-18
refinedweb
749
69.52
: January 2018 edited January 2018 in FireCloud Announcements January 22, 2018 Improvements - FireCloud has been upgraded to Cromwell version 30.1. This version includes all the upgrades from Cromwell version 30, plus some bug fixes that were applied to 30.1. You can read about what is included in this upgrade on Cromwell's release page here January 17, 2018 New Features - Owners of billing projects will now automatically have the Google Billing Manager role on all of their projects. This gives them the power to switch billing accounts on their projects via the Google Cloud Console. - Billing project owners can now switch off batch compute (via Cromwell) for all workspaces in the project via the API. - There are new API endpoints for Project Owners to enable/disable the BigQuery Job User role, which allows running queries, for users in their projects. Bug Fixes - There were errors encountered when exporting consents from ORSP to DUOS, which has now been fixed. You can now export consents as expected. January 4, 2018 New Features - There is a new Can Share option when sharing your workspace or method. If you grant this permission to someone who has access to your workspace or method, they can then go on to share it with others. Previously, if you wanted to allow someone to share your workspace or method with others, you would have to give them Owner access, which comes with a lot more permissions than just the ability to share. - Method namespaces can now be marked as certified. Similar to the verifiedrole on many social media sites, you will be able to have your namespace marked as certified. We are still working on a process in order to become certified, so please stay tuned! - There is a new Featured Methods section in the Methods Repository. We will be using this section to feature GATK workflows published by the Broad, as well as user-created methods suggested by you. Post edited by KateN on Tagged:
https://gatkforums.broadinstitute.org/firecloud/discussion/11125/release-notes-january-2018
CC-MAIN-2019-51
refinedweb
330
69.01
#include <wchar.h> long wcstol(const wchar_t *restrict nptr, wchar_t **restrict endptr, int base); long long wcstoll(const wchar_t *restrict nptr, wchar_t **restrict endptr, int base); #include <widec.h> long wstol(const wchar_t *nptr, wchar_t **endptr, int base); long watol(wchar_t *nptr); long long watoll(wchar_t *nptr); int watoi(wchar_t *nptr); The wcstol() and wcstoll() functions convert the initial portion of the wide character string pointed to by nptr to long and long long representation, respectively. They first decompose the input string into three parts: an initial, possibly empty, sequence of white-space wide-character codes (as specified by iswspace(3C)) a subject sequence interpreted as an integer represented in some radix determined by the value of base. These functions do not change the setting of errno if successful. Since 0, {LONG_MIN} or {LLONG_MIN}, and {LONG_MAX} or {LLONG_MAX} are returned on error and are also valid returns on success, an application wanting to check for error situations should set errno to 0, call one of these functions, then check errno. The wstol() function is equivalent to wcstol(). The watol() function is equivalent to wstol(str,(wchar_t **)NULL, 10). The watoll() function is the long-long (double long) version of watol(). The watoi() function is equivalent to (int)watol( ). Upon successful completion, these functions return the converted value, if any. If no conversion could be performed, 0 is returned and errno may be set to indicate the error. If the correct value is outside the range of representable values, {LONG_MIN}, {LONG_MAX}, {LLONG_MIN}, or {LLONG_MAX} is returned (according to the sign of the value), and errno is set to ERANGE. These functions will fail if: The value of base is not supported. The value to be returned is not representable. These functions may fail if: No conversion could be performed. See attributes(5) for descriptions of the following attributes: iswalpha(3C), iswspace(3C), scanf(3C), wcstod(3C), attributes(5), standards(5) Truncation from long long to long can take place upon assignment or by an explicit cast.
http://docs.oracle.com/cd/E36784_01/html/E36874/wstol-3c.html
CC-MAIN-2017-17
refinedweb
335
51.58
We noticed that some developers were having trouble overlaying the AIR 2 beta SDK on top of the Flex SDK in Flex Builder and Flash Builder. Looking into the reports a bit further, we discovered that our instructions were not quite accurate. I just updated the AIR 2 release notes with more detailed instructions, so hopefully that will clear things up. If they’re still not clear, let us know via the comments and I’ll clarify further. Also, we are looking at ways of making this process much easier in the future. Thanks for bearing with us in the meantime. Its great that the Adobe team took the effort to correct it. It makes them more professional. I follow the instructions and still get errors like this (I have the correct namespace as I am using the example provided: VerifyError: Error #1014: Class flash.events::StorageVolumeChangeEvent could not be found. at _FileTileWatcherSetupUtil$/init() at mx.managers::SystemManager/[C:autobuild3.2.0frameworksprojectsframeworksrcmxmanagersSystemManager.as:3058]
http://blogs.adobe.com/flashplayer/2009/12/better_sdk_overlay_instructions.html
CC-MAIN-2018-17
refinedweb
164
57.57
Debugging is like being the detective in a crime movie where you are also the murderer. – Felipe Fortes, technical lead at Flipboard Debugging, while frustrating, is absolutely essential. Figuring out what your crazed, sleep-deprived self of yesterday meant to do and what your code is actually doing is no simple task. So we’re discussing some more debugging tips before you start building this search engine. Tip 0: Read the GDB Notes I mean Professor Campbell’s this time, particularly the part of patching code while inside of GDB. This is super awesome. Learn it. Not having to quit GDB to make every little change to your code is fantastic, and it will save you lots of time in the long run. Step through the examples in both the notes as well as the previous recitation yourself and see if you can really employ the practices. I urged you to do this in the lab as well, and I hope you followed my advice. Moving on. Tip 1: Small Functions Why do we split things up into multiple functions? Why not have one giant main method? It’s to save the programmer’s time. Anytime you see code appearing in multiple places, you should factor it out and write a single function to do this. C, unlike many other places, has much much smaller function call overhead, so you’ll find this won’t impact your performance much. But besides all of the usual bull-**** you hear, why do you really write many functions? Let me be devil’s advocate for a moment. I don’t care that I have to type out lots of things. I have the power of Vim and regular expressions! I can copy-paste content onto fifty files with a single Unix command! Hell, I won’t even write loops. I’ll literally write the same loop body fifty times and manually decrement i for all I care. Come on, give me a better reason. Well, there are really two main reasons two write small functions besides the whole saving programmer time / code design. Tip 2: Code, Don’t Write - Test Driven Developoment This really is a part of the previous point, but it’s large enough that I decided to allocate a separate bullet point for it. There are lots and lots of different kinds of tests. There are unit tests, integration tests, smoke tests, regression tests, acceptance tests, build tests, and many many more. People dedicate their whole lives to testing code - yes, it is that important. But since this is your first introduction to standardized testing, we’ll start with unit tests. The purpose of a unit test is to test a very narrow segment of code - in Java, it would be to test a single method of a class. In C, it would be to test a single function. Designing tests in a rigorous manner to really make sure your functions work is super important. In fact, there is a whole group of people who write their tests first. By writing the tests first, you plan out the entire design of the program down to what functions will do what. It allows you to think abstractly about the problem and focus on high-level design choices. Then, you go ahead and write the individual functions. It’s just like writing an outline and then filling it in to create an essay. This approach is called Test Driven Development. It’s all the rage these days in industry. But we’re not in industry, so I won’t mandate you do it this way. Suffice it to say that I’m thoroughly convinced, and these days even for school projects I employ TDD. The most common concern by testing newbies is something of the following sentiment: “Well okay, I have to in addition to writing the code I would normally, write all these tests that have to be all rigorous and thorough, yo. Talk about wasting time.” Listen well “yo”. This is work you’ll have regardless - it’s just that you’ll be doing it in pieces throughout rather than all at once. By doing it all at once, you’re forced to think about your design and implementation - and hey by the way, it’s been proven by many a person smarter than you or I that TDD saves time in the long run. It’s become standard industry practice. The real reason people run into so many issues coding is that they treat it like writing an essay. When writing a paper, often times you breeze through the entire paper with a rough draft. Then you go over it several more times in phases of editing, and eventually a polished paper comes out. This is not how to approach coding at all. If you think you can write the entire assignment and just “edit” your way through it, you’re gonna have a bad time. Write a small function, make sure it works. Move onto the next small function. Rinse and repeat. Tip 3: Unit Testing So I hope the immensely long rant convinced you that testing your code is absolutely essential. If you’re ever going to be doing industry standard coding (getting a job, doing an internship, even doing open source volunteer work,) submitting the tests along side your code is expected, not viewed as a bonus. So let’s look at doing some actual unit tests! Admittedly, unit testing in C is slightly more difficult than Python’s nose framework or the very famous Java JUnit. The Google Testing Framework is in my opinion the best one - some may argue that Check is better, but I disagree. Regardless of which one you use, unit test your code. Today’s recitation, however, will focus on the incredibly simple MinUnit. The entire “framework” is literally just 3 lines of code – 4 because one line is wrapped, and all of it is contained within minunit.h. It doesn’t get better than this. It’s my hope that by presenting the most simple framework in the existance of the web, all of you will do some sort of testing of your code. /* file: minunit.h */ #define mu_assert(message, test) do { if (!(test)) return message; } while (0) #define mu_run_test(test) do { char *message = test(); tests_run++; \ if (message) return message; } while (0) extern int tests_run; That’s it! That is the entire framework. So how do we use it? Let’s go through an example. Here is my obviously buggy swap.c C file that I want to test: void swap(int *a, int *b) { return; } And here is the corresponding header file: void swap(int *a, int *b); Let’s test it! For this extremely introductory example, let’s just do one single test case. I’ve put all my tests in test.c: #include <stdio.h> #include "swap.h" #include "minunit.h" int tests_run = 0; int a = 5; int b = 4; static char *test_swap() { swap(&a, &b); mu_assert("error, a != 4", a == 4); mu_assert("error, b != 5", b == 5); return 0; } static char *all_tests() { mu_run_test(test_swap); return 0; } int main() { char *result = all_tests(); if (result != 0) printf("%s\n", result); else printf("ALL TESTS PASSED\n"); printf("Tests run: %d\n", tests_run); return result != 0; } Couple things to note here. Obviously, whatever file is doing the actual testing will have to #include "minunit.h". Feel free to include this file in your SVN repo. Next, you’ll notice the actual testing file has to do a bit of setup to create the data that you will test. This is common practice, and it leads to testing files being a bit long - but luckily it’s not too difficult to create it. Lastly, the all_tests function here is rather unnecessary since we only have a single test; however, if we had many tests, it would be nice to put all the calls within a single function. Okay, let’s run it. error, a != 4 Tests run: 1 Bummer. It doesn’t work. Well, the fix is simple enough: #include "swap.h" void swap(int *a, int *b) { int temp = *a; *a = *b; *b = temp; } And the new output: ALL TESTS PASSED Tests run: 1 If you are more interested, here are further readings. Tip 4: Advanced GDB These are two things I forgot to mention in my actual GDB tutorial! Debugging segmentation faults with GDB is super easy. Use the backtrace or bt command if you’re running the executable within GDB. If not, don’t worry. Usually segmentation faults will produce core dumps, which produce a .core file. Usually there will be a series of numbers after the word “core” in the filename, but choose the latest one (otherwise you’ll be debugging a wrong core dump!). The other really handy trick is attaching GDB to a process that’s already begun. Once you have the process ID to which you wish to attach, simply call gdb -p PID. One last thing to talk about before we discuss the details of the lab: GNU Make. For this lab, you’ll have many source files and many header files. Imagine you have some of the following completely made up files: dictionary.c, dictionary.h, crawler.c, crawler.h, url.c and url.h. Now, every time you compile, you have to do mygcc dictionary.c url.c crawler.c, and if you made a special debugging version, you have to do a different command. Then all those .o and .gch files created may have to be cleaned up after your test run. There should be a way to automate this process. Enter GNU Make. GNU Make is an awesome tool used to create configurable executables quickly an easily, including debuggable versions of those executables, and remove any ‘junk’ created afterwards (think .gch, .o, and other files). Here is an example make file to help you get started. # Lines that start with "#" are comments # Filename # Description / Purpose # Any specific warnings to build special files, should they apply # Which compiler? This should be gcc CC=gcc # Any params you'll pass to gcc # I have 2 sets of params - one for normla, one for a debugging version CFLAGS=-Wall -pedantic -std=c99 -O3 DEBUG_FLAGS=$(CFLAGS) -ggdb # What are the relavent .c .h files SOURCES=./crawler.c ./crawler.h ./dictionary.c ./dictionary.h ./url.c ./url.h # Here are the make commands crawler:$(SOURCES) $(CC) $(CFLAGS) -o crawler $(SOURCES) debug:$(SOURCES) $(CC) $(DEBUG_FLAGS) -o debug $(SOURCES) clean: rm -f debug rm -f crawler rm -f *.o rm -f *.gch rm -f *# rm -f *~ rm -f *.swp Now, if I call make clean it will execute all the statements below it. If I call make debug, it will create an executable called debug that I can use GDB on. No more long complicated gcc commands - it’s all done for you! The directory to find your source files (notice ./crawler.c instead of crawler.c) is rooted at your Makefile. So if crawler.c is inside another folder foo, the Makefile would have said ./foo/crawler.c. The most common source of bugs is that the indented lines underneath each command like crawler or debug or clean must use a tab - not spaces. Do soft tabs screw this up? Try it and find out. :) You may be surprised. You should of course include in your README how someone else should build / use your Makefile, and any special instructions. Obviously, submit your Makefile to SVN as well. Okay, now that we have our toolbox filled to the brim with new knowledge and tips, let’s talk about the lab. This is not an easy lab. I’ll say it again - this is not an easy lab. Start early, get it working. You’ll find your future labs depend on this one being a success. Do I expect all of you to turn in a bug-free lab this time? Nope. And I’m not even going to point out all of the bugs. Your job over the next four weeks is to build a cohesive search engine. If that means your query engine breaks because of some small bug you wrote into your crawler three weeks beforehand then so be it. You should be thinking of this as one large lab. In fact, part of your later assignment will be to come back to this lab and make it squeaky clean - no memory leaks, optimize this, etc. But we’ll talk about tools like valgrind and gprof later. For now, let’s talk about the design pattern for this assignment. You’ve actually seen the implementation and design patterns for this assignment in class already, and most likely Professor Campbell has spoken about this at length. There’s not much discussion here - you’re all implementing the design we laid out in class, end of story. How does a crawler work? We’re given a starting URL, often referred to as the seed. We crawl until we hit a predetermined maximum depth value. The seed is given a depth of 0 by convention. All URLs found in the seed are of depth 1, and so on. So what data structures do we need? These are of course just suggestions, but I think they’re very good ones that we should all use. The first one is obviously some sort of ordered data structure we have to implement; the real question is what order should this be. Should it be a stack? A queue? Something else? Well, this depends on your implementation of the crawler. I would do something in the style of Breadth First Search: crawl everything with depth 0 first, then crawl everything with depth 1, and so on. This would mean a standard queue. For the actual implementation, you need something that allows you to pop off the front quickly and something that allows you to add onto the end very quickly. What this becomes is up to you. The obvious solution is a linked list. If you have a better, non-obvious solution, feel free to implement that as well. For the second data structure, we have some choices. But there will be many, many URLs that we are searching for. One choice - one that optimizes for time complexity, rather than memory - is a dictionary (or a HashMap for the Java crowd). How do dictionaries (or hash tables) work? A dictionary is a data structure that, in the worst case, takes O(n) lookup time. So why use them? Hell, a simple array takes O(n) worst case lookup time. Why not just use those? Well, most of the time, a dictionary takes constant lookup time. Let’s imagine a dictionary as a simple array with length 1000. Now let’s say we want to insert a string "Hello world" into this dictionary so we can easily find it. We take a hash function, apply it to a string, and it gives us a value. Good hash functions always provide unique results for unique inputs - that is, they should have no collisions. If two different strings "goodbye" and "hello" both had a hash value of 6, well then that wouldn’t be a very good hash function would it? The hash function we’ve given you is a pretty decent one - feel free to use a better one should you find one online (of course you will cite it…). Great! So we have some unique hash value as a result of this awesome mathy hash function. Now what? Well we can’t just use that as an index automatically - remember how our dictionary is just a 1000 element array? So how do we fit this integer into our array? How about the modulus operator. By modding the output of the hash function with the size of the dictionary, we get some value between 0 and 999 inclusive - the size of our dictionary. So we can use this value as our index into the dictionary. So… is that it? No! The modulus function artificially creates collisions! Think about it. Imagine we have a perfect hash function which never creates the same value given two different inputs. Great. So we have two values 1000 and 2000. Well, when we mod them both by 1000, won’t they both be assigned to 0? Of course. So what do we do now? Well, what if we changed our data structure from being a simple array of ints to an array of linked lists. Now that each element in the array contains a linked list, we can assign both both of those hashes to the same index in the linked list. But this adds another wrench into our problem. What if we wanted to find out whether the string "goodbye" was in our dictionary? Well, we’d input that value into our hash function, mod the operator by the size of the dictionary, and look in that index. But we can’t just assume that if the value is not null it will be what we’re looking for. Remember, there can be collisions - the string "hello" could have been assigned to the same value. So we iterate through the linked list and look for the string, and only if we don’t find it there can we conclude it is not in the dictionary. Is it becoming clear why in the worst case a dictionary is also O(n)? What’s the worst case? Imagine a dictionary with the size of 1. Just a single pointer to a linked list. That means every call to the dictionary would have to look through the entire linked list - in other words, just like a normal array. You will implement a hash table for this lab, so you’ll get to experience its inner workings first hand. Fancier hash tables in languages like Java or Python actually dynamically adjust in size. When they see that the number of collisions has reached a certain threshold, they’ll increase their size from 1000 to 2000 (or whatever the size is) and then re-hash all their elements. This is naturally a rather time intensive operation, but the amortized cost is quite small (algorithms, people!). Great, so now you’ve designed all your three data structures, the methods that go along with them, and you’ve unit tested all of them so you know the error cannot be within any of these files. Awesome! If you’ve truly done all of these things, then the actual crawler.c file just has to be a single main method - in fact, it doesn’t have to be more than 25 lines of code! 1. Create all the data structures you'd use - your dictionary, your queue, etc 2. As long as your queue is not empty 3. Pop off the first element from the queue 4. Download the contents of the page the URL points to 5. Search through all URLs in this page, adding them to the dictionary / queue if they're not already there and they're not beyond MAXDEPTH 6. Store the depth of this URL, and all the URLs you found in it into a file That’s it! There’s the lab. Go get ’em!
https://cs50.notablog.xyz/lecture/Recitation3.html
CC-MAIN-2018-43
refinedweb
3,217
82.65
July 10, 2008. In this tutorial we are going to look at building a simple Django application that integrates with the Yahoo BOSS search framework. More specifically we're going to be using the BOSS Mashup Framework. First, lets address the most pressing question: What the hell is Yahoo BOSS? BOSS is Build Your Own Search Service and presents us with a fairly low level interface with Yahoo's search engine, not just to search our own site, but to search pretty much anything. The BOSS Mashup Framework, which is what we are going to be using, is open for any developers and has very few restrictions. First lets get all the little configuration stuff out of the way. There is a fair bit, but none of it is very difficult. As a warning, I'll point out that the BOSS Mashup Framework requires Python 2.5, and won't work with previous versions without some changes1. Create a new Django project, lets call it my_search. django-admin.py startproject my_search Create a Django app inside my_search, lets name it yahoo_search. python2.5 manage.py startapp yahoo_search Download the Python library for controlling BOSS. Unzip it into the my_search/yahoo_search folder, and rename it to boss. unzip boss_mashup_framework_0.1.zip rm boss_mashup_framework_0.1.zip mv boss_mashup_framework_0.1 boss Yahoo didn't do a great job of packaging something that just works, so we have to go through a few steps to build the framework. (Although, these sub-instructions here are lifted almost directly from the included README file, so its not that they didn't document it, just that its a bit of a pain to get working.) In Yahoo's defense, I think the reason they did a 'bad' job of packaging is that they probably ran into some incompatable licenses. Install Simple JSON if you don't have it installed. You can check if you have it installed by entering a Python2.5 prompt and typing import simplejson If that didn't work, download Simple JSON. And then install it. python2.5 setup.py build python2.5 setup.py install Create the folder my_search/yahoo_search/boss/deps/. Download dict2xml and xml2dict, and extract them into the deps folder, remove the .tgz files, and return to the boss directory. tar -xzvf dict2xml.tgz tar -xzvf xml2dict.tgz rm *.tgz cd .. Now we can finally build the framework. python2.5 setup.py build python2.5 setup.py install Next, we have to update the settings in boss/config.json. I only changed the first three settings: appid, org. The appid is the one you were given upon signing up for BOSS. Check that it all worked by running (from within the boss directory): python2.5 examples/ex3.py From here on things are going to deviate from the README a bit, we're going to move example and yos into our yahoo_search directory, move config.json into our my_search directory and get rid of everything else (well, you might want to keep the examples folder for your own benefit). mv config.json ../../ mv yos ../ mv examples ../ cd .. rm -r boss Okay, now we're all done with the setup, and are ready to move on to putting together a simple Django application that uses the BOSS Mashup Framework. Now that we have all the setup out of the way, we need to decide exactly what our app is going to do. To begin with (however, fear not, this is posed to turn into a multi-part series where we gradually put together a more interesting app) we're going to do something really simple: search Yahoo News based on the results of a posted form. Yep. As simple as you can get. We'll make it more interesting afterwards, when we have something that works. First lets edit our project's urls.py to include urls from our yahoo_search app. my_search/urls.py is should look like this: from django.conf.urls.defaults import * urlpatterns = patterns('', (r'^', include('my_search.yahoo_search.urls')), ) However, we haven't actually created my_search/yahoo_search/urls.py yet, so lets do that real quick. from django.conf.urls.defaults import * urlpatterns = patterns('', (r'^$', 'my_search.yahoo_search.views.index'), ) As you can see by looking at urlpatterns we're only going to have one view index, and it is going to be handling everything for us. indexview Now we're going to write the index view, which will be handling everything for us. Start out by opening my_search/yahoo_search/views.py. Lets start out with all the imports we're going to need. from django.shortcuts import render_to_response from django import newforms as forms from yos.boss import ysearch from yos.yql import db We're going to use render_to_response to render templates, newforms to query our user for their search term, ysearch for retrieving data from BOSS, and db to format those retrieved results into something a bit more managable. searchfunction Now lets write a simple search function we'll use for querying BOSS. def search(str): data = ysearch.search(str,vertical="news",count=10) news = db.create(data=data) return news.rows If you wanted to search from Yahoo's web results instead of their news, you'd simply change the line data = ysearch.search(str,vertical="news",count=10) to data = ysearch.search(str,count=10) The data returned by the search function is a list of dictionaries that look like this: { u'sourceurl': u'', u'language': u'en english', u'title': u'Google Works With eBay And PayPal To Curtail Phishing', u'url': u'', u'abstract': u'Google Gmail requires eBay and PayPal to use DomainKeys to authenticate mail in an anti-phish effort', u'clickurl': u'', u'source': u'ChannelWeb', u'time': u'22:26:08', u'date': u'2008/07/11' } The search function is very basic, but will be enough for this initial version of the application. Lets move forward. newform Next we need to create a (very) simple newform that we will use for querying our users' for their search terms. class SearchForm(forms.Form): search_terms = forms.CharField(max_length=200) Thats all we'll need for now, carry on. (I said it was simple.) indexview Okay, now lets stop for a moment and consider what the index view needs to accomplish. searchto put together the results. render_to_responseto render a template contain a Okay, translating that into Python we get our index function: def index(request): results = None if request.method == "POST": form = SearchForm(request.POST) if form.is_valid(): search_terms = form.cleaned_data['search_terms'] results = search(search_terms) else: form = SearchForm() return render_to_response('yahoo_search/index.html', {'form': form,'results': results}) Admittedly we haven't written the index.html template yet, that will be our next task. Beyond that, this is a pretty standard Django view. index.htmltemplate First, we need to create the template directory for our yahoo_search app. From inside the my_search/yahoo_search directory: mkdir templates mkdir templates/yahoo_search And then create the file templates/yahoo_search/index.hml, and open it up in your editor. This is going to be a simple template, containing only an input box for searching, and a listing of the results. It'll look like this: <html> <head> <title>My Search</title> </head> <body> <h1>My Search</h1> <form action="/" method="post"> <table> {{ form }} <tr><td><input type="submit" value="Search"></td></tr> </table> {% if results %} <ol> {% for result in results %} <li> {% comment %} Notice we are using {{ result.clickurl }} instead of {{ result.url }}. You might wonder why we are doing that, and the answer is pretty simple: because thats what Yahoo is asking us to. {% endcomment %} <span class="title"> <a href="{{ result.clickurl }}">{{ result.title }}</a> </span> <span class="date"> {{ result.date }} </span> <span class="time"> {{ result.time }} </span> <span class="source"> <a href="{{ result.sourceurl }}">{{ result.source }}</a> </span> <p class="abstract"> {{ result.abstract }} </p> </li> {% endfor %} </ol> {% endif %} </body> </html> If you haven't been keeping up, or if your code is behaving strangely, you can grab a zip of all these files. Just unzip these somewhere, fill in the first three entries (your BOSS appid, org) in my_search/config.json, and you'll be ready to take a look at the app in the next step. Update 7/12/2008: Unfortunately, the way the BOSS library has been built it isn't enough to simply copy over yos folder, and instead you will need to follow the installation steps for the BOSS Framework listed above (step #6). Specifically, you need to work through those steps and finish with: python2.5 setup.py build python2.5 setup.py install Its a bit of a pain, and I'll see if I can clean things up to make it simpler. Now we've finished building the app, lets fire it up. python2.5 manage.py runserver Navigate over to, and you'll see a friendly search box waiting for you. Type in a search term, hit enter, and voila, you'll see a list of your results. I searched for iPhone and got a page of results like this: One gotcha I'll point out is that the helper library Yahoo has supplied relies on config.json being in the base directory where the Python is being run from. This will be true for your development setup, but won't necessarily be the case on your deployment server. I believe the best solution here would be to add the contents of config.json to your project's settings.py file and tweak the yos/boss/ysearch.py file to load the settings using django.conf.settings instead of from disk. Let me know if you have any questions, and I'll try to answer them. Time permitting, I'll continue with another segment or two working on building a slightly more compelling search service than what we have created so far. Update 7/12 Thanks to Wayne's comments I was able to simplify the search function quite a bit. Specifically, he pointed out that I was using the library to prepend ynews$ to all the dictionaries' keys, then getting upset it was there and removing it manually. Woops.
https://lethain.com/yahoo-s-build-your-own-search-service-in-django/
CC-MAIN-2019-51
refinedweb
1,693
66.44
[ Usenet FAQs | Search | Web FAQs | Documents | RFC Index ] Search the FAQ Archives Search the FAQ Archives - From: spp@psa.pencom.com Newsgroups: comp.lang.perl.announce, comp.lang.perl.misc Subject: comp.lang.perl.* FAQ 2/5 - Information Sources Date: 27 Jan 1996 01:22:29 GMT Message-ID: <SPP.96Jan26202229@syrinx.hideout.com> Archive-name: perl-faq/part2 Version: $Id: part2,v 2.9 1995/05/15 15:44:29 spp Exp spp $ Posting-Frequency: bi-weekly Last Edited: Thu Jan 11 00:54:41 1996 by spp (Stephen P Potter) on syrinx.psa.com This posting contains answers to general information questions, mostly of about information sources. 2.1) Is there a USENET group for Perl? Yes there is: comp.lang.perl.misc. This group, which currently can get up to 150 messages per day, contains all kinds of discussions about Perl; everything from bug reports to new features to the history to humour and trivia. This is the best source of information about anything Perl related, especially what's new with Perl5. Because of its vast array of topics, it functions as both a comp.lang.* style newsgroup (providing technical information) and also as a rec.* style newsgroup, kind of a support group for Perl addicts (PerlAnon?). There is also the group comp.lang.perl.announce, a place specifically for announcements related to perl (new releases, the FAQ, new modules, etc). Larry is a frequent poster to this group as well as most (all?) of the seasoned Perl programmers. Questions will be answered by some of the most knowledgable Perl Hackers, often within minutes of a question being posted (give or take distribution times). 2.2) Have any books or magazine articles been published about Perl? There are a number of books either available or planned. Mostly chronologically, they are: Programming Perl (the Camel Book): Author: Larry Wall and Randal Schwartz Publisher: O'Reilly and Associates ISBN 0-937175-64-1 (English) ISBN 4-89052-384-7 (Japanese) ISBN 3-446-17257-2 (German) (Programmieren in Perl) (translator: Hanser Verlag) This is probably the most well known and most useful book for 4.036 and earlier. This part of O'Reilly's hugely successful "Nutshell Handbook" series. Besides serving as a reference guide for Perl, it also contains tutorial material and is a great source of examples and cookbook procedures, as well as wit and wisdom, tricks and traps, pranks and pitfalls. The code examples contained therein are available from or. Corrections and additions to the book can be found in the Perl4 man page right before the BUGS section under the heading ERRATA AND ADDENDA. Learning Perl (the Llama Book): ISBN 1-56592-042-2 (English) ISBN 4-89502-678-1 (Japanese) ISBN 2-84177-005-2 (French) ISBN 3-930673-08-8 (German) Another of O'Reilly's "Nutshell Handbooks", by Randal Schwartz. This book is a smaller, gentler introduction to perl and is based off of Randal's perl classes. While in general this is a good book for learning perl (like it's title), early printings did contain many typos and don't cover some of the more interesting features of perl. Please check the errata sheet at, as well as the on-line examples. If you can't find these books in your local technical bookstore, they may be ordered directly from O'Reilly by calling 1-800-998-9938 if in North America and 1-707-829-0515 otherwise. Johan Vromans* created a beautiful reference guide. The reference guide comes with the Camel book in a nice, glossy format. The LaTeX (source) and PostScript (ready to print) versions are available for FTP from in Europe or from in the United States. Obsolete versions in TeX or troff may still be available, but these versions don't print as nicely. See also: [] [] [] Johan has also updated and released a reference guide based on version 5.000. This is available from the same places as the 4.036 guide. This version is also available from prep.gnu.ai.mit.edu in the /pub/gnu section along with the perl5 source. It may be added to the standard perl5 distribution sometime after 5.002. If you are using version 5.000, you will want to get this version rather than the 4.036 version. Larry routinely carries around a camel stamp to use when autographing copies of his book. If you can catch him at a conference you can usually get him to sign your book for you. Prentice Hall also has two perl books. The first is ``Perl by Example'' by Ellie Quigley. (385 pages, $26.96, ISBN 0-13-122839-0) A perl tutorial (perl4); every feature is presented via an annotated example and sample output. Reviews of this book have varied widely. Many new perl users have used this book with much success, while many "veteran" programmers have had many complaints about it. The second book is called ``Software Engineering with Perl'' by Carl Dichter and Mark Pease. Randal Schwartz was a technical reviewer for this book and notes this: SEwP is not meant as instruction in the Perl language, but rather as an example of how Perl may be used to assist in the semi-formal software engineering development cycles. There's a lot of Perl code that's fairly well commented, but most of the book describes software engineering methodologies. For the perl-challenged, there's a *light* treatment of the language as well, but they refer to the llama and the camel for the real meat. SAMS Publishing also has a Perl book available, as part of their "Teach Yourself in 21 Days" series, called "Teach Yourself Perl in 21 Days". ISBN 0-672-30586-0 Price: $29.95, 841 Pages. This book is the first book to have a section devoted to version 5.000, although it was written during an alpha stage and may not necessarily reflect current reality. Please note that none of the above books are perfect, all have some inaccurances and typos. The two which Larry is directly associated with (the O'Reilly books) are probably the most technically correct, but also the most dated. Carefully looking over any book you are considering purchasing will save you much time, money, and frustration. Starting in the March, 1995 edition of ``Unix Review''. Randal Schwartz* has been authoring a bi-monthly Perl column. This has so far been an introductory tutorial. Larry Wall has published a 3-part article on perl in Unix World (August through October of 1991), and Rob Kolstad also had a 3-parter in Unix Review (May through July of 1990). Tom Christiansen also has a brief overview article in the trade newsletter Unix Technology Advisor from November of 1989. You might also investigate "The Wisdom of Perl" by Gordon Galligher from SunExpert magazine; April 1991 Volume 2 Number 4. The Dec 92 Computer Language magazine also contains a cover article on Perl, "Perl: the Programmers Toolbox". Many other articles on Perl have been recently published. If you have references, especially on-line copies, please mail them to the FAQ maintainer for inclusion is this notice. The USENIX LISA (Large Installations Systems Administration) Conference have for several years now included many papers of tools written in Perl. Old proceedings of these conferences are available; look in your current issue of ";login:" or send mail to office@usenix.org for further information. Japan seems to be jumping with Perl books. If you can read japanese here are a few you might be interested in. Thanks to Jeffrey Friedl* and Ken Lunde* for this list (NOTE: my screen cannot handle japanese characters, so this is all in English for the moment NOTE2: These books are written in Japanese, these titles are just translations): Title: Welcome to Perl Country (Perl-no Kuni-he Youkoso) Authors: Kaoru Maeda, Hiroshi Koyama, Yasushi Saito and Arihito Fuse Pages: 268+9 Publisher: Science Company Pub. Date: April 25, 1993 ISBN: 4-7819-0697-4 Price: 2472Y Author Email: maeda@src.ricoh.co.jp Comments: Written during the time the Camel book was being translated. A useful introduction, but uses jperl (Japanese Perl) which is not necessarily compatible. Title: How to Write Perl (Perl Shohou) Author: Toshiyuki Masui Pages: 352 Publisher: ASCII Corporation Pub. Date: July 1, 1993 ISBN: 4-7561-0281-6 Price: 3200Y Author Email: masui@shocsl.sharp.co.jp Comments: More advanced than "Welcome.." and not meant as an introduction. Uses the standard perl and has examples for handling Japanese text. Title: Introduction to Perl (Nyuumon Perl) Author: Shinji Kono Pages: 203 Publisher: ASCII Corporation Date: July 11, 1994 ISBN: 4-7561-0292-1 Price: 1800Y Author Email: kono@csl.sony.co.jp Comments: Uses the interactive Perl debugger to explain how things work. Title: Perl Programming Authors: L Wall & R Schwartz Translator: Yoshiyuki Kondo Pages: 637+32 Publisher: Softbank Corporation Pub. Date: February 28, 1993 ISBN: 4-89052-384-7 Price: 4500Y Author Email: cond@lsi-j.co.jp Comments: Official Japanese translation of the Camel book, "Programming Perl". Somewhat laced with translator notes to explain the humour. The most useful book. Also includes the Perl Quick Reference -- in Japanese! 2.3) When will the Camel and Llama books be updated? As of August, 1995, ORA has contracted with Stephen to handle the Camel update. According to the accepted timeline, the first draft is to be finished by the end of April, 1996. The tutorial sections are being cut some, and the book will take on much more of a reference style. Don't worry, it will still contain it's distinctive humor and flair. There are no current plans to update the Llama. For the most part, it serves as a good introduction for both major versions of perl. There may be some minor editing to it, but probably nothing major. If anything, it is more likely that a third book (working title: Learning More Perl) will be written as a tutorial for the new perl5 paradigm. 2.4) What FTP resources are available? Since 1993, several ftp sites have sprung up for Perl and Perl related items. The site with the biggest repository of Perl scripts right now seems to be [128.227.100.198] in /pub/perl. The scripts directory has an INDEX with over 400 lines in it, each describing what the script does. The src directory has sources and/or binaries for a number of different perl ports, including MS-Dos, Macintosh and Windows/NT. This is maintained by the Computing Staff at UF*. Note: European users please use the site src.doc.ic.ac.uk [149.169.2.1] in /pub/computing/programming/languages/perl/ The link speed would be a lot better for all. Contact L.McLoughlin@doc.ic.ac.uk for more information. It is updated daily. There are also a number of other sites. I'll add more of them as I get information on them. [site maintainers: if you want to add a blurb here, especially if you have something unique, please let me know. -spp] The Comprehensive Perl Archive Network (CPAN) is in heavy development. Once the main site and its mirrors are fully operational, this answer will change to reflect it's existance. 2.5) What WWW/gopher resources are available? The World Wide Web is exploding with new Perl sites all the time. Some of the more notable ones are:, which has a great section on Perl5., a great site for European and UK users. 2.6) Can people who don't have access to USENET get comp.lang.perl.misc? "Perl-Users" is the mailing list version of the comp.lang.perl.misc newsgroup. If you're not lucky enough to be on USENET you can post to comp.lang.perl.misc by sending to one of the following addresses. Which one will work best for you depends on which nets your site is hooked into. Ask your local network guru if you're not certain. Internet: PERL-USERS@VIRGINIA.EDU Perl-Users@UVAARPA.VIRGINIA.EDU BitNet: Perl@Virginia uucp: ...!uunet!virginia!perl-users The Perl-Users list is bidirectionally gatewayed with the USENET newsgroup comp.lang.perl.misc. This means that VIRGINIA functions as a reflector. All traffic coming in from the non-USENET side is immediately posted to the newsgroup. Postings from the USENET side are periodically digested and mailed out to the Perl-Users mailing list. A digest is created and distributed at least once per day, more often if traffic warrants. All requests to be added to or deleted from this list, problems, questions, etc., should be sent to: Internet: Perl-Users-Request@Virginia.EDU Perl-Users-Request@uvaarpa.Virginia.EDU BitNet: Perl-Req@Virginia uucp: ...!uunet!virginia!perl-users-request Coordinator: Marc Rouleau <mer6g@VIRGINIA.EDU> 2.7) Are archives of comp.lang.perl.misc available? Yes, there are..*/monthly has an almost complete collection dating back to 12/89 (missing 08/91 through 12/93). They are kept as one large file for each month. A more sophisticated query and retrieval mechanism is desirable. Preferably one that allows you to retrieve article using a fast-access indices, keyed on at least author, date, subject, thread (as in "trn") and probably keywords. Right now, the MH pick command works for this, but it is very slow to select on 18000 articles. If you have, or know where I can find, the missing sections, please let perlfaq@perl.com know. 2.8) Is there a WAIS server for comp.lang.perl.*? Yes there is. Set your WAIS client to archive.orst.edu:9000/comp.lang.perl.*. According to their introduction, they have a complete selection from 1989 on. 2.9) What other sources of information about Perl or training are available? There is a #Perl channel on IRC (Internet Relay Chat) where Tom and Randal have been known to hang out. Here you can get immediate answers to questions from some of the most well-known Perl Hackers. The perl5-porters (perl5-porters@nicoh.com) mailing list was created to aid in communication among the people working on perl5. However, it has overgrown this function and now also handles a good deal of traffic about perl internals. 2.10) Where can I get training classes on Perl? USENIX, LISA, SUG, WCSAS, AUUG, FedUnix and Europen sponsor tutorials of varying lengths on Perl at the System Administration and General Conferences. These public classes are typically taught by Tom Christiansen*. In part, Tom and Randal teach Perl to help keep bread on their tables long enough while they continue their pro bono efforts of documenting perl (Tom keeps writing more man pages for it :-) and expanding the perl toolkit through extension libraries, work which they enjoy doing as it's fun and helps out the whole world, but which really doesn't pay the bills. Such is the nature of free(ly available) software. Send mail to <perlclasses@perl.com> for details and availability. Tom is also available to teach on-site classes, included courses on advanced perl and perl5. Classes run anywhere from one day to week long sessions and cover a wide range of subject matter. Classes can include lab time with exercises, a generally beneficial aspect. If you would like more information regarding Perl classes or when the next public appearances are, please contact Tom directly at 1.303.444.3212. Randal Schwartz* provides a 2-day lecture-only and a 4-5 day lecture-lab course based on his popular book "Learning Perl". For details, contact Randal directly via email or at 1.503.777.0095. Internet One provides a 2 day "Introduction to Perl" and 2 day "Advanced Perl" workshop. The 50% hands-on and 50% lecture format allow attendees to write several programs themselves. Supplied are the user manuals, reference copies of Larry Wall's "Program- ming Perl", and a UNIX directory of all training examples and labs. To obtain outlines, pricing, or scheduling information, use the following: o Phone: 1.303.444.1993 o Email: info@InternetOne.COM o See our Ad in the "SysAdmin" magazine o View the outlines via the Web: 2.11) What companies use or ship Perl? At this time, the known list of companies that ship Perl includes at least the following, although some have snuck it into /usr/contrib or its moral equivalent: BSDI Comdisco Systems CONVEX Computer Corporation Crosspoint Solutions Data General Dell DRD Corporation IBM (SP systems) Intergraph Kubota Pacific Netlabs SGI (without taintperl) Univel Some companies ship it on their "User Contributed Software Tape", such as DEC and HP. Apple Computer has shipped the MPW version of Macintosh Perl on one of their Developer CDs (Essentials*Tools*Objects #11) (and they included it under "Essentials" :-) Many other companies use Perl internally for purposes of tools development, systems administration, installation scripts, and test suites. Rumor has it that the large workstation vendors (the TLA set) are seriously looking into shipping Perl with their standard systems "soon". People with support contracts with their vendors are actively encouraged to submit enhancement requests that Perl be shipped as part of their standard system. It would, at the very least, reduce the FTP load on the Internet. :-) If you know of any others, please send them in. 2.12) Is there commercial, third-party support for Perl? Not really. Although perl is included in the GNU distribution, at last check, Cygnus does not offer support for it. However, it's unclear whether they've ever been offered sufficient financial incentive to do so. Feel free to try. On the other hand, you do have comp.lang.perl.misc as a totally gratis support mechanism. As long as you ask "interesting" questions, you'll probably get plenty of help. :-) While some vendors do ship Perl with their platforms, that doesn't mean they support it on arbitrary other platforms. And in fact, all they'll probably do is forward any bug reports on to Larry. In practice, this is far better support than you could hope for from nearly any vendor. If you purchase a product from Netlabs (the company Larry works for), you actually can get a support contract that includes Perl. The companies who won't use something unless they can pay money for it will be left out. Often they're motivated by wanting someone whom they could sue. If all they want is someone to help them out with Perl problems, there's always the net. And if they really want to pay someone for that help, well, any of a number of the regular Perl "dignitaries" would appreciate the money. ;-) If companies want "commercial support" for it badly enough, speak up -- something might be able to be arranged. 2.13) What is a JAPH? What does "Will hack perl for ..." mean? These are the "just another perl hacker" signatures that some people sign their postings with. About 100 of the of the earlier ones are available from the various FTP sites. When people started running out of tricky and interesting JAPHs, some of them turned to writing "Will hack perl for ..." quotes. While sometimes humourous, they just didn't have the flair of the JAPHs and have since almost completely vanished. 2.14) Where can I get a list of Larry Wall witticisms? Over a hundred quips by Larry, from postings of his or source code, can be found in many of the FTP sites or through the World Wide Web at "" 2.15) What are the known bugs? This is *NOT* a complete list, just some of the more common bugs that tend to bite people. 5.001: op.c: Inconsistent parameter definition for pad_findlex - fixed in 5.001a, get development patches a-l. walk.c: redeclaration of emit_split - fixed in perl5.001a, get development patches a-l. On linux systems "make test" fails on "op/exec Failed test 5". This is a known bug with bash, not perl. You can get a new version of bash. Also on linux systems, "make test" hangs on lib/anydbm if you include NDBM in the extentions. Do not include NDBM. Another linux problem is getting Dynamic Loading to work. You must use dld-2.3.6 (the newest version at the time of writing) to use Dynamic Loading. - All versions of h2ph previous to the one supplied with perl5.001 tended to generate improper header files. Something such as: #if __GNUC__ was incorrectly translated into if ( &__GNUC__ ) { instead of if ( defined(&__GNUC__) ? &__GNUC__ : 0 ) { Perl5 binaries compiled on SunOS4 exhibit strange behaviour on SunOS5. For example, backticks do not work in the scripts. You need to compile perl for both architectures, even with Binary Compatibility. 2.16) Where should I post bugs? Before posting about a bug, please make sure that you are using the most recent versions of perl (currently 4.036 and 5.001) available. Please also check at the major archive sites to see if there are any development patches available (usually named something like perl5.001a.patch or patch5.001a - the patch itself, or perl5.001a.tar.gz - a prepatched distribution). If you are not using one of these versions, chances are you will be told to upgrade because the bug has already been fixed. If you are reporting a bug in perl5, the best place to send your bug is <perlbug@perl.com>, which is currently just an alias for <perl5-porters@nicoh.com>. In the past, there have been problems with the perlbug address. If you have problems with it, please send your bug directly to <perl5-porters@nicoh.com>. You may subscribe to the list in the customary fashion via mail to <perl5-porters-request@nicoh.com>. Feel free to post your bugs to the comp.lang.perl.misc newsgroup as well, but do make sure they still go to the mailing list. If you are posting a bug with a non-Unix port, a non-standard Module (such as Tk, Sx, etc) please see the documentation that came with it to determine the correct place to post bugs. To enhance your chances of getting any bug you report fixed: 1. Make sure you are using a production version of perl. Alpha and Beta version problems have probably already been reported and fixed. 2. Try to narrow the problem down to as small a piece of code as possible. If you can get it down to 1 line of Perl then so much the better. 3. Include a copy of the output from the myconfig script from the Perl source distribution in your posting. 2.17) Where should I post source code? You should post source code to whichever group is most appropriate, but feel free to cross-post to comp.lang.perl.misc. If you want to cross-post to alt.sources, please make sure it follows their posting standards, including setting the Followups-To header line to NOT include alt.sources; see their FAQ for details. 2.18) Where can I learn about object-oriented Perl programming? The perlobj(1) man page is a good place to start, and then you can check out the excellent perlbot(1) man page written by the dean of perl o-o himself, Dean Roehrich. Areas covered include the following: Idx Subsections in perlobj.1 Lines 1 NAME 2 2 DESCRIPTION 16 3 An Object is Simply a Reference 60 4 A Class is Simply a Package 31 5 A Method is Simply a Subroutine 34 6 Method Invocation 75 7 Destructors 14 8 Summary 7 Idx Subsections in perlbot.1 Lines 1 NAME 2 2 INTRODUCTION 9 3 Instance Variables 43 4 Scalar Instance Variables 21 5 Instance Variable Inheritance 35 6 Object Relationships 33 7 Overriding Superclass Methods 49 8 Using Relationship with Sdbm 45 9 Thinking of Code Reuse 111 The section on instance variables should prove very helpful to those wondering how to get data inheritance in perl. 2.19) Where can I learn about linking C with Perl? [h2xs, xsubpp] While it used to be deep magic, how to do this is now revealed in the perlapi(1), perlguts(1), and perlcall(1) man pages, which treat with this matter extensively. You should also check the many extensions that people have written (see question 1.19), many of which do this very thing. 2.20) What is perl.com? Perl.com is just Tom's domain name, registered as dedicated to "Perl training and consulting". While not a full ftp site (he hasn't got the bandwidth (yet)), it does have some interesting bits, most of which are replicated elsewhere. It serves as a clearinghouse for certain perl related mailing lists. The following aliases work: perl-packrats: The archivist list perl-porters: The porters list perlbook: The Camel/Llama/Alpaca writing committee perlbugs: The bug list (perl-porters for now) perlclasses: Info on Perl training perlfaq: Submissions/Errata to the Perl FAQ (Tom and Steve) perlrefguide: Submissions/Errata to the Perl RefGuide (Johan) 2.21) What do the asterisks (*) throughout the FAQ stand for? To keep from cluttering up the FAQ and for easy reference all email addresses have been collected in this location. For each person listed, I offer my thanks for their input and help. * Larry Wall <lwall@netlabs.com> * Tom Christiansen <tchrist@wraeththu.cs.colorado.edu> * Stephen P Potter <spp@pencom.com> * Andreas Koenig <k@franz.ww.TU-Berlin.DE> * Bill Eldridge <bill@cognet.ucla.edu> * Buzz Moschetti <buzz@bear.com> * Casper H.S. Dik <casper@fwi.uva.nl> * David Muir Sharnoff <muir@tfs.com> * Dean Roehrich <roehrich@ironwood.cray.com> * Dominic Giampaolo <dbg@sgi.com>, * Frederic Chauveau <fmc@pasteur.fr> * Gene Spafford <spaf@cs.purdue.edu> * Guido van Rossum <guido@cwi.nl> * Henk P Penning <henkp@cs.ruu.nl> * Jeff Friedl <jfriedl@omron.co.jp> * Johan Vromans <jvromans@squirrel.nl> * John Dallman <jgd@cix.compulink.co.uk> * John Lees <lees@pixel.cps.msu.edu> * John Ousterhout <ouster@eng.sun.com> * Jon Biggar <jon@netlabs.com> * Ken Lunde <lunde@mv.us.adobe.com> * Malcolm Beattie <mbeattie@sable.ox.ac.uk> * Matthias Neeracher <neeri@iis.ee.ethz.ch> * Michael D'Errico <mike@software.com> * Nick Ing-Simmons <Nick.Ing-Simmons@tiuk.ti.com> * Randal Schwartz <merlyn@stonehenge.com> * Roberto Salama <rs@fi.gs.com> * Steven L Kunz <skunz@iastate.edu> * Theodore C. Law <TEDLAW@TOROLAB6.VNET.IBM.COM> * Thomas R. Kimpton <tom@dtint.dtint.com> * Timothy Murphy <tim@maths.tcd.ie> * UF Computer Staff <consult@cis.ufl.edu> --
http://www.faqs.org/faqs/perl-faq/part2/
crawl-002
refinedweb
4,425
66.64
--- Hi Guys, i have implemented basic support for the backlight function on OpenBSD. The problem here is that /dev/ttyC0 permission is 600 (root:wheel) so a user cannot read from the device without changing the permission or running slstatus as root. Anyway, this is just an proposal. Hopefully there is another solution to solve this. I know that we could also do this by linking against xcb-xrandr (like xbacklight does) but i think we should not do this for now. Greetings, Tobias components/backlight.c | 26 ++++++++++++++++++++++++++ config.def.h | 1 + 2 files changed, 27 insertions(+) diff --git a/components/backlight.c b/components/backlight.c index f9c4096..21e06a1 100644 --- a/components/backlight.c +++ b/components/backlight.c _AT_@ -29,4 +29,30 @@ return bprintf("%d", cur * 100 / max); } + +#elif defined(__OpenBSD__) + #include <fcntl.h> + #include <sys/ioctl.h> + #include <sys/time.h> + #include <dev/wscons/wsconsio.h> + + const char * + backlight_perc(const char *unused) + { + int fd, err; + struct wsdisplay_param wsd_param = { + .param = WSDISPLAYIO_PARAM_BRIGHTNESS + }; + + if ((fd = open("/dev/ttyC0", O_RDONLY)) < 0) { + warn("could not open /dev/ttyC0"); + return NULL; + } + if ((err = ioctl(fd, WSDISPLAYIO_GETPARAM, &wsd_param)) < 0) { + warn("ioctl 'WSDISPLAYIO_GETPARAM' failed"); + return NULL; + } + return bprintf("%d", wsd_param.curval * 100 / wsd_param.max); + } + #endif diff --git a/config.def.h b/config.def.h index 75debe5..3a0f838 100644 --- a/config.def.h +++ b/config.def.h _AT_@ -14,6 +14,7 @@ static const char unknown_str[] = "n/a"; * * backlight_perc backlight percentage device name * (intel_backlight) + * NULL on OpenBSD * battery_perc battery percentage battery name (BAT0) * NULL on OpenBSD * battery_state battery charging state battery name (BAT0) -- 2.16.2Received on Wed May 23 2018 - 20:41:17 CEST This archive was generated by hypermail 2.3.0 : Wed May 23 2018 - 20:48:24 CEST
http://lists.suckless.org/hackers/1805/16358.html
CC-MAIN-2019-47
refinedweb
287
53.27
Build a Secure SPA with React Routing When building an SPA (single page application) with React, routing is one of the fundamental processes a developer must handle. React routing is the process of building routes, determining the content at the route, and securing it under authentication and authorization. There are many tools available to manage and secure your routes in React. The most commonly used one is react-router. However, many developers are not in a situation where they can use the react-router library. Because of this, they may need to use Reach Router, Wouter, or maybe even no router at all. This tutorial will show you how to quickly build a secure SPA using React, Okta, and Wouter. Okta easily enables you to manage access to your SPAs (or any application for that matter). By using the @okta/okta-react library you can quickly build secure applications with React. At the time of writing this article, reach-router does not support React version 17+. Prerequisites: Table of Contents - Create an Okta OIDC application - Create a React application with routing - Run your React application - Learn more about React Create an Okta. Once Okta is done processing your request it will return an Issuer and a Client ID. Make sure you note these, as you will need them in your application. Create a React application with routing Next, open your favorite IDE (I use Visual Studio Code) and use the task runner npx create-react-app react-routing-demo to scaffold your React project. You’ll be prompted to install create-react-app. Type y to approve. This process takes a minute, so grab a cup of coffee or tea if you prefer. The task runner does a great job of getting your application started quickly, but you will need to add and edit a few of the files. Before you begin, you will need to install some packages. First, of course, wouter. cd react-routing-demo npm i wouter@2.7.5 You will use Bootstrap along with the react-bootstrap library to style your app. npm i bootstrap@5.1.3 npm i react-bootstrap@1.6.4 Next, you will get dotenv as a way of storing your sensitive Okta information. npm i dotenv@10.0.0 Finally, you will add the Okta React SDK to your project. npm i @okta/okta-react@6.2.0 @okta/okta-auth-js@5.6.0 With these dependencies installed, you can begin to create your app. First, add a new file called .env to the root of your project, and add the following items. Note here that the REACT_APP_OKTA_ISSUER should match the issuer from the Okta CLI. REACT_APP_OKTA_CLIENTID={yourClientId} REACT_APP_OKTA_APP_BASE_URL= REACT_APP_OKTA_ISSUER= Next, add a folder for Components and one for Pages. mkdir src/Components mkdir src/Pages In Components add a new file for Header.jsx and add the following code. import React from "react"; import { Navbar, Nav, Form, Button } from "react-bootstrap"; const Header = ({ authState, oktaAuth }) => { if (authState?.isPending) { return <div>Loading...</div>; } const button = authState?.isAuthenticated ? ( <Button variant="secondary" onClick={() => { oktaAuth.signOut("/"); }} > Logout </Button> ) : ( <Button variant="secondary" onClick={() => { oktaAuth.signInWithRedirect(); }} > Login </Button> ); return ( <Navbar bg="light" expand="lg"> <Navbar.BrandReact Routing</Navbar.Brand> <Navbar.Toggle <Navbar.Collapse <Nav className="mr-auto"></Nav> <Form inline>{button}</Form> </Navbar.Collapse> </Navbar> ); }; export default Header; This component displays a login button for your users. This button turns into a logout button once the user authenticates. The component also provides a place to access the Profile page you will create. Add Profile.jsx to the Pages folder. The code for this file is as follows. import React, { useEffect } from "react"; import { Container } from "react-bootstrap"; import { useOktaAuth } from "@okta/okta-react"; import Header from "../Components/Header"; const Profile = () => { const { authState, oktaAuth } = useOktaAuth(); useEffect(() => { async function authenticate() { if (!authState) return; if (!authState.isAuthenticated) { await oktaAuth.signInWithRedirect(); } } authenticate(); }, [authState, oktaAuth]); if (!authState?.isAuthenticated) { return ( <Container> <p>Please wait while we sign you in</p> </Container> ); } else { return ( <Container> <Header authState={authState} oktaAuth={oktaAuth}></Header> <h4>Your profile page</h4> <p>Welcome to your profile page </p> </Container> ); } }; export default Profile; This page leverages the useOktaAuth hook to determine if the user is logged in. If the user is not logged in, then you will prompt them to log in with Okta. Otherwise, you will display a brief profile page. Finally, add Home.jsx to the Pages folder with the following code. import React from "react"; import { Link, Redirect } from "wouter"; import Header from "../Components/Header"; import { Container, Row, Col, Card } from "react-bootstrap"; import { useOktaAuth } from "@okta/okta-react"; const Home = () => { const { authState, oktaAuth } = useOktaAuth(); return authState?.isAuthenticated ? ( <Redirect to="/Profile" /> ) : ( <Container> <Header authState={authState} oktaAuth={oktaAuth}></Header> <Row> <Col sm={12} <h3>React routing Demo</h3> <h5> A <a href=" Demo using{" "} <a href=" Secured by{" "} <a href=" </h5> <p> A tutorial written by{" "} <a href=" Fisher</a> </p> </Col> </Row> <br></br> <Row> <Col sm={12} <Card style={{ width: "21.5em", margin: "0 auto" }}> <Card.Header>Already have an Okta Account?</Card.Header> <Card.Body> <Link to="Profile">Login Here</Link> </Card.Body> </Card> </Col> </Row> </Container> ); }; export default Home; This page uses the Redirect component from wouter combined with the useOktaAuth hook to redirect authenticated users to the profile page. The page also serves as a landing page with some additional information about your application. Finally, you will need to update your App.js file with the following code. import React from "react"; import "./App.css"; import { Router, Route } from "wouter"; import { Security, LoginCallback } from "@okta/okta-react"; import { OktaAuth, toRelativeUrl } from "@okta/okta-auth-js"; import Home from "./Pages/Home"; import Profile from "./Pages/Profile"; import "bootstrap/dist/css/bootstrap.min.css"; const issuer = process.env.REACT_APP_OKTA_ISSUER; const clientId = process.env.REACT_APP_OKTA_CLIENTID; const redirect = process.env.REACT_APP_OKTA_APP_BASE_URL + "/callback"; class App extends React.Component { constructor(props) { super(props); this.oktaAuth = new OktaAuth({ issuer: issuer, clientId: clientId, redirectUri: redirect, }); this.restoreOriginalUri = async (_oktaAuth, originalUri) => { window.location.replace( toRelativeUrl(originalUri || "/", window.location.origin) ); }; } render() { return ( <Router> <Security oktaAuth={this.oktaAuth} restoreOriginalUri={this.restoreOriginalUri} > <Route path="/" component={Home} /> <Route path="/callback" component={LoginCallback} /> <Route path="/Profile" component={Profile} /> </Security> </Router> ); } } export default App; Version 6.0 of the @okta/okta-react library includes a few changes to be aware of, depending on which version of the package you most recently used. First, you must provide the restoreOriginalUri property in the <Security> component. If you were previously using version 4.x, you will notice that you need to inject the OktaAuth object into the Security component rather than the issuer, client ID, and other information. You can read more about these changes on Okta’s React GitHub page. Here you are also importing the bootstrap CSS. Finally, you are setting up the routing by wrapping the wouter Router in Okta’s Security component. This will give your routes access to the Okta API. Run your React application Your application is now complete. You can run it using npm start and navigate to From there you click the Login button, which will display the Okta login screen. Once you’ve logged in successfully you are redirected to the Profile page. Learn more about React With Okta React v6.0+, you no longer need access to the react-router package to set up routing in your React application. This version should make it easier and cleaner to set up Reach Router, Wouter, or to bypass using a router altogether. You can find the example code from this tutorial in the oktadev/okta-react-router-example project on GitHub. If you liked this tutorial, you might like these others: - Build a Simple React Application Using Hooks - Quickly Consume a GraphQL API from React - Build Reusable React Components Make sure you follow us on Twitter and subscribe to our YouTube channel. If you have any questions, or you want to share what tutorial you’d like to see next, please comment below. Okta Developer Blog Comment Policy We welcome relevant and respectful comments. Off-topic comments may be removed.
https://developer.okta.com/blog/2021/11/01/react-routing
CC-MAIN-2022-21
refinedweb
1,341
51.04
Let users recover a deleted file without admin intervention by aliasing the rm command with mv or by writing your own script that moves the data to another location. Autonomous File Recovery Have you ever deleted a file and immediately thought, “Ah! I needed that!”? There are some great stories about users who store data in filesystems that they know are not backed up, manage to erase all their data, and then start yelling that they need to have their data back – now! My favorite story was about a well-known university researcher who was storing data in a filesystem that was mounted with no_backup in the path name. Although warned several times that the data was not backed up, he still managed to erase his data, causing a fire drill right around Christmas time. Although I hope this is a rare occurrence, there must be some way to help users who do their very best to shoot themselves in the foot. Users aren’t the only ones who can suffer from this problem. Admins also remove files from systems in the interest of cleaning up cruft, only to find out those files were important. Begin at the Beginning Coming up with ideas to help users recover data is part of an age-old question. One admin friend described this as, "How do we keep the users from hurting themselves?" As an engineer, I like to look for solutions to problems, so I started examining the data recovery request from several angles. Perhaps this problem is looking for more of a policy solution. Or perhaps it is a problem requiring a technical solution. Or is it both? Policy Problem? To gain a better perspective on this issue, I spoke with a number of admin friends, and one of the common themes I encountered during my discussions was that whatever policies were created and communicated by administrators, upper management sometimes intervened and countered the policies. This situation was infrequent, typically for a critical project and a critical piece of data, but more often than not, it followed the old adage “the squeaky wheel gets the grease.” Although I didn’t conduct a scientific survey, one thing I did notice was that when the problem made it to the upper levels of management, managers were not aware of the policies that had been set and publicized. To my mind, this pointed to one possible solution to the problem – developing policies in conjunction with management while addressing the resource implications. To begin the development of policies, one should assume that users will erase data by accident and need it restored. The resource implications of this assumption can be quantified on the basis of past experience, data growth, and other factors. For example, will it require additional staff? Will it require additional hardware? Then the conclusions are presented to management. During the discussion, management should be made aware of the effect of restoring or recovering data and the effect of users erasing data before a “management approved” policy is established and communicated to the user base and any changes to resources is resolved. This approach can help alleviate the problem because management is fully aware of the resource implications of the final decision and users realize a policy is in place. The subtle context is that the entire management hierarchy is now aware of the policies so that the “squeaky wheel” approach will (should) have little effect on operations (although there will always be exceptions). Technical Solutions I first started using Unix on a VAX in 19.., er, a long time ago. We didn’t have much disk space, but perhaps more importantly, it was a new operating system to everyone. Using Unix, one could interactively log in to a system while sitting in front of a terminal rather than submit jobs to the big mainframe. This shift in how we worked meant an associated period of learning. Because disk space was at such a premium, one of the things people had to learn was how to erase files when they were finished, including how to use the deadly options for rm: -r, -f, and -rf *. To help people during the adjustment period, the administrators “aliased” the rm command so that, when used, the data was actually moved to a temporary disk location instead of being erased. Then, if you had an “oops” moment, you could go to the directory at the new location and recover the files yourself. If you didn’t know the location of the “erased” files, a quick email to the administrator would allow them to recover the files for you. Because disk space was expensive, the data only lived in the temporary disk location for a certain period of time and then was removed permanently. This approach saved my bacon on several occasions. Why not bring back this practice? You could alias the rm command to something else (e.g., mv) so that the data is not actually erased, but moved to a different location. Or, you could write a simple script that moves the data to a different location, from which users could then copy the data back if needed. For either solution, a cron job or a daemon can be used to erase the files in the “special” directory based on some policies (e.g., oldest files are erased if a certain usage level is reached – the “high water” mark, or if the files have reached a certain age). Of course, it takes disk resources to do this because you need a target location for storing the data, but that can be part of resource planning, as discussed in the previous section on policies. Alias rm with mv If you want to try to alias the rm command with mv, the first step is to read the rm man page. A few key rm options are shown in Table 1. The rm command takes the form: rm [OPTION]... FILE... In my experience, some of these options are used fairly rarely, but to maintain compatibility with the original command, all of the options need to be considered. Table 1: Key rm Options Next, you should examine the mv man page. It, too, has a few key options (Table 2). The mv command takes the forms: mv [OPTION]... [-T] SOURCE DEST mv [OPTION]... SOURCE... DIRECTORY mv [OPTION]... -t DIRECTORY SOURCE... Table 2: Key mv Options As an admin, you have to make a decision about whether it’s possible simply to alias rm with mv. You have to be prepared for users that apply some of the lesser used options, and you should be prepared to tell users that the classic rm does not exist but has been aliased to mv. Scripting Using mv as a substitute for rm is not a perfect solution. Some corner cases will likely cause problems. For example, when a user removes a file using the aliased rm command, it is copied to the temporary disk storage and could be recovered. If the user then creates a new file with the exact same name and then removes that file, the first file on the temporary storage would be overwritten. Perhaps this is acceptable, perhaps it is not. It could be part of the policy. By writing your own script, you can precisely define what you want to happen when a user “removes” a file. You could incorporate versioning so that the user wouldn't overwrite previously removed files. You could couple a cron job with the script, so it cleans the temporary directory by, for example, sweeping files out of the temporary directory when they reach a certain age or if they are very large. As you write the code, be sure you take into consideration the properties of the file being removed that should be kept. At a minimum, you probably want to keep the file name, the user/group, and the file permissions. You might also want to keep the three dates of the file. As mentioned previously, you might want to add versioning to the file name, so multiple file removals could be stored in the temp directory; however, be careful, because this will change the file name. It’s also highly recommended to keep some sort of log of what the script does. Although this might sound obvious, you would be surprised how many admins do not keep good logs of what is happening on their systems. The logs should include any cron job you use periodically to sweep the temporary directory. Be a lumberjack. This approach cannot help you save files that applications erase or remove as part of their processing. When this happens, the only real alternative is to have a second copy of the data somewhere. This scenario should be brought to the attention of management so that policies can be developed (e.g., having two copies of all data at all times, or telling users that there is no recourse if this happens). Extended File Attributes With modern filesystems, one key aspect that must be considered for moving or copying a file is extended attributes. Extended File Attributes (EFAs) allow you to add metadata to files beyond what is normally there. A simple example is: $ setfattr -n user.comment -v "this is a comment" test.txt In this example, the user is adding a comment to the user namespace of the EFAs and is labeling the text comment (i.e., user.comment). The comment is this is a comment, which you can find by using the getfattr command. In aliasing the rm command or writing your own script, at some point you will need to address EFAs by making it part of your policies that management endorses. Do you copy them along with the file or not? Personally, I think you should copy the EFAs along with the file itself, but that decision is up to you in consultation with users and management. Backups One thing I haven’t touched on yet are backups. They can be beautiful things that save your bacon, but they are somewhat limited, as I’m sure all administrators are aware. Backups happen at certain intervals, whether full or incremental. In between backups, users, scripts, and programs create, change, and remove data that backups miss. Backups might be able to restore data, but only if the data has, in fact, been backed up. Also, how many times have administrators tried to restore a file from backup only to discover that the backup failed or the media, most likely tape, is corrupt? Fortunately, this scenario is becoming more rare, but it still happens. (Be sure to check your backup logs and test a data restoration process on a small amount of data every so often). Backups can help with data recovery, but they are not perfect. Moreover, given the size of filesystems, it might be impossible to do full backups, at least in an economical way. You might be restricted to performing a single full backup when the system is first installed and then doing incremental backups for the life of the system, which even for petabyte-size filesystems could be very difficult to accomplish and might require more hardware than can be afforded. Using backups in combination with the options discussed here can offer some data protection and perhaps reduce the likelihood that users will hurt themselves.
http://www.admin-magazine.com/HPC/Articles/User-File-Recovery
CC-MAIN-2019-22
refinedweb
1,896
60.75
Given a shader, it’s not too hard to find all materials that use (aka “own”) an instance of that shader. Here’s a Python snippet that does just that. Note that I don’t check whether or not the shader is actually used. This snippet finds all instances, whether they are used or not (last week I posted another snippet for checking whether a shader instances was ultimately connected to the material). from sipyutils import si # win32com.client.Dispatch('XSI.Application') from sipyutils import disp # win32com.client.Dispatch from sipyutils import C # win32com.client.constants si = si() def get_materials_that_use_shader( s ): mats = disp( "XSI.Collection" ) oShaderDef = si.GetShaderDef( s.ProgID ) for i in oShaderDef.ShaderInstances: try: mats.Add( i.Owners(0) ) except Exception: pass mats.Unique = True return mats # # Find all materials that use a specific shader # s = si.Selection(0) if s.IsClassOf( C.siShaderID ): mats = get_materials_that_use_shader( s ) for m in mats: print( "%s in %s" % (m.Name, m.Owners(0)) ) else: si.LogMessage( "Cannot find shader instances. Please select a shader." ) # Material in Sources.Materials.DefaultLib # Material1 in Sources.Materials.DefaultLib # Material2 in Sources.Materials.DefaultLib # Material3 in Sources.Materials.DefaultLib # Material7 in Sources.Materials.DefaultLib # Material6 in Sources.Materials.DefaultLib # Material5 in Sources.Materials.DefaultLib # Material4 in Sources.Materials.DefaultLib I’m eager to try this script. I gave the ProgID of the shader I’m looking for I suppose in the first part of the script; oShaderDef = si.GetShaderDef( “Softimage.BA_color_switcher.1.0” ) Second Part of the script I entered in the material I want to search for in all objects, in my case all the objects that are within a model object. mats = get_materials_that_use_shader( “material.BA_color_switcher1” ) The end result an error; # ERROR : Traceback (most recent call last): # File “”, line 25, in # for m in mats: # NameError: name ‘mats’ is not defined # – [line 25] You need to select a shader and then run the script. I updated the script a bit to deal better with non-shader objects. All the materials in my model are selected. I then open the render tree, select the shader node that I want the script to find within all the materials in the model, the script gives me an error “Cannot find shader instance”. What exactly is a shader instance, probably simple ! I assume the script won’t check if it’s actually plugged into the material or simple lying randomly in the render tree, unplugged. Hi Select a shader in the Explorer, then run the script. This script doesn’t check whether or not the shader instance (shader instance = a node in a render tree) is connected. I originally selected a shader node in explorer, ran the script, there was no result, I assumed I did something wrong.
https://xsisupport.com/2013/07/23/scripting-finding-all-materials-that-contain-a-specific-shader/
CC-MAIN-2022-27
refinedweb
459
52.46
1 /* 2 * Copyright (c) 1996, /** 30 * Convenience class for reading character files. The constructors of this 31 * class assume that the default character encoding and the default byte-buffer 32 * size are appropriate. To specify these values yourself, construct an 33 * InputStreamReader on a FileInputStream. 34 * 35 * <p><code>FileReader</code> is meant for reading streams of characters. 36 * For reading streams of raw bytes, consider using a 37 * <code>FileInputStream</code>. 38 * 39 * @see InputStreamReader 40 * @see FileInputStream 41 * 42 * @author Mark Reinhold 43 * @since JDK1.1 44 */ 45 public class FileReader extends InputStreamReader { 46 47 /** 48 * Creates a new <tt>FileReader</tt>, given the name of the 49 * file to read from. 50 * 51 * @param fileName the name of the file to read from 52 * @exception FileNotFoundException if the named file does not exist, 53 * is a directory rather than a regular file, 54 * or for some other reason cannot be opened for 55 * reading. 56 */ 57 public FileReader(String fileName) throws FileNotFoundException { 58 super(new FileInputStream(fileName)); 59 } 60 61 /** 62 * Creates a new <tt>FileReader</tt>, given the <tt>File</tt> 63 * to read from. 64 * 65 * @param file the <tt>File</tt> to read from 66 * @exception FileNotFoundException if the file does not exist, 67 * is a directory rather than a regular file, 68 * or for some other reason cannot be opened for 69 * reading. 70 */ 71 public FileReader(File file) throws FileNotFoundException { 72 super(new FileInputStream(file)); 73 } 74 75 /** 76 * Creates a new <tt>FileReader</tt>, given the 77 * <tt>FileDescriptor</tt> to read from. 78 * 79 * @param fd the FileDescriptor to read from 80 */ 81 public FileReader(FileDescriptor fd) { 82 super(new FileInputStream(fd)); 83 } 84 85 }
http://checkstyle.sourceforge.net/reports/javadoc/openjdk8/xref/openjdk/jdk/src/share/classes/java/io/FileReader.html
CC-MAIN-2018-13
refinedweb
285
58.21
Swiftfall Swiftfall is a wrapper written in Swift for the API Scryfall. Documentation for Scryfall API. Scryfall is API which handles information about the card game Magic: The Gathering. Swiftfall Documentation Types All types are Structs and can be reach through a Swiftfall.get*(). Types That Hold Data Card - Struct containing data about a Magic Card. - Contains the Card.Face Struct - Some Cards have faces, Card.Face contains those faces. ScryfallSet - Struct containing data about a Set of Magic cards. - named ScryfallSet due to Set already existing in Swift. Ruling - Struct containing data about a Magic Card's rulings. Catalog - Struct containing data about Magic. - Example: "land-types" Structs which contain Arrays of Types CardList - Struct containing a list of Cards. SetList - Struct containing a list of ScryfallSets. RulingList - Struct containing a list of Rulings. Functions These are some functions you can call which will handle information from Scryfall's API. Get a Card Swiftfall.getCard(fuzzy:String) throws -> Card (Fuzzy search) Swiftfall.getCard(exact:String) throws -> Card (Exact search) Swiftfall.getCard(code: String, number: Int) throws -> Card (Set Code, ID Number) Swiftfall.getRandomCard() throws -> Card (Random Card) ... and more! Ex. import Swiftfall do { let card = try Swiftfall.getCard(exact:"Black Lotus") print(card) } catch { print(error) } Out. Name: Black Lotus Cost: {0} Type Line: Artifact Oracle Text: {T}, Sacrifice Black Lotus: Add three mana of any one color to your mana pool. Double-Sided Cards Ex. import Swiftfall do { let card = try Swiftfall.getCard(exact:"Jace, Vryn's Prodigy") let faces = card.cardFaces let front = faces![0] let back = faces![1] print(front) print(back) } catch { print(error) } Out. Name: Jace, Vryn's Prodigy Cost: {1}{U} Type Line: Legendary Creature — Human Wizard Oracle Text: {T}: Draw a card, then discard a card. If there are five or more cards in your graveyard, exile Jace, Vryn's Prodigy, then return him to the battlefield transformed under his owner's control. Power: 0 Toughness: 2 Name: Jace, Telepath Unbound Cost: Type Line: Legendary Planeswalker — Jace Oracle Text: his or her library into his or her graveyard." Loyalty: 5 Get a list of Cards Swiftfall.getCardList() throws -> CardList (The first page) Swiftfall.getCardList(page:Int) throws -> CardList (Loads a specific page) Ex. import Swiftfall do { let cardlist = try Swiftfall.getCardList(page:0) // this is the same as .getCardList() print(cardlist) } catch { print(error) } Get a ScryfallSet Swiftfall.getSet(code:String) throws -> Set (String must be a three letter code) Ex. import Swiftfall do { let set = try Swiftfall.getSet(code: "KTK") print(set) } catch { print(error) } Out. Name: Khans of Tarkir (ktk) Block: Khans of Tarkir Number of Cards: 269 Release Date: 2014-09-26 Set Type: expansion Get a list of ScryfallSets Swiftfall.getSetList() throws -> SetList (All Sets) Ex. import Swiftfall do { let setlist = try Swiftfall.getSetList() print(setlist) } catch { print(error) } Get a list of Rulings Swiftfall.getRulingList(code:String,number:Int) throws -> RulingList Ex. import Swiftfall do { let rulings = try Swiftfall.getRulingList(code: "ima", number: 65) print(rulings) } catch { print(error) } Get a Ruling To get a specific ruling you must first get a Ruling List. Once you have a RulingList you may call .data[index: Int] Ex. import Swiftfall do { let rulings = try Swiftfall.getRulingList(code: "ima", number: 65) let ruling = rulings.data[1] print(ruling) } catch { print(error) } Get a Catalog Catalog objects are provided by the API as aids for building other Magic software and understanding possible values for a field on Card objects. Ex. import Swiftfall do { let catalog = try Swiftfall.getCatalog(catalog: "land-types") print(catalog) } catch { print(error) } Out. Desert Forest Gate Island Lair Locus Mine Mountain Plains Power-Plant Swamp Tower Urza’s Testing Testing allows for us to check certain scenarios quickly and determine the problems in a easy to understand and grasp manner. Example Ex. func testRandomCard(){ do { _ = try Swiftfall.getRandomCard() } catch { print(error) XCTFail() } } How to set up Swiftfall First, create an executable package. The executable includes a Hello World function by default. $ mkdir MyExecutable $ cd MyExecutable $ swift package init --type executable $ swift build $ swift run Hello, World! $ swift package generate-xcodeproj Then, set Swiftfall as a dependency for the executable. import PackageDescription let package = Package( name: "MyExecutable", dependencies: [ // Dependencies declare other packages that this package depends on. // .package(url: /* package url */, from: "1.0.0"), .package(url:"", from: "1.2.0") ], targets: [ // Targets are the basic building blocks of a package. A target can define a module or a test suite. // Targets can depend on other targets in this package, and on products in packages which this package depends on. .target( name: "MyExecutable", dependencies: ["Swiftfall"]), ] ) Then, run: $ swift package generate-xcodeproj Now you're ready to use Swiftfall! If you are interested in checking out a project using Swiftfall you can checkout: Catalog Examples card-names word-bank creature-types planeswalker-types land-types spell-types artifact-types powers toughnesses loyalties watermarks Github Help us keep the lights on Dependencies Used By Total: Releases 1.5.2 - Jul 2, 2019 camelCase is now supported throughout the library. variables like "represent_mana" and now "representMana" 1.5.1 - Jul 2, 2019 Fixed the issues caused by deprecation. 1.5 - Jul 2, 2019 Bug Fixes: Remove missing properties that were deprecated. You can find what was deprecated in the Scryfall API here: Thanks to @naknut for notifying me of the deprecation and making the changes. 1.4.1 - Mar 7, 2018 - parsing is even cleaner - printing works more consistently - fractional mana costs fix - documentation is more accurate 1.4.0 - Mar 6, 2018 - Card.CardFace is now Card.Face - All get*() now return throws -> * - parse* now is parseResource - ResultType handles whether a call failed or succeeded. - print() now works on all types - simplePrint() doesn't exist do { let card = try Swiftfall.getRandomCard() print(card) } catch { ... }
https://swiftpack.co/package/bmbowdish/Swiftfall
CC-MAIN-2019-30
refinedweb
965
60.82
Hello, can I disable all inserts into table PDWDB.LSW_TASK (BPM system table)? Thank you in advance! Answer by S.Baumann (2871) | Jan 03 at 09:56 AM Hello @-= Alex =- , the short answer: This is unsupported. Never prevent inserts into a product database table as it might cause follow up problems or in worst case inconsistencies. Though, there might be options to prevent the feeding of data here. As you are referring to the PDWDB's (Performance Data Warehouse Database's) LSW_TASK table, everything is related to the PDW features. Thus, you could consider disabling the autotracking features entirely. I am not entirely sure this finally stops inserts into the PDWDB.LSW_TASK table, but it definitely reduces the utilization of the PDWDB as a whole. However, it means your business does not rely on any generated data here. In my honest opinion, if you are seeking for options to disable those inserts, you seem uninterested in that data. ;-) You can check on how to disable PDW autotracking here: Disabling tracking data generation for a Process Server or Process Center in IBM Business Process Manager (BPM) Important: This applies to the PDW Database only. There is a BPM Database with another LSW_TASK table. As this database table is covering the actual business data (and not only tracking information), preventing inserts is out of scope at all. In any case, the major question is about why you would like to prevent the inserts. Maybe you are just suffering from too much data getting processed by PDW? Did you do any housekeeping? The following post might help you to keep your PDW DB in good shape: Performance Data Warehouse database (PDWDB) growing fast. How can we reduce the size? This is surely the recommended way and better than considering blocking inserts into a Product Database Table Best regards S.Baumann Answer by -= Alex =- (21) | Jan 09 at 05:40 AM Thank you, mr @S.Baumann! These data don't use in our process, but they get to much space, that's why we turned off tracking data using recommended from you link: 173 people are following this question. undeclared variable "user_fullName" in namespace "system"; 1 Answer Is there a Swagger definition of the Process Federation Server REST API? 1 Answer Some of the CSHS ( BPD ) in BPM 8.5.5 have not picked up changes from new snapshot after instance migration 1 Answer How to find Most expensive service from BPM database? 3 Answers Perfomance Issues when using Rest API 0 Answers
https://developer.ibm.com/answers/questions/486792/disable-all-inserts-into-table-pdwdblsw-task-bpm-s.html
CC-MAIN-2019-35
refinedweb
419
65.22
Python Binding of chealpix Homepage Repository Pypi C pip install chealpy==0.1.0 chealpy is the Python binding of chealpix. chealpix is the C implementation of HealPix. HealPix linearizes spherical coordinate into integers. You will need numpy > 1.6. The installation is just python setup.py install or easy_install chealpy Refer to HealPix manual for usage. Two precisions are provided: chealpy.high :upto nside=1<<29 chealpy.low : upto nside=8192 functions under the namespace chealpy are imported from chealpy.high MAX_NSIDES is the upper limit of nsides wrappers are automatically generated Cython code, making use of NpyIter API. For a feature-rich Healpix C++ binding, take a look at healpy. Author: Yu Feng 2012 <yfeng1@andrew.cmu.edu> Description of Functions: def ang2pix_ring (nside,theta,phi,ipix=None) angle to pix def ang2pix_nest (nside,theta,phi, ipix = None): def pix2ang_ring (nside,ipix, theta = None,phi = None): def pix2ang_nest (nside,ipix, theta = None,phi = None): def vec2pix_ring (nside,vec, ipix = None): def vec2pix_nest (nside,vec, ipix = None): def pix2vec_ring (nside,ipix, vec = None): def pix2vec_nest (nside,ipix, vec = None): def nest2ring (nside,ipnest, ipring = None): def ring2nest (nside,ipring, ipnest = None): def npix2nside (npix, nside = None): def nside2npix (nside, npix = None): def ang2vec (theta,phi, vec = None): def vec2ang (vec, theta = None,phi = None):
https://libraries.io/pypi/chealpy
CC-MAIN-2018-47
refinedweb
215
66.74
ESP8266 based IoT Panic Alarm for Old Age People using Blynk In this tutorial we are going to setup a ESP8266 based IoT Panic Alarm for Old Age People using Blynk. We are going to see the design and the requirement for this project. This project is specially designed for old age people. We can see many of elder people around us and some of them are not that technical while using gadgets. Subsequently, they need a gadget when they need someone’s help they can inform them. This IoT panic alarm will help them which works over WiFi. We are using ESP8266 WiFi module to simplify it and make it more portable. Things Needed - ESP8266-01 WiFi Module - A Push Button. - A battery pack or a power source Working The working of this alarm is simple. The person has to just push the button seeking for attention. Immediately an alert will be sent in Blynk app and parallelly, an email will be sent to the concerned email address. It can be configured further to send an SMS to nearest hospital or call an emergency service. As an example, You can configure the device for sending emails to very closed ones who can take care of the person. Aged people can call someone by pressing one button. To make it more compact and handy you can add a small battery pack. Using this project can provide lots of benefits. Benefits of IoT Panic Alarm - For sending an emergency alert to family or any authority - Get immediate attention to get medicine - For getting help if the person is alone Connection Diagram You have to make the connection as per the diagram below. It is easy and can be made by any beginner. This project can be mounted on a small box after successful testing along with a battery pack as a power source. Setting Up Blynk Project Open the Blynk App and tap on New Project Enter a project name followed by select ESP8266 as device and tap on Create An Auth Token will be sent to you registered email id. Now lets add the widgets from the widget box. Select Notification and Email widgets. Now tap on notification widget and configure as shown below. In the same way tap on email settings and update the email address to which email will be sent. Finally your Blynk setup will be complete and it will look some what as shown below. Complete Code You can download the complete code from the link below. After downloading just unzip it and open it using Arduino IDE. Select the correct board and upload it. We have a good article for uploading the code to ESP8266-01. Suggested Reading: - How to send sensor data to Thingspeak using Raspberry Pi - ESP32-CAM based Email Notification System - ESP32 based Gas Leakage Detection using Email Notification - IoT based Motion Detection Alarm using ESP8266 and Blynk - IoT based Fire Security Alarm System using NodeMCU - NTP Digital Clock using ESP8266 and OLED Display - NodeMCU based WiFi Network Scanner with OLED Display - DHT11 Sensor with ESP-NOW and ESP32 - Raspberry Pi Flask Web Server with DHT11 - IoT Heart Rate Monitoring with ThingSpeak Platform - ESP32 Web Server PWM based LED Control - Temperature Monitoring with ESP-DASH Webserver Preparing the code before uploading Before you start uploading the code and use your device, you have to make some changes in the code to make it work properly. Below are the library files which we need. You can install it by going to Sketch-> Include library-> Manage libraries. Now Search for “ESP8266” and click on install. #include <ESP8266WiFi.h> #include <BlynkSimpleEsp8266.h> Next you need to update the Blynk authentication token which you will receive on your registered email id. char auth[] = "AuthToken"; Here you have to provide your WiFi details which is name of your wifi and password. //Your WiFi credentials. char ssid[] = "Your Network SSID"; char pass[] = "Network Password"; You can customize the alert and email messages here. //EMAIL and PUSH Notification Blynk.email("Serious Alert", "Please check the patient"); Blynk.notify("Serious Alert : Please check the patient"); Building and Testing After connecting all the components on a breadboard and uploading the code, its time to power it on. Once you power it on it will start and try to connect with the WiFi connection. Once you press the switch it will trigger a notification along with an email. You will also receive an email along with the Blynk notification. Summary We have built a ESP8266 based IoT Panic Alarm. This can be also named as Health care alarm using blynk app. I hope you will enjoy building this project and also reading this article. Please share this article and help others to learn IoT.
https://www.iotstarters.com/esp8266-based-iot-panic-alarm-for-old-age-people-using-blynk/
CC-MAIN-2021-25
refinedweb
792
63.39
Description Haphazard is an ETS based plug for caching response body. Haphazard alternatives and similar packages Based on the "Caching" category cachex9.5 5.5 Haphazard VS cachexA powerful caching library for Elixir with a wide featureset. con_cache9.3 0.0 Haphazard VS con_cacheConCache is an ETS based key/value storage. locker9.2 0.0 Haphazard VS lockerAtomic distributed "check and set" for short-lived keys. Nebulex9.0 5.4 Haphazard VS NebulexA fast, flexible and powerful distributed caching framework for Elixir lru_cache5.4 0.7 Haphazard VS lru_cacheSimple LRU Cache, implemented with ets. stash5.1 0.0 Haphazard VS stashA straightforward, fast, and user-friendly key/value store. gen_spoxy4.9 0.0 Haphazard VS gen_spoxyCaching made fun. Mem4.0 0.0 Haphazard VS MemKV cache with TTL, Replacement and Persistence support jc3.6 0.0 Haphazard VS jcIn-memory, distrbutable cache with pub/sub, JSON-query and consistency support. elixir_locker3.2 0.0 Haphazard VS elixir_lockerLocker is an Elixir wrapper for the locker Erlang library that provides some useful libraries that should make using locker a bit easier. Scout APM: Application Performance Monitoring Do you think we are missing an alternative of Haphazard or a related project? README Haphazard [ ](LICENSE) Haphazard is an ETS based plug for caching response body. Check the Online Documentation Installation Add haphazard to your list of dependencies in mix.exs: def deps do [{:haphazard, "~> 0.4.0"}] end put it in applications applications: [:logger, ..., :haphazard] Usage Setup in your plug router: plug Haphazard.Plug Additional configurations (optional): plug Haphazard.Plug, methods: ~w(GET HEAD), path: ~r/\/myroute/, ttl: 60_000, enabled: true The additional configurations reflect the default values. License Source code is released under MIT License. Check [LICENSE](LICENSE) for more information. *Note that all licence references and agreements mentioned in the Haphazard README section above are relevant to that project's source code only.
https://elixir.libhunt.com/haphazard-alternatives
CC-MAIN-2020-45
refinedweb
311
50.84
Etags, ctags, gnu global, idutils and cscope all have parsers of some sort that parse C and C++ code. Some use regexp matchers. Others have primitive parsers. gcc, of course, has a full language compliant parser which it uses to compile code. I'm not a gcc expert, but I assume that as it parses, it keeps track of the various symbols (functions, variables, namespaces, etc) and where they are. (ie - debug info for gdb). Now I know what you are talking about. This idea seems very appealing, but it has a grave flaw. The flaw comes from the way GCC handles input: it does preprocessing first, and real parsing operates only on the output of preprocessing. So the output that GCC can easily make would describe only the output of preprocessing. Definitions and calls which are not actually compiled won't be seen at all. Macros and references to them won't be seen at all. What etags does now is much better, because it avoids that problem. It is true that output from GCC would give more details about types, etc., and would avoid getting confused in a few strange situations. So there is indeed an advantage to generating the output from GCC. But the disadvantage is much more important. I designed a way to make GCC analyze and report on macros and on the code that's not compiled in. That would get the best of both aspects. But this is not a small job. Please don't ask me to write more details unless you're prepared to do a substantial amount of work and study the GCC parsing code carefully.
http://lists.gnu.org/archive/html/emacs-devel/2009-09/msg00374.html
CC-MAIN-2014-10
refinedweb
275
75.4
Sometimes variables and other program elements are declared in a separate file called a header file. Header file names customarily end in .h. While it is not always a good idea to declare variables in a header file (we'll see why this can be a problem in the discussion on multi-file projects), it is not technically wrong to do so and is even required sometimes (external variables). Declaring non-external variables in header files is sometimes done by those who simply want to eliminate clutter in their source files. #include Syntax Header files are associated with a program through the #include directive. There are four ways the #include directive may be used. Note that there is never a semicolon ";" at the end of these. #include is a directive or special instruction to the compiler and is not a line of code. Only C statements require a semicolon at the end. 1 Header File is in Compiler Path #include <filename.h> Every C compiler comes with a whole series of header files, most of which are for working with the C Standard Library which contains functions and utilities that are a standard part of the language but not built into the language itself. In case of the MPLAB® XC Compilers, these files are installed along with the compiler in a subdirectory called "Include". As long as you use the angle brackets around the filename, you don't need to specify the full path to the file, the compiler will know where to look for it. 2 Header File is in Project Directory #include "filename.h" If the header file is inside your project's directory, all you need is to enclose the filename in double-quotes. Some integrated development environments like MPLAB X IDE let you specify include directories as part of your project. This tells the IDE (and the IDE tells the compiler) that these directories should be treated as if their files were in your project's directory. If you have specified any include directories, you can use this form to include files from them. 3 Header File is in Project Subdirectory #include "subdirectory_name/filename.h" Although it is usually better to add a project subdirectory to the IDE's include directories, this form is still widely used. If the header file resides in a directory inside your project directory, you can use this form with as many subdirectory levels as necessary. Note that a forward slash "/" is used. The forward slash is independent of the operating system and may be used on Window® machines that traditionally use a backslash "\" as the directory separator. A backslash will work but only on a Windows machine. In the interest of portability across platforms, the forward slash is the preferred syntax. This also works for subdirectories of any directory you specified in your IDE's Include Directories settings. 4 Header File is in a Specific Location Outside of Project Directory #include "C:\path\to\filename.h" Finally, in some cases, you may want to specify a file at a very specific location on your machine. However, it is better to add the file's location in your IDE's Include Directories settings. When using this form, you often need to use paths that are unique to the operating system you are running on. Paths on Windows machines are specified very differently from those on Linux® and Mac OS X® machines. Using this form will harm your ability to use the file on different platforms and should generally be avoided. #include Example Below is one possible way an include file might be used. This exact scenario isn't very common but it serves to illustrate the point of how include files behave. Most of the time, header files are used for function prototypes and external variable declarations which we will cover later in the class. main.h The contents of main.h above are included in the main.c file below by the #include directive on line 1 of main.c. This has the same effect as if you copied the contents of main.h and pasted them in main.c in place of the #include directive on line 1. main.c The two files above are exactly equivalent to the one file below. In fact, after the C preprocessor is run on the project, this is how the file would look when it is passed on to the compiler.
https://microchipdeveloper.com/tls2101:include-directive
CC-MAIN-2020-24
refinedweb
739
63.09
A Developer.com Site An Eweek.com Site #include <iostream> #include <cstdlib> #include <cstring> #include <fstream> #include <iomanip> #define MAXLEN 30 /* structure definition */ struct StudentRec { char name[MAXLEN]; float grade1; float grade2; float average; }; /* function prototypes */ void ReadGrade(StudentRec *Student); void Display(const StudentRec Student[], int last); void ReadFromFile(StudentRec Student[], int *next); void Search(const StudentRec Student[], int last); using namespace std; /* main program */ int main(void) { int quit = 0; char action; StudentRec Stud[MAXLEN]; //an array to store student information int next = 0; //counter to keep track of the number of students in the array while(!quit) { cout << "\n\t\t******************************************"; cout << "\n\t\t**\tStudent Records Menu\t\t**"; cout << "\n\t\t******************************************\n\n"; cout << "\t\t<1> Read Grade.\n"; cout << "\t\t<2> Display.\n"; cout << "\t\t<3> Read File.\n"; cout << "\t\t<4> Search.\n"; cout << "\t\t<5> Quit.\n\n"; cout << "\tPlease enter your option --> "; cin >> action; cout << "\n\n"; switch(action) { case '1': ReadGrade(&Stud[next]); break; case '2': Display(Stud, next-1); break; case '3': ReadFromFile(Stud, &next); break; case '4': Search(Stud, next-1); break; case '5': quit = 1; break; default : cout << "\tWrong selection.\n"; break; } } } //read student information from keyboard and store in array void ReadGrade(StudentRec *Student) { cout << "\tPlease enter the student name : "; cin >> Student->name; cout << "\tPlease enter the student grade 1: "; cin >> Student->grade1; cout << "\tPlease enter the student grade 2: "; cin >> Student->grade2; Student->average = (Student->grade1 + Student->grade2) / 2; cout << "\tThe average of the student grades is " << Student->average << endl; } //display all the student information void Display(const StudentRec Student[], int last) { int i; cout << "Name\t\tGrade1\t\tGrade2\t\tAverage\n"; for(i = 0; i <= last; i++) { cout << Student[i].name << "\t\t" << setprecision(1) << showpoint << fixed << Student[i].grade1 << "\t\t" << Student[i].grade2 << "\t\t" << Student[i].average << endl; } } //read record from file and store in array Student // next is the counter in the array to keep track of number of student void ReadFromFile(StudentRec Student[], int *next) { } //function to search for a student in an array according to name //last is the numebr of students store in the array Student void Search(const StudentRec Student[], int last) { } When posting code, please use code tags so that the code is readable. Go Advanced, select the formatted code and click '#'. The last below 2 void function what should I write ? Please see With that in mind, what is the issue you are having with the code for these two functions? For the file read, you'll need to open the required file, read/parse each record and update the student record array until all records have been processed. See For the search, you'll need to ask for a student name and iterate through the record array until either the name is found or the end is reached and report accordingly. If you produce some code for these functions and post back here we'll be able to advise/guide further..4.
http://forums.codeguru.com/showthread.php?554699-Wrong-function-called-in-DLL&goto=nextoldest
CC-MAIN-2020-05
refinedweb
505
57.4
isnan Determines if the given floating point number arg is a not-a-number (NaN) value. The macro returns an integral value. FLT_EVAL_METHOD is ignored: even if the argument is evaluated with more range and precision than its type, it is first converted to its semantic type, and the classification is based on that (this matters if the evaluation type supports NaNs, while the semantic type does not). [edit] Parameters [edit] Return value Nonzero integral value if arg is a NaN, 0 otherwise. [edit] Notes There are many different NaN values with different sign bits and payloads, see nan. NaN values never compare equal to themselves or to other NaN values. Copying a NaN may change its bit pattern. Another way to test if a floating-point value is NaN is to compare it with itself: bool is_nan(double x) { return x != x; } [edit] Example #include <stdio.h> #include <math.h> #include <float.h> int main(void) { printf("isnan(NAN) = %d\n", isnan(NAN)); printf("isnan(INFINITY) = %d\n", isnan(INFINITY)); printf("isnan(0.0) = %d\n", isnan(0.0)); printf("isnan(DBL_MIN/2.0) = %d\n", isnan(DBL_MIN/2.0)); printf("isnan(0.0 / 0.0) = %d\n", isnan(0.0/0.0)); printf("isnan(Inf - Inf) = %d\n", isnan(INFINITY - INFINITY)); } Possible output: isnan(NAN) = 1 isnan(INFINITY) = 0 isnan(0.0) = 0 isnan(DBL_MIN/2.0) = 0 isnan(0.0 / 0.0) = 1 isnan(Inf - Inf) = 1
http://en.cppreference.com/w/c/numeric/math/isnan
CC-MAIN-2016-07
refinedweb
240
53.07