Finding all occurrences of a pattern (of length m) in a text (of length n) is a commonly encountered string matching problem. For example, you hit Ctrl-F in your browser and type the string you want to find, while the browser highlights every occurrence of the typed string on the page. The naive solution is to "shift" the pattern along the text by 1 position at each iteration and check whether all characters of the pattern match the corresponding characters in the text. This solution has O((n – m + 1)*m) complexity. If either the pattern or the text is fixed, it can be preprocessed to speed up the search. For example, if the pattern is fixed we can use the Knuth-Morris-Pratt algorithm to preprocess it in O(m) time and then search for its occurrences in O(n). A fixed text that is queried many times can also be preprocessed to support fast pattern search. One way to do this is to build a suffix array.

The idea behind it is pretty simple. It is basically a list of the suffixes of the subject text (a suffix starts at some position inside the string and runs till the end of the string), sorted in lexicographical order. For example, for the "mississippi" string we have the following:

i
ippi
issippi
ississippi
mississippi
pi
ppi
sippi
sissippi
ssippi
ssissippi

However, due to string immutability in .NET it is not practical to represent each suffix as a separate string, as that requires O(n^2) space. So instead the starting positions of the suffixes will be sorted. But why are suffixes selected in the first place? Because searching for every occurrence of a pattern is basically searching for every suffix that starts with the pattern. Once the suffixes are sorted, we can use binary search to find the lower and upper bounds that enclose all suffixes that start with the pattern. Comparison of a suffix with a pattern during binary search should take into account only m (the length of the pattern) characters, as we are looking for suffixes that start with the pattern.

// Suffix array represents simple text indexing mechanism.
public class SuffixArray : IEnumerable<int>
{
    private const int c_lower = 0;
    private const int c_upper = -1;

    private readonly string m_text;
    private readonly int[] m_pos;
    private readonly int m_lower;
    private readonly int m_upper;

    SuffixArray(string text, int[] pos, int lower, int upper)
    {
        m_text = text;
        m_pos = pos;
        // Inclusive lower and upper boundaries define search range.
        m_lower = lower;
        m_upper = upper;
    }

    public static SuffixArray Build(string text)
    {
        Contract.Requires<ArgumentException>(!String.IsNullOrEmpty(text));

        var length = text.Length;
        // Sort starting positions of suffixes in lexicographical order.
        var pos = Enumerable.Range(0, length).ToArray();
        Array.Sort(pos, (x, y) => String.Compare(text, x, text, y, length));
        // By default all suffixes are in search range.
        return new SuffixArray(text, pos, 0, text.Length - 1);
    }

    public SuffixArray Search(string str)
    {
        Contract.Requires<ArgumentException>(!String.IsNullOrEmpty(str));

        // Search range is empty so nothing to narrow.
        if (m_lower > m_upper)
            return this;
        // Otherwise search for boundaries that enclose all
        // suffixes that start with supplied string.
        var lower = Search(str, c_lower);
        var upper = Search(str, c_upper);
        // Once precomputed, sorted suffix positions don't change,
        // but the boundaries do, so that the next refinement
        // can be done within a smaller range and thus faster.
        // For example, you may narrow search range to suffixes
        // that start with "ab" and then search within this smaller
        // range for suffixes that start with "abc".
        return new SuffixArray(m_text, m_pos, lower + 1, upper);
    }

    public IEnumerator<int> GetEnumerator()
    {
        // Enumerates starting positions of suffixes that fall
        // into search range.
        for (var i = m_lower; i <= m_upper; i++)
            yield return m_pos[i];
    }

    IEnumerator IEnumerable.GetEnumerator()
    {
        return GetEnumerator();
    }

    private int Compare(string w, int i)
    {
        // Comparison takes into account at most length(w) characters.
        // For example, strings "ab" and "abc" are thus considered equal.
        return String.Compare(w, 0, m_text, m_pos[i], w.Length);
    }

    private int Search(string w, int bound)
    {
        // Depending on bound value binary search results
        // in either lower or upper boundary.
        int x = m_lower - 1, y = m_upper + 1;
        if (Compare(w, m_lower) < 0)
            return x;
        if (Compare(w, m_upper) > 0)
            return y;
        while (y - x > 1)
        {
            var m = (x + y) / 2;
            // If bound equals 0, the left boundary advances to the
            // median only if subject is strictly greater than median,
            // and thus search results in lower bound (position that
            // precedes first suffix equal to or greater than
            // subject w). Otherwise search results in upper bound
            // (position that precedes first suffix that is greater
            // than subject).
            if (Compare(w, m) > bound)
                x = m;
            else
                y = m;
        }
        return x;
    }
}

This implementation is simple (it has O(n^2 log n) complexity to sort and O(m log n) to search, where n stands for the text length and m for the pattern length) and can be improved. It doesn't take into account the fact that the sorted items are suffixes of the same string, not arbitrary strings. Suffixes may share common prefixes, and that may be used to speed up both construction and binary search. Here is an example of narrowing the search:

var str = ...;
var sa = SuffixArray.Build(str);
string pat;
while ((pat = Console.ReadLine()) != String.Empty)
{
    sa = sa.Search(pat);
    foreach (var pos in sa)
    {
        Console.WriteLine(str.Substring(pos));
    }
}

Happy Easter, folks!
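For comparison, the same idea can be sketched compactly in C++ (this sketch and its function name are mine, not part of the original post): sort the suffix starting positions, then binary-search with a comparison limited to the pattern's length.

```cpp
#include <algorithm>
#include <numeric>
#include <string>
#include <vector>

// Positions (in sorted-suffix order) of all suffixes of `text`
// that start with `pattern`.
std::vector<int> FindOccurrences(const std::string& text, const std::string& pattern) {
    // Sort suffix starting positions lexicographically: O(n^2 log n) worst case,
    // just like the C# version above.
    std::vector<int> pos(text.size());
    std::iota(pos.begin(), pos.end(), 0);
    std::sort(pos.begin(), pos.end(), [&](int a, int b) {
        return text.compare(a, std::string::npos, text, b, std::string::npos) < 0;
    });
    // Binary search, comparing only pattern.size() characters of each suffix.
    auto lower = std::lower_bound(pos.begin(), pos.end(), pattern,
        [&](int s, const std::string& p) { return text.compare(s, p.size(), p) < 0; });
    auto upper = std::upper_bound(pos.begin(), pos.end(), pattern,
        [&](const std::string& p, int s) { return text.compare(s, p.size(), p) > 0; });
    return std::vector<int>(lower, upper);
}
```

For "mississippi" and pattern "ss", this returns the start positions of "ssippi" and "ssissippi" in sorted-suffix order. Production suffix arrays replace the comparison sort with O(n log n) or O(n) construction algorithms such as SA-IS.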
https://blogs.msdn.microsoft.com/dhuba/2010/04/04/suffix-array/
JavaRanch » Java Forums » Java » Beginning Java

casting and converting objects

Daniel .J.Hyslop, Ranch Hand (Joined: May 23, 2005, Posts: 55), posted May 28, 2005 13:14:00

I am currently learning about casting and converting objects and have come across what seems to be a contradiction between the program and the description of what actually happens when implicit conversion takes place. My manual tells me that child is implicitly cast to a class of type Object, and yet when I print the value of obj (of type Object) it holds the opposite value. I understand the concept of the hierarchy of implicit casting, but why is it explained as being cast into an Object object when it holds the value of the class being passed to it? Here's the code:

interface Super { }
interface Child1 extends Super { }
interface Child2 extends Super { }

class Implement implements Child2 { }

public class Imp5 {
    public static void main(String[] args) {
        Child2 child = new Implement();
        Super sup = child;
        System.out.println("sup = " + sup + " child = " + child);
        Object obj = child;
        System.out.println("obj = " + obj + " child = " + child);
        /* output of this line: obj = implement@765291 child = Implement@765291 */
    }
}

Ernest Friedman-Hill, author and iconoclast, Marshal (Joined: Jul 08, 2003, Posts: 24039), posted May 28, 2005 13:37:00

Hi, Welcome to JavaRanch!

If an object is an Animal, then a variable is like a Leash. The Leash is not the Animal; it's just the mechanism by which you have access to the animal. You could attach several Leashes to the same Animal, just as you can have several variables refer to the same object. A cast (implicit or explicit) affects only the variable -- the Leash. It doesn't affect the object (the Animal).

If I have my Pet on a Leash, and you're a professional Pet walker, and I hand you the Leash, I'm implicitly casting that Leash to refer to a generic Pet. It's not a generic Pet leash of course -- it's attached to a specific kind (subclass) of Pet. So let's say it's an Alligator Leash. You're a Pet walker, so I can give you any kind of Leash that's attached to a Pet. This has no effect whatsoever on my pet Alligator on the other end of the Leash -- he's still an Alligator.

Now, if you're walking along, and somebody tells you "Aaaaaaaah! There's an Alligator on the end of that Leash!" then they're doing an explicit cast from the superclass (Pet) to the subclass (Alligator). Now you know there's an Alligator, and you'd be wise to treat it as such. But only your knowledge has changed -- the Alligator has always been an Alligator. Make sense?

You could also have a look here and here for another way to explain the same thing.

Daniel .J.Hyslop posted May 29, 2005 11:12:00

I'd just like to say the two web pages that you have posted to explain the problem in hand are the best explanations I've come across so far. This problem of pass by value of primitives and objects has been confusing me for weeks, but drinking a cup of coffee will never be the same again.

Daniel .J.Hyslop posted May 30, 2005 10:48:00

Thanks all for the help so far. I (think) I now understand the concept of handling a reference to an object, i.e.

Cup c = new Cup(); // declares and initialises the object
Cup d = c;         // another variable points at the same object
Cup e = new Cup(); /* another Cup object has been declared and initialised,
                      referring to a separate object than c & d */

As I said, I think that is what is happening, but what is happening here --

Object obj;       /* declare a variable that will eventually be assigned to an Object object */
obj = new Cup();  /* I've changed my mind, I'm going to assign it to a Cup object instead */

Why are we declaring an Object (or any superclass) with a variable and then referencing it to a different object?

Ernest Friedman-Hill posted May 30, 2005 11:59:00

Hi Daniel, all your assertions are correct. Now, as to your last question: if you enjoyed the last two campfire stories, then go on and read the next one, "How my Dog Learned Polymorphism", and it will answer it for you!
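The Leash analogy can also be demonstrated in code. The thread is about Java, but the mechanics are the same in any object-oriented language; here is a sketch in C++ (the type and function names are mine) showing that a cast changes only the variable, never the object:

```cpp
#include <typeinfo>

struct Pet { virtual ~Pet() = default; };  // the generic "Pet" type
struct Alligator : Pet {};

// Upcasting changes only the static type of the pointer (the "Leash"),
// never the dynamic type of the object (the "Animal").
bool StillAnAlligator() {
    Alligator a;
    Pet* leash = &a;  // implicit upcast: the leash is now a generic Pet leash
    // The animal on the other end is still an Alligator...
    bool same_animal = (typeid(*leash) == typeid(Alligator));
    // ...so the explicit downcast succeeds; it only changes our knowledge.
    bool downcast_ok = (dynamic_cast<Alligator*>(leash) != nullptr);
    return same_animal && downcast_ok;
}
```

In Java terms, `typeid(*leash)` plays the role of `obj.getClass()`: it reports the object's runtime class regardless of the variable's declared type, which is exactly why the original program printed the Implement class for both `obj` and `child`.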
http://www.coderanch.com/t/399809/java/java/casting-converting-objects
This is my practice program for finding the resultant force in statics. When I run the program it will run through the loop once fine, but the second time it skips the scanf command and prints the printf commands in an infinite loop. I have been trying to figure out why this is and would really appreciate your help. Thanks!

[code]
#include <stdio.h>
#include <math.h>
#include <stdlib.h>

int main(){
    float xr = 0, fr = 0, num = 0;
    float force = 0, x = 0, centroid = 0;
    num = 0;
    while (xr >= 0){
        printf("enter Force and centriod separated by a comma,\n");
        printf("enter 0 for centroid to finish.\n");
        scanf("%f %f", &fr, &xr);
        force = force + fr;
        x = x + xr;
        xr = 0;
        centroid = (force * x) / force;
    }
    printf("\tResultant force is %f\n", &force);
    printf("\tCentroid is %f\n", &centroid);
    return 0;
}
[/code]
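Two things in the posted code explain the symptoms. First, `scanf("%f %f", &fr, &xr)` never matches the comma the prompt asks for: the second conversion fails, the comma is left sitting in the input buffer, and every later `scanf` fails immediately instead of waiting for input, so the loop spins forever. Second, `printf("\tResultant force is %f\n", &force)` passes the *address* of `force` where a `float` is expected, which is undefined behavior; the `&` belongs only in `scanf`. The centroid of several point loads is also Σ(F·x)/ΣF, not `(force * x) / force`. A corrected sketch of the core logic (the helper names are mine):

```cpp
#include <cstdio>

// Parse one "force, centroid" pair; the comma is part of the format string.
// Returns 1 on success, 0 on a malformed line (check this instead of looping blindly).
int ParseLoad(const char* line, float* fr, float* xr) {
    return std::sscanf(line, " %f , %f", fr, xr) == 2;
}

// Resultant of point loads: R = sum(F_i), centroid = sum(F_i * x_i) / R.
float ResultantCentroid(const float* f, const float* x, int n, float* out_force) {
    float force = 0.0f, moment = 0.0f;
    for (int i = 0; i < n; i++) {
        force += f[i];
        moment += f[i] * x[i];
    }
    *out_force = force;
    return (force != 0.0f) ? moment / force : 0.0f;
}
```

With interactive `scanf`, the equivalent fix is the format string `" %f , %f"` plus checking that `scanf` returned 2, and printing with `printf("%f\n", force)` (no `&`).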
http://cboard.cprogramming.com/c-programming/125338-using-while-produces-infinite-loop.html
remctl_close man page

remctl_close — Close a remctl connection and free the client object

Synopsis

    #include <remctl.h>

    void remctl_close(struct remctl *r);

Description

remctl_close() cleanly closes any connection to a remote server created via remctl_open() for the given client object and then frees the object created by remctl_new(). It should be called when the caller is finished with a remctl client object to avoid resource leaks. Following the call to remctl_close(), the r pointer to the remctl client object is no longer valid.

remctl_close() is always successful, even if it is unable to send a clean protocol quit command to the remote server.

Compatibility

This interface has been provided by the remctl client library since its initial release in version 2.0.

Author

Russ Allbery <eagle@eyrie.org>

Copyright

Copyright 2007

See Also

remctl_open(3)

The current version of the remctl library and complete details of the remctl protocol are available from its web page at <>.

Referenced By

remctl(3), remctl_new(3).
https://www.mankier.com/3/remctl_close
This document explains how to use Eclipse with web2py. It was tested on the cookbook example. The only notable thing so far: add an __init__.py file to the modules directories where you want to be able to import things from your own code. Only models have been tested so far.

One gotcha: make sure you choose the correct case:

return dict(records=SQLTABLE(records))

There are two similarly named classes: SQLTABLE and SQLTable. The lower case one does not need to be exposed, since it is not intended to be instantiated by the user. The upper case one is used to output HTML, and is the one we want. As Python is case sensitive, you would not get the expected outcome if you chose the wrong item.

To let Eclipse know about variables being passed into the controller at runtime, you can do the following:

global db
global request
global session
global response

This will remove the warnings about them being undefined. Typing redirect will get the correct import and you can use it normally.

To use session, request and (theoretically but untried) response with hints and code completion, you can use this code:

req = Request()
req = request
ses = Session()
ses = session
resp = Response()
resp = response

As you type those you will get the imports for these objects as well:

from gluon.globals import Request
from gluon.globals import Session
from gluon.globals import Response

Then you can use req as you would request, but with code hinting, etc., and ses for session, and resp for response. If anyone knows a better way to cast request, session, and response as their correct types, please do leave a comment.

Code hints and completion in Eclipse are very nice and provide a valid alternative to the web2py built-in editor. Unfortunately you do not get the code hints unless you also import the statement. If you choose to keep "Do auto imports?" checked, imports from your own models, controllers, etc., may throw errors. You can review those and delete the undesired imports. Or perhaps simply use it as a memory aid and keep typing without selecting the object so as to not trigger the auto import.

Please note that this is still a work in progress!
http://web2py.com/AlterEgo/default/show/37
Using Django pipeline to add hash to static files

Why do I need to hash static files? See Zhang Yunlong's answer to "How do large companies develop and deploy front end code?". In short: when a static file has been modified, clients conveniently get the latest version, while unmodified static files are still served from cache. This avoids users being stuck with stale static files after a change, while still making full use of caching.

demo

Install:

sudo mkdir /opt/projects
git clone
cd django_pipeline_demo
ln -s $(pwd) /opt/projects
ln -s /opt/projects/django_pipeline_demo/deploy/nginx/django_pipeline.conf /etc/nginx/sites-enabled
pip install -r requirements.txt
python manage.py runserver 0.0.0.0:9888
nginx -s reload

Edit /etc/hosts and add an entry mapping django_pipline_demo.com to 127.0.0.1 (the demo serves on port 9888).

The demo uses Django's pipeline library together with mako and django-mako; the full code is in django-pipeline-demo. Taking django_pipeline_demo as the example, the final usage is:

- DEBUG must be False (as it is online). If it is True, Django's default static file handling is used instead of pipeline.
- Run python manage.py collectstatic.
- Restart the Django project.

Key code explanation

settings.py has several configurations. For how to install and configure django-pipeline, please refer to its documentation. The collection-related configurations:

# After `python manage.py collectstatic`, files will be put under STATIC_ROOT
STATIC_ROOT = './statics'

# Django templates will be looked up in these directories
TEMPLATE_DIRS = (
    os.path.join(BASE_DIR, 'templates'),
)

# During development, collectstatic looks up static files here and then drops
# them under STATIC_ROOT. With pipeline, a hash is added to the file name,
# e.g. css/index.css becomes css/index.as1df14jah8dfh.css after collectstatic.
STATICFILES_DIRS = (
    os.path.join(BASE_DIR, "static_dev"),
)

templates/common/static_pipeline.html

This defines a URL helper with Mako. After static files are referenced through this helper, the hashed version can be found:

<%! from django.contrib.staticfiles.storage import staticfiles_storage %>
<%def name="url(file)">
<%
    try:
        url = staticfiles_storage.url(file)
    except:
        url = file
%>
${url}
</%def>

index.html

First import /common/static_pipeline.html, then reference static files with ${static.url('unhashed file path')}:

<%namespace name="static" file="/common/static_pipeline.html"/>
....
<link rel="stylesheet" href="${static.url('css/index.css')}" type="text/css" media="all" />
....
https://developpaper.com/using-django-pipeline-to-add-hash-to-static-files/
# How to Make Your Own C++ Game Engine

(This [blog post](https://pikuma.com/blog/how-to-make-your-own-cpp-game-engine) was originally posted on [pikuma.com](https://pikuma.com))

![pikuma.com](https://habrastorage.org/r/w1560/getpro/habr/upload_files/bbf/05f/97a/bbf05f97a1f43799c697870ffbb69134.png "pikuma.com")

So you want to learn more about game engines and write one yourself? That's awesome! To help you on your journey, here are some recommendations of C++ libraries and dependencies that will help you hit the ground running.

Game development has always been a great helper to get my students motivated to learn more about advanced computer science topics. One of my tutors, Dr. Sepi, once said:

> *"Some people think games are kid's stuff, but gamedev is one of the few areas that uses almost every item of the standard CS curriculum."* — [Dr. Sepideh Chakaveh](https://www.conted.ox.ac.uk/profiles/sepideh-chakaveh)

As always, she is absolutely right! If we expose what is hidden under the development stack of any modern game, we'll see that it touches many concepts that are familiar to any computer science student.

![](https://habrastorage.org/r/w1560/getpro/habr/upload_files/ce1/6b9/f93/ce16b9f93068dab0bd40385fb5292ad9.png)

Depending on the nature of your game, you might need to dive even deeper into more specialized areas, like distributed systems or human-computer interaction. Game development is serious business and it can be a powerful tool to learn serious CS concepts.

This article will go over some of the fundamental building blocks that are required to create a simple game engine with C++. I'll explain the main elements that are required in a game engine, and give some personal recommendations on how I like to approach writing one from scratch.

That being said, this will **not** be a coding tutorial. I won't go into too much technical detail or explain how all these elements are glued together via code.
If you are looking for a comprehensive video book on how to write a C++ game engine, this is a great starting point: [Create a 2D Game Engine with C++ & Lua](https://pikuma.com/courses/cpp-2d-game-engine-development).

![Create a C++ Game Engine (by pikuma.com)](https://habrastorage.org/r/w780q1/getpro/habr/upload_files/e14/36b/15f/e1436b15fbad85d97645b81cd4805dd0.jpg "Create a C++ Game Engine (by pikuma.com)")

What is a Game Engine?
----------------------

If you are reading this, chances are you already have a good idea of what a game engine is, and possibly even tried to use one yourself. But just so we are all on the same page, let's quickly review what game engines are and what they help us achieve.

A **game engine** is a set of software tools that optimizes the development of video games. These engines can be small and minimalist, providing just a game loop and a couple of rendering functions, or be large and comprehensive, similar to IDE-like applications where developers can script, debug, customize level logic, AI, design, publish, collaborate, and ultimately build a game from start to finish without the need to ever leave the engine.

Game engines and game frameworks usually expose an [API](https://en.wikipedia.org/wiki/API) to the user. This API allows the programmer to call engine functions and perform hard tasks as if they were black boxes. To really understand how this API thing works, let's put it into context.

For example, it is not rare for a game engine API to expose a function called "**IsColliding()**" that developers can invoke to check if two game objects are colliding or not. There is no need for the programmer to know **how** this function is implemented or what is the algorithm required to correctly determine if two shapes are overlapping.
As far as we are concerned, the **IsColliding** function is a [black box](https://en.wikipedia.org/wiki/Black_box) that does some magic and correctly returns *true* or *false* depending on whether those objects are colliding with each other or not. This is an example of a function that most game engines expose to their users.

```
if (IsColliding(player, bullet)) {
    lives--;
    if (lives == 0) {
        GameOver();
    }
}
```

![Most engines will abstract collision detection and simply expose it as a true/false function.](https://habrastorage.org/r/w780q1/getpro/habr/upload_files/864/81f/763/86481f7637809f59df5206b5923fe24c.jpg "Most engines will abstract collision detection and simply expose it as a true/false function.")

Besides a programming API, another big responsibility of a game engine is hardware abstraction. For example, 3D engines are usually built upon a dedicated graphics API like [OpenGL](https://www.opengl.org/), [Vulkan](https://www.vulkan.org/), or [Direct3D](https://docs.microsoft.com/en-us/windows/win32/direct3d). These APIs provide a software abstraction for the Graphics Processing Unit ([GPU](https://en.wikipedia.org/wiki/Graphics_processing_unit)).

Speaking of hardware abstraction, there are also low-level libraries (like [DirectX](https://en.wikipedia.org/wiki/DirectX), [OpenAL](https://www.openal.org/), and [SDL](https://www.libsdl.org/)) that provide abstraction & multi-platform access to many other hardware elements. These libraries help us access and handle keyboard events, mouse movement, network connection, and even audio.

The Rise of Game Engines
------------------------

In the early years of the game industry, games were built using a custom rendering engine and the code was developed to squeeze as much performance as possible from slower machines.
Every CPU cycle was crucial, so code reuse or generic functions that worked for multiple scenarios was not a luxury that developers could afford.

As games and development teams grew in both size and complexity, most studios ended up reusing functions and subroutines between their games. Studios developed in-house engines that were basically a collection of internal files and libraries that dealt with low-level tasks. These functions allowed other members of the development team to focus on high-level details like gameplay, map creation, and level customization.

Some popular classic engines are [id Tech](https://en.wikipedia.org/wiki/Id_Tech), [Build](https://en.wikipedia.org/wiki/Build_(game_engine)), and [AGI](https://en.wikipedia.org/wiki/Adventure_Game_Interpreter). These engines were created to aid the development of specific games, and they allowed other members of the team to rapidly develop new levels, add custom assets, and customize maps on the fly. These custom engines were also used to [mod](https://en.wikipedia.org/wiki/Video_game_modding) or create expansion packs for their original games.

Id Software developed id Tech. id Tech is a collection of different engines where each iteration is associated with a different game. It is common to hear developers describe id Tech 0 as "the Wolfenstein3D engine", id Tech 1 as "the Doom engine", and id Tech 2 as "the Quake engine."

Build is another example of an engine that helped shape the history of 90's games. It was created by [Ken Silverman](http://advsys.net/ken/) to help customize first-person shooters. Similar to what happened to id Tech, Build evolved with time and its different versions helped programmers develop games such as [Duke Nukem 3D](https://en.wikipedia.org/wiki/Duke_Nukem_3D), [Shadow Warrior](https://en.wikipedia.org/wiki/Shadow_Warrior), and [Blood](https://en.wikipedia.org/wiki/Blood_(video_game)).
These are arguably the most popular titles created using the Build engine, and are often referred to as "The Big Three."

![The Build engine, developed by Ken Silverman, editing a level in 2D mode.](https://habrastorage.org/r/w1560/getpro/habr/upload_files/5f1/b17/9cd/5f1b179cd119d8beb87c4f7ef67fd5cc.png "The Build engine, developed by Ken Silverman, editing a level in 2D mode.")

Yet another example of a game engine from the 90s was the "Script Creation Utility for Maniac Mansion" ([SCUMM](https://en.wikipedia.org/wiki/SCUMM)). SCUMM was an engine developed at *LucasArts*, and it is the base of many classic Point-and-Click games like [Monkey Island](https://en.wikipedia.org/wiki/Monkey_Island_(video_game_series)) and [Full Throttle](https://en.wikipedia.org/wiki/Full_Throttle_(1995_video_game)).

![Full Throttle's dialogs and actions were managed using the SCUMM scripting language.](https://habrastorage.org/r/w1560/getpro/habr/upload_files/d2e/49c/7b4/d2e49c7b4b520711a03e16d17408a09d.png "Full Throttle's dialogs and actions were managed using the SCUMM scripting language.")

As machines evolved and became more powerful, so did game engines. Modern engines are packed with feature-rich tools that require fast processor speeds, ridiculous amounts of memory, and dedicated graphics cards. With power to spare, modern engines trade machine cycles for more abstraction. This trade-off means we can view modern game engines as general-purpose tools to create complex games at low cost and short development times.

Why Make a Game Engine?
-----------------------

This is a very common question, and different game programmers will have their own take on this topic depending on the nature of the game being developed, their business needs, and other driving forces being considered.
There are many free, powerful, and professional [commercial engines](https://www.incredibuild.com/blog/top-7-gaming-engines-you-should-consider-for-2020) that developers can use to create and deploy their own games. With so many game engines to choose from, why would anyone bother to make a game engine from the ground up?

I wrote a blog post called "[Should I Make a Game Engine or Use an Existing One?](https://pikuma.com/blog/why-make-a-game-engine)" explaining some of the reasons programmers might decide to make a game engine/framework from scratch. In my opinion, the top reasons are:

* **Learning opportunity**: a low-level understanding of how game engines work under the hood can make you grow as a developer.
* **Workflow control**: you'll have more control over special aspects of your game and adjust the solution to fit your workflow needs.
* **Customization**: you'll be able to tailor a solution for a unique game requirement.
* **Minimalism**: a smaller codebase can reduce the overhead that comes with bigger game engines.
* **Innovation**: you might need to implement something completely new or target unorthodox hardware that no other engine supports.

I will continue our discussion assuming you are interested in the **educational** appeal of game engines. Creating a small game engine from scratch is something I strongly recommend to all my CS students.

Considerations When Writing a Game Engine
-----------------------------------------

So, after this quick talk about the motivations of using and developing game engines, let's go ahead and discuss some of the components of game engines and learn how we can go about writing one ourselves.

### 1. Choosing a Programming Language

One of the first decisions we face is choosing the programming language we'll use to develop the core engine code. I have seen engines being developed in raw assembly, C, C++, and even high-level languages like C#, Java, Lua, and even JavaScript!
One of the most popular languages for writing game engines is C++. The C++ programming language combines speed with the ability to use object-oriented programming (OOP) and other programming paradigms that help developers organize and design large software projects.

Since performance is usually a great deal when we develop games, C++ has the advantage of being a compiled language. A compiled language means that the final executables will run natively on the processor of the target machine. There are also many dedicated C++ libraries and development kits for most modern consoles, like PlayStation or Xbox.

![Developers can access the Xbox controller using C++ libraries provided by Microsoft.](https://habrastorage.org/r/w1560/getpro/habr/upload_files/ed9/71f/e1a/ed971fe1ad4f375d654fef9f71c03221.png "Developers can access the Xbox controller using C++ libraries provided by Microsoft.")

Speaking of performance, I personally don't recommend languages that use virtual machines, bytecode, or any other intermediary layer. Besides C++, some modern alternatives that are suited for writing core game engine code are [Rust](https://www.rust-lang.org/), [Odin](https://odin-lang.org/), and [Zig](https://ziglang.org/).

For the remainder of this article, my recommendations will assume the reader wants to build a simple game engine using the C++ programming language.

### 2. Hardware Access

In older operating systems, like MS-DOS, we could usually poke memory addresses and access special locations that were mapped to different hardware components. For example, all I had to do to "paint" a pixel with a certain color was to load a special memory address with the number that represented the correct color of my VGA palette, and the display driver translated that change to the physical pixel into the CRT monitor.
As operating systems evolved, they became responsible for protecting the hardware from the programmer. Modern operating systems will not allow the code to modify memory locations that are outside the allowed addresses given to our process by the OS.

For example, if you are using Windows, macOS, Linux, or \*BSD, you'll need to ask the OS for the correct permissions to draw and paint pixels on the screen or talk to any other hardware component. Even the simple task of opening a window on the OS desktop is something that must be performed via the operating system API. Therefore, running a process, opening a window, rendering graphics on the screen, painting pixels inside that window, and even reading input events from the keyboard are all OS-specific tasks.

One very popular library that helps with multi-platform hardware abstraction is SDL. I personally like using SDL when I teach gamedev classes because with SDL I don't need to create one version of my code for Windows, another version for macOS, and another one for Linux students. SDL works as a bridge not just for different operating systems, but also different CPU architectures (Intel, ARM, Apple M1, etc.). The SDL library abstracts the low-level hardware access and "translates" our code to work correctly on these different platforms.

Here is a minimal snippet of code that uses SDL to open a window on the operating system. I'm not handling errors for the sake of simplicity, but the code below will be the same for Windows, macOS, Linux, BSD, and even RaspberryPi.

```
#include <SDL2/SDL.h>

void OpenNewWindow() {
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window* window = SDL_CreateWindow("My Window", 0, 0, 800, 600, 0);
    SDL_Renderer* renderer = SDL_CreateRenderer(window, -1, 0);
}
```

But SDL is just one example of a library that we can use to achieve this multi-platform hardware access. SDL is a popular choice for 2D games and to port existing code to different platforms and consoles.
Another popular option of multi-platform library, used mostly with 3D games and 3D engines, is GLFW. The GLFW library communicates very well with accelerated 3D APIs like OpenGL and Vulkan.

### 3. Game Loop

Once we have our OS window open, we need to create a controlled [game loop](https://gameprogrammingpatterns.com/game-loop.html). Put simply, we *usually* want our games to run at 60 frames per second. The framerate might be different depending on the game, but to put things into perspective, movies shot on film run at a 24 FPS rate (24 images flash past your eyes every single second). A game loop **runs continuously during gameplay**, and at each pass of the loop, our engine needs to run some important tasks. A traditional game loop must:

* **Process Input** events without blocking
* **Update** all game objects and their properties for the current frame
* **Render** all game objects and other important information on the screen

```
while (isRunning) {
    Input();
    Update();
    Render();
}
```

That's a cute while-loop. Are we done? Absolutely not! A raw C++ loop is not good enough for us. A game loop must have some sort of relationship with real-world time. After all, the enemies of the game should move at the same speed on any machine, regardless of its CPU clock speed. Controlling this framerate and setting it to a fixed number of FPS is actually a very interesting problem. It usually requires us to keep track of the time between frames and perform some [reasonable calculations](https://gafferongames.com/post/fix_your_timestep/) to make sure our games run smoothly at a framerate of at least 30 FPS.

### 4. User Input

I cannot imagine a game that does not read some sort of input event from the user. These can come from a keyboard, a mouse, a gamepad, or a VR set. Therefore, we must process and handle different input events inside our game loop.
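Circling back to the game loop for a moment: the "fix your timestep" bookkeeping mentioned above usually amounts to an accumulator that converts variable real frame times into a whole number of fixed-size update steps. Here is a minimal sketch in plain C++ (no SDL required; the struct and member names are mine, not from any library):

```cpp
#include <cassert>

// Fixed-timestep accumulator: leftover frame time is carried over to the
// next frame, so Update() always advances the simulation by the same dt.
struct FixedTimestep {
    double dt;                 // fixed update interval, in seconds
    double accumulator = 0.0;  // unconsumed time carried between frames

    explicit FixedTimestep(double step) : dt(step) {}

    // Given the real time elapsed since the last frame, return how many
    // fixed-size Update() steps this frame should run.
    int advance(double frameSeconds) {
        accumulator += frameSeconds;
        int steps = 0;
        while (accumulator >= dt) {
            accumulator -= dt;
            ++steps;
        }
        return steps;
    }
};
```

In a real loop you would measure `frameSeconds` with a high-resolution clock each iteration and call `Update()` the returned number of times, which keeps gameplay speed independent of the machine's framerate.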
To process user input, we must request access to hardware events, and this must be performed via the operating system API. The good news is that we can use a multi-platform hardware abstraction library (SDL, GLFW, SFML, etc.) to handle user input for us. If we are using SDL, we can poll events and handle them accordingly with a few lines of code.

```
void Input() {
    SDL_Event event;
    while (SDL_PollEvent(&event)) {
        switch (event.type) {
            case SDL_KEYDOWN:
                if (event.key.keysym.sym == SDLK_SPACE) {
                    ShootMissile();
                }
                break;
        }
    }
}
```

Once again, if we are using a cross-platform library like SDL to handle input, we don't have to worry too much about OS-specific implementation. Our C++ code should be the same regardless of the platform we are targeting. After we have a working game loop and a way of handling user input, it's time for us to start thinking about organizing our game objects in memory.

### 5. Representing Game Objects in Memory

When we are designing a game engine, we need to set up data structures to store and access the objects of our game. There are several techniques that programmers use when architecting a game engine. Some engines might use a simple object-oriented approach with classes and inheritance, while other engines might organize their objects as entities and components.

![](https://habrastorage.org/r/w1560/getpro/habr/upload_files/605/4cd/dfc/6054cddfc6bdcd08b3f3a6544c3ef697.png)

If one of your goals is to learn more about algorithms and data structures, my recommendation is for you to try implementing these data structures yourself. If you're using C++, one option is to use the [STL](https://en.wikipedia.org/wiki/Standard_Template_Library) (standard template library) and take advantage of the many data structures that come with it (vectors, lists, queues, stacks, maps, sets, etc.).
The C++ STL relies heavily on [templates](https://en.wikipedia.org/wiki/Template_(C%2B%2B)), so this can be a good opportunity to practice working with templates and see them in action in a real project. As you start reading more about game engine architecture, you'll see that one of the most popular design patterns used by games is based on **entities** and **components**. An [entity-component](https://www.gamedeveloper.com/design/the-entity-component-system---an-awesome-game-design-pattern-in-c-part-1-) design organizes the objects of our game scene as *entities* (what Unity calls "game objects" and Unreal calls "actors") and *components* (the data that we can add or attach to our entities). To understand how entities and components work together, think of a simple game scene. The entities will be our main player, the enemies, the floor, and the projectiles, and the components will be the important blocks of data that we "attach" to our entities, like position, velocity, rigid body collider, etc.

![A popular game engine design pattern is to organize game elements as entities and components.](https://habrastorage.org/r/w1560/getpro/habr/upload_files/d99/b48/d45/d99b48d45f0219464c04ff955ec4258a.png "A popular game engine design pattern is to organize game elements as entities and components.")A popular game engine design pattern is to organize game elements as entities and components.

Some examples of components that we can choose to attach to our entities are:

* **Position component**: Keeps track of the x-y position coordinates of our entity in the world (or x-y-z in 3D).
* **Velocity component**: Keeps track of how fast the entity is moving in the x-y axes (or x-y-z in 3D).
* **Sprite component**: Usually stores the PNG image that we should render for a certain entity.
* **Animation component**: Keeps track of the entity's animation speed and how the animation frames change over time.
* **Collider component**: Usually related to the physics characteristics of a rigid body; defines the colliding shape of an entity (bounding box, bounding circle, mesh collider, etc.).
* **Health component**: Stores the current health value of an entity. This is usually just a number or, in some cases, a percentage value (a health bar, for example).
* **Script component**: Sometimes we can have a script component attached to our entity, which might be an external script file (Lua, Python, etc.) that our engine must interpret and execute behind the scenes.

This is a very popular way of representing game objects and important game data. We have entities, and we "plug" different components into our entities. There are many books and articles that explore *how* we should go about implementing an entity-component design, as well as what data structures we should use in this implementation. The data structures we use and how we access them have a direct impact on our game's performance, and you'll hear developers mention things like [Data-Oriented Design](https://en.wikipedia.org/wiki/Data-oriented_design), [Entity-Component-System](https://en.wikipedia.org/wiki/Entity_component_system) (ECS), [data locality](https://gameprogrammingpatterns.com/data-locality.html), and many other ideas that have everything to do with how our game data is stored in memory and how we can access it efficiently.

Representing and accessing game objects in memory can be a complex topic. In my opinion, you can either code a simple entity-component implementation manually, or you can simply use an existing third-party ECS library. There are some popular options of ready-to-use ECS libraries that we can include in our C++ project to start creating entities and attaching components without having to worry about *how* they are implemented under the hood.
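To see how little code the core idea needs, here is a toy sketch of entity-component storage plus one "system" (all names are mine, not from EnTT, Flecs, or any other library; a real ECS would use packed arrays for data locality rather than hash maps):

```cpp
#include <cassert>
#include <cstdint>
#include <unordered_map>

// An entity is just an id; each component type lives in its own pool
// keyed by that id.
using Entity = std::uint32_t;

template <typename T>
struct ComponentPool {
    std::unordered_map<Entity, T> data;

    void add(Entity e, T value) { data[e] = value; }
    bool has(Entity e) const { return data.count(e) != 0; }
    T& get(Entity e) { return data.at(e); }
};

struct Position { float x, y; };
struct Velocity { float dx, dy; };

// A "movement system": update every entity that has both components.
void MovementSystem(ComponentPool<Position>& positions,
                    ComponentPool<Velocity>& velocities, float dt) {
    for (auto& [entity, vel] : velocities.data) {
        if (!positions.has(entity)) continue;
        Position& pos = positions.get(entity);
        pos.x += vel.dx * dt;
        pos.y += vel.dy * dt;
    }
}
```

The key design property is visible even in this toy: behavior lives in systems that iterate over component data, not in per-object virtual methods.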
Some examples of C++ ECS libraries are [EnTT](https://github.com/skypjack/entt/wiki) and [Flecs](https://github.com/SanderMertens/flecs). I personally recommend that students who are serious about programming try implementing a very simple ECS manually at least once. Even if your implementation is not perfect, coding an ECS from scratch will force you to think about the underlying data structures and consider their performance. Now, serious talk! Once you're done with your custom ad-hoc ECS implementation, I would encourage you to just use one of the popular third-party ECS libraries (EnTT, Flecs, etc.). These are professional libraries that have been developed and tested by the industry for several years. They are probably a lot better than anything we could come up with from scratch ourselves. In summary, a professional ECS is difficult to implement from scratch. It is valid as an academic exercise, but once you're done with your small learning project, just pick a well-tested third-party ECS library and add it to your game engine code.

### 6. Rendering

Alright, it looks like our game engine is slowly growing in complexity. Now that we have discussed ways of storing and accessing game objects in memory, we probably need to talk about how we render objects on the screen. The first step is to consider the nature of the games that we will be creating with our engine. Are we creating a game engine to develop only 2D games? If that's the case, we need to think about rendering sprites and textures, managing layers, and probably taking advantage of graphics card acceleration. The good news is that 2D games are usually simpler than 3D ones, and 2D math is considerably easier than 3D math.

![](https://habrastorage.org/r/w1560/getpro/habr/upload_files/a9f/dab/019/a9fdab019c87c3c9e1b6f75d9fc3a317.png)

If your goal is to develop a **2D** engine, you can use SDL to help with multi-platform rendering.
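To make "rendering sprites and textures" a bit more concrete, here is a toy CPU-side color buffer of the kind a simple 2D software renderer might fill each frame before handing the pixels to SDL (for example, through a streaming texture). This is my own sketch, not SDL's API:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

// A CPU-side color buffer: one 32-bit color per pixel, stored row-major.
class ColorBuffer {
public:
    ColorBuffer(int w, int h) : width_(w), height_(h), pixels_(w * h, 0) {}

    // Fill the whole buffer with one color (the per-frame "clear").
    void clear(std::uint32_t color) {
        std::fill(pixels_.begin(), pixels_.end(), color);
    }

    // Write one pixel, silently clipping anything outside the buffer.
    void putPixel(int x, int y, std::uint32_t color) {
        if (x < 0 || x >= width_ || y < 0 || y >= height_) return;
        pixels_[y * width_ + x] = color;
    }

    std::uint32_t at(int x, int y) const { return pixels_[y * width_ + x]; }

private:
    int width_, height_;
    std::vector<std::uint32_t> pixels_;
};
```

Sprite blitting is then just a nested loop of `putPixel` calls over the sprite's texels; SDL's accelerated renderer does the same job on the GPU.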
SDL abstracts accelerated GPU hardware and can decode and display PNG images, draw sprites, and render textures inside our game window. Now, if your goal is to develop a **3D** engine, then we'll need to define how we send extra 3D information (vertices, textures, shaders, etc.) to the GPU. You'll probably want to use a software abstraction over the graphics hardware, and the most popular options are [OpenGL](https://www.opengl.org/), [Direct3D](https://en.wikipedia.org/wiki/Direct3D), [Vulkan](https://www.vulkan.org/), and [Metal](https://en.wikipedia.org/wiki/Metal_(API)). The decision of which API to use might depend on your target platform. For example, Direct3D powers Microsoft platforms, while Metal works solely with Apple products.

3D applications work by processing 3D data through a graphics pipeline. This pipeline dictates how your engine must send graphics information to the GPU (vertices, texture coordinates, normals, etc.). The graphics API and the pipeline also dictate how we should write programmable [shaders](https://en.wikipedia.org/wiki/Shader) to transform and modify the vertices and pixels of our 3D scene.

![Programmable shaders dictate how the GPU should process and display 3D objects. We can have different scripts per vertex and per pixel (fragment), and they control reflection, smoothness, color, transparency, etc.](https://habrastorage.org/r/w1560/getpro/habr/upload_files/60a/73f/576/60a73f5767162150e34773c2cdd4ff3b.png "Programmable shaders dictate how the GPU should process and display 3D objects. We can have different scripts per vertex and per pixel (fragment), and they control reflection, smoothness, color, transparency, etc.")Programmable shaders dictate how the GPU should process and display 3D objects.
We can have different scripts per vertex and per pixel (fragment), and they control reflection, smoothness, color, transparency, etc.

Speaking of 3D objects and vertices, it is a good idea to delegate the task of reading and decoding different mesh formats to a library. There are many popular 3D model formats that most third-party 3D engines should be aware of; some examples are OBJ, FBX, and Collada (DAE). My recommendation is to start with OBJ files. There are well-tested and well-supported libraries that handle OBJ loading in C++. [TinyOBJLoader](https://github.com/tinyobjloader/tinyobjloader) and [AssImp](https://github.com/assimp/assimp) are great options that are used by many game engines.

### 7. Physics

When we add entities to our engine, we probably also want them to move, rotate, and bounce around our scene. This subsystem of a game engine is the physics simulation, and it can either be created manually or imported from an existing ready-to-use physics engine. Here, we also need to consider what type of physics we want to simulate. 2D physics is usually simpler than 3D, but the underlying parts of a physics simulation are very similar in both 2D and 3D engines.

If you simply want to include a physics library in your project, there are several great options to choose from. For 2D physics, I recommend looking at [Box2D](https://box2d.org/) and [Chipmunk2D](https://chipmunk-physics.net/). For professional and stable 3D physics simulation, some good names are [PhysX](https://developer.nvidia.com/physx-sdk) and [Bullet](https://pybullet.org/). Using a third-party physics engine is always a good call if physics stability and development speed are crucial for your project.
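If you do decide to write the movement step yourself, its core is surprisingly small. Here is a sketch of semi-implicit Euler integration, a common choice for game physics because it is more stable than plain explicit Euler (the struct and function names are mine, not from Box2D or any other library):

```cpp
#include <cassert>

// Semi-implicit Euler: integrate velocity from the accumulated force
// first, then integrate position from the *new* velocity.
struct Body {
    float x = 0, y = 0;     // position
    float vx = 0, vy = 0;   // velocity
    float invMass = 1.0f;   // 1/mass; 0 marks a static (immovable) body
    float fx = 0, fy = 0;   // force accumulated during this frame
};

void AddForce(Body& b, float fx, float fy) {
    b.fx += fx;
    b.fy += fy;
}

void Integrate(Body& b, float dt) {
    b.vx += b.fx * b.invMass * dt;  // a = F/m, v += a*dt
    b.vy += b.fy * b.invMass * dt;
    b.x += b.vx * dt;               // p += v*dt
    b.y += b.vy * dt;
    b.fx = b.fy = 0.0f;             // forces are re-accumulated each frame
}
```

Storing the inverse mass instead of the mass is a small trick borrowed from real engines: it avoids a division per step, and setting it to zero makes a body immovable with no special-case branch in the math.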
![Box2D is a very popular option of physics library that you can use with your game engine.](https://habrastorage.org/getpro/habr/upload_files/4aa/39a/1d7/4aa39a1d7c2e8e71816856c1e4a67f33.gif "Box2D is a very popular option of physics library that you can use with your game engine.")Box2D is a very popular option of physics library that you can use with your game engine.

As an educator, I strongly believe every programmer should learn how to code a simple physics engine at least once in their career. Once again, you don't need to write a perfect physics simulation; focus on making sure objects can accelerate correctly and that different types of forces can be applied to your game objects. And once movement is done, you can also think about implementing some simple collision detection and collision resolution.

If you want to learn more about physics engines, there are some good books and online resources that you can use. For 2D rigid-body physics, you can look at the [Box2D source code](https://github.com/erincatto/box2d) and the [slides](https://box2d.org/publications/) from Erin Catto. But if you are looking for a comprehensive course about game physics, [Creating a 2D Physics Engine from Scratch](https://pikuma.com/courses/game-physics-engine-programming) is probably a good place to start.

![Create a 2D Physics Engine (by pikuma.com)](https://habrastorage.org/r/w780q1/getpro/habr/upload_files/ca0/df9/189/ca0df9189350c84cc391177ffcf41290.jpg "Create a 2D Physics Engine (by pikuma.com)")Create a 2D Physics Engine (by pikuma.com)

If you want to learn about 3D physics and how to implement a robust physics simulation, another great resource is the book "[Game Physics](https://www.amazon.co.uk/Game-Physics-David-H-Eberly/dp/0123749034)" by David Eberly.

![Game Physics by David Eberly](https://habrastorage.org/r/w780q1/getpro/habr/upload_files/48c/2b1/22f/48c2b122fa466f6b0a6bb38219bc1d7e.jpg "Game Physics by David Eberly")Game Physics by David Eberly

### 8. UI

When we think of modern game engines like Unity or Unreal, we think of complex user interfaces with many panels, sliders, drag-and-drop options, and other pretty UI elements that help users customize the game scene. The UI allows the developer to add and remove entities, change component values on the fly, and easily modify game variables. Just to be clear, we are talking about game engine UI for tooling, not the user interface that we show to the players of the game (like dialog screens and menus). Keep in mind that game engines do not necessarily need to have an editor embedded in them, but since game engines are usually used to increase productivity, having a friendly user interface will help you and other team members rapidly customize levels and other aspects of the game scene.

Developing a UI framework from the ground up is probably one of the most annoying tasks that a beginner programmer can attempt to add to a game engine. You'll have to create buttons, panels, dialog boxes, sliders, and radio buttons, manage colors, and you'll also need to correctly handle the events of that UI and always persist its state. Not fun! Adding UI tools to your engine will increase your application's complexity and add an incredible amount of noise to your source code. If your goal is to create UI tools for your engine, my recommendation is to use an existing third-party UI library. A quick Google search will show you that the most popular options are [Dear ImGui](https://github.com/ocornut/imgui), [Qt](https://github.com/qt), and [Nuklear](https://github.com/vurtun/nuklear).
![ImGui is a powerful UI library that is used by many game engines as an editor tool.](https://habrastorage.org/getpro/habr/upload_files/232/a45/4cd/232a454cd4623d703aeef2902b23fef8.gif "ImGui is a powerful UI library that is used by many game engines as an editor tool.")ImGui is a powerful UI library that is used by many game engines as an editor tool.

Dear ImGui is one of my favorites, as it allows us to quickly set up user interfaces for engine tooling. The ImGui project uses a design pattern called "[immediate mode UI](https://en.wikipedia.org/wiki/Immediate_mode_GUI)", and it is widely used with game engines because it communicates well with 3D applications by taking advantage of accelerated GPU rendering. In summary, if you want to add UI tools to your game engine, my suggestion is to simply use Dear ImGui.

### 9. Scripting

As our game engine grows, one popular option is to enable level customization using a simple scripting language. The idea is simple: we embed a scripting language into our native C++ application, and this simpler scripting language can be used by non-professional programmers to script entity behavior, AI logic, animation, and other important aspects of our game. Some of the popular scripting languages for games are [Lua](https://www.lua.org/), [Wren](https://wren.io/), [C#](https://en.wikipedia.org/wiki/C_Sharp_(programming_language)), [Python](https://www.python.org/), and [JavaScript](https://en.wikipedia.org/wiki/JavaScript). All these languages operate at a considerably higher level than our native C++ code. Whoever scripts game behavior using the scripting language does *not* need to worry about things like memory management or other low-level details of how the core engine works. All they need to do is script the levels, and our engine knows how to interpret the scripts and perform the hard tasks behind the scenes.
![Lua is a fast and small scripting language that can be easily integrated with C & C++ projects.](https://habrastorage.org/getpro/habr/upload_files/957/0ff/dc7/9570ffdc75d38991d345ba7acec321e0.gif "Lua is a fast and small scripting language that can be easily integrated with C & C++ projects.")Lua is a fast and small scripting language that can be easily integrated with C & C++ projects.

My favorite scripting language is Lua. Lua is small, fast, and extremely easy to integrate with C and C++ native code. Also, if I'm working with Lua and "modern" C++, I like to use a wrapper library called [Sol](https://github.com/ThePhD/sol2). The Sol library helps me hit the ground running with Lua and offers many helper functions that improve on the traditional Lua C API.

If we enable scripting, we are almost at a point where we can start talking about more advanced topics in our game engine. Scripting helps us define AI logic, customize animation frames and movement, and handle other game behavior that does **not** need to live inside our native C++ code and can easily be managed via external scripts.

### 10. Audio

Another element that you might consider adding support for in a game engine is audio. It is no surprise that, once again, if we want to poke audio values and emit sound, we need to access audio devices via the OS. And once again, since we don't *usually* want to write OS-specific code, I am going to recommend using a multi-platform library that abstracts audio hardware access. Multi-platform libraries like SDL have extensions that can help your engine handle things like music and sound effects.

But, serious talk now! I would strongly suggest tackling audio only after you have the other parts of your engine already working together. Emitting sound files can be easy to achieve, but once we start dealing with audio synchronization, linking audio with animations, events, and other game elements, things can become messy.
If you are really doing things manually, audio can be tricky because of multi-threading management. It can be done, but if your goal is to write a simple game engine, this is one part that I like to delegate to a specialized library. Some good libraries and tools for audio that you can consider integrating with your game engine are [SDL\_Mixer](https://www.libsdl.org/projects/SDL_mixer/), [SoLoud](https://sol.gfxile.net/soloud/), and [FMOD](https://www.fmod.com/).

![Tiny Combat Arena uses the FMOD library for audio effects like Doppler and compression.](https://habrastorage.org/r/w1560/getpro/habr/upload_files/584/89b/b6e/58489bb6e5269bf718a5fe4277458328.png "Tiny Combat Arena uses the FMOD library for audio effects like Doppler and compression.")Tiny Combat Arena uses the FMOD library for audio effects like Doppler and compression.

### 11. Artificial Intelligence

The final subsystem I'll include in our discussion is AI. We could achieve AI via scripting, which means we could delegate the AI logic to level designers to script. Another option would be to have a proper AI system embedded in our game engine's core native code. In games, AI is used to generate responsive, adaptive, or intelligent-looking behavior in game objects. Most AI logic is added to non-player characters (NPCs, enemies) to simulate human-like intelligence. Enemies are a popular example of AI application in games: game engines can create abstractions over path-finding algorithms or interesting human-like behavior when enemies chase objects on a map. A comprehensive book about the theory and implementation of artificial intelligence for games is [AI for Games](https://www.amazon.co.uk/AI-Games-Third-Ian-Millington/dp/1138483974) by Ian Millington.
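As a tiny taste of what such an AI system computes, here is a minimal "seek" steering sketch, where an agent picks a velocity aimed straight at its target (the names are mine; real steering behaviors, like those covered in Millington's book, layer arrival, avoidance, and more on top of this):

```cpp
#include <cassert>
#include <cmath>

struct Vec2 { float x, y; };

// "Seek" steering: return a velocity pointing from the agent toward the
// target, scaled to a fixed movement speed.
Vec2 Seek(Vec2 agent, Vec2 target, float speed) {
    float dx = target.x - agent.x;
    float dy = target.y - agent.y;
    float len = std::sqrt(dx * dx + dy * dy);
    if (len == 0.0f) return {0.0f, 0.0f};  // already at the target
    return {dx / len * speed, dy / len * speed};
}
```

An enemy "chase" behavior is then just calling this every update with the player's position as the target and feeding the result into the entity's velocity component.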
![AI for Games by Ian Millington.](https://habrastorage.org/r/w780q1/getpro/habr/upload_files/7d3/272/51f/7d327251f323fbd2d72f399c26a7b4b2.jpg "AI for Games by Ian Millington.")AI for Games by Ian Millington.

Don't Try to Do Everything At Once
----------------------------------

Alright! We have just discussed some important ideas that you can consider adding to a simple C++ game engine. But before we start gluing all these pieces together, I just want to mention something super important. One of the hardest parts of working on a game engine is that most developers will not set clear boundaries, and there is no sense of a finish line. In other words, programmers will start a game engine project, render objects, add entities, add components, and it's all downhill after that. If they don't define boundaries, it's easy to just keep adding more and more features and lose track of the big picture. If that happens, there is a big chance the game engine will never see the light of day.

Besides lacking boundaries, it is easy to get overwhelmed as we see the code grow in front of our eyes at lightning speed. A game engine project has the potential to quickly grow in complexity: in a few weeks, your C++ project can have several dependencies and require a complex build system, and the overall readability of your code drops as more features are added to the engine.

One of my first suggestions here is to always write your game engine **while** you're writing an actual game. Start and finish the first iteration of your engine with an actual game in mind. This will help you set limits and define a clear path for what needs to be completed. Try your best to stick to it and not get tempted to change the requirements along the way.

Take Your Time and Focus on the Basics
--------------------------------------

If you're creating your own game engine as a learning exercise, enjoy the small victories!
Most students get super excited at the beginning of the project, and as days go by, anxiety starts to appear. If we are creating a game engine from scratch, especially in a complex language like C++, it is easy to get overwhelmed and lose momentum. I want to encourage you to fight that feeling of "running against time." Take a deep breath and enjoy the small wins. For example, when you learn how to successfully display a PNG texture on the screen, savor that moment and make sure you understand what you did. If you managed to successfully detect the collision between two bodies, enjoy that moment and reflect on the knowledge you just gained. Focus on the fundamentals and **own** that knowledge. It doesn't matter how small or simple a concept is, **own it**!!! Everything else is ego.

[Follow me on Twitter](https://twitter.com/PikumaLondon)
https://habr.com/ru/post/663594/
From: James E Taylor (james_at_[hidden])
Date: 2005-08-18 05:22:25

Are you sure there are no guarantees for function scope statics? I think the following is thread safe:

void my_func()
{
    static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
    pthread_mutex_lock(&m);
    // do something once
    pthread_mutex_unlock(&m);
}

because the C-style PTHREAD_MUTEX_INITIALIZER doesn't involve a constructor call: the static is initialised before _any_ threads are running. Also, constructors of global statics are not guaranteed to be single-threaded for the same reason you can't safely reference one static from the constructor of another (the Static Initialisation Order Fiasco); one constructor could create a thread that goes and uses an uninitialised static, whilst the main thread tries to initialise it. Do you know whether the mutex ctor is thread-safe?

T. Scott Urban <scottu_at_[hidden]> wrote:
> On Thu, 2005-08-18 at 06:22 +1000, Christopher Hunt wrote:
> > On 18/08/2005, at 1:19 AM, boost-users-request_at_[hidden] wrote:
> > > I envisage two threads accessing a function like this concurrently:
> > >
> > > template<typename T>
> > > boost::shared_ptr<T>
> > > my_func()
> > > {
> > >     static boost::mutex m;
> > >     boost::mutex::scoped_lock l(m);
> > >
> > >     static boost::shared_ptr<T> p(new T);
> > >     return p;
> > > }
> > >
> > > and my worry is that both threads could attempt to do the static
> > > initialisation of m concurrently. Is there protection against this?
> >
> > Is it even possible to protect against this?
> > The scope of m is actually outside of your function given that it is
> > static. Thus it will be initialised outside of your function generally
> > before any other threads get to start up.
>
> The scope of m is in the function, not global, but it has static
> duration. Unlike a global, m will only be constructed if my_func() is
> ever called in your program.
>
> That should set off warning bells in your brain.
> > I don't believe there are any guarantees about the thread safety of the > initialization of function scope statics. This applies not just to the > mutex class in question but any type. A particular compiler might make > this usage thread safe - or maybe only guarantee initialization of basic > types - but I don't know if it's covered by the pthread spec - and > that's not helpful anyway to boost users, in general. And of course, > the C++ standard is as usual silent on the matter. > > You can make the mutex global - anonymous namespace or whatever. Since > global initializations are single threaded, you trade the threading > problem for the loss of on-demand resource usage. > > Another way to deal with this kind of problem is to use a modified > singleton pattern, but to make that thread safe, you need another mutex > (unless you want to risk DCL), so seems kind of pointless for this > instance. > > > > > > -- > t. scott urban <scottu_at_[hidden]> -- James E Taylor james_at_[hidden] ___________________________________ NOCC, Boost-users list run by williamkempf at hotmail.com, kalb at libertysoft.com, bjorn.karlsson at readsoft.com, gregod at cs.rpi.edu, wekempf at cox.net
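It is worth noting that since C++11 this debate is settled by the standard itself: initialization of a function-local static is guaranteed to happen exactly once, even if several threads enter the function concurrently (so-called "magic statics"). A minimal sketch of the guarantee, without threads for determinism:

```cpp
#include <cassert>

static int ctorCalls = 0;

struct Expensive {
    Expensive() { ++ctorCalls; }
};

// Since C++11, the compiler emits the synchronization needed to run this
// initialization exactly once, no matter how many threads call instance()
// at the same time. No explicit mutex is required.
Expensive& instance() {
    static Expensive e;
    return e;
}
```

Repeated calls return the same object and never re-run the constructor, which is exactly the property the thread above was trying to obtain with mutexes and double-checked locking.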
https://lists.boost.org/boost-users/2005/08/13343.php
Content

- Introduction
- Quota Considerations
- Caching Considerations
- When to use Client-Side Geocoding
- When to use Server-Side Geocoding
- Conclusion

Introduction

Geocoding is the process of converting addresses ("1600 Amphitheatre Parkway, Mountain View, CA") into geographic coordinates (37.423021, -122.083739), which you can use to place markers or position the map. The Google Maps API Family provides two approaches to geocoding:

- Client-side geocoding, which is executed in the browser, generally in response to user action. The Google Maps JavaScript API provides classes that make the requests for you. This approach is described in the Maps API for JavaScript documentation.
- HTTP server-side geocoding, which allows your server to directly query Google's servers for geocodes. Typically, this is integrated with other code that is running server-side, and then used to generate a map. Server-side geocoding is described in the Google Maps Geocoding API documentation.

Here is an example of using Python to do a server-side geocoding request:

    import urllib2
    address = "1600+Amphitheatre+Parkway,+Mountain+View,+CA"
    url = "" % address

Quota Considerations

When running client-side geocode requests at periodic intervals, such as on a mobile app, your requests may be subject to blocking if all of your users are making requests at the same time. In Google Maps API for Work, quotas are tied to client IDs, which provide much higher quotas. To learn more about Google Maps API for Work quotas and error handling, we recommend reviewing our article, Usage Limits for Google Maps API Web Services. If you're still running into quota limits using the Google Maps API for Work, file a support request.

Caching Considerations

The Google Maps API allows you to cache geocodes (i.e. store them on your server for a limited period). Caching can be useful if you have to repeatedly look up the same address. However, there are two important things to keep in mind.
- The Google Maps API Terms of Service allow you to use geocodes derived from the service on Google Maps or Google Earth only. You may not sell or distribute them in any other fashion.
- Geocoding changes often as our data gets more and more accurate. So even if you have cached data, you should refresh it periodically, to make sure you are getting the best geocodes for your locations.

When to Use Client-Side Geocoding

The basic answer is "almost always." As geocoding limits are per user session, there is no risk that your application will reach a global limit as your userbase grows. Client-side geocoding will not face a quota limit unless you perform a batch of geocoding requests within a user session. Therefore, running client-side geocoding, you generally don't have to worry about your quota.

Two basic architectures for client-side geocoding exist.

- Run the geocoding and display the results entirely in the browser. This keeps your server out of the loop, but doesn't give you any sense of what your users are doing.
- Run the geocode in the browser and then send it to the server. For instance, the user enters an address, the browser geocodes it, and the result is sent to your server, where it can be cached. This cache allows you to optimize even more. You can even query the server with the address, see if you have a recently cached geocode for it, and if you do, use that. If you don't, then return no result to the browser, and let it geocode the result and send it back to the server for caching.

When to Use Server-Side Geocoding

Server-side geocoding is best used for applications that require you to geocode addresses without input from a client. This usually happens when you get a dataset that comes separately from user input, for instance if you have a fixed, finite, and known set of addresses that need geocodes. Server-side geocoding can also be useful as a back-up for when client-side geocoding fails. A typical flow looks like this:

- The application sends the address to a server-side script, which checks whether a geocode for it has already been cached. If it has, the script returns that geocode to the original application.
- If it has not, the script sends a geocoding request to Google. Once it has a result, it caches it, and then returns the geocode to the original application.
- Sometime later, the geocode is used to display data on a Google Map. Conclusion In general, a combination of client-side geocoding and caching will serve most of your needs, and the generous geocoding limits in place in the JavaScript API will be more than most applications need. You should be careful implementing server-side only, as you're very likely to run into quota issues if you geocode too fast or too much. If after designing your site, you're still running into problems with quota, you may consider a Google Maps API for Work license. Check out the Google Earth and Maps Enterprise site for more information on the differences and how to get started.
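The urllib2 snippet earlier in the article is Python 2 and lost its URL string; as a rough Python 3 sketch of the same server-side request flow (the endpoint path and the exact JSON response shape are assumptions taken from the Geocoding API documentation, and the actual HTTP call is left out so nothing here touches the network):

```python
import json
from urllib.parse import urlencode


def build_geocode_url(address, key):
    """Build a Geocoding API request URL (endpoint per the Geocoding API docs)."""
    base = "https://maps.googleapis.com/maps/api/geocode/json"
    return base + "?" + urlencode({"address": address, "key": key})


def extract_location(response_text):
    """Pull (lat, lng) out of a Geocoding API JSON response, or None on error."""
    data = json.loads(response_text)
    if data.get("status") != "OK":
        return None
    location = data["results"][0]["geometry"]["location"]
    return location["lat"], location["lng"]


# Illustrative response, trimmed to just the fields used above.
sample = json.dumps({
    "status": "OK",
    "results": [{"geometry": {"location": {"lat": 37.423021, "lng": -122.083739}}}],
})

url = build_geocode_url("1600 Amphitheatre Parkway, Mountain View, CA", "YOUR_KEY")
print(url)
print(extract_location(sample))
```

In a real server-side script you would fetch `url` (e.g. with `urllib.request.urlopen`), pass the response body to `extract_location`, and cache the result keyed by address, as the article describes.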
https://developers.google.com/maps/articles/geocodestrat
Verifying PDF content is also part of testing. But in WebDriver (Selenium 2) we don't have any direct methods to achieve this. If you would like to extract PDF content, you can use the Apache PDFBox API. Download the jar files and add them to your Eclipse classpath. Then you are ready to extract text from a PDF file. :) Here is the sample script, which will extract text from the PDF file below.

    import java.io.BufferedInputStream;
    import java.io.IOException;
    import java.net.URL;
    import java.util.concurrent.TimeUnit;

    import org.apache.pdfbox.pdfparser.PDFParser;
    import org.apache.pdfbox.util.PDFTextStripper;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;
    import org.testng.Reporter;
    import org.testng.annotations.BeforeTest;
    import org.testng.annotations.Test;

    public class ReadPdfFile {
        WebDriver driver;

        @BeforeTest
        public void setUpDriver() {
            driver = new FirefoxDriver();
            Reporter.log("I am done");
        }

        @Test
        public void start() throws IOException {
            driver.get("");
            driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS);
            URL url = new URL(driver.getCurrentUrl());
            BufferedInputStream fileToParse = new BufferedInputStream(url.openStream());

            // parse() -- parses the stream and populates the COSDocument object.
            // COSDocument is the in-memory representation of the PDF document.
            PDFParser parser = new PDFParser(fileToParse);
            parser.parse();

            // getPDDocument() -- gets the PDDocument that was parsed. When you are done
            // with this document you must call close() on it to release resources.
            // PDFTextStripper takes a PDF document and strips out all of the text,
            // ignoring the formatting and such.
            String output = new PDFTextStripper().getText(parser.getPDDocument());
            System.out.println(output);
            parser.getPDDocument().close();
            driver.manage().timeouts().implicitlyWait(100, TimeUnit.SECONDS);
        }
    }

Here is the output of the above program:

EarthBox a Day Giveaway

Objectives: EarthBox wanted to engage their Facebook audience with an Earth Day promotion that would also increase their Facebook likes. They needed a simple solution that would allow them to create a sweepstakes application themselves.

Solution: EarthBox utilized the Votigo platform to create a like-gated sweepstakes. Utilizing a theme and uploading a custom graphic they were able to create a branded promotion.

Details
• 1 prize awarded each day for the entire month of April
• A grand prize given away on Earth Day
• Daily winner announcements on Facebook
• Promoted through email newsletter blast

Results (4 weeks)
• 6,550 entries

Facebook

Hi Vamshi, This is really good code but in my case it is showing a parse error:
***********************************************************
FAILED: start
java.lang.NoClassDefFoundError: org/apache/fontbox/afm/AFMParser
************************************************************
I might be doing something wrong, but can you please suggest any solution? Thanks, Shubham

Hi Shubham, Sorry I couldn't be of much help. I tried but I could not find what the actual issue was. (Might be the jars / PDF you are trying to read.)

it works... thanks

I ran the same code and it is working fine for me. Just want to know, have you made any changes before you ran the script?

hi, thanks a lot for this wonderful code. when i run it i get the error below, can you please try to help me with this:
Sep 27, 2012 9:29:38 AM org.apache.pdfbox.pdfparser.BaseParser parseCOSStream
WARNING: Specified stream length 98877 is wrong. Fall back to reading stream until 'endstream'.
FAILED: start
org.apache.pdfbox.exceptions.WrappedIOException: Could not push back 98877 bytes in order to reparse stream.
Try increasing push back buffer using system property org.apache.pdfbox.baseParser.pushBackSize
at org.apache.pdfbox.pdfparser.BaseParser.parseCOSStream(BaseParser.java:546)
at org.apache.pdfbox.pdfparser.PDFParser.parseObject(PDFParser.java:566)
at org.apache.pdfbox.pdfparser.PDFParser.parse(PDFParser.java:187)
at com.test.PDF.PDF_Reader.start(PDF_Reader.java:38)

hi, Your code is working fine but your download link is not working, so please use the below link and the program will run (pdfbox-app-1.8.2.jar).

Hi Vamsi Kurra Ji, Wonderful post. Thanks for sharing this with us. Please keep posting articles regularly and share your knowledge and experience with us w.r.t. Selenium WebDriver and other topics. Thank you.

thanks :)

Hi Vamshi, The application that I am automating has a set of pages where the user provides certain information. All this information is shown on the next page as a PDF embedded within a container / frame. Each text in the PDF is captured as a separate element using Firebug. I couldn't identify a container itself. Tried css, FirePath etc. but with no luck. More interesting: after accepting (clicking a checkbox and clicking continue) the PDF on this page, on the next page the same PDF opens again with an option to esign, where the user clicks to esign the document (within the PDF) and the user name is displayed in the signature area of the PDF. We have a test esign verification created for us. Any suggestion? Thanks, Kannan V

Kannan, It sounds like it is not exactly a PDF. Seems it is an iframe (like "read sample" at the). If it is a PDF, are you able to see the exact location of the PDF in the HTML source? If yes, your problem is solved.

Thanks for your response. I spoke to the dev. team. It is not an iframe. They are just creating an object, taking the data submitted on the earlier pages, and showing it in that format. This is saved as a PDF when the user clicks save in the container. I can share a screenshot of that page if you can share your email.

you can reach me at vamshikurra@gmail.com

i faced the same issue. downloaded fontbox from this website and now it is working fine. thanks!!!!

Hello Vamsi, I just checked your code on an application that I am automating and guess what? It works perfectly. I don't have much experience working with TestNG but it looks like a very useful tool. Thanks for sharing, and regards from Mexico!!!

Thank you Vamshi..
http://www.mythoughts.co.in/2012/05/webdriverselenium2-extract-text-from.html
in reply to wantarray alternative

Isn't first { $_ } @array the same as $array[0]? (Except where $array[0] contains a "false" value.)

    sub lowercase {
        return ( wantarray ) ? map { lc } @_ : lc $_[0];
    }

Is your intended use of 'first' really just looking for elements that contain some non-false value (because that's what it does)? If so:

    return ( wantarray ) ? map { lc } @_ : lc( first { $_ } @_ );

i.e., no need to 'lc' the entire @_ if you are only returning one. Find the one you want, 'lc' it, and return that.

Dave

I'm really struggling to articulate my question. The real question is: If there's only one value in the return list, how can I return that value in scalar context, not the count of 1?

    sub lowercase {
        my @out = map { lc } @_ ;
        return @out == 1 && ! wantarray ? $out[0] : @out ;
    }

This code is optimized based on all the answers so far, and doesn't use first either. In fact, I've probably used this construction thousands of times. My question is based on 3 assumptions: I assumed that List::Util would include this functionality. That is, that first or some variant could determine and respond to the wantarray context.

just return the list! In scalar context the last element is taken, so if you are only returning "one value in the return list" that's the same. It's the scalar of an array(!) which counts. ok, not that easy; avoid scalar context and explicitly put the scalar into a list on the LHS.

    DB<129> @out = A..C
    => ("A", "B", "C")
    DB<130> sub tst { return map {lc} @out }
    DB<131> @list = tst()
    => ("a", "b", "c")
    DB<132> ($scalar) = tst()
    => "a"

Though we asked several times, you haven't clarified which result you expect in case of scalar context and a longer list!

Cheers Rolf
( addicted to the Perl Programming Language)

    sub lowercase {
        return map { lc } @_ ;
    }
    $jim = lowercase( 'Jim' ) ;  ## $jim == 1

    return @out == 1 && ! wantarray ? $out[0] : @out;

So if @out has a single element, return '1'. If it contains zero, 2, or more elements, then if "wantarray", return @out, otherwise return the first element. I can't imagine using that once, let alone thousands of times. ;)

Update: Whoops, "&&" is higher on the precedence table than "?:" in any language I know of. My mistake.

    return ( @out == 1 && ! wantarray ) ? $out[0] : @out;
http://www.perlmonks.org/index.pl?node_id=1043537
I have an up-to-date Gentoo distribution on amd64. I first grabbed f2c (by hand, because the script does not know to install it), then ran the emerge on R. The version of R is not the most recent. Worse, it crashes randomly---not on startup, but later. These are not rare crashes, but pretty systematic. So, this version of R should be hard-masked, at least for the combination of amd64 and gcc 3.3.3.

Reproducible: Always

Steps to Reproduce:
1.
2.
3.

I have dropped an email to the R people about this issue. Now, I know that at least one person had a stable R under another amd64 (Fedora) distribution. Let's hope I find out what's going on---in which case I will report back.

it appears (email from one of the R wizards) the problem is that f2c is not really compatible with R anymore. this would not be so bad were it not for the case that there does not appear to be a g77 build for gentoo:

$ emerge search g77

so, right now, R is not really buildable.

well, you both managed to give me almost no information to work on beside: It's something with R.
0) What version of dev-lang/R do you mean? PLEASE, the next time you file a bug, please tell us which version you are talking about!
1) You don't need f2c, you can use g77 to compile the fortran things. Did you try to use g77? Do those errors still show up when using a g77-compiled R?
2) Why haven't you provided the output of "emerge info"?

hi danny: the both of us was both of me. fortunately, it would not have been useful to tell you the R version or to provide the emerge info (or other information), because I now know that the fault lies in the underlying compiler I used---f2c. with g77, most likely the R errors will disappear. (until we have a g77 compiler, we should hardmask R.
we should certainly hardmask it against f2c builds---it is definitively incompatible with it. it will build, and appear ok, but R itself will randomly crash thereafter.) now, here is where I remain confused: where do you get a g77 compiler from? gentoo does not seem to have it. $ emerge search g77 does not give me anything. I would love to find out---I am tearing my hairs out trying to learn how to get a g77 to run under gentoo. gcc compiler seems to be a semi-black art. help would be highly appreciated. /iaw g77 is no seperate package, it is part of gcc USE="f77" emerge gcc will give you what you want. No need to hardmask dev-lang/R. No need to teat your hair, either ;-) You still haven't provided the output of "emerge info". Please do that. I'd further like you to tell me if the errors only occur w/ a specific version of dev-lang/R, or with all of them. thank you, danny for the info. first, an aside: in my attempts to update, I had started with $ emerge /usr/portage/sys-devel/gcc/gcc-3.4.1.ebuild which worked nicely, but got me up to the later and thus less conventional gcc version. (I do not think it makes any difference. I did try to unmerge 3.4.1 to get back to 3.3.3, but this cannot be done apparently.) then I got your note, and so I tried the USE flags on f77 and g77 (see also my emerge info below). alas, neither g77 nor f77 is in /usr/portage/profiles/use.desc . instead, a grep thereon tells me that $ grep 77 /usr/portage/profiles/use.desc /usr/portage/profiles/use.desc:ifc - use ifc instead of g77 to build of course, I am not sure if an intel compiler is a great idea for an amd64 architecture---and I wonder if R would like ifc, either. so, I am still perplexed about how to build R... :-( . any advice/ideas would be appreciated. 
regards, /iaw Portage 2.0.50-r9 (default-amd64-2004.2, gcc-3.4.1, glibc-2.3.4.20040605-r0, 2.6.7-gentoo-r14) ================================================================= System uname: 2.6.7-gentoo-r14 x86_64 4 Gentoo Base System version 1.4.16 distcc 2.12.1 i686-pc-linux-gnu (protocols 1 and 2) (default port 3632) [disabled] Autoconf: sys-devel/autoconf-2.59-r3 Automake: sys-devel/automake-1.8.3 ACCEPT_KEYWORDS="amd64" AUTOCLEAN="yes" CFLAGS="-pipe -O2" CHOST="x86_64-pc-linux-gnu" COMPILER="gcc3"="-pipe -O2" bitmap-fonts bonobo cdr crypt cups directfb encode esd f77 foomaticdb gdbm ggi gif gnome gphoto2 gpm gtk gtk2 gtkhtml guile imlib java jpeg kde ldap libg++ libwww mikmod motif mozilla mpeg mysql nas ncurses nls nogcj oggvorbis opengl oss pam pdflib perl png postgres python qt quicktime readline ruby scanner sdl slang snmp spell ssl tcltk tcpd tetex truetype ungif usb xml2 xmms xv zlib" donnie@quasar donnie $ grep 77 /usr/portage/profiles/use.* /usr/portage/profiles/use.desc:ifc - use ifc instead of g77 to build /usr/portage/profiles/use.local.desc:dev-lang/R:f77 - Use f77 to compile FORTRAN sources. /usr/portage/profiles/use.local.desc:sys-devel/gcc:f77 - Build support for the f77 language thank you, donnie. fortran was indeed already installed in the. On my earlier confusion, I have sent a "bug report" for a portage/emerge suggestion that such features (e.g., fortran compiler presence via a USE flag) should be textually noted in an "$ emerge search" invokation towards the end as "see also USE flags: ... ". Because R is now at version 1.9.1, I grabbed it from cran, linked gcc symbolically to a name of g77 (necessary), and then compiled R. It worked absolutely flawlessly, and the R bugs I had experienced are no longer there. (I am still looking for end-user documentation how to prepare a package for submission. If it is not too difficult [I have never used cvs, so this may be a show stopper], I would be glad to contribute some.) 
R might also be a good candidate for making a binary build available. Is it possible to require R *NOT* to use f2c in the ebuild file? It is definitively not compatible with f2c. 100%. It will compile, but it will also bomb during use. (With g77, R is rock-solid.) thanks to both of you for your help. i would not have succeeded here without you. very highly appreciated. regards, /iaw

--- I hope this bugzilla is google searchable, or else I am probably wasting your time with this. /etc/make.profile is pointing to ../usr/portage/profiles/default-amd64-2004.2. I presume switching profiles means I should do a $ ln -sf ../usr/portage/profiles/gcc34-amd64-2004.1 ./make.profile

--- the command-line interface g77 was not automatically installed on my system, although the fortran compiler was installed when I re-emerged the compiler with the f77 flag. the link from g77 to gcc did the job as far as I needed it. I believe g77 is merely a simplified command-line invoker for gcc.

--- i wish i could help more. so far, I am a consumer, not a producer.

--- not easy. the main reason is that I have killed my R build already. however, I am positive on the subject. I also know why it may work for you, and not for me---apparently, f2c runs into problems on the amd64 platform because sizeof(int) != sizeof(long). it works just fine on most other platforms. if you look at the 1.9.1 installation manual at, it says "If you use f2c you may need to ensure that the FORTRAN type integer is translated to the C type int. Normally `f2c.h' contains `typedef long int integer;' which will work on a 32-bit platform but not on a 64-bit platform." so the error is really in f2c---f2c should be hard-masked for 64-bit platforms, until this is fixed by the f2c project. regards, /iaw
I doubt that, because you simply can't use gcc to compile Fortran. You need g77 for this. Please remerge gcc by following command: USE="g77" emerge gcc and tell me the location of g77 on your system via "which g77". It should look like this: phi / # which g77 /usr/x86_64-pc-linux-gnu/gcc-bin/3.4/g77 3) You still haven't specified how to reproduce the "not rare, but pretty systematic" crashes. I can't help you if can't reproduce the crash. 4) f2c on 64bit works flawlessly for me. I checked the Suse srpm for f2c on alpha and x86_64. They both provide the very f2c.h as gentoo does, but they have a modified libf2c. 5) the "f2c"-project hasn't worked on f2c since Sat Oct 25 07:57:53 MDT 2003, the date of last change on their homepage. :-/ 6) *Please specify how to crash R*. The best way it to attach a script which does fail on your system. I can't work on it if i can't reproduce !!!!!! 7-\infty) see 3) and 6) thank you. on 2, I do not fully understand the relationship between g77 and f77 use flags. I had put both into my make.conf---just in case. No g77 emerged. When I do '$ USE="g77" ; emerge gcc', then g77 Well, I spoke too soon---it is not exactly f2c that is broken---what is broken are the blas libraries that are commonly used, and which are definitely used inside R. do you have an f2c-built R on your system? if so, try this: $ R > x<- rnorm(1000); > y<- rnorm(1000); > lm( y ~ x ); if this does not SEGV fault, then repeat this again. if this does not die, either, then I will first uninstall my own R, then try to build R without f77/g77 support (can I do so with '$ USE="-f77 -g77" emerge R' ?) and then hopefully (not) eat my words, but give you an account on my machine to see for yourself---or send you the gdb crash point. sorry for not having been more precise. gentoo is a learning process for me. There is no 'g77' USE flag. I'm not quite sure where you got this idea, but the correct USE flag is f77. 
My earlier grep of use.local.desc and use.desc showed that the only USE flag containing '77' (other than ifc, which is irrelevant) is f77. I'm finally able to reproduce this. g77 compiler needs to be renamed, so that configure doesn't find it (BUG in dev-lang/R's USE flag handling) BACKTRACE: > x<- rnorm(1000); > y<- rnorm(1000); > lm( y ~ x ); Program received signal SIGSEGV, Segmentation fault. 0x0000000000531df1 in dnrm2_ (n=0x206d9e0, x=0x216c070, incx=0x1) at blas.c:1584 1584 blas.c: No such file or directory. in blas.c (gdb) bt #0 0x0000000000531df1 in dnrm2_ (n=0x206d9e0, x=0x216c070, incx=0x1) at blas.c:1584 #1 0x000000000051f11d in dqrdc2_ (x=0x216a130, ldx=0x216c070, n=0x206d9e0, p=0x206d9b0, tol=0x205b318, k=0x206d950, qraux=0x207e6f0, jpvt=0x205b2d8, work=0x1e6a5c0) at dqrdc2.c:133 #2 0x000000000051f89a in dqrls_ (x=0x216a130, n=0x206d9e0, p=0x206d9b0, y=0x216dff0, ny=0x206d980, tol=0x205b318, b=0x207e728, rsd=0x216ff70, qty=0x2171ef0, k=0x206d950, jpvt=0x1, qraux=0x40000000000003e8, work=0x1e6a5d8) at dqrls.c:137 #3 0x0000000000465ea9 in do_dotCode (call=0xfcc5d8, op=0x7fbfffd4d0, args=0x70ba18, env=0x40000000000003e8) at dotcode.c:1340 #4 0x000000000047c7b5 in Rf_eval (e=0xfcc5d8, rho=0x2042eb0) at eval.c:398 #5 0x000000000047dd67 in do_set (call=0xfcc4c0, op=0x729f80, args=0xfcc4f8, rho=0x2042eb0) at eval.c:1271 #6 0x000000000047c6ff in Rf_eval (e=0xfcc4c0, rho=0x2042eb0) at eval.c:375 #7 0x000000000047de2c in do_begin (call=0xfd2700, op=0x729d88, args=0xfcc450, rho=0x2042eb0) at eval.c:1046 #8 0x000000000047c6ff in Rf_eval (e=0xfd2700, rho=0x2042eb0) at eval.c:375 #9 0x000000000047eee4 in Rf_applyClosure (call=0xfd63b0, op=0xfd2fa0, arglist=0x2042258, rho=0x1fe2378, suppliedenv=0x70ba18) at eval.c:566 #10 0x000000000047c486 in Rf_eval (e=0xfd63b0, rho=0x1fe2378) at eval.c:410 #11 0x000000000047c6ff in Rf_eval (e=0xfd60a0, rho=0x1fe2378) at eval.c:375 #12 0x000000000047dd67 in do_set (call=0xfd6e10, op=0x729f80, args=0xfd6e48, rho=0x1fe2378) at 
eval.c:1271 #13 0x000000000047c6ff in Rf_eval (e=0xfd6e10, rho=0x1fe2378) at eval.c:375 #14 0x000000000047de2c in do_begin (call=0xfd69e8, op=0x729d88, args=0xfd6dd8, rho=0x1fe2378) at eval.c:1046 #15 0x000000000047c6ff in Rf_eval (e=0xfd69e8, rho=0x1fe2378) at eval.c:375 #16 0x000000000047c6ff in Rf_eval (e=0xfd80b8, rho=0x1fe2378) at eval.c:375 #17 0x000000000047de2c in do_begin (call=0xfdfd98, op=0x729d88, args=0xfd7f68, rho=0x1fe2378) at eval.c:1046 #18 0x000000000047c6ff in Rf_eval (e=0xfdfd98, rho=0x1fe2378) at eval.c:375 #19 0x000000000047eee4 in Rf_applyClosure (call=0x1fe1078, op=0xfe0360, arglist=0x1fe0f98, rho=0x749698, suppliedenv=0x70ba18) at eval.c:566 #20 0x000000000047c486 in Rf_eval (e=0x1fe1078, rho=0x749698) at eval.c:410 #21 0x0000000000493e6a in Rf_ReplIteration (rho=0x749698, savestack=0, browselevel=0, state=0x7fbfffed40) at main.c:250 #22 0x0000000000493f88 in R_ReplConsole (rho=0x749698, savestack=0, browselevel=0) at main.c:298 #23 0x00000000004941e2 in run_Rmainloop () at main.c:656 #24 0x000000000050031e in main (ac=34003424, av=0x216c070) at system.c:99 (gdb) q the interesing parts of the (generated) c-code: dqrdc2.c: (from src/appl/dqrdc2.f) #include "f2c.h" /* Table of constant values */ static integer c__1 = 1; [...] qraux[j] = dnrm2_(n, &x[j * x_dim1 + 1], &c__1) <-- Line 133, #1 from bt. You see, though the code says, dnrm2_'s second paramter shall be the address of static integer c__1, the function dnrm2_ gets the _value_ of c__1 instead. That seems to me rather like a compiler bug. Interestingly, if you typedef integer to be "int" instead "long int", it simply works. I'll CC morfic and lv (both gcc-3.4 ppl) to look at this too. lv, morfic: Have you ever seen anything like this ? lv gives up on this one. CC'ing toolchain and gcc-porting: Might this be a gcc bug ? 
I found some additional info, which might be interesting (I'm in blas.c): (gdb) next 1570 if (*n < 1 || *incx < 1) { (gdb) p incx $2 = (integer *) 0x6bad80 (gdb) next 1572 } else if (*n == 1) { (gdb) p incx $3 = (integer *) 0x1 On another run, incx changed to 0x1 on line 1570 On yet another run, it happened a little later on. Inserting const in the function body might fix the issue maybe? Right, changing the third function argument from "integer *incx" to "const integer *incx" fixed it. However, this shouldn't happen. I asked lv already for a name of someone from the gcc developer team to CC on this bug. Benjamin and I think that this _is_ a gcc bug. I've disassembled the function and did another run gdb run: p incx $1 = (integer *) 0x6bad80 (gdb) next 1570 if (*n < 1 || *incx < 1) { (gdb) p incx $2 = (integer *) 0x6bad80 (gdb) next 1567 --x; (gdb) p incx $3 = (integer *) 0x6bad80 (gdb) p &x Address requested for identifier "x" which is in register $rsi (gdb) p $rsi $4 = 35074264 (gdb) next 1570 if (*n < 1 || *incx < 1) { (gdb) p incx $5 = (integer *) 0x6bad80 (gdb) p x $6 = (doublereal *) 0x21730d0 (gdb) next 1572 } else if (*n == 1) { (gdb) p incx $7 = (integer *) 0x1 The assembler dump is at: What I think should be noted is that the app makes heavy use of the xmm0-2 registers, which are as far as I know mmx registers - right? Now I've found something interesting - I've looked with kdbg what instructions the various statements resolve to: if (*n < 1 || *incx < 1) { <---- mov (%rdi, %rax) norm = 0.; <----- xorpd %xmm0, %xmm0 } else if (*n == 1) { <------ cmp $0x1, %rax je 0x531e5d <dnrm2_+301 norm = abs(x[1]); <------ whatever The value of incx gets set to 0x1 when doing mov(%rdi, %rax). I just found it odd that there is no conditional jump - that's what if's are for - right? Ok, this seems definatelly to be a gcc-bug, though I haven't been able to patch the program so it works, but I have progressed. 
The problematic (assembler) code is this: 0x0000000000531d3e <dnrm2_+14>: jle 0x531e4c <dnrm2_+284> 0x0000000000531d44 <dnrm2_+20>: mov (%rdx),%rdx 0x0000000000531d47 <dnrm2_+23>: test %rdx,%rdx 0x0000000000531d4a <dnrm2_+26>: jle 0x531e4c <dnrm2_+284> It's important to note, that incx is stored in rdx. So what this code does, is move the value of the variable stored at rdx to rdx, thus overwriting the pointer. This is obvious, since *incx is 1 and after running the code, incx becomes 0x1. I've replaced the code with the following: mov (%rdx),%r8 test %r8,%r8 But it would still segfault, but with another fault. And I think the program expects incx to be at rdx, which it isn't anymore. My assembly skills are limited ;) So gcc-people, what's going on here? ;) Config and me finally found the reason. When R calls fortran functions from its c sources, it uses strictly ints for intgeres, not long ints. But f2c is perfectly right to use long ints on 64bit arches. The R ebuilds now check on both 64-bit and f2c and die when both are set. R-2.0 will check for this in its configure script.
http://bugs.gentoo.org/61042
import "golang.org/x/text/unicode/norm"

Package norm contains types and functions for normalizing Unicode strings.

composition.go forminfo.go input.go iter.go normalize.go readwriter.go tables12.0.0.go transform.go trie.go

const (
    // Version is the Unicode edition from which the tables are derived.
    Version = "12.0.0"

    MaxTransformChunkSize = 35 + maxNonStarters*4
)

GraphemeJoiner is inserted after maxNonStarters non-starter runes. MaxSegmentSize is the maximum size of a byte buffer needed to consider any sequence of starter and non-starter runes for the purpose of normalization.

func (f Form) Properties(s []byte) Properties
Properties returns properties for the first rune in s.

func (f Form) PropertiesString(s string) Properties
PropertiesString returns properties for the first rune in s.

An Iter iterates over a string or byte slice, while normalizing it to a given Form.

Code:

    package main

    import (
        "bytes"
        "fmt"
        "io"
        "unicode/utf8"

        "golang.org/x/text/unicode/norm"
    )

    // EqualSimple uses a norm.Iter to compare two non-normalized
    // strings for equivalence.
    func EqualSimple(a, b string) bool {
        var ia, ib norm.Iter
        ia.InitString(norm.NFKD, a)
        ib.InitString(norm.NFKD, b)
        for !ia.Done() && !ib.Done() {
            if !bytes.Equal(ia.Next(), ib.Next()) {
                return false
            }
        }
        return ia.Done() && ib.Done()
    }

    // FindPrefix finds the longest common prefix of ASCII characters
    // of a and b.
    func FindPrefix(a, b string) int {
        i := 0
        for ; i < len(a) && i < len(b) && a[i] < utf8.RuneSelf && a[i] == b[i]; i++ {
        }
        return i
    }

    // EqualOpt is like EqualSimple, but optimizes the special
    // case for ASCII characters.
    func EqualOpt(a, b string) bool {
        n := FindPrefix(a, b)
        a, b = a[n:], b[n:]
        var ia, ib norm.Iter
        ia.InitString(norm.NFKD, a)
        ib.InitString(norm.NFKD, b)
        for !ia.Done() && !ib.Done() {
            if !bytes.Equal(ia.Next(), ib.Next()) {
                return false
            }
            if n := int64(FindPrefix(a[ia.Pos():], b[ib.Pos():])); n != 0 {
                ia.Seek(n, io.SeekCurrent)
                ib.Seek(n, io.SeekCurrent)
            }
        }
        return ia.Done() && ib.Done()
    }

    var compareTests = []struct{ a, b string }{
        {"aaa", "aaa"},
        {"aaa", "aab"},
        {"a\u0300a", "\u00E0a"},
        {"a\u0300\u0320b", "a\u0320\u0300b"},
        {"\u1E0A\u0323", "\x44\u0323\u0307"},
        // A character that decomposes into multiple segments
        // spans several iterations.
        {"\u3304", "\u30A4\u30CB\u30F3\u30AF\u3099"},
    }

    func main() {
        for i, t := range compareTests {
            r0 := EqualSimple(t.a, t.b)
            r1 := EqualOpt(t.a, t.b)
            fmt.Printf("%d: %v %v\n", i, r0, r1)
        }
    }

Properties provides access to normalization properties of a rune.

func (p Properties) BoundaryAfter() bool
BoundaryAfter returns true if runes cannot combine with or otherwise interact with this or previous runes.

func (p Properties) BoundaryBefore() bool
BoundaryBefore returns true if this rune starts a new segment and cannot combine with any rune on the left.

func (p Properties) CCC() uint8
CCC returns the canonical combining class of the underlying rune.

func (p Properties) Decomposition() []byte
Decomposition returns the decomposition for the underlying rune or nil if there is none.

func (p Properties) LeadCCC() uint8
LeadCCC returns the CCC of the first rune in the decomposition. If there is no decomposition, LeadCCC equals CCC.

func (p Properties) Size() int
Size returns the length of the UTF-8 encoding of the rune.

func (p Properties) TrailCCC() uint8
TrailCCC returns the CCC of the last rune in the decomposition. If there is no decomposition, TrailCCC equals CCC.

Package norm imports 6 packages and is imported by 950 packages. Updated 2020-06-17.
https://godoc.org/golang.org/x/text/unicode/norm
The search method is used to find whether an object is present in a stack. If the object occurs in the stack, the method returns the distance from the top of the stack of the occurrence nearest the top; the topmost item on the stack is considered to be at distance 1. If the object is not present, search returns -1. The equals method is used to compare items in the stack.

Example Program

    import java.util.Stack;

    public class Stacksearch {
        public static void main(String args[]) {
            Stack<String> n = new Stack<String>();
            n.push("a");
            n.push("b");
            n.push("c");
            System.out.println("Searching 'b' in stack: " + n.search("b"));
        }
    }

Output

Searching 'b' in stack: 2
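To make the contract concrete, including the -1 returned when the object is absent, here is a small self-contained sketch (the class and helper names are mine, not from the tutorial):

```java
import java.util.Stack;

public class StackSearchDemo {
    // Wraps Stack.search: 1-based distance from the top, or -1 if absent.
    static int distanceFromTop(Stack<String> stack, String item) {
        return stack.search(item);
    }

    public static void main(String[] args) {
        Stack<String> s = new Stack<String>();
        s.push("a");
        s.push("b");
        s.push("c"); // "c" is now the topmost item

        System.out.println(distanceFromTop(s, "c")); // 1: top of the stack
        System.out.println(distanceFromTop(s, "a")); // 3: deepest element
        System.out.println(distanceFromTop(s, "x")); // -1: not present
    }
}
```

Because search compares with equals, this works for any element type whose equals is well-defined, and it reports only the occurrence nearest the top when duplicates exist.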
http://candidjava.com/search-whether-object-is-present-in-stack-using-int-searchobject-o/
In this age of constant communication you would think business and IT would have solved the problem of talking past one another when it comes to business-critical issues like business continuity (BC), security and risk. Even fairly basic department-level projects, like integrating a series of disparate dashboards to give a marketing director a single view into their various campaigns, are often an exercise in futility -- at least from the marketing director's point of view. Line-level directors and managers often don't understand why IT doesn't "get it" when they talk about customers' needs and how critical they are, and IT doesn't understand why the CFO forces them to work with outdated infrastructure that is one Band-Aid away from crashing the very services they hold so dear.

According to a recent IBM study, Reputational Risk and the C-suite, most CIOs "get it", even more so than their counterparts in some cases: "Over 90 percent of C-suite respondents say their IT budget will grow over the next 12 months due to reputational concerns, and 16 percent say the increase will be more than 20 percent."

So where's the disconnect? Why does this issue of "IT-speak" and "business-speak" never seem to get resolved? Maybe an example is in order. This is IT's translation of the business asking for more agile customer-facing capabilities: "If these websites, applications, 'capabilities', etc. are so important to the business' reputation and the brand will suffer incalculable consequences if they are down for even a minute, then surely the CFO will understand why we need to invest (or, as they call it, 'spend') $2.2M on mirroring our hot-site servers and upgrading their TOR switches to 10G to support our WAN optimization and VPN infrastructure."
And, if they would just approve the upgrade of our VLAN protocol to VXLAN that's been on the CIO's desk since last ... whenever, then, of course, end users and customers won't experience any downtime. Everyone knows that WAN optimization is the first step to QoS streaming across the VPN and secure HTTPS servers, don't they?"

If this sounds familiar, you are not alone. So what happened? A big part of the problem is simple semantics. IT pros have spent years learning technology, not business. While they certainly understand business from a consumer's point of view, how their work in IT relates back to the business is often hard to grasp. So when the business says customers need 24/7 access to their accounts, IT hears "five-nines of uptime", or, if the business needs faster access to a suite of applications at the home office, IT hears "We need WAN optimization and a network-wide upgrade to 10G."

But fault also lies with the business for not extending an olive branch to IT. Here's what you might hear from an executive who is trying to explain the company's over-arching strategy going forward: "To date, operational and strategic accomplishments have helped us achieve the financial goals we shared with investors during our Investor's Day event. These include the long-term achievement of a compound annual non-GAAP diluted EPS growth rate and operational total shareholder return (TSR) of at least 10 percent and 11 percent, respectively. As we end our fiscal year, I'm proud to report that we have tracked well ahead of those trajectories in 2011 and 2012 and remain committed to those long-term goals."

Ultimately, the two camps just don't speak each other's language. It takes years and years to understand how networks and infrastructures work -- multiple certifications are often required to work on just one vendor's products, let alone the hodge-podge of gear that makes up most technology infrastructures today.
Equally, it takes years and years to understand what a business does and why certain parts of it are more critical than others. "Not all business-people understand the constraints that technologists face," says noted blogger Michael Krigsman, an enterprise analyst and strategy advisor and founder of IT consultancy Asuret. "Likewise, many IT people don't have a clear sense of the business needs and expectations under which their internal customers must manage."

But there is hope. "When both sides are clear and patient during conversations, things tend to flow more easily for everyone," says Krigsman. "Clarity is the key."

So, to the IT pro, the message is pretty simple: you need to spend some time learning a new language. Why you? Because you can understand more about how your business operates. Why? Because dollars and cents are far more intuitive to learn about than how VXLAN and 10G TOR switches impact your company's eCommerce shopping cart.

"Smart IT people learn how to communicate because, otherwise, the business will call their entire relevancy into question," continues Krigsman. "Some IT folks really understand that non-technical communication is critical for job success; the others should plan for early retirement because that is a likely outcome."

For line-of-business managers the lesson is clear as well: you have to reach out to IT and ask for help and support. If you can do this and get IT on your side, your projects will go much better and you won't have to spend precious opex building out your own shadow IT department. Business continuity and data security are good examples where both sides understand the inherent risk of, say, a tornado wiping out a company's main data center. So, IT, talk about how long it will take to get operations going again if you do X, or if you don't do Y.
Leave the disaster radius of the DR site, snapshot, mirroring, and synchronous replication discussion for your internal debates. Just tell the business what they want to hear: "If there's a tornado and our main data center is wiped out, we're covered. Let's eat!"

One of the tried and true ways to move the needle is to embed your IT folks in the business from time to time so they can see first-hand how the work they do is used in the real world; nothing like waiting minutes for a field to change on a customer service rep's screen to turn "five-nines" into a business discussion -- once the frustrated caller hangs up and takes their business elsewhere, of course. Leading companies know this; that is why, when they make new IT hires, many have them spend their first 90 days learning about the operations they will be supporting, not IT.

"If you cannot put the issues into simple language, then folks on the business side will not understand your message," concludes Krigsman. "When they don't understand, it only creates obstacles and will cause everyone more pain down the road. Keep your communications simple and clear."
http://it.toolbox.com/blogs/itmanagement/how-it-and-business-can-communicate-more-effectively-57322
Strange behaviour when a script is finished

For every script that is finished I got this error:

```
Exception AttributeError: AttributeError("'NoneType' object has no attribute
'raise_exception_on_not_ok_status'",) in <bound method Session.__del__ of
<tensorflow.python.client.session.Session object at 0x7f22fdb730d0>> ignored
```

The proposed way to reproduce the problem (issue #3388) only reproduces the issue with tensorflow < 0.9:

```python
import tensorflow as tf

a = tf.constant(123)
b = tf.constant(456)
c = a * b

session = tf.Session()
# A slightly different error is produced if this is removed.
session.run(tf.initialize_all_variables())
result = session.run(c)
print(result)
session.close()  # The error is produced regardless of this.

#quit()  # This produces the error.

import sys
sys.exit()  # This also produces the error.
```

The issue is still open in the tensorflow bug tracker. Let's track it.
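The error pattern itself does not require TensorFlow to reproduce: it is a general consequence of a `__del__` finalizer touching a module-level global that has already been cleared during interpreter teardown. The following minimal sketch simulates that mechanism; all class and attribute names are invented for illustration and are not TensorFlow APIs:

```python
class FakeStatusModule:
    """Stands in for an internal helper module that a finalizer relies on."""
    def raise_exception_on_not_ok_status(self):
        pass

_status = FakeStatusModule()

class Session:
    """Mimics a session object whose __del__ touches a module-level helper."""
    def __del__(self):
        # At interpreter shutdown, module globals may be set to None before
        # remaining objects are finalized, so this lookup can fail.
        try:
            _status.raise_exception_on_not_ok_status
        except AttributeError as err:
            print("Exception AttributeError: %s ignored" % err)

s = Session()
_status = None   # simulate module teardown clearing the global
del s            # the finalizer now sees the cleared global and "ignores" it
```

Running this prints an "ignored" AttributeError about `'NoneType'`, analogous to the message above; which is why closing the session explicitly before the interpreter exits tends to avoid the noise.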
https://gitlab.idiap.ch/bob/bob.learn.tensorflow/-/issues/15
Help:Files & Images

The basic syntax is the same for both images and other files - the only difference being that image files (in supported formats) are rendered as their image rather than as a file link. If you link to a file/image that does not yet exist, the link is rendered as a red link - clicking on the link takes you to a page where you can upload the file. Alternatively you can upload the file from the appropriate option in the wiki sidebar Toolbox section.

Using the wiki editor

Files and images are most easily added using the Wiki Editor Embedded File button highlighted on the left below. Simply highlight the target file then click the tool button (alternatively you can just press the tool to create boilerplate wiki text then copy-paste in the names of your files). Galleries are added using the Gallery tool on the right; again you can select the list of files to be in the gallery then select the tool button. The tools create, by default, syntax for the most basic image/gallery. It is possible to customise sizes, position, how text wraps, frames, captions etc., as discussed in the following sections.

Files

The syntax for linking to a file is shown below. The first line shows the default syntax, the second shows how to define custom link text (note the preceding colon and pipe separator):

[[File:NotifierExample.zip]]
[[:File:NotifierExample.zip|Text to be displayed for link]]

If the file doesn't exist you can open the link to upload it (or select the sidebar option Toolbox | Upload file) and provide a file description. If the file does exist then selecting the link takes you to the file description page, from which users can download it. If you need to link direct to the file rather than its description you can use the "Media" namespace:

[[Media:bulb_small.jpg]]

Images

The recommended image style is to display the image on its own line (left aligned) and have a border and caption.
This can be done using a "frame" or "thumb" and using "none" to force left alignment. The difference between "frame" and "thumb" is that "frame" displays the image full size, while "thumb" allows you to specify any size smaller than the full image size.

[[File:bulb_small.jpg|none|thumb|100px|A light bulb]]

Remember that the file/image you upload must belong to you, or you must have the right to upload it. If you do use a file that belongs to someone else then always attribute it by linking to where it came from. The full image syntax is provided here: Help:Images.

Minimum syntax (no frame)

The most basic syntax for adding an image is shown below. This is the same syntax as for adding any other file, and displays the image at its full size, without frame or caption, and in-line with text.

This file is inline [[File:bulb_small.jpg]] with the text.

This file is inline with the text.

You can also specify the width of the image to be any size (just specify the size and units), whether the image is on its own line or left-right floated using none, right, left, and use link=URL or page name to link the image:

[[File:bulb_small.jpg|50px|none|link=Azure - Mobile Services on Windows Phone]]

Frames and thumbnails

As described in the first section, you can use "frame" or "thumb" to put a border around the image, and then you can also provide a caption. By default the image is displayed in-line with text, in its full size, without a frame or caption; all of these parameters can be specified using the syntax described in Help:Images. It is common (but not mandatory) on this wiki to use a frame and caption, and to left align images without text wrapping. Using the previous image, this would be done with the wiki text:

[[File:bulb_small.jpg|none|frame|A light bulb]]

The "frame" displays the image at full size.
If you use "thumb" you get a default minimum size, or you can specify any size less than the full size:

[[File:bulb_small.jpg|none|thumb|A light bulb thumbnail at default size]]
[[File:bulb_small.jpg|none|thumb|100px|A light bulb thumbnail with size set to 100px]]

Link to file rather than display image

If you need to link direct to the file description page you can use a colon preceding "File:":

[[:File:bulb_small.jpg]]

If you need to link direct to the file itself you can use the "Media" namespace:

[[Media:bulb_small.jpg]]

Galleries

A gallery is simply a list of image files (one per line) surrounded by <gallery> ... </gallery> tags. You can optionally add a caption for each image by adding a pipe (|) after the filename. You can also change the number of images per row, their sizes and heights, and the caption for the whole gallery. The full syntax is given in Help:Images#Rendering_a_gallery_of_images. An example of the syntax of a gallery with individual and gallery captions is given below:

<gallery caption="Gallery caption">
File:bulb_small.jpg|An individual image caption
File:bulb_small.jpg
</gallery>
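Since gallery markup is just a line-oriented text format, it is straightforward to generate programmatically. The following is a hypothetical Python helper (not part of MediaWiki) that builds the gallery wikitext shown above from a list of (filename, caption) pairs:

```python
def gallery_wikitext(files, caption=None):
    """Build MediaWiki <gallery> markup from (filename, caption) pairs.
    An individual caption may be None for images without one."""
    head = '<gallery caption="%s">' % caption if caption else '<gallery>'
    lines = [head]
    for name, cap in files:
        # a pipe separates the filename from its optional caption
        lines.append('File:%s|%s' % (name, cap) if cap else 'File:%s' % name)
    lines.append('</gallery>')
    return '\n'.join(lines)

print(gallery_wikitext(
    [("bulb_small.jpg", "An individual image caption"),
     ("bulb_small.jpg", None)],
    caption="Gallery caption"))
```

This reproduces the example gallery markup exactly, which makes it easy to script bulk gallery pages.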
http://developer.nokia.com/community/wiki/Help:Images
The Android for Cars App Library allows you to bring your navigation and point of interest (POI) apps to the car. It does so by providing a set of templates designed to meet driver distraction standards, and by taking care of details such as the variety of car screen form factors and input modalities. This guide provides an overview of the library's key features and concepts, and walks you through the process of setting up a simple app.

Before you begin

- Review the Android for Cars App Library Design Guidelines.
- Review the key terms and concepts listed in this section.
- Familiarize yourself with the Android Auto System UI and Android Automotive OS design.
- Review the Release Notes.
- Review the Samples.

Key terms and concepts

Models and Templates - The user interface is represented by a graph of model objects that can be arranged together in different ways, as allowed by the template they belong to. Templates are a subset of the models that can act as a root in those graphs. Models include the information to be displayed to the user, in the form of text and images, as well as attributes to configure aspects of the visual appearance of such information (for example, text colors or image sizes). The host converts the models to views that are designed to meet driver distraction standards and takes care of details such as the variety of car screen form factors and input modalities.

Host - The host is the back-end component that implements the functionality offered by the library's APIs in order for your app to run in the car. The responsibilities of the host range from discovering your app and managing its lifecycle, to converting your models into views and notifying your app of user interactions. On mobile devices, this host is implemented by Android Auto. On Android Automotive OS, this host is installed as a system app.

Template restrictions - Different templates enforce restrictions in the content of their models.
For example, list templates have limits on the number of items that can be presented to the user. Templates also have restrictions in the way they can be connected to form the flow of a task. For example, the app can only push up to 5 templates to the screen stack. See Template restrictions for more details.

Screen - A Screen is a class provided by the library that apps implement to manage the user interface presented to the user. A Screen has a lifecycle and provides the mechanism for the app to send the template to display when the screen is visible. Screen instances can also be pushed and popped to and from a screen stack, which ensures they adhere to the template flow restrictions.

CarAppService - A CarAppService is the service your app implements and exports as its entry point into the car; the host discovers and connects to your app through it. To allow this, you need to configure your app's manifest files.

Declare your CarAppService

The host connects to your app through your CarAppService implementation. You declare this service in your manifest to allow the host to discover and connect to your app. You also need to declare your app's category in the <category> element of your app's intent filter. See the list of supported app categories for the values allowed for this element, and see Android app quality for cars for the detailed description and criteria for apps to belong to each category.

Specify the app name and icon

You need to specify an app name and icon that the host can use to represent your app in the system UI, using the label and icon attributes of your CarAppService:

...
<service android: ... </service>
...

If the label or icon are not declared in the <service> element, the host will fall back to the values specified for the <application> element.

You can also pre-seed the screen stack with additional screens by calling ScreenManager.push before returning from onCreateScreen. Pre-seeding allows users to navigate back to previous screens from the first screen that your app is showing.
Create your start screen

You create the screens displayed by your app by defining classes that extend the Screen class and implementing its methods. Screen instances have access to a CarContext, which provides access to car services such as the ScreenManager for managing the screen stack, the AppManager for general app-related functionality such as accessing the Surface object for drawing your navigation app's map, and the NavigationManager used by turn-by-turn navigation apps to communicate navigation metadata and other navigation-related events with the host. See Access the navigation templates for a comprehensive list of library functionality available to navigation apps.

The CarContext also offers other functionality, such as allowing loading drawable resources using the configuration from the car screen, starting an app in the car using intents, and signaling whether your navigation app should display its map in dark mode.

Implement screen navigation

Apps often present a number of different screens, each possibly utilizing different templates, that the user can navigate through as they interact with the interface displayed in the screen. The ScreenManager class provides a screen stack that you can use to push screens; screens are popped automatically when the user selects a back button in the car screen, or uses the hardware back button available in some cars. A typical pattern is to add a back action to a message template, as well as an action that pushes a new screen when selected by the user.

The Action.BACK object is a standard Action that automatically invokes ScreenManager.pop. This behavior can be overridden by using the OnBackPressedDispatcher instance available from the CarContext. To ensure the app is safe while driving, the screen stack can have a maximum depth of 5 screens. See Template restrictions for more details.

Refresh the contents of a template

Your app can request the content of a Screen to be invalidated by calling the Screen.invalidate method.
The host subsequently calls back into your app's Screen.onGetTemplate method to retrieve the template with the new contents. When refreshing a Screen, it is important to understand the specific content in the template that can be updated so that the host will not count the new template against the template quota. See Template restrictions for more details. It is recommended that you structure your screens so that there is a one-to-one mapping between a Screen and the type of template it returns through its Screen.onGetTemplate implementation.

Interact with the user

Your app can interact with the user using patterns similar to your mobile app.

Handle user input

Your app can respond to user input by passing the appropriate listeners to the models that support them. For example, you can create an Action model that sets an OnClickListener that calls back to a method defined by your app's code. Some actions, such as those that require directing the user to continue the interaction on their mobile devices, are only allowed when the car is parked. You can use the ParkedOnlyOnClickListener to implement those actions. If the car is not parked, the host will display an indication to the user that the action is not allowed in this case. If the car is parked, the code will execute normally. The following snippet shows how to use the ParkedOnlyOnClickListener to open a settings screen on the mobile device:

Kotlin

```kotlin
val row = Row.Builder()
    .setTitle("Open Settings")
    .setOnClickListener(ParkedOnlyOnClickListener.create(::openSettingsOnPhone))
    .build()
```

Java

```java
Row row = new Row.Builder()
    .setTitle("Open Settings")
    .setOnClickListener(ParkedOnlyOnClickListener.create(this::openSettingsOnPhone))
    .build();
```

Display notifications

Notifications sent to the mobile device will only show up in the car screen if they are extended with a CarAppExtender.
Some notification attributes, such as content title, text, icon, and actions, can be set in the CarAppExtender, overriding the notification's attributes when appearing in the car screen. The following snippet shows how to send a notification to the car screen that displays a different title than the one shown on the mobile device:

```java
CarAppExtender.Builder()
    .setContentTitle(titleOnTheCar)
    ...
    .build())
.build();
```

Notifications can affect the following parts of the user interface:

- A heads-up notification (HUN) may be displayed to the user.
- An entry in the notification center may be added, optionally with a badge visible in the rail.
- For navigation apps, the notification may be displayed in the rail widget as described in Turn-by-turn notifications.

Applications can choose how to configure their notifications to affect each of those user interface elements by using the notification's priority, as described in the CarAppExtender documentation. If NotificationCompat.Builder.setOnlyAlertOnce is called with a value of true, a high-priority notification will display as a HUN only once. For more information on how to design your car app's notifications, see Notifications.

Show toasts

Your app can display a toast using the CarToast class.

Start a car app with an intent

You can start your own app with an intent. The following example shows how to create a notification with an action that opens your app with a screen that shows the details of a parking reservation.
You extend the notification instance with a content intent that contains a PendingIntent wrapping an explicit intent to your app's action:

```java
CarAppExtender.Builder()
    .setContentIntent(
        PendingIntent.getBroadcast(
            context,
            ACTION_VIEW_PARKING_RESERVATION.hashCode(),
            new Intent(ACTION_VIEW_PARKING_RESERVATION)
                .setComponent(new ComponentName(context, MyNotificationReceiver.class)),
            0))
    .build());
```

Your app must also declare a BroadcastReceiver that is invoked to process the intent when the user selects the action in the notification interface, and invokes CarContext.startCarApp with an intent including the data URI:

```java
public class MyNotificationReceiver extends BroadcastReceiver {
    @Override
    public void onReceive(Context context, Intent intent) {
        String intentAction = intent.getAction();
        if (ACTION_VIEW_PARKING_RESERVATION.equals(intentAction)) {
            CarContext.startCarApp(
                intent,
                new Intent(Intent.ACTION_VIEW)
                    .setComponent(new ComponentName(context, MyCarAppService.class))
                    .setData(Uri.fromParts(MY_URI_SCHEME, MY_URI_HOST, intentAction)));
        }
    }
}
```

Finally, the app handles the incoming intent, pushing the parking reservation screen if it is not already at the top of the stack:

```java
Uri uri = intent.getData();
if (uri != null
        && MY_URI_SCHEME.equals(uri.getScheme())
        && MY_URI_HOST.equals(uri.getSchemeSpecificPart())
        && ACTION_VIEW_PARKING_RESERVATION.equals(uri.getFragment())) {
    Screen top = screenManager.getTop();
    if (!(top instanceof ParkingReservationScreen)) {
        screenManager.push(new ParkingReservationScreen(getCarContext()));
    }
}
```

See Display notifications for more information on how to handle notifications for the car app.

Template restrictions

The host limits the number of templates to display for a given task to a maximum of 5, of which the last template of the 5 must be one of a restricted set of template types (see the library documentation for the list). Note that this limit applies to the number of templates, and not the number of Screen instances in the stack. For example, if while in screen A an app sends 2 templates, and then pushes screen B, it can now send 3 more templates.
Alternatively, if each screen is structured to send a single template, then the app can push 5 screen instances onto the ScreenManager stack. There are special cases to these restrictions: template refreshes, back and reset operations. Template refreshes Certain content updates are not counted towards the template limit. In general, as long as an app pushes a new template that is of the same type and contains the same main content as the previous template, the new template will not be counted against the quota. For example, updating the toggle state of a row in a ListTemplate does not count against the quota. See the documentation of individual templates to learn more about what types of content updates can be considered a refresh. Back operations To enable sub-flows within a task, the host detects when an app is popping a Screen from the ScreenManager stack, and updates the remaining quota based on the number of templates that the app is going backwards by. For example, if while in screen A, the app sends 2 templates and then pushes screen B and sends 2 more templates, then the app has 1 quota remaining. If the app now pops back to screen A, the host will reset the quota to 3, because the app has gone backwards by 2 templates. Note that when popping back to a screen, an app must send a template that is of the same type as the one last sent by that screen. Sending any other template types would cause an error. However, as long as the type remains the same during a back operation, an app can freely modify the contents of the template without affecting the quota. Reset operations Certain templates have special semantics that signify the end of a task. For example, the NavigationTemplate is a view that is expected to stay on the screen and be refreshed with new turn-by-turn instructions for the user’s consumption. 
Upon reaching one of these templates, the host will reset the template quota, treating that template as if it were the first step of a new task, thus allowing the app to begin a new task. See the documentation of individual templates to see which ones trigger a reset on the host. If the host receives an intent to start the app from a notification action or from the launcher, the quota will also be reset. This mechanism allows an app to begin a new task flow from notifications, and it holds true even if an app is already bound and in the foreground. See Display notifications for more details on how to display your app's notifications in the car screen, and Start a car app with an intent for how to start your app from a notification action.

The lifecycle of a Session

For full details, see the documentation of the Session.getLifecycle method.

The lifecycle of a Screen

For full details, see the documentation of the Screen.getLifecycle method.

Report an Android for Cars App Library issue

If you find an issue with the library, report it using the Google Issue Tracker. Be sure to fill out all the requested information in the issue template. Before filing a new issue, please check if it is listed in the library's release notes or reported in the issues list. You can subscribe and vote for issues by clicking the star for an issue in the tracker. For more information, see Subscribing to an Issue.
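The quota accounting described under Template restrictions above (5 templates per task, refunds on back operations, refreshes free, resets on task-ending templates or new intents) can be modeled with a small amount of state. The following is a hypothetical Python simulation of that bookkeeping; the class and method names are invented and are not part of the library:

```python
class TemplateQuota:
    """Hypothetical model of the host's 5-template-per-task quota."""
    LIMIT = 5

    def __init__(self):
        self.per_screen = [0]   # templates sent by each screen on the stack

    @property
    def remaining(self):
        return self.LIMIT - sum(self.per_screen)

    def send(self, refresh=False):
        if refresh:
            return              # refreshes do not count against the quota
        if self.remaining == 0:
            raise RuntimeError("template quota exhausted")
        self.per_screen[-1] += 1

    def push_screen(self):
        self.per_screen.append(0)

    def pop_screen(self):
        # going back refunds the templates sent by the popped screen
        self.per_screen.pop()

    def reset(self):
        # e.g. a task-ending template, or an intent from a notification
        self.per_screen = [0]

# Reproduce the example from the text: screen A sends 2 templates,
# screen B sends 2 more, leaving a quota of 1; popping back to A
# restores the quota to 3.
q = TemplateQuota()
q.send(); q.send()
q.push_screen()
q.send(); q.send()
print(q.remaining)   # -> 1
q.pop_screen()
print(q.remaining)   # -> 3
```

Modeling the quota per screen, rather than as a single counter, is what makes the refund-on-pop behavior fall out naturally.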
https://developer.android.com/training/cars/apps?hl=hu
This chapter is taken from the book A Primer on Scientific Programming with Python by H. P. Langtangen, 5th edition, Springer, 2016.

Nested lists are list objects where the elements in the lists can be lists themselves. A couple of examples will motivate for nested lists and illustrate the basic operations on such lists. Our table data have so far used one separate list for each column. If there were \( n \) columns, we would need \( n \) list objects to represent the data in the table. However, we think of a table as one entity, not a collection of \( n \) columns. It would therefore be natural to use one argument for the whole table. This is easy to achieve using a nested list, where each entry in the list is a list itself. A table object, for instance, is a list of lists, either a list of the row elements of the table or a list of the column elements of the table. Here is an example where the table is a list of two columns, and each column is a list of numbers:

```python
Cdegrees = range(-20, 41, 5)   # -20, -15, ..., 35, 40
Fdegrees = [(9.0/5)*C + 32 for C in Cdegrees]
table = [Cdegrees, Fdegrees]
```

(Note that any value in \( [41,45] \) can be used as second argument (stop value) to range and will ensure that 40 is included in the range of generated numbers.)

With the subscript table[0] we can access the first element in table, which is nothing but the Cdegrees list, and with table[0][2] we reach the third element in the first element, i.e., Cdegrees[2].

Figure 2: Two ways of creating a table as a nested list. Left: table of columns C and F, where C and F are lists. Right: table of rows, where each row [C, F] is a list of two floats.

However, tabular data with rows and columns usually have the convention that the underlying data is a nested list where the first index counts the rows and the second index counts the columns.
To have table on this form, we must construct table as a list of [C, F] pairs. The first index will then run over rows [C, F]. Here is how we may construct the nested list:

```python
table = []
for C, F in zip(Cdegrees, Fdegrees):
    table.append([C, F])
```

We may shorten this code segment by introducing a list comprehension:

```python
table = [[C, F] for C, F in zip(Cdegrees, Fdegrees)]
```

This construction loops through pairs C and F, and for each pass in the loop we create a list element [C, F]. The subscript table[1] refers to the second element in table, which is a [C, F] pair, while table[1][0] is the C value and table[1][1] is the F value. Figure 2 illustrates both a list of columns and a list of pairs. Using this figure, you can realize that the first index looks up an element in the outer list, and that this element can be indexed with the second index.

We may write print table to immediately view the nested list table from the previous section. In fact, any Python object obj can be printed to the screen by the command print obj. The output is usually one line, and this line may become very long if the list has many elements. For example, a long list like our table variable demands a quite long line when printed:

```
[[-20, -4.0], [-15, 5.0], [-10, 14.0], ............., [40, 104.0]]
```

Splitting the output over several shorter lines makes the layout nicer and more readable. The pprint module offers a pretty print functionality for this purpose. The usage of pprint looks like

```python
import pprint
pprint.pprint(table)
```

and the corresponding output becomes

```
[[-20, -4.0],
 [-15, 5.0],
 [-10, 14.0],
 [-5, 23.0],
 [0, 32.0],
 [5, 41.0],
 [10, 50.0],
 [15, 59.0],
 [20, 68.0],
 [25, 77.0],
 [30, 86.0],
 [35, 95.0],
 [40, 104.0]]
```

With this document comes a slightly modified pprint module having the name scitools.pprint2.
This module allows full format control of the printing of the float objects in lists by specifying scitools.pprint2.float_format as a printf format string. The following example demonstrates how the output format of real numbers can be changed:

```
>>> import pprint, scitools.pprint2
>>> somelist = [15.8, [0.2, 1.7]]
>>> pprint.pprint(somelist)
[15.800000000000001, [0.20000000000000001, 1.7]]
>>> scitools.pprint2.pprint(somelist)
[15.8, [0.2, 1.7]]
>>> # default output is '%g', change this to
>>> scitools.pprint2.float_format = '%.2e'
>>> scitools.pprint2.pprint(somelist)
[1.58e+01, [2.00e-01, 1.70e+00]]
```

As can be seen from this session, the pprint module writes floating-point numbers with a lot of digits, in fact so many that we explicitly see the round-off errors. Many find this type of output annoying and think that the default output from the scitools.pprint2 module is more like one would desire and expect.

The pprint and scitools.pprint2 modules also have a function pformat, which works as the pprint function, but it returns a pretty formatted string instead of printing the string:

```python
s = pprint.pformat(somelist)
print s
```

This last code snippet prints the same as pprint.pprint(somelist).

Many will argue that tabular data such as those stored in the nested table list are not printed in a particularly pretty way by the pprint module. One would rather expect pretty output to be a table with two nicely aligned columns. To produce such output we need to code the formatting manually. This is quite easy: we loop over each row, extract the two elements C and F in each row, and print these in fixed-width fields using the printf syntax. The code goes as follows:

```python
for C, F in table:
    print '%5d %5.1f' % (C, F)
```

Python has a nice syntax for extracting parts of a list structure.
Such parts are known as sublists or slices:

A[i:] is the sublist starting with index i in A and continuing to the end of A:

```
>>> A = [2, 3.5, 8, 10]
>>> A[2:]
[8, 10]
```

A[i:j] is the sublist starting with index i in A and continuing up to and including index j-1. Make sure you remember that the element corresponding to index j is not included in the sublist:

```
>>> A[1:3]
[3.5, 8]
```

A[:i] is the sublist starting with index 0 in A and continuing up to and including the element with index i-1:

```
>>> A[:3]
[2, 3.5, 8]
```

A[1:-1] extracts all elements except the first and the last (recall that index -1 refers to the last element), and A[:] is the whole list:

```
>>> A[1:-1]
[3.5, 8]
>>> A[:]
[2, 3.5, 8, 10]
```

In nested lists we may use slices in the first index, e.g.,

```
>>> table[4:]
[[0, 32.0], [5, 41.0], [10, 50.0], [15, 59.0], [20, 68.0],
 [25, 77.0], [30, 86.0], [35, 95.0], [40, 104.0]]
```

We can also slice the second index, or both indices:

```
>>> table[4:7][0:2]
[[0, 32.0], [5, 41.0]]
```

Observe that table[4:7] makes a list [[0, 32.0], [5, 41.0], [10, 50.0]] with three elements. The slice [0:2] acts on this sublist and picks out its first two elements, with indices 0 and 1.

Sublists are always copies of the original list, so if you modify the sublist the original list remains unaltered and vice versa:

```
>>> l1 = [1, 4, 3]
>>> l2 = l1[:-1]
>>> l2
[1, 4]
>>> l1[0] = 100
>>> l1       # l1 is modified
[100, 4, 3]
>>> l2       # l2 is not modified
[1, 4]
```

The fact that slicing makes a copy can also be illustrated by the following code:

```
>>> B = A[:]
>>> C = A
>>> B == A
True
>>> B is A
False
>>> C is A
True
```

The B == A boolean expression is True if all elements in B are equal to the corresponding elements in A. The test B is A is True if A and B are names for the same list.
Setting C = A makes C refer to the same list object as A, while B = A[:] makes B refer to a copy of the list referred to by A.

We end this information on sublists by writing out the part of the table list of [C, F] rows (see the section Nested lists) where the Celsius degrees are between 10 and 35 (not including 35):

>>> for C, F in table[Cdegrees.index(10):Cdegrees.index(35)]:
...     print '%5.0f %5.1f' % (C, F)
...
   10  50.0
   15  59.0
   20  68.0
   25  77.0
   30  86.0

You should always stop reading and convince yourself that you understand why a code segment produces the printed output. In this latter example, Cdegrees.index(10) returns the index corresponding to the value 10 in the Cdegrees list. Looking at the Cdegrees elements, one realizes (do it!) that the for loop is equivalent to

for C, F in table[6:11]:

This loop runs over the indices 6, 7, ..., 10 in table.

We have seen that traversing the nested list table could be done by a loop of the form

for C, F in table:
    # process C and F

This is natural code when we know that table is a list of [C, F] lists. Now we shall address more general nested lists where we do not necessarily know how many elements there are in each list element of the list.

Suppose we use a nested list scores to record the scores of players in a game: scores[i] holds a list of the historical scores obtained by player number i. Different players have played the game a different number of times, so the length of scores[i] depends on i. Some code may help to make this clearer:

scores = []
# score of player no. 0:
scores.append([12, 16, 11, 12])
# score of player no. 1:
scores.append([9])
# score of player no. 2:
scores.append([6, 9, 11, 14, 17, 15, 14, 20])

The list scores has three elements, each element corresponding to a player. The element no. g in the list scores[p] corresponds to the score obtained in game number g played by player number p. The length of the lists scores[p] varies and equals 4, 1, and 8 for p equal to 0, 1, and 2, respectively.

In the general case we may have n players, and some may have played the game a large number of times, making scores potentially a big nested list. How can we traverse the scores list and write it out in a table format with nicely formatted columns? Each row in the table corresponds to a player, while columns correspond to scores. For example, the data initialized above can be written out as

  12   16   11   12
   9
   6    9   11   14   17   15   14   20

In a program, we must use two nested loops, one for the elements in scores and one for the elements in the sublists of scores. The example below will make this clear.

There are two basic ways of traversing a nested list: either we use integer indices for each index, or we use variables for the list elements. Let us first exemplify the index-based version:

for p in range(len(scores)):
    for g in range(len(scores[p])):
        score = scores[p][g]
        print '%4d' % score,
    print

With the trailing comma after the print string, we avoid a newline so that the column values in the table (i.e., the scores for one player) appear on the same line. The single print statement after the innermost loop adds a newline after each table row. The reader is encouraged to go through the loops by hand and simulate what happens in each statement (use the simple scores list initialized above).
The alternative version where we use variables for iterating over the elements in the scores list and its sublists looks like this:

for player in scores:
    for game in player:
        print '%4d' % game,
    print

Again, the reader should step through the code by hand and realize what the values of player and game are in each pass of the loops.

In the very general case, we have a nested list with many indices: somelist[i1][i2][i3].... To visit each of the elements in the list, we use as many nested for loops as there are indices. With four indices, iterating over integer indices looks as follows:

for i1 in range(len(somelist)):
    for i2 in range(len(somelist[i1])):
        for i3 in range(len(somelist[i1][i2])):
            for i4 in range(len(somelist[i1][i2][i3])):
                value = somelist[i1][i2][i3][i4]
                # work with value

The corresponding version iterating over sublists becomes

for sublist1 in somelist:
    for sublist2 in sublist1:
        for sublist3 in sublist2:
            for sublist4 in sublist3:
                value = sublist4
                # work with value
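The index-based and element-based traversals visit exactly the same values, which is easy to verify with the scores list from above (written here in Python 3 syntax, where print is a function):

```python
scores = [[12, 16, 11, 12], [9], [6, 9, 11, 14, 17, 15, 14, 20]]

# Index-based traversal: collect each score via integer indices.
by_index = []
for p in range(len(scores)):
    for g in range(len(scores[p])):
        by_index.append(scores[p][g])

# Element-based traversal: collect each score via loop variables.
by_element = []
for player in scores:
    for game in player:
        by_element.append(game)

print(by_index == by_element)  # True: both visit the same 13 values
```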
Hanselminutes on 9 - ASP.NET MVC 2 Preview 1 with Phil Haack and Virtual Scott - Posted: Jul 30, 2009 at 11:15 PM - 71,575 Views - 33 Comments

Scott's in Redmond (or IS he?) and talking to Phil Haack about the release of ASP.NET MVC 2 Preview 1. Phil gives us a tour of some of the new features in this high-tech and inappropriate use of technology. A video of Phil's screen? Hasn't Scott or Phil heard of a screencast? Still, enjoy. You can download ASP.NET MVC 2 Preview 1 if you like.

Cannot watch video.. it says: media failure. Try reloading the page or visiting the main site for assistance.

Sorry, it works here for me. Try clicking on the "Formats" drop down. You can download 7 different other video formats directly.

Great video - liked the impromptu style vs. traditional planned screencast! Instead of Html.DisplayFor(dinner => dinner), couldn't you add a shortcut method Html.DisplayForMe() or Html.DisplayForModel()? Wouldn't that be more readable?

Great video, thanks, this is a lot like MVC XForms by Jon Curtis, glad it made its way into the release, nice..

Cool stuff, thanks!

Love the interview style, very informative and dynamic... keep up the good work!

Great stuff Phil & Scott... about to download and start using in my project.

Cannot get this video to run either... have tried both IE 7 and Firefox and different formats. In IE it looks like it is buffering but just spins, and I get the error message that marko999 got in Firefox.

I got it on this refresh.

Great video Scott, I like this interview style. I'm excited about V2 being in the box with VS2010. I'm impressed with how steady you were able to hold the camera. Here is a tool that might make it easier on you for this type of interview.
My videos have improved in quality quite a bit since I got one.

I'm running IE8 and the only formats that work for me are the audio-only ones. Can't get video in any of the formats to work.

Great stuff!! How does Areas work? I can't find any resources about this feature... Thanks!

Hey Scott, next time you should crawl out of the monitor Ring (Ringu) style. That should really freak Phil out.

So, is your guys' MVC 1.0 book obsolete now? Should I hold off purchasing till you guys come out with v2 of the book?

This is almost worse than Cloverfield. I need to get some motion sickness pills and I'll try to watch it again later.

Funny you mentioned that. That was some feedback I gave them JUST yesterday. Great idea.

Have you tried downloading it from the Formats tab? Where are you located (in the world)?

Thanks! I'll pick one up now.

Why is that "Inside Visual C++" book being used as a monitor stand? It's the holy grail of C++ programming!

You two guys are the most entertaining to watch... Funny, quick, smart... Good team!

Question: In the interview you guys talked about having a resource string in place of the label name. Why was it not available in the situation above, and will it be? Example: [DisplayName("Contact Phone", <% reference %>)] Anyways, good cast. Thanks!

Good question. That was a bit of ignorance on my part. That attribute is System.ComponentModel.DisplayNameAttribute, which is part of System.dll. It is a pre-existing attribute that we just happen to support, and it wasn't designed with the localization features that our newer data annotations are designed with. Unfortunately, with DisplayNameAttribute, the general idea is you have to subclass it to get localization. Yeah, that's annoying. There are new attributes coming in ASP.NET 4, so I'll need to figure out what the suggested pattern will be. All the validation components do allow setting resource keys such as ErrorMessageResourceType and ErrorMessageResourceKey.

Ahhh....
Well, it's not so annoying - I was curious nonetheless. Thanks for the answer and explanation... Still can't wait for ASP.NET 4.

It would be nice if you could do EditorFor and have it output the label, the textbox and the validation tag - I think 90% of the time that's what myself and others would be doing anyways. Just a thought.

Great vid guys! Scott, I reckon it would be even better if you locked your exposure and colour balance so that the camera doesn't keep trying to compensate as you point the camera at the screen and then at Phil with the bright lamp behind. MikeS

As far as I know, manually editing a context's designer.cs file doesn't work. Any event (such as moving an entity on the context's designer) causes designer.cs to re-generate, wiping out your manual edits. So, code with dependencies on the attached [DataType(DataType.PhoneNumber)] will break as soon as the context is modified. I tested this behavior using a LINQ to SQL context. Is there a way to prevent manual edits from being wiped out? Cody Skidmore

Hey Phil, you moved offices since last I visited! Good stuff!

Great! But, Phil, I am wondering: have you ever read through that Visual C++ book... and how long has it been under your screen??

Hi Phil & Scott, nice presentation. I saw that you created the display name in the machine-generated code. What other options do I have if I don't wish to do so? You know, I may want to regenerate it for some reason (highly likely). I was thinking partial class, but these properties are already defined and the compiler is going to complain. Let me know what your opinion is.

@cody.skidmore and @Mesfin: You can create a partial class for your Model and add a metadata class to it. Then, you replicate your Model's properties in the metadata class and decorate them with your attributes.
Example:

[MetadataType(typeof(MyModelMetadata))]
public partial class MyModel
{
    public class MyModelMetadata
    {
        [DisplayName("My property 1")]
        [UIHint("SomeTemplate")]
        public object MyModelProp1 { get; set; }

        [DataType(DataType.PhoneNumber)]
        public object MyModelProp2 { get; set; }
    }
}

BTW, thanks for the great screencast. I loved the interview style as well. I was confused between LabelFor and DisplayFor. Very clear now.
You should only continue reading if you're confused when you see something like this: "I just took a function from O(n^2) to O(n)!!" If that sentence made perfect sense, feel free to ignore the rest of this article. Otherwise, here's what you're probably thinking:

- It looks like a function, but what's O()? What's it do?
- What's n? Where does it come from? What's its value?
- What should that function return?
- How is it relevant to the other function you're talking about?

Well, in a nutshell, it's a measure of how well an operation performs against large sets of data. Imagine a music player like iTunes. Sure, everything is nice and fast when you only have one track in your library, but what about when you have a hundred? A thousand? A million??? (Don't forget, September 19 is Talk Like a Pirate Day.) The Big O notation is a concise way to communicate how well something scales to large datasets like that. Things like adding and removing tracks, searching the library and even just loading the application can all be affected by large amounts of data. So it's important to be able to identify how well each piece performs under pressure and to communicate that performance to others. If only more of us understood how it works.

First, O(). For whatever reason, it's always called O, but that makes it easier to spot, and makes it handy to call it Big O notation. Whenever you see it, you can be fairly certain it means what I'm about to reveal. But what is it, really? It's a kind of imaginary function that has something like the following definition:

def O(n):
    return t * n

Oh, great. Thanks, Marty, you just defined a function by adding another undescribed variable. Worse, you didn't even have the decency to give it a docstring, so I still have no idea what it all means. True, but what's important so far is that you get that image in your head.
I’ll describe it in more detail in the following list, but seeing that function definition should help you remember the whole thing. It certainly helped me when I finally visualized it like that. So now we have three things to define: t—The time it takes for an operation to process one item. This may be different from one system to another, due to things outside your control, but for the purpose of analyzing its performance across different sets of data, consider it a constant. If a function takes one second to process one item, t=1. If it takes two seconds, t=2. Get it? Simple. n—A number based on the number of items in the dataset. The number of tracks in your library, rows in your database, items in your cache, friends in your social network, etc. Notice that this is only based on the number of items. The actual value passed in may get modified beforehand, and I’ll explain more about that below. The return value of O()—The time it takes to process a given number of items. Because you can pass in any number of items from one to infinity (at least theoretically), this return value tells you how well it will scale from small to large datasets. Notice above where I said that n is only based on the number of items in your dataset. In reality, you’ll often see things like O(n^2), where n gets modified before being passed into O(). In these cases, the n you see in the function call is the true number of items in your database. It gets modified according to how well the application performs. This is why O() is really just an imaginary function: you can’t just pass it a number and have it tell you how well the function performs, you have to figure that out for yourself. Again, it’s really all about communicating performance to others, rather than determining performance automatically. 
Thankfully, I believe there are some applications out there to test such performance against various types of datasets, but I don't know what they are or how they work, so I'll leave those for another article some other time. Instead, I'll explain a bit about what the various modifications to n mean and how to use them.

The easiest form to understand is O(n). That means the performance of the function is directly proportional to the number of items in your dataset. If it takes one second to process one item, it'll take 100 seconds to process 100 items, and so on. Depending on the operation you're analyzing, this can be an acceptable mark to meet. It's not the best score possible, but it might be the best that can be achieved for your particular case. No article can tell you the best you can do, unfortunately. That's very dependent on the task a function is meant to perform.

The next step down from that is O(n^2). That is, n-squared. This means that if processing one item takes one second, it will take 10,000 seconds to process 100 items (that's 100 squared, see?). That's nearly three hours! Clearly this is unacceptable in most situations, though hopefully it takes far less than a second to process a single item anyway. This score often occurs when a function iterates over a list (that is, a variable-size dataset) twice to do what it needs to do. That would mean that adding a single item to the list would not only add an item to be processed, but another whole iteration over the list. Bad news. Of course, O(n^3) is even worse, but you should get the idea about how the rest of the downward spiral goes.

Thankfully, there's a great score to shoot for: O(1). Notice that n doesn't even get passed in here (though you could look at it as n^0 if you like). That means that the function's performance has no relation to the number of items in the dataset.
If it takes one second to process one item, it'll still take just one second to process a hundred items, a thousand, even a million, yeh filthy pirate. In theory, I suppose it might be possible to get a better score than this, but that would mean that the system actually gets faster with more records. There are likely very few cases where that would even be possible, and if you're still reading this article, you probably won't ever run into them (I certainly don't expect to).

The article that finally gave me my realization on this subject was Eric Florenzano's recent post on cache invalidation performance. He managed to come up with an O(1) approach that should scale extremely well on large systems. He does a good job of explaining how he analyzed his code, but I'll describe a couple variations of his approach and how they would be scored. So, if you haven't read his article yet, please do so now, so the rest of this section makes sense. Really, go read it. I'll wait.

Okay. First up, rather than tagging a set of keys so they can be invalidated together, imagine that you only set out to solve the first problem: not knowing what keys have been set. You set up a list of cache keys and append to it each time you add an item to the cache. Then, when you want to invalidate, you look through the whole list for some that match the user. For whatever reason, you didn't think to also check for the start value at the time, so you have to cycle over the list of matching users to find a matching start value. Then you look over that list for a matching end value. Even though you're technically not scanning the whole list of cache keys every time, this would still likely be considered O(n^3) because it depends on the size of three different factors. Not good.

Now imagine that you've seen the light and you check for all three values on the first pass.
Now the time is dependent solely on the size of the list, because checking each item takes the same amount of time no matter how the list is laid out. This is a significant improvement, but you still have to consider and ignore a lot of entries before you get to the one you want. Because this performance depends on the number of items in the cache, it gets an O(n) rating. Eric's approach makes it all the way up to O(1) by using a known key to target the exact entry required, without having to scan a list to find it.

The preceding explanation was intentionally simplified for the sake of understanding. In the real world, there are a few other things worth noting that can affect how you analyze and score various functions. First, t isn't always time. Really, it's more about resources, so it could be memory, disk space, network bandwidth, etc. Time is a useful aggregate, though, because it takes many of those factors into account. Of course, Eric's O(1) approach is a bit heavy on memory usage, relying on the cache system to clean things up on that end to maintain speed. That works right into the next point: performance isn't always completely under your control. In Eric's example, his function is O(1), but if he's using a cache server that doesn't implement an O(1) keymap, it may still be O(n) (looking through a big list) when actually deleting the cache key. Cache systems are built for speed, so I doubt this would be the case, but it's definitely a consideration whenever your functions rely on other code that you don't control.

On a related note, keep an open mind about n, because a number of things can be considered "items". Going back to the music player example, the length of an individual song can affect how long it takes to seek to a specific position within that track, as can the distance from the start that you're trying to seek to. Basically, anything that can get bigger can be a candidate for n. Just don't start mixing them.
Take them one at a time to analyze performance properly.

Also, note that in the first faulty example I mentioned, calling it O(n^3) is a bit misleading, given my description of how O() could be defined. Because it doesn't loop over the whole list three times, it would really be somewhere between n and n^2 for most cases, if you just look at the total time it takes on various datasets. But the reason it's still marked as n^3 is that the score isn't really a measure of speed, or any other resources, in the end. It's about the complexity, and even though that generally impacts speed, it's an important distinction. By looping over three different lists, each of which is based on the size of the entire list, there's the potential to get as bad as n^3 depending on the dataset, so it's marked as such.

Likewise, you won't find scores like O(n^2+3) for functions that have a one-time setup penalty that's unaffected by the size of the dataset, even if the rest of the function is. It would just be marked as O(n^2) to describe the overall complexity. In fact, to my knowledge, the Big O notation is only ever used with squares. The Wikipedia article linked above suggests that other modifications like the +3 above would be written off as "negligible".

I certainly plan to start looking at my functions with a new eye in light of my recent understanding of this topic, and I suggest you do the same. Sure, optimization shouldn't always be your primary concern, but if it's on your mind while you're writing (or updating) a function, you run good odds of finding a reasonably simple way to boost its score.
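To close, here is a toy contrast of the single-scan O(n) strategy and the direct-key O(1) strategy described above. This is not Eric's actual code; the function names and the (user, start, end) key layout are invented for illustration:

```python
# O(n): remember every key we set, then scan the list to invalidate.
cache = {}
known_keys = []

def set_scanning(user, start, end, value):
    key = (user, start, end)
    cache[key] = value
    known_keys.append(key)

def invalidate_scanning(user, start, end):
    # Time grows with len(known_keys): every entry is considered once.
    for key in list(known_keys):
        if key == (user, start, end):
            known_keys.remove(key)
            del cache[key]

# O(1): build the exact key and delete it directly, no scan at all.
def invalidate_direct(user, start, end):
    cache.pop((user, start, end), None)  # average-case constant-time hash lookup

set_scanning('alice', 0, 10, 'page1')
set_scanning('bob', 0, 10, 'page1')
invalidate_scanning('alice', 0, 10)
invalidate_direct('bob', 0, 10)
print(cache)  # {}
```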
IRC log of sparql on 2009-03-31 Timestamps are in UTC. 13:47:54 [RRSAgent] RRSAgent has joined #sparql 13:47:54 [RRSAgent] logging to 13:47:56 [trackbot] RRSAgent, make logs world 13:47:56 [Zakim] Zakim has joined #sparql 13:47:58 [trackbot] Zakim, this will be No Teleconference 13:47:58 [Zakim] I do not see a conference matching that name scheduled within the next hour, trackbot 13:47:59 [trackbot] Meeting: SPARQL Working Group Teleconference 13:47:59 [trackbot] Date: 31 March 2009 13:48:05 [LeeF] oh well, that was close 13:48:09 [LeeF] zakim, this will be SPARQL 13:48:09 [Zakim] ok, LeeF; I see SW_(SPARQL)10:00AM scheduled to start in 12 minutes 13:48:15 [ivan] :-) 13:50:27 [bijan] bijan has joined #sparql 13:53:55 [Zakim] SW_(SPARQL)10:00AM has now started 13:54:02 [Zakim] +??P5 13:54:06 [bijan] zakim, ??p5 is me 13:54:06 [Zakim] +bijan; got it 13:54:47 [LeeF] just no one else there yet 13:54:49 [LeeF] i imagine :) 13:54:58 [bijan] Except me :) 13:55:03 [LeeF] zakim, code? 13:55:03 [Zakim] the conference code is 77277 (tel:+1.617.761.6200 tel:+33.4.89.06.34.99 tel:+44.117.370.6152), LeeF 13:55:21 [bijan] LeeF: Another mancuian shall be joining the group soon 13:55:31 [bijan] So we'll have more consistent coverage from here 13:56:03 [LeeF] bijan, excellent 13:56:12 [Zakim] +??P10 13:56:17 [AndyS] zakim, ??P10 is me 13:56:19 [Zakim] +AndyS; got it 13:56:39 [AndyS] zakim, who is on the phone? 
13:56:39 [Zakim] On the phone I see bijan, AndyS 13:56:41 [Zakim] + +2 13:56:58 [Zakim] - +2 13:57:09 [Zakim] +john-l 13:57:41 [Zakim] +kasei 13:58:36 [Zakim] + +049261287aabb 13:58:46 [SimonS] Zakim, aabb is me 13:58:46 [Zakim] +SimonS; got it 13:58:53 [Zakim] +??P17 13:59:01 [john-l] Zakim, please mute me 13:59:01 [Zakim] john-l should now be muted 13:59:15 [AlexPassant] zakim, ??p17 is me 13:59:16 [Zakim] +AlexPassant; got it 13:59:35 [Zakim] -AndyS 13:59:41 [bijan] zakim, mute me 13:59:41 [Zakim] bijan should now be muted 13:59:42 [SteveH] SteveH has joined #sparql 13:59:52 [ivan] zakim, dial ivan-voip 13:59:52 [Zakim] ok, ivan; the call is being made 13:59:53 [Zakim] +Ivan 14:00:00 [LukeWM] LukeWM has joined #sparql 14:00:14 [Zakim] +??P20 14:00:19 [AndyS] zakim, ??P20 is me 14:00:19 [Zakim] +AndyS; got it 14:00:34 [Zakim] +??P21 14:00:43 [SteveH] Zakim, ??P21 is [Garlik] 14:00:43 [Zakim] +[Garlik]; got it 14:01:01 [chimezie] chimezie has joined #sparql 14:01:01 [AndyS] Very quiet on the phone or is it my connection? 14:01:10 [chimezie] Zakim: passcode? 14:01:19 [kasei] AndyS: no, it's very quiet 14:01:21 [SteveH] noones talking, AFAICT 14:01:31 [kjetil] Zakim, what is the code? 14:01:31 [Zakim] the conference code is 77277 (tel:+1.617.761.6200 tel:+33.4.89.06.34.99 tel:+44.117.370.6152), kjetil 14:01:41 [Zakim] +Lee_Feigenbaum 14:01:53 [Zakim] +Chimezie_Ogbuji 14:01:56 [LeeF] zakim, who's here? 
14:01:56 [Zakim] On the phone I see bijan (muted), john-l (muted), kasei (muted), SimonS, AlexPassant, Ivan, AndyS, [Garlik], Lee_Feigenbaum, Chimezie_Ogbuji 14:01:58 [Zakim] On IRC I see chimezie, LukeWM, SteveH, bijan, Zakim, RRSAgent, AndyS, kasei, LeeF, AlexPassant, SimonS, ivan, AndyS_, kjetil, trackbot, iv_an_ru, john-l, sandro, KjetilK, ericP 14:02:11 [LeeF] Regrets: Axel, Souri 14:02:14 [LeeF] Chair: Lee Feigenbaum 14:02:15 [Zakim] +JanneS 14:02:19 [iv_an_ru] something strange with my phone :| 14:02:23 [Zakim] +??P33 14:02:30 [kjetil] Zakim, ??P33 is me 14:02:33 [Zakim] +??P14 14:02:36 [Zakim] +kjetil; got it 14:02:44 [LeeF] zakim, ??P14 is Orri 14:02:48 [Zakim] +Orri; got it 14:03:51 [kjetil] Zakim, mute me 14:04:04 [LeeF] Scribenick: SteveH 14:04:05 [Zakim] kjetil should now be muted 14:04:07 [JanneS] JanneS has joined #sparql: F2F 14:05:55 [ericP] Zakim, please dial ericP-office 14:05:55 [Zakim] ok, ericP; the call is being made 14:05:57 [Zakim] +EricP 14:06:02 [Zakim] -EricP 14:06:10 [ericP] Zakim, please dial ericP-office 14:06:10 [Zakim] ok, ericP; the call is being made 14:06:12 [Zakim] +EricP 14:06:17 [SteveH] LeeF: set for F2F on 7th, cambridge and bristol 14:06:37 [LeeF] -> 14:06:50 [SteveH] LeeF: do we prefer web survey to wiki 14:06:54 [ericP] looks like the3 wiki has it 14:07:05 [JanneS] zakim, who's here? 14:07:05 :07:08 [Zakim] On IRC I see JanneS, chimezie, LukeWM, SteveH, bijan, Zakim, RRSAgent, AndyS, kasei, LeeF, SimonS, ivan, AndyS_, kjetil, trackbot, iv_an_ru, john-l, sandro, KjetilK, ericP 14:11:45 [SteveH] ivan: from now on OLW makes normative ref. 
to SPARQL as far as prefix is concerned 14:11:56 [SteveH] ivan: is SPARQL wants to change that we have to be careful 14:12:00 [ivan] s/OLW/OWL/] q- 14:16:04 [LeeF] trackbot, close action-5 14:16:04 [trackbot] ACTION-5 Add security issues to query by reference feature closed 14:16:08 [kjetil] action-2: 14:16:09 [trackbot] ACTION-2 Update the wiki page with his experience (caveat: kjetil may be delayed in doing it) notes added 14:16:25 [SteveH] LeeF: features 14:16:26 [kjetil] trackbot, close action-2 14:16:26 [trackbot] ACTION-2 Update the wiki page with his experience (caveat: kjetil may be delayed in doing it) closed 14:16:37 [LeeF] topic: ExecCommentsAndWarning 14:16:43 [SteveH] LeeF: exec comments and warnings feature... 14:16:44 [AndyS] Topic: Features:19:18 [kjetil] action-2: Whoops, wrong link, this is it: 14:19:18 [trackbot] ACTION-2 Update the wiki page with his experience (caveat: kjetil may be delayed in doing it) notes added 14:20:01 [SteveH] LeeF: my concern is that we don't have a lot of impl. 
experience, would need to play within the existing result format, dont see strainghtforward way to do that 14:20:10 [SteveH] AndyS: I can see straightforward way 14:20:19 [SteveH] iv_an_ru: I don't think it's difficult 14:20:27 [SteveH] LeeF: how about CONSTRUCT 14:20:30 [AndyS] Not me - that's Orri 14:20:32 [SteveH] q+ 14:20:38 [AndyS] q+ 14:20:58 [SteveH] iv_an_ru: could include triples in dedicated namespace, some kind of convention with triples 14:21:04 [LeeF] s/iv_an_ru/Orri :23:18 [Zakim] On IRC I see AlexPassant, JanneS, chimezie, LukeWM, SteveH, bijan, Zakim, RRSAgent, AndyS, kasei, LeeF, SimonS, ivan,:30 [LeeF] topic: query response linking 14:24:31 [bijan] I would go +1 probably after examining the existing implementations 14:24:32 [SteveH] Topic: query response linking: dont know all details, but there were discussions in HTML for categorising, wouldn't that cover it 14:27:04 [SimonS] +q 14:27:05 [SteveH] ivan: do we have to do anything, or rely on HTTP 14:27:10 [ivan]] -1 (trying to set priorities) 14:33:37 [LeeF] Orri: 0 14:33:39 [JanneS] -1 14:33:44 [ericP] -1 14:33:44 [LeeF] 0 14:33:45 [bijan] 0 14:34:03 [SteveH] Topic: query language features 14:34:12 [LeeF] subtopic: assignment 14:34:16 [SteveH] Topic: assignment 14:34:19 [LeeF] -> 14:35:01 [LeeF] agenda+ renaming feature wiki pages 14:45:16 [SteveH] subtopic: accessing rdf lists 14:48:55 [ericP] members(?x) ordered("1" "2") unordered("2" "1") 14:49:00 [ericP] +1 to ivan's point 14:49:22 [SteveH] ivan:: so had negative effect on the way vocabs were defined 14:49:33 [chimezie] AndyS: menber] q+ 14:55:06 [bijan] q+ to ask about the property path solution people 14:55:34 [SteveH] ivan: if it works with prop paths, the q I have is if prop paths doesn't do it would you still have -1 14:55:50 [SteveH] ?: if it doesn't I would 14:56:01 [ivan] ack ivan 14:56:04 [SteveH] LeeF: I'm indifferent if we do property paths or not 14:56:08 [bijan] q- 14:56:09 [SteveH] thanks 14:56:20 [bijan] I had Ivan's question 
:) 14:56:23 [SteveH] ericP: +1 on having list access as a requirement, however we do it 14:56:34 [SteveH] ivan: 14:58:12 [SteveH] LeeF: things that don't cleanly fall into nice boxes,] q+ 15:04:54 [SteveH] LeeF: task force of 1-3 people not always best 15:04:55 [LeeF] ack ivan 15:07:13 [ivan] ivan. 15:07:33 [ericP] RRSAgent, please make log world-visible 15:07:34 [Zakim] -[Garlik] 15:07:35 [Zakim] -SimonS 15:07:38 [Zakim] -AlexPassant 15:07:39 [Zakim] -EricP 15:07:41 [Zakim] -kjetil 15:07:44 [SteveH] bijan, but not spelling :) 15:07:55 [Zakim] -AndyS 15:07:56 [bijan] I never critique most people's orthography 15:07:58 [Zakim] -bijan 15:08:01 [Zakim] SW_(SPARQL)10:00AM has ended 15:08:02 [Zakim] Attendees were bijan, AndyS, +2, john-l, kasei, +049261287aabb, SimonS, AlexPassant, Ivan, [Garlik], Lee_Feigenbaum, Chimezie_Ogbuji, JanneS, kjetil, Orri, EricP, DaveNewman 15:08:02 [bijan] Being of the orthographically challenged myself 15:08:09 [bijan] TBL is a notable exception :) 15:08:14 [SteveH] bijan, it's probably a sign of something :) 15:08:50 [bijan] Sign of extreme attractiveness? (Again, with the TBL exception ;)) 15:08:55 [SteveH] SteveH has joined #sparql 15:09:19 [bijan] :) 15:09:24 [AndyS] People who may do F2F in Bristol - any requirements? 15:09:46 [LukeWM] LukeWM has joined #sparql 15:09:51 [SimonS] caffein and wireless. ;-) 15:09:56 [SteveH] AndyS: somewhere to sit :) 15:10:01 [SteveH] oh yeah, and coffee 15:10:02 [AndyS] We have a nice coffee machine! 15:10:08 [SteveH] yeah, you do 15:10:19 [bijan] Indeed you do 15:10:35 [AndyS] But we might be in the middle of a large move. That may be a + or - 15:10:56 [AndyS] Wireless or public wired will be provided. 15:11:05 [SteveH] large scale, or large distance? 15:11:22 [AndyS] Large scale - repacking the building. 15:11:27 [SteveH] ah, ok 15:12:16 [SteveH] packing problems are fun 15:12:24 [SimonS] Any recommendations for hotels? 
15:13:16 [AndyS] Can get recommendations - there are close and nice - but not really both. 15:14:26 [SimonS] Thanks, I'm willing to compromise, at least to some extend... 15:14:40 [AndyS] As we will be on Boston time (4pm there = 9pm Bristol or later), that might be a factor 15:14:52 [kasei] kasei has left #sparql 15:15:59 [SteveH] is it {close, cheap, nice} pick two, or pick one? 15:16:55 [AndyS] v close (walking) = 2 IIRC Does not affect me :-) 15:17:12 [SteveH] no :) advantages of hosting 15:17:44 [SteveH] I've stayed in a walking-close hotel, generic business hotel, was ok 15:18:16 [bijan] Such a pretty town 15:19:05 [bijan] Indeed 15:19:16 [SteveH] though, might take car, how is parking in city hotels/hp, AndyS? 15:19:21 [SteveH] nuts 15:24:52 [AndyS] Car works. 15:25:28 [AndyS] Hotels have car parks. Drive out of commute time is OK. 15:25:52 [AndyS] Drive during commute is bad if you don't know where you are going. 15:26:40 [AndyS] F2F day is 8am - 4pm (and a big "ha!" to 4pm) ==> 1pm=9pm UK. 15:28:41 [SteveH] 1pm to 9pm is an odd schedule 15:29:28 [SteveH] socialise 0900 to 1200, work 1300-2100 :) 15:30:16 [AndyS] Or party 10pm-6am. Sleep. Work from 13:00. But I'm too old for that. 15:30:55 [SteveH] could maybe do it in US, but not with UK daylight 15:31:30 [SteveH] at least with that schedule I can arrive on wednesday, leave thursday evening 15:31:41 [SteveH] dont have to arrive night before 15:56:38 [SteveH] SteveH has left #sparql 16:31:53 [iv_an_ru] AndyS, a technical question. I intend to extend SPARUL a bit, to introduce graph groups and security. 16:31:57 [iv_an_ru] Graph group is a named collection of graph IRIs such that FROM <graph-group-iri> is equivalent to FROM <member1> ... FROM <memberN>. 16:32:50 [iv_an_ru] Security is very traditional. User X Graph --> permissions. 16:34:10 [iv_an_ru] So I'll need extra keywords and grammar for all that things and I don't want to get conflict with your potential extensions. 
16:34:30 [iv_an_ru] Are you planning something of the sort?
16:38:47 [AndyS] No specific plans. You will need "FROM GROUP <uri>" to differentiate from plain FROM. c.f. proposals/ideas for FROM DATASET
16:39:45 [AndyS] I guess you want up-front declaration. Security Coulf also be done without syntax in the way dataset assembled for a request.
16:39:54 [AndyS] (random thoughts mode)
16:50:26 [iv_an_ru] I don't like both "graph group" and "dataset", they're too overloaded. "set" and "bag" are overloaded as well. Some exotics like "rucksack"?
16:57:35 [SteveH] SteveH has joined #sparql
19:08:04 [LeeF] LeeF has joined #sparql
21:15:52 [LeeF_] LeeF_ has joined #sparql
22:01:23 [LukeWM] LukeWM has joined #sparql
22:42:06 [LukeWM] LukeWM has joined #sparql
http://www.w3.org/2009/03/31-sparql-irc
29 December 2005 09:29 [Source: ICIS news]

Acrylic fibre producers are faced with stiff competition from

[Table: Acrylonitrile and propylene prices 2005 (in Euro/tonne)]

European nylon producers are also struggling to pass on raw material price hikes, and margins over caprolactam have fallen by more than Euro100/tonne since January 2005. Downstream from nylon 6 virgin polymer, the textiles, yarns and fibres sectors have been especially weak, and sellers estimated a 20-40% fall in mid-year demand between 2004 and 2005. Producers seemed more optimistic about 2006, but buyers were not so sure. Much will depend on Asian markets, which were very weak in 2005, and benzene price stability in

[Table: Nylon and caprolactam prices 2005]

Producers of polyethylene terephthalate (PET) had a difficult 2005 in a market that was oversupplied and showed few signs of improvement. Demand was described as weak throughout the year and several major price initiatives by producers amounted to little. Quite a few producers have struggled to pass through increases on paraxylene (PX), and margins have been eroded by an estimated Euro25-30/tonne in the past twelve months. The main squeeze was felt by PET resin producers, caught between struggling PET converters and producers of purified terephthalic acid (PTA), one of the few groups to derive much benefit from 2005. While overall volumes were under downward pressure, PTA producers managed to extract some small margin increases on top of paraxylene changes.

[Table: Price comparison for PET, PTA* and PX in 2005 (in Euro/tonne). *Please note: The PTA price for December had not been settled at time of going to press.]

While major Western European producers such as Voridian, Advansa, Invista and Equipolymers are competing quite strongly at the moment, there were many who felt that smaller producers could be put at risk by this move east.

2005 saw the opening of new PET capacities in

[Table: New PET and PTA capacities 2005-06. *expansion of existing plant]

With European players taking something of a backseat role in the global fib
http://www.icis.com/Articles/2005/12/29/1030023/outlook-06-no-improvement-for-euro-fibre-intermediates.html
\ A powerful locals implementation
\ Copyright (C) 1995,1996,1997,1998,2000,2003,2004,2005,2007,2011
\ ...

require search.fs
require float.fs
require extend.fs \ for case

: compile-@local ( n -- ) \ gforth compile-fetch-local
    case
        0       of postpone @local0 endof
        1 cells of postpone @local1 endof
        2 cells of postpone @local2 endof
        3 cells of postpone @local3 endof
        ( otherwise ) dup postpone @local# ,
    endcase ;

: compile-f@local ( n -- ) \ gforth compile-f-fetch-local
    case
        0        of postpone f@local0 endof
        1 floats of postpone f@local1 endof
        ( otherwise ) dup postpone f@local# ,
    endcase ;

\ ...

slowvoc @
slowvoc on \ we want a linked list for the vocabulary locals
vocabulary locals \ this contains the local variables
' locals >body wordlist-id ' locals-list >body !
slowvoc !

variable locals-mem-list \ linked list of all locals name memory in
0 locals-mem-list !      \ the current (outer-level) definition

\ ...

: alignlp-w ( n1 -- n2 )
    \ cell-align size and generate the corresponding code for aligning lp
    aligned dup adjust-locals-size ;

: alignlp-f ( n1 -- n2 )
    faligned dup adjust-locals-size ;

\ ...

\ locals list operations

\ ...

: list-size ( list -- u ) \ gforth-internal
    \ size of the locals frame represented by list
    0 ( list n )
    begin
        over 0<>
    while
        over
        ((name>)) >body @ max
        swap @ swap ( get next )
    repeat
    faligned nip ;

: set-locals-size-list ( list -- )
    dup locals-list !
    list-size locals-size ! ;

: check-begin ( list -- )
    \ warn if list is not a sublist of locals-list
    locals-list @ sub-list? 0= if
        \ !! print current position
        >stderr ." compiler was overly optimistic about locals at a BEGIN" cr
        \ !! print assumption and reality
    then ;

\ ...
    -1 chars compile-lp+!
    locals-size @ swap !
    postpone lp@ postpone c! ;

\ ...

: create-local ( " name" -- a-addr )
    \ defines the local "name"; the offset of the local shall be
    \ stored in a-addr
    locals-name-size allocate throw
    dup locals-mem-list prepend-list
    locals-name-size cell /string over + ['] create-local1 dict-execute ;

variable locals-dp \ so here's the special dp for locals.

: lp-offset ( n1 -- n2 )
    \ converts the offset from the frame start to an offset from lp
    \ i.e., the address of the local is lp+locals_size-offset
    locals-size @ swap - ;

: lp-offset, ( n -- )
    \ converts the offset from the frame start to an offset from lp and
    \ adds it as inline argument to a preceding locals primitive
    lp-offset , ;

vocabulary locals-types \ this contains all the type specifyers, -- and }
locals-types definitions

: W: ( "name" -- a-addr xt ) \ gforth w-colon
    create-local
    \ xt produces the appropriate locals pushing code when executed
    ['] compile-pushlocal-w
does> ( Compilation: -- ) ( Run-time: -- w )
    \ compiles a local variable access
    @ lp-offset compile-@local ;

: W^ ( "name" -- a-addr xt ) \ gforth w-caret
    create-local
    ['] compile-pushlocal-w
does> ( Compilation: -- ) ( Run-time: -- w )
    postpone laddr# @ lp-offset, ;

: F: ( "name" -- a-addr xt ) \ gforth f-colon
    create-local
    ['] compile-pushlocal-f
does> ( Compilation: -- ) ( Run-time: -- w )
    @ lp-offset compile-f@local ;

: F^ ( "name" -- a-addr xt ) \ gforth f-caret
    create-local
    ['] compile-pushlocal-f
does> ( Compilation: -- ) ( Run-time: -- w )
    postpone laddr# @ lp-offset, ;

: D: ( "name" -- a-addr xt ) \ gforth d-colon
    create-local
    ['] compile-pushlocal-d
does> ( Compilation: -- ) ( Run-time: -- w )
    postpone laddr# @ lp-offset, postpone 2@ ;

: D^ ( "name" -- a-addr xt ) \ gforth d-caret
    create-local
    ['] compile-pushlocal-d
does> ( Compilation: -- ) ( Run-time: -- w )
    postpone laddr# @ lp-offset, ;

: C: ( "name" -- a-addr xt ) \ gforth c-colon
    create-local
    ['] compile-pushlocal-c
does> ( Compilation: -- ) ( Run-time: -- w )
    postpone laddr# @ lp-offset, postpone c@ ;

: C^ ( "name" -- a-addr xt ) \ gforth c-caret
    create-local
    ['] compile-pushlocal-c
does> ( Compilation: -- ) ( Run-time: -- w )
    postpone laddr# @ lp-offset, ;

\ you may want to make comments in a locals definitions group:
' \ alias \ ( compilation 'ccc<newline>' -- ; run-time -- ) \ core-ext,block-ext backslash
\ ...
' ( alias ( ( compilation 'ccc<close-paren>' -- ; run-time -- ) \ core,file paren
\ ...
immediate

forth definitions
also locals-types

\ these "locals" are used for comparison in TO
c: some-clocal 2drop
d: some-dlocal 2drop
f: some-flocal 2drop
w: some-wlocal 2drop

' dict-execute1 is dict-execute \ now the real thing

\ ...
    drop nextname
    ['] W: >head-noprim ;

previous

: new-locals-reveal ( -- )
    true abort" this should not happen: new-locals-reveal" ;

create new-locals-map ( -- wordlist-map )
' new-locals-find A,
' new-locals-reveal A,
' drop A, \ rehash method
' drop A,

new-locals-map mappedwordlist Constant new-locals-wl

\ slowvoc @
\ slowvoc on
\ vocabulary new-locals
\ slowvoc !
\ new-locals-map ' new-locals >body wordlist-map A! \ !! use special access words

\ and now, finally, the user interface words
: { ( -- latestxt wid 0 ) \ gforth open-brace
    latestxt get-current
    get-order new-locals-wl swap 1+ set-order
    also locals definitions locals-types
    0 TO locals-wordlist
    0 postpone [ ; immediate

locals-types definitions

: } ( latestxt wid 0 a-addr1 xt1 ... -- ) \ gforth close-brace
    \ ends locals definitions
    ]
    begin
        dup
    while
        execute
    repeat
    drop
    locals-size @ alignlp-f locals-size ! \ the strictest alignment
    previous previous
    set-current lastcfa !
    locals-list 0 wordlist-id - TO locals-wordlist ;

: -- ( addr wid 0 ... -- ) \ gforth dash-dash
    }
    [char] } parse 2drop ;

\ ...

\ Implementation:

\ explicit scoping

: scope ( compilation -- scope ; run-time -- ) \ gforth
    cs-push-part scopestart ; immediate

: adjust-locals-list ( wid -- )
    locals-list @ common-list
    dup list-size adjust-locals-size
    locals-list ! ;

: endscope ( compilation scope -- ; run-time -- ) \ gforth
    scope?
    drop adjust-locals-list ; immediate

\ adapt the hooks

: locals-:-hook ( sys -- sys addr xt n )
    \ addr is the nfa of the defined word, xt its xt
    DEFERS :-hook
    latest latestxt
    clear-leave-stack
    0 locals-size !
    0 locals-list !
    dead-code off
    defstart ;

:noname ( -- )
    locals-mem-list @ free-list
    0 locals-mem-list ! ;
is free-old-local-names

: locals-;-hook ( sys addr xt sys -- sys )
    def?
    0 TO locals-wordlist
    0 adjust-locals-size ( not every def ends with an exit )
    lastcfa ! last !
    DEFERS ;-hook ;

\ ...

: (then-like) ( orig -- )
    dead-orig =
    if
        >resolve drop
    else
        dead-code @
        if
            >resolve set-locals-size-list dead-code off
        else \ both live
            over list-size adjust-locals-size
            >resolve
            adjust-locals-list
\ ...

' locals-:-hook IS :-hook
' locals-;-hook IS ;-hook

' (then-like)  IS then-like
' (begin-like) IS begin-like
' (again-like) IS again-like
' (until-like) IS until-like
' (exit-like)  IS exit-like

\ ...

: (local) ( addr u -- ) \ local paren-local-paren
    \ a little space-inefficient, but well deserved ;-)
    \ In exchange, there are no restrictions whatsoever on using (local)
    \ as long as you use it in a definition
    dup
    if
        nextname POSTPONE { [ also locals-types ] W: } [ previous ]
    else
        2drop
    endif ;

: >definer ( xt -- definer ) \ gforth
\ ...
    dup >does-code
    ?dup-if
        nip 1 or
    else
        >code-address
    then ;

: definer! ( definer xt -- ) \ gforth
    \G The word represented by @var{xt} changes its behaviour to the
    \G behaviour associated with @var{definer}.
    over 1 and if
        swap [ 1 invert ] literal and does-code!
    else
        code-address!
    then ;

:noname
    ' dup >definer [ ' locals-wordlist ] literal >definer =
    if
        >body !
    else
        -&32 throw
    endif ;
:noname
    comp' drop dup >definer
    case
        [ ' locals-wordlist ] literal >definer \ value
        OF >body POSTPONE Aliteral POSTPONE ! ENDOF
        \ !! dependent on c: etc. being does>-defining words
        \ this works, because >definer uses >does-code in this case,
        \ which produces a relocatable address
        [ comp' some-clocal drop ] literal >definer
        OF POSTPONE laddr# >body @ lp-offset, POSTPONE c! ENDOF
        [ comp' some-wlocal drop ] literal >definer
        OF POSTPONE laddr# >body @ lp-offset, POSTPONE ! ENDOF
        [ comp' some-dlocal drop ] literal >definer
        OF POSTPONE laddr# >body @ lp-offset, POSTPONE 2! ENDOF
        [ comp' some-flocal drop ] literal >definer
        OF POSTPONE laddr# >body @ lp-offset, POSTPONE f! ENDOF
        -&32 throw
    endcase ;
interpret/compile: TO ( c|w|d|r "name" -- ) \ core-ext,local

: locals| ( ... "name ..." -- ) \ local-ext locals-bar
    \ don't use 'locals|'! use '{'! A portable and free '{'
    \ implementation is compat/anslocals.fs
    BEGIN
        name 2dup s" |" str= 0=
    WHILE
        (local)
    REPEAT
    drop 0 (local) ; immediate restrict
http://www.complang.tuwien.ac.at/cvsweb/cgi-bin/cvsweb/gforth/glocals.fs?annotate=1.67;sortby=rev;f=h;only_with_tag=MAIN;ln=1
Is there any built-in method in Java to find the size of any datatype? Is there any way to find the size?

No. There is no such method. It is not needed in Java, since the language removes the need for an application to know how much space must be reserved for a primitive value, an object, or an array with a given number of elements.

You might think that a sizeof operator would be useful for people who need to know how much space their data structures take. However, you can get this information, and more, simply and reliably using a Java memory profiler, so there is no need for a sizeof method.

@Davor makes the point that sizeof(someInt) would be more readable than 4. I'm not sure that I agree ... because I think that every Java programmer ought to know that an int is always 32 bits (4 bytes). (And besides, it is rarely necessary to use the size of a type in Java code ...) However, if you accept that readability argument, then the remedy is in your hands. Simply define a class like this ...

public class PrimitiveSizes {
    public static int sizeof(byte b) { return 1; }
    public static int sizeof(short s) { return 2; }
    // etcetera
}

... and statically import it ...

import static PrimitiveSizes.*;

Why haven't the Java designers implemented this in the standard libraries? My guess is that:

The key word in the above is they. (It does not matter what you or I think about it ... unless you have some influence in the decision-making process.) There is also the issue that the next demand would be for a sizeof(Object o) method, which is fraught with technical difficulties.
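For completeness, here is a filled-out version of the answer's PrimitiveSizes sketch. This is not a standard API: the class name and the overloads beyond byte/short are illustrative, with the JLS-defined sizes taken from the Byte.BYTES, Short.BYTES, etc. constants (available since Java 8).

```java
// Illustrative only: Java has no sizeof operator, so each overload just
// returns the JLS-defined size of its parameter's primitive type.
public class PrimitiveSizes {
    public static int sizeof(byte b)   { return Byte.BYTES;      } // 1
    public static int sizeof(short s)  { return Short.BYTES;     } // 2
    public static int sizeof(char c)   { return Character.BYTES; } // 2
    public static int sizeof(int i)    { return Integer.BYTES;   } // 4
    public static int sizeof(long l)   { return Long.BYTES;      } // 8
    public static int sizeof(float f)  { return Float.BYTES;     } // 4
    public static int sizeof(double d) { return Double.BYTES;    } // 8

    public static void main(String[] args) {
        // Overload resolution picks the right size from the argument's type.
        System.out.println(sizeof(42));  // prints 4
        System.out.println(sizeof(42L)); // prints 8
    }
}
```

With `import static PrimitiveSizes.*;` in place, call sites read as the answer suggests: `sizeof(someInt)` instead of a bare 4.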
https://codedump.io/share/aFjmGwt0cSUe/1/is-there-any-sizeof-like-method-in-java
@Unwrap question
Ning Zhao, Jul 7, 2007 5:45 AM

Hi folks,

I would like to use the @Unwrap annotation in an APPLICATION-scoped dataStore bean. This stateless bean mainly caches the read-only static data used in the application. I'd like to learn more detail about @Unwrap than is already presented in the reference manual. (I build a new Seam dist every day from the CVS, so my reference manual is always the latest.) Here are my questions:

1. What's the difference between @Factory and @Unwrap in the following two code fragments? (from reference section 3.8)

@Factory(scope=CONVERSATION)
public List<Customer> getCustomerList() {
    return ... ;
}

and

@Name("customerList")
@Scope(CONVERSATION)
public class CustomerListManager {
    ...
    @Unwrap
    public List<Customer> getCustomerList() {
        return ... ;
    }
}

The reference says: "A manager component is any component with an @Unwrap method. This method returns the value that will be visible to clients, and is called every time a context variable is referenced." Doesn't the method in the first code fragment, with the @Factory annotation, also return a List? Is it not visible to clients?

2. The reference says: "An even more powerful pattern is the manager component pattern. In this case, we have a Seam component that is bound to a context variable, that manages the value of the context variable, while remaining invisible to clients." What does the manager component here exactly manage? And in which situation/context is this manager component pattern best used? I looked at the HenHouse example; it seems useful when dealing with events which would cause value changes in the managed list.

Many thanks in advance for any enlightenment!

Regards,
Ellen

1. Re: @Unwrap question
Richard Leherpeur, Jul 7, 2007 10:22 AM (in response to Ning Zhao)

As far as I know, one difference is the way you are going to access your object. If you use @Factory, you'll have to access it through the bean: #{myBeanName.myMethodName} (any first reference to myBeanName will trigger a call to the factory method). Using @Unwrap, you directly use the bean name: #{myBeanName} (the bean and any other methods remain hidden; only the result of the unwrap method is exposed). There might be other differences, but I'm not aware of them...

2. Re: @Unwrap question
Wolfgang Schwendt, Jul 7, 2007 10:56 AM (in response to Ning Zhao)

"enzhao" wrote: "What's the difference between @Factory and @Unwrap in the following two code fragments? (from reference section 3.8)"

The factory method gets called only if the referenced context variable the factory method is defined for is not yet bound to a value. In your case the context variable is "customerList". Once this context variable is set to a value, the factory won't be called anymore when "customerList" gets referenced additional times. In contrast, the method annotated with @Unwrap gets called EVERY time the manager component with the name "customerList" is referenced.

3. Re: @Unwrap question
Pete Muir, Jul 8, 2007 9:25 AM (in response to Ning Zhao)

@Factory is useful to populate a context variable that is unchanging, or whose changes originate within its scope (for example a list of hotels available to book). @Unwrap is useful if the variable can change outside your control (e.g. from another user's conversation).

4. Re: @Unwrap question
Ning Zhao, Jul 8, 2007 7:08 PM (in response to Ning Zhao)

Thanks to you all! :-)

Regards,
Ellen

5. Re: @Unwrap question
Marius Oancea, Oct 15, 2007 1:45 PM (in response to Ning Zhao)

As I know, the @Unwrap-annotated method is called every time you access the customerList variable. The @Factory-annotated method gets called only once (if the variable is not yet initialised).

6. Re: @Unwrap question
Mohammad Norouzi, Oct 16, 2007 8:34 AM (in response to Ning Zhao)

Hi all,

I have a question about @Unwrap. I tried @Unwrap in a test application and, as I understood, the whole bean acts on behalf of the wrapped context variable. For instance, if I have:

@Name("bean")
class MyBean {
    ...
    @Unwrap
    public User getUser() { }
}

then "bean" refers to an instance of type User, doesn't it? But the thing I want to know is: if the values inside a page are changed by the user and he presses the submit button, are these new changes visible at the server side? For example, the user changes his address, which is a property of class User; will this change be applied to the context variable inside MyBean?

Thanks
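The cached-once versus recomputed-every-time distinction from the replies can be made concrete without a Seam container. The following is a plain-Java sketch, and every name in it is made up for illustration: factoryValue() mimics @Factory semantics (the context variable is populated on first reference and then reused), while unwrapValue() mimics a manager component's @Unwrap method (re-invoked on every reference).

```java
import java.util.function.Supplier;

// Plain-Java illustration of the two Seam lifetimes discussed above.
// No Seam classes are involved; the counters show how often each
// "component" actually runs.
public class UnwrapVsFactory {
    static int factoryCalls = 0;
    static int unwrapCalls = 0;

    static String cached; // the context variable backing the @Factory case
    static final Supplier<String> factory = () -> { factoryCalls++; return "customerList"; };
    static final Supplier<String> manager = () -> { unwrapCalls++;  return "customerList"; };

    // @Factory-style lookup: compute once, then serve the cached value.
    static String factoryValue() {
        if (cached == null) cached = factory.get();
        return cached;
    }

    // @Unwrap-style lookup: the manager method runs on every reference.
    static String unwrapValue() {
        return manager.get();
    }

    public static void main(String[] args) {
        for (int i = 0; i < 3; i++) { factoryValue(); unwrapValue(); }
        System.out.println(factoryCalls + " factory call(s), "
                         + unwrapCalls + " unwrap call(s)");
        // prints: 1 factory call(s), 3 unwrap call(s)
    }
}
```

Three references to each variable yield one factory invocation but three unwrap invocations, which is exactly why @Unwrap suits values that can change outside the current context.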
https://developer.jboss.org/thread/137093
See my new blog at .jeffreypalermo.com

I use strongly-typed collections all the time for items that are going to be reused quite often. I have recently gone on an adventure to educate myself about what collections are available in the .NET Framework and how they differ. My current scenario calls for a collection that stores a pair of strings, retains the order in which I added the items, and can be bound directly to a list control. A dataset would be overkill.

Right away, I know I'm going to be dealing with a key-value pair. This automatically rules out all the collections based on the IList interface: ArrayList, StringCollection. Since I need a key-value pair, I need a collection based on IDictionary: Hashtable, HybridDictionary, ListDictionary, and SortedList. Not too many to choose from, but I refuse to extend CollectionBase for this tiny task that is isolated to one user control!

I began to explore the differences between these four collections that are all based on IDictionary and all store the key-value pair in a DictionaryEntry object. What is really important is the underlying data structure of the collection.

The Hashtable is meant to offer better-than-linear performance on lookups. The order of the items is not guaranteed; in fact, you can add some items, then immediately iterate through to print them out, and they will often be in a completely different order. OK, Hashtable is disqualified.

The HybridDictionary works well for my purposes ... only if the number of items is no greater than 10. This is because it's "hybrid": with 10 items or fewer it implements a ListDictionary under the hood (see next section). As soon as you add the 11th entry, it switches to a Hashtable, and all ordering reliability goes out the window.

The ListDictionary meets the requirements. It stores a key-value pair, and it retains the order in which I add the items. It does this by implementing the linked-list data structure internally. According to the SDK, the ListDictionary has better performance than a Hashtable up to 10 items, and then the Hashtable becomes faster. The SDK also warns that the ListDictionary should not be used with many items where performance is an issue.

I also looked at the SortedList just to make my comparison complete. The SortedList behaves like a Hashtable in that you can use a key to look up the value, but you can also use the index to find the value. Internally it keeps two arrays synchronized: one array for the keys and one for the values. It sorts the entries when they are added, so the list is always sorted, hence the name.

When Whidbey gets here, I won't have to worry about strongly-typed collections anymore. I'll be able to declare them generically ... or is that specifically? :-) I can't wait!

I love the SDK. Especially since I don't have access to Google and my other online resources, I have used the SDK documentation so much! It is a wealth of knowledge that every .NET developer should reference.

To get back to the collections, I did look at the NameValueCollection, but it does not implement IDictionary. Also, I was not able to bind it to the list control in the way I wanted. If I just specified the source and then hit the "DataBind()" key [:-)], it would bind the list using the key for the text as well as the value. But I need to set the DataTextField and DataValueField. With IDictionary-based collections, "Key" and "Value" work, but the NameValueCollection returns a single string, not a DictionaryEntry object, so this presents a problem in a binding situation. Sure, I could iterate through it and get the keys and values, but I'm just databinding, so I had to rule out this collection.

Finally, I chose the ListDictionary for my purposes. Then I started thinking that in some cases my key-value set could become large, so I can't use the ListDictionary. I might have to write one more strongly-typed collection. But for now, I'll use an ArrayList of DictionaryEntry objects. How's that for thinking outside the box? The ArrayList will grow dynamically and keep my items in a specific order because of the IList interface, but I can bind the key and value because the object being stored is a DictionaryEntry object. I think I'll call it DictionaryList. So one more time, I have to extend CollectionBase (its internal data structure is an ArrayList). Normally I could extend the DictionaryBase class, but internally it uses a Hashtable, which doesn't guarantee order. And I'm back to strongly-typed collections ... but this one I'll reuse over and over and over and over.

public class DictionaryList : CollectionBase
{
    public DictionaryEntry this[int index]
    {
        get { return (DictionaryEntry)List[index]; }
        set { List[index] = value; }
    }

    public int Add(object key, object value)
    {
        DictionaryEntry entry = new DictionaryEntry(key, value);
        return List.Add(entry);
    }

    // rest of class truncated
}

Ahh, the answer to so many of my problems! Now it's so simple to pop a few items in and then bind to a list control:

list.DataSource = myDictionaryList;
list.DataTextField = "Key";
list.DataValueField = "Value";
list.DataBind();
list.SelectedValue = myValue;
http://codebetter.com/blogs/jeffrey.palermo/archive/2004/04/16/11605.aspx
What's New in Visual Studio Team System

Microsoft Visual Studio Team System 2008 includes many new and enhanced features, which are summarized in this topic. For more information about which features are available in each edition, see Visual Studio Team System 2008 Team Edition Comparison. To read more about how you can use Visual Studio Team System 2008 for real-world software development, see the following article series: Global Bank: A Scenario for Visual Studio Team System 2008.

Topic Contents

- Service Pack 1 for Team Foundation Server
- Team Foundation Version Control
- Team Foundation Work Item Tracking
- Migration Tool for Visual SourceSafe
- Team Foundation Source Control
- Team Foundation Work Item Tracking
- Team Foundation Server Management
- Design Application Systems by Using a Top-Down Approach
- Conform .NET Web Service Endpoints to WSDL Files
- Generate ASP.NET Web Application Projects for ASP.NET Applications
- Save, Import, and Export Custom Prototypes
- Select From Multiple .NET Framework Versions
- Select From Multiple Office Versions
- Testing Methods of Your Code

Service Pack 1 for Team Foundation Server

In addition to the features and improvements that are listed later in this section, Service Pack 1 also adds the following administrative enhancements:

- Support for Microsoft SQL Server 2008
- Links to Team System Web Access
- Improvements in performance and scalability

Team Foundation Build

- You can find the TFSBuild.proj file more easily. You can right-click a build definition name in Team Explorer and then click View Configuration Folder to locate the TFSBuild.proj file in version control.
- You can determine how a build trigger was set. A new property indicates how a build was triggered, and you can write scripts in the TFSBuild.proj file that run differently based on each possible value for this field. These values include Manual, IndividualCI, BatchedCI, Schedule and ScheduleForced. The property also appears in the build log file. For more information, see Reason Property and Overview of Build Reports.
- You can detect test results. Instead of failing a build, you can detect test results and set build conditions based on those results.

Team Foundation Version Control

- You can add items to version control more easily. When you add items to version control, you follow a wizard format to specify the files that you want to add and filter out files that you do not. You can also add files and folders by dragging and dropping them from Windows Explorer into Source Control Explorer. For more information, see How to: Add Non-Project or Non-Solution Files and Folders to Version Control.
- You manage all files in version control the same way, regardless of whether they are bound to solutions or projects. All version-controlled files are now treated equally, whether they are part of an open project or solution or not. Context menus provide all the standard functionality for version control at a single-file level.
- You can map working folders more easily. You can map working folders, cloak mapped folders, or remove working folders by right-clicking them in Source Control Explorer. As an alternative, you can verify whether a folder has been mapped by reviewing a link path in Source Control Explorer. If the folder is not mapped, you can click a link to open the Map dialog box. For more information, see How to: Create a Mapped Workspace, How to: Modify a Workspace, How to: Remove a Workspace, and How to: Cloak and Uncloak Folders in a Workspace.
- You can determine when a file was checked in most recently. Source Control Explorer includes a column that shows the date and time of the most recent check-in.
- You can specify the source location for a file. In Source Control Explorer, you can type a path in the Source Location box.
- You can download files directly in memory. Instead of downloading files to temporary files and then reading them, you can download the files directly in memory and process their contents.
- You can create a branch up to 10 times faster. By using the /checkin option for the tf branch command, you create the branch without first pending the changes and checking them in later. For more information, see Branch Command.
- You can optimize downloading the files to your workspace after you switch your workspace from one branch to another in the same code base. The /remap option of the tf get command optimizes for identical files by downloading only the items that differ between the two branches. For more information, see Get Command.

Team Foundation Work Item Tracking

- You can track work items by using the Team tab on the ribbon in Office 2007. For more information, see Managing Work Items in Microsoft Excel and Microsoft Project.
- You can attach queries and links to work items in an e-mail message. In Team Explorer, you can right-click a query to send a work item or a list of work items in e-mail. If you have Team System Web Access, the message contains links to the item or query so that recipients can more easily explore related work items. For more information, see How to: Send Query Results in E-Mail.

Migration Tool for Visual SourceSafe

- VSSConverter converts files that have the same name as a previously deleted file, thus eliminating namespace conflicts. For more information, see Migrating from Visual SourceSafe.
- When you convert a source tree, solutions are automatically rebound to Team Foundation instead of Visual SourceSafe.
- VSSConverter automatically corrects timestamp issues. Many Visual SourceSafe databases contain timestamp inconsistencies because Visual SourceSafe uses a client timestamp instead of a server one. VSSConverter automatically adjusts for this problem.
- You can more easily diagnose conversion problems. The messages that are written into the log file during conversion are clearer and provide more information.

Several components of Team Foundation have new features and improvements for Visual Studio Team System 2008 Team Foundation Server.

Team Foundation Build

Build Definitions
Build definitions replace the build types of Microsoft Visual Studio 2005 Team System. Unlike build types, you can use the Team Explorer user interface to modify build definitions. Build definitions also have workspace support in version control. You can now specify local paths and store the build files in any location you specify in version control. For more information, see How to: Create a Build Definition and Understanding Team Foundation Build Configuration Files.

Continuous Integration of Builds
You can specify a trigger for a build when you create a new build definition or modify an existing one. You can use on-demand builds, rolling builds, and continuous integration where each check-in starts a build. You can also define how long to wait between builds when defining rolling builds. For more information, see How to: Create a Build Definition.

Scheduled Builds
You can now run builds on a schedule, even if there are no changes. For more information, see How to: Create a Build Definition.

Build Agents
Build agents can be named independently of the build computer name. For more information, see How to: Create and Manage Build Agents. Each build agent can connect to a build computer via two ports: an interactive port and the default port used to run builds. For more information, see How to: Configure an Interactive Port for Team Foundation Build.

HTTPS and Secure Sockets Layer (SSL) for Build
You can now set up Team Foundation Build to require HTTPS and SSL. For more information, see How to: Set up a Build Agent to Require HTTPS and Secure Sockets Layer (SSL).

New Properties for Customizing Team Foundation Build
Team System 2008 Team Foundation Server includes new properties for customizing builds. These properties include customizing the behavior of C++ builds, SkipInvalidConfigurations, CustomizableOutDir, and CustomizablePublishDir. For more information, see Customizable Team Foundation Build Properties.

New Tasks and Targets for Customizing Team Foundation Build
Team Foundation Build includes a number of new targets that can be overridden to customize the build process. For more information, see Customizable Team Foundation Build Targets, BuildStep Task, GetBuildProperties Task, SetBuildProperties Task, and WorkspaceItemConverterTask Task.

Team Foundation Source Control

Destroy
You can now destroy, or permanently delete, source-controlled files from Team Foundation version control. For more information, see Destroy Command.

Get Latest on Check-Out
You can now enable Team Foundation version control to retrieve the latest version of a file automatically when you check it out. For more information, see Team Foundation Check-Out Settings.

Annotating Files
You can now annotate source code files. You can view line-by-line information in source code about what changes were made, who made the changes, and when the changes were made. For more information, see How to: View File Changes Using Annotate.

Comparing Folders
You can now compare two server folders, two local folders, or a server folder and a local folder using source control. You can see differences such as missing items, and items that have additions, deletions, or conflicting changes. For more information, see How to: Compare Two Folders.

Team Foundation Work Item Tracking

The performance of most work item tracking operations under a heavy load has improved significantly. When compared to Visual Studio 2005 Team Foundation Server, throughput has doubled. It now takes less time to complete individual operations. CPU usage on the Team Foundation data-tier server has been reduced. Large organizations can support more work item tracking users on their existing servers than they could with Visual Studio 2005 Team Foundation Server. Visual Studio Team System 2008 Team Foundation Server is more scalable.
Improved scalability has significantly reduced the response times of most work item tracking operations when the server is under load. This is especially true for teams of more than 500 people. Large organizations should be able to support more work item tracking users on their existing servers than they could with Visual Studio 2005 Team Foundation Server. Team Foundation Server Management Adding large numbers of users to Visual Studio Team System 2008 Team Foundation Server is much more reliable and less likely to cause long delays or other problems. While the total number of supported users has not changed, synchronization of users between Active Directory and Visual Studio Team System 2008 Team Foundation Server completes much more quickly. Visual Studio Team System Architecture Edition contains new features and improvements for the following areas in Visual Studio Team System 2008: Design Application Systems by Using a Top-Down Approach You can now use a top-down approach to design application systems by starting with System Designer. You can start with a new system design solution or you can continue with an existing solution. You can add systems, applications, and endpoints directly to your system definition as members. You can add endpoints directly to the boundary of your system definition and delegate their behavior to members at a later time. You can rename members and their underlying definitions at the same time. You can repair members of application systems that become orphaned from their definitions. Conform .NET Web Service Endpoints to WSDL Files You can now conform the operations in an existing .NET Web Service provider endpoint to a WSDL file. Generate ASP.NET Web Application Projects for ASP.NET Applications You can now select the ASP.NET Web Application template to implement an ASP.NET application. This action generates the corresponding project type for the application.
Save, Import, and Export Custom Prototypes You can now save or install custom prototypes either for your use only or for all users on your computer. You can now install custom prototypes by importing them instead of editing the registry. You can now export custom prototypes that you want to share with others. Select From Multiple .NET Framework Versions You can now select .NET Framework 2.0, 3.0, or 3.5 for ASP.NET, Windows, and Office applications. Select From Multiple Office Versions You can now select Office 2003 or Office 2007 project templates for Office applications. For more information, see What's New in Architecture Edition. Visual Studio Team System Database Edition is now integrated in the Visual Studio Team System installation. You no longer have to install it separately when you install the full suite. Specify Table and Index Options You can now specify options in your table and index definitions, such as the vardecimal storage format that is new in Microsoft SQL Server 2005. For more information, see How to: Specify Table and Index Options. Code Analysis Code analysis tools perform extensive checks for code defects, which are presented as warnings in the error window. For more information, see Writing Quality Code, Code Analysis for Managed Code Warnings and Code Analysis for C/C++ Warnings. Code Analysis has been enhanced with the following features: Rules Extension and Enhancement Code analysis has more than 20 new rules. Several rules have been enhanced by providing greater accuracy, particularly around naming rules. For more information, see Code Analysis for Managed Code Warnings, Code Analysis for C/C++ Warnings and How to: Enable and Disable Code Analysis for Managed Code. Spelling Checker with Custom Dictionary Support You can use the spelling checker for resource strings as well as class, method, and property names. You can use a custom dictionary to check non-standard words. 
Better Control over Suppression from the Error List You can suppress code analysis issues from the error window at either the project level or in-source. Auto-Suppress Generated Code Option You can automatically suppress error messages from generated code. This is particularly useful for designer-generated code. Code Analysis Policy Improvements When you copy the settings from the server to your project, you now have the option to replace your local selection, or merge the policy rules with your local project rules. Also, you now have more complete information about policy violations. This enables you to determine the source of the violation. Code Metrics Code metrics are a set of software measures that give developers better insight into the code they are developing. By taking advantage of code metrics, developers understand which types and/or methods should be reworked or more thoroughly tested. In addition, development teams identify potential risks, understand the current state of a project, and track progress during software development. For more information about Code Metrics, see Measuring Complexity and Maintainability of Managed Code. Profiling Tools Profiling tools in Visual Studio Developer Edition enable developers to measure, evaluate, and target performance-related issues in their code. For more information about profiling tools, see Analyzing Application Performance using Profiling Tools. The following features have been added to the Profiling Tools: 64-Bit Support The Profiler now includes support for both 64-bit applications that run on 64-bit operating systems and hardware and 32-bit applications that run on 64-bit operating systems and hardware. Full Allocation Stacks The Profiler has full call stacks for allocation. This is useful for allocation that occurs in non-user code, but is indirectly caused by user actions. By using the full stack, you can see exactly which parts of your code are indirectly causing the allocation.
You can collect allocation data by configuring settings in the performance session property page. Use the allocation view in the performance report to see your results. For more information, see How to: Collect .NET Memory Allocation and Lifetime Data and Profiler .NET Memory Allocations View. Line-level Sampling Data Profiling tools now include instruction pointer and line views in performance reports. Also, the modules view now includes line information. For more information, see Instruction Pointer (IP) View, Lines View and Modules View. Report Noise Reduction You can configure performance reports for noise reduction. This limits the amount of data in the Call Tree view and the Allocation view. By using noise reduction, performance problems are more prominent. This is helpful when you analyze performance reports. For more information, see How to: Configure Noise Reduction in Performance Reports, Call Tree View and Profiler .NET Memory Allocations View. Runtime Control Profiling tools include a runtime control. The runtime control starts automatically with the profiler. It can be paused and resumed for performance data logging. In addition, you can use the runtime control to start the application with logging paused. This enables you to skip data collection on application startup. When you use the runtime control, you can manually insert annotations in the performance data when events of interest occur in the application lifetime. You can filter the data on your annotations later. Filtered Analysis You can now filter performance reports on timestamp, process, thread, and marks. You can use the show query button to get the filtered analysis. Also, you can use the /summaryfile option from the VSPerfReport command. For more information, see VSPerfReport. Compare Reports The Profiler now supports the comparison of reports. You can compare a report either by using the Performance Explorer or the /diff option from the VSPerfReport command.
For more information, see Comparing Profiling Tools Data Files, How to: Compare Profiler Data Files and VSPerfReport. Improved Chip Counter Support Profiling tools provide new friendlier chip-counter names (for example, "L2 Misses", "ITLB Misses", "Mispredicted Branches"). You can modify XML files to further configure counters for a specific architecture. Windows Counter Support The Profiler now collects Windows counters (for example, "% Processor Time", "% Disk Time", "Disk Bytes/sec", "Page Faults/sec"). You can use either the Windows counters node in the performance session properties page or the /wincounter option from the VSPerfCmd command. The marks view displays the counters. You can use counters as filtering endpoints. For more information, see Marks View, How to: Collect Windows Counter Data and VSPerfCmd. Compressed Report Files Profiling tools enable you to generate small compressed report files that open up quickly. This is because these files, which are created from full reports, are analyzed already. You can either right-click the report in the Performance Explorer and choose Save Analyzed or use the /summaryfile option from the VSPerfReport command. For more information, see How to: Save Analyzed Profiling Tools Report Files and VSPerfReport. Hot Path Profiler now has the ability to automatically expand the most expensive code path in the call tree and allocation view of the performance report. For more information, see Call Tree View and Profiler .NET Memory Allocations View. Copy Report View Data to HTML The Profiler includes support for rich reports in the clipboard. You can copy and paste rich data (tables with headers and values) from the performance reports. Windows Communications Foundation Support Profiling tools now support Windows Communications Foundation (WCF). Load and Web Test Integration in Visual Studio Team Suite You can create performance sessions for Web and Load tests from Test View and Test Results.
Visual Studio Team System Test Edition contains new features and improvements for the following areas in Visual Studio Team System 2008 Test Edition: Testing Methods of Your Code You can now create and run unit tests more easily and quickly, and for more kinds of production code. Use Unit Tests in Visual Studio Professional Edition Developers using Visual Studio Professional Edition can now create and run two types of tests: unit and ordered. You can use a unit test to validate that a specific method of production code works correctly, to test for regressions, or to perform buddy testing or smoke testing. Ordered tests run other tests in a specified order. For more information, see Using Testing Tools in Visual Studio Professional Edition. Run Unit Tests More Easily New menus and key combinations enable developers of unit tests to start test runs and select the tests to run more quickly. Also, you can now generate tests from a binary file, without access to product source code. You can generate tests for generic data types as return values and method parameters. For more information, see How to: Run Selected Tests, How to: Create and Run a Unit Test, and Unit Tests and Generics. Use Inheritance Between Test Classes Test classes can now inherit members from other test classes. This enables developers to create initializations or tests in a base test class, from which all other derived tests classes will inherit. This feature eliminates duplicated test code. This gives developers more options to customize their unit tests correctly. For more information, see Unit Tests Overview. Run Unit Tests on Devices Visual Studio provides a suite of tools for testing C# and Visual Basic smart device applications. These tools provide a subset of the functionality found in Test Edition. For more information, see Testing Tools for Smart Device Projects. Create Host Adapters Typically, you run tests in the default environment provided by the Team System testing tools. 
To run tests in a different environment, use a host adapter. You can use the Visual Studio SDK to create new host adapters. Download the Visual Studio SDK from the affiliate site. Improved Unit Test Data Binding You can now use a wizard to easily bind a unit test to a data source, including CSV files and XML files. For more information, see How to: Configure a Data-Driven Unit Test. Web Testing Web Sites Visual Studio Team System 2008 Test Edition offers more control for authoring Web tests. Call a Web Test From Another Web Test You can insert a call to one Web test from a second Web test. For more information, see How to: Insert a Call to Another Web Test. Improved Web Test Data Binding Test Edition now includes built-in support for CSV and XML files. A new wizard facilitates the data binding process. You can also preview the data before you complete the process. For more information, see Data Binding in Web Tests. Improved Web Test Features Test Edition now includes support for test level validation rules. You can create validation rules at the test level. These new rules can apply to all individual requests in the test. You can stop a Web test if an error occurs in the test. Also, you can validate the return of an expected HTTP status code. For more information, see Using Validation and Extraction Rules. In Test Edition you can now extract requests from Web tests to create new Web tests. You can also insert calls to other Web tests. This means you can create Web test components and reuse your Web tests and Web requests. For more information, see How to: Extract a Web Test and How to: Insert a Call to Another Web Test. Load Testing You can now use more realistic load modeling options for running load tests. Also, you can organize the returned data in ways that are richer and more flexible. Control Load Modeling Load tests now offer additional load-modeling options. These options enable you to model load more realistically.
Improved Load Test Analyzer Views Test Edition Load Test Analyzer includes a new summary view that displays the key indicators and results in a single page that you can print and export. Also, four new built-in graphs display key information. You can view up to four graphs at the same time. These enhancements also enable you to view up to four tables at the same time. For more information, see the following: Improved Load Test Results Repository Management Test Edition includes a new Repository Management dialog box that enables you to access the load test results repository directly. Now it is easy for you to open, import, export, and delete load test results. For more information, see Managing Results in a Repository. Published Schema for XML Files As you work with Test Edition, it creates and stores data in XML files. In Team System 2008 Test Edition, all the XML files used by Test Edition are defined by a new XSD named TestTypes.xsd. Any edits that you make to any of these files, manually or programmatically, must result in XML that conforms to the schema defined in this XSD. Similarly, any files that you create with these extensions must also conform to the schema defined in this XSD. Otherwise, Test Edition cannot use them. Test projects created in Visual Studio 2005 contain XML files. When you open a Visual Studio 2005 test project, the Visual Studio 2008 project upgrade wizard prompts you for permission to convert the files into the new format. To use the files in Team System 2008 Test Edition, you must convert them to the new format. This release provides the following benefits: Improved Web test validation rules. You now have more flexibility to apply validation rules and use their results to control Web test program flow. Better control of load modeling. You now have more flexible ways to control the load modeling in load tests that you run. Improved load test analyzer views. New built-in graphs and viewing capabilities make it easier for you to quickly understand load test results.
Improved load test results repository management. You now have easier access to the repository for load test results. Schematized XML file for test results. You can now work programmatically with the test results that are automatically stored in XML format in a .trx (test results) file. For more information, see What's New in Test Edition.
https://msdn.microsoft.com/en-us/library/bb385832(VS.90).aspx
Load a block of text from a file.

    #include "mbl_parse_block.h"
    #include <vcl_cctype.h>
    #include <vcl_cstring.h>
    #include <vcl_iostream.h>
    #include <mbl/mbl_exception.h>

Definition in file mbl_parse_block.cxx.

Read a block of text from a stream. This function reads through a stream and stores the text it finds in a string. The function terminates when it finds the closing brace. The stream's fail bit will be set on error. Comment lines beginning with // will be stripped. Returns the last character to be read from the stream. Definition at line 28 of file mbl_parse_block.cxx.
http://public.kitware.com/vxl/doc/development/contrib/mul/mbl/html/mbl__parse__block_8cxx.html
I’m trying out migrations for the first time, and I’m having a problem with my sqlite3 db. A trivial example of what I’m seeing:

    class InitDb < ActiveRecord::Migration
      def self.up
        create_table :mytable, :force => true do |t|
          t.column :lname, :string
          t.column :created_at, :string, :default => 'CURRENT_TIMESTAMP'
        end
      end

      def self.down
      end
    end

When I dump the SQL from that command (with #to_sql), the CURRENT_TIMESTAMP field is single quoted. This tells Sqlite3 that you want to use a string literal as the default, not the CURRENT_TIMESTAMP function. So, instead of default timestamps, I get literal "CURRENT_TIMESTAMP" strings in my records. Is there any way to tell ActiveRecord to not quote the default field? I thought about calling #execute with an "ALTER TABLE ADD COLUMN" command, but from the sqlite3 docs:. So I can’t do this, either…

Thanks, Morgan
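The quoting behaviour Morgan describes can be confirmed outside Rails with any SQLite client. The sketch below uses Python's standard-library sqlite3 module (table and column names are made up for the demo): a single-quoted 'CURRENT_TIMESTAMP' default is stored as a string literal, while the unquoted keyword invokes the timestamp function.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Single-quoted: SQLite treats the default as a string literal.
conn.execute("CREATE TABLE quoted (lname TEXT, created_at TEXT DEFAULT 'CURRENT_TIMESTAMP')")
# Unquoted: SQLite treats the default as the CURRENT_TIMESTAMP function.
conn.execute("CREATE TABLE unquoted (lname TEXT, created_at TEXT DEFAULT CURRENT_TIMESTAMP)")
conn.execute("INSERT INTO quoted (lname) VALUES ('smith')")
conn.execute("INSERT INTO unquoted (lname) VALUES ('smith')")

literal = conn.execute("SELECT created_at FROM quoted").fetchone()[0]
stamp = conn.execute("SELECT created_at FROM unquoted").fetchone()[0]
print(literal)  # the literal string CURRENT_TIMESTAMP
print(stamp)    # an actual timestamp, e.g. 2012-05-10 07:06:00
```

Since the unquoted form is what is needed, one commonly suggested workaround on the Rails side is to issue the whole CREATE TABLE as raw SQL via execute in the migration, which bypasses ActiveRecord's quoting of the :default value entirely.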
https://www.ruby-forum.com/t/sqlite3-migration-and-current-timestamp/58330
The Meta Object Compiler, moc among friends, is the program which handles Qt's C++ extensions. The moc reads a C++ source file. If it finds one or more class declarations that contain the Q_OBJECT macro, it produces another C++ source file which contains the meta object code for the classes that use the Q_OBJECT macro. Among other things, meta object code is required for the signal/slot mechanism, runtime type information and the dynamic property system. For more background information on moc, see Why doesn't Qt use templates for signals and slots?

The moc is typically used with an input file containing class declarations like this:

    class MyClass : public QObject
    {
        Q_OBJECT
    public:
        MyClass( QObject * parent=0, const char * name=0 );
        ~MyClass();

    signals:
        void mySignal();

    public slots:
        void mySlot();
    };

In addition to signals and slots, the moc also processes property declarations (the Q_PROPERTY macro), enumeration types declared with Q_ENUMS, and enumeration types declared with Q_SETS that are to be used as sets, i.e. OR'ed together.

Another macro, Q_CLASSINFO, can be used to attach additional name/value-pairs to the class' meta object:

    class MyClass : public QObject
    {
        Q_OBJECT
        Q_CLASSINFO( "Author", "Oscar Peterson")
        Q_CLASSINFO( "Status", "Active")
    public:
        MyClass( QObject * parent=0, const char * name=0 );
        ~MyClass();
    };

The three concepts (signals and slots, properties, and class meta-data) can be combined.

The output produced by the moc must be compiled and linked, just like the other C++ code in your program; otherwise the build will fail in the final link phase. By convention, this is done in one of the following two ways:

Method A: if the class declaration is found in a header file (myclass.h), the moc output is put in a file called moc_myclass.cpp, which is compiled and linked like any other source file.

Method B: if the class declaration is found in an implementation file (myclass.cpp), add #include "myclass.moc" at the end. This will cause the moc-generated code to be compiled and linked together with the normal class definition in myclass.cpp, so it is not necessary to compile and link it separately, as in Method A.

Method A is the normal method. Method B can be used in cases where you want the implementation file to be self-contained, or in cases where the Q_OBJECT class is implementation-internal and thus should not be visible in the header file. For anything but the simplest test programs, it is recommended that you automate running the moc.
By adding some rules to your program's Makefile, make can take care of running moc when necessary and handling the moc output. We recommend using Trolltech's free makefile generation tool, qmake, for building your Makefiles. This tool recognizes both Method A and B style source files and generates the necessary moc rules for you. If you want to write the Makefile rules yourself, use a rule of the following form for each header that declares a Q_OBJECT class:

    moc_NAME.cpp: NAME.h
            moc $< -o $@

You must also remember to add moc_NAME.cpp to your SOURCES (substitute your favorite name) variable and moc_NAME.o or moc_NAME.obj to your OBJECTS variable. (While we prefer to name our C++ source files .cpp, the moc doesn't care.) If you get link errors about undefined class members, the most common cause is that you have forgotten to compile or #include the moc-generated C++ code, or (in the former case) to include that object file in the link command.

The moc does not expand #include or #define, it simply skips any preprocessor directives it encounters. This is regrettable, but is not usually a problem in practice.

Function pointers cannot be signal or slot parameters. In most cases where you would consider using function pointers as signal/slot arguments, we think inheritance is a better alternative. Here is an example of illegal syntax:

    class SomeClass : public QObject {
        Q_OBJECT
        ...
    public slots:
        // illegal
        void apply( void (*apply)(List *, void *), char * );
    };

Friend declarations cannot be placed in signals or slots sections. Sometimes it will work, but in general it will not. Put them in the private, protected or public sections instead. Here is an example of the illegal syntax:

    class SomeClass : public QObject {
        Q_OBJECT
        ...
    signals:
        friend class ClassTemplate<char>; // WRONG
    };

The C++ feature of upgrading an inherited member function to public status is not extended to cover signals and slots. Here is an illegal example:

    class Whatever : public QButtonGroup {
        ...
    public slots:
        QButtonGroup::buttonPressed; // WRONG
        ...
    };

Type macros cannot be used for signal and slot parameters. Here is an example of the illegal syntax:

    class Whatever : public QObject {
        Q_OBJECT
    signals:
        void someSignal( SIGNEDNESS(int) ); // WRONG
        ...
    };

A #define without parameters will work as expected.

Nested classes cannot have signals or slots. Here is an example of the illegal syntax:

    class A {
        Q_OBJECT
    public:
        class B {
        public slots:   // WRONG
            void b();
            ...
        };
    signals:
        class B {       // WRONG
            void b();
            ...
        };
    };

Constructors cannot be used in signals or slots sections. It is a mystery to us why anyone would put a constructor there, but it is illegal in any case:

    class SomeClass : public QObject {
        Q_OBJECT
    public slots:
        SomeClass( QObject *parent, const char *name )
            : QObject( parent, name ) { } // WRONG
        ...
    };

Properties need to be declared before the public section that contains the respective get and set functions; otherwise the moc cannot resolve them:

    class SomeClass : public QObject {
        Q_OBJECT
    public:
        ...
        enum Priority { High, Low, VeryHigh, VeryLow };
        void setPriority( Priority );
        Priority priority() const;

        Q_PROPERTY( Priority priority READ priority WRITE setPriority ) // WRONG
        Q_ENUMS( Priority ) // WRONG
    };
http://doc.trolltech.com/3.1/moc.html
Get the file from the repository (we use p4 print //depot/path/to/file#rev for this), get the diff (just for that file), and then do:

>>>>>>>
>>>>>>> What version of GNU patch is running on that server? What happens if
>>>>>>> you try to apply that same patch on a developer machine?
>>>>>>>
>>>>>>> Christian
>>>>>>>
>>>>>>> --
>>>>>>> Christian Hammond - chip...@chipx86.com
>>>>>>> Review Board -
>>>>>>> VMware, Inc. -
>>>>>>>
>>>>>>>
>>>>>>> On Thu, May 10, 2012 at 12:06 AM, Nilesh Jaiswal <
>>>>>>> nileshj...@gmail.com> wrote:
>>>>>>>> Dear All,
>>>>>>>>
>>>>>>>> I have posted the queries on this but i haven't got any answer
>>>>>>>> hence i am raising this queries again.
>>>>>>>>
>>>>>>>> After posting review request and then clicking on viewdiff link on
>>>>>>>> RB server i get following error message for the files.
>>>>>>>>
>>>>>>>> The patch to
>>>>>>>> '//sdm/DEV/V14_2_App/XSDGFDH/portals/sidadfds/resources/ApplicationManager.js'
>>>>>>>> didn't apply cleanly. The temporary files have been left in
>>>>>>>> '/tmp/reviewboard.pgHnuQ' for debugging purposes. `patch` returned:
>>>>>>>> patching file /tmp/reviewboard.pgHnuQ/tmpVIuwIQ Hunk #2 FAILED at 931.
>>>>>>>> 1
>>>>>>>> out of 2 hunks FAILED -- saving rejects to file
>>>>>>>> /tmp/reviewboard.pgHnuQ/tmpVIuwIQ-new.rej
>>>>>>>>
>>>>>>>> After looking into the /tmp/reviewboard.pgHnuQ/tmpVIuwIQ-new.rej or
>>>>>>>> diff file in /tmp of RB server its says
>>>>>>>>
>>>>>>>> Pasting diff snippet which contains the following string (*\ No
>>>>>>>> newline at end of file*) in
>>>>>>>> ActionStatusEventHandlerThread.java.diff which has caused the diff
>>>>>>>> error.
>>>>>>>> why can we not handle such cases in viewdiff.py script. Do we have any
>>>>>>>> patch for the same. I would appreciate if you could provide me solution
>>>>>>>> that will solve 100's of such failure issues.
>>>>>>>>
>>>>>>>> private void log(Exception e) {
>>>>>>>> @@ -86,4 +93,4 @@ public class ActionStatusEventHandlerThr
>>>>>>>> return "ActionStatusEventHandlerThread";
>>>>>>>> }
>>>>>>>>
>>>>>>>> -}
>>>>>>>> +}
>>>>>>>> *\ No newline at end of file*
>>>>>>>> Regards,
>>>>>>>> Nilesh
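The rejected hunk above ends with the marker "\ No newline at end of file", which GNU diff emits when a file's final line lacks a trailing newline; patch has to interpret that marker when applying the hunk, and versions of patch differ in how tolerantly they do so, which is why the GNU patch version question above matters. The marker is easy to reproduce (file names here are illustrative, not from the thread):

```shell
printf 'public class Foo {\n}' > old.java      # final line has no newline
printf 'public class Foo {\n}\n' > new.java    # same content, newline added at EOF
diff -u old.java new.java || true              # diff exits 1 when files differ
```

The diff output shows the removed line followed by the "\ No newline at end of file" marker and then the re-added line, exactly the pattern quoted in the failing .diff above.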
https://www.mail-archive.com/reviewboard@googlegroups.com/msg08895.html
Module: Essential Tools
Module Group: Internationalization

RWZoneSimple (inherits from RWZone)

    #include <time.h>
    #include <rw/zone.h>

    RWZoneSimple myZone(USCentral);

RWZoneSimple is an implementation of the abstract interface defined by class RWZone. It implements a simple Daylight Saving Time rule sufficient to represent all historical U.S. conventions and many European and Asian conventions. It is table-driven and depends on parameters given by the struct RWDaylightRule, which is discussed later in this class.

Daylight saving-time rules are volatile, often reflecting geographical and political changes. In some cases, the hard-coded table-driven struct RWDaylightRule does not accurately reflect the locale installed on your machine. RWZone::os() creates a new RWZoneSimple containing the daylight rule discovered from the underlying operating system. The onus of correctness for this DST rule is on the operating system itself. In many cases, you may want more explicit control of the DST rule for the intended RWZoneSimple. If so, you can build a DST rule with arbitrary begin and end times (see the RWDaylightRule below), and provide it as a parameter to RWZoneSimple.

Direct use of RWDaylightRule affords the most general interface to RWZoneSimple. However, a much simpler programmatic interface is offered, as illustrated by the examples below.

Three instances of RWZoneSimple are automatically constructed at program startup, to represent GMT, Standard, and local time. They are available via calls to the static member functions RWZone::utc(), RWZone::standard(), and RWZone::local(), respectively. These member functions are set up according to the time zone facilities provided in the execution environment (typically defined by the environment variable TZ). By default, if DST is observed at all, then the local zone instance will use U.S. (RWZone::NoAm) Daylight Saving Time rules.
Note for developers outside North America: for some time zones this default will not be correct because these time zones rely on the C standard global variable _daylight. This variable is set whenever any alternate time zone rule is available, whether it represents Daylight Saving Time or not. Also the periods of history affected by Daylight Saving Time may be different in your time zone from those in North America, causing the North American rule to be erroneously invoked. The best way to ensure that these default time zones are correct is to construct an RWZoneSimple using an appropriate RWDaylightRule and initialize RWZone::local() and RWZone::std() with this value.

Other instances of RWZoneSimple may be constructed to represent other time zones, and may be installed globally using RWZone static member functions RWZone::local(const RWZone*) and RWZone::standard(const RWZone*).

Persistence: None

To install US Central time as your global "local" time use:

    RWZone::local(new RWZoneSimple(RWZone::USCentral));

To install the underlying operating system's Daylight Saving Time rule as your global "local" time use:

    RWZone::local(&RWZone::os());

To install Hawaiian time (where Daylight Saving Time is not observed), use:

    RWZone::local(new RWZoneSimple(RWZone::Hawaii, RWZone::NoDST));

Likewise for Japan, use:

    RWZone::local(new RWZoneSimple(RWZone::Japan, RWZone::NoDST));

For France:

    RWZone::local(new RWZoneSimple(RWZone::Europe, RWZone::WeEu));

RWZone has predefined values for the RWZone::DstRule rules: Here are the rules used internally for the NoAm, WeEu, and OfficialEU values of RWZone::DstRule.
First, here are the rules for the NoAm value: // last Sun in Apr to last in Oct: const RWDaylightRule usRuleAuld = { 0, 0000, 1, { 3, 4, 0, 120 }, { 9, 4, 0, 120 } }; // first Sun in Apr to last in Oct const RWDaylightRule usRule67 = { &usRuleAuld, 1967, 1, { 3, 0, 0, 120 }, { 9, 4, 0, 120 } }; // first Sun in Jan to last in Oct: const RWDaylightRule usRule74 = { &usRule67, 1974, 1, { 0, 0, 0, 120 }, { 9, 4, 0, 120 } }; // last Sun in Feb to last in Oct const RWDaylightRule usRule75 = { &usRule74, 1975, 1, { 1, 4, 0, 120 }, { 9, 4, 0, 120 } }; // last Sun in Apr to last in Oct const RWDaylightRule usRule76 = { &usRule75, 1976, 1, { 3, 4, 0, 120 }, { 9, 4, 0, 120 } }; // first Sun in Apr to last in Oct const RWDaylightRule usRuleLate = { &usRule76, 1987, 1, { 3, 0, 0, 120 }, { 9, 4, 0, 120 } }; And here are the rules for the WeEu value: // last Sun in March (2am) to last in September static RWDaylightRule euRuleAuld = { 0, 0000, 1, { 2, 4, 0, 120 }, { 8, 4, 0, 120 } }; // last Sun in March (1am) to last in Oct static RWDaylightRule euRuleLate = { &euRuleAuld, 1998, 1, { 2, 4, 0, 60 }, { 9, 4, 0, 60 } }; And here are the rules for the OfficialEU value: // Last Sun in March (2am) to last in Sept static RWDaylightRule euOfficialRuleAuld = { 0, 0000, 1, { 2, 4, 0, 120 }, { 8, 4, 0, 120 } }; // Last Sun in March (2am) to last in Oct static RWDaylightRule euOfficialRuleLate1996 = { &euOfficialRuleAuld, 1996, 1, { 2, 4, 0, 120 }, { 9, 4, 0, 180 } }; Given these definitions, RWZone::local(new RWZoneSimple(RWZone::USCentral, RWZone::NoAm)); is equivalent to the first example given above and repeated here: RWZone::local(new RWZoneSimple(RWZone::USCentral)); Daylight Saving Time systems that cannot be represented with RWDaylightRule and RWZoneSimple must be modeled by deriving from RWZone and implementing its virtual functions. 
For example, under Britain's Summer Time rules, alternate timekeeping begins the morning after the third Saturday in April, unless that is Easter (in which case it begins the week before) or unless the Council decides on some other time for that year. In some years Summer Time has been two hours ahead, or has extended through winter without a break. British Summer Time clearly deserves an RWZone class all its own.

RWZoneSimple(RWZone::StdZone zone, RWZone::DstRule = RWZone::NoAm);

Constructs an RWZoneSimple instance using internally held RWDaylightRules. This is the simplest interface to RWZoneSimple. The first argument is the time zone for which an RWZoneSimple is to be constructed. The second argument is the Daylight Saving Time rule which is to be followed.

RWZoneSimple(const RWDaylightRule* rule, long tzoff, const RWCString& tzname, long altoff, const RWCString& altname);

Constructs an RWZoneSimple instance in which Daylight Saving Time is computed according to the rule specified. Arguments tzoff and tzname are the offset from GMT (in seconds, positive if west of 0 degrees longitude) and the name of standard time. Arguments altoff and altname are the offset (typically equal to tzoff - 3600) and name when Daylight Saving Time is in effect. If rule is zero, Daylight Saving Time is not observed.

RWZoneSimple(long tzoff, const RWCString& tzname);

Constructs an RWZoneSimple instance in which Daylight Saving Time is not observed. Argument tzoff is the offset from GMT (in seconds, positive if west of 0 degrees longitude) and tzname is the name of the zone.

RWZoneSimple(RWZone::StdZone zone, const RWDaylightRule* rule);

Constructs an RWZoneSimple instance in which offsets and names are specified by the StdZone argument. Daylight Saving Time is computed according to the rule argument, if non-zero; otherwise, DST is not observed.
The RWDaylightRule struct passed to RWZoneSimple's constructor can be a single rule for all years, or can be the head of a chain of rules going backwards in time. RWDaylightRule is a struct with no constructors; it can be initialized with the syntax used in the Examples section above. The data members of this structure are as follows:

struct RWExport RWDaylightRule {
    RWDaylightRule const* next_;
    short firstYear_;
    char observed_;
    RWDaylightBoundary begin_;
    RWDaylightBoundary end_;
};

RWDaylightRule const* next_;
Points to the next rule in a chain which continues backwards in time.

short firstYear_;
Four-digit representation of the year in which this rule first goes into effect.

char observed_;
A boolean value that can be used to specify a period of years for which Daylight Saving Time is not observed:
1 = Daylight Saving Time is in effect during this period
0 = Daylight Saving Time is not in effect during this period
(Note that these are numeric values, as distinguished from '1' and '0'.)

RWDaylightBoundary begin_;
This structure indicates the time of year, to the minute, when DST begins during this period. (See RWDaylightBoundary below.)

RWDaylightBoundary end_;
This structure indicates the time of year, to the minute, when standard time resumes during this period. (See RWDaylightBoundary below.)

struct RWExport RWDaylightBoundary {
    // this struct uses <time.h> struct tm conventions:
    int month_;    // [0..11]
    int week_;     // [0..4], or -1
    int weekday_;  // [0..6], 0=Sunday; or, [1..31] if week_ == -1
    int minute_;   // [0..1439] (Usually 2 AM, = 120)
};

int month_;
The month, from 0 - 11, where 0 = January.

int week_;
A week of the month, from 0 - 4, or -1 if the following field is to represent a day within the month.

int weekday_;
A day of the week, from 0 - 6, where 0 = Sunday; or, if the week_ field is -1, a day of the month, from 1 - 31.

int minute_;
Minutes after 12:00 AM, from 0 - 1439. For example, 120 = 2 AM.
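To make the month_/week_/weekday_ convention concrete, the following standalone sketch resolves a boundary spec to a day of the month using only the C standard library. It is not part of the Rogue Wave library; boundaryDay is a hypothetical helper written for illustration:

```cpp
#include <ctime>

// Hypothetical helper: resolve an RWDaylightBoundary-style spec to a day
// of the month for a given year. Conventions: month [0..11], week [0..4]
// or -1, weekday [0..6] with 0 = Sunday, or a day of month [1..31] passed
// in 'weekday' when week == -1.
int boundaryDay(int year, int month, int week, int weekday) {
    if (week == -1)
        return weekday;                    // weekday holds a day of month
    std::tm t = {};
    t.tm_year = year - 1900;
    t.tm_mon  = month;
    t.tm_mday = 1;
    t.tm_hour = 12;                        // noon avoids DST edge cases
    std::mktime(&t);                       // normalizes and fills t.tm_wday
    int day = (weekday - t.tm_wday + 7) % 7 + 1;  // first such weekday
    day += 7 * week;                       // advance to the requested week
    // Week 4 means "last": back up if the month ran out of days.
    static const int len[12] = {31,28,31,30,31,30,31,31,30,31,30,31};
    bool leap = (year % 4 == 0 && year % 100 != 0) || year % 400 == 0;
    int last = len[month] + (month == 1 && leap ? 1 : 0);
    while (day > last)
        day -= 7;
    return day;
}
```

For instance, the usRuleLate begin boundary { 3, 0, 0, 120 } resolves to April 5 in 1987 (the first Sunday), and its end boundary { 9, 4, 0, 120 } resolves to October 25, 1987 (the last Sunday), matching the comments in the rule definitions above.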
Rogue Wave and SourcePro are registered trademarks of Quovadx, Inc. in the United States and other countries. All other trademarks are the property of their respective owners.
I need my class to extend JFrame and ApplicationFrame. Is that possible? How?

A class cannot extend two classes. However, if ApplicationFrame is an interface, implement ApplicationFrame after extending JFrame:

public class Main extends JFrame implements ApplicationFrame {

The class Main is then a child of JFrame and an ApplicationFrame.

@another question: what is this ApplicationFrame you speak of? Is it one of your own classes? You need to give more info :s You can simply write Main extends JFrame implements ActionListener {

Isn't your ApplicationFrame a subclass of JFrame? If it is not, you can do this:

public class ApplicationFrame extends JFrame {
}
public class YourClass extends ApplicationFrame {
}

That way you can have the attributes of both JFrame and ApplicationFrame.
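The extends-plus-implements pattern from the replies can be sketched as a minimal runnable example. BaseFrame and ApplicationFrame below are hypothetical stand-ins (used instead of Swing's JFrame so the example runs anywhere); with the real classes you would write `extends JFrame implements ApplicationFrame`:

```java
// Java permits exactly one superclass, but any number of interfaces.

// Stand-in for JFrame (hypothetical, for illustration only).
class BaseFrame {
    String title() { return "frame"; }
}

// This pattern only works if ApplicationFrame is an interface.
interface ApplicationFrame {
    default String appName() { return "app"; }
}

// One class picks up behavior from both the superclass and the interface.
class Main extends BaseFrame implements ApplicationFrame {
}
```

If ApplicationFrame is itself a class, the second reply's approach applies instead: make ApplicationFrame extend JFrame and extend ApplicationFrame in your own class.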
1 Kernel Release Notes

This document describes the changes made to the Kernel application.

1.1 Kernel 8.5

Fixed IPv6 multicast_if and membership socket options.
Own Id: OTP-18091 Aux Id: #5789

Fixed issue with inet:getifaddrs hanging on pure IPv6 Windows.
Own Id: OTP-18102 Aux Id: #5904

The type specifications for inet:getopts/2 and inet:setopts/2 have been corrected regarding SCTP options.
Own Id: OTP-18115 Aux Id: PR-5939

The type specifications for inet:parse_* have been tightened.
Own Id: OTP-18121 Aux Id: PR-5972

Fix gen_tcp:connect/3 spec to include the inet_backend option.
Own Id: OTP-18171 Aux Id: PR-6131

Fix bug where using a binary as the format when calling logger:log(Level, Format, Args) (or any other logging function) would cause a crash or incorrect logging.
Own Id: OTP-18229 Aux Id: PR-6212

Improvements and New Features

Add rudimentary debug feature (option) for the inet-driver based sockets, such as gen_tcp and gen_udp.
Own Id: OTP-18032

Introduced the hidden and dist_listen options to net_kernel:start/2. Also documented the -dist_listen command line argument, which was erroneously documented as a kernel parameter and not as a command line argument.
Own Id: OTP-18107 Aux Id: PR-6009

Scope and group monitoring have been introduced in pg. For more information see the documentation of pg:monitor_scope(), pg:monitor(), and pg:demonitor().
Own Id: OTP-18163 Aux Id: PR-6058, PR-6275

A new function global:disconnect/0 has been introduced with which one can cleanly disconnect a node from all other nodes in a cluster of global nodes.
Own Id: OTP-18232 Aux Id: OTP-17843, PR-6264

1.2 Kernel 8.4.3

1.3 Kernel 8.4.1

Fixed Bugs and Malfunctions

The DNS resolver inet_res has been fixed to ignore a trailing dot difference in the request domain between the sent request and the received response, when validating a response.
Own Id: OTP-18112 Aux Id: ERIERL-811

A bug in inet_res has been fixed where a missing internal {ok,_} wrapper caused inet_res:resolve/* to return a calculated host name instead of an {ok, Msg} tuple, when resolving an IP address or a host name that is an IP address string.
Own Id: OTP-18122 Aux Id: GH-6015, PR-6020

The erlang:is_alive() BIF could return true before the configured distribution service was available. This bug was introduced in OTP 25.0 ERTS version 13.0.

The erlang:monitor_node() and erlang:monitor() BIFs could erroneously fail even though the configured distribution service was available. This occurred if these BIFs were called after the distribution had been started using dynamic node name assignment but before the name had been assigned.
Own Id: OTP-18124 Aux Id: OTP-17558, PR-6032

Added the missing mandatory address/0 callback in the gen_tcp_dist example.
Own Id: OTP-18136

1.4 Kernel 8.4

Fixed Bugs and Malfunctions

The DNS resolver implementation has been rewritten to validate replies more thoroughly, and a bit optimized to create less garbage.
Own Id: OTP-17323

The socket option 'reuseaddr' is *no longer* ignored on Windows.
Own Id: OTP-17447 Aux Id: GH-4819

Fix bug where using the atoms string or report as the format when calling logger:log(Level, Format, Args) (or any other logging function) would cause a crash or incorrect logging.
Own Id: OTP-17551 Aux Id: GH-5071 PR-5075

As of OTP 25, global will by default prevent overlapping partitions due to network issues by actively disconnecting from nodes that report that they have lost connections to other nodes. This will cause fully connected partitions to form instead of leaving the network in a state with overlapping partitions. Prevention of overlapping partitions can be disabled using the prevent_overlapping_partitions kernel(6) parameter, making global behave like it used to do. This is, however, not recommended.
Since you might get hard-to-detect issues without this fix, you are strongly advised not to disable it. Also note that this fix has to be enabled on all nodes in the network in order to work properly.
*** POTENTIAL INCOMPATIBILITY ***
Own Id: OTP-17911 Aux Id: PR-5687, PR-5611, OTP-17843

Starting the helper program for name resolving, inet_gethost, has been improved to use an absolute file system path to ensure that the right program is started. If the helper program cannot be started, the system now halts, to avoid running with a silently broken name resolver.
Own Id: OTP-17958 Aux Id: OTP-17978

The type specification for inet_res:getbyname/2,3 has been corrected to reflect that it can return peculiar #hostent{} records.
Own Id: OTP-17986 Aux Id: PR-5412, PR-5803

code:module_status/1 would always report BEAM files loaded from an archive as modified, and code:modified_modules/0 would always return the names of all modules loaded from archives.
Own Id: OTP-17990 Aux Id: GH-5801

In logger, fix the file handler shutdown delay by using erlang timers instead of the timer module's timers.
Own Id: OTP-18001 Aux Id: GH-5780 PR-5829

Fix the meta data in log events generated by logger on failure to not contain the original log event's meta data.
Own Id: OTP-18003 Aux Id: PR-5771

Fix the logger file backend to re-create the log folder if it has been deleted.
Own Id: OTP-18015 Aux Id: GH-5828 PR-5845

[socket] Encode of sockaddr has been improved.
Own Id: OTP-18020

Fix put_chars requests to the io server with incomplete unicode data to exit with a no_translation error.
Own Id: OTP-18070 Aux Id: PR-5885

Improvements and New Features

The net module now works on Windows.
Own Id: OTP-16464

An Erlang installation directory is now relocatable on the file system, given that the paths in the installation's RELEASES file are relative to the installation's root directory.
The release_handler:create_RELEASES/4 function can generate a RELEASES file with relative paths if its RootDir parameter is set to the empty string.
Own Id: OTP-17304

The following distribution flags are now mandatory: DFLAG_BIT_BINARIES, DFLAG_EXPORT_PTR_TAG, DFLAG_MAP_TAGS, DFLAG_NEW_FLOATS, and DFLAG_FUN_TAGS. This mainly concerns libraries or applications that implement the distribution protocol themselves.
Own Id: OTP-17318 Aux Id: PR-4972

Fix os:cmd to work on Android OS.
Own Id: OTP-17479 Aux Id: PR-4917

Dynamic node name improvements: erlang:is_alive/0 changed to return true for a pending dynamic node name, and new function net_kernel:get_state/0.
Own Id: OTP-17558 Aux Id: OTP-17538, PR-5111, GH-5402

The types for callback result types in gen_statem have been augmented with arity 2 types where it is possible for a callback module to specify the type of the callback data, so the callback module can get type validation of it.
Own Id: OTP-17589 Aux Id: PR-4926

A net_ticker_spawn_options kernel configuration parameter, with which one can set spawn options for the distribution channel ticker processes, has been introduced.
Own Id: OTP-17617 Aux Id: PR-5069

The most, or at least the most used, rpc operations now require erpc support in order to communicate with other Erlang nodes. erpc was introduced in OTP 23. That is, rpc operations against Erlang nodes of releases prior to OTP 23 will fail.
*** POTENTIAL INCOMPATIBILITY ***
Own Id: OTP-17681 Aux Id: PR-5307

The new module peer supersedes the slave module. The slave module is now deprecated and will be removed in OTP 27. peer contains an extended and more robust API for starting Erlang nodes.
Own Id: OTP-17720 Aux Id: PR-5162

In order to make it easier for the user to manage multiple outstanding asynchronous call requests, new functionality utilizing request identifier collections has been introduced in erpc, gen_server, gen_statem, and gen_event.
Own Id: OTP-17784 Aux Id: PR-5792

IP address validation functions is_ipv4_address/1, is_ipv6_address/1 and is_ip_address/1 have been added to the module inet in Kernel.
Own Id: OTP-17923 Aux Id: PR-5646

An API for multihomed SCTP connect has been added in the guise of gen_sctp:connectx_init/*.
Own Id: OTP-17951 Aux Id: PR-5656

[socket] Add encoding of the field hatype of the type sockaddr_ll (family 'packet').
Own Id: OTP-17968 Aux Id: OTP-16464

1.5 Kernel 8.3.2

1.6 Kernel 8.3.2

Fixed Bugs and Malfunctions

inet:getopts/2 for the 'raw' option for a socket created with inet-backend 'socket' failed.
Own Id: OTP-18078 Aux Id: GH-5930

Corrected the behaviour of the shutdown function when used with inet_backend = socket. It was not sufficiently compatible with the "old" gen_tcp.
Own Id: OTP-18080 Aux Id: GH-5930

1.7 Kernel 8.3.1

Fixed Bugs and Malfunctions

Fix failed accepted connection setup after a previous established connection from the same node closed down silently.
Own Id: OTP-17979 Aux Id: ERIERL-780

Fixed a problem where typing Ctrl-R in the shell could hang if there were some problem with the history log file.
Own Id: OTP-17981 Aux Id: PR-5791

1.8 Kernel 8.3

Fixed Bugs and Malfunctions

Handling of send_timeout for gen_tcp has been corrected so that the timeout is honored also when sending 0 bytes.
Own Id: OTP-17840

Fix bug where logger would crash when logging a report including improper lists.
Own Id: OTP-17851

Make erlang:set_cookie work for dynamic node names.
Own Id: OTP-17902 Aux Id: GH-5402, PR-5670

Improvements and New Features

Add support for using socket:sockaddr_in() and socket:sockaddr_in6() when using gen_sctp, gen_tcp and gen_udp. This will make it possible to use link-local IPv6 addresses.
Own Id: OTP-17455 Aux Id: GH-4852

Improve documentation for the dynamic node name feature.
Own Id: OTP-17918

1.9 Kernel 8.2

Fixed Bugs and Malfunctions

socket:which_sockets( pid() ) uses wrong keyword when looking up socket owner ('ctrl' instead of 'owner').
Own Id: OTP-17716

In epmd_ntop, the #if defined(EPMD6) conditional was inverted and it was only including the IPv6-specific code when EPMD6 was undefined. This was causing IPv6 addresses to be interpreted as IPv4 addresses and generating nonsense IPv4 addresses as output.

Several places were incorrectly using 'num_sockets' instead of 'i' to index into the iserv_addr array during error logging. This would result in a read into uninitialized data in the iserv_addr array.

Thanks to John Eckersberg for providing this fix.
Own Id: OTP-17730

Minor fix of the erl_uds_dist distribution module example.
Own Id: OTP-17765 Aux Id: PR-5289

A bug has been fixed for the legacy TCP socket adaptation module gen_tcp_socket where it did bind to a socket address when given a file descriptor, but should not.
Own Id: OTP-17793 Aux Id: PR-5348, OTP-17451, PR-4787, GH-4680, PR-2989, OTP-17216

Improve the error printout when open_port/2 fails because of invalid arguments.
Own Id: OTP-17805 Aux Id: PR-5406

Calling socket:monitor/1 on an already closed socket should succeed and result in an immediate DOWN message. This has now been fixed.
Own Id: OTP-17806

Fix the configuration option logger_metadata to work.
Own Id: OTP-17807 Aux Id: PR-5418

Fix tls and non-tls distribution to use erl_epmd:address_please to figure out if IPv4 or IPv6 addresses should be used when connecting to the remote node. Before this fix, a DNS lookup of the remote node hostname determined which IP version was to be used, which meant that the hostname had to resolve to a valid IP address.
Own Id: OTP-17809 Aux Id: PR-5337 GH-5334

Improvements and New Features

Add logger:reconfigure/0.
Own Id: OTP-17375 Aux Id: PR-4663 PR-5186

Add socket function ioctl/2,3,4 for socket device control.
Own Id: OTP-17528

Add simple support for socknames/1 for gen_tcp_socket and gen_udp_socket.
Own Id: OTP-17531

The types for callback result types in gen_statem have been augmented with arity 2 types where it is possible for a callback module to specify the type of the callback data, so the callback module can get type validation of it.
Own Id: OTP-17738 Aux Id: PR-4926, OTP-17589

1.10 Kernel 8.1.3

Fixed Bugs and Malfunctions

The internal, undocumented, but used, module inet_dns has been fixed to handle mDNS high bit usage of the Class field. Code that uses the previously obsolete, undocumented and unused record field #dns_rr.func will need to be updated, since that field is now used as a boolean flag for the mDNS high Class bit. Code that uses the also undocumented record #dns_query will need to be recompiled, since a boolean field #dns_query.unicast_response has been added for the mDNS high Class bit.
*** POTENTIAL INCOMPATIBILITY ***
Own Id: OTP-17734 Aux Id: GH-5327, OTP-17659

The fix for Linux's behaviour when reconnecting a UDP socket in PR-5120, released in OTP-24.1.2, has been refined to only dissolve the socket's connection before a connect if the socket is already connected, that is: only for a reconnect. This allows code to open a socket with an ephemeral port, get the port number and connect, without the port number changing (on Linux). This turned out to have at least one valid use case (besides test cases).

Should one reconnect the socket, then the port number may change, on Linux; it is a known quirk, which can be worked around by binding to a specific port number when opening the socket. If you can do without an ephemeral port, that is...
Own Id: OTP-17736 Aux Id: GH-5279, PR-5120, OTP-17559

1.11 Kernel 8.1.2

Fixed Bugs and Malfunctions

The undocumented DNS encode/decode module inet_dns has been cleaned up to handle the difference between "symbolic" and "raw" records in a more consistent manner. PR-5145/OTP-17584 introduced a change that contributed to an already existing confusion, which this correction should remedy.
Own Id: OTP-17659 Aux Id: ERIERL-702

1.12 Kernel 8.1.1

Fixed Bugs and Malfunctions

Add more info about the socket 'type' ('socket' or 'port') for the DOWN message when monitoring sockets.
Own Id: OTP-17640

1.13 Kernel 8.1

Fixed Bugs and Malfunctions

The extended error information has been corrected and improved for the following BIFs: binary_to_existing_atom/2, list_to_existing_atom/1, erlang:send_after/{3,4}, and erlang:start_timer/{3,4}.
Own Id: OTP-17449 Aux Id: GH-4900

Improve handling of closed sockets for inet:info/1.
Own Id: OTP-17492

This change fixes a performance problem introduced in pull request #2675, which made the system try to start children of already started applications, which is unnecessary.
Own Id: OTP-17519

Fix code:get_doc/1 to not crash when a module is located in an escript.
Own Id: OTP-17570 Aux Id: PR-5139 GH-4256 ERL-1261

Parsing of the result value in the native DNS resolver has been made more defensive against incorrect results.
Own Id: OTP-17578 Aux Id: ERIERL-683

A bug in the option handling for the legacy socket adaptor, that is, when using inet_backend = socket, has been fixed. Now socket options are set before the bind() call, so options regarding, for example, address reuse have the desired effect.
Own Id: OTP-17580 Aux Id: GH-5122

inet:ntoa/1 has been fixed to not accept invalid numerical addresses.
Own Id: OTP-17583 Aux Id: GH-5136

Parsing of DNS records has been improved for records of known types to not accept and present malformed ones in raw format.
Own Id: OTP-17584 Aux Id: PR-5145

The ip_mreq() type for the {ip,add_membership} and {ip,drop_membership} socket options has been corrected to have an interface field instead of, incorrectly, an address field.
Own Id: OTP-17590 Aux Id: PR-5170

Improvements and New Features

Add simple utility function to display existing sockets in the Erlang shell (socket:i/0).
Own Id: OTP-17376 Aux Id: OTP-17157

gen_udp can now be configured to use the socket inet-backend (in the same way as gen_tcp).
Own Id: OTP-17410

Functions erlang:set_cookie(Cookie) and erlang:get_cookie(Node) have been added for completeness and to facilitate configuring distributed nodes with different cookies. The documentation regarding distribution cookies has been improved to be less vague.
Own Id: OTP-17538 Aux Id: GH-5063, PR-5111

A workaround has been implemented for Linux's quirky behaviour of not adjusting the source IP address when connecting a connected (reconnecting) UDP socket. The workaround is to, on Linux, always dissolve any connection before connecting a UDP socket.
Own Id: OTP-17559 Aux Id: GH-5092, PR-5120

Documented our recommendation against opening NFS-mounted files, FIFOs, devices, and similar using file:open/2.
Own Id: OTP-17576 Aux Id: ERIERL-685

1.14 Kernel 8.0.2

Fixed Bugs and Malfunctions

For gen_tcp:connect/3,4 it is possible to specify a specific source port, which should be enough to bind the socket to an address with that port before connecting. Unfortunately that feature was lost in OTP-17216, which made it mandatory to specify the source address to get an address binding, and ignored a specified source port if no source address was specified. That bug has now been corrected.
Own Id: OTP-17536 Aux Id: OTP-17216, ERIERL-677

1.15 Kernel 8.0.1

Fixed Bugs and Malfunctions

Fix a race condition in Global.
Own Id: OTP-16033 Aux Id: ERIERL-329, ERL-1414, GH-4448, ERL-885, GH-3923

After a node restart with init:restart/0,1, the module socket was not usable because supporting tables had been cleared and not re-initialized. This has now been fixed.

Handling of the "." domain as a search domain was incorrect and caused a crash in the DNS resolver inet_res, which has now been fixed.
Own Id: OTP-17439 Aux Id: GH-4827, PR-4888, GH-4838

Handling of combinations of the fd option and binding to an address has been corrected, especially for the local address family.
Own Id: OTP-17451 Aux Id: OTP-17374

Bug fixes and code cleanup for the new socket implementation, such as:

Assertions on the result of demonitoring have been added in the NIF code, where appropriate.

Internal state handling for socket close in the NIF code has been reviewed.

Looping over close() for EINTR in the NIF code has been removed, since it is strongly discouraged on Linux and Posix is not clear about whether it is allowed.

The inet_backend temporary socket option for legacy gen_tcp sockets has been documented.

The return value from net:getaddrinfo/2 has been corrected: the protocol field is now an atom(), instead of, incorrectly, list(atom()). The documentation has also been corrected about this return type.

Deferred close of a socket:sendfile/* file was broken and has been corrected.

Some debug code, not enabled by default, in the socket NIF has been corrected to not accidentally core dump for debug printouts of more or less innocent events.
Own Id: OTP-17452

1.16 Kernel 8.0

Fixed Bugs and Malfunctions

A bug has been fixed for the internal inet_res resolver cache that handled a resolver configuration file status timer incorrectly and caused performance problems due to many unnecessary file system accesses.
Own Id: OTP-14700 Aux Id: PR-2848

Change the value of the tag head returned by disk_log:info/1 from {ok, Head} to just Head.
*** POTENTIAL INCOMPATIBILITY ***
Own Id: OTP-16809 Aux Id: ERL-1313

Two options have been added to erl_call. The -fetch_stdout option fetches stdout data resulting from the code invoked by erl_call. The -no_result_term option disables printing of the result term. In order to implement the first of these two options, a new function called ei_xrpc_from has been added to erl_interface. For details see the erl_call documentation and erl_interface documentation.
Own Id: OTP-17132

Missing runtime dependencies have been added to this application.
Own Id: OTP-17243 Aux Id: PR-4557

inet:get_rc/0 has been corrected to return host entries as separate entries instead of (incorrectly) in a list within the list. This bug was introduced by OTP-16487 in OTP-23.0-rc1.
Own Id: OTP-17262 Aux Id: GH-4588, PR-4604, OTP-16487

The type gen_tcp:option_name() had a duplicate pktoptions value.
Own Id: OTP-17277

Fixed removal of empty groups from internal state in pg.
Own Id: OTP-17286 Aux Id: PR-4619

erl -remsh now prints an error message when it fails to connect to the remote node.
Own Id: OTP-17287 Aux Id: PR-4581

Fix bugs related to corrupt shell history files. Error messages printed by shell history are now logged as logger error reports instead of written to standard error.
Own Id: OTP-17288 Aux Id: PR-4581

A logger warning is now issued when too many arguments are given to -name or -sname. Example: erl -name a b.
Own Id: OTP-17315 Aux Id: GH-4626

The cache used by inet_res now, again, can handle multiple IP addresses per domain name, and thus fixes a bug introduced in PR-3041 (OTP-13126) and PR-2891 (OTP-14485).
Own Id: OTP-17344 Aux Id: PR-4633, GH-4631, OTP-14485, OTP-12136

Sockets created with socket:accept were not counted (socket:info/0).
Own Id: OTP-17372

The {fd, Fd} option to gen_tcp:listen/2 did not work for inet_backend socket, which has been fixed.
Own Id: OTP-17374 Aux Id: PR-4787, GH-4680, PR-2989, OTP-17216

Improvements and New Features

The cache used by the DNS resolver inet_res has been improved to use ETS lookups instead of server calls. This is a considerable speed improvement for cache hits.
Own Id: OTP-13126 Aux Id: PR-3041

The cache ETS table type for the internal DNS resolver inet_res has changed type (internally) to get better speed and atomicity.
Own Id: OTP-14485 Aux Id: PR-2891

The experimental socket module can now use any protocol (by name) the OS supports. Suggested in PR-2641, implemented in PR-2670.
Own Id: OTP-14601 Aux Id: PR-2641, PR-2670, OTP-16749

The DNS resolver inet_res has been updated to support CAA (RFC 6844) and URI (RFC 7553) records.
Own Id: OTP-16517 Aux Id: PR-2827

A compatibility adaptor for gen_tcp to use the new socket API has been implemented (gen_tcp_socket). It is used when setting the kernel application variable inet_backend = socket.
Own Id: OTP-16611 Aux Id: OTP-16749

The file server can now be bypassed in file:delete/1,2 with the raw option.
Own Id: OTP-16698 Aux Id: PR-2634

An example implementation of Erlang distribution over UDS using distribution processes has been introduced. Thanks to Jérôme de Bretagne.
Own Id: OTP-16703 Aux Id: PR-2620

The experimental new socket API has been further developed. Some backwards incompatible changes with respect to OTP 23 have been made.

The control message format has been changed so a decoded value is now in the 'value' field instead of in the 'data' field. The 'data' field now always contains binary data. Some type names have been changed regarding message headers and control message headers.

socket:bind/2 now returns plain ok instead of {ok, Port}, which was only relevant for the inet and inet6 address families and often not interesting. To find out which port was chosen, use socket:sockname/1.
*** POTENTIAL INCOMPATIBILITY ***
Own Id: OTP-16749 Aux Id: OTP-14601

New function os:env/0 returns all OS environment variables as a list of 2-tuples.
Own Id: OTP-16793 Aux Id: ERL-1332, PR-2740

Remove the support for distributed disk logs. The new function disk_log:all/0 is to be used instead of disk_log:accessible_logs/0. The function disk_log:close/1 is to be used instead of disk_log:lclose/1,2.
*** POTENTIAL INCOMPATIBILITY ***
Own Id: OTP-16811

Expand the spec for erl_epmd:listen_port_please/2 to mirror erl_epmd:port_please/2.
Own Id: OTP-16947 Aux Id: PR-2781

A new erl parameter for specifying a file descriptor with configuration data has been added.
This makes it possible to pass the parameter "-configfd FD" when executing the erl command. When this option is given, the system will try to read and parse configuration parameters from the file descriptor.
Own Id: OTP-16952

The experimental HiPE application has been removed, together with all related functionality in other applications.
*** POTENTIAL INCOMPATIBILITY ***
Own Id: OTP-16963

The pg2 module has been removed.
*** POTENTIAL INCOMPATIBILITY ***
Own Id: OTP-16968

Accept references up to a size of 160 bits from remote nodes. This is the first step in an upgrade path toward using references up to 160 bits in a future OTP release.
Own Id: OTP-17005 Aux Id: OTP-16718

Allow utf-8 binaries as parts of a logger_formatter template.
Own Id: OTP-17015

Let disk_log:open/1 change the size if a wrap log is opened for the first time, that is, the disk log process does not exist, and the value of option size does not match the current size of the disk log.
*** POTENTIAL INCOMPATIBILITY ***
Own Id: OTP-17062 Aux Id: ERL-1418, GH-4469, ERIERL-537

Allow the shell history of an Erlang node to be fetched and stored using a custom callback module. See the shell_history configuration parameter in the kernel documentation for more details.
Own Id: OTP-17103 Aux Id: PR-2949

The simple logger (used to log events that happen before kernel has been started) has been improved to print prettier error messages.
Own Id: OTP-17106 Aux Id: PR-2885

socket:sendfile/2,3,4,5 has been implemented, for platforms that support the underlying socket library call.
Own Id: OTP-17154 Aux Id: OTP-16749

Add socket monitor(s) for all types of sockets.
Own Id: OTP-17155

Fix various issues with gen_tcp_socket, including documenting some incompatibilities.
*** POTENTIAL INCOMPATIBILITY ***
Own Id: OTP-17156

inet:i/0 now also shows existing gen_tcp compatibility sockets (based on 'socket').
Own Id: OTP-17157

Added support in logger for setting primary metadata.
The primary metadata is passed as a base metadata to all log events in the system. See Metadata in the Logger chapter of the Kernel User's Guide for more details.
Own Id: OTP-17181 Aux Id: PR-2457

Recognize new key 'optional_applications' in application resource files.
Own Id: OTP-17189 Aux Id: PR-2675

The funs passed to logger:log/2,3,4 can now return metadata that will only be fetched when needed. See logger:log/2,3,4 for more details.
Own Id: OTP-17198 Aux Id: PR-2721

erpc:multicall() has been rewritten to be able to utilize the newly introduced and improved selective receive optimization.
Own Id: OTP-17201 Aux Id: PR-4534

Add utility function inet:info/1 to provide miscellaneous info about a socket.
Own Id: OTP-17203 Aux Id: OTP-17156

The behaviour for gen_tcp:connect/3,4 has been changed to not bind to an address by default, which allows the network stack to delay the address and port selection to when the remote address is known. This allows better port re-use, and thus enables far more outgoing connections, since the ephemeral port range no longer has to be a hard limit. There is a theoretical possibility that this behaviour change can affect the set of possible error values, or have other small implications on some platforms.
*** POTENTIAL INCOMPATIBILITY ***
Own Id: OTP-17216 Aux Id: PR-2989

An option {nxdomain_reply, boolean()} has been implemented in the DNS resolver inet_res. It is useful since an nxdomain error from a name server does contain the SOA record if the domain exists at all. This record is useful to determine a TTL for negative caching of the failed entry.
Own Id: OTP-17266 Aux Id: PR-4564

Optimized lookup of local processes that are part of groups in pg.
Own Id: OTP-17284 Aux Id: PR-4615

The return values from the module socket functions send(), sendto(), sendmsg(), sendfile() and recv() have been changed to return a tuple tagged with select when a SelectInfo was returned, and not sometimes tagged with ok.
This is a backwards incompatible change that improves usability for code using asynchronous operations.
*** POTENTIAL INCOMPATIBILITY ***
Own Id: OTP-17355 Aux Id: OTP-17154

Fixed warnings in code matching on underscore prefixed variables.
Own Id: OTP-17385 Aux Id: OTP-17123

1.17 Kernel 7.3.1.6

1.18 Kernel 7.3.1.5
Fixed Bugs and Malfunctions
Kernel 7.3 failed accepted connection setup after a previously established connection from the same node closed down silently.
Own Id: OTP-17979 Aux Id: ERIERL-780

1.19 Kernel 7.3.1.4
Fixed Bugs and Malfunctions
Parsing of the result value in the native DNS resolver has been made more defensive against incorrect results.
Own Id: OTP-17578 Aux Id: ERIERL-683

1.20 Kernel 7.3.1.3
Fixed Bugs and Malfunctions
Fix code:get_doc/1 to not crash when module is located in an escript.
Own Id: OTP-17570 Aux Id: PR-5139 GH-4256 ERL-1261

1.21 Kernel 7.3.1.2
Fixed Bugs and Malfunctions
Handling of the "." domain as a search domain was incorrect and caused a crash in the DNS resolver inet_res, which has now been fixed.
Own Id: OTP-17473 Aux Id: GH-4838, OTP-17439

1.22 Kernel 7.3.1.1
Fixed Bugs and Malfunctions
Fix a race condition in Global.
Own Id: OTP-16033 Aux Id: ERIERL-329, ERL-1414, GH-4448, ERL-885, GH-3923

1.23 Kernel 7.3.1
Fixed Bugs and Malfunctions
A bug in the Erlang DNS resolver has been fixed, where it could be made to bring down the kernel supervisor and thereby the whole node, when getting an incorrect (IN A reply to an IN CNAME query) reply from the DNS server and using the reply record's value without verifying its type.
Own Id: OTP-17361

1.24 Kernel 7.3
Fixed Bugs and Malfunctions
The range check for compression pointers in DNS encoding was faulty, which caused incorrect label compression encoding for very large DNS messages; larger than about 16 kBytes, such as AXFR responses. This more than 11 year old bug has now been corrected.
Own Id: OTP-13641 Aux Id: PR-2959

Fix of internal links in the erpc documentation.
Own Id: OTP-17202 Aux Id: PR-4516

Fix bug where complex seq_trace tokens (that is lists, tuples, maps etc) could become corrupted by the GC. The bug was introduced in OTP-21.
Own Id: OTP-17209 Aux Id: PR-3039

When running Xref in the modules mode, the Debugger application would show up as a dependency for the Kernel application.
Own Id: OTP-17223 Aux Id: GH-4546, PR-4554

Improvements and New Features
erl_epmd (the epmd client) will now try to reconnect to the local EPMD if the connection is broken.
Own Id: OTP-17178 Aux Id: PR-3003

1.25 Kernel 7.2.1
Fixed Bugs and Malfunctions.
Own Id: OTP-12960 Aux Id: ERIERL-598, PR-4509

1.26 Kernel 7.2
Fixed Bugs and Malfunctions
The apply calls in logger.hrl are now made with the erlang prefix to avoid clashes with local apply/3 functions.
Own Id: OTP-16976 Aux Id: PR-2807

Fix memory leak in pg.
Own Id: OTP-17034 Aux Id: PR-2866

Fix crash in logger_proxy due to stray gen_server:call replies not being handled. The stray replies come when logger is under heavy load and the flow control mechanism is reaching its limit.
Own Id: OTP-17038

Fixed a bug in erl_epmd:names() that caused it to return the illegal return value noport instead of {error, Reason} where Reason is the actual error reason. This bug also propagated to net_adm:names(). This bug was introduced in kernel version 7.1 (OTP 23.1).
Own Id: OTP-17054 Aux Id: ERL-1424

Improvements and New Features
Add export of some documented resolver types.
Own Id: OTP-16954 Aux Id: ERIERL-544

Add configurable retry timeout for resolver lookups.
Own Id: OTP-16956 Aux Id: ERIERL-547

gen_server:multi_call() has been optimized in the special case of only calling the local node with timeout set to infinity.
Own Id: OTP-17058 Aux Id: PR-2887

1.27 Kernel 7.1
Fixed Bugs and Malfunctions
A fallback has been implemented for file:sendfile when using the inet_backend socket.
Own Id: OTP-15187 Aux Id: ERL-1293

Make default TCP distribution honour option backlog in inet_dist_listen_options.
Own Id: OTP-16694 Aux Id: PR-2625

Raw option handling for the experimental gen_tcp_socket backend was broken so that all raw options were ignored by, for example, gen_tcp:listen/2; a bug that now has been fixed. Reported by Jan Uhlig.
Own Id: OTP-16743 Aux Id: ERL-1287

Accept fails with inet-backend socket.
Own Id: OTP-16748 Aux Id: ERL-1284

Fixed various minor errors in the socket backend of gen_tcp.
Own Id: OTP-16754

Correct disk_log:truncate/1 to count the header. Also correct the documentation to state that disk_log:truncate/1 can be used with external disk logs.
Own Id: OTP-16768 Aux Id: ERL-1312

Fix erl_epmd:port_please/2,3 type specs to include all possible error values.
Own Id: OTP-16783

Fix erl -erl_epmd_port to work properly. Before this fix it did not work at all.
Own Id: OTP-16785

Fix typespec for internal function erlang:seq_trace_info/1 to allow term() as returned label. This in turn fixes it so that calls to seq_trace:get_token/1 can be correctly analyzed by dialyzer.
Own Id: OTP-16823 Aux Id: PR-2722

Fix erroneous double registration of processes in pg when distribution is dynamically started.
Own Id: OTP-16832 Aux Id: PR-2738

Improvements and New Features
Make (use of) the socket registry optional (still enabled by default). It is now possible to build OTP with the socket registry turned off, to turn it off by setting an environment variable, and to control it at runtime (via function calls and arguments when creating sockets).
Own Id: OTP-16763

erl -remsh nodename no longer requires the hostname to be given when used together with dynamic nodenames.
Own Id: OTP-16784

1.28 Kernel 7.0
Fixed Bugs and Malfunctions
Fix race condition during shutdown when shell_history is enabled. The race condition would trigger crashes in disk_log.
Own Id: OTP-16008 Aux Id: PR-2302

Fix the Erlang distribution to handle the scenario when a node connects that can handle message fragmentation but can not handle the atom cache.
This bug only affects users that have implemented a custom distribution carrier. It has been present since OTP-21. The DFLAG_FRAGMENT distribution flag was added to the set of flags that can be rejected by a distribution implementation.
Own Id: OTP-16284

Fix bug where a binary was not allowed to be the format string in calls to logger:log.
Own Id: OTP-16395 Aux Id: PR-2444

Fix bug where logger would end up in an infinite loop when trying to log the crash of a handler or formatter.
Own Id: OTP-16489 Aux Id: ERL-1134

code:lib_dir/1 has been fixed to also return the lib dir for erts. This has been marked as an incompatibility for any application that depended on {error,bad_name} to be returned for erts.
*** POTENTIAL INCOMPATIBILITY ***
Own Id: OTP-16502

The application stop/1 callback was not called if the application master of the application terminated.
Own Id: OTP-16504 Aux Id: PR-2328

Fix bug in application:loaded_applications/0 that could cause it to fail with badarg if for example a concurrent upgrade/downgrade is running.
Own Id: OTP-16627 Aux Id: PR-2601

Improvements and New Features
A new module erpc has been introduced in the kernel application. Also the rpc module benefits from these improvements by utilizing erpc when it is possible. This change has been marked as a potential incompatibility since rpc:block_call() is now only guaranteed to block other block_call() operations. The documentation previously claimed that it would block all rpc operations. This has however never been the case. It previously did not block node-local block_call() operations.
*** POTENTIAL INCOMPATIBILITY ***
Own Id: OTP-13450 Aux Id: OTP-15251

A client node can receive its node name dynamically from the node that it first connects to. This feature can be used by starting with erl -sname undefined, by the erl_interface functions ei_connect_init and friends, or by erl_call -R.
Own Id: OTP-13812

Directories can now be opened by file:open/2 when passing the directory option.
Own Id: OTP-15835 Aux Id: PR-2212

The check of whether to log or not, based on the log level in logger, has been optimized by using persistent_term to store the log level.
Own Id: OTP-15948 Aux Id: PR-2356

file:read_file_info/2 can now be used on opened files and directories.
Own Id: OTP-15956 Aux Id: PR-2231

The -config option to erl can now take multiple config files without repeating the -config option. Example: erl -config sys local
Own Id: OTP-16148 Aux Id: PR-2373

Improved node connection setup handshake protocol. Made it possible to agree on protocol version without dependence on epmd or other prior knowledge of peer node version. Also added exchange of node incarnation ("creation") values and expanded the distribution capability flag field from 32 to 64 bits.
Own Id: OTP-16229

The possibility to run Erlang distribution without relying on EPMD has been extended. To achieve this a couple of new options to the inet distribution have been added.
- -dist_listen false - Set up the distribution channel, but do not listen for incoming connections. This is useful when you want to use the current node to interact with another node on the same machine without it joining the entire cluster.
- -erl_epmd_port Port - Configure a default port that the built-in EPMD client should return. This allows the local node to know the port to connect to for any other node in the cluster.
Own Id: OTP-16250

A first EXPERIMENTAL module that is a socket backend to gen_tcp and inet has been implemented. Others will follow. Feedback will be appreciated.
Own Id: OTP-16260 Aux Id: OTP-15403

The new experimental socket module has been moved to the Kernel application.
Own Id: OTP-16312

Replace usage of deprecated function in the group module.
Own Id: OTP-16345

Minor updates due to the new spawn improvements.
Own Id: OTP-16368 Aux Id: OTP-15251

Update of sequential tracing to also support other information transfers than message passing.
Own Id: OTP-16370 Aux Id: OTP-15251, OTP-15232

code:module_status/1 now accepts a list of modules. code:module_status/0, which returns the statuses for all loaded modules, has been added.
Own Id: OTP-16402

filelib:wildcard/1,2 is now twice as fast when a double star (**) is part of the pattern.
Own Id: OTP-16419

A new implementation of distributed named process groups has been introduced. It is available in the pg module. Note that this pg module only has the name in common with the experimental pg module that was present in stdlib up until OTP 17. Thanks to Maxim Fedorov for the implementation.
Own Id: OTP-16453 Aux Id: PR-2524

The pg2 module has been deprecated. It has also been scheduled for removal in OTP 24. You are advised to replace the usage of pg2 with the newly introduced pg module. pg has a similar API, but with a more scalable implementation.
*** POTENTIAL INCOMPATIBILITY ***
Own Id: OTP-16455

Refactored the internal handling of deprecated and removed functions.
Own Id: OTP-16469

The internal hosts file resolver cache inet_hosts has been rewritten to behave better when the hosts file changes. For example, the cache is updated per entry instead of cleared and reloaded, so lookups do not temporarily fail during reloading; and when multiple processes simultaneously request reload these are now folded into one instead of all done in sequence. Reported and first solution suggested by Maxim Fedorov.
Own Id: OTP-16487 Aux Id: PR-2516

Add code:all_available/0 that can be used to get all available modules.
Own Id: OTP-16494

As of OTP 23, the distributed disk_log feature has been deprecated. It has also been scheduled for removal in OTP 24.
*** POTENTIAL INCOMPATIBILITY ***
Own Id: OTP-16495

Add the function code:fetch_docs/1 for fetching embedded documentation for an Erlang module.
Own Id: OTP-16499

Improve configure for the net nif, which should increase portability.
Own Id: OTP-16530 Aux Id: OTP-16464

socket: Socket counters and socket global counters are now represented as maps (instead of property lists).
Own Id: OTP-16535

The experimental socket module has had restrictions removed, so the 'seqpacket' socket type should now work for any communication domain (protocol family) where the OS supports it, typically the Unix Domain.
Own Id: OTP-16550 Aux Id: ERIERL-476

Allow using custom IO devices in logger_std_h.
Own Id: OTP-16563 Aux Id: PR-2523

Added file:del_dir_r/1 which deletes a directory together with all of its contents, similar to rm -rf on Unix systems.
Own Id: OTP-16570 Aux Id: PR-2565

socket: By default the socket options rcvtimeo and sndtimeo are now disabled. To enable these, OTP now has to be built with the configure option --enable-esock-rcvsndtimeo
Own Id: OTP-16620

The experimental gen_tcp compatibility code utilizing the socket module could lose buffered data when receiving a specified number of bytes. This bug has been fixed. Reported by Maksim Lapshin on bugs.erlang.org ERL-1234
Own Id: OTP-16632 Aux Id: ERL-1234

1.29 Kernel 6.5.3

1.30 Kernel 6.5.2.4
Fixed Bugs and Malfunctions.

1.31 Kernel 6.5.2.3
Fixed Bugs and Malfunctions
Fix a race condition in Global.
Own Id: OTP-16033 Aux Id: ERIERL-329, ERL-1414, GH-4448, ERL-885, GH-3923

1.32 Kernel 6.5.2.2
Fixed Bugs and Malfunctions
When running Xref in the modules mode, the Debugger application would show up as a dependency for the Kernel application.
Own Id: OTP-17223 Aux Id: GH-4546, PR-4554

1.33 Kernel 6.5.2.1
Fixed Bugs and Malfunctions
Fix bug in application:loaded_applications/0 that could cause it to fail with badarg if for example a concurrent upgrade/downgrade is running.
Own Id: OTP-16627 Aux Id: PR-2601

1.34 Kernel 6.5.2
Fixed Bugs and Malfunctions
The DNS resolver `inet_res` has been fixed to return the last intermediate error when subsequent requests time out.
Own Id: OTP-16414 Aux Id: ERIERL-452

The prim_net nif (net/kernel) made use of an undefined atom, notsup. This has now been corrected.
Own Id: OTP-16440

Fix a crash when attempting to log faults when loading files during early boot.
Own Id: OTP-16491

Fix crash in logger when logging to a remote node during boot.
Own Id: OTP-16493 Aux Id: ERIERL-459

Improvements and New Features
Improved net_kernel debug functionality.
Own Id: OTP-16458 Aux Id: PR-2525

1.35 Kernel 6.5.1
Fixed Bugs and Malfunctions
The 'socket state' info provided by the inet info function has been improved.
Own Id: OTP-16043 Aux Id: ERL-1036

Fix bug where logger would crash when starting when a very large log file needed to be rotated and compressed.
Own Id: OTP-16145 Aux Id: ERL-1034

Fixed a bug causing the actual nodedown reason reported by net_kernel:monitor_nodes(true, [nodedown_reason]) to be lost and replaced by the reason killed.
Own Id: OTP-16216

The documentation for rpc:call/4,5 has been updated to describe what happens when the called function throws or returns an 'EXIT' tuple.
Own Id: OTP-16279 Aux Id: ERL-1066

1.36 Kernel 6.5
Fixed Bugs and Malfunctions
The type specification for gen_sctp:connect/4,5 has been corrected.
Own Id: OTP-15344 Aux Id: ERL-947

Extra -mode flags given to erl are ignored with a warning.
Own Id: OTP-15852

Fix type spec for seq_trace:set_token/2.
Own Id: OTP-15858 Aux Id: ERL-700

logger:compare_levels/2 would fail with a badarg exception if given the values all or none as any of the parameters. This is now corrected.
Own Id: OTP-15942 Aux Id: PR-2301

Fix a race condition in the debugging function net_kernel:nodes_info/0.
Own Id: OTP-16022

Fix race condition when closing a file opened in compressed or delayed_write mode.
Own Id: OTP-16023

Improvements and New Features
The possibility to send ancillary data, in particular the TOS field, has been added to gen_udp:send/4,5.
Own Id: OTP-15747 Aux Id: ERIERL-294

If the log file was given with a relative path, the standard logger handler (logger_std_h) would store the file name with the relative path. If the current directory of the node was later changed, a new file would be created relative to the new current directory, potentially failing with an enoent if the new directory did not exist. This is now corrected and logger_std_h always stores the log file name as an absolute path, calculated from the current directory at the time of the handler startup.
Own Id: OTP-15850

Support local sockets with inet:i/0.
Own Id: OTP-15935 Aux Id: PR-2299

1.37 Kernel 6.4.1
Fixed Bugs and Malfunctions
user/user_drv could respond to io requests before they had been processed, which could cause data to be dropped if the emulator was halted soon after a call to io:format/2, such as in an escript.
Own Id: OTP-15805

1.38 Kernel 6.4
Fixed Bugs and Malfunctions
Fix so that when multiple -sname or -name options are given to erl the first one is chosen. Before this fix distribution was not started at all when multiple name options were given.
Own Id: OTP-15786 Aux Id: ERL-918

Fix inet_res configuration pointing to non-existing files to work again. This was broken in KERNEL-6.3 (OTP-21.3).
Own Id: OTP-15806

Improvements and New Features
A simple socket API is provided through the socket module. This is a low level API that does *not* replace gen_[tcp|udp|sctp]. It is intended to *eventually* replace the inet driver, but not the high level gen-modules (gen_tcp, gen_udp and gen_sctp). It also provides a basic API that facilitates the implementation of other protocols than TCP, UDP and SCTP. Known issues: no support for the Windows OS (currently).
Own Id: OTP-14831

Improved the documentation for the linger option.
Own Id: OTP-15491 Aux Id: PR-2019

Global no longer tries more than once when connecting to other nodes.
Own Id: OTP-15607 Aux Id: ERIERL-280

The dist messages EXIT, EXIT2 and MONITOR_DOWN have been updated with new versions that send the reason term as part of the payload of the message instead of as part of the control message. The old versions are still present and can be used when communicating with nodes that don't support the new versions.
Own Id: OTP-15611

Kernel configuration parameter start_distribution = boolean() is added. If set to false, the system is started with all distribution functionality disabled. Defaults to true.
Own Id: OTP-15668 Aux Id: PR-2088

In OTP-21.3, a warning was introduced for duplicated applications/keys in configuration. This warning would be displayed both when the configuration was given as a file on system start, and during runtime via application:set_env/1,2. The warning is now changed to a badarg exception in application:set_env/1,2. If the faulty configuration is given in a configuration file on system start, the startup will fail.
*** POTENTIAL INCOMPATIBILITY ***
Own Id: OTP-15692 Aux Id: PR-2170

1.39 Kernel 6.3.1.3
Fixed Bugs and Malfunctions

1.40 Kernel 6.3.1.2
Improvements and New Features
The possibility to send ancillary data, in particular the TOS field, has been added to gen_udp:send/4,5.
Own Id: OTP-15747 Aux Id: ERIERL-294

1.41 Kernel 6.3.1.1
Fixed Bugs and Malfunctions
Fix type spec for seq_trace:set_token/2.
Own Id: OTP-15858 Aux Id: ERL-700

1.42 Kernel 6.3.1
Fixed Bugs and Malfunctions
Fixed a performance regression when reading files opened with the compressed flag.
Own Id: OTP-15706 Aux Id: ERIERL-336

1.43 Kernel 6.3
Fixed Bugs and Malfunctions
If for example the /etc/hosts did not come into existence until after the kernel application had started, its content was never read. This bug has now been corrected.
Own Id: OTP-14702 Aux Id: PR-2066

Fix bug where doing seq_trace:reset_trace() while another process was doing a garbage collection could cause the run-time system to segfault.
Own Id: OTP-15490

Fix erl_epmd:port_please spec to include atom() and string().
Own Id: OTP-15557 Aux Id: PR-2117

The Logger handler logger_std_h now keeps track of the inode of its log file in order to re-open the file if the inode changes. This may happen, for instance, if the log file is opened and saved by an editor.
Own Id: OTP-15578 Aux Id: ERL-850

When user specific file modes are given to the logger handler logger_std_h, they were earlier accepted without any control. This is now changed, so Logger will adjust the file modes as follows:
- If raw is not found in the list, it is added.
- If none of write, append or exclusive are found in the list, append is added.
- If none of delayed_write or {delayed_write,Size,Delay} are found in the list, delayed_write is added.
Own Id: OTP-15602

Improvements and New Features
The standard logger handler, logger_std_h, now has a new internal feature for log rotation. The rotation scheme is as follows: The log file to which the handler currently writes always has the same name, i.e. the name which is configured for the handler. The archived files have the same name, but with extension ".N", where N is an integer. The newest archive has extension ".0", and the oldest archive has the highest number. The size at which the log file is rotated, and the number of archive files that are kept, is specified with the handler specific configuration parameters max_no_bytes and max_no_files respectively. Archives can be compressed, in which case they get a ".gz" file extension after the integer. Compression is specified with the handler specific configuration parameter compress_on_rotate.
Own Id: OTP-15479

The new functions logger:i/0 and logger:i/1 are added. These provide the same information as logger:get_config/0 and other logger:get_*_config functions, but the information is more logically sorted and more readable.
Own Id: OTP-15600

Logger is now protected against overload due to massive amounts of log events from the emulator or from remote nodes.
Own Id: OTP-15601

Logger now uses os:system_time/1 instead of erlang:system_time/1 to generate log event timestamps.
Own Id: OTP-15625

Add functions application:set_env/1,2. These take a list of application configuration parameters, and the behaviour is equivalent to calling application:set_env/4 individually for each application/key combination, except it is more efficient. set_env/1,2 warns about duplicated applications or keys. The warning is also emitted during boot, if applications or keys are duplicated within one configuration file, e.g. sys.config.
Own Id: OTP-15642 Aux Id: PR-2164

Handler specific configuration parameters for the standard handler logger_std_h are changed to be more intuitive and more similar to the disk_log handler. Earlier there was only one parameter, type, which could have the values standard_io, standard_error, {file,FileName} or {file,FileName,Modes}. This is now changed, so the following parameters are allowed:
type = standard_io | standard_error | file
file = file:filename()
modes = [file:mode()]
All parameters are optional. type defaults to standard_io, unless a file name is given, in which case it defaults to file. If type is set to file, the file name defaults to the same as the handler id. The potential incompatibility is that logger:get_config/0 and logger:get_handler_config/1 now return the new parameters, even if the configuration was set with the old variant, e.g. #{type=>{file,FileName}}.
*** POTENTIAL INCOMPATIBILITY ***
Own Id: OTP-15662

The new configuration parameter file_check is added to the Logger handler logger_std_h. This parameter specifies how long (in milliseconds) the handler may wait before checking if the log file still exists and the inode is the same as when it was opened.
The default value is 0, which means that this check is done prior to each write operation. Setting a higher number may improve performance, but adds the risk of losing log events.
Own Id: OTP-15663

1.44 Kernel 6.2.1
Fixed Bugs and Malfunctions
When setting the recbuf size of an inet socket, the buffer size is also automatically increased. Fix a bug where the auto adjustment of inet buffer size would be triggered even if an explicit inet buffer size had already been set.
Own Id: OTP-15651 Aux Id: ERIERL-304

1.45 Kernel 6.2
Fixed Bugs and Malfunctions
A new function, logger:update_handler_config/3, is added, and the handler callback changing_config now has a new argument, SetOrUpdate, which indicates if the configuration change comes from set_handler_config/2,3 or update_handler_config/2,3. This allows the handler to consistently merge the new configuration with the old (if the change comes from update_handler_config/2,3) or with the default (if the change comes from set_handler_config/2,3). The built-in handlers logger_std_h and logger_disk_log_h are updated accordingly. A bug which could cause inconsistency between the handlers' internal state and the stored configuration is also corrected.
*** POTENTIAL INCOMPATIBILITY ***
Own Id: OTP-15364

Fix fallback when custom erl_epmd client does not implement address_please.
Own Id: OTP-15388 Aux Id: PR-1983

The logger ets table did not have the read_concurrency option. This is now added.
Own Id: OTP-15453 Aux Id: ERL-782

During system start, logger has a simple handler which prints to stdout. After the kernel supervision is started, this handler is removed and replaced by the default handler. Due to a bug, logger earlier issued a debug printout saying it received an unexpected message, which was the EXIT message from the simple handler's process. This is now corrected. The simple handler's process now unlinks from the logger process before terminating.
Own Id: OTP-15466 Aux Id: ERL-788

The logger handler logger_std_h would not re-create its log file if it was removed. Due to this it could not be used with tools like 'logrotate'. This is now corrected.
Own Id: OTP-15469

Improvements and New Features
A function inet:getifaddrs/1 that takes a list with a namespace option has been added, for platforms that support that feature, for example Linux (only?).
Own Id: OTP-15121 Aux Id: ERIERL-189, PR-1974

Added the nopush option for TCP sockets, which corresponds to TCP_NOPUSH on *BSD and TCP_CORK on Linux. This is also used internally in file:sendfile to reduce latency on subsequent send operations.
Own Id: OTP-15357 Aux Id: ERL-698

Optimize handling of send_delay for tcp sockets to better work with the new pollthread implementation introduced in OTP-21.
Own Id: OTP-15471 Aux Id: ERIERL-229

1.46 Kernel 6.1.1
Fixed Bugs and Malfunctions
Fix bug causing net_kernel process crash on connection attempt from node with name identical to local node.
Own Id: OTP-15438 Aux Id: ERL-781

1.47 Kernel 6.1
Fixed Bugs and Malfunctions
The values all and none are documented as valid values for the Kernel configuration parameter logger_level, but would cause a crash during node start. This is now corrected.
Own Id: OTP-15143

Fix some potentially buggy behavior in how ticks are sent on inter node distribution connections. Tick is now sent to c-node even if there are unsent buffered data, as c-nodes need ticks in order to send reply ticks. The amount of sent data was also calculated wrongly when ticks were suppressed due to unsent buffered data.
Own Id: OTP-15162 Aux Id: ERIERL-191

Non semantic change in dist_util.erl to silence dialyzer warning.
Own Id: OTP-15170

Fixed net_kernel:connect_node(node()) to return true (and do nothing) as it always has before OTP-21.0. Also documented this successful "self connect" as the expected behavior.
Own Id: OTP-15182 Aux Id: ERL-643

The single_line option on logger_formatter would in some cases add an unwanted comma after the association arrows in a map. This is now corrected.
Own Id: OTP-15228

Improved robustness of distribution connection setup. In OTP-21.0 a truly asynchronous connection setup was introduced. This is a further improvement on that work to make the emulator more robust and also able to recover in cases when involved Erlang processes misbehave.
Own Id: OTP-15297 Aux Id: OTP-15279, OTP-15280

Improvements and New Features
A new macro, ?LOG(Level,...), is added. This is equivalent to the existing ?LOG_<LEVEL>(...) macros. A new variant of Logger report callback is added, which takes an extra argument containing options for size limiting and line breaks. Module proc_lib in STDLIB uses this for crash reports. Logger configuration is now checked a bit more for errors.
Own Id: OTP-15132

The socket options recvtos, recvttl, recvtclass and pktoptions have been implemented in the socket modules. See the documentation for the gen_tcp, gen_udp and inet modules. Note that support for these in the runtime system is platform dependent. Especially for pktoptions, which is very Linux specific and obsoleted by the RFCs that defined it.
Own Id: OTP-15145 Aux Id: ERIERL-187

Add logger:set_application_level/2 for setting the logger level of all modules in one application.
Own Id: OTP-15146

1.48 Kernel 6.0.1
Fixed Bugs and Malfunctions
Fixed bug in net_kernel that could cause an emulator crash if certain connection attempts failed. Bug exists since kernel-6.0 (OTP-21.0).
Own Id: OTP-15280 Aux Id: ERIERL-226, OTP-15279

1.49 Kernel 6.0
Fixed Bugs and Malfunctions
Clarify the documentation of rpc:multicall/5.
Own Id: OTP-10551

The DNS resolver, when getting econnrefused from a server, retained an invalid socket so lookup towards the next server(s) also failed.
Own Id: OTP-13133 Aux Id: PR-1557

No resolver backend returns V4Mapped IPv6 addresses any more.
This was inconsistent before, some did, some did not. To facilitate working with such addresses a new function inet:ipv4_mapped_ipv6_address/1 has been added.
*** POTENTIAL INCOMPATIBILITY ***
Own Id: OTP-13761 Aux Id: ERL-503

The type specifications for file:posix/0 and inet:posix/0 have been updated according to which errors file and socket operations should be able to return.
Own Id: OTP-14019 Aux Id: ERL-550

Fix name resolving in IPv6 only environments when doing the initial distributed connection.
Own Id: OTP-14501

os:putenv and os:getenv no longer access the process environment directly and instead work on a thread-safe emulation. The only observable difference is that it's not kept in sync with libc getenv(3) / putenv(3), so those who relied on that behavior in drivers or NIFs will need to add manual synchronization. On Windows this means that you can no longer resolve DLL dependencies by modifying the PATH just before loading the driver/NIF. To make this less of a problem, the emulator now adds the target DLL's folder to the DLL search path.
*** POTENTIAL INCOMPATIBILITY ***
Own Id: OTP-14666

Fixed connection tick toward primitive hidden nodes (erl_interface) that could cause faulty tick timeout in rare cases when payload data is sent to hidden node but not received.
Own Id: OTP-14681

Make group react immediately on an EXIT-signal from shell in e.g. ssh.
Own Id: OTP-14991 Aux Id: PR1705

Calls to gen_tcp:send/2 on closed sockets now return {error, closed} instead of {error, enotconn}.
Own Id: OTP-15001

The included_applications key is no longer duplicated as an application environment variable. Earlier, the included applications could be read both with application:get[_all]_env(...) and application:get[_all]_key(...) functions. Now, it can only be read with application:get[_all]_key(...).
*** POTENTIAL INCOMPATIBILITY ***
Own Id: OTP-15071

Owner and group changes through file:write_file_info, file:change_owner, and file:change_group will no longer report success on permission errors.
*** POTENTIAL INCOMPATIBILITY ***
Own Id: OTP-15118

Improvements and New Features
The function inet:i/0 has been documented.
Own Id: OTP-13713 Aux Id: PR-1645

Typespecs for netns and bind_to_device options have been added to gen_tcp, gen_udp and gen_sctp functions.
Own Id: OTP-14359 Aux Id: PR-1816

New functionality for implementation of alternative carriers for the Erlang distribution has been introduced. This mainly consists of support for usage of distribution controller processes (previously only ports could be used as distribution controllers). For more information see ERTS User's Guide ➜ How to implement an Alternative Carrier for the Erlang Distribution ➜ Distribution Module.
Own Id: OTP-14459

seq_trace labels may now be any erlang term.
Own Id: OTP-14899

The SSL distribution protocol -proto inet_tls has stopped setting the SSL option server_name_indication. New verify funs for client and server in inet_tls_dist have been added, not documented yet, that check the node name if present in the peer certificate. Usage is still also yet to be documented.
Own Id: OTP-14969 Aux Id: OTP-14465, ERL-598

Changed timeout of gen_server calls to auth server from default 5 seconds to infinity.
Own Id: OTP-15009 Aux Id: ERL-601

The callback module passed as -epmd_module to erl has been expanded to be able to do name and port resolving. Documentation has also been added in the erl_epmd reference manual and ERTS User's Guide How to Implement an Alternative Node Discovery for Erlang Distribution.
Own Id: OTP-15086 Aux Id: PR-1694

Included config files specified with a relative path in sys.config are now first searched for relative to the directory of sys.config itself. If not found, they are also searched for relative to the current working directory. The latter is for backwards compatibility.
Own Id: OTP-15137 Aux Id: PR-1838

1.50 Kernel 5.4.3.2

Fixed Bugs and Malfunctions

Non-semantic change in dist_util.erl to silence a dialyzer warning.

Own Id: OTP-15170

1.51 Kernel 5.4.3.1

Fixed Bugs and Malfunctions

Fix some potentially buggy behavior in how ticks are sent on inter-node distribution connections. A tick is now sent to a c-node even if there is unsent buffered data, as c-nodes need ticks in order to send reply ticks. The amount of sent data was calculated wrongly when ticks were suppressed due to unsent buffered data.

Own Id: OTP-15162 Aux Id: ERIERL-191

1.52 Kernel 5.4.3

Fixed Bugs and Malfunctions

Correct a few contracts.

Own Id: OTP-14889

Reject loading modules with names containing directory separators ('/', or '\' on Windows).

Own Id: OTP-14933 Aux Id: ERL-564, PR-1716

Fix bug in handling of the os:cmd/2 option max_size on Windows.

Own Id: OTP-14940

1.53 Kernel 5.4.2

Fixed Bugs and Malfunctions

Add os:cmd/2 that takes an options map as the second argument. Add max_size as an option to os:cmd/2 that controls the maximum size of the result that os:cmd/2 will return.

Own Id: OTP-14823

1.54 Kernel 5.4.1

Fixed Bugs and Malfunctions

Refactored an internal API.

Own Id: OTP-14784

1.55 Kernel 5.4

Fixed Bugs and Malfunctions

Processes which did output after switching jobs (Ctrl+G) could be left forever stuck in the io request.

Own Id: OTP-14571 Aux Id: ERL-472

1.56 Kernel 5.3.1

Fixed Bugs and Malfunctions

The documentation for the 'quiet' option in disk_log:open/1 had an incorrect default value.

Own Id: OTP-14498

1.57 Kernel 5.3

Fixed Bugs and Malfunctions

Function inet:ntoa/1 has been fixed to return lowercase letters according to RFC 5935, which was approved after this function was written. Previously uppercase letters were returned, so this may be a backwards incompatible change depending on how the returned address string is used.

Function inet:parse_address/1 has been fixed to accept %-suffixes on scoped addresses. The addresses do not work yet, but give no parse errors.

*** POTENTIAL INCOMPATIBILITY ***

Own Id: OTP-13006 Aux Id: ERIERL-20, ERL-429

Fix bug where gethostname would incorrectly fail with enametoolong on Linux.

Own Id: OTP-14310

Fix bug causing code:is_module_native to falsely return true when local call trace is enabled for the module.

Own Id: OTP-14390

Add early rejection of invalid node names from distributed nodes.

Own Id: OTP-14426

Improvements and New Features

Since Unicode is now allowed in atoms, an extra check is needed for node names, which are restricted to Latin-1.

Own Id: OTP-13805

Replaced usage of deprecated symbolic time unit representations.

Own Id: OTP-13831 Aux Id: OTP-13735

file:write_file(Name, Data, [raw]) would turn Data into a single binary before writing. This meant it could not take advantage of the writev() system call if it was given a list of binaries and told to write with raw mode.

Own Id: OTP-13909

The performance of disk_log has been somewhat improved in some corner cases (big items), and the documentation has been clarified.

Own Id: OTP-14057 Aux Id: PR-1245

Introduce an event manager in Erlang to handle OS signals. A subset of OS signals may be subscribed to; those are described in the Kernel application.

Own Id: OTP-14186

Sockets can now be bound to a device (SO_BINDTODEVICE) on platforms where it is supported. This has been implemented e.g. to support VRF-Lite under Linux; see VRF and GitHub pull request #1326.

Own Id: OTP-14357 Aux Id: PR-1326

Added option to store shell_history on disk so that the history can be reused between sessions.

Own Id: OTP-14409 Aux Id: PR-1420

One of the ETS tables used by the global module is created with {read_concurrency, true} in order to reduce contention.

Own Id: OTP-14419

Warnings have been added to the relevant documentation about not using insecure distributed nodes in exposed environments.
Own Id: OTP-14425

1.58 Kernel 5.2

Fixed Bugs and Malfunctions

Fix a race during cleanup of os:cmd that would cause os:cmd to hang indefinitely.

Own Id: OTP-14232 Aux Id: seq13275

Improvements and New Features

The functions in the 'file' module that take a list of paths (e.g. file:path_consult/2) will now continue to search in the path if the path contains something that is not a directory.

Own Id: OTP-14191

Two OTP processes that are known to receive many messages are 'rex' (used by 'rpc') and 'error_logger'. Those processes will now store unprocessed messages outside the process heap, which will potentially decrease the cost of garbage collections.

Own Id: OTP-14192

1.59 Kernel 5.1.1

Fixed Bugs and Malfunctions

code:add_pathsa/1 and the command line option -pa both reverse the given list of directories when adding it at the beginning of the code path. This is now documented.

Own Id: OTP-13920 Aux Id: ERL-267

Add lost runtime dependency on erts-8.1. This should have been done in kernel-5.1 (OTP-19.1) as it cannot run without at least erts-8.1 (OTP-19.1).

Own Id: OTP-14003

Type and doc for gen_{tcp,udp,sctp}:controlling_process/2 have been improved.

Own Id: OTP-14022 Aux Id: PR-1208

1.60 Kernel 5.1

Fixed Bugs and Malfunctions

Fix a memory leak when calling seq_trace:get_system_tracer().

Own Id: OTP-13742

Fix for the problem that when adding the ebin directory of an application to the code path, the code:priv_dir/1 function returns an incorrect path to the priv directory of the same application.

Own Id: OTP-13758 Aux Id: ERL-195

Fix code_server crash when adding code paths of two levels.

Own Id: OTP-13765 Aux Id: ERL-194

Respect the -proto_dist switch when connecting to EPMD.

Own Id: OTP-13770 Aux Id: PR-1129

Fixed a bug where init:stop could deadlock if a process with infinite shutdown timeout (e.g. a supervisor) attempted to load code while terminating.

Own Id: OTP-13802

Close stdin of commands run in os:cmd. This is a backwards compatibility fix that restores the behaviour of pre-19.0 os:cmd.

Own Id: OTP-13867 Aux Id: seq13178

Improvements and New Features

Add net_kernel:setopts/2 and net_kernel:getopts/2 to control options for distribution sockets at runtime.

Own Id: OTP-13564

Rudimentary support for DSCP has been implemented in the guise of a tclass socket option for IPv6 sockets.

Own Id: OTP-13582

1.61 Kernel 5.0.2

Fixed Bugs and Malfunctions

When calling os:cmd from a process that has set trap_exit to true, an 'EXIT' message would be left in the message queue. This bug was introduced in kernel vsn 5.0.1.

Own Id: OTP-13813

1.62 Kernel 5.0.1

Fixed Bugs and Malfunctions

Fix an os:cmd bug where creating a background job using & would cause os:cmd to hang until the background job terminated or closed its stdout and stderr file descriptors. This bug has existed since kernel 5.0.

Own Id: OTP-13741

1.63 Kernel 5.0

Fixed Bugs and Malfunctions

The handling of on_load functions has been improved. The major improvement is that if a code upgrade fails because the on_load function fails, the previous version of the module will now be retained.

Own Id: OTP-12593

rpc:call() and rpc:block_call() would sometimes cause an exception (which was not mentioned in the documentation). This has been corrected so that {badrpc,Reason} will be returned instead.

Own Id: OTP-13409

On Windows, for modules that were loaded early (such as the lists module), code:which/1 would return the path with mixed slashes and backslashes, for example: "C:\\Program Files\\erl8.0/lib/stdlib-2.7/ebin/lists.beam". This has been corrected.

Own Id: OTP-13410

Make file:datasync use fsync instead of fdatasync on Mac OS X.

Own Id: OTP-13411

The default chunk size for the fallback sendfile implementation, used on platforms that do not have a native sendfile, has been decreased in order to reduce connectivity issues.
Own Id: OTP-13444

Large file writes (2 GB or more) could fail on some Unix platforms (for example, OS X and FreeBSD).

Own Id: OTP-13461

A bug has been fixed where the DNS resolver inet_res did not refresh its view of the contents of, for example, resolv.conf immediately after start, and hence failed name resolution. Reported and fix suggested by Michal Ptaszek in GitHub pull req #949.

Own Id: OTP-13470 Aux Id: Pull #969

Fix process leak from global_group.

Own Id: OTP-13516 Aux Id: PR-1008

The function inet:gethostbyname/1 now honors the resolver option inet6 instead of always looking up IPv4 addresses.

*** POTENTIAL INCOMPATIBILITY ***

Own Id: OTP-13622 Aux Id: PR-1065

The Status argument to init:stop/1 is now sanity checked to make sure erlang:halt does not fail.

Own Id: OTP-13631 Aux Id: PR-911

Improvements and New Features

Add the {line_delim, byte()} option to inet:setopts/2 and decode_packet/3.

Own Id: OTP-12837

Added os:perf_counter/1. The perf_counter is a very cheap and high resolution timer that can be used to timestamp system events. It does not have monotonicity guarantees, but should on most OSs expose a monotonic time.

Own Id: OTP-12908

The os:cmd call has been optimized on Unix platforms to scale better with the number of schedulers.

Own Id: OTP-13089

New functions that can load multiple modules at once have been added to the 'code' module. The functions are code:atomic_load/1, code:prepare_loading/1, code:finish_loading/1, and code:ensure_modules_loaded/1.

Own Id: OTP-13111

The code path cache feature turned out not to be very useful in practice and has been removed. If an attempt is made to enable the code path cache, there will be a warning report informing the user that the feature has been removed.

Own Id: OTP-13191

When an attempt is made to start a distributed Erlang node with the same name as an existing node, the error message will be much shorter and easier to read than before. Example:

Protocol 'inet_tcp': the name somename@somehost seems to be in use by another Erlang node

Own Id: OTP-13294

The output of the default error logger is somewhat prettier and easier to read. The default error logger is used during start-up of the OTP system. If the start-up fails, the output will be easier to read.

Own Id: OTP-13325

The functions rpc:safe_multi_server_call/2,3 that were deprecated in R12B have been removed.

*** POTENTIAL INCOMPATIBILITY ***

Own Id: OTP-13449

Update the error reasons in dist_util, and show them in the logs if net_kernel:verbose(1) has been called.

Own Id: OTP-13458

Experimental support for Unix Domain Sockets has been implemented. Read the sources if you want to try it out. Example:

gen_udp:open(0, [{ifaddr,{local,"/tmp/socket"}}]).

Documentation will be written after user feedback on the experimental API.

Own Id: OTP-13572 Aux Id: PR-612

Allow heart to be configured to not kill the previous emulator before calling the HEART_COMMAND. This is done by setting the environment variable HEART_NO_KILL to TRUE.

Own Id: OTP-13650

1.64 Kernel 4.2

Fixed Bugs and Malfunctions

code:load_abs([10100]) would bring down the entire runtime system and create a crash dump. Corrected to generate an error exception in the calling process. Also corrected specs for code loading functions and added more information in the documentation about the error reasons returned by code-loading functions.

Own Id: OTP-9375

gen_tcp:accept/2 was not time warp safe, since it used the same time as returned by erlang:now/0 when calculating the timeout. This has now been fixed.

Own Id: OTP-13254 Aux Id: OTP-11997, OTP-13222

Correct the contract for inet:getifaddrs/1.

Own Id: OTP-13335 Aux Id: ERL-95

Add a validation callback for heart. The Erlang heart process may now have a validation callback installed. The validation callback will be executed, if present, before any heartbeat to the heart port program.
If the validation fails, or stalls, no heartbeat will be sent and the node will go down. With the option 'check_schedulers', heart executes a responsiveness check of the schedulers before a heartbeat is sent to the port program. If the responsiveness check fails, the heartbeat will not be performed (as intended).

Own Id: OTP-13250

Clarify documentation of net_kernel:allow/1.

Own Id: OTP-13299

EPMD supports both IPv4 and IPv6. Also affects the oldest supported Windows version.

Own Id: OTP-13364

1.65 Kernel 4.1.1

Fixed Bugs and Malfunctions

Host name lookups through inet_res, the Erlang DNS resolver, are now done case-insensitively according to RFC 4343. Patch by Holger Weiß.

Own Id: OTP-12836

The IPv6 distribution handler has been updated to share code with IPv4 so that all features are supported in IPv6 as well. A bug when using an IPv4 address as hostname has been fixed.

Own Id: OTP-13040

Caching of host names in the internal DNS resolver inet_res has been made character case insensitive for host names according to RFC 4343.

Own Id: OTP-13083

Cooked file mode buffering has been fixed so file:position/2 now works according to POSIX on POSIX systems, i.e. when file:position/2 returns an error the file pointer is unaffected. The Windows system documentation, however, is unclear on this point so the documentation of file:position/2 still does not promise anything.

Cooked file mode file:pread/2,3 and file:pwrite/2,3 have been corrected to honor character encoding like the combination of file:position/2 and file:read/2 or file:write/2 already does. This is probably not very useful since the character representation on the caller's side is latin1, period.

Own Id: OTP-13155 Aux Id: PR#646

Improvements and New Features

Add the {line_delim, byte()} option to inet:setopts/2 and decode_packet/3.

Own Id: OTP-12837

1.66 Kernel 4.1

1.67 Kernel 4.0

Fixed Bugs and Malfunctions

Fix error handling in file:read_line/1 for Unicode contents.

Own Id: OTP-12144

Introduce os:getenv/2, which is similar to os:getenv/1 but returns the passed default value if the required environment variable is undefined.

Own Id: OTP-12342

It is now possible to paste text in JCL mode (using Ctrl-Y) that has been copied in the previous shell session. Also a bug that caused the JCL mode to crash when pasting text has been fixed.

Own Id: OTP-12673

Ensure that each segment of an IPv6 address, when parsed from a string, has a maximum of 4 hex digits.

Own Id: OTP-12773

Improvements and New Features

New BIF: erlang:get_keys/0, lists all keys associated with the process dictionary. Note: erlang:get_keys/0 is auto-imported.

*** POTENTIAL INCOMPATIBILITY ***

Own Id: OTP-12151 Aux Id: seq12521

The inflateInit/2 and deflateInit/6 functions now accept a WindowBits argument equal to 8 and -8.

Own Id: OTP-12564

Map error logger warnings to warning messages by default.

Own Id: OTP-12755

Map beam error logger warnings to warning messages by default. Previously these messages were mapped to the error channel by default.

Own Id: OTP-12781

gen_tcp:shutdown/2 is now asynchronous. This solves the following problems with the old implementation:

It doesn't block when the TCP peer is idle or slow. This is the expected behaviour when shutdown() is called: the caller needs to be able to continue reading from the socket, not be prevented from doing so.

It doesn't truncate the output. The previous version of gen_tcp:shutdown/2 would truncate any outbound data in the driver queue after about 10 seconds if the TCP peer was idle or slow. Worse yet, it didn't even inform anyone that the data had been truncated: 'ok' was returned to the caller, and a FIN rather than an RST was sent to the TCP peer.

*** POTENTIAL INCOMPATIBILITY ***

Own Id: OTP-12797

1.68 Kernel 3.2.0.1

Fixed Bugs and Malfunctions

The 'raw' socket option could not be used multiple times in one call to e.g. any gen_tcp function because only one of the occurrences was used.
This bug has been fixed, and also a small bug concerning propagating error codes from within inet:setopts/2.

Own Id: OTP-11482 Aux Id: seq12872

1.69 Kernel 3.2

Fixed Bugs and Malfunctions

A bug causing an infinite loop in hostname resolving has been corrected. To trigger this bug you would have to enter a bogus search method from a configuration file, e.g. .inetrc. Bug pinpointed by Emil Holmström.

Own Id: OTP-12133

The standard_error process now handles the getopts I/O protocol request correctly and stores its encoding in the same way as standard_io. Also, io:put_chars(standard_error, [oops]) could previously crash the standard_error process. This is now corrected.

Own Id: OTP-12424

Improvements and New Features

Configuration parameters for the Kernel application that allow setting socket options for the distribution sockets have been added. See the Kernel application documentation; parameters 'inet_dist_listen_options' and 'inet_dist_connect_options'.

Own Id: OTP-12476 Aux Id: OTP-12476

1.70 Kernel 3.1

Fixed Bugs and Malfunctions

Make sure to install .hrl files when needed.

Own Id: OTP-12197

Removed the undocumented application environment variable 'raw_files' from the kernel application. This variable was checked (by a call to application:get_env/2) each time a raw file was to be opened in the file module.

Own Id: OTP-12276

A bug has been fixed when using the netns option to gen_udp, which accidentally only worked if it was the last option.

Own Id: OTP-12314

Improvements and New Features

Updated documentation for inet buffer size options.

Own Id: OTP-12296

Introduce the new option 'raw' in the file_info and link_info functions. This option allows the caller not to go through the file server for information about files guaranteed to be local.

Own Id: OTP-12325

1.71 Kernel 3.0.3

Fixed Bugs and Malfunctions

Accept inet:ip_address() in net_adm:names/1.

Own Id: OTP-12154

1.72 Kernel 3.0

1.73

1.74

Erlang/OTP has been ported to the realtime operating system OSE. The port supports both the smp and non-smp emulator. For details around the port and how to get started, see the User's Guide in the ose application. Note that not all parts of Erlang/OTP have been ported.

Notable things that work are: non-smp and smp emulators, OSE signal interaction, crypto, asn1, run_erl/to_erl, tcp, epmd, distribution and most if not all non-OS-specific functionality of Erlang. Notable things that do not work are: udp/sctp, os_mon, erl_interface, binding of schedulers.

Own Id: OTP-11334

Add the {active,N} socket option for TCP, UDP, and SCTP, where N is an integer in the range -32768..32767, to allow a caller to specify the number of data messages to be delivered to the controlling process. Once the socket's delivered message count either reaches 0 or is explicitly set to 0 with inet:setopts/2 or by including {active,0} as an option when the socket is created, the socket transitions to passive ({active, false}) mode and the socket's controlling process receives a message to inform it of the transition. TCP sockets receive {tcp_passive,Socket}, UDP sockets receive {udp_passive,Socket} and SCTP sockets receive {sctp_passive,Socket}.

The socket's delivered message counter defaults to 0, but it can be set using {active,N} via any gen_tcp, gen_udp, or gen_sctp function that takes socket options as arguments, or via inet:setopts/2. New N values are added to the socket's current counter value, and negative numbers can be used to reduce the counter value. Specifying a number that would cause the socket's counter value to go above 32767 causes an einval error. If a negative number is specified such that the counter value would become negative, the socket's counter value is set to 0 and the socket transitions to passive mode. If the counter value is already 0 and inet:setopts(Socket, [{active,0}]) is specified, the counter value remains at 0 but the appropriate passive mode transition message is generated for the socket.
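As an illustration of the {active,N} semantics described above, here is a minimal sketch; the module name, the port number, the batch size 10, and the handle/1 helper are placeholders for this example, not part of the release note:

```erlang
%% Sketch only: accept a TCP connection with an initial active batch of
%% 10 messages, and renew the batch whenever the socket turns passive.
-module(active_n_sketch).
-export([serve/0]).

serve() ->
    {ok, LSock} = gen_tcp:listen(5555, [binary, {active, 10}]),
    {ok, Sock} = gen_tcp:accept(LSock),
    loop(Sock).

loop(Sock) ->
    receive
        {tcp, Sock, Data} ->
            handle(Data),
            loop(Sock);
        {tcp_passive, Sock} ->
            %% The delivered-message counter reached 0 and the socket went
            %% passive; grant another batch of 10 active messages.
            ok = inet:setopts(Sock, [{active, 10}]),
            loop(Sock);
        {tcp_closed, Sock} ->
            ok
    end.

handle(_Data) ->
    ok.  %% placeholder for application-specific processing
```

Because new N values are added to the current counter, renewing with {active, 10} from the {tcp_passive, Sock} handler (where the counter is 0) simply grants the next batch.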
Thanks to Steve Vinoski.

Own Id: OTP-11368

A call to code:purge/1 and code:soft_purge/1 may complete faster or slower depending on the state of the system, while the system as a whole won't be as negatively affected by the operation as before.

Own Id: OTP-11388 Aux Id: OTP-11535, OTP-11648

Add the sync option to file:open/2. The sync option adds the POSIX O_SYNC flag to the open system call on platforms that support the flag or its equivalent, e.g., FILE_FLAG_WRITE_THROUGH on Windows. For platforms that don't support it, file:open/2 returns {error, enotsup} if the sync option is passed in. Thanks to Steve Vinoski and Joseph Blomstedt.

Own Id: OTP-11498

The contract of inet:ntoa/1 has been corrected. Thanks to Max Treskin.

Own Id: OTP-11730

1.75 Kernel 2.16.4.1

Known Bugs and Problems

When using gen_tcp:connect and the fd option with port and/or ip, the port and ip options were ignored. This has been fixed so that if port and/or ip is specified together with fd, a bind is requested for that fd. If port and/or ip is not specified, bind will not be called.

Own Id: OTP-12061

1.76

1.77

1.78 Kernel 2.16.2

Fixed Bugs and Malfunctions

A bug in prim_inet has been corrected. If the port owner was killed at a bad time while closing the socket port, the port could become orphaned, causing port and socket leaks. Reported by Fred Herbert, Dmitry Belyaev and others.

Own Id: OTP-10497 Aux Id: OTP-10562

Erlang source files with non-ASCII characters are now encoded in UTF-8 (instead of latin1).

Own Id: OTP-11041 Aux Id: OTP-10907

Optimization of simultaneous inet_db operations on the same socket by using a lock-free implementation. Impact on the characteristics of the system: improved performance.

Own Id: OTP-11074

The high_msgq_watermark and low_msgq_watermark inet socket options introduced in OTP-R16A could only be set on TCP sockets. These options are now generic and can be set on all types of sockets.

Own Id: OTP-11075 Aux Id: OTP-10336

Fix deep list argument error under Windows in os:cmd/1. Thanks to Aleksandr Vinokurov.

Own Id: OTP-11104

1.79

1.80

formatted strings (Thanks to Serge Aleynikov) Own Id: OTP-10620

A boolean socket option 'ipv6_v6only' for IPv6 sockets has been added. The default value of the option is OS dependent, so applications aiming to be portable should consider using {ipv6_v6only,true} when creating an inet6 listening/destination socket, and if necessary also create an inet socket on the same port for IPv4 traffic. See the documentation.

Own Id: OTP-8928 Aux Id: kunagi-193 [104]

1.81 Kernel 2.15.3

Fixed Bugs and Malfunctions

Ensure 'erl_crash.dump' when asked for it. This will change erl_crash.dump behaviour.

* Not setting ERL_CRASH_DUMP_SECONDS will now terminate beam immediately on a crash without writing a crash dump file.

* Setting ERL_CRASH_DUMP_SECONDS to 0 will also terminate beam immediately on a crash without writing a crash dump file, i.e. the same as not setting the ERL_CRASH_DUMP_SECONDS environment variable.

* Setting ERL_CRASH_DUMP_SECONDS to a negative value will let the beam wait indefinitely on the crash dump file being written.

* Setting ERL_CRASH_DUMP_SECONDS to a positive value will let the beam wait that many seconds on the crash dump file being written.

A positive value will set an alarm/timeout for restart both in beam and in heart if heart is running.

*** POTENTIAL INCOMPATIBILITY ***

Own Id: OTP-10422 Aux Id: kunagi-250 [161]

1.82

Allow mixed IPv4 and IPv6 addresses to sctp_bindx. Also allow mixed address families to bind, since the first address on a multihomed sctp socket must be bound with bind, while the rest are to be bound using sctp_bindx. At least Linux supports adding addresses of mixed families. Make the inet_set_faddress function available also when HAVE_SCTP is not defined, since we use it to find an address for bind to be able to mix ipv4 and ipv6 addresses. Thanks to Tomas Abrahamsson.

Own Id: OTP-10217

1.83

1.84 Kernel 2.15

Fixed Bugs and Malfunctions

Honor the option packet_size for http packet parsing by both TCP sockets and erlang:decode_packet. This gives the ability to accept HTTP headers larger than the default setting, but also to avoid DoS attacks by accepting lines only up to whatever length you wish to allow. For consistency, packet type line also honors the option packet_size. (Thanks to Steve Vinoski)

Own Id: OTP-9389

Correct callback spec in application module. Refine warning about callback specs with extra ranges. Clean up autoimport compiler directives. Fix Dialyzer's warnings in typer. Fix Dialyzer's warning for its own code. Fix bug in Dialyzer's behaviours analysis. Fix crash in Dialyzer. Variable substitution was not generalizing any unknown variables.

Own Id: OTP-9776

Improvements and New Features

An option list argument can now be passed to file:read_file_info/2, file:read_link_info/2 and file:write_file_info/3 to set time type information in the call. Valid options are {time, local}, {time, universal} and {time, posix}. In the case of posix time no conversions are made, which makes the operation a bit faster.

Own Id: OTP-7687

file:list_dir/1,2 will now fill an entire buffer with filenames from the efile driver before sending it to an Erlang process. This will speed up this file operation in most cases.

Own Id: OTP-9023

gen_sctp:open/0-2 may now return {error,eprotonosupport} if SCTP is not supported. gen_sctp:peeloff/1 has been implemented and creates a one-to-one socket, which is also supported now.

*** POTENTIAL INCOMPATIBILITY ***

Own Id: OTP-9239

Sendfile has been added to the file module's API. sendfile/2 is used to read data from a file and send it to a TCP socket using a zero-copy mechanism if available on that OS.
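A minimal sketch of the file:sendfile/2 call described above; the host, port, socket options, and file path are placeholders for this example:

```erlang
%% Sketch only: stream a local file to an already-reachable TCP endpoint.
send_file_over_tcp() ->
    {ok, Sock} = gen_tcp:connect("localhost", 5555,
                                 [binary, {packet, 0}, {active, false}]),
    %% file:sendfile/2 reads the file and writes it to the socket, using
    %% the OS zero-copy path (e.g. sendfile(2)) when one is available.
    {ok, _BytesSent} = file:sendfile("/tmp/payload.bin", Sock),
    gen_tcp:close(Sock).
```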
Thanks to Tuncer Ayaz and Steve Vinoski for the original implementation.

Own Id: OTP-9240

1.85

1.86

1.87

Sanitize the specs of the code module. After the addition of unicode_binary() to the file:filename() type, dialyzer started complaining about erroneous or incomplete specs in some functions of the 'code' module. The culprit was hard-coded information in erl_bif_types for functions of this module, which was not updated. Since these functions have proper specs these days and code duplication (pun intended) is never a good idea, their type information was removed from erl_bif_types. While doing this, some erroneous comments were fixed in the code module and it was also made sure that the code now runs without dialyzer warnings even when the -Wunmatched_returns option is used. Some cleanups were applied to erl_bif_types too.

Own Id: OTP-9100

- Add spec for function that does not return
- Strengthen spec
- Introduce types to avoid duplication in specs
- Add specs for functions that do not return
- Add specs for behaviour callbacks
- Simplify two specs

Own Id: OTP-9127

1.88 Kernel 2.14.2

Improvements and New Features

The Erlang VM now supports Unicode filenames. The feature is turned on by default on systems where Unicode filenames are mandatory (Windows and Mac OS X), but can be enabled on other systems with the '+fnu' emulator option. Enabling the Unicode filename feature on systems where it is not the default is however considered experimental and not to be used for production.

Together with the Unicode file name support, the concept of "raw filenames" is introduced, which means filenames provided without implicit Unicode encoding translation. Raw filenames are provided as binaries, not lists. For further information, see the stdlib user's guide and the chapter about using Unicode in Erlang. Also see the file module manual page.

*** POTENTIAL INCOMPATIBILITY ***

Own Id: OTP-8887

There is now a new function inet:getifaddrs/0, modeled after the C library function getifaddrs() on BSD and Linux, that reports existing interfaces and their addresses on the host. This replaces the undocumented and unsupported inet:getiflist/0 and inet:ifget/2.

Own Id: OTP-8926

1.89 Kernel 2.14.1.1

Fixed Bugs and Malfunctions

In embedded mode, on_load handlers that called code:priv_dir/1 or other functions in code would hang the system. Since the crypto application now contains an on_load handler that calls code:priv_dir/1, including the crypto application in the boot file would prevent the system from starting. Also extended the -init_debug option to print information about on_load handlers being run, to facilitate debugging.

Own Id: OTP-8902 Aux Id: seq11703

1.90

For a socket in the HTTP packet mode, the return value from gen_tcp:recv/2,3 if there is an error in the header will be {ok,{http_error,String}} instead of {error,{http_error,String}}, to be consistent with ssl:recv/2,3.

*** POTENTIAL INCOMPATIBILITY ***

Own Id: OTP-8831

Improvements and New Features

Even when configuring erlang with --enable-native-libs, the native code for modules loaded very early (such as lists) would not get loaded. This has been corrected. (Thanks to Paul Guyot.)

Own Id: OTP-8750

The undocumented function inet:ifget/2 has been improved to return the interface hardware address (MAC) on platforms supporting getaddrinfo() (such as BSD unixes). Note it still does not work on all platforms, for example neither Windows nor Solaris, so the function is still undocumented. Buffer overflow and field init bugs for inet:ifget/2 and inet:getservbyname/2 have also been fixed. Thanks to Michael Santos.

Own Id: OTP-8816

1.91

1.92 Kernel 2.13.5.3

Fixed Bugs and Malfunctions

A bug introduced in Kernel 2.13.5.2 has been fixed.

Own Id: OTP-8686 Aux Id: OTP-8643

1.93 Kernel 2.13.5.2

Fixed Bugs and Malfunctions

Under certain circumstances the net kernel could hang. (Thanks to Scott Lystig Fritchie.)

Own Id: OTP-8643 Aux Id: seq11584

1.94 Kernel 2.13.5.1

Fixed Bugs and Malfunctions

A race condition in os:cmd/1 could cause the caller to get stuck in os:cmd/1 forever.

Own Id: OTP-8502

1.95 Kernel 2.13.5

Fixed Bugs and Malfunctions

A race bug affecting pg2:get_local_members/1 has been fixed. The bug was introduced in R13B03.

Own Id: OTP-8358

The resolver routines failed to look up the node's own name as hostname if the OS native resolver was erroneously configured; bug reported by Yogish Baliga, now fixed. The resolver routines now try to parse the hostname as an IP string, as most OS resolvers do, unless the native resolver is used. The DNS resolver inet_res and file resolver inet_hosts now do not read OS configuration files until they are needed. Since the native resolver is the default, in most cases they are never needed. The DNS resolver's automatic updating of OS configuration file data (/etc/resolv.conf) now uses the 'domain' keyword as the default search domain if there is no 'search' keyword.

Own Id: OTP-8426 Aux Id: OTP-8381

Improvements and New Features

The expected return value for an on_load function has been changed. (See the section about code loading in the Reference Manual.)

*** POTENTIAL INCOMPATIBILITY ***

Own Id: OTP-8339

directories can also be listed. For short, the top directories are virtual if they do not exist.

Own Id: OTP-8387

1.96

1.97

A TCP socket with option {packet,4} could crash the emulator if it received a packet header with a very large size value (>2Gb). The same bug caused erlang:decode_packet/3 to return faulty values. (Thanks to Georgos Seganos.)

Own Id: OTP-8102

The file module now has a read_line/1 function similar to io:get_line/2, but with byte-oriented semantics. The function file:read_line/1 works for raw files as well, but for good performance it is recommended to use it together with the 'read_ahead' option for raw file access.
Own Id: OTP-8108 1.98 Added functionality to get higher resolution timestamp from system. The erlang:now function returns a timestamp that's not always consistent with the actual operating system time (due to resilience against large time changes in the operating system). The function os:timestamp/0 is added to get a similar timestamp as the one being returned by erlang:now, but untouched by Erlangs time correcting and smoothing algorithms. The timestamp returned by os:timestamp is always consistent with the operating systems view of time, like the calendar functions for getting wall clock time, but with higher resolution. Example of usage can be found in the os manual page. Own Id: OTP-7971 1.99.100 The format of the string returned by erlang:system_info(system_version) (as well as the first message when Erlang is started) has changed. The string now contains the both the OTP version number as well as the erts version number. Own Id: OTP-7649.101 Kernel 2.12.5.1 Fixed Bugs and Malfunctions When chunk reading a disk log opened in read_only mode, bad terms could crash the disk log process. Own Id: OTP-7641 Aux Id: seq11090 Calling gen_tcp:send() from several processes on socket with option send_timeout could lead to much longer timeout than specified. The solution is a new socket option {send_timeout_close,true} that will do automatic close on timeout. Subsequent calls to send will then immediately fail due to the closed connection. Own Id: OTP-7731 Aux Id: seq11161 1.102 io:get_line/1 when reading from standard input is now substantially faster. There are also some minor performance improvements in io:get_line/1 when reading from any file opened in binary mode. (Thanks to Fredrik Svahn.) Own Id: OTP-7542.103 Kernel 2.12.4 Fixed Bugs and Malfunctions Large files are now handled on Windows, where the filesystem supports it. Own Id: OTP-7410 Improvements and New Features. 
*** POTENTIAL INCOMPATIBILITY ***
Own Id: OTP-7404

Setting {active,once} for a socket (using inet:setopts/2) is now specially optimized (because the {active,once} option is typically used much more frequently than other options).
Own Id: OTP-7520

Kernel 2.12.2.1

Improvements and New Features

os:cmd/1 on unix platforms now uses /bin/sh as the shell instead of looking for sh in the PATH environment.
Own Id: OTP-7283

Kernel 2.12.2

Fixed Bugs and Malfunctions

A bug caused by a race condition involving disk_log and pg2 has been fixed.
Own Id: OTP-7209 Aux Id: seq10890

The beta testing module gen_sctp now supports active mode as stated in the documentation. Active mode is still rather untested, and there are some issues about what the right semantics for gen_sctp:connect/5 should be. In particular: should it be blocking, non-blocking, or choosable? There is a high probability it will change semantics in a (near) future patch. Try it, give comments and send in bug reports!
Own Id: OTP-7225

Updated the documentation for the erlang:function_exported/3 and io:format/2 functions to no longer state that those functions are kept mainly for backwards compatibility.
Own Id: OTP-7186

A process executing the processes/0 BIF can now be preempted by other processes during its execution, in order to disturb the rest of the system as little as possible. The returned result is, of course, still a consistent snapshot of existing processes at a time during the call to processes/0. The documentation of the processes/0 BIF and the is_process_alive/1 BIF has been updated in order to clarify the difference between an existing process and a process that is alive.
Own Id: OTP-7213

tuple_size/1 and byte_size/1 have been substituted for size/1 in the documentation.
Own Id: OTP-7244

Kernel 2.12.1.2

Improvements and New Features

The {allocator_sizes, Alloc} and alloc_util_allocators arguments are now accepted by erlang:system_info/1.
For more information see the erlang(3) documentation.
Own Id: OTP-7167

Kernel 2.12.1.1

Fixed Bugs and Malfunctions

Fixed a problem in group that could cause the ssh server to lose answers or hang.
Own Id: OTP-7185 Aux Id: seq10871

Kernel 2.12.1

Fixed Bugs and Malfunctions

file:read/2 and file:consult_stream/1,3 did not use an empty prompt on I/O devices. This bug has now been corrected.
Own Id: OTP-7013

The sctp driver has been updated to work against newer lksctp packages, e.g. 1.0.7, that use the API spelling change adaption -> adaptation. Older lksctp (1.0.6) still works. The Erlang API in gen_sctp.erl and inet_sctp.hrl now spells 'adaptation' regardless of the underlying C API.
*** POTENTIAL INCOMPATIBILITY ***
Own Id: OTP-7120

Monitor messages produced by the system monitor functionality, and garbage collect trace messages, could contain erroneous heap and/or stack sizes when the actual heaps and/or stacks were huge. As of erts version 5.6 the large_heap option to erlang:system_monitor/[1,2] has been modified. The monitor message is sent if the sum of the sizes of all memory blocks allocated for all heap generations is equal to or larger than the specified size. Previously the monitor message was sent if the memory block allocated for the youngest generation was equal to or larger than the specified size.
*** POTENTIAL INCOMPATIBILITY ***
Own Id: OTP-6974 Aux Id: seq10796

inet:getopts/2 returned random values on Windows Vista.
Own Id: OTP-7003

Improvements and New Features

The functions io:columns/0, io:columns/1, io:rows/0 and io:rows/1 have been added to allow the user to get information about the terminal geometry. The shell takes some advantage of this when formatting output. For regular files and other I/O devices where height and width are not applicable, the functions return {error,enotsup}.
Potential incompatibility: If one has written a custom I/O handler, the handler has to either return an error or take care of I/O requests regarding terminal height and width. Usually that is no problem, as I/O handlers, as a rule of thumb, should give an error reply when receiving unknown I/O requests instead of crashing.
*** POTENTIAL INCOMPATIBILITY ***
Own Id: OTP-6933

Kernel 2.11.5.2

Fixed Bugs and Malfunctions

The kernel parameter dist_auto_connect once could fail to block a node if massive parallel sends were issued during a transient failure of network communication.
Own Id: OTP-6893 Aux Id: seq10753

Kernel 2.11.5

Fixed Bugs and Malfunctions

Corrected protocol layer glue for the socket options SO_LINGER, SO_SNDBUF and SO_RCVBUF, for SCTP.
Own Id: OTP-6625 Aux Id: OTP-6336

The behaviour of the inet option {active,once} on peer close is improved and documented.
Own Id: OTP-6681

The inet option send_timeout for connection-oriented sockets has been added to allow for timeouts in communicating send requests to the underlying TCP stack.
Own Id: OTP-6684 Aux Id: seq10637 OTP-6681

Minor Makefile changes.
Own Id: OTP-6689 Aux Id: OTP-6742

The documentation of process_flag(priority, Level) has been updated, see the erlang(3) documentation.
Own Id: OTP-6715

Kernel 2.11.4.2

Improvements and New Features

process_flag/2 accepts the new flag sensitive.
Own Id: OTP-6592 Aux Id: seq10555

Kernel 2.11.4.1

Fixed Bugs and Malfunctions

A bug in gen_udp:open that broke the 'fd' option has been fixed.
Own Id: OTP-6594 Aux Id: seq10619

Improvements and New Features

An interface towards the SCTP Socket API Extensions has been implemented. It is an Open Source patch courtesy of Serge Aleynikov and Leonid Timochouk. The Erlang code parts have been adapted by the OTP team, changing the Erlang API somewhat. The Erlang interface consists of the module gen_sctp and an include file, -include_lib("kernel/include/inet_sctp.hrl")., for option record definitions.
The gen_sctp module is documented. The delivered Open Source patch, before the OTP team rewrites, was written according to and was claimed to work fine, tested on Linux Fedora Core 5.0 (kernel 2.6.15-2054 or later) and on Solaris 10 and 11. The OTP team rewrites used the same standard document but might have accidentally broken some functionality. If so, it will soon be patched to a working state. The tricky parts in C and the general design have essentially not changed. During the rewrites the code was hand tested on SuSE Linux Enterprise Server 10, and briefly on Solaris 10. Feedback on code and docs is very much appreciated.

The SCTP interface is in beta state. It has only been hand tested and has no automatic test suites in OTP, meaning everything is most certainly not tested. Socket active mode is broken. IPv6 is not tested. The documentation has been reworked due to the API changes, but has not been proofread after this.

Thank you from the OTP team to Serge Aleynikov and Leonid Timochouk for a valuable contribution. We hope we have not messed it up too much.
Own Id: OTP-6336

A {minor_version,Version} option is now recognized by term_to_binary/2. {minor_version,1} will cause floats to be encoded in an exact and more space-efficient way compared to the previous encoding.
Own Id: OTP-6434

Previously, unlink/1 and erlang:demonitor/2 behaved completely asynchronously. This had one undesirable effect, though: you could never know when you were guaranteed not to be affected by a link that you had unlinked or a monitor that you had demonitored. The new behavior of unlink/1 and erlang:demonitor/2 can be viewed as two operations performed atomically: asynchronously send an unlink signal or a demonitor signal, and ignore any future results of the link or monitor.

NOTE: This change can cause some obscure code to fail which previously did not. For example, the following code might hang:

    Mon = erlang:monitor(process, Pid),
    %% ...
    exit(Pid, bang),
    erlang:demonitor(Mon),
    receive
        {'DOWN', Mon, process, Pid, _} -> ok
        %% We were previously guaranteed to get a down message
        %% (since we exited the process ourselves), so we could
        %% in this case leave out:
        %% after 0 -> ok
    end,

*** POTENTIAL INCOMPATIBILITY ***
Own Id: OTP-5772

If there were user-defined variables in the boot script, and their values were not provided using the -boot_var option, the emulator would refuse to start with a confusing error message. Corrected to show a clear, understandable message.

The prim_file module was modified to not depend on the lists module, to make it possible to start the emulator using a user-defined loader. (Thanks to Martin Bjorklund.)
Own Id: OTP-5828 Aux Id: seq10151

Kernel 2.10.11.1

Fixed Bugs and Malfunctions

Timers could sometimes time out too early. This bug has now been fixed. Automatic cancellation of timers created by erlang:send_after(Time, pid(), Msg) and erlang:start_timer(Time, pid(), Msg) has been introduced. Timers created with the receiver specified by a pid will automatically be cancelled when the receiver exits. For more information see the erlang(3) man page. In order to be able to maintain a larger number of timers without increasing the maintenance cost, the internal timer wheel and bif timer table have been enlarged. Also, a number of minor bif timer optimizations have been implemented.
Own Id: OTP-5795 Aux Id: OTP-5090, seq8913, seq10139, OTP-5782

Improvements and New Features

Documentation improvements:
- documentation for erlang:link/1 corrected
- command line flag -code_path_cache added
- erl command line flags clarifications
- net_kernel(3) clarifications
Own Id: OTP-5847

Kernel 2.10.10.1

Fixed Bugs and Malfunctions

The native resolver has gotten a control API for extended debugging and soft restart. It is:

    inet_gethost_native:control(Control)
    Control = {debug_level,Level} | soft_restart
    Level = integer() in the range 0-4.
Own Id: OTP-5751 Aux Id: EABln25013

Kernel 2.10.10

Fixed Bugs and Malfunctions

If several processes (at the same node) simultaneously tried to start the same distributed application, this could lead to application:start returning an erroneous value, or even hanging.
Own Id: OTP-5606 Aux Id: seq9838

The possibility to have comments following the list of tuples in a config file (a file specified with the -config flag) has been added.
Own Id: OTP-5661 Aux Id: seq10003

Kernel 2.10.9

Fixed Bugs and Malfunctions

'erl -config sys.config' would fail to start if the sys.config file did not contain any whitespace at all after the dot. (Thanks to Anders Nygren.)
Own Id: OTP-5543

A bug regarding tcp sockets which resulted in a hanging gen_tcp:send/2 has been corrected. To encounter this bug you needed one process that read from a socket, and one that wrote more data than the reader read out, so that the sender got suspended, and then the reader closed the socket. (Reported and diagnosed by Alexey Shchepin.)

Corrected a bug in the (undocumented and unsupported) option {packet,http} for gen_tcp. (Thanks to Claes Wikstrom and Luke Gorrie.)

Updated the documentation regarding the second argument to gen_tcp:recv/2, the Length to receive.
Own Id: OTP-5582 Aux Id: seq9839

Improvements and New Features

At startup, the Erlang resolver hosts table was used to look up the name of the local (and possibly stand-alone) host. This was incorrect. The configured resolver method is now used for this purpose.
Own Id: OTP-5393

Kernel 2.10.6

Improvements and New Features

The c option for the +B flag has been introduced, which makes it possible to use Ctrl-C (Ctrl-Break on Windows) to interrupt the shell process rather than to invoke the emulator break handler. All new +B options are also supported on Windows (werl) as of now. Furthermore, Ctrl-C on Windows has now been reserved for copying text (what Ctrl-Ins was used for previously). Ctrl-Break should be used for break handling.
Lastly, the documentation of the system flags has been updated.
Own Id: OTP-5388
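The difference between the two clocks described in the OTP-7971 item above can be illustrated with a small sketch. The module and function names here are mine, not from the notes; both functions return {MegaSecs, Secs, MicroSecs} tuples:

```erlang
%% timestamp_demo.erl -- illustrative sketch, not from the release notes.
-module(timestamp_demo).
-export([os_micros/0]).

%% os:timestamp/0 always follows the operating system's view of wall
%% clock time, while erlang:now/0 may be smoothed and corrected by the
%% runtime (and is forced to be strictly increasing between calls).
os_micros() ->
    {Mega, Sec, Micro} = os:timestamp(),
    (Mega * 1000000 + Sec) * 1000000 + Micro.
```

Because os:timestamp/0 tracks the OS clock, it is the right choice for wall-clock measurements, while erlang:now/0 remains useful where monotonicity matters.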
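The {active,once} socket option that appears in several items above (OTP-7520, OTP-6681) is typically used in a receive loop that re-arms the option after every delivered packet. This is an illustrative sketch under my own module and function names, not code from the notes:

```erlang
%% active_once_loop.erl -- illustrative sketch; names are my own.
-module(active_once_loop).
-export([loop/1]).

%% With {active,once} the socket delivers exactly one {tcp, Sock, Data}
%% message and then falls back to passive mode, so the loop must re-arm
%% the option before each receive. This gives simple flow control: the
%% peer cannot flood the mailbox faster than the loop consumes packets.
loop(Sock) ->
    ok = inet:setopts(Sock, [{active, once}]),
    receive
        {tcp, Sock, Data} ->
            io:format("received ~p bytes~n", [byte_size(Data)]),
            loop(Sock);
        {tcp_closed, Sock} ->
            closed;
        {tcp_error, Sock, Reason} ->
            {error, Reason}
    end.
```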
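The hang shown in the OTP-5772 example above can be avoided with erlang:demonitor/2 and the flush option, which atomically removes any 'DOWN' message already sitting in the mailbox. The wrapper function below is my own illustration, not part of the notes:

```erlang
%% demonitor_flush.erl -- illustrative sketch; the wrapper name is mine.
-module(demonitor_flush).
-export([kill_and_forget/1]).

%% Monitor, kill, then demonitor with [flush]: the flush option also
%% removes a pending {'DOWN', Mon, ...} message from the mailbox, so no
%% receive clause is needed afterwards and the code cannot hang waiting
%% for a 'DOWN' message that will never arrive.
kill_and_forget(Pid) ->
    Mon = erlang:monitor(process, Pid),
    exit(Pid, bang),
    true = erlang:demonitor(Mon, [flush]),
    ok.
```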
https://www.erlang.org/doc/apps/kernel/notes.html
Search - "irony" - I told my girlfriend about Devrant. Now she's hooked onto it and doesn't pay attention to me. And now I'm ranting about it on Devrant. Oh the irony! T_T12 - - - - - Fuck You blue car driver who is texting and just cut in front of me! I accidentally double tapped the wrong rant!4 - - - Just had a feedback session. Apparently one of my colleagues has a problem with me. The irony is - I didn't even know the guy's fucking name until now. Relatable right?11 - - - A guy who doesn't work on our project anymore, but still checks on us regularly just did this: - reported a bug - started explaining what's causing the issue - realised the problem was in a piece of code written by him before - fixed his own mistake, committed it and created a pull request We all had a good laugh.6 - - - - *me searching for jobs* *types in 'junior backend developer'* First result: Junior Frontend Developer. *big facepalm* Yeah I understand that it might just be some kinda algorithm that filters on words or whatever but the irony was real!13 - - - - what an irony! manager using MBP 2017 for browsing and presentation. graphic designer is working on Windows XP SP2 dual core. coreldraw -_-5 - - - Although I know it's nothing, the irony is real. My room is connected to the power group together with 5 other people. Watching a mass surveillance documentary. Suddenly my power goes out, the rest stays on. I know it must be a glitch in the newly installed power management system but damn the timing couldn't be better!4 - - Do you agree website to use cookies to save your preferences? - No * uses cookies to remember selection3 - So at the beginning there was assembly. But people wanted something more highlevel, so C was born. But writing big projects was a pain so C++ was invented. But then the web started to become more popular and C++ wasn't really good at that, so Rasmus Lerdorf created PHP.
And then everything moved to the client and should be loaded dynamically for better UX, so everyone writes JS. But JS doesn't have good performance, so people created web assembly compiled from C... Am I the only one who sees the irony in this?7 - - Subtle irony: Tons of people rant about how bad PHP is on devRant. devRant is built using PHP. Love it4 - Irony: A robot that clicks for me on the "I'm not a robot" buttons. (Firefox Addon by prowebmasters)12 - - Hey fellow selfish Millennials who want to be paid for their valuable work, how's the job hunting going? I'm a few days into hunting for some interesting part time work and I'm already encountering some real gems... Pinnacle of irony with this one. Anyone got some horrific tales to share? This is a safe space for your anguish to finally be released, I promise.15 - - - - - Lately, Namecheap has been forcing their users to change passwords once every six months. Otherwise, they bombard you with annoying popups. When I finally changed mine, this is how they did it on their end. I just can't deal with the irony of this whole situation...11 - I'm a backender. I fucking hate everything relating to designing, UI/UX designing and especially frontending. I can't stand it when interfaces look bad/are off, have bugs and so on. I just can't stand that stuff but the irony is real 😅5 - - - - The story of Netscape and Internet Explorer really proves the irony of fate! And how life will come back to bite you. Back in the 90's you had to pay for browsers like Netscape (it was called a navigator but same thing) but after Microsoft released IE for free with your windows copy in 2002 it crushed Netscape and nobody used it anymore (the graph below). But!
Netscape wouldn't give up; before the company died it made legal accusations against Microsoft and Bill Gates and made them pay for what they did, but Netscape was too far gone and already falling apart, so they decided on a self-detonation (I guess that's what they thought, being in that tight corner) and released the code as open source, which would later get taken by Mozilla and become the code base for Firefox. Now look at how much better Firefox is and how nobody uses the shitty IE! Kind of reminds me of the scene from watchmen where Rorschach was in prison and said the best sentence in the movie "I'm not locked in here with you. You're locked in here WITH ME!"21 - "Open Source is life", he tweets. Gets a job offer from Microsoft. Starts salary negotiation without thinking twice. #irony24 - When the purpose of your app starts with a "C" then just replace it with a "K" and tada: You have a cool new hipster name for it! Example: Collaborate -> Kollaborate How cool is that?!!20 - - - Nobody cares what the code does when it works. Everybody cares about it when it doesn't work. You just can't win. 😤3 - - - > Amazon like website - 9 dollars > Webdev homework. - 50 dollars > Spelling mistakes. - have no price Freelancing can be so bizarre, I'm 50 bucks wealthier now tho.2 - - Apple asked its employees not to leak information in a memo. Guess what? The memo got leaked. Lol2 - -.34 - - - - - - Get an email from Twitter about updating my account security, but that's not even my account Oh the blissful irony1 - - - Lurker here. Just wanted to highlight the irony of specializing in human computer interaction and automation, while people avoid interaction at work.5 - Irony is when I just interviewed a person in my current workplace right after giving an interview for a new job in another place1 - My Nexus 6P died yesterday.. A new battery that I ordered for it months ago, arrived.. today. 4700mAh. If only I could've tested it.
This feels so much like it was just to rub it in a little bit further that that Nexus is gone forever.. just before I could fix the first shitstain, the second one occurred already... Fuck me :')9 - - - Found this online while searching for something about shitty existing code that I just didn't understand. Irony! - - - Employer: I want to make a search engine but only for our products. Me: Sure. It's called an eshop. Employer: You know that eshops are not engines right? Me: Technology has changed the past few years. (hidden irony) Employer: I guess that's geeky stuff. Tell me more about this. Me: First, you need to upgrade your flash eshop. Employer: (frustrated) You IT people always want to do things your way, aren't you? Nevermind, let's get to business, how can I make my site better?1 - Arrrr not you too Firefox 😶 and Mr Robot . ... Wtf is wrong with both parties marketing team. If this is not irony, wtf is. - - - -?32 - -).3 - As stated in a previous story, I just started an internship using angular and am learning it on the job. The other day, one of the admins posted an issue in gitlab about how easy it was to delete user accounts via the front end. He wanted someone to add further confirmation to prevent accidentally deleting anyone. Literally just had to hit the X icon and poof they're gone. I was like, I can do that! Of course, as I was looking at the platforms account page, accidentally deleted that admins account 😅 He thanked me for resolving the issue, and it became a joke around the office about the irony of the situation.2 -.11 - - - - so yeah let's have conference about security but its perfectly fine to have registrations over non-secure connection!4 - That fucking ironic time when all you need to make money is Internet connection but you have no money at all to pay for it.6 - Oh the irony. Translation: Warning The certificate of this website is not trustworthy. Proceed anyway?2 - - - - - - Go to business school to get a Finance degree... 
Since you already know how to code, why do you need study CS? Finance is hot now, make $$$$$$4 - - It's ironic how I could seem to get ms Silverlight working on windows 10 and now I got it working in Linux through wine Seriously anyone who uses Silverlight in his websites or web applications should burn in hell7 - Me: "Showtime!" Windows: "LOL, NOPE!" The irony in this rant is that I just installed Linux in a dualboot environment and was eager to start setting up the new OS. For some reason, Grub was not recognized and Windows started automatically... 😥5 - - Windows is now being developed with git. I cant believe that. This is the BIGGEST IRONY EVER! Please tell me its a joke! - - - - - - - So my Product Manager girlfriend just said, "I have a great idea for an app that people will really want. Can you build it for me?". She did not see the irony - The irony when you move your presentation on data safety onto a drive that is likely to fail soon.2 - Nothing like taking a company IT security training that requires Flash. The first step to be able to run the training? Override your browser's security setting to allow Flash to be able to run. Anyone else see the irony here?1 - - - So on a sign up website for a programming competition they ask you to choose your screen size to fit the webpage.... Oh the irony1 - Based on what I learned so far in the UX course, the UX textbook we learn from is utter shit. Who the fuck thought that using huge blocks of cyan color is a good idea?3 - I always feel the people who post hate about Java are .Net Developers and then I like to think they are also the ones who complain about Windows just so I can have that moment of irony.7 - My 2nd year university project. Everytime I started to work on my module, someone screwed things up on Github somehow and I was the one fixing it. That was the last time I decided to say bye to group projects and offer to do the whole work by myself. But oh the irony...2 - Today we had our first standup. 
We sat for all 90+ mins of it. The irony. Maybe the standing part isn’t that important? Or maybe it went for 90 mins because we weren’t standing? Sigh. Save me.8 - - So being a fan of Silicon Valley i found out there's a real startup company with a Voice-controlled tablet device called "Nucleus". Irony is they are suing the same company which invested in it and then copied it's product....(Amazon - I love the irony that the devrant stress ball my co-worker just received is the beginning of my stress.... bounce bounce bounce AHHHH!2 - - - - API Documentation: All API request should be made over https connections. Me: Ok, (sees url bar), SECURE, good! (sees curl code) curl -X GET '' Me: (gasps) huh? (heads to) Me: Ok, (sees address bar) NOT SECURE . . . . . (long silence)5 - My company has been looking for a project time tracking and reporting app/website for 2 years but are too cheap to buy one. So they want me to develop one to their specifications which are forever changing. Let's say I've intentionally wasted so much time "perfecting it". it's been done for 2 months.7 - "_rootAccess, you need to stop letting 'freelancer' beat you to the office. You set the standard." -my boss missing the irony that he's telling me this after showing up 30m late... -_- - - - -!) - the irony is most of us work to live and those who live to work create something which we dreamed of7 - That's it, where do I send the bill, to Microsoft? Orange highlight in image is my own. As in ownly way to see that something wasn't right. Oh but - Wait, I am on Linux, so I guess I will assume that I need to be on internet explorer to use anything on microsoft.com - is that on the site somewhere maybe? Cause it looks like hell when rendered from Chrome on Ubuntu. Yes I use Ubuntu while developing, eat it haters. FUCK. This is ridiculous - I actually WANT to use Bing Web Search API. I actually TRIED giving up my email address and phone number to MS. 
If you fail the I'm not a robot, or if you pass it, who knows, it disappears and says something about being human. I'm human. Give me free API Key. Or shit, I'll pay. Client wants to use Bing so I am using BING GODDAMN YOU. Why am I so mad? BECAUSE THIS. Oauth through github, great alternative since apparently I am not human according to microsoft. Common theme w them, amiright? So yeah. Let them see all my githubs. Whatever. Just GO so I can RELAX. Rate limit fuck shit workaround dumb client requirements google can eat me. Whats this, I need to show my email publicly? Verification? Sure just go. But really MS, this looks terrible. If I boot up IE will it look any better? I doubt it but who knows I am not looking at MS CSS. I am going into my github, making it public. Then trying again. Then waiting. Then verifying my email is shown. Great it is hello everyone. COME ON MS. Send me an email. Do something. I am trying to be patient, but after a few minutes, I revoke access. Must have been a glitch. Go through it again, with public email. Same ugly almost invisible message. Approaching a billable hour in which I made 0 progress. So, lets just see, NO EMAIL from MS, Yes it appears in my GitHub, but I have no way to log into MS. Email doesnt work. OAuth isn't picking it up I guess, I don't even care to think this through. The whole point is, the error message was hard to discover, seems to be inaccurate, and I can't believe the IRONY or the STUPIDITY (me, me stupid. Me stupid thinking I could get working doing same dumb thing over and over like caveman and rock). Longer rant made shorter, I cant come up with a single fucking way to get a free BING API Key. So forget it MS. Maybe you'll email me tomorrow. Maybe Github was pretending to be Gitlab for a few minutes. Maybe I will send this image to my client and tell him "If we use Bing, get used to seeing hard to read error messages like this one". 
I mean that's why this is so frustrating anyhow - I thought the Google CSE worked FINE for us :/ - - DevRant is so purely dedicated to ranting that if one has to talk about coding and development, he/she should tag it "off-topic".2 -37 - - I am currently working on a project with a team of 5. I like working at night. After committing my code, I sleep at 6. My team on waking up decides to change the UI and I have to start over again. The irony is: None of them are working with me on the frontend! Feels like I am stuck on a while(true) loop.2 - - - If you saw my last rant, you'll know how much I hate Calculus. I decided instead of trying to learn this foreign topic, I'd instead translate it into a language I DO understand: C. The irony is that we use Calculus so we can learn to code easier, but I'm using code to learn Calculus easier. Funny if you ask me.1 - - - Decided to install Linux for dev. Already crashed during install. Might as well just stick to Windows.4 - - - Okay, honst question: What the fuck is up with all that self deprecation? I am not talking about the usual irony that comes with certain stereotypes about being a developer. I am talking about people telling themselves that they are unable to socialize, find a girlfriend or generally justifying bad things just because they belong to a certain group. It's not the 80s. Software devs and nerds in general are not all social outcasts anymore. I don't understand how some people can just "accept their fate as a dev" and act as if anything is keeping them away from social success. What's your take on this issue?17 - - Sometimes it feel like I'm being awkward, not awkward to normal people but awkward to awkward people. Something Like double awkward. The irony is that, I fucking know I'm being super weird. I just prey for that moment to pass as soon as possible.1 - - - - As both a developer and consumer of a cellular phone service, the latest Sprint slogan "works for me" is unintentionally hilarious. 
- After a while of functioning as a dev, I've learned one of many lessons: The amount of experience you have does not correlate with your expertise, but it in fact correlates with the amount of absolutely broken shit you've seen and your ability to deal with it - Yes you are. Not a single reason. Clearly no one. I am very surprised. This recruiter put in some very subtle irony and lots of effort (google "minutes in an hour")3 - im not a php fan. like NO. then i know this very rant runs php somewhere. having mixed feelings right now 😐7 - - - Programming courses in my country Translation: Learn PHP in only 2 and a half hours without any prior knowledge1 - - Oh the irony! Was checking my app's crashes on fabric.io iOS app.... And you know what.. Fabric crashed! 🤣 😄 😄 - - Isn't it all too ironic that the inefficient suck-ups get to keep their jobs all the time and get all the benefits? And how the hard-working people get sacked and disrespected all the time? It shouldn't be this way. At my job I bust myself every day mentally to come up with the best solutions and I don't get taken seriously, I just get shrugged at. Meanwhile, an undereducated friend of mine got his contract extended and got praise from the manager because all he does is do what they ask of him and he slaves away instead of coming up with ways to better the company. He's just a useless, mindless grunt. There's no value in him. Then another friend of mine is asocial and while he had been hoping to get a promotion for the five years he worked at the company, solving numerous important issues, this one younger kid who happened to be a suck-up who bumped up everybody's mood but was in fact as intellectually useless as a rock, he got promoted to team lead in two months. That lackeys and lazy people get more respect than authentic and well-meaning people. What an ugly world we live in..9 - - I've left my MacBook to technical assistance for the third time.
I bought it in December (the touchbar model, on day one, it arrived in December). I paid a lot for it and since then I got a broken key on my keyboard and a faulty display. Now I got my battery swollen. Fucking Apple. At least I'm happy with the OS and everything when it's hardware-faults free. Oh yeah and I switched to MacBook for the construction quality... Bitter irony. I hope this is the last fucking time, damn.7 - - The industry is sometimes sad and hilarious at the same time. There was a townhall at my workplace and our country head was talking about all the new tech we were working on. Now he is a good business person but I doubt him as a tech guy. And then he went on ranting about AI and ML and how they are going to change the software landscape and how developer as a profession will become obsolete. He said the technology will reach up to a point where we no longer need to write code to build software. Obviously, I couldn't digest it and confronted him the moment after the event. Me: so why do you think writing code will become outdated? Him: it's just that we will be able to create a technology through which we can simply command a machine to build a software. Me: oh. But someone needs to tell the machine how to do it right? Him: yes. We have to train the machine to act on these commands. Me: and do you know how you "train" these machines? Him: umm... Me: by writing code.2 - That moment you realize you're starting your shiny new feature by searching the Wayback Machine (web.archive.org) ... because the company you're integrating with no longer exists.2 - - I hate society for multiple reasons. Stay safe guys. "And after the gunman was shot dead by police, they reportedly remarked on the nice top they were wearing." - Company mail today: If only I got paid to point out the ironies, I would live well, very well indeed! - "Learn coding to make more money" ads are the funniest things ever.
Like some underskilled graphic designer must've sat with his shitty laptop to design on a pirated software for a meagre wage, a poster that talked about how to earn more money. - - - I wonder if IBM is aware of the irony in the fact that their application server is literally past tense? #was - - It's Friday night and I should keep preparing for an online tech test I will be taking tomorrow. But I want to just relax and watch my Amazon Prime subscription which I haven't had time for since I've been preparing all week... The test is for Amazon.3 - Android studio kept nagging on me to update to v3.3.1 for a month, upgraded yesterday, and today it wants to upgrade to 3.3.2 T_T Irony at its best lol - - Attempted to install MetaTrader 5 with wine on linux, loving the irony of "... please install using Window 7 ... trading requires maximum security" bahaha - Why does Google hire Chinese developers although Google is blocked in China ? Coz someone gotta copy other company’s tech.3 - -) - - As a Java developer, I’m used to being verbose. Probably a little bit too used to being verbose. My Literature teacher reflected on how verbose all my writing is, and I could not help but laugh at the irony. - - The customer service dept at Koss Headphones sent me an adapter gratis for my Pro 4/AA headphones so I could listen to loud rock and roll on my pc. I've been using Koss exclusively since I rec'd a pair for Christmas in 1971. Despite the natural deterioration in sound quality on a PC, I found I could hear more on certain Rolling Stones soundboards [the ones in question are Philadelphia 1972 and New York City a week later]. So I penned a rather whimsical email to Michael Koss, who actually replied with a letter, the kind we used to lick stamps for and put in a mailbox. OK, that PC died. And the HP I have somehow has a really loose jack so the whole mechanism will slide out if you move the least little bit. 
It happened so often I became shell-shocked about listening to loud music on my headphones at night when my nabes were sleeping, because I didn't want to wake anybody up. Finally, after too much jiggling, the end bit of the adapter got stuck in the headphone jack. Koss sent me another adapter gratis. Last night, I got out my headphones, removed the new adapter from its envelope, and inserted tweezers into the jack to pull the broken-off bit out. Except the broken-off bit slid deeper into the jack. On my own, I have been able to rig the PC so I can use the speakers. And a friend who can remove the bit stuck in the jack will be over in a couple of days. I went online and googled the methods others have used to remove broken-off bits. That was worth the keystrokes! In any case, I just wanted to say something about the irony of expecting the problem to be over and then having a few days more to live with the broken bit. - This has nothing to do wiv developing stuff, which this site was created for. I just wanted to make a short public statement, and there really isn't any place else to say it without the idea that some oik would infantilize it and make fun. It goes under the heading of something like, "Personal Irony: I'm Not Codependent, I'm Just Trying to Help [Myself]!" In 2016 I created a playlist that included REM's "Let Me In," Michael Stipe's song to Kurt Cobain. And "Head Down" and "Black Hole Sun," by Soundgarden. I have a good singing voice; I think it's a baritone. But those notes at the end of BHS, you know, "Won't you come?" When you sing it, you pronounce the lyric: WOAN CHOO CU-UH-UHM, the "UH-" dropping an octave into "UHM." It's particular to my range that dropping that note requires discipline and concentration. And even then, I'd say I've sung it 100 times and nailed it to my satisfaction maybe twice. Anyway, I had these two songs as a playlist in my media player. I listened to them and sang along as quietly as I could, it being four a.m.
here in Seattle. And as the final notes of BHS fragmented and skipped back into eternity, I felt like total shit. Not at all normal for me to personally feel the loss of an entertainer, but at that moment I did feel sad. That's it. Thanks for reading this odd little collection of words. - Just learned the vue.js framework, which I think is really awesome - and now I gotta learn and work with WordPress for my current project. Oh the irony... Hope WordPress isn't just as horrible as people on here make me think :( - Another manifestation of the irony of life: I have a few intricate questions on StackOverflow, with a couple of upvotes. I have even fewer answers, also with a couple of upvotes. But once, I posted a question out of pure sloppiness: I had forgotten to set up exception options in Visual Studio. That's my most upvoted SO question. - I need 750 ++'s to get my avatar a pair of slippers that I got for free after quitting my job, for which the shoes came free. #include "irony.much"; - new Suit() + new Developer() == interview(); The irony here is I usually wear casual to interviews. - Irony - (noun) Switching to a new framework to do more with less code. Spending an obscene amount of time and LOC retrofitting the rest of the code to work with the said framework. - I just spent two hours in a workshop today, where the guy organizing the workshop was steamrolling everyone, insisting we could not change anything in the software architecture. Topic of the workshop: Architecture vision. - PrestaShop irony: * Their modules have >3500 lines per class (e.g.
blocklayered.php) * Their controllers have >5000 lines and contain a LOT of HTML code inside AND when I tried to add my own module to their addons store, they declined it because: * I had an unused $key var in a foreach, and this is "bad practice" as I was told * In one hook I was returning 1 line of HTML code (I had to add a global JS var) and they told me I should put it into a separate template file -.-' - Why the hell do you write tests? If you can write 8h a day of perfectly working code, every day, without a mistake, you just waste time doing that... </irony> - Deleted part of my build template for no good f-ing reason while trying to back it up -_- Of all the irony - I sometimes feel like we are the godhead, and then sometimes feel like we are at the bottom of the 21st century corporate-ladder food chain. - "Want to learn more about building PROFESSIONAL sites with WordPress? Come to the free introduction class" Oh the irony...
https://devrant.com/search?term=irony
Greetings! I am a newbie in C++. Now I have a problem with my source code. What should I do? This is my problem: this program tries to retrieve data from a text file, named employee.txt, consisting of employee data such as: idnumber, employee name, salary grade. Ex. A1001,John Davidson,1 A1002,Andrew Jackson,2 I use getline to delimit the string... Now my problem is how am I going to assign the delimited strings to different arrays? Ex. A1001 and A1002 are assigned to an array named idnumber. John Davidson and Andrew Jackson are assigned to an array named emplname. Now can you give me the solution to my problem? Please help. Can you give me a sample code?

#include <fstream>
#include <iostream>
#include <string>
#include <cstdlib>
using namespace std;

int main()   // was "void main()": main must return int in standard C++
{
    string filename = "employee.txt";
    string line;                        // was missing: getline needs a target string
    fstream empfile;
    empfile.open(filename.c_str());
    if (empfile.fail())
    {
        cout << "Unable to open file: " << filename << endl;
        exit(1);
    }
    while (empfile.good())
    {
        getline(empfile, line, ','); // Problem here... I want the delimited string
                                     // to be assigned to different arrays.
    }
    return 0;
}

This post has been edited by no2pencil: 20 March 2012 - 07:59 PM Reason for edit:: Added code tags
http://www.dreamincode.net/forums/topic/271838-problem-in-c-about-delimited-strings-assigned-to-diff-arrays/
It’s been a while since I’ve had a technically focused blog post, so I’m rectifying that today. Lately I’ve been doing a lot of work with Event Tracing for Windows (henceforth called “ETW” for brevity’s sake). For the spelunker who wants to see the OS in operation, or for the developer trying to pin down exactly what happened, ETW provides you a ton of useful information. Unfortunately, working with ETW isn’t as simple as it could be. There are quite a few concepts to wrap your head around, and it’s quite easy to get lost in the weeds. What I’m doing here is providing just a high-level tour of the essential concepts. If you want to play with ETW yourself, you’ll obviously need to refer to the MSDN documentation. At heart, ETW is a high-performance logging mechanism that is usable from both user and kernel mode. There are APIs for producing events as well as consuming events. The OS and various components such as IIS define quite a number of useful event categories, and you can create your own custom events. The ETW API has been around for a number of years, so it’s implemented in unmanaged code. As I write this, there are no shipping managed libraries to make working with it simpler. Internally, various groups here are looking at the best way to make ETW a first-class citizen in the managed world. In my mind, ETW subdivides into three parts:
- Producing events
- Consuming events
- Discovering the format of events
For this blog entry we’ll look at the discovery portion, since this is the fastest way of seeing for yourself what sort of information ETW can provide to you. If you’re still interested, you can read future blog entries on producing and consuming events. Let’s start with a very brief overview of events. ETW events are logged by providers which are registered with the system. Each provider has a descriptive name and is uniquely identified to the system by a GUID.
Some typical provider names are:
- HttpEtwTrace
- AspNetTrace
- MSNT_SystemTrace
- ACPITrace
A given provider emits one or more events. Each event has its own GUID and a descriptive name. Typical event names include:
- HttpRequest
- AspNetTraceEvent
- Process
- Thread
- Registry
So far, so good. But here’s where it gets interesting. In addition to a GUID, each event has an integer EventType field. A given event usually has multiple EventType values. Think of each unique EventType value as a derivation from a base class. For instance, AspNetTraceEvent has several dozen different EventType values, including:
- AspNetStart = 1
- AspNetStop = 2
- AspNetAppDomainEnter = 7
- AspPageInit = 21
Put another way, an event’s GUID indicates the general category of the event (e.g., an HTTP request), while the EventType field tells you exactly what the event represents (e.g., an app domain being entered, or a page being initialized). If you were to turn on the HttpEtwTrace provider and then examine the logged events, you’d see potentially hundreds of HttpRequest events for a single request for a web page. Only by examining the EventType field would you be able to infer exactly what each event represents. Each distinct event (as identified by a GUID and EventType value) has a binary data format which can be interpreted. All events, no matter which provider they came from, begin with a standard header which includes fields like a process ID, a thread ID, and a timestamp. After the header, different events are free to put whatever additional data they’d like. For an ASP.NET event, this might include the URL being requested or a connection ID. I admit that this is somewhat confusing at first. To make matters even more complex, different providers categorize their events in different ways. It’s a spectrum, really.
On one end you might have a single event GUID with many EventTypes; on the other end you could have a different event GUID for every action you log, and effectively ignore the EventType field. As we’ll soon see, event providers cover all parts of the spectrum. The HttpEtwTrace provider uses only one GUID for all its events and multiple EventTypes. The MSNT_SystemTrace provider has 18 different event GUIDs, with each GUID having roughly four EventType values. Finally, SQL Server has hundreds of event GUIDs, with each GUID using only a single EventType.
Describing Events
Another interesting piece of the ETW story is how you can programmatically discover the layout of the fields which follow the standard event header. That is, you can query the system for the names and data types of the optional fields in an event. To do so requires you to descend a bit into WMI land. WMI is the acronym for Windows Management Instrumentation. WMI has fairly extensive capabilities, but of interest here is that WMI has an object hierarchy that represents many different aspects of the system. One particular branch of the WMI object hierarchy contains information about ETW events. Nothing requires an event provider to describe its events in the WMI object hierarchy. If you’re writing an event provider and don’t care if anybody else can interpret your events, there’s no need to describe them in the WMI data. The event consumption API will hand you a pointer to the event, and it’s up to you to know how the data fields are encoded. Let’s see how we can learn what providers and events are registered in the WMI hierarchy. The easiest way to do this is with CIM Studio, which is part of the WMI SDK. From this point forward, I’ll assume that you’ve downloaded and installed the WMI SDK. First, start up “WMI CIM Studio”, which is hosted inside Internet Explorer. If you’re running a newer OS such as XP SP2, you may get the warning that IE has blocked active content. If so, allow IE to show the content.
You should then get a dialog box prompting you to “connect to a namespace”. The default is “root\CIMV2”. You’ll need to change that to “root\wmi”, then hit “OK”. Another dialog should appear, entitled “WMI CIM Studio Login”. I’m able to just hit “OK”, and shortly thereafter get the screen shown here: The left side of the window has a treeview hierarchy, while the right side shows the properties of the currently selected treeview node. In the treeview, locate the top-level object named “EventTrace”, and expand it: Depending on which OS version you’re running, and which software you have installed, you’ll see any number of sub-nodes. In the screenshot above, the sub-nodes are:
- HttpEtwTrace
- MSNT_SystemTrace
- ACPITrace
- AspNetTrace
Each of these sub-nodes corresponds to an event provider. With one of the providers selected, right-click in the right-hand properties pane, and select “Object Qualifiers…” You should get a dialog like this: The Guid field value is the event provider’s GUID that was registered with ETW. The Description field provides more info about the provider. In this case, the Description field indicates that this provider is the “Windows Kernel Trace”. You can now cancel out of that dialog. Next, expand one of the providers. For our example, expand the MSNT_SystemTrace node. You should see something like this: Each of these sub-nodes (e.g., EventTraceEvent, PageFault, UdpIp, etc…) is an event with a GUID associated with it. Highlight one of them (in this case, Image), right-click in the properties pane, and again select “Object Qualifiers…”. You should see something like this: Cancel out of the Object Qualifiers dialog, and then expand the “Image” node. It has one sub-node, named Image_Load. Looking at the right-hand properties pane you should see something like this: Notice at the top that there are fields named FileName, ImageBase, ImageSize and ProcessId.
These are fields that will be represented in the event’s optional data that appears after the standard header. Right-click in the properties pane again, select “Object Qualifiers…”, and you should see this: The crucial field here is the “EventType” field. In this instance, it’s 10. Thus, when you see a raw ETW event blob with the GUID specified by the Image object, and an EventType of 10, you’ll know that it has the four fields (FileName, etc.) listed above. Another key point: in this case, the Image object had only one child. However, other events could have multiple children. For instance, if you look at the HttpRequest event from the HttpEtwTrace provider, you’d see this: There are seven children of the HttpRequest event. If you were to select each one of them, and view the Object Qualifiers, you’d see that they all have different EventType values. For instance, HttpReceiveRequest has an EventType of 1. To further complicate matters, one of these children might not specify an EventType directly. This is the case when the object is describing multiple events with the same data format, but with different EventTypes. For instance, select HttpSendComplete, and view its object qualifiers: Notice that the EventType and EventTypeName values are arrays. Parallel arrays, to be more precise. Clicking on the “Array” button for the EventTypeName value, you should see this: There are five separate values (end, CacheAndSend, etc…). Each of these names matches up with the entries in the EventType array: What’s the deal here? Essentially, these arrays allow for a more compact encoding of events that share the same layout. Whew! I’ve covered a lot of ground here, and not even in that much detail. However, you should now know enough to start exploring the ETW hierarchy that’s on your system to see what sort of tracing goodies are available to you. In future blog posts, I’ll talk more about creating ETW traces and consuming the resultant data. come on matt – gives us more.
how can we consume these events?? dominick
The short answer is the ProcessTrace API. I’m planning on doing a subsequent blog entry on this topic.
Yes, the content of the PDC will be great and varied and wonderful… One topic in particular that I’ve…
https://blogs.msdn.microsoft.com/matt_pietrek/2004/09/16/intro-to-event-tracing-for-windows/
25 October 2004 00:01 [Source: ACN] SAUDI Arabia has not lost its charm when it comes to attracting investments, despite the threat of terrorism and possible social and political turmoil. Investors’ confidence in the kingdom can be explained only by the strong project economics on offer. The security threat came closest to petrochemical players in the kingdom on 1 May, when gunmen attacked the office of ABB Lummus at a petrochemical site in Yanbu and killed five of its expatriate employees. The attack prompted ABB to evacuate its expatriate staff and their families, delaying the completion of a cracker upgrade for Yanpet, a joint venture by Sabic and ExxonMobil. And then, about a month later, terrorist attacks targeted at western staff working for local oil companies, this time in Khobar, left 22 people dead. Yet, a few days after the Yanbu attack, Sumitomo Chemical announced that it had signed an MoU with Saudi Aramco for a US$4.3bn refinery-upgrade and petrochemicals complex in Rabigh – a testament, perhaps, to the Japanese major’s confidence in the kingdom. Following in the footsteps of Sumitomo were Mitsubishi Gas Chemical (MGC), Mitsubishi Corp, and Mitsui & Co. MGC and Mitsui & Co said they were evaluating their next methanol investments in the country. Mitsubishi Corp is studying building a cracker through its joint venture, Sharq. ‘The [terrorism] threat is very real. I knew people who were murdered in the kingdom,’ says Philip Leighton, a consultant with Jacobs Consultancy. ‘However, the threat is containable and business still goes on. ‘Certainly, the number of western expatriates will be further reduced when employment contracts expire. Replacement by staff from the sub-continent and from Muslim countries will take place.’ The attacks have made western contractors step up security measures or relocate their staff to neighbouring countries. But projects have not been delayed, except the one by Yanpet.
Industry observers do not think security concerns will result in delays to projects or cancellations. In fact, some projects are expected to come onstream ahead of schedule. These include Saudi International Petrochemical Co’s (Sipchem’s) butanediol and methanol projects, for which the contractors are Aker Kvaerner and Chiyoda. ‘I do not believe these [security] issues will result in project cancellations, as the investment decisions are based on the economics of the project,’ says Rizwan Sheikh, a consultant with Nexant ChemSystems. ‘Yes, the hurdle rate for investment appraisal that foreign companies adopt will reflect security concerns. But the sheer cost competitiveness is expected to allow companies to meet any revised benchmarks. Delays can happen for a whole host of reasons that are typical of this industry.’ Investors concede that security is a concern that must be balanced against economics. ‘Security is an important issue, which we hope will be resolved,’ says a Sumitomo source. ‘We will not go ahead with the project if the issue becomes unmanageable. We need a strong guarantee that our investment and staff will be safe.’ The Japanese major and Saudi Aramco have appointed Foster Wheeler to conduct front-end engineering design work on their project in Rabigh. A feasibility study is due to be completed at end-2005. The Rabigh project could lure Sumitomo away from Singapore, where it was originally considering building its next cracker. The Sumitomo source explains that Saudi Arabia is attractive because of its competitiveness, which is a result of its low-cost gas supplies and ready infrastructure. And Saudi Aramco is a good joint-venture partner with a good human-resource pool. Another company keen on Saudi Arabia is Mitsui, which is mulling building a second methanol plant in the country. The company says it needs more time to evaluate the investment. A key issue here is market demand and not any security threat.
Mitsui has already formed a joint venture with Sipchem and other Japanese companies to build a methanol plant in Al-Jubail. The 2900 tonne/day (957 000 tonne/year) project is due onstream later this year. ‘The government has stepped up safety measures, and that has given foreigners a better sense of security,’ says a source from Mitsui & Co who returned to Japan recently after a two-year stay there. The company is debating whether the market can support one more methanol plant, given the number of projects that have been planned in Saudi Arabia, Iran, Qatar and Oman. All these countries offer similar economics for methanol investors. ‘There would be more than 10m tonne/year of new methanol capacity in the Middle East by 2008,’ the source says. ‘The demand for methanol is growing, but we have to consider the reduced demand from MTBE, which is being phased out in the US.’
Gas pricing
While Saudi Arabia offers the most competitive ethane prices in the region, some companies are wondering how long they will continue to enjoy this cost advantage. Saudi Aramco has maintained that its cost of production for new gas supplies – including exploration and production costs – is higher than US$0.75/mmbtu, the current ethane price in Saudi Arabia. One section of the industry believes that ethane prices will remain at this level for a few years, but there is no consensus on how far they will rise after that. Furthermore, gas supplies are not secured on long-term agreements in Saudi Arabia. Therefore, investors are more vulnerable to price hikes. On the supply side, Saudi Aramco has plans to add 3bn–5bn bbl/day of crude-oil production capacity this decade. This will be linked to the development of new gas-separation plants, which means production of associated gas will also increase. Even if ethane prices were raised, they would still be competitive with prices in the region and in the world. Ethane is priced at US$1.25/mmbtu in Iran.
‘If the cost went up to US$1.25/mmbtu [in Saudi Arabia], the increased feedstock cost would reduce the profitability of the crackers by a couple of percentage points. But for an integrated complex with derivatives, it would probably be by only one percentage point,’ says Andrew Pettman, a consultant with CMAI. As for extra gas supply, three projects have been planned in the kingdom. They are the expansion of the Hawiyah gas-processing plant, the Hawiyah straddle plant, and the Ju’aymah fourth NGL fractionation train. All three are due to come onstream by 2008, in time to meet the ethane demand from the next wave of cracker projects. Another issue is the imminent end of the current price mechanism for liquefied petroleum gas (LPG) supply. In a bid to join the World Trade Organization (WTO), Saudi Arabia has agreed to end a pricing mechanism that allows local companies in the kingdom to buy propane, butane, and LPG at a discount to the Saudi naphtha price (fob Ras Tanurah). The mechanism, which was introduced in 2002, is supposed to be in place until 2011. The future of propane dehydrogenation (PDH)-PP and mixed-feed cracker projects in the kingdom is in doubt once the pricing mechanism on LPG ends. Most of the planned crackers in the kingdom are based on a mix of ethane/LPG feedstocks. ‘There appears to be a deliberate policy of providing ethane/LPG feedstocks to new projects to maximise the production of petrochemicals by blending the more attractive feed (ethane) with the less attractive LPG feed,’ Leighton notes. ‘It is unlikely that 100% ethane crackers will be built in Saudi Arabia in the future. But attractively priced ethane will offset some of the relative cost disadvantage of higher-priced propane and butane for cracking,’ Sheikh points out. Higher consumption of LPG (butane and propane) in the kingdom means lower exports. It would also enhance the usage of propylene, which can be converted to a number of high-value products.
‘Therefore, the overall scheme (lower LPG exports) can make a lot of economic sense,’ he adds. But investors are not complaining. Mixed-feed crackers offer more propylene and the chance to build a wider derivatives slate. The Sumitomo source confirms that the company is interested in producing polypropylene and propylene oxide at Rabigh. And the company is not worried that changes to the pricing mechanism will affect the feasibility of its cracker project. ‘As part of the refinery-upgrade programme, we will be building a fluid catalytic cracking (FCC) unit from which we can get propane,’ the source says. ‘The propane will not be extracted from natural gas and therefore its price will not be determined by the government.’ Whether this reasoning is correct was not immediately clear, as the current price mechanism does not specify if it applies to propane from all sources. Another investor, Project Management & Development (PMD), which is planning an ethane/butane cracker project, wants to produce phenol and bisphenol A besides the usual ethylene derivatives. PMD is just one of the many private companies coming up in Saudi Arabia. Besides crackers, these companies have also planned PDH-PP projects, the viability of which is in doubt once the price advantage on propane ends in 2011. ‘Beyond 2011, without the price subsidy, these [PDH-PP] projects will lose their competitiveness compared with Asian naphtha-based production,’ Leighton says. However, Sheikh thinks it is unlikely that Saudi Arabia will raise its domestic prices for LPG after 2011. ‘They [domestic prices] will be more of a business decision for Saudi Arabia, rather than what the WTO dictates.’ Meanwhile, investors in PDH-PP projects are not holding back. There are three PDH-PP projects in the kingdom. A new facility was brought onstream earlier this year. Some observers point out that it is human resources, and not security or gas issues, that are likely to delay projects in the kingdom.
The head scientist of Dia Research Martech Inc, Makoto Takeda, thinks the Rabigh project could be delayed because of difficulties in integrating the oil refinery and the gas-based petrochemicals complex and in finding local skilled technicians and workers. The project would have to rely partly on Japanese technicians and engineers, he believes. ‘There is little experience in such an integration in Saudi Arabia. The project needs high-quality management. So, we think it will take time,’ Takeda says. A simple solution would be to import skilled manpower, which is what Saudi Arabia has been doing for a number of years. But this time the problem might be difficult to solve, as Saudi Arabia faces a high level of unemployment. The country has a population of around 24m, but it employs more than 6m expatriates. Furthermore, of the local graduates in the past 20 years, only 5% were engineers and nearly 66.5% majored in non-scientific subjects, such as Islamic studies and social studies.
Rise of its neighbours
Despite the various uncertainties, Saudi Arabia is still the first place to come to mind when investors consider the Middle East. Besides cheap feedstocks, the kingdom also offers soft loans through the Saudi Industrial Development Fund, which can finance up to 50% of a project’s investment cost. There is also the advantage of good infrastructure in Al-Jubail and Yanbu, the two major petrochemical hubs. But Saudi Arabia’s neighbours cannot be ignored. Qatar surprised industry players when it announced that it would conduct a feasibility study with ExxonMobil on an ethane cracker in Ras Laffan. ExxonMobil also signed an agreement to build a US$7bn gas-to-liquids facility, also in Ras Laffan. Much has been written about Iran, which is actively seeking foreign investments despite facing possible UN sanctions because of its nuclear programme. Iran has had an eventful year. In February, the reformists lost their majority to conservatives.
In August, hardline lawmakers rejected parts of a proposed reform plan on concerns of foreign dominance of the mainly state-run economy. As a result, foreign oil companies lost their automatic right to explore the oil they had discovered, making it mandatory for them to participate in a state-run tender. And coming up in November, the UN’s nuclear watchdog is expected to provide an assessment of Iran’s nuclear programme, which would determine whether Iran would be subject to sanctions. Understandably, companies are taking a harder look at the country’s investment environment. ‘The question of where to invest will always be viewed in terms of alternatives. The Saudis realise this and are attempting to tackle issues that will make investment opportunities in the kingdom as competitive as those in Qatar and Iran,’ Sheikh says. ‘Saudi Arabia is in a different league compared with other GCC countries, especially when it comes to natural gas liquids (NGLs), which are the main driver for petrochemicals. Qatar and Iran may be rich in natural gas, but the use of these reserves for petrochemicals rests on the development of other natural gas-based projects (LNG, methanol, gas-to-liquids) before ethane and propane can be extracted to support petrochemicals,’ he adds. Iran and Qatar hold mainly dry gas, which has been stripped of LPG and ethane and contains mainly methane. On the other hand, Saudi Arabia processes wet gas to produce dry gas and NGLs. The dry gas is used for power generation and industrial projects, while the NGLs are separated into ethane, propane, and C5+ streams for use as petrochemical feedstocks. While most in the industry are optimistic about Saudi Arabia’s prospects, others caution that the country will find it difficult to compete with its neighbours for investments in the coming years.
A source from a foreign company says: ‘There will continue to be opportunities in Saudi Arabia for only another one to two years.’ He also points out that the country’s top player, Sabic, is increasingly developing projects on its own or only with existing joint-venture partners. These include cracker projects in Al-Jubail through Jubail United Petrochemical and Sharq – a joint venture by Sabic and a Japanese consortium led by Mitsubishi Corp – and in Yanbu through Yanpet, a joint venture by Sabic and ExxonMobil. In the future, investment opportunities in the kingdom will be available by partnering local private companies, such as Tasnee Petrochemical, Sipchem, and PMD. The companies are seeking foreign equity for their cracker and methanol projects in Al-Jubail. ‘But it is different for Iran. The country is going ahead with its projects and at the same time inviting foreign companies to participate in them. Iran will be a good country for investment in five to ten years. It faces complicated issues, but I believe the country can resolve them,’ the source from the foreign company adds. ‘It is difficult to convince a board to invest in Iran now,’ he agrees. ‘Personally, if I had US$100m to invest, I would put US$20m in Iran and US$80m in Saudi Arabia.’ Given the imminent rise of Iran and other Middle Eastern countries, will Saudi Arabia continue to remain the favoured investment destination? This question was circulated among analysts and petrochemical companies. The consensus is that the kingdom will continue to offer attractive investment opportunities in the short term. Many companies are observing Iran closely, and would start assessing opportunities only when the uncertainties have eased.
Company | Product | Capacity (tonne/year) | Location
Ar-Razi | methanol (No 5) | 1.65m | Al-Jubail
Saudi International Petrochemical Co | methanol (No 2) | 1m | Al-Jubail
  | acetic acid | 460 000 |
  | vinyl acetate monomer | 300 000 |
Tasnee Petrochemical/partners | methanol | 1.8m | Al-Jubail
  | acetic acid | 460 000 |
  | vinyl acetate monomer | 300 000 |
Saudi Aramco/Sumitomo Chemical | ethylene (ethane/propane-based) | 1.3m | Rabigh
  | linear low-density polyethylene* | 750 000-900 000 |
  | polypropylene* | 700 000 |
  | other derivatives | - |
Sabic | ethylene (ethane-based) | 1.3m | Yanbu
  | polyethylene* | 800 000 |
  | monoethylene glycol* | 700 000 |
  | polypropylene* | 350 000 |
Jubail United Petrochemical Co | monoethylene glycol | 630 000 | Al-Jubail
Sharq | ethylene (ethane/propane-based) | 1.2m | Al-Jubail
  | high-density polyethylene* | 400 000 |
  | linear low-density polyethylene* | 400 000 |
  | monoethylene glycol* | 700 000 |
  | propylene* | 200 000 |
Tasnee Petrochemical/Sahara Petrochemical/Saudi International Petrochemical Co | ethylene (ethane/propane-based) | 1m | Al-Jubail
  | polyethylene* | 800 000 |
  | polypropylene* (x) | 250 000 # 700 000T |
Project Management & Development | ethylene (ethane/butane-based) | 1.35m | Al-Jubail
  | polypropylene* | 540 000 |
  | ethylene oxide* | 530 000 |
  | monoethylene glycol* | 475 000 |
  | polyethylene* | 970 000 |
  | bisphenol A* | 300 000 |
  | ethanolamines* | 100 000 |
Sahara Petrochemical/Basell | propane dehydrogenation-polypropylene | 450 000 | Al-Jubail
National Polypropylene Co | propane dehydrogenation-polypropylene | 450 000 | Al-Jubail
National Petrochemical Industrial Co | propane dehydrogenation-polypropylene | 420 000 | Yanbu
Sadaf | styrene | 600 000 | Al-Jubail
Jubail Chevron Phillips Chemical | styrene | 700 000 | Al-Jubail
* part of the complex
# expansion of Saudi Polyolefins Co's plant
SOURCE: ACN
http://www.icis.com/Articles/2004/10/26/622396/succumbing-to-saudis-charms.html
For binlog version 4. More... #include <control_events.h> For binlog version 4. This event is saved by threads which read it, as they need it for future use (to decode the ordinary events). The Post-Header has six components: Format_description_event 1st constructor. This constructor can be used to create the event to write to the binary log (when the server starts or on FLUSH LOGS). This will be used to initialize the post_header_len, for binlog version 4. The layout of the Format_description_event data part is as follows:

+=====================================+
| event  | binlog_version   19 : 2    | = 4
| data   +----------------------------+
|        | server_version   21 : 50   |
|        +----------------------------+
|        | create_timestamp 71 : 4    |
|        +----------------------------+
|        | header_length    75 : 1    |
|        +----------------------------+
|        | post-header      76 : n    | = array of n bytes, one byte
|        | lengths for all            |   per event type that the
|        | event types                |   server knows about
+=====================================+

The problem with this constructor is that the fixed header may have a length different from this version, but we don't know this length as we have not read the Format_description_log_event which says it, yet. This length is in the post-header of the event, but we don't know where the post-header starts. So this type of event HAS to: I (Guilhem) chose the 2nd solution. Rotate has the same constraint (because it is sent before Format_description_log_event). This method populates the array server_version_split, which is then used for lookups to find whether the server which created this event has some known bug. This method is used to find out the version of the server that originated the current FD instance. This method checks the MySQL version to determine whether checksums may be present in the events contained in the binary log.
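As an illustration of the byte layout above, here is a minimal Python sketch that decodes the data part of a Format_description_event. The offsets in the layout are counted from the start of the event, so the 19-byte common header is subtracted; this is a reader for the on-disk layout only, not the server's C++ implementation, and the helper name is made up.

```python
import struct

def parse_fde_data(data):
    """Decode the data part of a Format_description_event.

    `data` begins right after the 19-byte common event header, so the
    layout offsets 19/21/71/75/76 become 0/2/52/56/57 here.
    """
    binlog_version = struct.unpack_from("<H", data, 0)[0]       # 2 bytes, = 4
    server_version = data[2:52].split(b"\x00", 1)[0].decode()   # 50 bytes, NUL-padded
    create_timestamp = struct.unpack_from("<I", data, 52)[0]    # 4 bytes
    header_length = data[56]                                    # 1 byte
    # One byte per event type the server knows about: the length of
    # that event type's post-header.
    post_header_len = list(data[57:])
    return {
        "binlog_version": binlog_version,
        "server_version": server_version,
        "create_timestamp": create_timestamp,
        "header_length": header_length,
        "post_header_len": post_header_len,
    }
```

Binlog integers are little-endian, hence the "<" prefixes in the struct format strings.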
The size of the fixed header which all events have (for binlogs written by this version, this is equal to LOG_EVENT_HEADER_LEN), except FORMAT_DESCRIPTION_EVENT and ROTATE_EVENT (those have a header of size LOG_EVENT_MINIMAL_HEADER_LEN). If this event is at the start of the first binary log since server startup, 'created' should be the timestamp when the event (and the binary log) was created. In the other case (i.e. this event is at the start of a binary log created by FLUSH LOGS or automatic rotation), 'created' should be 0. This "trick" is used by MySQL >=4.0.14 slaves to know whether they must drop stale temporary tables and whether they should abort unfinished transactions. Note that when 'created'!=0, it is always equal to the event's timestamp; indeed Format_description_event is written only in binlog.cc where the first constructor below is called, in which 'created' is set to 'when'. So in fact 'created' is a useless variable. When it is 0 we can read the actual value from timestamp ('when') and when it is non-zero we can read the same value from timestamp ('when'). Conclusion:
https://dev.mysql.com/doc/dev/mysql-server/latest/classbinary__log_1_1Format__description__event.html
Refreshes a menu that's being shown. Updates the extension's menu items in the menu that the browser is currently showing, including changes that have been made since the menu was shown. Has no effect if the menu is not being shown. Rebuilding a shown menu is an expensive operation; invoke this method only when necessary. This would typically be called from inside a menus.onShown event handler, after the handler has made any updates to the menu. Firefox makes this function available via the contextMenus namespace as well as the menus namespace. This is an asynchronous function that returns a Promise. Syntax browser.menus.refresh() Parameters None. Return value A Promise that is fulfilled with no arguments. Browser compatibility The compatibility table in this page is generated from structured data. If you'd like to contribute to the data, please check out and send us a pull request. Examples This example listens for the context menu to be shown over a link, then updates the openLabelledId menu item with the link's hostname:

function updateMenuItem(linkHostname) {
  browser.menus.update(openLabelledId, {
    title: `Open (${linkHostname})`
  });
  browser.menus.refresh();
}

browser.menus.onShown.addListener(info => {
  if (!info.linkUrl) {
    return;
  }
  let linkElement = document.createElement("a");
  linkElement.href = info.linkUrl;
  updateMenuItem(linkElement.hostname);
});
https://developer.mozilla.org/id/docs/Mozilla/Add-ons/WebExtensions/API/menus/refresh
Adding a Domain to a Zone Updated: May 9, 2008 Applies To: Windows Server 2008 In networks that deploy Active Directory Domain Services (AD DS), the Active Directory domain namespace and the Domain Name System (DNS) namespace are usually managed together using Active Directory tools. In some cases, however, you might find it necessary to add a domain to a zone that is not part of your Active Directory namespace. For example, you might want to create a DNS domain that contains alias (CNAME) resource records that point to hosts in different Active Directory domains. In the case of a zone that is not integrated with AD DS, if you do not want to delegate authority for a subdomain to another server, you can extend the DNS namespace of a zone by adding a domain to the zone. To complete this task, perform the following procedure:
https://technet.microsoft.com/en-us/library/cc816923(v=ws.10).aspx
Class holding a unique name for a variable. Is attached to varRefs as a persistent attribute. This is used to assign absolute names to VarRefExp nodes during VariableRenaming. Definition at line 16 of file uniqueNameTraversal.h. #include <uniqueNameTraversal.h> Constructs the attribute with value thisNode. The key will consist of only the current node. Definition at line 41 of file uniqueNameTraversal.h. Constructs the attribute using the prefix vector and thisNode. The key will first be copied from the prefix value, and then the thisNode value will be appended. Definition at line 54 of file uniqueNameTraversal.h. Copy the attribute. Definition at line 64 of file uniqueNameTraversal.h. Get a constant reference to the name. Definition at line 79 of file uniqueNameTraversal.h. Set the value of the name. Definition at line 88 of file uniqueNameTraversal.h. Get the string representing this uniqueName. Definition at line 107 of file uniqueNameTraversal.h.
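To make the documented interface concrete, here is a rough stand-alone C++ sketch of such an attribute. It is an illustration only: the real ROSE class stores AST nodes (plain strings stand in for them here), and the ':' separator used in the string form is an assumption.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Illustrative stand-in for ssa_private::VarUniqueName.
class VarUniqueName {
public:
    // Key consisting of only the current node.
    explicit VarUniqueName(const std::string& thisNode)
        : key(1, thisNode) {}

    // Key copied from the prefix vector, with thisNode appended.
    VarUniqueName(const std::vector<std::string>& prefix,
                  const std::string& thisNode)
        : key(prefix) {
        key.push_back(thisNode);
    }

    // Constant reference to the name.
    const std::vector<std::string>& getKey() const { return key; }

    // Set the value of the name.
    void setKey(const std::vector<std::string>& newKey) { key = newKey; }

    // String representing this unique name.
    std::string getNameString() const {
        std::string out;
        for (std::size_t i = 0; i < key.size(); ++i) {
            if (i > 0) out += ':';
            out += key[i];
        }
        return out;
    }

private:
    std::vector<std::string> key;
};
```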
http://rosecompiler.org/ROSE_HTML_Reference/classssa__private_1_1VarUniqueName.html
Hello, I'm trying to implement a custom view renderer for my WebView and I can't figure out how to get a large webpage to automatically scale to fit the viewport.

using System;
using Xamarin.Forms.Platform.iOS;
using UIKit;
using Xamarin.Forms;
using ButtonCode;
using ButtonCode.iOS;
using Foundation;

[assembly: ExportRenderer(typeof(CustomWebView), typeof(CustomWebViewRenderer))]
namespace ButtonCode.iOS
{
    public partial class CustomWebViewRenderer : WebViewRenderer
    {
        protected override void OnElementChanged(VisualElementChangedEventArgs e)
        {
            base.OnElementChanged(e);
            if (e.OldElement == null)
            {
                this.ScalesPageToFit = true;
                //this.ScrollView.ScrollEnabled = false;
            }
        }
    }
}

The code executes without error or warning. To be sure that the web view object is being accessed I tried another property, this.ScrollView.ScrollEnabled, and setting it to false had the expected effect. The code should use if (Control != null) instead of checking for OldElement == null. If that doesn't work, attach an example project. Also, be aware of this bug. Hi Adam, Here is my custom class: and my custom renderer: I tried setting "if (e.OldElement==null)" to "if (Control != null)" and the keyword Control was red (not sure what to do now). I'm pretty sure that the code is being executed within my statement because if I uncomment the commented line in there the scrolling action stops. I will need to apply a bookmarklet to my webview using the eval function or some other way. But in this example it only builds the script string; the eval function is commented out. If I comment out the whole navigated event function it still behaves the same. Emphasis added. A code snippet isn't very useful. Make a small test project and zip up the whole thing so I can just run it and debug. Even a small project is a monster to upload. I've done some tests in Swift and the problem partially persists there also. So it looks like this is a non-Xamarin problem I'm having.
In Swift, sometimes the page scales as expected; other times it doesn't (different URLs). Tip for uploading projects for future reference: If you do that then the project directory should be just a few hundred Kb. I've found the culprit. The webpage I need to display has a meta tag for viewport and this is causing my sizing problems. I've found that in Xcode I can get the page size to view perfectly with this meta tag statement: <meta name="viewport" content="width=600,user-scalable=no" /> On Xamarin it almost works but the left side is getting cut off by about 25 pixels. If I figure out a solution I'll post it here. The issue really threw me off because on Android I didn't have to consider this meta tag, it just worked out of the box. It is looking like this is an impossible thing to fix. I may have to jettison this whole Xamarin project. I'm just going to have to rewrite about 16 web pages to fix this, that's all. Would be nice if it just worked like Xcode does. There is nothing inherently different about a Xamarin UIWebView and one you get with Xcode. You just need to figure out how they're being used differently. Could you attach example projects for both?
https://forums.xamarin.com/discussion/comment/140963
I'm trying to compile a program in C on OS X 10.9 with GCC 4.9 (experimental). For some reason, I'm getting the following error at compile time: gcc: fatal error: stdio.h: No such file or directory

#include <stdio.h>

int main(int argc, const char *argv[]) {
    printf("Hello, world!");
    return 0;
}

gcc -o ~/hello ~/hello.c

I had this problem too (encountered through MacPorts compilers). Previous versions of Xcode would let you install command line tools through Xcode/Preferences, but Xcode 5 doesn't give a command line tools option in the GUI, so I assumed it was automatically included now. Try running this command: xcode-select --install
https://codedump.io/share/g3eUUFa7R6LF/1/gcc-fatal-error-stdioh-no-such-file-or-directory
Protocol compliant web socket server with some awesome extras. Version: 0.3.5 Master build: Wesley is a protocol compliant web socket server with some awesome extras. $ npm install wesley var server = ;server; Sometimes it's necessary to maintain logical pools of clients (AKA namespaces, rooms, topics, etc). var server = ;server; The default pooling strategy separates clients based on the path they connect to.

ws://localhost:3000/            # Server and / events
ws://localhost:3000/pool        # Server and /pool events
ws://localhost:3000/pool/child  # Server and /pool/child events

This behaviour can be overridden by passing in your own handler. The callback expects to be called with the name of the pool to join. var {;};var server = ;server; By default, a Wesley client will emit data for every message sent from the client. You can entirely replace this behaviour at your leisure. var {;};var server = ;server; This also means you could handle more complicated messages than simple strings. var {var data = JSON;;};var server = ;server; Wesley clients inherit from EventEmitter2, so even more complex events can be listened to. var {var data = JSON;;};var server = ;server; In much the same way as handling inbound messages, you can also handle outbound messages. var {var packed = JSONstringifytype:type body:message;;};var server = ;server; As Wesley is a web socket server, it uses a socket transport by default. You can create custom transports by extending wesley.Transport. It is the job of the transport to proxy events listened to by the client or server. Please see the documentation for more detail (Coming soon). Once you have a client, using it is simple. var wesley = ;var server =// An array of transports;server;
https://www.npmjs.com/package/wesley
Finding the name of the program from which a Python module is running can be trickier than it would seem at first, and investigating the reasons led to some interesting experiments. A couple of weeks ago at the OpenStack Folsom Summit, Mark McClain pointed out an interesting code snippet he had discovered in the Nova sources: nova/utils.py: 339 script_dir = os.path.dirname(inspect.stack()[-1][1]) The code is part of the logic to find a configuration file that lives in a directory relative to where the application startup script is located. It looks at the call stack to find the main program, and picks the filename out of the stack details. The code seems to be taken from a response to a StackOverflow question, and when I saw it I thought it looked like a case of someone going to more trouble than was needed to get the information. Mark had a similar reaction, and asked if I knew of a simpler way to determine the program name. Similar examples with inspect.stack() appear in four places in the Nova source code (at least as of today). All of them are either building filenames relative to the location of the original “main” program, or are returning the name of that program to be used to build a path to another file (such as a log file or other program). Those are all good reasons to be careful about the location and name of the main program, but none explain why the obvious solution isn’t good enough. I assumed that if the OpenStack developers were looking at stack frames there must have been a reason. I decided to examine the original code and spend a little time deciphering what it is doing, and especially to see if there were cases where it did not work as desired (so I could justify a patch). The Stack The call to inspect.stack() retrieves the Python interpreter stack frames for the current thread.
The return value is a list with information about the calling function in position 0 and the “top” of the stack at the end of the list. Each item in the list is a tuple containing:
- the stack frame data structure
- the filename for the code being run in that frame
- the line number within the file
- the co_name member of the code object from the frame, giving the function or method name being executed
- the source lines for that bit of code, when available
- an index into the list of source lines showing the actual source line for the frame
show_stack.py The information is intended to be used for generating tracebacks or by tools like pdb when debugging an application (although pdb has its own implementation). To answer the question “Which program am I running in?” the filename is the most interesting piece of data. One obvious issue with these results is that the filename in the stack frame is relative to the startup directory of the application. It could lead to an incorrect path if the process has changed its working directory between startup and checking the stack. But there is another mode where looking at the top of the stack produces completely invalid results. The -m option to the interpreter triggers the runpy module, which takes the module name specified and executes it like a main program. As the stack printout above illustrates, runpy is then at the top of the stack, so the “main” part of our local module is several levels down from the top. That means the simple one-liner is not always going to produce the right results. Why the Obvious Solution Fails Now that I knew there were ways to get the wrong results by looking at the stack, the next question was whether there was another way to find the program name that was simpler, more efficient, and especially more correct. The simplest solution is to look at the command line arguments passed through sys.argv.
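A minimal sketch of reading the program name out of sys.argv (an illustration of the idea, not one of the original listing files):

```python
import sys

def program_name():
    # sys.argv[0] normally holds the path used to start the program;
    # it may be relative or absolute depending on how it was invoked.
    return sys.argv[0]

print("Type :", type(sys.argv).__name__)
print("Name :", program_name())
```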
argv.py Normally, the first element in sys.argv is the script that was run as the main program. The value always points to the same file, although the method of invoking it may cause the value to fluctuate between a relative and full path. As this example demonstrates, when a script is run directly or passed as an argument to the interpreter, sys.argv contains a relative path to the script file. Using -m we see the full path, so looking at the command line arguments is more robust for that case. However, we cannot depend on -m being used so we aren't guaranteed to get the extra details. Using import The next alternative I considered was probing the main program module myself. Every module has a special property, __file__, which holds the path to the file from which the module was loaded. To access the main program module from within Python, you import a specially named module __main__. To test this method, I created a main program that loads another module: import_main_app.py And the second module imports __main__ and prints the file it was loaded from. import_main.py Looking at the __main__ module always pointed to the actual main program module, but it did not always produce a full path. This makes sense, because the filename for a module that goes into the stack frame comes from the module itself. Wandering Down the Garden Path After I found such a simple way to reliably retrieve the program name, I spent a while thinking about the motivation of the person who decided that looking at stack frames was the best solution. I came up with two hypotheses. First, it is entirely possible that they did not know about importing __main__. It isn't the sort of thing one needs to do very often, and I don't even remember where I learned about doing it (or why, because I'm pretty sure I've never used the feature in production code for any reason).
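The import __main__ technique described above can be sketched in a few lines (a reconstruction of the idea, not the original import_main.py listing):

```python
import __main__

def main_program_file():
    # __file__ on the specially named __main__ module is the path the
    # main program was loaded from; it can be missing (for example in
    # an embedded interpreter), hence the default.
    return getattr(__main__, "__file__", None)

print("Main program:", main_program_file())
```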
That seems like the most plausible reason, but the other idea I had was that for some reason it was very important to have a relatively tamper-proof value – something that could not be overwritten accidentally. This new idea merited further investigation, so I worked back through the methods of accessing the program name to determine which, if any, met the new criteria. I did not need to experiment with sys.argv to know it was mutable. The arguments are saved in a normal list object, and can be modified quite easily, as demonstrated here. argv_modify.py All normal list operations are supported, so replacing the program name is a simple assignment statement. Because sys.argv is a list, it is also susceptible to having values removed by pop(), remove(), or a slice assignment gone awry. $ python argv_modify.py Type : <type 'list'> Before: ['argv_modify.py'] After : ['wrong'] The __file__ attribute of a module is a string, which is not itself mutable, but the contents can be replaced by assigning a new value to the attribute. import_modify.py This is less likely to happen by accident, so it seems somewhat safer. Nonetheless, changing it is easy. $ python import_modify.py Before: import_modify.py After : wrong That leaves the stack frame. Down the Rabbit Hole As described above, the return value of inspect.stack() is a list of tuples. The list is computed each time the function is called, so it was unlikely that one part of a program would accidentally modify it. The key word there is accidentally, but even a malicious program would have to go to a bit of effort to return fake stack data. stack_modify1.py The filename actually appears in two places in the data returned by inspect.stack(). The first location is in the tuple that is part of the list returned as the stack itself. The second is in the code object of the stack frame within that same tuple (frame.f_code.co_filename). 
$ python stack_modify1.py From stack: wrong From frame: stack_modify1.py It turned out to be more challenging to change the code object. Replacing the filename in the tuple was relatively easy, and would be sufficient for code that trusted the stack contents returned by inspect.stack(). It turned out to be more challenging to change the code object. For C Python, the code class is implemented in C as part of the set of objects used internally by the interpreter. Objects/codeobject.c The data members of a code object are all defined as READONLY, which means you cannot modify them from within Python code directly. code_modify_fail.py Attempting to change a read-only property causes a TypeError. Instead of changing the code object itself, I would have to replace it with another object. The reference to the code object is accessed through the frame object, so in order to insert my code object into the stack frame I would need to modify the frame. Frame objects are also immutable, however, so that meant creating a fake frame to replace the original value. Unfortunately, it is not possible to instantiate code or frame objects from within Python, so I ended up having to create classes to mimic the originals. stack_modify2.py I stole the idea of using namedtuple as a convenient way to have a class with named attributes but no real methods from inspect, which uses it to define a Traceback class. $ python stack_modify2.py From stack: wrong From frame: wrong Replacing the frame and code objects worked well for accessing the “code” object directly, but failed when I tried to use inspect.getframeinfo() because there is an explicit type check with a TypeError near the beginning of getframeinfo() (see line 16 below). The solution was to replace getframeinfo() with a version that skips the check. Unfortunately, getframeinfo() uses getfile(), which performs a similar check, so that function needed to be replaced, too. 
stack_modify3.py Now the caller can use inspect.getframeinfo() (really my replacement function) and see the modified filename in the return value. After reviewing inspect.py one more time to see if I needed to replace any other functions, I realized that a better solution was possible. The implementation of inspect.stack() is very small, since it calls inspect.getouterframes() to actually build the list of frames. The seed frame passed to getouterframes() comes from sys._getframe(). def stack(context=1): """Return a list of records for the stack above the caller's frame.""" return getouterframes(sys._getframe(1), context) The rest of the stack is derived from the first frame returned by _getframe() using the f_back attribute to link from one frame to the next. If I modified getouterframes() instead of inspect.stack(), then I could ensure that my fake frame information was inserted at the beginning of the stack, and all of the rest of the inspect functions would honor it. stack_modify4.py The customized versions of getframeinfo() and getfile() are still required to avoid exceptions caused by the type checking. $ python stack_modify4.py From stack : wrong From code in frame: wrong From frame info : wrong Enough of That At this point I have proven to myself that while it is unlikely that anyone would bother to do it in a real program (and they would certainly not do it by accident) it is possible to intercept the introspection calls and insert bogus information to mislead a program trying to discover information about itself. This implementation does not work to subvert pdb, because it does not use inspect. Probably because it predates inspect, pdb has its own implementation of a stack building function, which could be replaced using the same technique as what was done above. This investigation led me to several conclusions. First, I still don’t know why the original code is looking at the stack to discover the program name. 
I should ask on the OpenStack mailing list, but in the mean time I had fun experimenting while researching the question. Second, given that looking at __main__.__file__ produces a value at least as correct as looking at the stack in all cases, and more correct when a program is launched using the -m flag, it seems like the solution with best combination of reliability and simplicity. A patch may be in order. And finally, monkey-patching can drive you to excesses, madness, or both. Updates 8 May – Updated styles around embedded source files and added a label for each.
https://doughellmann.com/blog/2012/04/30/determining-the-name-of-a-process-from-python/
I'm trying to help with a WorkBench development written in Python. Is there a way to reload python code that belongs to the WB without the need to restart the whole FreeCAD application? Switching to another WB and back didn't help. Thanks.

Gui.doCommand("import " + modul)
Gui.doCommand("import " + self.lmod)
Gui.doCommand("reload(" + self.lmod + ")")
Gui.doCommand(self.command)

triplus wrote: ↑ Wed Jul 18, 2018 9:43 am
Usually doing:

reload(SomeModule)

works just fine. That is when you changed code in SomeModule.py and it was already imported.

I remember that doesn't work out of the box with py3...

This works with Python 3:

reload(MyModule)
FreeCADGui.addCommand('MyModule', MyModule())

import someModule
from importlib import reload
reload(someModule)
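The Python 3 behaviour can be checked outside FreeCAD with a self-contained snippet; the module name and temporary paths below are made up for the demonstration:

```python
import importlib
import pathlib
import sys
import tempfile

sys.dont_write_bytecode = True  # keep the demo free of stale bytecode caches

# Create a throwaway module on disk and import it.
tmpdir = tempfile.mkdtemp()
module_file = pathlib.Path(tmpdir) / "wb_demo.py"
module_file.write_text("VALUE = 1\n")
sys.path.insert(0, tmpdir)

import wb_demo
print(wb_demo.VALUE)  # prints 1

# Edit the source, then reload: the module object is re-executed in
# place, so existing references see the new code.
module_file.write_text("VALUE = 2\n")
importlib.reload(wb_demo)
print(wb_demo.VALUE)  # prints 2
```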
https://forum.freecadweb.org/viewtopic.php?style=4&f=10&t=29805
2 Apr 07:18 2013 Re: releasing Bio-Root Carnë Draug <carandraug+dev <at> gmail.com> 2013-04-02 05:18:25 GMT On 2 April 2013 05:54, Fields, Christopher J <cjfields <at> illinois.edu> wrote: > On Apr 1, 2013, at 11:18 PM, Carnë Draug <carandraug+dev <at> gmail.com> wrote: >> I have prepared the Bio-Root repo so a release can be made with >> >> dzil release >> git push --tags >> >> IF the development version of the BioPerl pluginbundles are installed. >> Note that Bioperl's dist zilla and pod weaver pluginbundles are still >> not available on CPAN. Could someone please upload them to CPAN or >> give me co-maintenance? > > You have co-maint on Dist::Zilla::PluginBundle::BioPerl. Note that the Pod::Weaver module wasn't in the original submission; we'll need you to transfer primary maint to BIOPERLML when you can (you should still be co-maint on it). Done. Version 0.20 of the pluginbundles have been released. BIOPERLML already has primary maintenance of the Pod Weaver plugin namespace. >> What to do about Bio::Root::Version? I didn't change its code, but I'm >> guessing it should be edited to do something so it keeps backward >> compatibility with this release. With the multiple distributions, this >> makes no sense since each of them may be a different version at any >> given time. Plus, BioPerl's distzilla pluginbundle uses the version >> plugin which already does it for each module. > > It will be deprecated for most use cases for the reasons you mention, but yes it will need to be fixed to deal with things for the time being, at least until we can get Dist::Zilla running for the main bioperl repo. By the way, I just noticed that Bio::Root::Build does not get the version from Bio::Root::Version, it has its own value hardcoded. Carnë
http://permalink.gmane.org/gmane.comp.lang.perl.bio.general/26569
In the Open Event Android we have fragments for the schedule and speakers which have the option to sort the list. The Schedule Fragment has the option to sort by Title, Tracks and Start Time. The Speakers Fragment has the option to sort by Name, Organization and Country. If the user prefers to sort by name then the app should always sort the list by name whenever the user uses it. For this we need to store the user's sorting preference. Other parts of the app, like the Live feed and About fragment, also need to store the event id, Facebook page id/name etc. In Android there is a SharedPreferences class to store key-value pairs in app-specific storage. To store data in SharedPreferences we would otherwise need to create a SharedPreferences object in each activity and fragment. In this post I explain how to create a SharedPreferences Util which can be used to store key-value pairs from all over the app. 1. Create SharedPreferencesUtil Class The first step is to create the SharedPreferencesUtil.java file which will contain a static SharedPreferences object. public class SharedPreferencesUtil { ... } 2. Create static objects Create static SharedPreferences and SharedPreferences.Editor objects in the SharedPreferencesUtil.java file. private static SharedPreferences sharedPreferences; private static SharedPreferences.Editor editor; 3. Initialize objects Now after creating the objects, initialize them in the static block. The code inside a static block is executed only once: the first time you make an object of that class or the first time you access a static member of that class. static { sharedPreferences = OpenEventApp.getAppContext().getSharedPreferences(ConstantStrings.FOSS_PREFS, Context.MODE_PRIVATE); editor = sharedPreferences.edit(); } Here make sure to use the Application context to avoid a memory leak. The getSharedPreferences() method takes two arguments: the name of the shared preference and the mode.
Here we are using the Context.MODE_PRIVATE file creation mode, where the created file can only be accessed by the calling application. 4. Add methods Now create static methods to store data so that we can use these methods directly from other activities or classes. Here I am only adding methods for integers; you can add more methods for String, long, boolean etc. public static void putInt(String key, int value) { editor.putInt(key, value).apply(); } public static int getInt(String key, int defaultValue) { return sharedPreferences.getInt(key, defaultValue); } 5. Use SharedPreferencesUtil class Now we are ready to use this Util class to store key-value pairs in SharedPreferences. SharedPreferencesUtil.putInt(ConstantStrings.PREF_SORT, sortType); Here the putInt() method takes two arguments: the key and the value. To get the stored value, use the getInt() method. SharedPreferencesUtil.getInt(ConstantStrings.PREF_SORT, 0); To know more about how I solved this issue in the Open Event project, visit this link.
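The same static-util pattern can be exercised outside Android. The sketch below swaps Android's SharedPreferences for the plain-JVM java.util.prefs.Preferences API so it can run anywhere; the class and node names are made up for the illustration, and on Android you would keep the SharedPreferences version shown above.

```java
import java.util.prefs.Preferences;

// Plain-JVM stand-in for the SharedPreferencesUtil pattern:
// a static wrapper around a key-value store.
final class PrefsUtil {
    private static final Preferences prefs =
            Preferences.userRoot().node("open_event_demo");

    private PrefsUtil() {
        // Utility class; no instances.
    }

    static void putInt(String key, int value) {
        prefs.putInt(key, value);
    }

    static int getInt(String key, int defaultValue) {
        return prefs.getInt(key, defaultValue);
    }
}
```

As with the Android version, callers never touch the underlying store directly; they only go through the static methods.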
https://blog.fossasia.org/creating-sharedpreferences-util-in-open-event-android/
You’re a successful serial entrepreneur. How’d you get into Perl? I started my professional career as an Engineer at a TV station. As the web started to get popular in the early 90’s I started picking up web development for the TV station, and then eventually went to work at an ISP as a web developer and system administrator. That’s when I first picked up Perl, as it was already installed on the DEC Unix boxes they were running. I realized how easy it was to use it to automate a lot of my job (deploying sites, running backups, collecting statistics, munging logs), and a little web stuff here and there too (processing forms, writing message boards and polls). Since then I’ve used several other languages (PHP, Java, and Ruby mostly), but I always come back to Perl because it solves the most problems for me with the littlest amount of fuss. What’s your business background? I have no formal business training, but I’ve worked at lots of companies big and small, and either started or helped start about a dozen companies now, four of which I still own. So I’ve really picked up a lot of my business expertise through trial and error, and through watching the successes and failures of other businesses. Some people follow sports and can quote you the scores and statistics of their favorite teams. For me, I prefer to watch businesses and business leaders. And when I read for entertainment, it’s almost never fiction. Instead I like to read about things that can give me ideas to apply. For instance, I just finished “The Viral Loop,” which covers viral marketing history from Tupperware through Facebook. I know all this sounds pretty nerdy/geeky/dorky, but so be it. How did you decide to do a browser-based game? Actually long before I built WebGUI, in the CGI era, I built one of the very first web-based RPG systems. It was called Survival of the Fittest.
And back about that time I had the idea for The Lacuna Expanse (it was called Star Games back then), but the technology wasn’t there to pull off what I really wanted to do. Then last year (released July 14, 2009) I built a new business called The Game Crafter. It is a web to print company, where people design board games and card games using their web browser (plus some offline image editing) and when they’re done, they can order a copy for themselves, or put it up in our online store to sell to potential customers. It’s sort of like Lulu or CafePress, but for traditional board and card games. Here we are just over a year later and that business has really taken off, with over 1,500 people making custom games, and 70% of customers returning for more than one order. I should mention that TGC is built with 100% pure Perl as well. About the time that The Game Crafter launched, another business that I had created four years earlier started actually making some good money. That business is CMS Matrix, and yes it’s 100% pure Perl as well. After about 6 months of seeing how well The Game Crafter and CMS Matrix were doing, and knowing that I had a solid team in place to keep WebGUI marching forward, my business partners and I decided we should take a chance with yet another business. But this time we decided we wanted to tackle something much more ambitious and risky. One of my business partners reminded me of the Star Games idea. And there’s hardly anything more risky than making a video game. It has a large up front cost of both time and money, and video games pretty much either make a lot of money, or none at all. There’s not much of a middle of the road. With Star Games as our foundation, we started designing game mechanics. We didn’t want to build yet another war game (too many of those) so we settled on espionage as our conflict mechanism. And until WoW and The Sims came around, there was one game that dominated the landscape as far as revenue goes, SimCity. 
So we knew the game had to have a city building element. And everything else was stuff we either made up, or ideas we borrowed from our favorite games.

What did you have to invent and what did you reuse?

Luckily CPAN came to the rescue, as it has on basically every Perl project I've ever tackled. So I didn't have to reinvent the wheel on basically any foundational level. Here's the list of Perl modules I used:

- Data::Validate::Email
- Text::CSV_XS
- Log::Log4perl
- UUID::Tiny
- DateTime::Format::MySQL
- DBIx::Class::TimeStamp
- JSON::XS
- JSON
- Config::JSON
- Starman
- JSON::RPC::Dispatcher
- Log::Any::Adapter
- Log::Any::Adapter::Log4perl
- String::Random
- List::Util::WeightedChoice
- List::Util
- List::MoreUtils
- DateTime
- Regexp::Common
- Pod::Simple::HTML
- File::Copy
- DateTime::Format::Duration
- XML::FeedPP
- SOAP::Amazon::S3
- DBD::mysql
- DBIx::Class
- JSON::Any
- DBIx::Class::InflateColumn::Serializer
- DBIx::Class::DynamicSubclass
- Memcached::libmemcached
- Server::Starter
- IO::Socket::SSL
- Net::Server::SS::PreFork
- Facebook::Graph
- File::Path
- namespace::autoclean
- Clone
- Plack::Middleware::CrossOrigin
- Net::Amazon::S3

When I first started development I was convinced that to be massively parallel I was going to have to go with an async server like Coro or POE, and a NoSQL database. I quickly realized that writing this system to be completely async was going to be a nightmare that would take more than double the time. Part of the problem was that while I was familiar with developing async applications, I had only done it on a small scale in the past. The other problem was that I kept running into modules I wanted to use that weren't async compatible. Ultimately I ditched the idea of going async within the first month. Unfortunately I wasn't so quick to ditch the idea of NoSQL. I started with MongoDB and CouchDB, but had trouble compiling them with the Perl bindings.
I planned on hosting on Amazon at that point, so I decided to give SimpleDB a go. The downside there was that there were no decent Perl bindings for SimpleDB that weren't entirely bare bones. So with that I created SimpleDB::Class (based loosely on DBIx::Class). The module works great. Unfortunately SimpleDB doesn't. It's super slow. So four months into development, with a whimper, I had to ditch my beloved SimpleDB::Class module. I'm glad I did. Development has been much faster and easier since then, and a good amount of thanks goes to DBIx::Class for that.

WWW::Facebook::API has been largely abandoned by its author. He told me he doesn't have time to maintain it anymore. And I was having a hard time getting it to work anyway. As luck would have it, Facebook had just announced their Graph API, so I decided to take on that project and build a Perl wrapper around it. And Facebook::Graph was born. This enabled me to let Facebook users single sign on into the web site and the game, and also interact with their accounts.

About the only other non-game piece of any consequence that I had to invent was JSON::RPC::Dispatcher, which is a Plack-enabled web service generator. There are some other JSON-RPC modules on CPAN, but for one reason or another I found them all completely insufficient, mostly for one or more of four reasons:

- It didn't support JSON-RPC 2.0.
- Its documentation was so poor that I couldn't make it work.
- It made me write a ton of code to simply expose a web service.
- It wasn't PSGI/Plack compatible.

With JSON::RPC::Dispatcher, I can expose object-oriented code as web services with a single line of code.

I'm not very happy with the Perl modules that are out there for S3 right now. We're using a combination of SOAP::Amazon::S3 and Net::Amazon::S3, and neither is particularly good, at least for our purposes. They both work, but only for fairly basic purposes.
Sometime in the near future I'll either take on a massive overhaul of one of those modules, or write my own from scratch. Which remains to be seen, depending on how open the authors of those modules are to patches.

What did you need from SimpleDB besides more speed?

What I was hoping I'd get out of SimpleDB was three things: massive parallelism, hierarchical data structure storage, and schema-less storage to make my upgrade path easier. It provided all of those things. What I hadn't anticipated was all the limitations it would place on me. Speed was just the nail in the coffin. It also puts pretty harsh limits on the amount of data per record, the amount of data returned per result set, and the complexity of queries. In addition, like most NoSQL databases it's eventually consistent, which brings its own host of problems. I had worked my way around pretty much all of that, and then finally hit the performance bottleneck. At that point I knew I had to make a change, because I wouldn't be able to make up the difference in parallelism. For example, in order to process functions on a building, I would need planet data and empire data in addition to the building data. But I wouldn't know what empire or planet to fetch until I fetched the building, which meant I'd have to do serial processing. And I couldn't cache all the data for the empire and the planet in the building (or vice versa) because of the limits on the amount of data allowed per record. My two options were: 1) bring everything forward into memcached, which has its own problems because I'd have to create an indexing system; or 2) move to a relational database.

When did you start using Plack?

I first heard about Plack late last year when one of the contributors to WebGUI did an initial test implementation (PlebGUI) to see if we could use it in WebGUI. After seeing how cool it was I knew that WebGUI 8 had to have it, and all my future projects would also use it.
But you're right, in general I'm not averse to using new technologies. The problem with "tried and true" is that it's often "old and stale". So from my perspective, there are just as many risks in choosing proven technologies as there are in new ones. That doesn't mean you can blindly adopt new technologies, but you should be on the lookout for them. The rest of the risk/reward decision comes from my business experience: change is inevitable. If you try something and it doesn't work out, so what? Sure, it's going to cost you some time/money, but maintaining antiquated systems costs a lot of money too. These days the pace of technology moves too quickly to rest on tried and true alone.

Are you comfortable with the risk that you'll run into maturity problems and can patch them or work around them, or do you think that despite their relative youth, they're very capable for your purposes?

Here's the thing. When you're running a technology-based business, the only thing you can plan for is that things will change. Let's use scalability as an example. If you try to build a system that will infinitely scale before you have any users, then you'll spend a lot of time and money on something that may never get any users. At the same time, if you put no time into planning for some amount of scaling, then you won't have enough breathing room to keep your users happy while you refactor. Likewise you can't anticipate all the features you'll ever need, because users' desires are hard to predict. And because of this, at some point you'll likely make a fundamental flaw in your architecture that will require at least a partial rewrite of your software. This is very much a business decision. Most developers I know cry when I say that, because most believe that it's both possible and desirable to reach design/implementation nirvana. The fact is that users don't care if your APIs are perfect. They care if your software does what they need it to do.
From a business perspective it's often more profitable to build something quickly and then continually refactor or even rewrite it to match demand. I say all of this to make the point that if a particular new technology doesn't work out like we expected it to, then we'll simply replace it in the next iteration. If you go into the project with that mentality you'll likely be more successful.

What's the basic architecture of The Lacuna Expanse?

The basic software architecture looks like this: per-server configurable game rules go into various Config::JSON config files. DBIx::Class and MySQL handle all of the game data storage and querying. Memcached sits off to the side and handles lock contentions, limit contentions, session management, and other server coordination communication. Unfortunately, not much can actually be cached due to the dynamic nature of the game, unless I was willing to cache basically everything, which I'm not. And all the static stuff, like images, JavaScript, and CSS files, gets served up from CloudFront. We also push our RSS feeds and other semi-static game content out to S3. The game engine itself is basically an MVC setup built with Moose. DBIx::Class and Config::JSON act as the model. Some custom Moose objects tied to Memcached::libmemcached act as the controller, handling session management, privileges, etc. And JSON::RPC::Dispatcher acts as the view. On the server side, any of the server nodes can be set up in either a clustered or load-balanced formation to handle traffic growth. And finally, we use GitHub as our deployment system. We use its service hooks feature to trigger pushing new content to S3/CloudFront and code to the game servers.

How many people are working on this?

Six, plus a bunch of play testers: one artist, Ryan Knope, plus a part-time helper, Keegan Runde, who is the son of one of the other developers; one on iPhone development, Kevin Runde.
Two on web client development, John Rozeske and Graham Knop. Myself on server development. And myself and my business partner Jamie Vrbsky on game mechanics development. We started development in January 2010, and officially launched the game on October 4, 2010. Now that we've launched, I've brought in one of my other business partners, Tavis Parker, to help out with marketing the game. And we're still pushing forward on new releases. We hope to have our first expansion for the game, called "Enemy Within", out sometime in Q1 2011.

How do you manage your development process?

We're very loose on management. We basically have a strategy meeting every two weeks at a local pub, where we discuss whatever needs to be discussed in person. Beyond that we have a play testers mailing list, a developers mailing list, and a defect tracking system that we use internally. And other than that we communicate through Skype and email. We manage all of our code and content through various public and private GitHub repositories. We share documents and art mockups using Dropbox. I publish all the JSON-RPC APIs using POD (nicely formatted using Pod::Simple::HTML) to our play testers server, which is what the client guys develop against. And then, once vetted and implemented by our client guys, the APIs are pushed out to the public server. What little project management and coordination we need is handled by me emailing back and forth.

How often are your releases? What's the breakdown between bugfixes and new development?

For The Lacuna Expanse we're doing releases about 4 or 5 times a week. 1 or 2 of them contain some new features, and the rest are bug fixes. However, TLE is very new. In the beginning it's very important to react quickly to your users' needs, because they often find bugs you didn't, or have feature ideas that seem almost fundamental after you hear them, but that you never thought of during the development process.
By the end of the year our development cycle will slow down quite a bit, probably to once per week. For WebGUI we release approximately once per week, and those releases are primarily bug fixes. We generally do about 2 major releases per year that are primarily new features. For The Game Crafter we've stopped doing releases, except for the occasional bug fix, because we're coming into the holiday season. Starting in January we're going to get going on a complete rewrite (about a six month process), which will quadruple our feature set, give us about a 700% performance gain, and allow us to scale with the growing demands our customers are placing on us.

Do you recruit existing Perl developers in your area, work with people you've worked with before, or hire good people and train them?

All of the above. When you're looking to hire someone, you should hire the best person you can afford to hire. In our case this means we've decided to design our businesses around telecommuting. We still maintain a small office, we still hire locally when we can, and we even provide incentives for our employees to move to Madison if they so desire, but we never throw out a resume based upon location, what schools they attended, or whether or not they've happened to work with the particular modules and technologies we're working with.

I keep an eye on one of the alliances in the game populated by a lot of well-known Perl developers, and they seem to be pushing the limits of the public API. I know you made this API public for a reason (and increased the call limit), but do you foresee an endgame where the best client automation wins, or do you expect that the game strategy will be malleable such that clever players have an edge over automation?

Automation has its advantages, certainly. It's great for getting the mundane crap out of the way. Most games spend a lot of time and effort doing everything they can to prevent people from automating their game.
The trouble is that you end up wasting a lot of effort trying to stop smart people from being smart. If they really want to automate something, they will find a way around your restrictions. It's a never-ending arms race. In our case we decided to embrace these people. Better and better tools will come along, and ultimately that means these people are adding features to our game that we didn't have to write, because eventually the tools will get simple enough that your average Joe can run them. As far as the game is concerned, it doesn't make a bit of difference whether you use a tool to push a button in the game or push the button yourself. You still have to follow the same rules. It takes a certain amount of time to happen, you have to spend a certain amount of resources, etc. When it comes right down to it, someone still has to make all of the important decisions, and that's not likely going to be a tool anytime soon. You have to decide what buildings to upgrade in what order, what ships to build, who to attack, how to defend, etc. And once the next expansion comes out, you'll have to work with your teammates to build a space station, enact laws, and defend your federation of planets. It will be very much a social endeavor.
https://www.perl.com/pub/2010/10/colonizing-the-lacuna-expanse-with-perl.html/
Things to be done before starting:

- Download Node.js
- Download Xcode (Mac only)
- Create a GitHub account
- Create a Vercel account

```shell
node -v
```

You can check the version of your Node. If it's not a recent one, download a new one.

```shell
sudo corepack enable
```

This will enable Yarn (I'm using a Mac).

```shell
yarn create next-app wine.likelion.com --typescript
# yarn create next-app { project name here } --typescript
# adding --typescript at the end means the project will use TypeScript
```

When it's all downloaded, check if it works:

```shell
cd wine.likelion.com   # move to the directory I will work in
yarn dev
```

Pages

I think I used a router when I used React, but there are pages in Next.js. If you make a directory (folder) or file inside the pages directory, it works like a route. This was the first page I created:

```tsx
import type { NextPage } from "next"; // NextPage type (because it's TypeScript, you should specify types)

const WinePage: NextPage = () => {
  return (
    <div>
      <h1>Wine</h1>
    </div>
  );
};

export default WinePage;
```

Documentation: Pages in Next.js

package.json

When you create a next-app, you get a package.json file in the directory. It's in JSON format and it records important metadata about a project which is required before publishing, and it also defines functional attributes of a project that are used to install dependencies, run scripts, and identify the entry point of the package.

- scripts: Scripts are a great way of automating tasks related to your package, such as simple build processes or development tools. Using the "scripts" field, you can define various scripts to be run as yarn run <script>, e.g. dev → yarn dev or build → yarn build.
  - dev: development mode, not optimized; it sometimes skips errors
  - build: production mode; it creates the product that will be deployed
  - start: starts the production server, used to test in a real environment (works only after yarn build)
  - lint: spelling and syntax check with ESLint
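The scripts described above come from the package.json that create-next-app generates. A typical default looks roughly like this (other fields such as dependencies and versions omitted):

```json
{
  "scripts": {
    "dev": "next dev",
    "build": "next build",
    "start": "next start",
    "lint": "next lint"
  }
}
```

Each entry maps a short name to the underlying next CLI command, which is why `yarn dev` and `yarn build` work out of the box.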
https://practicaldev-herokuapp-com.global.ssl.fastly.net/daaahailey/getting-started-with-nextjs-ni4
After I wrote quite a big article about the analysis of the Tizen OS code, I received a large number of questions concerning the percentage of false positives and the density of errors (how many errors PVS-Studio detects per 1000 lines of code). Apparently, my reasoning that it strongly depends on the project being analyzed and on the settings of the analyzer didn't seem sufficient. Therefore, I decided to provide specific figures by doing a more thorough investigation of one of the projects of the Tizen OS. I decided that it would be quite interesting to take the EFL Core Libraries, because one of its developers, Carsten Haitzler, took an active part in the discussion of my articles. I hope this article will prove to Carsten that PVS-Studio is a worthy tool. For those who missed the news: I recently wrote an open letter to the Tizen developers, followed by a monumental article, "27000 Errors in the Tizen Operating System". After that, there were several news posts on various resources and quite lively discussions. Here are some of them: I would like to express special gratitude to Carsten Haitzler once more for his attention to my posts and his active discussion of them. Various topics were raised; some of them are covered in more detail in the post "Tizen: summing up". However, there are two eternal questions which continue to haunt me. Those programmers who are well aware of the methodology of static analysis will agree with me that such generalized questions make no sense at all. It all depends on the project being analyzed. Asking such questions is like trying to measure the average temperature of all the patients in a hospital. So I'll give the answer using the example of a specific project. I chose the EFL Core Libraries. Firstly, this project is part of Tizen. Secondly, as I have already said, one of the developers is Carsten Haitzler, who will probably find these results interesting.
I could also check Enlightenment, but I didn't have enough energy for it; I feel that this article will already be rather long. The Enlightenment Foundation Libraries (EFL) are a set of graphics libraries that grew out of the development of Enlightenment, a window manager and Wayland compositor. To check the EFL Core Libraries, I used the recent code taken from the repository. It's worth mentioning that this project is checked by the Coverity static code analyzer. Here is a comment on this topic:

I will say that we take checking seriously. Coverity reports a bug rate of 0 for Enlightenment upstream (we've fixed all issues Coverity points out or dismissed them as false after taking a good look) and the bug rate for EFL is 0.04 issues per 1k lines of code, which is pretty small (finding issues is easy enough if the codebase is large). They are mostly not so big impacting things. Every release we do has our bug rates go down, and we tend to go through a bout of "fixing the issues" in the weeks prior to a release.

So, let's see what PVS-Studio can show us. After proper configuration, PVS-Studio issues 10-15% false positives when analyzing the EFL Core Libraries. The density of detectable errors in the EFL Core Libraries at this point is 0.71 errors per 1000 lines of code. At the moment of analysis, the EFL Core Libraries project has about 1 616 000 lines of code written in C and C++. 17.7% of them are comments. Thus, the number of code lines without comments is 1 330 000. After the first run I saw the following number of general analysis (GA) warnings: Of course, this is a bad result. That's why I don't like to present abstract measurements. Proper work requires configuring the analyzer, and this time I decided to spend some time on it. Almost the whole project is written in C, and as a result macros are widely used in it. They are the cause of most of the false positives.
I spent about 40 minutes on a quick review of the report and came up with the file efl_settings.txt. The file contains the necessary settings. To use them during project analysis, specify the following in the configuration file of the analyzer (for example, in PVS-Studio.cfg):

```
rules-config=/path/to/efl_settings.txt
```

The analyzer can then be run in the following way:

```
pvs-studio-analyzer analyze ... --cfg /path/to/PVS-Studio.cfg ...
```

or like this:

```
pvs-studio ... --cfg /path/to/PVS-Studio.cfg ...
```

depending on the way of integration. With the help of these settings I told the analyzer not to issue certain warnings for code lines containing the names of certain macros or expressions. I also disabled several diagnostics altogether. For example, I disabled V505. It's not great to use the alloca function in loops, but it's not a crucial error. I don't want to debate at length whether a certain warning is a false positive, so I thought it would be easier to disable some things. It should be noted that I reviewed and set up only the warnings of the first two certainty levels, and further on I will review only them. We aren't going to consider warnings of the low certainty level; at the very least, it would be irrational to start using the analyzer by reviewing warnings of this level. Only after sorting out the warnings of the first two levels can you take a look at the third and pick out the warnings that seem useful to you. The second run had the following results:
In total I got 189 +1186 = 1375 messages (High + Medium) with which I started working. After I reviewed these warnings, I suppose that the analyzer detected 950 fragments of code, containing errors. In other words, I found 950 fragments that require fixing. I will give more details about these errors in the next chapter. Let's evaluate the density of the detected errors. 950*1000/1330000 = about 0,71 errors per 1000 lines of code. Now, let's evaluate the percentage of false positives: ((1375-950) / 1375) * 100% = 30% Well, wait! In the beginning of the article there was a number 10-15% of false positives. Here it's 30%. Let me explain. So, reviewing the report of 1375 warnings, I came to the conclusion that 950 of them indicate errors. There were 425 warnings left. But not all these 425 warnings are false positives. There are a lot of messages, reviewing which it is impossible to say if there is an error or not. Let's consider one example of a message that I decided to skip. .... uint64_t callback_mask; .... static void _check_event_catcher_add(void *data, const Efl_Event *event) { .... Evas_Callback_Type type = EVAS_CALLBACK_LAST; .... else if ((type = _legacy_evas_callback_type(array[i].desc)) != EVAS_CALLBACK_LAST) { obj->callback_mask |= (1 << type); } .... } PVS-Studio warning: V629 Consider inspecting the '1 << type' expression. Bit shifting of the 32-bit value with a subsequent expansion to the 64-bit type. evas_callbacks.c 709 Let's take a closer look at this line: obj->callback_mask |= (1 << type); It is used to write 1 to the necessary bit of the variable callback_mask. Pay attention that the variable callback_mask is of 64-bit type. The statement (1 << type) has an int type, that's why you can change only the bits in the lower part of the variable. Bits [32-63] cannot be changed. To understand, if there is a bug or not, we need to understand what range of values may the function _legacy_evas_callback_type return. Can it return a value greater than 31? 
I don't know, so I skip this warning. Consider my situation: I see this code for the first time and have no idea what it's doing. In addition, hundreds of analyzer messages are still waiting for me. I just cannot deal with every case like this.

Comment by Carsten Haitzler. Above - actually is a bug that's a result of an optimization that sets bits to decide if it should bother trying to map new event types to old ones (we're refactoring huge chunks of our internals around a new object system, and so we have to do this to retain compatibility, but like with any refactoring... stuff happens). Yes - it wraps the bitshift and does the extra work of a whole bunch of ifs, because the same bits in the mask are now re-used for 2 events due to the wraparound. As such this doesn't lead to a bug, just slightly less micro-optimization when set, as now that bit means "it has an event callback for type A OR B", not just "type A"... the following code actually does the complete check/mapping. It surely was not intended to wrap, so this was a catch, but the way it's used means it was actually pretty harmless.

Among the remaining 425 warnings, there are some that point to errors. For now I just skipped them. With regular use of PVS-Studio, it would be possible to continue refining the settings. As I have already said, I spent just 40 minutes on the settings, but that doesn't mean I did everything I could. The number of false positives can be reduced even further by disabling the diagnostics for certain programming constructs. After a careful review of the remaining warnings and additional settings, the false positive rate will be 10-15%. This is a good result.

Now let's take a look at the bugs I found. I cannot describe all 950 errors, so I will limit myself to describing a couple of warnings of each type; for the remaining warnings, I will provide a list or a separate file. The reader can also look at all the warnings by opening the report file: zip-archive with the report.
Note that I have left only the general analysis warnings of the high and medium certainty levels. I reviewed the report in Windows using the PVS-Studio Standalone utility. In Linux you can use the Plog Converter utility, which converts the report into one of several formats. Further on, to view the reports, you can use QtCreator, Vim/gVim, GNU Emacs, or LibreOffice Calc. The documentation "How to run PVS-Studio on Linux" gives a detailed description of this process (see "Filtering and viewing the analyzer report").

The V501 diagnostic detected just one error, but a very nice one. The error is in a comparison function, which echoes the topic of a recent article, "Evil in the comparison functions".

```c
static int
_ephysics_body_evas_stacking_sort_cb(const void *d1, const void *d2)
{
   const EPhysics_Body_Evas_Stacking *stacking1, *stacking2;

   stacking1 = (const EPhysics_Body_Evas_Stacking *)d1;
   stacking2 = (const EPhysics_Body_Evas_Stacking *)d2;

   if (!stacking1) return 1;
   if (!stacking2) return -1;

   if (stacking1->stacking < stacking2->stacking) return -1;
   if (stacking2->stacking > stacking2->stacking) return 1;
   return 0;
}
```

PVS-Studio warning: V501 There are identical sub-expressions 'stacking2->stacking' to the left and to the right of the '>' operator. ephysics_body.cpp 450

A typo. The last comparison should be as follows:

```c
if (stacking1->stacking > stacking2->stacking) return 1;
```

First, let's take a look at the definition of the Eina_Array structure:

```c
typedef struct _Eina_Array Eina_Array;
struct _Eina_Array
{
   int          version;
   void       **data;
   unsigned int total;
   unsigned int count;
   unsigned int step;
   Eina_Magic   __magic;
};
```

There is no need to look at it very closely. It's just a structure with some fields.
Now let's look at the definition of the Eina_Accessor_Array structure:

```c
typedef struct _Eina_Accessor_Array Eina_Accessor_Array;
struct _Eina_Accessor_Array
{
   Eina_Accessor     accessor;
   const Eina_Array *array;
   Eina_Magic        __magic;
};
```

Note that a pointer to the Eina_Array structure is stored in the Eina_Accessor_Array structure. Apart from this, the two structures are in no way connected with each other and have different sizes. Now, here is the code fragment that was detected by the analyzer, and which I cannot understand:

```c
static Eina_Accessor *
eina_array_accessor_clone(const Eina_Array *array)
{
   Eina_Accessor_Array *ac;

   EINA_SAFETY_ON_NULL_RETURN_VAL(array, NULL);
   EINA_MAGIC_CHECK_ARRAY(array);

   ac = calloc(1, sizeof (Eina_Accessor_Array));
   if (!ac) return NULL;

   memcpy(ac, array, sizeof(Eina_Accessor_Array));

   return &ac->accessor;
}
```

PVS-Studio warning: V512 A call of the 'memcpy' function will lead to the 'array' buffer becoming out of range. eina_array.c 186

Let me remove all unnecessary details to make it clearer:

```c
....
eina_array_accessor_clone(const Eina_Array *array)
{
   Eina_Accessor_Array *ac = calloc(1, sizeof (Eina_Accessor_Array));
   memcpy(ac, array, sizeof(Eina_Accessor_Array));
}
```

Memory is allocated for an object of the Eina_Accessor_Array type. Then something strange happens: an object of the Eina_Array type is copied to the allocated memory buffer. I don't know what this function is supposed to do, but it does something strange. Firstly, there is a read beyond the bounds of the source (the Eina_Array structure). Secondly, this copying makes no sense at all: the structures have completely different sets of members.

Comment by Carsten Haitzler. Function content correct - type in param is wrong.
It didn't actually matter because the function is assigned to a func ptr that does have the correct type, and since it's a generic "parent class" the assignment casts to a generic accessor type, thus the compiler didn't complain and this seemed to work.

Let's consider the following error:

static Eina_Bool
_convert_etc2_rgb8_to_argb8888(....)
{
   const uint8_t *in = src;
   uint32_t *out = dst;
   int out_step, x, y, k;
   unsigned int bgra[16];
   ....
   for (k = 0; k < 4; k++)
     memcpy(out + x + k * out_step, bgra + k * 16, 16);
   ....
}

PVS-Studio warning: V512 A call of the 'memcpy' function will lead to overflow of the buffer 'bgra + k * 16'. draw_convert.c 318

It's all very simple: a usual array index out of bounds. The bgra array consists of 16 elements of the unsigned int type, and the variable k takes values from 0 to 3 in the loop. Take a look at the expression bgra + k * 16: whenever k is greater than 0, it evaluates to a pointer that points outside the array.

However, some V512 messages indicate code fragments that do not contain a real error. Still, I don't think these are false positives of the analyzer: this code is pretty bad and should be fixed. Let's consider such a case.

#define MATRIX_XX(m) (m)->xx

typedef struct _Eina_Matrix4 Eina_Matrix4;
struct _Eina_Matrix4
{
   double xx; double xy; double xz; double xw;
   double yx; double yy; double yz; double yw;
   double zx; double zy; double zz; double zw;
   double wx; double wy; double wz; double ww;
};

EAPI void
eina_matrix4_array_set(Eina_Matrix4 *m, const double *v)
{
   memcpy(&MATRIX_XX(m), v, sizeof(double) * 16);
}

PVS-Studio warning: V512 A call of the 'memcpy' function will lead to overflow of the buffer '& (m)->xx'. eina_matrix.c 1003

The programmer could simply copy the array to the structure. Instead, the address of the first member xx is used. Probably it is assumed that other fields may be added at the beginning of the structure later.
Then this trick would keep the program behavior from breaking.

Comment by Carsten Haitzler. Above and related memcpy's - not a bug: taking advantage of guaranteed mem layout in structs.

I don't like it, actually. I recommend writing something like this:

struct _Eina_Matrix4
{
   union
   {
      struct
      {
         double xx; double xy; double xz; double xw;
         double yx; double yy; double yz; double yw;
         double zx; double zy; double zz; double zw;
         double wx; double wy; double wz; double ww;
      };
      double RawArray[16];
   };
};

EAPI void
eina_matrix4_array_set(Eina_Matrix4 *m, const double *v)
{
   memcpy(m->RawArray, v, sizeof(double) * 16);
}

This is a little longer, but ideologically more correct. If there is no wish to fix the code, the warning can be suppressed in one of the following ways.

The first method. Add a comment to the code:

memcpy(&MATRIX_XX(m), v, sizeof(double) * 16); //-V512

The second method. Add a line to the settings file:

//-V:MATRIX_:512

The third method. Use a markup base.

Other errors:

static Eina_Bool
evas_image_load_file_head_bmp(void *loader_data,
                              Evas_Image_Property *prop,
                              int *error)
{
   ....
   if (header.comp == 0) // no compression
     {
        // handled
     }
   else if (header.comp == 3) // bit field
     {
        // handled
     }
   else if (header.comp == 4) // jpeg - only printer drivers
     goto close_file;
   else if (header.comp == 3) // png - only printer drivers
     goto close_file;
   else
     goto close_file;
   ....
}

PVS-Studio warning: V517 The use of 'if (A) {...} else if (A) {...}' pattern was detected. There is a probability of logical error presence. Check lines: 433, 439. evas_image_load_bmp.c 433

The variable header.comp is compared with the constant 3 twice.

Other errors:

EOLIAN static Efl_Object *
_efl_net_ssl_context_efl_object_finalize(....)
{
   Efl_Net_Ssl_Ctx_Config cfg;
   ....
   cfg.load_defaults = pd->load_defaults; // <=
   cfg.certificates = &pd->certificates;
   cfg.private_keys = &pd->private_keys;
   cfg.certificate_revocation_lists = &pd->certificate_revocation_lists;
   cfg.certificate_authorities = &pd->certificate_authorities;
   cfg.load_defaults = pd->load_defaults; // <=
   ....
}

PVS-Studio warning: V519 The 'cfg.load_defaults' variable is assigned values twice successively. Perhaps this is a mistake. Check lines: 304, 309. efl_net_ssl_context.c 309

A repeated assignment. Either one assignment is superfluous here, or something else was meant to be copied.

Comment by Carsten Haitzler. Not a bug. Just an overzealous copy & paste of the line.

One more simple case:

EAPI Eina_Bool
edje_edit_size_class_add(Evas_Object *obj, const char *name)
{
   Eina_List *l;
   Edje_Size_Class *sc, *s;
   ....
   /* set default values for max */
   s->maxh = -1;
   s->maxh = -1;
   ....
}

PVS-Studio warning: V519 The 's->maxh' variable is assigned values twice successively. Perhaps this is a mistake. Check lines: 8132, 8133. edje_edit.c 8133

Of course, not all cases are so obvious. Nevertheless, I think the warnings listed below most likely point to errors:

Note. Carsten Haitzler, commenting on the article, wrote that the V519 warnings given in the list are false positives. I do not agree with such an approach. Perhaps the code works correctly, but it is still worth paying attention to and fixing. I decided to leave the list in the article, so that readers could judge for themselves whether these repeated assignments are false positives or not. But since Carsten says they are not errors, I will not count them.

The EFL project has a big problem: checking whether memory was successfully allocated. In general, such checks do exist. Example:

if (!(el = malloc(sizeof(Evas_Stringshare_El) + slen + 1)))
  return NULL;

Moreover, sometimes they appear in places where they aren't really needed (see the discussion of the V668 warning below). But in a huge number of cases there are no checks at all.
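For reference, here is the quoted allocation-check pattern written out as a minimal compilable sketch. The element type is a hypothetical stand-in, not the real Evas_Stringshare_El:

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical stand-in for Evas_Stringshare_El. */
typedef struct { int refcount; /* string bytes follow the struct */ } El;

/* Allocate an element plus room for a string, checking the result
   before the first use - the pattern quoted above. */
static El *el_create(const char *str)
{
   size_t slen = strlen(str);
   El *el;

   if (!(el = malloc(sizeof(El) + slen + 1)))
     return NULL;                 /* allocation failure handled */

   el->refcount = 1;
   memcpy((char *)(el + 1), str, slen + 1);
   return el;
}
```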
Let's take a look at a couple of the analyzer warnings.

Comment by Carsten Haitzler. OK so this is a general acceptance that at least on Linux which was always our primary focus and for a long time was our only target, returns from malloc/calloc/realloc can't be trusted especially for small amounts. Linux overcommits memory by default. That means you get new memory but the kernel has not actually assigned real physical memory pages to it yet. Only virtual space. Not until you touch it. If the kernel cannot service this request your program crashes anyway trying to access memory in what looks like a valid pointer. So all in all the value of checking returns of allocs that are small at least on Linux is low. Sometimes we do it... sometimes not. But the returns cannot be trusted in general UNLESS it's for very large amounts of memory and your alloc is never going to be serviced - e.g. your alloc cannot fit in virtual address space at all (happens sometimes on 32bit). Yes overcommit can be tuned but it comes at a cost that most people never want to pay or no one even knows they can tune. Secondly, if an alloc fails for a small chunk of memory - e.g. a linked list node... realistically if NULL is returned... crashing is about as good as anything you can do. Your memory is so low that you can crash, call abort() like glib does with g_malloc because if you can't allocate 20-40 bytes ... your system is going to fall over anyway as you have no working memory left anyway. I'm not talking about tiny embedded systems here, but large machines with virtual memory and a few megabytes of memory etc. which has been our target. I can see why PVS-Studio doesn't like this. Strictly it is actually correct, but in reality code spent on handling this stuff is kind of a waste of code given the reality of the situation. I'll get more into that later.
static Eina_Debug_Session *
_session_create(int fd)
{
   Eina_Debug_Session *session = calloc(1, sizeof(*session));
   session->dispatch_cb = eina_debug_dispatch;
   session->fd = fd;
   // start the monitor thread
   _thread_start(session);
   return session;
}

Comment by Carsten Haitzler. This is brand new code that arrived 2 months ago and still is being built out and tested and not ready for prime time. It's part of our live debugging infra where any app using EFL can be controlled by a debugger daemon (if it is run) and controlled (inspect all objects in memory and the object tree and their state with introspection live as it runs), collect execution timeline logs (how much time is spent in what function call tree where while rendering in which thread - what threads are using what cpu time at which slots down to the ms and below level, correlated with function calls, state of animation system and when wakeup events happen and the device timestamp that triggered the wakeup, and so on ... so given that scenario ... if you can't calloc a tiny session struct while debugging a crash accessing the first page of memory is pretty much about as good as anything... as above on memory and aborts etc.

Comment by Andrey Karpov. It is not clear to me why it matters that this is new and untested code. In the first place, static analyzers are intended to detect bugs in new code :).

PVS-Studio warning: V522 There might be dereferencing of a potential null pointer 'session'. eina_debug.c 440

The programmer allocated the memory with the calloc function and used it right away.
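A defensive variant of this function might look like the following sketch; the session struct and the helper names are simplified stand-ins for the EFL originals:

```c
#include <stdlib.h>

/* Simplified stand-in for Eina_Debug_Session. */
typedef struct { void (*dispatch_cb)(void); int fd; } Session;

static void dummy_dispatch(void) { /* stands in for eina_debug_dispatch */ }

/* Same logic as _session_create, but the calloc result is checked
   before the first dereference. */
static Session *session_create(int fd)
{
   Session *session = calloc(1, sizeof(*session));
   if (!session) return NULL;        /* the missing check */
   session->dispatch_cb = dummy_dispatch;
   session->fd = fd;
   return session;
}
```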
Another example:

static Reference *
_entry_reference_add(Entry *entry, Client *client,
                     unsigned int client_entry_id)
{
   Reference *ref;

   // increase reference for this file
   ref = malloc(sizeof(*ref));
   ref->client = client;
   ref->entry = entry;
   ref->client_entry_id = client_entry_id;
   ref->count = 1;
   entry->references = eina_list_append(entry->references, ref);
   return ref;
}

PVS-Studio warning: V522 There might be dereferencing of a potential null pointer 'ref'. evas_cserve2_cache.c 1404

The same situation is repeated 563 times. I cannot cite them all in the article; here is a link to the file: EFL_V522.txt.

static void
_ecore_con_url_dialer_error(void *data, const Efl_Event *event)
{
   Ecore_Con_Url *url_con = data;
   Eina_Error *perr = event->info;
   int status;

   status = efl_net_dialer_http_response_status_get(url_con->dialer);
   if ((status < 500) && (status > 599))
     {
        DBG("HTTP error %d reset to 1", status);
        status = 1; /* not a real HTTP error */
     }
   WRN("HTTP dialer error url='%s': %s",
       efl_net_dialer_address_dial_get(url_con->dialer),
       eina_error_msg_get(*perr));
   _ecore_con_event_url_complete_add(url_con, status);
}

PVS-Studio warning: V547 Expression '(status < 500) && (status > 599)' is always false. ecore_con_url.c 351

The correct variant of the check is as follows:

if ((status < 500) || (status > 599))

A fragment of code with this error was copied into another two fragments:

Another erroneous situation:

EAPI void
eina_rectangle_pool_release(Eina_Rectangle *rect)
{
   Eina_Rectangle *match;
   Eina_Rectangle_Alloc *new;
   ....
   match = (Eina_Rectangle *)(new + 1);
   if (match)
     era->pool->empty =
       _eina_rectangle_skyline_list_update(era->pool->empty, match);
   ....
}

PVS-Studio warning: V547 Expression 'match' is always true. eina_rectangle.c 798

After 1 has been added to the pointer, there is no point in checking it against NULL. The match pointer could become null only if the addition overflowed.
However, pointer overflow is undefined behavior, so this possibility shouldn't be considered anyway.

And another case.

EAPI const void *
evas_object_smart_interface_get(const Evas_Object *eo_obj,
                                const char *name)
{
   Evas_Smart *s;
   ....
   s = evas_object_smart_smart_get(eo_obj);
   if (!s) return NULL;
   if (s)
   ....
}

PVS-Studio warning: V547 Expression 's' is always true. evas_object_smart.c 160

If the pointer is NULL, the function exits, so the repeated check makes no sense.

Other errors: EFL_V547.txt. There are V547 warnings that I skipped and didn't write down, as sorting them out wasn't very interesting. There may be several more errors among them.

All of the following warnings are issued for one code fragment. First let's take a look at the declaration of two enumerations.

typedef enum _Elm_Image_Orient_Type
{
   ELM_IMAGE_ORIENT_NONE = 0,
   ELM_IMAGE_ORIENT_0 = 0,
   ELM_IMAGE_ROTATE_90 = 1,
   ELM_IMAGE_ORIENT_90 = 1,
   ELM_IMAGE_ROTATE_180 = 2,
   ELM_IMAGE_ORIENT_180 = 2,
   ELM_IMAGE_ROTATE_270 = 3,
   ELM_IMAGE_ORIENT_270 = 3,
   ELM_IMAGE_FLIP_HORIZONTAL = 4,
   ELM_IMAGE_FLIP_VERTICAL = 5,
   ELM_IMAGE_FLIP_TRANSPOSE = 6,
   ELM_IMAGE_FLIP_TRANSVERSE = 7
} Elm_Image_Orient;

typedef enum
{
   EVAS_IMAGE_ORIENT_NONE = 0,
   EVAS_IMAGE_ORIENT_0 = 0,
   EVAS_IMAGE_ORIENT_90 = 1,
   EVAS_IMAGE_ORIENT_180 = 2,
   EVAS_IMAGE_ORIENT_270 = 3,
   EVAS_IMAGE_FLIP_HORIZONTAL = 4,
   EVAS_IMAGE_FLIP_VERTICAL = 5,
   EVAS_IMAGE_FLIP_TRANSPOSE = 6,
   EVAS_IMAGE_FLIP_TRANSVERSE = 7
} Evas_Image_Orient;

As you see, the constant names in these enumerations are similar. This is what tripped up the programmer.

EAPI void
elm_image_orient_set(Evas_Object *obj, Elm_Image_Orient orient)
{
   Efl_Orient dir;
   Efl_Flip flip;

   EFL_UI_IMAGE_DATA_GET(obj, sd);
   sd->image_orient = orient;

   switch (orient)
     {
      case EVAS_IMAGE_ORIENT_0:
        ....
      case EVAS_IMAGE_ORIENT_90:
        ....
      case EVAS_IMAGE_FLIP_HORIZONTAL:
        ....
      case EVAS_IMAGE_FLIP_VERTICAL:
        ....
}

PVS-Studio warnings: Constants from different enumerations are compared eight times.
At the same time, thanks to luck, these comparisons work correctly: the constants have the same values. The function will work correctly, but these are still errors.

Comment by Carsten Haitzler. All of the above orient/rotate enum stuff is intentional. We had to clean up duplication of enums and we ensured they had the same values so they were interchangeable - we moved from rotate to orient and kept the compatibility. It's part of our move over to the new object system and a lot of code auto-generation etc. that is still underway and beta. It's not an error but intended to do this as part of transitioning, so it's a false positive.

Comment by Andrey Karpov. I do not agree that these are false positives, in this case and in some others. Following such logic, it turns out that a warning for incorrect but, for some reason, working code is a false positive.

accessor_iterator<T>& operator++(int)
{
   accessor_iterator<T> tmp(*this);
   ++*this;
   return tmp;
}

PVS-Studio warning: V558 Function returns the reference to temporary local object: tmp. eina_accessor.hh 519

To fix the code, you should remove & from the function declaration:

accessor_iterator<T> operator++(int)

Other errors:

static unsigned int
read_compressed_channel(....)
{
   ....
   signed char headbyte;
   ....
   if (headbyte >= 0)
     {
        ....
     }
   else if (headbyte >= -127 && headbyte <= -1) // <=
   ....
}

PVS-Studio warning: V560 A part of conditional expression is always true: headbyte <= - 1. evas_image_load_psd.c 221

Since this branch is reached only when headbyte < 0, the check headbyte <= -1 is always true and therefore pointless.

Let's have a look at a different case.

static Eeze_Disk_Type
_eeze_disk_type_find(Eeze_Disk *disk)
{
   const char *test;
   ....
   test = udev_device_get_property_value(disk->device, "ID_BUS");
   if (test)
     {
        if (!strcmp(test, "ata")) return EEZE_DISK_TYPE_INTERNAL;
        if (!strcmp(test, "usb")) return EEZE_DISK_TYPE_USB;
        return EEZE_DISK_TYPE_UNKNOWN;
     }
   if ((!test) && (!filesystem)) // <=
   ....
}

PVS-Studio warning: V560 A part of conditional expression is always true: (!test). eeze_disk.c 55

The second condition is redundant: if the test pointer were non-null, the function would already have returned.

Other errors: EFL_V560.txt.

EOLIAN static Eina_Error
_efl_net_server_tcp_efl_net_server_fd_socket_activate(....)
{
   ....
   struct sockaddr_storage *addr;
   socklen_t addrlen;
   ....
   addrlen = sizeof(addr);
   if (getsockname(fd, (struct sockaddr *)&addr, &addrlen) != 0)
   ....
}

PVS-Studio warning: V568 It's odd that 'sizeof()' operator evaluates the size of a pointer to a class, but not the size of the 'addr' class object. efl_net_server_tcp.c 192

I suspect that the size of the structure should be evaluated here, not the size of the pointer. Then the correct code would be:

addrlen = sizeof(*addr);

Other errors:

EAPI void
eeze_disk_scan(Eeze_Disk *disk)
{
   ....
   if (!disk->cache.vendor)
     if (!disk->cache.vendor)
       disk->cache.vendor = udev_device_get_sysattr_value(....);
   ....
}

PVS-Studio warning: V571 Recurring check. The 'if (!disk->cache.vendor)' condition was already verified in line 298. eeze_disk.c 299

A redundant or incorrect check.

Other errors:

Note. Carsten Haitzler does not consider these erroneous; he regards such warnings as recommendations for micro-optimizations. But I think this code is incorrect and needs to be fixed. In my opinion, these are errors. We disagree on how to treat these analyzer warnings.

This diagnostic is triggered when suspicious actual arguments are passed to a function. Let's consider several variants of how it fires.

static void
free_buf(Eina_Evlog_Buf *b)
{
   if (!b->buf) return;
   b->size = 0;
   b->top = 0;
# ifdef HAVE_MMAP
   munmap(b->buf, b->size);
# else
   free(b->buf);
# endif
   b->buf = NULL;
}

PVS-Studio warning: V575 The 'munmap' function processes '0' elements. Inspect the second argument.
eina_evlog.c 117

First, 0 was written to the variable b->size, and then it was passed to the munmap function. It seems to me that it should be written differently:

static void
free_buf(Eina_Evlog_Buf *b)
{
   if (!b->buf) return;
   b->top = 0;
# ifdef HAVE_MMAP
   munmap(b->buf, b->size);
# else
   free(b->buf);
# endif
   b->buf = NULL;
   b->size = 0;
}

Let's continue.

EAPI Eina_Bool
eina_simple_xml_parse(....)
{
   ....
   else if ((itr + sizeof("<!>") - 1 < itr_end) &&
            (!memcmp(itr + 2, "", sizeof("") - 1)))
   ....
}

PVS-Studio warning: V575 The 'memcmp' function processes '0' elements. Inspect the third argument. eina_simple_xml_parser.c 355

It's unclear why the programmer compares 0 bytes of memory.

Let's continue.

static void
_edje_key_down_cb(....)
{
   ....
   char *compres = NULL, *string = (char *)ev->string;
   ....
   if (compres)
     {
        string = compres;
        free_string = EINA_TRUE;
     }
   else free(compres);
   ....
}

PVS-Studio warning: V575 The null pointer is passed into 'free' function. Inspect the first argument. edje_entry.c 2306

If the compres pointer is null, then there is no need to free the memory. The line "else free(compres);" can be removed.

Comment by Carsten Haitzler. Not a bug but indeed some extra if paranoia like code that isn't needed. Micro optimizations again?

Comment by Andrey Karpov. In this case, we also have different opinions. I consider this warning useful, pointing at an error. Probably another pointer should have been freed, and the analyzer is absolutely right to draw attention to this code. Even if there is no error, the code should be fixed so that it does not confuse the analyzer and programmers.

Similarly:

Most of the V575 diagnostic warnings, however, are related to the use of potentially null pointers. These errors are quite similar to the ones we looked at when discussing the V522 diagnostic.
static void
_fill_all_outs(char **outs, const char *val)
{
   size_t vlen = strlen(val);
   for (size_t i = 0; i < (sizeof(_dexts) / sizeof(char *)); ++i)
     {
        if (outs[i]) continue;

        size_t dlen = strlen(_dexts[i]);
        char *str = malloc(vlen + dlen + 1);
        memcpy(str, val, vlen);
        memcpy(str + vlen, _dexts[i], dlen);
        str[vlen + dlen] = '\0';
        outs[i] = str;
     }
}

PVS-Studio warning: V575 The potential null pointer is passed into 'memcpy' function. Inspect the first argument. main.c 112

The pointer is used without checking whether the memory was allocated.

Other errors: EFL_V575.txt.

void
_ecore_x_event_handle_focus_in(XEvent *xevent)
{
   ....
   e->time = _ecore_x_event_last_time;
   _ecore_x_event_last_time = e->time;
   ....
}

PVS-Studio warning: V587 An odd sequence of assignments of this kind: A = B; B = A;. Check lines: 1006, 1007. ecore_x_events.c 1007

Comment by Carsten Haitzler. Not bugs as such - looks like just overzealous storing of last timestamp. This is adding a timestamp to an event when no original timestamp exists so we can keep a consistent structure for events with timestamps, but it is code clutter and a micro optimization.

Comment by Andrey Karpov. Apparently, we cannot agree about some issues. Some of the cases are erroneous in my view, and inaccurate in Carsten's. As I said, I do not agree with it and, for this reason, I do not include some similar comments in the article.

Another error: V587 An odd sequence of assignments of this kind: A = B; B = A;. Check lines: 1050, 1051. ecore_x_events.c 1051

static int
command(void)
{
   ....
   while (*lptr == ' ' && *lptr != '\0')
     lptr++; /* skip whitespace */
   ....
}

PVS-Studio warning: V590 Consider inspecting the '* lptr == ' ' && * lptr != '\0'' expression. The expression is excessive or contains a misprint. embryo_cc_sc2.c 944

A redundant check.
It can be simplified:

while (*lptr == ' ')

Two more similar warnings:

_self_type& operator=(_self_type const& other)
{
   _base_type::operator=(other);
}

PVS-Studio warning: V591 Non-void function should return a value. eina_accessor.hh 330

static void
eng_image_size_get(void *engine EINA_UNUSED, void *image,
                   int *w, int *h)
{
   Evas_GL_Image *im;
   if (!image)
     {
        *w = 0; // <=
        *h = 0; // <=
        return;
     }
   im = image;
   if (im->orient == EVAS_IMAGE_ORIENT_90 ||
       im->orient == EVAS_IMAGE_ORIENT_270 ||
       im->orient == EVAS_IMAGE_FLIP_TRANSPOSE ||
       im->orient == EVAS_IMAGE_FLIP_TRANSVERSE)
     {
        if (w) *w = im->h;
        if (h) *h = im->w;
     }
   else
     {
        if (w) *w = im->w;
        if (h) *h = im->h;
     }
}

PVS-Studio warnings: The if (w) and if (h) checks hint to the analyzer that the input arguments w and h may be NULL. It is dangerous that at the beginning of the function they are used without verification. If you call the eng_image_size_get function as follows:

eng_image_size_get(NULL, NULL, NULL, NULL);

it won't be ready for that, and a null pointer dereference will occur.

The warnings which, in my opinion, also indicate errors:

EAPI Eina_Binbuf *
emile_binbuf_decipher(Emile_Cipher_Algorithm algo,
                      const Eina_Binbuf *data,
                      const char *key, unsigned int length)
{
   ....
   Eina_Binbuf *result = NULL;
   unsigned int *over;
   EVP_CIPHER_CTX *ctx = NULL;
   unsigned char ik[MAX_KEY_LEN];
   unsigned char iv[MAX_IV_LEN];
   ....
on_error:
   memset(iv, 0, sizeof (iv));
   memset(ik, 0, sizeof (ik));
   if (ctx) EVP_CIPHER_CTX_free(ctx);
   eina_binbuf_free(result);
   return NULL;
}

PVS-Studio warnings: I have already explained many times in articles why the compiler may delete such memset calls, so I won't repeat myself. If someone is not familiar with this issue, I suggest reading the description of the V597 diagnostic.

Comment by Carsten Haitzler. Above 2 - totally familiar with the issue. The big problem is memset_s is not portable or easily available, thus why we don't use it yet.
You have to do special checks for it to see if it exists as it does not exist everywhere. Just as a simple example add AC_CHECK_FUNCS([memset_s]) to your configure.ac and if memset_s is not found you have to jump through some more hoops like define __STDC_WANT_LIB_EXT1__ 1 before including system headers ... and it's still not declared. On my pretty up to date Arch system memset_s is not defined by any system headers, same on debian testing... warning: implicit declaration of function 'memset_s'; did you mean 'memset'? [-Wimplicit-function-declaration], and then compile failure ... no matter what I do. A grep -r of all my system includes shows no memset_s declared ... so I think advising people to use memset_s is only viable advice if it's widely available and usable. Be aware of this.

Other errors:

First of all, let's take a look at the eina_value_util_type_size function:

static inline size_t
eina_value_util_type_size(const Eina_Value_Type *type)
{
   if (type == EINA_VALUE_TYPE_INT)
     return sizeof(int32_t);
   if (type == EINA_VALUE_TYPE_UCHAR)
     return sizeof(unsigned char);
   if ((type == EINA_VALUE_TYPE_STRING) ||
       (type == EINA_VALUE_TYPE_STRINGSHARE))
     return sizeof(char*);
   if (type == EINA_VALUE_TYPE_TIMESTAMP)
     return sizeof(time_t);
   if (type == EINA_VALUE_TYPE_ARRAY)
     return sizeof(Eina_Value_Array);
   if (type == EINA_VALUE_TYPE_DOUBLE)
     return sizeof(double);
   if (type == EINA_VALUE_TYPE_STRUCT)
     return sizeof(Eina_Value_Struct);
   return 0;
}

Note that the function may return 0. Now let's see how this function is used.

static inline unsigned int
eina_value_util_type_offset(const Eina_Value_Type *type,
                            unsigned int base)
{
   unsigned size, padding;
   size = eina_value_util_type_size(type);
   if (!(base % size))
     return base;
   padding = ((base > size) ? (base - size) : (size - base));
   return base + padding;
}

PVS-Studio warning: V609 Mod by zero. Denominator range [0..24]. eina_inline_value_util.x 60

Potential division by zero.
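A guarded version of the offset computation might look like the following sketch. It is simplified: the size is passed in directly, standing in for the result of eina_value_util_type_size:

```c
#include <stddef.h>

/* Sketch of a guarded offset computation: if the size lookup can
   return 0 (unknown type), bail out before the modulo operation. */
static unsigned int type_offset_guarded(size_t size, unsigned int base)
{
   unsigned int padding;

   if (size == 0) return base;        /* guard against mod by zero */
   if (!(base % size)) return base;
   padding = (base > size) ? (unsigned int)(base - size)
                           : (unsigned int)(size - base);
   return base + padding;
}
```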
I am not sure whether there can be a real situation in which the eina_value_util_type_size function returns 0. In any case, the code is dangerous.

Comment by Carsten Haitzler. The 0 return would only happen if you have provided totally invalid input, like again strdup(NULL) ... So I call this a false positive as you can't have an eina_value generic value that is not valid without bad stuff happening - validate you passed a proper value in first. eina_value is performance sensitive btw so every check here costs something. It's like adding if() checks to the add opcode.

void
fetch_linear_gradient(....)
{
   ....
   if (t + inc*length < (float)(INT_MAX >> (FIXPT_BITS + 1)) &&
       t + inc*length > (float)(INT_MIN >> (FIXPT_BITS + 1)))
   ....
}

PVS-Studio warning: V610 Unspecified behavior. Check the shift operator '>>'. The left operand '(- 0x7fffffff - 1)' is negative. ector_software_gradient.c 412

extern struct tm *gmtime (const time_t *__timer)
  __attribute__ ((__nothrow__ , __leaf__));

static void
_set_headers(Evas_Object *obj)
{
   static char part[] = "ch_0.text";
   int i;
   struct tm *t;
   time_t temp;
   ELM_CALENDAR_DATA_GET(obj, sd);

   elm_layout_freeze(obj);
   sd->filling = EINA_TRUE;
   t = gmtime(&temp); // <=
   ....
}

PVS-Studio warning: V614 Uninitialized variable 'temp' used. Consider checking the first actual argument of the 'gmtime' function. elm_calendar.c 720

static void
_opcodes_unregister_all(Eina_Debug_Session *session)
{
   Eina_List *l;
   int i;
   _opcode_reply_info *info = NULL;

   if (!session) return;
   session->cbs_length = 0;
   for (i = 0; i < session->cbs_length; i++)
     eina_list_free(session->cbs[i]);
   ....
}

PVS-Studio warning: V621 Consider inspecting the 'for' operator. It's possible that the loop will be executed incorrectly or won't be executed at all. eina_debug.c 405

The counter session->cbs_length is zeroed right before the loop, so the loop condition is false immediately and nothing gets freed; most likely the zeroing was meant to come after the loop.

There is an ordinary btVector3 class with a constructor. However, this constructor does nothing.

class btVector3
{
public:
   ....
   btScalar m_floats[4];
   inline btVector3() { }
   ....
};

There is also such a Simulation_Msg structure:

typedef struct _Simulation_Msg Simulation_Msg;
struct _Simulation_Msg
{
   EPhysics_Body *body_0;
   EPhysics_Body *body_1;
   btVector3 pos_a;
   btVector3 pos_b;
   Eina_Bool tick:1;
};

Note that some of its members are of the btVector3 type. Now let's see how the structure is created:

_ephysics_world_tick_dispatch(EPhysics_World *world)
{
   Simulation_Msg *msg;

   if (!world->ticked) return;

   world->ticked = EINA_FALSE;
   world->pending_ticks++;

   msg = (Simulation_Msg *)calloc(1, sizeof(Simulation_Msg));
   msg->tick = EINA_TRUE;
   ecore_thread_feedback(world->cur_th, msg);
}

PVS-Studio warning: V630 The 'calloc' function is used to allocate memory for an array of objects which are classes containing constructors. ephysics_world.cpp 299

A structure containing non-POD members is created with a call to the calloc function. In practice, this code will work, but it is generally incorrect: technically, using this structure results in undefined behavior.

Another error: V630 The 'calloc' function is used to allocate memory for an array of objects which are classes containing constructors. ephysics_world.cpp 471

Comment by Carsten Haitzler. Because the other end of the pipe is C code that is passing around a raw ptr as the result from thread A to thread B, it's a mixed C and C++ environment. In the end we'd be sending raw ptrs around no matter what...

int
evas_mem_free(int mem_required EINA_UNUSED)
{
   return 0;
}

int
evas_mem_degrade(int mem_required EINA_UNUSED)
{
   return 0;
}

void *
evas_mem_calloc(int size)
{
   void *ptr;

   ptr = calloc(1, size);
   if (ptr) return ptr;
   MERR_BAD();
   while ((!ptr) && (evas_mem_free(size)))
     ptr = calloc(1, size);
   if (ptr) return ptr;
   while ((!ptr) && (evas_mem_degrade(size)))
     ptr = calloc(1, size);
   if (ptr) return ptr;
   MERR_FATAL();
   return NULL;
}

PVS-Studio warnings: Obviously, this is incomplete code.

Comment by Carsten Haitzler.
Old old code because caching was implemented, so it was basically a lot of NOPs waiting to be filled in. Since evas speculatively cached data (megabytes of it) the idea was that if allocs fail - free up some cache and try again... if that fails then actually try nuke some non-cached data that could be reloaded/rebuilt but with more cost... and only fail after that. But because of overcommit this didn't end up practical as allocs would succeed then just fall over often enough if you did hit a really low memory situation, so I gave up. It's not a bug. It's a piece of history :).

The next case is more interesting and odd.

EAPI void
evas_common_font_query_size(....)
{
   ....
   size_t cluster = 0;
   size_t cur_cluster = 0;
   ....
   do
     {
        cur_cluster = cluster + 1;
        glyph--;
        if (cur_w > ret_w)
          {
             ret_w = cur_w;
          }
     }
   while ((glyph > first_glyph) && (cur_cluster == cluster));
   ....
}

PVS-Studio warning: V654 The condition of loop is always false. evas_font_query.c 376

This assignment is executed in the loop:

cur_cluster = cluster + 1;

This means that the comparison (cur_cluster == cluster) always evaluates to false.

Comment by Carsten Haitzler. Above ... it seems you built without harfbuzz support... we highly don't recommend that. It's not tested. Building without basically nukes almost all of the interesting unicode/intl support for text layout. You do have to explicitly disable it ... because with harfbuzz support we have opentype enabled and a different bit of code is executed due to ifdefs.. if you actually check history of the code before adding opentype support it didn't loop over clusters at all or even glyphs .. so really the ifdef just ensures the loop only loops once and avoids more ifdefs later in the loop conditions making the code easier to maintain - beware the ifdefs!

As we found out earlier, there are hundreds of fragments of code where the pointer is not checked after memory is allocated with the malloc / calloc function.
Against this background, the checks after the usage of the new operator look like a joke. There are some harmless errors:

static EPhysics_Body *
_ephysics_body_rigid_body_add(....)
{
   ....
   motion_state = new btDefaultMotionState();
   if (!motion_state)
     {
        ERR("Couldn't create a motion state.");
        goto err_motion_state;
     }
   ....
}

PVS-Studio warning: V668 There is no sense in testing the 'motion_state' pointer against null, as the memory was allocated using the 'new' operator. The exception will be generated in the case of memory allocation error. ephysics_body.cpp 837

There is no point in this check: if the allocation fails, a std::bad_alloc exception is thrown.

Comment by Carsten Haitzler. Fair enough, but be aware some compilers DON'T throw exceptions... they return NULL on new... so not totally useless code depending on the compiler. I believe VSC6 didn't throw an exception - so before exceptions were a thing this actually was correct behavior, also it depends on the allocator func if it throws an exception or not, so all in all, very minor harmless code.

Comment by Andrey Karpov. I do not agree. See the next comment.

There are more serious errors:

EAPI EPhysics_Constraint *
ephysics_constraint_linked_add(EPhysics_Body *body1,
                               EPhysics_Body *body2)
{
   ....
   constraint->bt_constraint = new btGeneric6DofConstraint(
      *ephysics_body_rigid_body_get(body1),
      *ephysics_body_rigid_body_get(body2),
      btTransform(), btTransform(), false);

   if (!constraint->bt_constraint)
     {
        ephysics_world_lock_release(constraint->world);
        free(constraint);
        return NULL;
     }
   ....
}

PVS-Studio warning: V668 There is no sense in testing the 'constraint->bt_constraint' pointer against null, as the memory was allocated using the 'new' operator. The exception will be generated in the case of memory allocation error. ephysics_constraints.cpp 382

If the exception is thrown, it is not just the program logic that breaks.
What is more, a memory leak will occur, as the free function will not be called.

Comment by Carsten Haitzler. Same as the previous new + NULL check.

Comment by Andrey Karpov. It is not clear to me why Visual C++ 6.0 is mentioned here. Yes, it does not throw an exception when using the new operator, just like other old compilers. Yes, if an old compiler is used, the program will work correctly. But Tizen will never be compiled using Visual C++ 6.0! There is no point in thinking about it. A new compiler will throw an exception, and this will lead to errors. Let me emphasize one more time that this is not an extra check. The program does not work the way the programmer expects. Moreover, in the second example there is a memory leak. If we consider the case when new does not throw an exception, new(nothrow) should be used. Then the analyzer will not complain. In any case, this code is incorrect.

Other errors: EFL_V668.txt.

First, let's see how the abs function is declared:

extern int abs (int __x)
  __attribute__ ((__nothrow__ , __leaf__))
  __attribute__ ((__const__)) ;

As you can see, it takes and returns int values. Now let's see how this function is used.

#define ELM_GESTURE_MINIMUM_MOMENTUM 0.001

typedef int Evas_Coord;

struct _Elm_Gesture_Momentum_Info
{
  ....
  Evas_Coord mx;
  Evas_Coord my;
  ....
};

static void
_momentum_test(....)
{
  ....
  if ((abs(st->info.mx) > ELM_GESTURE_MINIMUM_MOMENTUM) ||
      (abs(st->info.my) > ELM_GESTURE_MINIMUM_MOMENTUM))
    state_to_report = ELM_GESTURE_STATE_END;
  ....
}

PVS-Studio warnings:

It is weird to compare int values with the constant 0.001: any non-zero momentum passes such a check. There is definitely a bug here.

static Image_Entry *
_scaled_image_find(Image_Entry *im, ....)
{
  size_t pathlen, keylen, size;
  char *hkey;
  Evas_Image_Load_Opts lo;
  Image_Entry *ret;

  if (((!im->file) || ((!im->file) && (!im->key))) ||
      (!im->data1) ||
      ((src_w == dst_w) && (src_h == dst_h)) ||
      ((!im->flags.alpha) && (!smooth)))
    return NULL;
  ....
}

PVS-Studio warning: V686 A pattern was detected: (!im->file) || ((!im->file) && ...). The expression is excessive or contains a logical error. evas_cache2.c 825

Most likely this is not a real error, but redundant code. Let me explain this using a simple example.

if (A || (A && B) || C)

The expression can be simplified to:

if (A || C)

Although, it is also possible that there is a typo in the expression and something important is not checked. In that case, it is an error. It is hard for me to judge this unfamiliar code.

Similar redundant conditions:

#define CPP_PREV_BUFFER(BUFFER) ((BUFFER)+1)

static void
initialize_builtins(cpp_reader * pfile)
{
  ....
  cpp_buffer *pbuffer = CPP_BUFFER(pfile);
  while (CPP_PREV_BUFFER(pbuffer))
    pbuffer = CPP_PREV_BUFFER(pbuffer);
  ....
}

PVS-Studio warning: V694 The condition ((pbuffer) + 1) is only false if there is pointer overflow which is undefined behavior anyway. cpplib.c 2496

I will expand the macro to make it clearer.

cpp_buffer *pbuffer = ....;
while (pbuffer + 1)
  ....

The loop condition is always true. Formally, it can become false only if the pointer is equal to its maximum value and an overflow occurs. This is undefined behavior, which means a compiler has the right to ignore such a situation. The compiler can delete the conditional check, and then we get:

while (true)
  pbuffer = CPP_PREV_BUFFER(pbuffer);

It's a very interesting bug.

Another incorrect check: V694 The condition ((ip) + 1) is only false if there is pointer overflow which is undefined behavior anyway. cpplib.c 2332

Comment by Carsten Haitzler. This old code indeed has issues. There should be checks against CPP_NULL_BUFFER(pfile), because if it's a linked list this is a null check, and if it's a static buffer array used as a stack, it checks the stack end position. Interestingly, in decades it has never been triggered, as far as I know.

static void
_efl_vg_gradient_efl_gfx_gradient_stop_set(
  ...., Efl_VG_Gradient_Data *pd, ....)
{
  pd->colors = realloc(pd->colors,
                       length * sizeof(Efl_Gfx_Gradient_Stop));
  if (!pd->colors)
    {
       pd->colors_count = 0;
       return;
    }

  memcpy(pd->colors, colors, length * sizeof(Efl_Gfx_Gradient_Stop));
  pd->colors_count = length;

  _efl_vg_changed(obj);
}

PVS-Studio warning: V701 realloc() possible leak: when realloc() fails in allocating memory, original pointer 'pd->colors' is lost. Consider assigning realloc() to a temporary pointer. evas_vg_gradient.c 14

This line contains the error:

pd->colors = realloc(pd->colors, ....);

The old value of the pointer pd->colors is not saved anywhere. If a new memory block cannot be allocated, NULL is written into pd->colors, the address of the previous buffer is lost, and a memory leak occurs.

In some cases, the memory leak is accompanied by a null pointer dereference. If the null pointer dereference isn't handled in any way, the program will abort. If it is handled, the program will continue working, but the memory leak will still occur. Here is an example of such code:

EOLIAN void
_evas_canvas_key_lock_add(
  Eo *eo_e, Evas_Public_Data *e, const char *keyname)
{
  if (!keyname) return;
  if (e->locks.lock.count >= 64) return;
  evas_key_lock_del(eo_e, keyname);
  e->locks.lock.count++;
  e->locks.lock.list =
    realloc(e->locks.lock.list,
            e->locks.lock.count * sizeof(char *));
  e->locks.lock.list[e->locks.lock.count - 1] = strdup(keyname);
  eina_hash_free_buckets(e->locks.masks);
}

PVS-Studio warning: V701 realloc() possible leak: when realloc() fails in allocating memory, original pointer 'e->locks.lock.list' is lost. Consider assigning realloc() to a temporary pointer. evas_key.c 142

Other errors: EFL_701.txt.

static Eina_Bool
_evas_textblock_node_text_adjust_offsets_to_start(....)
{
  Evas_Object_Textblock_Node_Format *last_node, *itr;
  ....
  if (!itr || (itr && (itr->text_node != n)))
  ....
}

PVS-Studio warning: V728 An excessive check can be simplified.
The '||' operator is surrounded by opposite expressions '!itr' and 'itr'. evas_object_textblock.c 9505

This is not a bug, but an unnecessarily complicated condition. The expression can be simplified:

if (!itr || (itr->text_node != n))

Other redundant conditions:

We have already considered the V522 warning in the case when there is no check of the pointer value after the memory allocation. We will now consider a similar error.

EAPI Eina_Bool
edje_edit_sound_sample_add(
  Evas_Object *obj, const char *name, const char *snd_src)
{
  ....
  ed->file->sound_dir->samples =
    realloc(ed->file->sound_dir->samples,
            sizeof(Edje_Sound_Sample) *
            ed->file->sound_dir->samples_count);

  sound_sample = ed->file->sound_dir->samples +
                 ed->file->sound_dir->samples_count - 1;
  sound_sample->name = (char *)eina_stringshare_add(name);
  ....
}

PVS-Studio warning: V769 The 'ed->file->sound_dir->samples' pointer in the expression could be nullptr. In such case, resulting value of arithmetic operations on this pointer will be senseless and it should not be used. edje_edit.c 1271

The memory allocation function is called, and the resulting pointer is not dereferenced directly, but is used in an expression. Since a value is added to the pointer, it is not strictly correct to say that a null pointer is used: the pointer is non-null in any case, but it can be invalid if the memory was not allocated. Exactly this kind of code is flagged by the diagnostic under consideration. By the way, such pointers are more dangerous than null ones. A null pointer dereference will definitely be revealed by the operating system, while through a (NULL + N) pointer one can get to a memory location that can be modified and stores some data.

Other errors:

Sometimes the V779 diagnostic indicates code which is harmless but written incorrectly.
For example:

EAPI Eina_Bool
ecore_x_xinerama_screen_geometry_get(int screen,
                                     int *x, int *y,
                                     int *w, int *h)
{
   LOGFN(__FILE__, __LINE__, __FUNCTION__);
#ifdef ECORE_XINERAMA
   if (_xin_info)
     {
        int i;

        for (i = 0; i < _xin_scr_num; i++)
          {
             if (_xin_info[i].screen_number == screen)
               {
                  if (x) *x = _xin_info[i].x_org;
                  if (y) *y = _xin_info[i].y_org;
                  if (w) *w = _xin_info[i].width;
                  if (h) *h = _xin_info[i].height;
                  return EINA_TRUE;
               }
          }
     }
#endif /* ifdef ECORE_XINERAMA */
   if (x) *x = 0;
   if (y) *y = 0;
   if (w) *w = DisplayWidth(_ecore_x_disp, 0);
   if (h) *h = DisplayHeight(_ecore_x_disp, 0);
   return EINA_FALSE;
   screen = 0; // <=
}

PVS-Studio warning: V779 Unreachable code detected. It is possible that an error is present. ecore_x_xinerama.c 92

Under certain conditions, the screen argument is not used anywhere in the function body. Apparently a compiler or an analyzer warned about the unused argument, so the programmer wrote the assignment - which is actually never executed - to please that tool. In my opinion, it would be better to mark the argument using EINA_UNUSED.

Now let's look at a more difficult case.

extern void _exit (int __status)
  __attribute__ ((__noreturn__));

static void
_timeout(int val)
{
   _exit(-1);
   if (val) return;
}

PVS-Studio warning: V779 Unreachable code detected. It is possible that an error is present. timeout.c 30

The _exit function does not return control, so this code is incredibly strange. It seems to me the function should be written as follows:

static void
_timeout(int val)
{
   if (val) return;
   _exit(-1);
}

Comment by Carsten Haitzler. Not a bug. It's also an unused-param thing from before the macros. The timeout makes the process exit in case it takes too long (assuming the decoder lib is stuck if a timeout happens).

Comment by Andrey Karpov. I do not agree. Perhaps the program works correctly, but from a professional programmer's point of view, such code is unacceptable.
I think it is a mistake to write such code to suppress false positives. It will confuse the person who maintains this code later.

Other errors: EFL_V779.txt.

static Elocation_Address *address = NULL;

EAPI Eina_Bool
elocation_address_get(Elocation_Address *address_shadow)
{
  if (!address) return EINA_FALSE;
  if (address == address_shadow) return EINA_TRUE;
  address_shadow = address;
  return EINA_TRUE;
}

PVS-Studio warning: V1001 The 'address_shadow' variable is assigned but is not used until the end of the function. elocation.c 1122

This is very strange code: the assignment to the parameter has no effect outside the function. It seems to me that in fact something like this was intended:

*address_shadow = *address;

Other errors:

PVS-Studio seems to find different bugs than Coverity. So far it seems to be noisier (more non-bugs pointed out as issues), BUT some of these are real bugs, so if you're willing to go through all the false positives, there are indeed some gems there. The text reports are far less useful than Coverity's, but they get the job done. I'd actually consider adding PVS-Studio as a second line of defense after Coverity, as it can catch some things Coverity cannot, and if you are willing to invest the time, that's a good thing. I've already addressed some of the above issues, others will take time, and some are just non-issues that a tool like PVS-Studio or Coverity will complain about, where being able to ignore them is the best solution.

I would like to remind you one more time that visitors can explore the analyzer report to make sure that the analyzer characteristics given in the article correspond to reality.

Among the errors described in the article there is nothing epic, but this is not surprising. As I have already said, the EFL project is regularly checked using Coverity. Despite this, the PVS-Studio analyzer still managed to identify many errors. I think PVS-Studio would have shown itself better if it had been possible to go back in time and swap the analyzers :).
I mean, if PVS-Studio had been used first, and then Coverity, PVS-Studio would have looked brighter. I suggest downloading and trying PVS-Studio to check your own projects. Thank you for your attention, and I wish fewer bugs in your ...
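As a practical takeaway from the V701 warnings discussed above, the safe realloc pattern can be sketched like this (a generic example, not EFL code; the function name is made up):

```c
#include <stdlib.h>

/* Generic sketch of the pattern V701 asks for: assign realloc() to a
   temporary pointer so the original buffer is not leaked on failure. */
int grow_buffer(int **items, size_t new_count)
{
    int *tmp = realloc(*items, new_count * sizeof(**items));

    if (!tmp)        /* *items is still valid and can be freed later */
        return 0;

    *items = tmp;
    return 1;
}
```

On failure the caller still owns the old buffer, so it can report the error and release the memory instead of leaking it.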
https://www.viva64.com/en/b/0523/
csEngineTools Class Reference

This is a class with static helper functions for working on engine data. More...

#include <cstool/enginetools.h>

Detailed Description

This is a class with static helper functions for working on engine data.

Definition at line 86 of file enginetools.h.

Member Function Documentation

Given a screen space coordinate (with (0,0) being the top-left corner of the screen), this will try to find the closest mesh there.

Returns: an instance of csScreenTargetResult with the mesh that was possibly hit and an intersection point.

Given two positions in the world, try to find the shortest distance (using portals if needed) between them and return the final squared distance.

Note! This function will ignore all portals if the source and destination sectors are the same, even if there might be a shorter path between the two positions through some space-warping portal. An exception to this is if the distance is greater than the max distance; in that case this function will try out portals in the current sector to see if there is a shorter path anyway.

Note that this routine will ignore visibility. It will simply calculate the distance between the two points through some portal path. However, this function will check whether the portal is oriented towards the source point (i.e. it doesn't ignore visibility with regards to backface culling).

Note that this function (by default) only considers the center point of a portal when calculating the distance. This might skew results with very big portals. Set the 'accurate' parameter to true if you don't want this.

This function will correctly account for space-warping portals.

Returns: an instance of csShortestDistanceResult which contains the squared distance between the two points, or a negative number if the distance goes beyond the maximum radius. It also contains a space-warping-corrected direction from the source point to the destination point.

The documentation for this class was generated from the following file:

- cstool/enginetools.h

Generated for Crystal Space 2.0 by doxygen 1.6.1
http://www.crystalspace3d.org/docs/online/api-2.0/classcsEngineTools.html
Write a Java program to get the last modified date and time of any file:

In this tutorial, we will learn how to print the last modified date and time of a file in Java. For this, we will first create one 'File' object by passing the file's location to the constructor. If you want to get the full path of a file, open a terminal and drag-drop the file onto the terminal; it will print its full path.

Let's take a look at the program:

import java.io.File;
import java.util.Date;

public class Main {
    public static void main(String[] args) {
        File file = new File("E:/song.mp3");
        long lastModified = file.lastModified();
        System.out.println(new Date(lastModified));
    }
}

Output:

Mon Oct 12 19:18:38 IST 2017

Explanation:

- First, we create one 'File' object by passing the file location to its constructor.
- Then, using the 'lastModified()' method, we get the last modified time. It returns the time in milliseconds since the epoch (00:00:00 GMT, January 1, 1970). The return value is a long. If the file is not available, it returns 0L. If any IO error occurs, 0L is also returned.
- If your file's last modified time is before the epoch, it will return a negative value.
- Create one 'Date' object by passing the last modified time to it.
- Print out the 'Date' object.

Similar tutorials:

- Java BufferedReader and FileReader example read text file
- Java program to copy file
- Java program to read contents of a file using FileReader
- Java Program to create a temporary file in different locations
- Java example to filter files in a directory using FilenameFilter
- Read json content from a file using GSON in Java
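As a side note on the 0L case mentioned above, here is a small sketch (the file name is made up and assumed not to exist) showing that lastModified() returns 0L for a missing file, which lets you distinguish "file missing" from a genuine timestamp before constructing a Date:

```java
import java.io.File;

public class LastModifiedDemo {
    public static void main(String[] args) {
        // Hypothetical name: a file that should not exist in the working directory
        File missing = new File("definitely-no-such-file-987654.tmp");

        // lastModified() returns 0L when the file does not exist
        System.out.println(missing.lastModified()); // 0 if the file is absent
    }
}
```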
https://www.codevscolor.com/java-program-get-last-modified-date-time-file/
Introduction to Python Turtle

Turtle graphics is a remarkable and fun way to introduce programming and computers to kids and others with little prior interest. Shapes, figures and other pictures are produced on a virtual canvas using the Python turtle module. The turtle created in the display window (a canvas of sorts) is effectively a virtual pen that draws as it moves. The turtle's forward, backward, left, right and other functions move it along the canvas, drawing patterns along its path. The Python turtle() function is used to create shapes and patterns like this.

Syntax:

from turtle import *

Notes on the turtle module:

- Use of Python turtle needs an import of the turtle module from the Python standard library.
- Methods of the classes Screen and Turtle are also provided via a procedure-oriented interface. For understandability, the functions have the same names as the corresponding methods. The object-oriented interface is to be used to create multiple turtles on a screen.

Methods of Python Turtle

Common methods used for Python turtle are:

- Turtle(): Used to create and return a new turtle object.
- forward(value): The turtle moves forward by the specified value.
- backward(value): The turtle moves backward by the specified value.
- right(angle): Clockwise turn of the turtle.
- left(angle): Counter-clockwise turn of the turtle.
- penup(): The turtle pen is picked up.
- pendown(): The turtle pen is put down.
- up(): Same as penup().
- down(): Same as pendown().
- color(color name): The turtle pen's color is changed.
- fillcolor(color name): The color used to fill a particular shape is changed.
- heading(): The current heading is returned.
- position(): The current position is returned.
- goto(x, y): Moves the turtle's position to coordinates x, y.
- end_fill(): The current fill color is filled in after closing the polygon.
- begin_fill(): The starting point is remembered for a filled polygon.
- dot(): A dot is left at the current position.
- stamp(): An impression of the turtle shape is left at the current position.
- shape(): Should be 'turtle', 'classic', 'arrow' or 'circle'.

Examples of Python Turtle

The turtle module is part of the Python standard library, so it does not need to be installed externally.

- The turtle module is imported.
- A turtle to control is created.
- Methods of the turtle are used to draw or play around.
- Finish the drawing with turtle.done().

Initially, the turtle is imported as:

import turtle

or

from turtle import *

A new drawing board or window screen is to be created, along with a turtle to draw with. To do that, let's give commands such as the following, calling the window window_ and our turtle aaa.

window_ = turtle.Screen()
window_.bgcolor("light green")
window_.title("Turtle")
aaa = turtle.Turtle()

Now that a window to draw on is created, we need to make the turtle draw in it. Giving commands via the turtle methods helps us do that. Let's say we want to move the turtle forward 200 pixels: the turtle moves 200 pixels in the direction it is heading, drawing a line of length 200 as it goes. The command will be:

aaa.forward(200)

Now that we have given the command, the turtle moves forward by 200 pixels. We should complete the code by using the done() function.
turtle.done()

Example #1

Code:

import turtle

polygon_ = turtle.Turtle()

for i in range(6):
    polygon_.forward(100)
    polygon_.right(300)

turtle.done()

Output:

Example #2

Code:

import turtle

star = turtle.Turtle()

num_of_sides = 5
length_of_side = 50
each_angle = 720.0 / num_of_sides

for i in range(num_of_sides):
    star.forward(length_of_side)
    star.right(each_angle)

turtle.done()

Output:

Example #3

Code:

from turtle import *

colors = ['orange', 'red', 'pink', 'yellow', 'blue', 'green']

for x in range(360):
    pencolor(colors[x % 6])
    width(x / 5 + 1)
    forward(x)
    left(20)

Output:

Example #4

Code:

from turtle import *

penup()
for a in range(40, -1, -1):
    stamp()
    left(a)
    forward(20)

Output:

Conclusion

Using basic, readable and understandable commands, anyone can create a window-like canvas and draw whatever they want just by giving the parameters for the turtle to move in the desired direction. All the functions are instructions for the Python program to follow. The results can be beautiful patterns and designs.
https://www.educba.com/python-turtle/
CC-MAIN-2020-24
refinedweb
742
58.58
Get all the Available Performance!

Ensuring that your UI5 apps run fast is an important topic in application development. In order to support you in this task, we would like to let you know about the performance-enhancing capabilities provided by the UI5 framework. This blog post points you to some updated as well as newly added performance-related documentation inside the UI5 developer guide. You will learn about the most frequently encountered performance issues and how to address them.

While you may have come across the occasional UI5 performance blog post before, the framework has since been changed in many respects, and a number of new options are now available. Even if you're already aware of several of the topics mentioned here, our comprehensive performance checklist will undoubtedly help you to review and speed up your apps.

1. Use the UI5 Support Assistant

First of all, you can use the UI5 Support Assistant to check your application for known issues. It contains a set of rules to detect common performance issues and provides useful information on how to address them.

[Screenshot]

Async Everything!

To begin with, keep in mind that enabling the asynchronous loading of modules or views requires extensive testing as well as cooperation on the application side for a stable and fully working application. Is your application ready for asynchronous loading?

2. Enable Asynchronous Loading in the Bootstrap

Performance issues are often caused by an old bootstrap configuration or a wrong usage of the activated features. Here's an example of what a bootstrap should look like for an up-to-date UI5 app:

<script id="sap-ui-bootstrap"
        src="/resources/sap-ui-core.js"
        data-sap-ui-async="true"
        data-sap-ui-onInit="module:my/app/Main">
</script>
The most important setting is data-sap-ui-async="true". It enables the runtime to load all the modules and preload files for declared libraries asynchronously if an asynchronous API is used. Setting async=true leverages the browser's capabilities to execute multiple requests in parallel without blocking the UI. The attribute data-sap-ui-onInit defines the module my.app.Main, which will be loaded initially.

Note: Configuration of the bootstrap can only be done by standalone applications, i.e. when the bootstrap is under the control of the developer. The bootstrap of applications started from a SAP Fiori launchpad is managed by the launchpad.

3. Ensure that Root View and Routing are Configured to Load Asynchronously

Check the rootView configuration in the application's manifest.json for an async=true parameter. This allows the root view to be loaded asynchronously. To configure the targets for asynchronous loading, please also check the routing configuration for the async=true parameter:

"sap.ui5": {
  "rootView": {
    ...
    "async": true
  },
  "routing": {
    "config": {
      ...
      "async": true
    }
  },
  ...
}

4. Make Use of Asynchronous Module Loading

If modules follow the Asynchronous Module Definition (AMD) standard and the bootstrap flag data-sap-ui-async is set to true, custom scripts and other modules can also be loaded asynchronously when a preload is not available. It will help you in the future to enable asynchronous loading of individual modules combined with the usage of HTTP/2 or AMD-based module bundlers. It also ensures proper dependency tracking between modules.

But it isn't enough to write AMD modules. You also need to prevent access to UI5 classes via global names. For instance, do not use global namespaces like "new sap.m.Button()", but instead require the Button and call its constructor via the local AMD reference. See the API Reference for sap.ui.define in the UI5 developer guide.

Always avoid usages of sap.ui.requireSync and jQuery.sap.require! To enable modules to load asynchronously, use sap.ui.define to create modules (e.g. controllers or components) or sap.ui.require in other cases. Stick to the Best Practices for Loading Modules.

5. Use manifest.json Instead of the Bootstrap to Define Dependencies

Please use the manifest.json application descriptor file to declare dependencies. This has several advantages, such as reusability and lazy loading. Make sure that you don't load too many dependencies. In most apps it's enough to load the libraries sap.ui.core and sap.m by default and add additional libraries only when needed.

If you want to make additional libraries generally known in your app without directly loading them during application startup, you can add them to the dependency declaration in the manifest.json file with the lazy loading option. This makes sure that the libraries are only loaded when needed:

"sap.ui5": {
  "dependencies": {
    "minUI5Version": "1.70.0",
    "libs": {
      "sap.ui.core": {},
      "sap.m": {},
      "sap.ui.layout": {
        "lazy": true
      }
    }
  },
  ...
}

If a library preload contains reuse components and this preload is configured to be loaded lazily (via "lazy": true in the dependencies of the manifest.json), the library is not available when the component is created. In this case you need to use the following method:
Note: Make sure to declare the required modules in sap.ui.define or sap.ui.require to ensure that they get loaded asynchronously. 7. Migrate Synchronous Variants of UI5 Factories to Asynchronous Variants Check if the application is using synchronous UI5 factories. Many asynchronous variants are available, e.g. for Components, Resource Bundles, Controllers, Views and Fragments. Find out more about Legacy Factories Replacement. More Preload Bundles mean Fewer Requests Please check that library and component preload bundles are enabled and asynchronously loaded. 8. Ensure that Library Preloads are Enabled If library preloads are disabled or not found, every module is loaded separately by an own request. Depending on the server and network infrastructure, this can take a lot of time. Except for debugging reasons, it is always recommended to make sure library preloads are active. Fortunately, the library preloads are active by default if the files are present. In some cases, it may happen that the preloads are disabled: - The data-sap-ui-preload bootstrap attribute is empty or set to an invalid value. This attribute is optional and only necessary if the loading behavior (sync/async) needs to be overwritten manually. - Debug sources are enabled in the bootstrap (data-sap-ui-debug=true) or URL (sap-ui-debug=true). 9. Ensure that Application Resources are Loaded as Component Preload Application modules (e.g. components, controllers, views or resource bundles) should be loaded asynchronously via the component preload file. Please check (e.g. with the Google Chrome Developer Tools) if a component preload (Component-preload.js) is missing. If the application is not configured to load modules asynchronously, required application files may be loaded synchronously. Note: If a component preload does not exist yet, the bundle needs to be created. For example, you may use the UI5 Tooling. 
Your OData Model Affects Your Performance Consider switching to the OData V4 model, which has an improved performance over the OData V2 model. Visit the OData V4 model documentation and check if all the features you require are available; for a quick start, work through the OData V4 tutorial. Otherwise, if you have to use the OData V2 model, you should apply the suggestions given in the following sections. 10. Use the OData V2 Model Preload It is possible to load the OData V2 metadata and annotations on application start. To enable the model preload, the models need to be configured in the manifest.json with the option “preload”: true. "sap.ui5": { ... "models": { "mymodel": { "preload": true, ... Please also see the documentation for Manifest Model Preload. 11. Use OData V2 Metadata Caching To ensure fast loading times for applications from a SAP Fiori launchpad, the OData metadata is cached on the web browser using cache tokens. The tokens are added to the URL of metadata requests with the parameter sap-context-token. Please check via the developer tools of your browser (e.g. Google Chrome Developer Tools) if the token has been appended to the request URL. [Screenshot] Note: This feature is only supported by OData V2 for SAP Fiori applications. Further information can be found here: Cache Buster for OData Metadata of SAP Fiori Apps Scheduling Update of OData Metadata Caching Don’t Stop Accelerating! There are still more possibilities to accelerate an application: 12. Use the Content Delivery Network (CDN) In order to ensure that all static SAPUI5 resources are served with the lowest possible latency in SAP Cloud Platform scenarios you can load the resources from the Content Delivery Network (CDN) cached by AKAMAI. Especially when running your app in the cloud, you benefit from the global distribution of servers. Note: This only applies to SAP Cloud Platform scenarios! For other scenarios it is possible to configure a custom CDN of choice as an external location. 
Please have a look at the documentation: Variant for Bootstrapping from Content Delivery Network 13. Check the Network Requests If you assume slow network requests, check them with your browser’s developer tools (e.g. the Google Chrome Developer Tools). See if any requests are still pending, and check the response times of finished ones. Possible reasons for slow network requests may be: - Slow database service (e.g. OData) - Slow webserver or CDN issues (e.g. serving of static resources) - Slow network infrastructure (e.g. mobile network) - The h2 protocol is not supported (only http/1.1). Ideally, the h2 protocol should be supported by the webserver. Note: To determine the minimum required bandwidth for UI5-based applications, see the SAP Note on Front-End Network Bandwidth sizing. 14. Check Lists and Tables The performance limits of a browser are reached differently depending on the used browser, operating system and hardware. Therefore, it is important to be mindful about the amount of controls and data bindings. This applies especially to lists and their variants (e.g. sap.m.Table or sap.ui.table.Table). If a table needs to display more than 100 rows, please use sap.ui.table.Table instead of sap.m.Table. The reason for this is that the sap.m.Table keeps every loaded row in the memory, even if it’s not visible after scrolling or growing. To choose the right table variant for your requirements, see the Table Feature Overview and check the Performance of Lists and Tables. 15. Start Rendering while Waiting for the Response of Network Requests Please ensure the application does not block the rendering while waiting for back-end requests to respond. Waiting for data before rendering anything is not the favored user experience! We recommend to load data asynchronously and already render the page while the requests are pending. Mostly, the requests won’t fail, and if they do, it is better to show an error or to navigate to an error page. 16. 
Cache XML Preprocessor Results

If an XML Preprocessor is used, try our experimental XML View Cache. If configured in the XML View and with a properly implemented key provider (for invalidation), the XML View Cache is able to cache already processed XML View Preprocessor results.

Conclusion

If your application uses the latest UI5 version, is well configured, uses the latest asynchronous API features, and has a performant back end, it should be able to run really fast. SAPUI5 aims to stay backward compatible, enabling existing apps to continue working in newer framework versions. Therefore, many recent features are optional and deactivated by default. We advise you to always stay up to date and make use of the latest performance-related features, even if this means that your applications need to be updated occasionally. Keep in mind that one of the key aspects of superior performance is asynchronous loading, which has a huge impact on user experience. You can find advice on performance optimization as mentioned in this post in the UI5 developer guide under: Performance Checklist

Further Information

UI5ers Buzz #38: Modularization of the SAPUI5 Core
UI5ers Buzz #41: Best practices for async loading in UI5
UI5ers Buzz #45: UI5 Migration Tool

Hey there, very interesting, detailed and helpful blog post! Quick question regarding the point "Use manifest.json Instead of the Bootstrap to Define Dependencies": does this refer to the "data-sap-ui-libs" property within the bootstrap? If so, does this mean you can actually leave this parameter out of the bootstrap when properly defining the dependencies within the manifest? Best Regards, Marco

Hi Marco, yes, as long as the index.html doesn't reference modules (also transitively) of which the library hasn't been preloaded yet. Here I described more: stackoverflow.com/a/56136611. It is important to fetch $metadata as early as possible.
For that, removing data-sap-ui-libs but maintaining sap.ui5/dependencies/libs only can be helpful.

Hey Boghyon Hoffmann, thank you for your thorough reply! 🙂 I checked out your referenced Stack Overflow post. It explains perfectly what I was looking/asking for, thank you for that. Best Regards, Marco

Hi Sven, do you have any specific tips regarding SAP Fiori elements based apps? Or is everything just fine as long as I use the SAP templates? Best regards, Gregor

Hi Gregor, most of the topics mentioned by Sven are covered by SAP Fiori elements. The views are created async and the routing is async. The modules follow the AMD syntax, and the removal of global names and jquery.sap is ongoing (major parts are cleaned up already). SAP Fiori elements uses manifest.json, and if you use the WebIDE wizard or the application generator of the new SAP Fiori tools, the correct manifest settings for the dependencies and the model are set. And sure, SAP Fiori elements makes use of the XML view cache – so the result of the templating is stored there. Of course there are areas where Fiori elements can and will improve – e.g. the parallelization of rendering and network requests, but that's on the list.

So far so good, but there are things that application developers using SAP Fiori elements have to take care of, especially if they are extending the application using their own JavaScript code. In this case most of the points above have to be taken into account. And some things, such as running the report to create the cache buster tokens that are used for the metadata caching, or using a CDN for static resources, have to be set up by the system administrator.

There is one minor thing we found useful to set in the manifest.json – it's the model setting to avoid the initial loading of the metadata for all value helps. With this setting the metadata is loaded lazily if the property is used in the view. It's especially useful for services with large metadata documents.
This parameter should also be set by the tools mentioned above. Best regards, Bernhard

Hi Sven, thank you for the detailed information. On top of the above list, I am using the HTML5 Application Repository to deploy HTML5 module/application content to the SAP Cloud Platform Cloud Foundry environment. Does the HTML5 Application Repository help in any way in enhancing the performance of the UI5 application? Best Regards, Ravindra

Hi Ravindra, thanks for the comment. The HTML5 Application Repository is used as a storage for apps. It is able to serve static resources, but I am not sure if it is able to choose the best server location like a proper CDN does. From a performance perspective I would say this is not that relevant, but I may be wrong. In SAP Cloud Platform Cloud Foundry environments it may reduce network latency depending on the used location. Best regards, Sven

One drawback is that currently neither SAP Cloud Platform nor SAP HANA XSA supports HTTP/2 (h2). Maybe the UI5 team can add some pressure here.

Even adding the below tags already improved the speed a lot! Thank you
Your source for hot information on Microsoft SharePoint Portal Server and Windows SharePoint Services:

Have you tested it with bitwise added enums as well? :) Like the RegexOptions: Enum1 | Enum2. My guess is it would work when passed as "Enum1 Enum2", but I'm not sure. Might be interesting to test since you're dealing with PowerShell and enums. (I haven't tried PowerShell yet.)

Hi Serge, thanks for this. I am trying to create a view on a SharePoint list using PowerShell. One of the parameters is the type of the view, and in C# it should be of type Microsoft.SharePoint.SPViewCollection.SPViewType. So that is an enum within a class. This is what I've got:

PS> $asm = [reflection.assembly]::loadwithpartialname("microsoft.sharepoint")
... stuff to initialise $splist and $spfields
PS> $splist.Views.Add("New view", $viewFields, '<OrderBy><FieldRef Name="ID" /></OrderBy>', 100, 1, 1, [Microsoft.SharePoint.SPViewCollection.SPViewType] "Html", 0)

The last line gives the error "Unable to find type [Microsoft.SharePoint.SPViewCollection.SPViewType]: make sure that the assembly containing this type is loaded."

If I try to fetch the type from the assembly like this:

PS> $asm.GetType("Microsoft.SharePoint.SPViewCollection.SPViewType", 1, 1)

Exception [...] "Could not load type 'Microsoft.SharePoint.SPViewCollection.SPViewType' from assembly 'Microsoft.SharePoint, Version=12.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c'."

In contrast, the following works:

PS> $asm.GetType("Microsoft.SharePoint.SPViewCollection", 1, 1)

I can't get this to work, perhaps you have some tips? Thanks. Alex

Found it: nested types should be referenced using <namespace>.<enclosing type>+<nested type>, i.e. in my case [Microsoft.SharePoint.SPViewCollection+SPViewType]. Cheers,

Can you please post your final code for this piece, including the value of your variable $viewfields? I am trying to create a new Calendar View inside of a Calendar List and have not been able to successfully figure out the syntax. James
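The bitwise-combined enums raised in the first comment can be illustrated with a cross-language analogy in plain Python (an illustration only: Python's enum.Flag plays the role of a .NET [Flags] enum, the RegexOptions members below are invented stand-ins, and parse_flags is a hypothetical helper mimicking how .NET's Enum.Parse accepts a comma-separated list of names):

```python
from enum import Flag, auto
from functools import reduce

class RegexOptions(Flag):
    # Hypothetical flags standing in for .NET's RegexOptions values.
    NONE = 0
    IGNORECASE = auto()
    MULTILINE = auto()
    COMPILED = auto()

def parse_flags(enum_cls, text):
    """Combine flag members named in a comma-separated string,
    similar in spirit to .NET's Enum.Parse handling of "A, B"."""
    names = [part.strip().upper() for part in text.split(",")]
    return reduce(lambda acc, name: acc | enum_cls[name], names, enum_cls.NONE)

combined = parse_flags(RegexOptions, "IgnoreCase, Multiline")
print(combined == (RegexOptions.IGNORECASE | RegexOptions.MULTILINE))  # True
```

The result of parsing two names is the bitwise OR of the corresponding members, which is exactly what a flags-style enum is designed for.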
Introduction to Static Sites, Static Site Generators, and Jekyll

See why Jekyll might be right for you.

Modern web development (by "modern" we mean what's been taking place in the last ~5 years) seems to be heavily into dynamic content. As the internet progressed from simple HTML pages to dynamic web apps, the benefits of dynamic web technologies became ever so clear. Nowadays, web apps like YouTube, Google Maps, and Figma are gradually bridging the gap between web and native software (and technologies like WebAssembly are playing a huge part in this process). Still, we shouldn't disregard static websites. Even though they can't boast rich functionality, their advantages lie in other areas. In this article, we'll take a closer look at a static site generator called Jekyll and analyze the benefits that you'll get when using any static website.

You may also like: Azure DevOps Build Pipeline for Jekyll.

The Bigger Picture: Static and Dynamic Websites

Before diving into Jekyll directly, let's take a step back and see the bigger picture. Websites can be divided into two groups: dynamic and static. What are the differences between them? Even though static websites are called, well, static, it doesn't mean that they don't provide any dynamic functionality. The main difference between these two types lies in the website's ability to change its content (i.e. the files it stores).

- Dynamic sites can change files dynamically via the server. Every time the server receives a request, it (re-)generates the files depending on the user's intent. These files are then served to the user.
- Static sites, on the other hand, can't do that. They serve static (i.e. pre-built) unchanged files that include .css styles, .html pages, and .js scripts. In this scenario, the developer doesn't need a server (e.g.
Node.js) to handle the website's business logic — they only need a web hosting service.

Static Websites' Advantages

The advantages that dynamic websites can offer are self-evident: their functionality is practically limitless (well, it is limited by the amount of JavaScript memory leaks the developer may accidentally introduce ¯\_(ツ)_/¯). But what are the reasons that you may choose static websites instead?

Speed

The most obvious advantage is speed — as static websites aren't coupled with databases and content generation, their performance increases dramatically. In fact, databases often turn out to be the website's biggest bottleneck: databases are easy to use, but they're hard to use properly and efficiently, which often leads to blaming the wrong technology for the website's subpar speed (e.g. "Argh! Why did I choose Python for this project — it's so slow!").

In static sites' case, therefore, the user experience becomes much more streamlined and enjoyable: the Time to First Byte (the time it takes for the browser to receive the first byte upon the request) is significantly lower, allowing the users to get the content they want faster — after all, the website isn't executing any database queries or HTML templating. More skeptical developers might think: "Can't the users wait just a few seconds?" Well, let's look at the data that Google has provided in their "Why Performance Matters" post: 53% of mobile site visits were abandoned if a page took longer than 3 seconds to load.

Security

Another important factor is security. Once again, it's heavily tied to the lack of databases — SQL injection is still one of the most common cyber attacks. In essence, SQL injection utilizes the website's ability to accept user input — hackers can use various input fields to execute malicious SQL commands, changing or deleting the server's data. Naturally, static websites are safe from this threat — no databases, no problems.
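To make the SQL injection threat mentioned above concrete, here is a minimal, self-contained sketch using Python's built-in sqlite3 module and an invented users table: concatenating user input into the SQL text lets a crafted payload change the query's meaning, while a parameterized query treats the same payload as an ordinary string.

```python
import sqlite3

# Hypothetical in-memory database with a single users table.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
db.executemany("INSERT INTO users VALUES (?, ?)",
               [("alice", 1), ("bob", 0)])

malicious = "nobody' OR '1'='1"  # classic injection payload

# Vulnerable: the payload is spliced into the SQL text, so the
# OR '1'='1' clause matches every row and dumps the whole table.
leaked = db.execute(
    "SELECT name FROM users WHERE name = '" + malicious + "'").fetchall()
print(len(leaked))  # 2

# Safe: a parameterized query binds the payload as a plain value.
safe = db.execute(
    "SELECT name FROM users WHERE name = ?", (malicious,)).fetchall()
print(len(safe))  # 0 -- no user is literally named that
```

Static sites sidestep this entire class of problems simply because there is no query to inject into.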
In the sections above, we mentioned that developers working with static websites don't have to deal with managing a web server. An essential part of this managing process is installing security updates. WordPress developers are especially familiar with the headache-inducing duty of keeping the website's security up-to-date — the platform's immense popularity means that it's a prime target for hackers, so website security becomes a top priority. Conversely, static websites are secure because they don't have any functionality that can be exploited.

Hosting and Development

Both hosting and development of static websites are seamless. When it comes to hosting, the hosting provider is only expected to store static files and doesn't have to support a particular framework. Of course, it's always great if the hosting provider offers a content distribution network functionality; this will make your static website perform even faster.

As of October 2019, more and more hosting providers (especially those active in the cloud computing industry) are offering to host not only static websites, but also static site generators — this means that the entire development process becomes easier as well, allowing you to implement continuous deployment. Some of these providers include:

- Amazon Web Services (Amazon S3, optimal for file hosting).
- Google Cloud Platform.
- DigitalOcean.
- Netlify.

In terms of development, continuous deployment (which is a strategy for software releases wherein any code commit that passes the automated testing phase is automatically released into the production environment) is a major time-saver. It simplifies the development process, making it much less of a hassle. Therefore, the typical development steps/stages are:

- Connect the repository.
- Add build settings.
- Deploy the website.

A Few Words about GitHub Pages

This is where GitHub truly shines. As it turns out, we can use this platform to host static websites!
GitHub Pages is specifically designed to transform GitHub repositories into static websites, allowing you to showcase anything you want. While we're on the topic of GitHub Pages, we should also reiterate the limitations this service has — we'll quote our guide to GitHub Pages:

You shouldn't expect to run a large web project without any hiccups, as GitHub enforces usage limits, which are:

- Source repositories have a recommended limit of 1 GB.
- The maximum size of the published sites cannot exceed 1 GB.
- Sites have a soft bandwidth limit of 100 GB per month.
- Sites have a soft limit of 10 builds per hour.

Furthermore, GitHub Pages users are prohibited from hosting commercial projects like online businesses and e-commerce platforms. You may notice that the limitations above are referred to as soft limits — this means that, upon exceeding the limit, the user will get a polite email from GitHub requesting to slow things down. So what happens when site usage pushes past these limits? GitHub suggests a few strategies to solve this problem:

- Setting up a separate content distribution network (CDN).
- Using GitHub releases.
- Switching to a different hosting service.

Where Can Static Websites Truly Shine?

The (somewhat limited) functionality of static websites shouldn't prevent you from making your projects come to life. Generally, static websites excel at helping you show something to the world without an overcomplicated setup process:

- Portfolios and static websites are the ideal combination — they're only required to whet the visitor's appetite and, preferably, lead to the projects' separate websites. Some notable examples: Lucas Gatsas, Chester How.
- Personal blogs are, essentially, just text with images — just what a static website needs. Furthermore, static site generators like Jekyll offer a plethora of blog themes.
Some notable examples: Auth0 Blog, Mozilla's Release Management Blog.
- Informational pages can be viewed as "really long blog posts." Some notable examples: Twitch Developer Documentation, Spotify Developer Documentation, htmlreference.io, Markdown Guide, plainlanguage.gov.
- Landing pages are another great option — a landing page is usually designed to showcase the product's key features, topping it off with a call-to-action ("Enjoying our articles? You're not enjoying them enough — enjoy more!"). Some notable examples: Ionic Framework, TwitchCon, Sketch.

A Few (Kind) Words about Jekyll

At this point in the article, you're probably itching to create a static site yourself. You're opening the StaticGen homepage and seeing that there are 262 static site generators. Which one should you use? Our experience tells us that Jekyll is a great option. In the sections below we'll explore its awesome functionality in greater detail, but let's first take a closer look at Jekyll itself. Its Quickstart page reads:

Jekyll is a simple, extendable, static site generator. You give it text written in your favorite markup language and it churns through layouts to create a static website. Throughout that process you can tweak how you want the site URLs to look, what data gets displayed in the layout, and more.

Liquid

The Liquid feature introduces real programming logic to HTML, which is a "not-a-real-programming-language" language. Liquid is a templating language that extends static sites' functionality via objects, tags, and filters.

Objects act as pointers to specific content, signifying that this content should be outputted in the given object's place. They're denoted via double curly braces:

{{ site.title }}

site refers to the website in general, while title refers to the website's name that we set in the config file. The code above, therefore, is a variable that will output the website's name in the page we specify.
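The substitution step that objects perform can be sketched in a few lines of plain Python (an illustration of the idea only — Jekyll's actual Liquid engine is written in Ruby and does far more):

```python
import re

def render(template, context):
    """Replace each {{ dotted.name }} object with its value from context."""
    def lookup(match):
        value = context
        for part in match.group(1).split("."):  # walk e.g. site -> title
            value = value[part]
        return str(value)
    return re.sub(r"\{\{\s*([\w.]+)\s*\}\}", lookup, template)

context = {"site": {"title": "My Static Site"}}
print(render("<h1>{{ site.title }}</h1>", context))
# <h1>My Static Site</h1>
```

The key point is that this substitution happens once, at build time: the generated HTML that reaches the visitor contains only the resolved text, never the template markers.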
Tags deal with the page's logic and the template's control flow and are denoted by curly braces and percent signs. Using tags, you can highlight a code snippet:

{% highlight python %}
def greet(name):
    print(f"Hello, {name}!")
{% endhighlight %}

You can also create short links to other pages:

{% link _posts/2019-10-15-dzone-article-guidelines.md %}

And you can even implement if-logic!

{% if page.show_sidebar %}
  <div class="sidebar">
    sidebar content
  </div>
{% endif %}

The code above will output the sidebar if page.show_sidebar is true.

Filters can modify an object's output. They're placed within the objects and are denoted by a vertical slash (|). Filters can convert dates to strings...

{{ site.time | date_to_string }}
23 Oct 2019

... normalize whitespace (i.e. replace multiple whitespace instances with a single one)...

{{ "a \n b" | normalize_whitespace }}

... count the number of words on a page...

{{ page.content | number_of_words }}
1398

... sort an array (e.g. by order)...

{{ site.pages | sort: "title", "last" }}

... and much more.

Front Matter and Layouts

The front matter serves as the page's overseer, storing various variables you'd like to use in objects. It's denoted by triple-dashed lines at the top of a file:

---
author: "Jane"
---

author is now a variable. We prepend it with page. and get "Jane" as the output:

{{ page.author }}

An important feature of Jekyll is layouts, which are templates that you can utilize to format/structure your content with ease. Let's say you have a personal blog. The page type that you'd have to work with most frequently would be blog posts.
Without layouts, you'd have to organize and style each new blog post manually; conversely, you can create a dedicated page layout for blog posts...

<div class="post">
  <div class="post-info">
    <span>Written by</span>
    {% if page.author %}
      {{ page.author }}
    {% else %}
      {{ site.author.name }}
    {% endif %}
    {% if page.date %}
      <br>
      <span>on </span><time datetime="{{ page.date }}">{{ page.date | date: "%B %d, %Y" }}</time>
    {% endif %}
  </div>

  <h1 class="post-title">{{ page.title }}</h1>
  <div class="post-line"></div>

  {{ content }}
</div>

<div class="pagination">
  {% if page.next.url %}
    <a href="{{ page.next.url | prepend: site.baseurl }}" class="left arrow">←</a>
  {% endif %}
  {% if page.previous.url %}
    <a href="{{ page.previous.url | prepend: site.baseurl }}" class="right arrow">→</a>
  {% endif %}
  <a href="#" class="top">Top</a>
</div>

... and include the reference to this layout in each post:

---
layout: post
title: "Introduction to WebAssembly: Is This Magic?"
author: "Jane"
---

The real power of layouts is tied to editing code. In raw, static websites (i.e. those built without a static site generator), you'd typically copy and paste code blocks whose functionality you'd like to include in multiple pages — a good example would be a navigation bar. In the end, you end up with dozens — if not hundreds — of copies of a navbar code block. The problem arises when you want to edit this navbar. In order for the changes to be consistent across the entire website, you'd have to edit each copy of this code block.

Layouts were designed as a solution to this problem. To apply the change to the entire website, you only need to edit the original layout file.

Conclusion

The beauty of Jekyll comes to life when you start combining features like tags, objects, and filters. This makes the development workflow much easier:

- Jekyll-powered code is less verbose than raw HTML code.
- The build process is streamlined — as you make changes to the code, Jekyll automatically regenerates the page you edited.
- Logic introduced by Liquid allows you to implement new functionality.

All in all, both static and dynamic websites are excellent tools of web development. In each project, make sure to choose the right tool for the job. We hope that this article has improved your understanding of these topics. Let's recap the things we've learned:

- A static website cannot change its server's files dynamically; instead, it serves premade assets.
- Static site generators simplify the development process even further, and they also add some useful functionality.
- Hosting a static website via GitHub Pages is a great place to start — but you may need to choose a paid hosting provider later if you're planning to scale your web project.

Further Reading

Opinions expressed by DZone contributors are their own.
Apache Spark is a unified analytics engine for processing large volumes of data. It can run workloads up to 100 times faster and offers over 80 high-level operators that make it easy to build parallel apps. Spark can run on Hadoop, Apache Mesos, Kubernetes, standalone, or in the cloud, and can access data from multiple sources. This article covers the most important Apache Spark interview questions that you might face in a Spark interview. The questions have been segregated into different sections based on the various components of Apache Spark, and after going through this article you should be able to answer most of the questions asked in your next Spark interview.

Apache Spark Interview Questions

The Apache Spark interview questions have been divided into two parts:

- Apache Spark Interview Questions for Beginners
- Apache Spark Interview Questions for Experienced

Let us begin with a few basic Apache Spark interview questions!

Apache Spark Interview Questions for Beginners

1. How is Apache Spark different from MapReduce?

2. What are the important components of the Spark ecosystem?

Apache Spark has 3 main categories that comprise its ecosystem. Those are:

- Language support: Spark can integrate with different languages to build applications and perform analytics. These languages are Java, Python, Scala, and R.
- Core components: Spark supports 5 main core components. These are Spark Core, Spark SQL, Spark Streaming, Spark MLlib, and GraphX.
- Cluster management: Spark can be run in 3 environments. Those are the standalone cluster, Apache Mesos, and YARN.

3. Explain how Spark runs applications with the help of its architecture.

This is one of the most frequently asked Spark interview questions, and the interviewer will expect you to give a thorough answer to it. Spark applications run as independent processes that are coordinated by the SparkSession object in the driver program.
The resource manager or cluster manager assigns tasks to the worker nodes, with one task per partition. Iterative algorithms apply operations repeatedly to the data, so they can benefit from caching datasets across iterations. A task applies its unit of work to the dataset in its partition and outputs a new partition dataset. Finally, the results are sent back to the driver application or can be saved to the disk.

4. What are the different cluster managers available in Apache Spark?

- Standalone mode: By default, applications submitted to the standalone mode cluster will run in FIFO order, and each application will try to use all available nodes. You can launch a standalone cluster either manually, by starting a master and workers by hand, or use the provided launch scripts. It is also possible to run these daemons on a single machine for testing.
- Apache Mesos: Apache Mesos is an open-source project to manage computer clusters, and it can also run Hadoop applications. The advantages of deploying Spark with Mesos include dynamic partitioning between Spark and other frameworks, as well as scalable partitioning between multiple instances of Spark.
- Hadoop YARN: Apache YARN is the cluster resource manager of Hadoop 2. Spark can be run on YARN as well.
- Kubernetes: Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized applications.

5. What is the significance of Resilient Distributed Datasets in Spark?

Resilient Distributed Datasets (RDDs) are the fundamental data structure of Apache Spark, embedded in Spark Core. RDDs are immutable, fault-tolerant, distributed collections of objects that can be operated on in parallel. RDDs are split into partitions and can be executed on different nodes of a cluster. RDDs are created either by transformation of existing RDDs or by loading an external dataset from stable storage like HDFS or HBase. Here is how the architecture of an RDD looks:

[Diagram]

6. What is lazy evaluation in Spark?
When Spark operates on any dataset, it remembers the instructions. When a transformation such as map() is called on an RDD, the operation is not performed instantly. Transformations in Spark are not evaluated until you perform an action, which aids in optimizing the overall data processing workflow. This is known as lazy evaluation.

7. What makes Spark good at low-latency workloads like graph processing and machine learning?

Apache Spark stores data in memory for faster processing and for building machine learning models. Machine learning algorithms require multiple iterations and different conceptual steps to create an optimal model. Graph algorithms traverse through all the nodes and edges to generate a graph. These low-latency workloads that need multiple iterations benefit from in-memory processing, which leads to increased performance.

8. How can you trigger automatic clean-ups in Spark to handle accumulated metadata?

To trigger the clean-ups, you need to set the parameter spark.cleaner.ttl.

9. How can you connect Spark to Apache Mesos?

There are a total of 4 steps that can help you connect Spark to Apache Mesos:

- Configure the Spark driver program to connect with Apache Mesos
- Put the Spark binary package in a location accessible by Mesos
- Install Spark in the same location as that of Apache Mesos
- Configure the spark.mesos.executor.home property to point to the location where Spark is installed

10. What is a Parquet file and what are its advantages?

Parquet is a columnar format that is supported by several data processing systems. With the Parquet file, Spark can perform both read and write operations. Some of the advantages of having a Parquet file are:

- It enables you to fetch specific columns for access
- It consumes less space
- It follows type-specific encoding
- It supports limited I/O operations

11. What is shuffling in Spark? When does it occur?
Shuffling is the process of redistributing data across partitions, which may lead to data movement across the executors. The shuffle operation is implemented differently in Spark compared to Hadoop.

12. What is the use of coalesce in Spark?

Spark uses the coalesce method to reduce the number of partitions in a DataFrame. Suppose you want to read data from a CSV file into an RDD having four partitions, and a filter operation is performed to remove all the multiples of 10 from the data. The resulting RDD may contain some empty partitions, so it makes sense to reduce the number of partitions, which can be achieved by using coalesce.

13. How can you calculate the executor memory?

Consider the following cluster information: [Table]

Here is the core identification: [Calculation]

To calculate the executor identification: [Calculation]

14. What are the various functionalities supported by Spark Core?

Spark Core is the engine for parallel and distributed processing of large data sets. The various functionalities supported by Spark Core include:

- Scheduling and monitoring jobs
- Memory management
- Fault recovery
- Task dispatching

15. How do you convert a Spark RDD into a DataFrame?

There are 2 ways to convert a Spark RDD into a DataFrame:

- Using the helper function toDF:

import com.mapr.db.spark.sql._

val df = sc.loadFromMapRDB(<table-name>)
  .where(field("first_name") === "Peter")
  .select("_id", "first_name")
  .toDF()

- Using SparkSession.createDataFrame: you can convert an RDD[Row] to a DataFrame by calling createDataFrame on a SparkSession object:

def createDataFrame(RDD, schema: StructType)

16. Explain the types of operations supported by RDDs.
RDDs support 2 types of operations:

Transformations: Transformations are operations that are performed on an RDD to create a new RDD containing the results (examples: map, filter, join, union).

Actions: Actions are operations that return a value after running a computation on an RDD (examples: reduce, first, count).

17. What is a Lineage Graph?

This is another frequently asked Spark interview question. A lineage graph is a graph of dependencies between an existing RDD and a new RDD. It means that all the dependencies between RDDs are recorded in a graph, rather than in the original data. The need for an RDD lineage graph arises when we want to compute a new RDD or recover lost data from a lost persisted RDD. Spark does not support data replication in memory, so if any data is lost, it can be rebuilt using the RDD lineage. It is also called an RDD operator graph or RDD dependency graph.

18. What do you understand about DStreams in Spark?

Discretized Streams (DStreams) are the basic abstraction provided by Spark Streaming. A DStream represents a continuous stream of data that is either an input source or a processed data stream generated by transforming the input stream.

19. Explain caching in Spark Streaming.

Caching, also known as persistence, is an optimization technique for Spark computations. Similar to RDDs, DStreams also allow developers to persist the stream's data in memory. That is, using the persist() method on a DStream will automatically persist every RDD of that DStream in memory. It helps to save interim partial results so they can be reused in subsequent stages. For input streams that receive data over the network, the default persistence level is set to replicate the data to two nodes for fault tolerance.

20. What is the need for broadcast variables in Spark?

Broadcast variables allow the programmer to keep a read-only variable cached on each machine rather than shipping a copy of it with tasks.
They can be used to give every node a copy of a large input dataset in an efficient manner. Spark distributes broadcast variables using efficient broadcast algorithms to reduce communication costs.

scala> val broadcastVar = sc.broadcast(Array(1, 2, 3))
broadcastVar: org.apache.spark.broadcast.Broadcast[Array[Int]] = Broadcast(0)

scala> broadcastVar.value
res0: Array[Int] = Array(1, 2, 3)

Apache Spark Interview Questions for Experienced

21. How do you programmatically specify a schema for a DataFrame?

A DataFrame can be created programmatically with three steps:

- Create an RDD of Rows from the original RDD.
- Create the schema represented by a StructType matching the structure of Rows in the RDD created in step 1.
- Apply the schema to the RDD of Rows via the createDataFrame method provided by SparkSession.

22. Which transformation returns a new DStream by selecting only those records of the source DStream for which the function returns true?

1. map(func)
2. transform(func)
3. filter(func)
4. count()

The correct answer is option 3, filter(func).

23. Does Apache Spark provide checkpoints?

This is one of the most frequently asked Spark interview questions where the interviewer expects a detailed answer (and not just a yes or no!). Give as detailed an answer as possible here.

Yes, Apache Spark provides an API for adding and managing checkpoints. Checkpointing is the process of making streaming applications resilient to failures. It allows you to save the data and metadata into a checkpointing directory. In case of a failure, Spark can recover this data and start from wherever it stopped. There are 2 types of data for which we can use checkpointing in Spark.

Metadata checkpointing: Metadata means the data about data. It refers to saving the metadata to fault-tolerant storage like HDFS. Metadata includes configurations, DStream operations, and incomplete batches.
Data Checkpointing: Here, we save the RDD to reliable storage because the need arises in some of the stateful transformations, where an upcoming RDD depends on the RDDs of previous batches.

24. What do you mean by sliding window operation?

Controlling the transmission of data packets between multiple computer networks is done by the sliding window. The Spark Streaming library provides windowed computations, where the transformations on RDDs are applied over a sliding window of data.

25. What are the different levels of persistence in Spark?

DISK_ONLY - Stores the RDD partitions only on disk.
MEMORY_ONLY_SER - Stores the RDD as serialized Java objects, with one byte array per partition.
MEMORY_ONLY - Stores the RDD as deserialized Java objects in the JVM. Partitions that do not fit in memory are not cached and are recomputed on the fly when needed.
MEMORY_AND_DISK_SER - Identical to MEMORY_ONLY_SER, except that partitions that do not fit in memory are stored on disk.

26. What is the difference between map and flatMap transformation in Spark Streaming?

map produces exactly one output element for each element of the source, while flatMap can produce zero or more output elements per input element - as in the word-count example, where flatMap turns each line into many words.

27. How would you compute the total count of unique words in Spark?

1. Load the text file as an RDD:
lines = sc.textFile("hdfs://Hadoop/user/test_file.txt")
2. Write a function that breaks each line into words:
def toWords(line): return line.split()
3. Run the toWords function on each element of the RDD as a flatMap transformation:
words = lines.flatMap(toWords)
4. Convert each word into a (key, value) pair:
def toTuple(word): return (word, 1)
wordTuples = words.map(toTuple)
5. Perform the reduceByKey() operation:
def add(x, y): return x + y
counts = wordTuples.reduceByKey(add)
6. Print:
counts.collect()

28. Suppose you have a huge text file. How will you check if a particular keyword exists using Spark?
lines = sc.textFile("hdfs://Hadoop/user/test_file.txt")

def isFound(line):
    if line.find("my_keyword") > -1:
        return 1
    return 0

foundBits = lines.map(isFound)
total = foundBits.reduce(lambda x, y: x + y)
if total > 0:
    print("Found")
else:
    print("Not Found")

29. What is the role of accumulators in Spark?

Accumulators are variables used for aggregating information across the executors. This information can be about the data, or API diagnostics such as how many records are corrupted or how many times a library API was called.

30. What are the different MLlib tools available in Spark?

- ML Algorithms: Classification, Regression, Clustering, and Collaborative filtering
- Featurization: Feature extraction, Transformation, Dimensionality reduction, and Selection
- Pipelines: Tools for constructing, evaluating, and tuning ML pipelines
- Persistence: Saving and loading algorithms, models, and pipelines
- Utilities: Linear algebra, statistics, data handling

31. What are the different data types supported by Spark MLlib?

Spark MLlib supports local vectors and matrices stored on a single machine, as well as distributed matrices.

Local Vector: MLlib supports two types of local vectors - dense and sparse.
Example: vector (1.0, 0.0, 3.0)
dense format: [1.0, 0.0, 3.0]
sparse format: (3, [0, 2], [1.0, 3.0])

Labeled point: A labeled point is a local vector, either dense or sparse, that is associated with a label/response.
Example: In binary classification, a label should be either 0 (negative) or 1 (positive).

Local Matrix: A local matrix has integer-typed row and column indices and double-typed values, stored on a single machine.

Distributed Matrix: A distributed matrix has long-typed row and column indices and double-typed values, and is stored in a distributed manner in one or more RDDs.

Types of the distributed matrix:
- RowMatrix
- IndexedRowMatrix
- CoordinateMatrix

32. What is a Sparse Vector?

A sparse vector is a type of local vector which is represented by an index array and a value array.
public class SparseVector extends Object implements Vector

Example: sparse1 = SparseVector(4, [1, 3], [3.0, 4.0])

where:
4 is the size of the vector,
[1, 3] are the ordered indices of the vector, and
[3.0, 4.0] are the values.

33. Describe how model creation works with MLlib and how the model is applied.

MLlib has two component types:

Transformer: A transformer reads a DataFrame and returns a new DataFrame with a specific transformation applied.

Estimator: An estimator is a machine learning algorithm that takes a DataFrame to train a model and returns the model as a transformer.

Spark MLlib lets you combine multiple transformations into a pipeline to apply complex data transformations. A typical pipeline first trains a model on a DataFrame; the model produced can then be applied to live data.

34. What are the functions of Spark SQL?

Spark SQL is Apache Spark's module for working with structured data. Spark SQL loads data from a variety of structured data sources. It queries data using SQL statements, both inside a Spark program and from external tools that connect to Spark SQL through standard database connectors (JDBC/ODBC). It provides rich integration between SQL and regular Python/Java/Scala code, including the ability to join RDDs and SQL tables and to expose custom functions in SQL.

35. How can you connect Hive to Spark SQL?

To connect Hive to Spark SQL, place the hive-site.xml file in the conf directory of Spark. Then, using the SparkSession object, you can construct a DataFrame:

result = spark.sql("select * from <hive_table>")

36. What is the role of Catalyst Optimizer in Spark SQL?

The Catalyst optimizer leverages advanced programming language features (such as Scala's pattern matching and quasiquotes) in a novel way to build an extensible query optimizer.

37. How can you manipulate structured data using domain-specific language in Spark SQL?
Structured data can be manipulated using the domain-specific language as follows. Suppose there is a DataFrame with the following information:

val df = spark.read.json("examples/src/main/resources/people.json")

// Displays the content of the DataFrame to stdout
df.show()
// +----+-------+
// | age|   name|
// +----+-------+
// |null|Michael|
// |  30|   Andy|
// |  19| Justin|
// +----+-------+

// Select only the "name" column
df.select("name").show()
// +-------+
// |   name|
// +-------+
// |Michael|
// |   Andy|
// | Justin|
// +-------+

// Select everybody, but increment the age by 1
df.select($"name", $"age" + 1).show()
// +-------+---------+
// |   name|(age + 1)|
// +-------+---------+
// |Michael|     null|
// |   Andy|       31|
// | Justin|       20|
// +-------+---------+

// Select people older than 21
df.filter($"age" > 21).show()
// +---+----+
// |age|name|
// +---+----+
// | 30|Andy|
// +---+----+

// Count people by age
df.groupBy("age").count().show()
// +----+-----+
// | age|count|
// +----+-----+
// |  19|    1|
// |null|    1|
// |  30|    1|
// +----+-----+

38. What are the different types of operators provided by the Apache GraphX library?

In such Spark interview questions, try giving an explanation too (not just the name of the operators).

Property Operator: Property operators modify the vertex or edge properties using a user-defined map function and produce a new graph.

Structural Operator: Structural operators operate on the structure of an input graph and produce a new graph.

Join Operator: Join operators add data to graphs and generate new graphs.

39. What are the analytic algorithms provided in Apache Spark GraphX?

GraphX is Apache Spark's API for graphs and graph-parallel computation. GraphX includes a set of graph algorithms to simplify analytics tasks. The algorithms are contained in the org.apache.spark.graphx.lib package and can be accessed directly as methods on Graph via GraphOps.
PageRank: PageRank is a graph-parallel computation that measures the importance of each vertex in a graph. Example: you can run PageRank to evaluate what the most important pages in Wikipedia are.

Connected Components: The connected components algorithm labels each connected component of the graph with the ID of its lowest-numbered vertex. For example, in a social network, connected components can approximate clusters.

Triangle Counting: A vertex is part of a triangle when it has two adjacent vertices with an edge between them. GraphX implements a triangle counting algorithm in the TriangleCount object that determines the number of triangles passing through each vertex, providing a measure of clustering.

40. What is the PageRank algorithm in Apache Spark GraphX?

It is a plus point if you are able to explain this Spark interview question thoroughly, along with an example!

PageRank measures the importance of each vertex in a graph, assuming an edge from u to v represents an endorsement of v's importance by u. If a Twitter user is followed by many other users, that handle will be ranked high.

The PageRank algorithm was originally developed by Larry Page and Sergey Brin to rank websites for Google, but it can be applied to measure the influence of vertices in any network graph. PageRank works by counting the number and quality of links to a page to determine a rough estimate of how important the website is. The underlying assumption is that more important websites are likely to receive more links from other websites. A typical example iteratively computes PageRank over Spark RDDs using Scala's functional programming style.

Rock Your Spark Interview

There you go - this was a collection of some of the most commonly asked conceptual and theoretical Apache Spark interview questions that you might come across while preparing for a Spark-related interview.
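To make the iterative PageRank computation described above concrete, here is a plain-Python sketch (not Spark code - Spark would distribute the same steps over RDDs). The three-page graph, the damping factor of 0.85, and the fixed iteration count are illustrative assumptions, not anything from the article:

```python
# Plain-Python sketch of iterative PageRank - the computation Spark
# parallelizes over RDDs. The graph and constants are illustrative.
links = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
}

ranks = {page: 1.0 for page in links}

for _ in range(20):  # a fixed number of iterations, for simplicity
    # Each page splits its current rank evenly among its outgoing links.
    contribs = {page: 0.0 for page in links}
    for page, neighbors in links.items():
        share = ranks[page] / len(neighbors)
        for neighbor in neighbors:
            contribs[neighbor] += share
    # Damping factor 0.85, as in the canonical formulation.
    ranks = {page: 0.15 + 0.85 * contrib for page, contrib in contribs.items()}

for page, rank in sorted(ranks.items()):
    print(page, round(rank, 4))
```

In Spark, the `contribs` step becomes a `flatMap` over a links RDD joined with the ranks RDD, and the damping step a `reduceByKey` followed by `mapValues`.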
To learn more about Apache Spark interview questions, you can also watch this video: Apache Spark Interview Questions. You can also enroll in our Apache Spark and Scala Certification Training, which will help you gain expertise working with the Big Data Hadoop Ecosystem. You will master essential skills of the Apache Spark open-source framework and the Scala programming language, including Spark Streaming, Spark SQL, machine learning programming, GraphX programming, and Spark shell scripting - highly valuable skills for answering any Apache Spark interview question a potential employer throws your way. So start learning now and get a step closer to rocking your next Spark interview!
https://www.simplilearn.com/top-apache-spark-interview-questions-and-answers-article
Simple Sample code to Backtest an OCO model

Hi All, Beginner here. This is probably a straightforward task for most of you here. I have a pandas data frame which is a time series for currency prices with an extra column indicating when the OCO order should be placed. The extra column is binary, e.g. 1 means execute OCO order and 0 means do nothing. Would anyone be willing to just write some sample code showing how I could create a strategy in backtrader where the OCO order is executed or not based on the column value (1 or 0) in my pandas data frame? And then how to plot this?? Again any help would be much appreciated!!! Thanks

Everything is written already, you just need to put it together -
Docs - Pandas data feed
Docs - Extending data feed
Docs - Orders - OCO
Docs - Plotting
And the best part for beginner
Docs - Quickstart Guide

Hi, followed the docs and this is the code I've got so far:

from __future__ import (absolute_import, division, print_function, unicode_literals)
import pandas as pd
import tkinter
import matplotlib
from datetime import date
from datetime import datetime
from datetime import timedelta
import matplotlib.pyplot as plt
from matplotlib.dates import DateFormatter
import matplotlib.dates as mdates
from matplotlib.backends.backend_pdf import PdfPages
import os
import sys
import platform
import time
import datetime as dt
import math
import numpy as np
import backtrader as bt
import argparse
import backtrader.feeds as btfeeds

class PandasData(btfeeds.DataBase):
    params = (('datetime', None),
              ('Open', -1),
              ('High', -1),
              ('Low', -1),
              ('Close', -1),
              ('Signal', -1))

I believe above I've correctly fed the pandas dataframe into backtrader but correct me if wrong

class OCOStrategy(bt.OCOStrategy):
    def notify_order(self, order):
        print('{}: Order ref: {} / Type {} / Status {}'.format(self.data.datetime.date(0), order.ref, 'Buy' * order.isbuy() or 'Sell', order.getstatusname()))
        if order.status == order.Completed:
            self.holdstart = len(self)
        if not order.alive() and
order.ref in self.orefs:
            self.orefs.remove(order.ref)

    def __init__(self):
        self.data.close = df_cur['Signal']

    def next(self):
        if self.data.signal=1 return  # pending orders do nothing

What exactly is wrong with the syntax underneath "def next(self)"?? At this point I've figured out how to run strategies in general, plot etc, only thing remaining is to derive this OCO strategy. Could you please advise?? Thanks again!!!

There is no 24/7 service debugging scripts, answering questions and/or writing scripts. You have two options - patiently wait or develop things by yourself.

How useless are you?
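For what it's worth, the signal-driven decision logic the original poster asked about can be sketched without backtrader at all. The sketch below is plain Python, not the backtrader API: it simulates an OCO bracket (a take-profit and a stop-loss order where the first fill cancels the other) placed only on bars whose Signal column is 1. The prices (in integer pips to avoid float comparisons), offsets, and column names are all made-up assumptions:

```python
# Plain-Python sketch of signal-driven OCO (one-cancels-other) logic.
# Rows mimic a time series with a binary Signal column; all numbers are
# illustrative, and this is NOT the backtrader API.
rows = [
    {"Close": 11000, "Signal": 0},
    {"Close": 11010, "Signal": 1},  # signal bar: enter and place OCO bracket
    {"Close": 11025, "Signal": 0},
    {"Close": 11060, "Signal": 0},  # take-profit level hit here
    {"Close": 11005, "Signal": 0},
]

TAKE_PROFIT = 50  # assumed offsets, in pips
STOP_LOSS = 30

bracket = None  # (take_profit_price, stop_price) while an OCO pair is live
fills = []

for row in rows:
    price = row["Close"]
    if bracket is not None:
        tp, sl = bracket
        if price >= tp:
            fills.append(("take_profit", price))
            bracket = None  # filling one leg cancels the other
        elif price <= sl:
            fills.append(("stop_loss", price))
            bracket = None
    elif row["Signal"] == 1:
        # place the OCO pair relative to the entry price
        bracket = (price + TAKE_PROFIT, price - STOP_LOSS)

print(fills)  # [('take_profit', 11060)]
```

In backtrader itself, if memory serves, the same check goes inside `Strategy.next()` and the linked orders are submitted via `buy_bracket()` or the `oco=` keyword of `buy()`/`sell()` - verify against the Orders - OCO doc linked in the first reply rather than trusting this sketch.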
https://community.backtrader.com/topic/2345/simple-sample-code-to-backtest-an-oco-model
Question: How do I tell RDWeb to launch apps from remote.domain.com rather than host.internaldomain.local?

Environment: Existing org with AD forest. New single Server 2012 running all Remote Desktop Services roles for session host. Used the new 2012 wizard to set up "QuickSessionCollection" with roles:

Everything works with the self-signed cert, but we want to prevent those. The users are potentially on non-domain machines, so sticking a private root cert on their machines isn't an option. Every part of the solution needs to use a public cert.

Added the public remote.domain.com cert to all roles using the Server Manager GUI:

So now everything works beautifully except the last step. It's this last step that I'd like to fix. Is there a config option in PowerShell, WMI, or .config to tell RDWeb/RemoteApp to use remote.domain.com for all published apps so the TLS cert for RDP matches what the Session Host is using?

NOTE: This question talks about this issue, and this answer mentions how you might fix it in 2008, but that GUI doesn't exist in 2012 for RemoteApp, and I can't find a PowerShell setting for it.

NOTE: Here's a screenshot of the setting in 2008R2 that I need to change. It tells RemoteApp what to use for the Session Host server name. How can I set that in 2012?

This PowerShell worked: tell the Session Collection to add an alternate address for Session Host connections. This is also what you would do for a Session Host farm with round robin.
Once I ran this, launching apps from RDWeb would give a single prompt that now matches the three settings without warnings:

Set-RDSessionCollectionConfiguration -CollectionName QuickSessionCollection -CustomRdpProperty "use redirection server name:i:1 `n alternate full address:s:remote.csbs.org"

Depending on any other Custom RDP Properties, the above command may be different, because you have to include them all in one command with a linefeed between each.

Background Info:
Adding custom RDP settings:
Using this setting for HA:

I am pretty sure that you need a cert valid for both the internal and external namespace. If you have .local internally (yuck) then you may have a hard time acquiring a cert with .local in either the Subject or Subject Alternative Name fields. If you can't get a cert from a public provider for this, then an internal PKI setup is your only option here.

RemoteApp Manager

If you don't have a UC certificate you can use this powershell script to update the FQDN to match your external host name / certificate name:
http://serverfault.com/questions/524092/rds-rdweb-and-remoteapp-how-to-use-public-certificate-for-launching-apps-on-s
ALLENDAY/MP3-Icecast-0.02 - 03 Feb 2004 00:21:03 GMT - Search in distribution

GREGORY/MP3-Icecast-Simple-0.2 - 05 Oct 2006 13:08:39 - Search in distribution

The "HTTP::Handle" module allows you to make HTTP requests and handle the data yourself. The general idea is that you use this module to make an HTTP request and handle non-header data yourself. I needed such a feature for my mp3 player to listen to ...
PSIONIC/HTTP-Handle-0.2 - 03 Jul 2004 08:53:40 GMT - Search in distribution
- HTTP::Handle - HTTP Class designed for streaming Audio: (1 review) - 29 Jun 2001 18:53:02 GMT - Search in distribution

"Net::Icecast::Source" is a simple module designed to make it easy to build programs which stream audio data to an Icecast2 server to be relayed....
REVMISCHA/Net-Icecast-Source-1.1 - 07 Jun 2009 19:33:30 GMT - Search in distribution

This task contains all distributions under the POE namespace....
APOCAL/Task-POE-All-1.102 - 09 Nov 2014 11:07:41 GMT - Search in distribution

LONERR/Tail-Stat-0.25 - 29 Nov 2015 16:21:13 GMT - Search in distribution
https://metacpan.org/search?q=MP3-Icecast
As stated, I've imported the contrail plugin donation into the contrail branch. I've taken the time to add the ASF license header to all of the new files in that branch. I think we have to complete the following in order to merge into master. 1) I'd like to see the package structure changed to match org.apache.cloudstack, instead of the Juniper namespace. We only have com.cloud namespaces for legacy reasons, and are trying to consolidate into the apache ns. 2) Folks with past experience with network plugins need to review the plugin's code and provide comments or +1s for a merge. Chiradeep and Hugo, you've been "randomly" selected to help on this... ;-) Pedro, I'll assume that you will be happy to provide patches via reviewboard against this branch if changes are requested (including the package structure noted above). 3) I'd love if we could get some consensus on what additional tests and / or changes to the test approach are needed. Prasanna - as with Hugo and Chiradeep, you've been "randomly" selected to at least provide some input here. Anything I'm missing? -chip
http://mail-archives.apache.org/mod_mbox/cloudstack-dev/201310.mbox/%3CCA+96GG51osjrex0RzkjACSWG1f2J_k7QMNZhdXp5_jAUFZNkOA@mail.gmail.com%3E
The .NET Framework simplifies the Web service creation process. Learn how to create a Web service that interfaces with SQL Server and returns the applicable information from the Northwind database.

Overview

The System.Web.Services namespace provides the necessary classes for creating custom Web services. Specifically, a Web service is derived from the WebServices class located within this namespace. In addition, a Web service class file is created with the asmx file extension.

Web service methods are exposed via the WebMethod attribute. It immediately precedes the method name. The method marked with this attribute must be declared as public, so it is available to all, as a Web service method should be. You may use the .NET language of your choice, including C#, VB.NET, J#, and so forth. I will utilise both C# and VB.NET in this example. The remaining aspects of the development follow the normal rules of development.

Let's create our Web service to interface with SQL Server and return the applicable information from the Northwind database. There will be one Web service with six methods corresponding to the six stored procedures:

- GetProductsById: Accepts an integer parameter. This parameter is used to call the sp_GetProductByID stored procedure.
- GetProductsByName: Accepts a string parameter that is passed to the sp_GetProductByName stored procedure.
- GetProductsByCategoryId: Accepts an integer parameter that is passed to the sp_GetProductByCategoryID stored procedure.
- GetProductsByCategoryName: Accepts a string parameter that is passed to the sp_GetProductByCategoryName stored procedure.
- GetProductsBySupplierId: Accepts an integer parameter that is passed to the sp_GetProductBySupplierID stored procedure.
- GetProductsBySupplierName: Accepts a string parameter that is passed to the sp_GetProductBySupplierName stored procedure.

Each method is public and tagged with the WebMethod attribute.
In addition, the SoapDocumentMethod attribute is assigned to each method as well, to control SOAP formatting. For overall SOAP body formatting, or style, the Web Services Description Language (WSDL) provides two choices: RPC and document. The .NET Framework controls these choices in code using attributes. The document style formats the XML Web service method according to an XSD schema, while the RPC style formats the Body element as a series of one or more message parts following the Body element.

Listing A contains the code for the Web service I created.

Listing A

The Web service is simple, and this simplicity is aided by the Data Access Application Block (DAAB). DAAB is an excellent addition to a .NET developer's toolbox. It provides a clean and elegant method for accessing data in your .NET applications. A great aspect of DAAB is the simple approach to creating methods. You can use it to return almost any data-related object, including the DataSet in our example. The overloaded method signatures ensure ease of use. They accept the same set of parameters: connection string, command type, SQL, and parameter object. The parameter object is only necessary when using stored procedures with parameters, as this sample illustrates. I love interfacing with SQL Server in so few lines of code.

Listing B contains the Web service written with VB.NET.

Listing B
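The listings themselves did not survive extraction. As a stand-in, here is a minimal C# sketch of what one such method looks like. The method and stored-procedure names follow the article's description, but the connection string is a made-up placeholder and the SqlHelper.ExecuteDataset call reflects the DAAB signature as the article describes it; treat this as an illustration, not the article's actual listing:

```csharp
using System.Data;
using System.Data.SqlClient;
using System.Web.Services;
using System.Web.Services.Protocols;
using Microsoft.ApplicationBlocks.Data;   // Data Access Application Block (DAAB)

public class ProductService : WebService
{
    // Connection string shown inline for brevity; in practice read it from config.
    private const string ConnectionString =
        "Server=(local);Database=Northwind;Integrated Security=SSPI;";

    [WebMethod]
    [SoapDocumentMethod]
    public DataSet GetProductsById(int productId)
    {
        // DAAB wraps the connection/command plumbing: one call returns a DataSet.
        return SqlHelper.ExecuteDataset(
            ConnectionString,
            CommandType.StoredProcedure,
            "sp_GetProductByID",
            new SqlParameter("@ProductID", productId));
    }
}
```

The remaining five methods differ only in name, parameter type, and stored-procedure name.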
http://www.builderau.com.au/program/dotnet/soa/Exposing-product-information-via-Web-services/0,339028399,339193662,00.htm
How can I force division to be floating point? Division keeps rounding down to 0?

In Python 2, division of two ints produces an int. In Python 3, it produces a float. We can get the new behaviour by importing from __future__.

>>> from __future__ import division
>>> a = 4
>>> b = 6
>>> c = a / b
>>> c
0.66666666666666663

You can cast to float by doing c = a / float(b). If the numerator or denominator is a float, then the result will be also.

A caveat: as commenters have pointed out, this won't work if b might be something other than an integer or floating-point number (or a string representing one). If you might be dealing with other types (such as complex numbers) you'll need to either check for those or use a different method.

How can I force division to be floating point in Python?

I have two integer values a and b, but I need their ratio in floating point. I know that a < b and I want to calculate a/b, so if I use integer division I'll always get 0 with a remainder of a. How can I force c to be a floating point number in Python in the following?

c = a / b

What is really being asked here is: "How do I force true division such that a / b will return a fraction?"

Upgrade to Python 3

In Python 3, to get true division, you simply do a / b.

>>> 1/2
0.5

Floor division, the classic division behavior for integers, is now a // b:

>>> 1//2
0
>>> 1//2.0
0.0

However, you may be stuck using Python 2, or you may be writing code that must work in both 2 and 3.

If Using Python 2

In Python 2, it's not so simple. Some ways of dealing with classic Python 2 division are better and more robust than others.

Recommendation for Python 2

You can get Python 3 division behavior in any given module with the following import at the top:

from __future__ import division

which then applies Python 3 style division to the entire module. It also works in a python shell at any given point.
In Python 2:

>>> from __future__ import division
>>> 1/2
0.5
>>> 1//2
0
>>> 1//2.0
0.0

This is really the best solution as it ensures the code in your module is more forward compatible with Python 3.

Other Options for Python 2

If you don't want to apply this to the entire module, you're limited to a few workarounds. The most popular is to coerce one of the operands to a float. One robust solution is a / (b * 1.0). In a fresh Python shell:

>>> 1/(2 * 1.0)
0.5

Also robust is truediv from the operator module, operator.truediv(a, b), but this is likely slower because it's a function call:

>>> from operator import truediv
>>> truediv(1, 2)
0.5

Not Recommended for Python 2

Commonly seen is a / float(b). This will raise a TypeError if b is a complex number. Since division with complex numbers is defined, it makes sense to me to not have division fail when passed a complex number for the divisor.

>>> 1 / float(2)
0.5
>>> 1 / float(2j)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: can't convert complex to float

It doesn't make much sense to me to purposefully make your code more brittle.

You can also run Python with the -Qnew flag, but this has the downside of executing all modules with the new Python 3 behavior, and some of your modules may expect classic division, so I don't recommend this except for testing. But to demonstrate:

$ python -Qnew -c 'print 1/2'
0.5
$ python -Qnew -c 'print 1/2j'
-0.5j
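To tie the answer together, here is a small Python 3 script exercising the operators discussed above - true division, floor division, and operator.truediv:

```python
from operator import truediv

a, b = 4, 6

true_result = a / b          # true division: always a float in Python 3
floor_result = a // b        # floor division: rounds toward negative infinity
func_result = truediv(a, b)  # function form, handy with map()/functools.reduce()

print(true_result)                 # 0.666...
print(floor_result)                # 0
print(true_result == func_result)  # True

# Floor division rounds toward negative infinity, not toward zero:
print(-4 // 6)                     # -1, not 0
```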
https://codehunter.cc/a/python/how-can-i-force-division-to-be-floating-point-division-keeps-rounding-down-to-0
Unit Testing, Agile Development, Architecture, Team System & .NET - By Roy Osherove

Update: You may be more interested in XTUnit, which was released a bit later on and provides the same abilities without needing a separate version of NUnit to run. In fact, it runs on any unit testing framework for .NET which can use attributes.

My article on performing database unit tests with rollback ability has managed to get quite a lot of attention. There were some caveats to that method, though.

The other way to go on this was to use the method called "Services without components" - mainly utilising the ServiceDomain and ServiceConfig classes found in the System.EnterpriseServices namespace, which are relatively unknown or unused. I won't extend the discussion of the actual technicalities here because I'm planning an article on this subject, but for now you can enjoy the fruits of this research today.

What I really wanted was to be able to write something as elegant as this:

[Test, Rollback]
public void Insert()
{
    //Do some inserts into the DB here
    //and then automatically roll back
}

So, I dug deep into the bowels of the NUnit Framework and came out with something that did exactly that - I added a new attribute to the framework, RollbackAttribute. By placing that attribute on a test method, that test case will automatically run inside its own transaction context, and automatically roll back at the end of the test. It's really quite cool and I'm using it today in our applications.

Yes - using it looks exactly like the snippet above.

So - I went ahead and compiled a custom version of the NUnit framework just for you. I like to call it NunitX - it is a mutant and as such must have a cool name. Hope you agree.

To use it, simply add a reference to the NUnit.framework.dll found in the zip file instead of the one you usually reference. That's basically it.

Here's a direct download link for the zip file (130k)

Here's a link to XtUnit (as mentioned in the beginning of this post).
You may find this a better implementation that does not need its own version of NUnit to work.

USAGE NOTES
-----------
* Windows XP SP1 or above required (COM+ 1.5 is required for this to work - that's why)
* This version is using Services without components = no strong naming of assemblies is needed
* The NUnit Add-in will work, but when you use the NUnit-Addin you will *not* have Rollback abilities
* Each Test method will run in its own separate transaction
* If you have an object down the line which uses its own transaction and calls Commit on it - you will not be able to roll back that commit. This will only happen if that object uses the TransactionOption.RequiresNew flag. For objects that use the TransactionOption.Required flag, rollback will work just fine.
* You need to use the NUnit-gui and nunit-console versions that come with this distribution. You CANNOT use the original NUnit gui or console applications to run tests that use this custom framework.

Hi Roy-- Still unable to download NUnitX from this page, or to link to XTUnit. thanks, Chuck

Pingback from Eli Lopian's Blog (TypeMock) » Blog Archive » Rollback for database testing
http://weblogs.asp.net/rosherove/archive/2004/07/12/Introducing_3A00_-NUnitX-and-the-Rollback-attribute-_2D00_-Seamless-database-rollback-with-Nunit.aspx
Forum:Plzz hlp uz at da norwegian site! From Uncyclopedia, the content-free encyclopedia Note: This topic has been unedited for 2320 days. It is considered archived - the discussion is over. Do not add to unless it really needs a response. Hey ya! I'm sysop and check user at Ikkepedia.org and we need some help. Could someone please make logos for our versions of UnNews and UnDictionary (Ikkenytt and Ikktionary), and tell us how to put the logos into pages? I'm sorry if this is the wrong forum, but I'll send you all a dead hamster when I'll get some if this is the wrong forum. God depress you, and have some "Half cock of french country woman"! (In Hungary, it's a restaurant serving that, really, it's 100% true, if it's not, I'm gonna be eaten by a grue) --Norwegian Blue 20:10, 25 July 2007 (UTC) - Ok, first you'll need to get User:Carlb (probably best emailing him) to make those into actual namespaces. - After he's done that, you'll need to add the following line to mediawiki:common.css, one for each logo you want to change: .ns-0 #p-logo a { background-image: url() } - Except change " ns-0" to the right namespace number, and change the url for the one that contains the new logo. • Spang • ☃ • talk • 01:48, 26 Jul 2007 - Thanks, but we still have a big problem: We haven't made any namespaces for them... I've tried once to do it, and then Ikkenytt (UnNews) disappeared, so I had to delete the namespace. --Norwegian Blue 13:56, 27 July 2007 (UTC) - I think that making a new namespace then adding old articles to it requires an SQL query. I'll look around for more info. --Starnestommy (Talk • Contribs • FFS • WP) 04:21, 28 July 2007 (UTC) - this may be of use. --Starnestommy (Talk • Contribs • FFS • WP) 04:27, 28 July 2007 (UTC) - ...but only if you have the foresight not to create Ikkenytt: as an individual Norwegian page. If that page exists, and 'Ikkenytt' is created later as a namespace, that particular script gets rather upset and fails to run properly. 
--Carlb 17:30, 29 July 2007 (UTC)
http://uncyclopedia.wikia.com/wiki/Forum:Plzz_hlp_uz_at_da_norwegian_site!?t=20070729173052
Problem with selection on extendedDataTable built with c:forEach

Joaquín Carrasco May 7, 2012 6:32 AM

Hi, I have in a page an extendedDataTable built with c:forEach, because the data it can contain depends on some input fields. The problem is the data is shown perfectly, but the selection on the table rows is not working. The weird thing is when I refresh the page with F5 in Firefox, the selection starts to work, but in the first load it doesn't work. The code of the table is the following:

<h:form>
  <rich:extendedDataTable
    <a4j:ajax
    <c:forEach
      <rich:column>
        <f:facet
          <h:outputText
        </f:facet>
        <h:outputText
      </rich:column>
    </c:forEach>
  </rich:extendedDataTable>
</h:form>

where the column has the values of the header and the value fields of the different objects the extendedDataTable can contain:

public class ColumnaDinamica {
    private String cabecera;
    private String valor;
    public String getCabecera() { return cabecera; }
    public void setCabecera(String cabecera) { this.cabecera = cabecera; }
    public String getValor() { return valor; }
    public void setValor(String valor) { this.valor = valor; }
}

Any idea? Thanks!!!

1. Re: Problem with selection on extendedDataTable built with c:forEach - Paul Dijou May 7, 2012 7:48 AM (in response to Joaquín Carrasco)

I'm really sorry to say that, but you should not use "<c:forEach>" inside an extendedDataTable. JSTL tags like "<c:forEach>" are not part of the JSF lifecycle and are evaluated only once when the page is loaded; JSF Ajax will have no impact on them, for what I know. Have you considered using the "<rich:columns>" tag instead of "<c:forEach>"? Or "<a4j:repeat>"?

2. Re: Problem with selection on extendedDataTable built with c:forEach - Joaquín Carrasco May 9, 2012 7:22 AM (in response to Paul Dijou)

Hi Paul, Thanks for your answer.
I´m trying doing it now with<a4j:repeat> but nothing is shown: <rich:extendedDataTable <a4j:repeat <rich:column> <f:facet <h:outputText </f:facet> <h:outputText </rich:column> </a4j:repeat> </rich:extendedDataTable> I don´t know why it´s not working. I´ve read in this old post:, that this way of create dynamic columns was in preparation, I don´t know if it´s ready. Any way of showing the change of columns if the structure of objects in the list below changes and different columns has to be shown? 3. Re: Problem with selection on extendedDataTable built with c:forEachPaul Dijou May 9, 2012 7:41 AM (in response to Joaquín Carrasco) If you are using RichFaces 3.x, go with <rich:columns>. If you are using RichFaces 4.x.... ... ... <c:forEach> is the only way I know, right now, to have dynamic columns in this version (I was hoping that <a4j:repeat> would work, but no... I guess <ui:repeat> will not work either). The problem is that <c:forEach> will not like JSF ajax. I will bring the problem to the next RichFaces meeting and will provide you feedback. Don't hesitate to spam me if no news before the end of next week. Regards, 4. Re: Problem with selection on extendedDataTable built with c:forEachJoaquín Carrasco May 9, 2012 7:54 AM (in response to Paul Dijou) Thanks Paul, I´m using richfaces 4.2, so I´ll be waiting for some news. Thanks again! 5. Re: Problem with selection on extendedDataTable built with c:forEachDavid Wong Apr 16, 2014 12:03 PM (in response to Joaquín Carrasco) Hi all! Does someone have a workaround for this? We are using 4.3.5, and have an EDT with dynamic columns using <c:forEach>. The row selection seems to work with a F5 refresh like above. Getting desperate, but even a JS call to refresh using window.location.reload() on our table search button is not working consistently...
https://developer.jboss.org/thread/199316
Copyright © 2008 W3C® (MIT, ERCIM, Keio), All Rights Reserved. W3C liability, trademark and document use rules apply. The W3C Recommendation. This document specifies a language that is in common usage under the name "Turtle". It is intended to be compatible with, and a subset of, Notation 3. This is a proposed replacement for the W3C Turtle Submission. It has not been endorsed by any formal W3C process or by the members. The W3C Turtle Submission is at the head of Dave Beckett's Turtle revision chain, and has been serving as a specification since Jan 2004. While there is apparent interop between Turtle parsers (advice to the contrary welcome), more formality may encourage use in e.g. MPEG formats. The following proposal is intended to address pfps's call for a parsing semantics. It also aligns the Turtle grammar with the SPARQL Grammar where appropriate. Similar grammars have been tested by DanC in his n3 and ntriples parsers. and ericP in his SWObjects Turtle parser which uses the turtleS Yacker grammar. Note the nonexhaustive differences between SPARQL and Turtle. A set of named tests have been integrated into the document. After extensive discussion of a registration request, the media type remains text/turtle. staying within intended to be compatible with, and a subset of, Notation 3 (see Turtle compared to Notation 3), and is generally usable in systems that support N3. The Turtle grammar for triples is a subset of the SPARQL Protocol And RDF Query Language (SPARQL) [SPARQLQ] grammar for TriplesBlock. The two grammars share production and terminal names where possible. This section is informative. The Turtle Syntax and Turtle Grammar sections formally define the language. abbrevi _:BLANK_NODE_LABEL to provide a blank node either from the given BLANK_NODE_LABEL. IRI_REFs) \\I <> # subject of of triples that vary only in predicate and object RDF terms. # this is not a complete turtle document :a :b :c ; :d :e . 
# the last triple is :a :d :e .., though these are spelled "@prefix" and "@base" respectively in Turtle. Per RFC3986 section 5.1.1, URI # In-scope base URI is at this point <test-00.ttl> <test-01.ttl> <test-02.ttl> . UR) [NOTATION]. Production labels consisting of a number and a final 's', e.g. [60s], reference the production with that number in the SPARQL Query Language for RDF grammar [SPARQLQ]. this strings matching productions and lexical tokens to these. The following informative example shows the semantic actions performed when parsing this Turtle document with an LALR1, (1 2.0 3E1) :p "w" . is syntactic sugar for (noting that the blank nodes b0, b1 and b2 do not occur anywhere else in the RDF graph): _: (1 [:p :q] ( 2 ) ) . is syntactic sugar for: _ B. Internet Media Type, File Extension and Macintosh File Type for the media type registration form. Turtle adds the following syntax to N-Triples: @base directive for setting a base IRI @prefix directive for assigning namespace prefixes , ; [] rdf:type shorthand a ()s xsd:integer xsd:double xsd:decimal xsd:boolean Notation 3 (N3) triples are a superset of RDF triples. In particular, N3 formulae (graphs) may be the subject or object of N3 triples. For example here, the formula with _:Bob a foaf:Person is the object of another arc: _:Bob ex:said { _:Bob a foaf:Person } . Following is a partial list of syntactic features in N3 which are not in Turtle: {... } is of :a.:b.:c and :a^:b^:c @keywords =>implies =equivalence @forAll @forSome The SPARQL Query Language for RDF (SPARQL) [SPARQLQ] uses a Turtle/N3 style syntax for its TriplesBlock production. This production differs from the Turtle language in that: ?name or $name) in any part of the triple of the form For further information see the Syntax for IRIs and SPARQL Grammar sections of the SPARQL query document [SPARQLQ]. W3C Turtle Submission 2008-01-14 . See the Previous changelog for further information true and false. ex:first.name. ex:7tm.
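The `;` predicate-object list shorthand mentioned above (`:a :b :c ; :d :e .`) can be seen in a complete, minimal Turtle document; the namespace IRI here is a placeholder:

```turtle
@prefix : <http://example.org/ns#> .

# A predicate-object list shares one subject:
:a :b :c ;
   :d :e .

# The line above abbreviates exactly these two triples:
# :a :b :c .
# :a :d :e .
```

The same abbreviation style combines with `,` for object lists, so triples that vary only in predicate or object can share a single subject.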
http://www.w3.org/2010/01/Turtle/
As a WPF developer for years I was recently concerned by the new direction chosen by Microsoft on its client platforms with the rise of the brand new WinRT framework. I was concerned for good reasons: I've suffered from the collateral damage of the Silverlight phase-out. First I'll start by presenting the signs that are worrying me, and that should worry you too if you are a WPF stakeholder.

WPF Toolkit is a free and open source project developed by the Microsoft team and designed as the anteroom of the official WPF releases. As an example the DataGrid was). DataGrid.

Microsoft, historically a software vendor, is diversifying its business by becoming a hardware vendor, mimicking its competitors Apple and Samsung. To this end Microsoft has acquired Nokia to benefit from its long-time presence in the mobile phone market. On the tablets and laptops side, to compete with iPad tablets and MacBook Pro laptops, Win.

In February 2014 Microsoft named a new CEO, Satya Nadella, who comes from the cloud division of Microsoft. He is replacing Steve Ballmer, who did not understand the new mobile market (first .

What could have "saved" WPF would be some niches, e.g. as a portable technology to develop client applications, but unfortunately this is not the case. There is a portable version of .NET (to be pedantic, of the CLI): Mono, which runs on Windows of course but also on Linux, Unix and Mac. And Mono is not a toy technology, it really works; with it I've myself already built a Jen.

After reading the first part you may be starting to freak out, but as often, things are not completely black, they are gray, a more or less dark gray. This second part will help you mix the white with the black, so keep reading… in WPF and using in WinRT, this is enough to break XAML compatibility and discourage any flimsy will to jump! clr-namespace using WP like NH.
Whether you are a business or an individual developer, you should seriously consider slowing down your technical investment in WPF, and start to build your expertise on WinRT. Seriously. I think that the sum of all these facts is pretty clear: WPF is past and present; in the near future it will be in direct competition with WinRT, but later, if WinRT gets some traction and enough market share, then WPF will will.

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)

WPF never had a decent rich text editor. Sure, the text part worked fine and could even be bound, saved and restored. But as soon as you add an image, say goodbye to WPF rich-text solutions. WPF never was able to solve IE airspace issues; I recall fighting that dumb issue for a long time, wanting to host multiple misbehaved browser instances as a dashboard. The WPF web browser control mostly only exposed the C++ interfaces, which are poorly documented and a real hassle to work with. Just compare those interfaces with what JavaScript has to offer and you quickly see what is lacking. The WPF Toolkit only provided a half-hearted solution for the System.Windows.Forms.Charting control. Sure, you could host graphs in WPF using it, but as you pointed out, support for the toolkit died years ago. In WPF the modern solution for binding was strong, but in the end, in many respects, it was and is weaker than System.Windows.Forms. At least System.Windows.Forms has fully functioning chart controls and a better C# interface for the web browser control. Two highly important things to have. Theming in WPF is just ridiculous; though originally thought to be this great thing, it actually turned out to be a PITA. If it was actually that excellent, why didn't we see thousands of free themes out there like we do today with CSS? WPF's property system is just retarded! No other way to think about it; it too is a PITA, spawning off all these other helper frameworks, all of which get us back to just setting or getting a property with proper notification to the "closely coupled" View. Might as well put the properties in the View. Oh wait, don't do that, I forgot that binding to the main window is nearly impossible.

To this very day Microsoft has failed to provide ANY WPF template for an MVVM solution. Their refusal to do this forces us to bend over every time we create a new WPF project. Newcomers are taught that by just clicking on a button they can handle the events in the View. More of the same System.Windows.Forms-type development every day. I love WPF but hate half-baked things. WPF was at one time the flagship; today it's a mothballed ship. They say it's not dead, but alas, even their upcoming release looks like a "sunset release" to me. There's very little in this release, and the big parts are for Microsoft's own steering committees figuring out how to steer us towards where they are heading. I could rant for a while longer, but maybe I should shut up.

What they did with Silverlight and WPF was a clear signal to us. Today I no longer choose WPF as my first platform because MVC is easier, more intuitive, loosely coupled, and it gives me access to all the JavaScript libraries in existence today. We should have known, during those days when WPF was not strongly adopted and we couldn't find open source WPF libraries everywhere, that it was doomed. Being one who rejected non-strongly-typed languages for over 15 years, in particular JavaScript which I still can't stand, I clearly had to just adopt it. Now that I have done that, I am more free to just say no to Microsoft. They've already told us we can't trust them; why forgive and go back? They couldn't care less about our marketable skills. The funny thing about this is that even Microsoft themselves are adopting the "open world". I've seen them trying to make peace over on GitHub, as well as providing "their" versions of JSON support over on NuGet. All of this because the iPhone drove Apple to be of more value than Exxon. Yes, and don't forget Google. To me the only redeeming quality of Microsoft is C#. But that doesn't mean anything; look what happened to Sun, who developed Java!

Today the world has gone crazy for smartphones, which are nothing more than the equivalent of a 24K-baud-modem-connected device in 1992. But unlike the modems of the past, these little devices won't pass, because zillions of people will use them as their primary computer! (Slowness and all)... So, by having skills such as HTML5 and JavaScript, in addition to server-side skills (for now MVC and C#), we should be safe for a while. If everything goes to JavaScript (Node, Angular, jQuery) then we will be safe because we dove into JavaScript. Funny thing how jQuery is the "strong type" and most popular framework for JavaScript. JavaScript is still as bad as it was from the beginning, but at least now there are a few more options.
https://www.codeproject.com/Articles/818281/Is-WPF-dead-the-present-and-future-of-WPF?msg=5374471
npm: This is the preferred method. This is only available for TypeScript 2.0+ users. For example:

npm install --save-dev @types/node

The types should then be automatically included by the compiler. See more in the handbook. For an NPM package "foo", typings for it will be at "@types/foo". If you can't find your package, look for it on TypeSearch. If you still can't find it, check if it bundles its own typings. This is usually provided in a "types" or "typings" field in the package.json, or just look for any ".d.ts" files in the package and manually include them with a /// <reference path="" />.

Add to your tsconfig.json:

"baseUrl": "types",
"typeRoots": ["types"],

(You can also use src/types.) Create types/foo/index.d.ts containing declarations for the module "foo". You should now be able to import from "foo" in your code and it will route to the new type definition. Then build and run the code to make sure your type definition actually corresponds to what happens at runtime. Once you've tested your definitions with real code, make a PR, then follow the instructions to edit an existing package or create a new one. If there is a tslint.json, run npm run lint package-name. Otherwise, run tsc in the package directory.

If you are adding typings for an NPM package, create a directory with the same name. If the package you are adding typings for is not on NPM, make sure the name you choose for it does not conflict with the name of a package on NPM. (You can use npm info foo to check for the existence of the foo package.) Your package should have this structure: Generate these by running.

- function sum(nums: number[]): number: Use ReadonlyArray if a function does not write to its parameters.
- interface Foo { new(): Foo; }: This defines a type of objects that are new-able. You probably want declare class Foo { constructor(); }.
- const Class: { new(): IClass; }: Prefer to use a class declaration class Class { constructor(); } instead of a new-able constant.
- getMeAT<T>(): T: If a type parameter does not appear in the types of any parameters, you don't really have a generic function, you just have a disguised type assertion. Prefer to use a real type assertion, e.g. getMeAT() as number. Example where a type parameter is acceptable: function id<T>(value: T): T;. Example where it is not acceptable: function parseJson<T>(json: string): T;. Exception: new Map<string, number>() is OK.
- Using the types Function and Object is almost never a good idea. In 99% of cases it's possible to specify a more specific type. Examples are (x: number) => number for functions and { x: number, y: number } for objects. If there is no certainty at all about the type, any is the right choice, not Object. If the only known fact about the type is that it's some object, use the type object, not Object or { [key: string]: any }.
- var foo: string | any: When any is used in a union type, the resulting type is still any. So while the string portion of this type annotation may look useful, it in fact offers no additional typechecking over simply using any. Depending on the intention, acceptable alternatives could be any, string, or string | object.

Removing a package

When a package bundles its own types, types should be removed from Definitely Typed to avoid confusion. You can remove it by running npm run not-needed -- typingsPackageName asOfVersion sourceRepoURL [libraryName].

- typingsPackageName: This is the name of the directory to delete.
- asOfVersion: A stub will be published to @types/foo with this version. Should be higher than any currently published version.

If a tslint.json turns rules off, this is because that hasn't been fixed yet. For example:

{
    "extends": "dtslint/dt.json",
    "rules": {
        // This package uses the Function type, and it will take effort to fix.
        "ban-types": false
    }
}

(To indicate that a lint rule truly does not apply, use // tslint:disable rule-name or better, //tslint:disable-next-line rule-name.)

To assert that an expression is of a given type, use $ExpectType. To assert that an expression causes a compile error, use $ExpectError.

// $ExpectType void
f(1);
// $ExpectError
f("one");

For more details, see the dtslint readme. Test by running npm run lint package-name where package-name is the name of your package. This script uses dtslint. It depends, but most pull requests will be merged within a week.

Usually you won't need this. When publishing a package we will normally automatically create a package.json for it. A package.json may be included for the sake of specifying dependencies. Here's an example. We do not allow other fields, such as "description", to be defined manually. Also, if you need to reference an older version of typings, you must do that by adding "dependencies": { "@types/foo": "x.y.z" } to the package.json.

Some packages, like chai-http, export a function. Importing this module with an ES6 style import in the form import * as foo from "foo"; leads to the error:

error TS2497: Module 'foo' resolves to a non-module entity and cannot be imported using this construct

This error can be suppressed by merging the function declaration with an empty namespace of the same name, but this practice is discouraged. This is a commonly cited Stack Overflow answer regarding this matter. It is more appropriate to import the module using the import foo = require("foo"); syntax. This may belong in TSJS? The TypeScript handbook contains excellent general information about writing definitions, and also this example definition file which shows how to create a definition using ES6-style module syntax, while also specifying objects made available to the global scope. This technique is demonstrated practically in the definition for big.js, which is a library that can be loaded globally via script tag on a web page, or imported via require or ES6-style imports.
To test how your definition can be used both when referenced globally or as an imported module, create a test folder, and place two test files in there. Name one YourLibraryName-global.test.ts and the other YourLibraryName-module.test.ts. The global test file should exercise the definition according to how it would be used in a script loaded on a web page where the library is available on the global scope - in this scenario you should not specify an import statement. The module test file should exercise the definition according to how it would be used when imported (including the import statement(s)). If you specify a files property in your tsconfig.json file, be sure to include both test files. A practical example of this is also available on the big.js definition. Please note that it is not required to fully exercise the definition in each test file - it is sufficient to test only the globally-accessible elements on the global test file and fully exercise the definition in the module test file, or vice versa. What about scoped packages?What about scoped packages? Types for a scoped package @foo/bar should go in types/foo__bar. Note the double underscore. When dts-gen is used to scaffold a scoped package, the paths property has to be manually adapted in the generated tsconfig.json to correctly reference the scoped package: { "paths":{ "@foo/bar": ["foo__bar"] } }.
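The "create types/foo/index.d.ts" step described earlier can be sketched as a minimal hand-written declaration file. Here "foo", greet, and Options are hypothetical names for illustration, not a real package's API:

```
// types/foo/index.d.ts — minimal sketch for a hypothetical module "foo"
export interface Options {
    /** Made-up option for illustration. */
    verbose?: boolean;
}

// Declared, not implemented: the runtime implementation lives in the
// JavaScript package itself; this file only describes its shape.
export function greet(name: string, opts?: Options): string;
```

With `"baseUrl": "types"` set as shown above, `import { greet } from "foo"` in your code resolves to this file, and the compiler checks calls against these signatures.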
https://libraries.io/npm/@types%2Fcors/2.8.2
Naming Conventions¶ File and Directory Names¶ Our directory tree stripped down looks something like: statsmodels/ __init__.py api.py discrete/ __init__.py discrete_model.py tests/ results/ tsa/ __init__.py api.py tsatools.py stattools.py arima_model.py arima_process.py vector_ar/ __init__.py var_model.py tests/ results/ tests/ results/ stats/ __init__.py api.py stattools.py tests/ tools/ __init__.py tools.py decorators.py tests/ The submodules are arranged by topic, discrete for discrete choice models, or tsa for time series analysis. The submodules that can be import heavy contain an empty __init__.py, except for some testing code for running tests for the submodules. The namespace to be imported is in api.py. That way, we can import selectively and do not have to import a lot of code that we don’t need. Helper functions are usually put in files named tools.py and statistical functions, such as statistical tests are placed in stattools.py. Everything has directories for tests. endog & exog¶ Our working definition of a statistical model is an object that has both endogenous and exogenous data defined as well as a statistical relationship. In place of endogenous and exogenous one can often substitute the terms left hand side (LHS) and right hand side (RHS), dependent and independent variables, regressand and regressors, outcome and design, response variable and explanatory variable, respectively. The usage is quite often domain specific; however, we have chosen to use endog and exog almost exclusively, since the principal developers of statsmodels have a background in econometrics, and this feels most natural. This means that all of the models are objects with endog and exog defined, though in some cases exog is None for convenience (for instance, with an autoregressive process). Each object also defines a fit (or similar) method that returns a model-specific results object. In addition there are some functions, e.g. for statistical tests or convenience functions. 
See also the related explanation in endog, exog, what's that?.

Variable Names¶

All of our models assume that data is arranged with variables in columns. Thus, internally the data is all 2d arrays. By convention, we will prepend a k_ to variable names that indicate moving over axis 1 (columns), and n_ to variables that indicate moving over axis 0 (rows). The main exception to the underscore is that nobs should indicate the number of observations. For example, in the time-series ARMA model we have:

`k_ar` - The number of AR lags included in the RHS variables
`k_ma` - The number of MA lags included in the RHS variables
`k_trend` - The number of trend variables included in the RHS variables
`k_exog` - The number of exogenous variables included in the RHS variables excluding the trend terms
`n_totobs` - The total number of observations for the LHS variables including the pre-sample values

Options¶

We are using similar options in many classes, methods and functions. They should follow a standardized pattern if they recur frequently.

`missing` : ['none', 'drop', 'raise'] - define whether inputs are checked for nans, and how they are treated
`alpha` : float in (0, 1) - significance level for hypothesis tests and confidence intervals, e.g. `alpha=0.05`

Patterns:

`return_xxx` : boolean to indicate optional or different returns (not `ret_xxx`)
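The endog/exog and k_/n_/nobs conventions above can be illustrated with a toy class. This is an illustrative sketch, not statsmodels code: the class name and attributes other than the conventions themselves are made up.

```python
# Sketch of the naming convention: columns get a k_ prefix, rows an n_
# prefix, and the number of observations is always spelled `nobs`.

class ToyModel:
    def __init__(self, endog, exog):
        self.endog = endog           # response (LHS) values
        self.exog = exog             # design matrix, one column per regressor
        self.nobs = len(exog)        # observations: moving over axis 0 (rows)
        self.k_exog = len(exog[0])   # regressors: moving over axis 1 (columns)


model = ToyModel(endog=[1.0, 2.0, 3.0],
                 exog=[[1.0, 0.5], [1.0, 1.5], [1.0, 2.5]])
print(model.nobs, model.k_exog)   # 3 2
```

In statsmodels itself, the same pattern appears in every model: `endog` and `exog` are stored on the model object, and counts over columns carry the `k_` prefix.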
https://www.statsmodels.org/v0.10.2/dev/naming_conventions.html
(Normally POP3 was good enough for me, since I only tended to do intermittent and/or emergency email work from my phone, saving the heavy lifting for when I was back at my desk. But my main computer has been in the shop for a week, leading me to rely more heavily on handheld toys, and the lack of folder access just got too annoying to tolerate any longer.)

I have questions, dear Lazyweb.

1) When I send a message from Mail.app on my desktop, it appears to be storing the sent message in the "Drafts" folder, and marking it as unread, as if it has not been sent, even though it has. I know it was sent, because I got a BCC and can see the transport headers. So, WTF? There appears to be an MUA-generated copy in both "Drafts" and "Sent" (no transport headers) plus the copy that came back to my inbox (with transport headers). I do have "store drafts on the server" and "store sent messages on the server" checked, because that seems sensible. I do not have "pretend sent messages are unsent drafts" checked, because it doesn't exist.

2) I am using Sieve to filter my mail into different folders on the server. Mail.app (on desktop and iPhone) doesn't seem to update the "unread" badges on these folders until I have first manually selected that folder. This is just-slightly-less-than-useful behavior, since I can't tell whether I have mail without clicking on each folder first. Mail.app on the desktop seems better at this -- the unread badges update sometimes, possibly even most of the time (but not always). On the phone, they update almost never (but not never). Would things go better if my "INBOX" folder had sub-folders into which Sieve was filing things? If so, how does one accomplish that, since Mail.app won't let me drag a folder into INBOX on the IMAP server the way it will let me drag folders into other folders in the "On My Mac" section. Do I "mkdir" something on the server? Will Mail.app even understand these nested folders?
I'm using Dovecot 2, and it appears to be using mbox files. Update: Turning off "store drafts on server" seems the only solution to #1. Still no solution to #2. However, switching from mbox to maildir allows me to make sub-folders. The incantation for that appears to have been to add this to /etc/dovecot.conf: mail_location = namespace { separator = / prefix = "#mbox/" location = mbox:~/mail:INBOX=/var/mail/%u inbox = yes hidden = yes list = no } namespace { separator = / prefix = location = maildir:~/Maildir } Then do: dsync -D -v -f -u jwz mirror 'maildir:~/Maildir' to make a "Maildir" copy of the "mail" directory. After doing this, the folder that the IMAP server refers to as "INBOX" appears to be /var/mail/jwz rather than something under ~/Maildir/, and I'm not sure how to change that (or if doing so is wise). I suppose this is why I still can't create sub-folders of my Inbox, and don't know whether that would solve the "message counts" problem.
https://www.jwz.org/blog/2010/08/02/
Now this is cool. Not only does the WinForms designer do nifty control alignment based on edge position, but it also aligns to the controls' text's baseline! Fan-frickin-tastic! Looks like things are still moving along in the WSE 2.0 camp -- HarveyW says they're in the final round of bug bashing. I, and many others I'm sure, am very happy to hear this. Apparently there are some significant changes from the TP (the namespace the types live in, for one) and I'd love to get those in our code ASAP. As a side note: Working with the WSE 2.0 tech preview has been great fun, especially considering this is a glimpse into how we'll be doing things in the future Indigo. Nice job! How.) Sad. 'Morning Edition' replacing Edwards Strong reactions to NPR host's removal Update: Save Bob Edwards Petition (already “signed” by 8087 people) Man oh man I can't wait to dig further into VS 2005 TP. I only had a minute before work to very quickly check out a few things and noticed “Generate Method Stub” in the Refactoring menu. Going on a hunch, I created an event handler for a dummy button on a form: private void button3_Click(object sender, EventArgs e){ string something = HelloMethod(sender);} HelloMethod didn't exist. I put the cursor on the HelloMethod and selected Refactoring|Generate Method Stub and got the following: private string HelloMethod(object sender){ throw new NotImplementedException();} Are you kidding me?!? Brilliant! I need this feature several times a day and I couldn't ask for a better implementation. Thank you! I was working on a little app to create an XMI file from a .NET assembly, but dropped it as other things came up. I think it's 90% or so done, but I'm not sure if it's worth my time to complete it. Would anyone find something like this useful? (XMI files contain metadata that modeling apps can use to create UML class diagrams.) I use integrated source control most of the time. 
Whenever I try to edit some code that's not checked out, I'm presented with the “Check Out for Edit” dialog, which is what I would expect. Another thing I would expect would be a keyboard accelerator for “Check Out” so I don't need to grab my mouse to click that button. Sadly, this isn't the case. I think I'm getting tennis elbow because of it. Please, VS friends, add this tiny “feature” to VS.Next. Thank you. It's great to come home from vacation (I had a great time, thanks) with an inbox full of email and only one spam. The surprising bit, though, is that it's the first spam in my inbox in five months. That's right, my friends: I've been sent 19,675 emails in that time and only one spam (out of 7,951) has come through. I couldn't be happier with SpamArrest. The service was pretty spotty when I first joined, but they must've added a server or two in late October because it's been on fire since. I've tried many different approaches to beating spam, and this wins hands-down. My biggest beef with most other solutions is having to deal with false positives and negatives. Once per day I check all of the email that wasn't allowed through -- no “false negatives” make it to my inbox. This is really only necessary to check for any automated emails (recently subscribed newsletters and whatnot[1]) from addresses that haven't been verified. I love it! [1] Hopefully all email newsletters will get on the RSS bandwagon and this will eventually be unnecessary.
http://weblogs.asp.net/jkey/archive/2004/03.aspx
Team Mighty Morphin Coding Rangers - OOP344
[ OOP344 | Weekly Schedule | Student List | Teams | Project | Student Resources ]

Contents
Member list
A2 assignment.
- Declare.

AS2: When we want to test our code under different platforms, it is important to change the include statement in biof.c correspondingly.

Style 1:
#include "biof.h"

Style 2:
extern "C" {
#include "biof.h"
};

VCC and BCC like style 1, while LNX likes style 2. MAC is unknown.

Dachuan: Placing void in Function(void) for Borland is important. It has gotten me frustrated a few times. bio_putint() doesn't seem to work under VCC now? Fixed. bio_move() does not work in Borland. Fixed. When using PuTTY, don't forget to set Function/Keyboard/Xterm R6! When compiling on Mac: cc biomain.c biof.c -lcurses (and then) ./a.out

Cong: I really appreciate Sunny's help; my code works now, and I understand what a link problem is. I looked at the program and found a bug with the Tab key: when you press Tab it jumps 4 spaces forward, but when you press Backspace it is supposed to jump back 4 spaces.

Cgcheung: Not really an issue but my own mistake. A reminder to include "-l ncurses" in the compile line in Linux >_>.

Project Progress
Update the current project's progress here. List what's done in each part, and what's not done. Simply cross out things that are done.
Simple Functions
Complex Functions
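Cong's Tab/Backspace issue above amounts to snapping the cursor back to the previous 4-column tab stop when the erased character was a tab. One way to sketch that logic (a hypothetical helper, not the team's actual biof code):

```c
/* Sketch only — not the biof implementation.
 * If the character being erased was a tab, move back to the previous
 * 4-column tab stop; otherwise move back a single column. */
static int backspace_col(int col, int erased_tab)
{
    if (erased_tab && col > 0)
        return ((col - 1) / 4) * 4;   /* previous multiple of 4 */
    return col > 0 ? col - 1 : 0;
}
```

So a backspace at column 8 over a tab lands on column 4, matching the 4-space forward jump the Tab key makes.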
https://wiki.cdot.senecacollege.ca/w/index.php?title=Team_Mighty_Morphin_Coding_Rangers_-_OOP344&oldid=35816
On Saturday 31 May 2003 06:06 pm, Gopal V wrote: > I have just committed the SSL support into HttpWebRequest ... and tested > it by downloading a page from which means we're > cool !. Fantastic! > I hope I implemented it right ... ie a Session is constructed globally , > which I hope is not wrong ?. This session is destroyed only when we > close down the VM because the object containing the Session is held > in a static field. Should I generate a new session per hostname ?. A new session every time. The session provider can be stored globally, but it isn't really necessary to do that. The current code is OK, but I suggest removing the static for the provider because there's no need. Call "GetProvider()" and then "CreateClientSession()" in the "SecureConnection" constructor. If it ever makes sense to cache providers, we'll do it in the "DotGNU.SSL" assembly instead. > Also I do think we should put some sort of #if #endif around it to > trim it down ... what should be the condition ? "CONFIG_SSL" or "CONFIG_TLS" would be my guess. Toss a coin - either is fine by me. Cheers, Rhys.
http://lists.gnu.org/archive/html/dotgnu-pnet/2003-05/msg00056.html
Seam gen tool bug!!! - huix jan, May 12, 2008 12:45 PM

SCHWERWIEGEND: Error Rendering View [/home.xhtml]
javax.el.PropertyNotFoundException: The class 'com.seamguru.seamquizadmin.TUserList_$$_javassist_0' does not have the property 'tUser'.

Reason for the error: it's a seam-gen tool bug. The generated JavaBean naming is wrong:

method            ------ property name
setTUser/getTUser ------ TUser
setUser/getUser   ------ user

The case of the first letter of the property name must match the case of the second letter of the method name (after the set/get prefix). So seam-gen's org.jboss.seam.tool.Util.java and org.jboss.seam.tool.LowercasePropertyTask.java lower() methods, and the FreeMarker templates, need to be changed.

2. Re: Seam gen tool bug!!! - huix jan, May 13, 2008 12:57 PM (in response to huix jan)

package org.test;

import java.io.Serializable;

public class TestBean implements Serializable {
    private static final long serialVersionUID = 7245129069051557549L;

    private String name;
    private String address;

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public String getADdress() {
        return address;
    }

    public void setADdress(String address) {
        this.address = address;
    }
}

import java.beans.BeanInfo;
import java.beans.IntrospectionException;
import java.beans.Introspector;
import java.beans.PropertyDescriptor;

import org.test.TestBean;

public class Main {
    public static void main(String[] args) {
        PropertyDescriptor[] descriptors = null;
        try {
            BeanInfo info = Introspector.getBeanInfo(TestBean.class);
            descriptors = info.getPropertyDescriptors();
        } catch (IntrospectionException e) {
            e.printStackTrace();
        }
        for (PropertyDescriptor pd : descriptors) {
            System.out.println(pd.getName());
        }
    }
}

The test code behaves the same way as the Java EL javax.el.BeanELResolver. You get the result:

ADdress
name

I think this is a seam-gen bug; org.jboss.seam.tool.Util.java and org.jboss.seam.tool.LowercasePropertyTask.java lower() have an error. Thanks for your help!

3.
Re: Seam gen tool bug!!! - Pete Muir, May 13, 2008 1:13 PM (in response to huix jan)

So you are saying that, for a getter/setter pair for ADdress, seam-gen will generate aDdress? But that it should be ADdress according to JavaBean rules?

4. Re: Seam gen tool bug!!! - Stephen Friedrich, May 13, 2008 6:13 PM (in response to huix jan)

We had the same problem with a hand-written application. I am not sure that the JavaBean spec defines the property name to be ADdress, but it seems that's the way some piece of software in the stack wants to have it (Seam's or the JSF default EL resolver?). (We just changed the method names...)

5. Re: Seam gen tool bug!!! - Pete Muir, May 13, 2008 6:34 PM (in response to huix jan)

"I am not sure that the JavaBean spec defines the property name to be ADdress"

It does. See section 8.8: if the first letter is upper case and the second lower, it de-capitalizes the first letter. If both the first and second letters are capital letters, then it leaves it alone. Please file a JIRA issue, noting places in Seam where you have seen this occur, and we can fix it.

6. Re: Seam gen tool bug!!! - huix jan, May 14, 2008 5:55 AM (in response to huix jan)

Pete Muir, thanks! I'm Chinese, and writing English is difficult for me, so there may be some grammar errors in my posts. Thanks for your help.

When I use seam-gen, I find the same problem (no exceptions thrown) with the generated EntityBean properties: the JavaBean names are wrong. I think the EntityBean FreeMarker template also needs to be changed.

Another thing: I get similar errors when I use a Derby database with the column type LONG VARCHAR. I checked the source code; it's an error in Hibernate's org.hibernate.mapping.Table.validateColumns(). I know this method has some bugs. When it runs into org.hibernate.tool.hbm2ddl.ColumnMetadata line 26 (Hibernate 3.2.6GA), typeName is changed to LONG, so it throws HibernateException: Found: LONG, expected: varchar.
I commented out a line in META-INF/persistence.xml:

<property name="hibernate.hbm2ddl.auto" value="validate"/>

Now the application works correctly. I suggest changing the persistence.xml FreeMarker template to disable the schema validator, so that nobody gets in trouble when running the seam-generated code.
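As a side note on the naming rule discussed in this thread, the behavior from section 8.8 of the JavaBeans spec can be checked directly with java.beans.Introspector.decapitalize, which is what EL resolvers follow. This is a small stand-alone demo, not code from the thread:

```java
import java.beans.Introspector;

public class PropertyNameDemo {
    public static void main(String[] args) {
        // "getAddress" -> strip "get" -> "Address" -> de-capitalize -> "address"
        System.out.println(Introspector.decapitalize("Address"));  // address

        // "getADdress" -> "ADdress": the first TWO letters are upper case,
        // so the JavaBeans rules leave the name untouched.
        System.out.println(Introspector.decapitalize("ADdress"));  // ADdress

        // The same rule explains familiar cases like "getURL" -> property "URL".
        System.out.println(Introspector.decapitalize("URL"));      // URL
    }
}
```

So for a getADdress/setADdress pair the property name is "ADdress", not "aDdress", which is why the generated aDdress lookup fails.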
https://developer.jboss.org/thread/181980
19 March 2012

Prerequisite knowledge: basic JavaScript programming knowledge. User level: all.

This is the first article in a series about common design patterns used in JavaScript. Design patterns are proven ways of programming to help make your code more maintainable, scalable, and decoupled, all of which are necessary when creating large JavaScript applications, especially in large groups. In Part 2 of this series, you're introduced to another 3 design patterns: adapter, decorator, and factory. Part 3 discusses 3 more design patterns: proxy, observer, and command.

The singleton pattern is what you use when you want to ensure that only one instance of an object is ever created. In classical object-oriented programming languages, the concepts behind creating a singleton were a bit tricky to wrap your mind around because they involved a class that has both static and non-static properties and methods. I'm talking about JavaScript here though, so with JavaScript being a dynamic language without true classes, the JavaScript version of a singleton is excessively simple.

Before I get into implementation details, I should discuss why the singleton pattern is useful for your applications. The ability to ensure you have only one instance of an object can actually come in quite handy. In server-side languages, you might use a singleton for handling a connection to a database because it's just a waste of resources to create more than one database connection for each request. Similarly, in front-end JavaScript, you might want to have an object that handles all AJAX requests be a singleton. A simple rule could be: if it has the exact same functionality every time you create a new instance, then make it a singleton. This isn't the only reason to make a singleton, though.
At least in JavaScript, the singleton allows you to namespace objects and functions to keep them organized and keep them from cluttering the global namespace, which as you probably know is a horrible idea, especially if you use third-party code. Using the singleton for name-spacing is also referred to as the module design pattern.

To create a singleton, all you really need to do is create an object literal.

var Singleton = {
    prop: 1,
    another_prop: 'value',
    method: function() {…},
    another_method: function() {…}
};

You can also create singletons that have private properties and methods, but it's a little bit trickier as it involves using a closure and a self-invoking anonymous function. Inside a function, some local functions and/or variables are declared. You then create and return an object literal, which has some methods that reference the variables and functions that you declared within the larger function's scope. That outer function is immediately executed by placing () immediately after the function declaration, and the resulting object literal is assigned to a variable. If this is confusing, then take a look over the following code and then I'll explain it some more afterward.

var Singleton = (function() {
    var private_property = 0,

    private_method = function () {
        console.log('This is private');
    }

    return {
        prop: 1,
        another_prop: 'value',

        method: function() {
            private_method();
            return private_property;
        },

        another_method: function() {…}
    }
}());

The key is that when a variable is declared with var in front of it inside a function, that variable is only accessible inside the function and by functions that were declared within that function (the functions in the object literal, for example). The return statement gives us back the object literal, which gets assigned to Singleton after the outer function executes itself.

In JavaScript, namespacing is done by adding objects as properties of another object, so that it is one or more layers deep.
This is useful for organizing code into logical sections. While the YUI JavaScript library does this to a degree that I feel is nearly excessive, with numerous levels of namespacing, in general it is considered best practice to limit nesting namespaces to only a few levels or less. The code below is an example of namespacing.

var Namespace = {
    Util: {
        util_method1: function() {…},
        util_method2: function() {…}
    },
    Ajax: {
        ajax_method: function() {…}
    },
    some_method: function() {…}
};

// Here's what it looks like when it's used
Namespace.Util.util_method1();
Namespace.Ajax.ajax_method();
Namespace.some_method();

The use of namespacing, as I said earlier, keeps global variables to a minimum. Heck, you can even have entire applications attached to a single object namespace named app if that's your prerogative. If you'd like to learn a little more about the singleton design pattern as well as its applicability in namespacing, then go ahead and check out the "JavaScript Design Patterns: Singleton" article on my personal blog.

If you read through the section about the singleton pattern and thought, "well, that was simple," don't worry, I have some more complicated patterns to discuss, one of which is the composite pattern. Composites, as the word implies, are objects composed of multiple parts that create a single entity. This single entity serves as the access point for all the parts, which, while simplifying things greatly, can also be deceiving because there's no implicit way to tell just how many parts the composite contains. The composite is explained best using an illustration.
As another example, I'm 100% certain you’ve seen the composite pattern in action before but never really thought about it. The file structure in computers is an example of the composite pattern. If you delete a folder, all of its contents get deleted too, right? That's essentially how the composite pattern works. You can call a method on a composite object higher up on the tree and the message will be delivered down through the hierarchy. This example creates an image gallery as an example of the composite pattern. You will have just three levels: album, gallery, and image. The album and galleries will be composites and the images will be the leaves, just like in Figure 1. This is a more defined structure than a composite needs to have, but for this example, it makes sense to limit the levels to only being composites or leaves. A standard composite doesn't limit which levels of the hierarchy that leaves can be on, nor do they limit the number of levels. To start things off, you create the GalleryComposite "class" used for both the album and the galleries. Please note that I am using jQuery for the DOM manipulation to simplify things. 
var GalleryComposite = function (heading, id) {
    this.children = [];

    this.element = $('<div id="' + id + '" class="composite-gallery"></div>')
        .append('<h2>' + heading + '</h2>');
}

GalleryComposite.prototype = {
    add: function (child) {
        this.children.push(child);
        this.element.append(child.getElement());
    },

    remove: function (child) {
        for (var node, i = 0; node = this.getChild(i); i++) {
            if (node == child) {
                this.children.splice(i, 1);
                this.element.detach(child.getElement());
                return true;
            }
            if (node.remove(child)) {
                return true;
            }
        }
        return false;
    },

    getChild: function (i) {
        return this.children[i];
    },

    hide: function () {
        for (var node, i = 0; node = this.getChild(i); i++) {
            node.hide();
        }
        this.element.hide(0);
    },

    show: function () {
        for (var node, i = 0; node = this.getChild(i); i++) {
            node.show();
        }
        this.element.show(0);
    },

    getElement: function () {
        return this.element;
    }
}

There's quite a bit there, so how about I explain it a little bit? The add, remove, and getChild methods are used for constructing the composite. This example won't actually be using remove and getChild, but they are helpful for creating dynamic composites. The hide, show, and getElement methods are used to manipulate the DOM. This composite is designed to be a representation of the gallery that is shown to the user on the page. The composite can control the gallery elements through hide and show. If you call hide on the album, the entire album will disappear, or you can call it on just a single image and just the image will disappear.

Now create the GalleryImage class. Notice that it uses all of the exact same methods as the GalleryComposite. In other words, they implement the same interface, except that the image is a leaf, so it doesn't actually do anything for the methods regarding children, as it cannot have any.
Using the same interface is required for the composite to work because a composite element doesn't know whether it's adding another composite element or a leaf, so if it tries to call these methods on its children, it needs to work without any errors.

var GalleryImage = function (src, id) {
    this.children = [];

    this.element = $('<img />')
        .attr('id', id)
        .attr('src', src);
}

GalleryImage.prototype = {
    // Due to this being a leaf, it doesn't use these methods,
    // but must implement them to count as implementing the
    // Composite interface
    add: function () { },
    remove: function () { },
    getChild: function () { },

    hide: function () {
        this.element.hide(0);
    },

    show: function () {
        this.element.show(0);
    },

    getElement: function () {
        return this.element;
    }
}

Now that you have the object prototypes built, you can use them. Below you can see the code that actually builds the image gallery.

var container = new GalleryComposite('', 'allgalleries');
var gallery1 = new GalleryComposite('Gallery 1', 'gallery1');
var gallery2 = new GalleryComposite('Gallery 2', 'gallery2');
var image1 = new GalleryImage('image1.jpg', 'img1');
var image2 = new GalleryImage('image2.jpg', 'img2');
var image3 = new GalleryImage('image3.jpg', 'img3');
var image4 = new GalleryImage('image4.jpg', 'img4');

gallery1.add(image1);
gallery1.add(image2);
gallery2.add(image3);
gallery2.add(image4);

container.add(gallery1);
container.add(gallery2);

// Make sure to add the top container to the body,
// otherwise it'll never show up.
container.getElement().appendTo('body');
container.show();

That's all there is to the composite! If you want to see a live demo of the gallery, you can visit the demo page on my blog. You can also read the "JavaScript Design Patterns: Composite" article on my blog for a little more information on this pattern.

The façade pattern is the final design pattern in this article, which is just a function or other piece of code that simplifies a more complex interface.
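Before moving on to a real-world example, here is a minimal hand-rolled sketch of the idea. The function and data below are invented purely for illustration; they are not part of the article's gallery code. The point is simply that one friendly call hides several fiddly steps:

```javascript
// A tiny façade: one simple call that wraps several lower-level steps.
// (Hypothetical example, not from the article.)
function formatUser(user) {
    // Step 1: normalize whitespace.
    var name = user.name.trim();
    // Step 2: capitalize the first letter.
    var capitalized = name.charAt(0).toUpperCase() + name.slice(1);
    // Step 3: combine the name with the user's role.
    return capitalized + ' (' + user.role + ')';
}

// Callers never see the three steps, just the one call.
console.log(formatUser({ name: '  john ', role: 'admin' })); // John (admin)
```

Scale those three steps up to hundreds of lines and you have the façade that libraries present to you.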
This is actually quite common, and, one could argue, most functions are actually made for this very purpose. The goal of a façade is to simplify a larger piece of logic into one simple function call. You just might be using the façade pattern all the time without realizing that you're using any design pattern at all. Just about every single library that you use in any programming language uses the façade pattern to some degree, because generally their purpose is to make complex things simpler.

Let's look at jQuery for an example. jQuery has a single function (jQuery()) to do so many things; it can query the DOM, create elements, or just convert DOM elements into jQuery objects. If you just look at querying the DOM, and take a peek at the number of lines of code used to create that capability, you'd probably say to yourself, "I'm glad I didn't have to write this," because it is very long and complex code. They have successfully used the façade pattern to convert hundreds of lines of complex code into a single, short function call for your convenience.

The façade pattern is pretty easy to understand, but if you are interested you can learn more from the "JavaScript Design Patterns: Façade" post on my personal blog.

In this part of the JavaScript Design Patterns series I covered the singleton, composite, and façade. In Part 2 of this series, you're introduced to another 3 design patterns: adapter, decorator, and factory. Part 3 discusses 3 more design patterns: proxy, observer, and command. I also have already completed a series on JavaScript design patterns on my personal blog that includes a few patterns that I didn't feature in this series. You can find all of the posts in that series on my blog.

This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License. Permissions beyond the scope of this license, pertaining to the examples of code included within this work, are available at Adobe.
http://www.adobe.com/devnet/archive/html5/articles/javascript-design-patterns-pt1-singleton-composite-facade.html
Nov 29, 2019 06:02 AM|RioDD|LINK

Hi. I have a class Device and it has a Price. How do I get the minimal price from all devices? Here is the class:

public class Device
{
    public int ProductID { get; set; }
    public string ProductTitle { get; set; }
    public int Price { get; set; }
}

Now in the code I have:

List<Device> Devices = LoadAllDevices(); // loads all devices from the database; implementation omitted

var minimumPrice = Devices.Select(x => x.Price != null && // HERE IS THE PROBLEM: HOW TO FIND THE MINIMUM PRICE OF ALL THE DEVICES

Please advise how I can find the minimum.

Contributor 2096 Points Nov 29, 2019 06:55 AM|Khuram.Shahzad|LINK

In LINQ, you can find the minimum element of a given sequence by using the Min() function. This method returns the minimum element of the given set of values.

using System;
using System.Linq;
using System.Collections.Generic;

public class Simple
{
    public static void Main()
    {
        List<Device> devices = new List<Device> { new Device { Price = 20 }, new Device { Price = 30 } };
        var min = devices.Min(d => d.Price);
        Console.WriteLine(min);
    }

    public class Device
    {
        public int Price { get; set; }
    }
}

Contributor 3710 Points Nov 29, 2019 07:16 AM|Yongqing Yu|LINK

Hi RioDD,

RioDD: Please advise how I can find the minimum.

There are several ways to get the minimum with LINQ, and you don't have to deal with x => x.Price != null; you can directly use any of the following three methods:

var min1 = Devices.Min(x => x.Price);
var min2 = Devices.Select(x => x.Price).Min();
var min3 = (from c in Devices select c).Min(c => c.Price);

Here is the debugging result: [screenshot omitted]

Best Regards,
YongQing.

2 replies Last post Nov 29, 2019 07:16 AM by Yongqing Yu
https://forums.asp.net/t/2161999.aspx?Linq+minimal+value+of+integer+in+object
String interpolation is a process of substituting values of local variables into placeholders in a string. It is implemented in many programming languages, such as Scala:

//Scala 2.10+
var name = "John";
println(s"My name is $name")
>>> My name is John

Perl:

my $name = "John";
print "My name is $name";
>>> My name is John

CoffeeScript:

name = "John"
console.log "My name is #{name}"
>>> My name is John

... and many others. At first sight, it doesn't seem that it's possible to use string interpolation in Python. However, we can implement it with just 2 lines of Python code. But let's start with basics.

An idiomatic way to build a complex string in Python is to use the "format" function:

print "Hi, I am {} and I am {} years old".format(name, age)
>>> Hi, I am John and I am 26 years old

Which is much cleaner than using string concatenation:

print "Hi, I am " + name + " and I am " + str(age) + " years old"
>>> Hi, I am John and I am 26 years old

But if you use the format function in this way, the output depends on the order of arguments:

print "Hi, I am {} and I am {} years old".format(age, name)
>>> Hi, I am 26 and I am John years old

To avoid that we can pass key-value arguments to the "format" function:

print "Hi, I am {name} and I am {age} years old".format(name="John", age=26)
>>> Hi, I am John and I am 26 years old

print "Hi, I am {name} and I am {age} years old".format(age=26, name="John")
>>> Hi, I am John and I am 26 years old

Here we had to pass all variables for string interpolation to the "format" function, but still we have not achieved what we wanted, because "name" and "age" are not local variables. Can "format" somehow access local variables? To do it we can get a dictionary with all local variables using the locals function:

name = "John"
age = 26

locals()
>>> {
...
'age': 26,
'name': 'John',
...
}

Now we just need to somehow pass it to the format function.
Unfortunately we cannot just call it as s.format(locals()):

print "Hi, I am {name} and I am {age} years old".format(locals())
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
<ipython-input-5-0fb983071eb8> in <module>()
----> 1 print "Hi, I am {name} and I am {age} years old".format(locals())

KeyError: 'name'

This is because locals returns a dictionary, while format expects key-value parameters. Luckily, we can convert a dictionary into key-value parameters using the ** operator. If we have a function that expects key-value arguments:

def foo(arg1=None, arg2=None):
    print "arg1 = " + str(arg1)
    print "arg2 = " + str(arg2)

We can pass parameters packed in a dictionary:

d = { 'arg1': 1, 'arg2': 42 }

foo(**d)
>>> arg1 = 1
arg2 = 42

Now we can use this technique to implement our first version of string interpolation:

print "Hi, I am {name} and I am {age} years old".format(**locals())
>>> Hi, I am John and I am 26 years old

It works but looks cumbersome. With this approach, every time we need to interpolate a string we would need to write format(**locals()). It would be great if we could write a function that would interpolate a string like this:

# Can we implement inter() function in Python?
print inter("Hi, I am {name} and I am {age} years old")
>>> Hi, I am John and I am 26 years old

At first it seems impossible, since if we move the interpolation code to another function, it would not be able to access local variables from the scope where it was called from:

name = "John"
print inter("My name is {name}")
...
def inter(s):
    # How can we access the "name" variable from here?
    return s.format(...)

And yet, it is possible.
Python provides a way to inspect the current stack with the sys._getframe function:

import sys

def foo():
    foo_var = 'foo'
    bar()

def bar():
    # sys._getframe(0) would return the frame for function "bar",
    # so we need to access the 1st frame
    # to get local variables from the "foo" function
    previous_frame = sys._getframe(1)
    previous_frame_locals = previous_frame.f_locals
    print previous_frame_locals['foo_var']

foo()
>>> foo

So the only thing that is left is to combine frame introspection with the format function. Here are 2 lines of code that do the trick:

def inter(s):
    return s.format(**sys._getframe(1).f_locals)

name = "John"
age = 26

print inter("Hi, I am {name} and I am {age} years old")
>>> Hi, I am John and I am 26 years old
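Because inter reads its caller's stack frame, it also picks up variables that are local to a function, not just module-level names. A quick check (written in Python 3 syntax; the function and variable names here are just for illustration):

```python
import sys

def inter(s):
    # Look one frame up the stack: the caller's frame holds its local variables.
    return s.format(**sys._getframe(1).f_locals)

def greet():
    city = "Paris"  # local to greet(), yet visible to inter() via the frame
    return inter("Hello from {city}")

print(greet())  # Hello from Paris
```

Inside inter, sys._getframe(1) is greet's frame, so f_locals contains city and the placeholder resolves correctly.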
https://dzone.com/articles/how-to-implement-string-interpolation-in-python-br
I am wondering how to read a file in C... this requires some more background information before you give me something useless.

What I have so far:

#include <stdio.h>

int main(void)
{
    FILE *fpr, *fpw;
    char C;
    int I;

    fpr = fopen("cwM.dat", "r");
    fpw = fopen("echocwM.dat", "w");

    while (C != EOF)
    {
        fscanf(fpr, "%c", &C);
        fprintf(fpw, "%c", C);
        I = C;
        printf("%d\n", I);
    }

    fclose(fpr);
    fclose(fpw);
    return 0;
}

my real question is what if the file I'm trying to read is: ÿdÿ

before you post a solution, check your code against the above file.
https://www.daniweb.com/programming/software-development/threads/411560/reading-characters-from-a-file
I have been trying for weeks to get my project running smoothly. It's as simple as it gets. I have set up an object pool and generator so that a few prefabs will move from right to left on the screen at random velocities. No matter what I try, I still get a lot of lag. It must be something simple that I'm missing. It's a simple 2D game with no more than 5 sprites moving at one time. It was working perfectly until a little while ago, and now for no reason it seems to lag terribly. I don't even think I changed these scripts. PLEASE if you have any idea why, let me know. Note that I have a box collider at the left edge of the screen (set as trigger) to detect when to disable the objects. I didn't include that because it's obvious how it should be done.

Generator.cs

using UnityEngine;
using System.Collections;

public class Generator : MonoBehaviour
{
    public ObjectPooler asteroidPool;

    private float _createPosX;
    private float _startDelay;

    void Awake()
    {
        _createPosX = 15.0f;
        _startDelay = 1.0f;
    }

    void Start()
    {
        StartCoroutine( GenerateAsteroids() );
    }

    IEnumerator GenerateAsteroids()
    {
        // Delay generation in the beginning of the game.
        yield return new WaitForSeconds( _startDelay );

        GameObject newAsteroid;
        float posY;
        float speed;
        float angVel;

        // Loop forever
        while( true )
        {
            // Randomize attributes of our new asteroid.
            posY = Random.Range( -5.0f, 5.0f );
            speed = Random.Range( 3.0f, 5.0f );
            angVel = Random.Range( -50.0f, 50.0f );

            // Select a prefab from our object pool and set it to active.
            newAsteroid = asteroidPool.GetPooledObject();
            newAsteroid.SetActive( true );

            // Cache a reference to the new asteroid's rigidbody component.
            Rigidbody2D rb = newAsteroid.GetComponent<Rigidbody2D>();

            // Cache a reference to the new asteroid's transform component.
            Transform newTransform = newAsteroid.transform;

            // Apply the appropriate speed, position, etc. to the asteroid.
            newTransform.position = new Vector2( _createPosX, posY );
            rb.velocity = Vector2.left * speed;
            rb.angularVelocity = angVel;

            // Generate a new asteroid after the given amount of time.
            yield return new WaitForSeconds( 2.2f );
        }
    }
}

ObjectPooler.cs

using UnityEngine;
using System.Collections;
using System.Collections.Generic;

public class ObjectPooler : MonoBehaviour
{
    public GameObject pooledObject;
    public int pooledAmount;

    private List<GameObject> _pooledObjects;
    private Transform _poolerTransform;

    void Awake()
    {
        _pooledObjects = new List<GameObject>();
        _poolerTransform = transform;
    }

    void Start()
    {
        for( int i = 0; i < pooledAmount; i++ )
        {
            GameObject go = Instantiate( pooledObject );
            go.transform.parent = _poolerTransform;
            go.SetActive( false );
            _pooledObjects.Add( go );
        }
    }

    public GameObject GetPooledObject()
    {
        // Cycle through all of our pooled objects...
        for( int i = 0; i < _pooledObjects.Count; i++ )
        {
            // If we find an object in our pool that is inactive...
            if( !_pooledObjects[i].activeInHierarchy )
            {
                // Return this game object.
                return _pooledObjects[i];
            }
        }

        // If we didn't find an available object in our pool, instantiate a new one.
        GameObject go = Instantiate( pooledObject );
        go.transform.parent = _poolerTransform;
        go.SetActive( false );
        _pooledObjects.Add( go );
        return( go );
    }

    public void ResetPool()
    {
        foreach( GameObject go in _pooledObjects )
        {
            go.SetActive( false );
        }
    }
}
Anyway, without a clearer picture of what problem is referred to as "lag" it is challenging to provide insight on the actual cause. By lag I mean the framerate drops every 1-2 seconds, causing a jitter/stutter in the motion of objects moving across the screen. In other words, the objects don't move smoothly. Does the game start lagging right after you start the game or after playing for a while. As soon as you open the game and as long as it runs. Answer by tanoshimi · Mar 02, 2016 at 08:20 PM Use Unity's built-in profiler to find out exactly what is taking the time in each frame. Already did. It says its Graphics.PresentAndSync takes up the most resources. But I also get spikes from this on an empty scene... Also the graphics I'm testing this on are just squares. No transparency or anything. So if that were the source of the lag I don't see how Unity could feasibly be used for any kind of development. Answer by Jessespike · Mar 02, 2016 at 09:03 PM it's the Instantiate that happens every 2.2s in the coroutine. Also the point of an object pool is so you do not have to continuously instantiate new objects. Where do you see that? The only time an instantiate can happen after starting up these scripts is if the object pool needs more than the 5 initial prefabs that are instantiated, which never really happens... Please do correct me if I'm wrong though. That could definitely be the source of the problem! I copy-pasted these scripts, threw in a cube prefab with a 2d rigid body and let it run. These scripts continuously instantiate for me. Perhaps its not the instantiate call specifically for the frame rate drop, but rather the deactivating and activating of physics objects that are called immediately afterwards. That may be because I did not include my script for deactivating the pooled objects above. As I said in my post, the way I do this is to have an empty game object with a box collider set as a trigger. 
Then anytime a rigidbody with a collider passes through this area, its gameobject is set to inactive. If you did not do this step, then yes the script above would continuously instantiate more and more prefabs because it can't find any in the pool that are disabled. Thank you for all your help though! What exactly do you mean by, "Perhaps its not the instantiate call specifically for the frame rate drop, but rather the deactivating and activating of physics objects that are immediately called"? The prefab has a rigid body (physics component). Generator calls GetPooledObjects and then calls SetActive(true), but inside of GetPooledObjects() right after the Instantiate, there is a call to SetActive(false). it just seems unnecessary. Answer by DuckWare Games · Mar 03, 2016 at 03:20 PM Try running Profiler to see what is causing the problem. Also as i saw on your script you are using foreach loop something that according to this is slowing the CPU. What about your sprites are you using one sprite for every element on your scene or is the resolution too high? A screenshot of Profiler would be great to understand in depth what causes the lag. Check out Unity Documentation for more tips: Here OR Check out this blog as it has some good tips: Here Answer by dan5071 · Mar 02, 2016 at 10:33 PM Finally figured out was was wrong!!! Had nothing to do with my scripts at all... I must have been messing with some project settings and turned VSync off. Now that I turned it back on it's smooth as butter. Thanks to all of you who were willing to offer. Rigidbody 2D Jittery 2 Answers Why do my first few collectable/powerup collisions cause lag but not later ones? 1 Answer Is this code inefficient? It is causing Lag. 1 Answer How to stop jittering with movement script 1 Answer Optimization -1 Answers EnterpriseSocial Q&A
https://answers.unity.com/questions/1149757/cant-find-optimization-problem-in-my-scripts.html
How to remove a project from PyCharm?

Assuming that you have the project opened in the PyCharm window, you can follow these steps to delete the project - In case I want to remove some ...READ MORE
https://www.edureka.co/community/53970/how-to-remove-a-project-from-pycharm?show=53974
LanScan is a tool designed for the purpose of searching for files and folders which are shared over a Local Area Network. It is particularly useful in organizations where a lot of folders are shared on individual systems. At the outset, I would like to declare that I have used a library designed by Robert Deeming, who has explained it in a separate article. I would like to thank him for sharing such a wonderful piece of work. This article assumes that you have a reasonable understanding of C#, a basic understanding of threads, and general knowledge of local area networks. Here's how the application works. You specify an IP range. The tool searches for live hosts. It then determines the list of folders shared by each system and looks for files (according to a keyword that you specify). The first point of interest would be - how to determine whether a system with a particular IP address is active. This can be achieved using the Ping class defined in the namespace System.Net.NetworkInformation. The Ping class offers both synchronous and asynchronous methods to detect whether a remote host is reachable. I would be using the asynchronous method because I don't want the system to halt whenever I ping a remote host. Here's the code for it. Ping pingSender = new Ping(); // When the PingCompleted event is raised, // the AddHost method is called. pingSender.PingCompleted += new PingCompletedEventHandler(AddHost); // Wait 1000 milliseconds for a reply. int timeout = 1000; // Create a one-byte data buffer to be transmitted. byte[] buffer = new byte[] { 100 }; // Set options for transmission: // The data can go through 64 gateways or routers // before it is destroyed, and the data packet // cannot be fragmented. PingOptions options = new PingOptions(64, true); pingSender.SendAsync("192.168.209.178", timeout, buffer, options, null); AddHost is called when a Ping operation is complete.
Here's how to check the result of the Ping operation. private void AddHost(object sender, PingCompletedEventArgs e) { PingReply reply = e.Reply; if (reply == null) return; if (reply.Status == IPStatus.Success) { // Code to be executed if a remote host // is found to be active. // reply.Address returns the address. } } Now the second point of interest would be how to list the shared folders for an active remote host. Here's where Robert Deeming's library comes into the picture. Two classes defined in this library are ShareCollection and Share. ShareCollection is a class capable of determining the shared folders for a given IP address and storing the information in objects of type Share. Given below is the code to explain the working. ShareCollection sc = new ShareCollection(reply.Address.ToString()); foreach (Share s in sc) { // Do what you want with each individual Share object. // s.NetName returns the name of the shared folder. // s.ShareType returns the type of share. } Lastly, to search for files in a particular share, you have to use recursion. The basic logic is to begin in a shared folder. Process the names of each file in that folder. Once this is done, recursively apply the same logic to each sub-directory until no files or folders remain. A problem with this approach arises if the directory structure is too deep. In this case, the tool would waste a lot of time looking for files in a particular share. To prevent this, we employ the feature of search depth. Search depth is an integer variable which is incremented by one every time the search moves to a folder a level deeper in the directory tree. Whenever the variable exceeds a particular value (generally a small integer like 2 or 3) the recursion stops and the function returns. Here's the code to explain the point.
ShareCollection sc = new ShareCollection(reply.Address.ToString()); foreach (Share share in sc) { RecursiveSearch(share.ToString(), 0); } Then we declare the recursive function: private void RecursiveSearch(string path, int depth) { DirectoryInfo di = new DirectoryInfo(path); if (!di.Exists) return; depth++; // searchDepth is a static variable defined in a static class Settings // Used like a global variable. Typical value is 2 if (depth > Settings.searchDepth) return; // Keyword to look for string keyword = Settings.keyword; if (di.Name.ToLower().Contains(keyword.ToLower())) { AddRow(di.Name, di.Parent.FullName, "Folder"); } foreach (FileInfo file in di.GetFiles()) { if (file.Name.ToLower().Contains(keyword.ToLower())) { AddRow(file.Name, di.FullName, file.Length.ToString()); } } foreach (DirectoryInfo subdir in di.GetDirectories()) { RecursiveSearch(subdir.FullName, depth); } } Well...that's all there is to it. Of course, there are a lot of interface design issues which I am not discussing here because explaining GUI is not the objective behind this article. In addition, there is a lot of exception handling and other minor details which I choose not to discuss. After all, something should be left to imagination. Nevertheless you can find it all in the source code (see the link at the top of the page). In every little project which I have done, I find that there is a bit of weird coding that you can't understand (rather you don't want to), you don't want to use it, but you have to and you do. It seems that Windows Form Controls are not very thread friendly, so whenever you try to update a Form Control from a thread other than the thread you created, you get unexpected results. So here's how to get away with it. This code shows how to add rows to a table in a dataGridView (dgvResult) irrespective of the thread. This method can be modified to handle any other Windows Form Control.
delegate void SetCallback(params object[] objects); private void AddRow(params object[] objects) { if (this.dgvResult.InvokeRequired) { SetCallback d = new SetCallback(AddRow); this.Invoke(d, new object[] {objects}); } else { (dgvResult.DataSource as DataTable).Rows.Add(objects); } } More information about this problem can be found in this article by Rüdiger Klaehn. This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL) Hi Amit, this is Ramana. I am doing a project on Local Area Networks in Java. How do I find shared folders on LAN systems and how do I access these folders? Please help me.
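The depth-limited recursion described in the article is language-agnostic. Here is a runnable Python analogue (standard library only; it collects hits instead of calling AddRow, and matching of child names stands in for the article's folder/file checks):

```python
import os
import tempfile

def recursive_search(path, keyword, depth=0, max_depth=2, hits=None):
    """Collect entry names containing keyword, descending at most
    max_depth directory levels below path."""
    if hits is None:
        hits = []
    depth += 1
    if depth > max_depth:      # stop when the tree gets too deep
        return hits
    for entry in os.scandir(path):
        if keyword.lower() in entry.name.lower():
            hits.append(entry.name)
        if entry.is_dir():
            recursive_search(entry.path, keyword, depth, max_depth, hits)
    return hits

# Demo on a throwaway tree: matches at levels 1-2 are found,
# anything deeper than max_depth is skipped.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "shared_key", "sub", "deep_key"))
open(os.path.join(root, "shared_key", "key_file.txt"), "w").close()
hits = recursive_search(root, "key")
print(sorted(hits))  # → ['key_file.txt', 'shared_key']
```

Note how "deep_key" is never reported: the recursion gives up two levels down, exactly the cutoff behavior the search-depth variable provides in the C# version.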
http://www.codeproject.com/Articles/16338/Search-Engine-for-Local-Area-Network-LAN?msg=4533264
Help:GetAround Getting around RationalWiki The front page contains many blue[1] links to content. Clicking on these links will take you to the page or to a further set of choices. Everything (well, almost everything - see below) within the wiki is reachable like this. (Some words may be red: these are links inserted without a target - naughty.) [2] Exceptions: - Talk pages: - all pages (except talk pages of course - that'd be too recursive even for some of the geeks on here) have an accompanying talk page. To see this page click on the Discussion tab across the top when the page is open. If you edit it to leave a comment, please always "sign" your talk page contributions by typing four tildes (~~~~) after your comments. It is customary to use the colon (:) to indent comments - each colon indents a little more, and shows what you are replying to. Please, please, please do not use asterisks (*) to bullet your comments in talk pages. - Special pages (don't have talk pages!): - Recent changes. This page, if refreshed frequently, gives an ongoing view of what's happening and who is doing it. Using the Search/Go box Anything may be entered into the box: - if Search is then clicked, two searches are made: one seeks a page of that name and reports if one is found; the other looks through all pages covered by your preferences' (qv) search criteria and finds every instance of your word or words. (Simple words like "of", "and", "the" etc. will not be searched.) - if Go is clicked the search will be solely for a page where the title is the same as the entry. If the page is not found, you will be offered the opportunity of creating it. As spelling and namespace are integral to the page identity, you should check that a page which you create does not duplicate another of similar spelling (singular/plural, British/American etc.) or one in a particular namespace not included in your search criteria. Using the Random page button The random page button does what it says.
The random page will only take you to articles in the main space. That means random page will never lead to a talk page, a special page, or a page with any of the following namespaces: Fun, Rationalwiki, Conservapedia, Recipe, Image, User, Template, Debate, Portal, Category, and some others. (See Help:Namespace for more about what those terms mean.) Before editing a new page please familiarise yourself with RW's standards and What is an RW article. Notes - ↑ Assuming that you have kept the default skin. If you haven't then it's entirely your fault and you'll have to live with it. - ↑ see: Help:Bored (If you've played with your preferences you mightn't see it red - again this is your fault & you'll have to live with it.)
https://rationalwiki.org/wiki/Help:GetAround
How to get this special ID and click it? Similar Content - jmp: How can I click on a (pop-up) button based on its title or class? <button title="close" class="close" type="button" data-×</button> <button class="btn btn-default" type="button" data-Close</button> - By jmp: I am trying to click on td items using this code: #include <IE.au3> #include <MsgBoxConstants.au3> $oIE = _IEAttach ("Webpage") $sMName = "Akshay Vora" Local $oTDs = _IETagNameGetCollection($oIE, "td") For $oTD in $oTDs If String($oTD.InnerText) = $sMName Then MsgBox(0, "", $sMName & @CRLF & $oTD.InnerText) _IEAction($oTD, "click") ExitLoop EndIf Next This works on other td items, but if $sMName matches the first td, it shows the msgbox and does not click on the matched td item.
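The AutoIt loop above walks the page's td collection and compares each cell's InnerText against the target. The same find-by-text idea, sketched in Python over a parsed fragment (standard library only; a real web page would need a proper HTML parser and browser automation, which this does not attempt):

```python
import xml.etree.ElementTree as ET

# A tiny stand-in for the page's table markup.
fragment = "<table><tr><td>Someone Else</td><td>Akshay Vora</td></tr></table>"
root = ET.fromstring(fragment)

target = "Akshay Vora"
# Iterate every <td>, exactly like the _IETagNameGetCollection loop,
# and stop at the first exact text match.
match = next((td for td in root.iter("td") if td.text == target), None)
if match is not None:
    print("would click:", match.text)  # → would click: Akshay Vora
```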
https://www.autoitscript.com/forum/topic/176507-how-to-get-this-special-id-and-click-it/?tab=comments
I have a WCF endpoint with a decimal value. Somewhere (between submitting the request and the first line I can debug on the WCF service) the value is getting converted to a zero. I can call the same service, with the same parameters, from a .NET application and don't see the same problem, so this feels strongly like a problem with the way that SoapUI is handling the request. I'm baffled. Does anyone have any ideas? UPDATE: I have traced the request using Fiddler. The decimal value is intact after it has left SoapUI, but is getting changed before it is consumed. The value is not getting changed if it is submitted by my .NET application. Still baffled. The resolution in my case was to change the order of my request parameters. I was able to determine this by enabling WCF tracing, including message payloads, and then comparing the payloads from my .NET application against the payloads from SoapUI. The payloads are massively different, but ignoring namespaces, correlation ids, keys and dates I was able to determine that my problematic parameter was in a different position. Changing the order within the SoapUI XML request resolved the issue. I've yet to determine why SoapUI generated the parameters in a different order, and why (when this is XML) this order makes any difference on the server 😞
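The fix above (reordering request parameters) makes sense if the server deserializes child elements positionally rather than by name. This toy simulation of a sequence-strict reader is not WCF code and the names are made up, but it shows how an out-of-position element silently falls back to a default, which is exactly the "decimal becomes zero" symptom:

```python
import xml.etree.ElementTree as ET

def read_sequence(xml_text, expected_order, default="0"):
    # Consume child elements strictly in the declared order;
    # anything out of position falls back to the default.
    children = list(ET.fromstring(xml_text))
    out, i = {}, 0
    for name in expected_order:
        if i < len(children) and children[i].tag == name:
            out[name] = children[i].text
            i += 1
        else:
            out[name] = default
    return out

good = read_sequence("<req><amount>12.5</amount><id>7</id></req>", ["amount", "id"])
bad  = read_sequence("<req><id>7</id><amount>12.5</amount></req>", ["amount", "id"])
print(good["amount"], bad["amount"])  # → 12.5 0
```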
https://community.smartbear.com/t5/API-Functional-Security-Testing/Decimal-value-converted-getting-converted-to-zero/td-p/115225
As you'll see when you look at the lists of modules and their authors on CPAN, many users have made their modules freely available. If you find an interesting problem and are thinking of writing a module to solve it, check the modules directory on CPAN first to see if there is a module there that you can use. The chances are good that there is a module that does what you need, or perhaps one that you can extend, rather than starting from scratch.[4] [4]If you are interested in writing and contributing modules, there are several good starting points for learning to do so—see the perlmodlib manpage, the "Perl 5 Module List," and the "Perl Authors Upload Server." Before you download a module, you might also check your system to see if it's already installed. The following command searches the libraries in the @INC array and prints the names of all modules it finds: find `perl -e 'print "@INC"'` -name '*.pm' -print If you start from the modules directory on CPAN, you'll see that the modules are categorized into three subdirectories: by-authors Modules by author's registered CPAN name by-category Modules by subject matter (see below) by-module Modules by namespace (i.e., MIME) If you know what module you want, you can go directly to it by clicking on the by-module entry. If you are looking for a module in a particular category, you can find it in the by-category subdirectory. If you know the author, click on by-author. However, if you aren't familiar with the categories and want to find a module that performs a certain task, you might want to get the file 00modlist.long.html, also in the modules directory. This file is the "Perl 5 Modules List."
It contains a list of all the modules, by category, with a brief description of the purpose of each module and a link to the author's CPAN directory for downloading. Here is a list of the Perl Module categories, plus two for modules that don't fit anywhere else: 02_Perl_Core_Modules 03_Development_Support 04_Operating_System_Interfaces 05_Networking_Devices_IPC 06_Data_Type_Utilities 07_Database_Interface 08_User_Interfaces 09_Language_Interfaces 10_File_Names_Systems_Locking 11_String_Lang_Text_Proc 12_Opt_Arg_Param_Proc 13_Internationalization_Locale 14_Security_and_Encryption 15_World_Wide_Web_HTML_HTTP_CGI 16_Server_and_Daemon_Utilities 17_Archiving_and_Compression 18_Images_Pixmaps_Bitmaps 19_Mail_and_Usenet_News 20_Control_Flow_Utilities 21_File_Handle_Input_Output 22_Microsoft_Windows_Modules 23_Miscellaneous_Modules 24_Commercial_Software_Interfaces 99_Not_In_Modulelist 99_Not_Yet_In_Modulelist If you are in the by-categories subdirectory and have selected an area from which you'd like to download a module, you'll find a list of the files in the directory. tar files have a .tar.gz extension, and README files have a .readme extension. You'll generally find a README file for each module; take a look at it before you decide to download the file. Here's a sample directory listing from category 14, under the MD5 directory: Digest-MD5-2.09.readme Digest-MD5-2.09.tar.gz GAAS GARY MD5-1.5.3.readme MD5-1.5.3.tar.gz MD5-1.6.readme MD5-1.6.tar.gz MD5-1.7.readme MD5-1.7.tar.gz NWINT You'll notice that multiple versions are sometimes listed—for example, the MD5 module has Versions 1.5.3 through 1.7 available. Generally, this is to facilitate the transition to a new version of the module. Select the .readme file of the most current archive and review its contents carefully. README files often give special instructions about building the module; they warn you about other modules needed for proper functioning and if the module can't be built under certain versions of Perl. 
If you're satisfied with what you read, download the file. If you're running the standard distribution of Perl, on either a Unix or Win32 system, and you want to install a module, this section explains how to do it. If you are running the ActiveState Win32 port, you can follow the instructions covered in this section, unless you're running on a system without a development toolkit; if this is the case, see the next section. Before installing modules, you should understand at least a little about make. make is a command designed to automate compilations; it guarantees that programs are compiled with the correct options and are linked to the current version of program modules and libraries. But it's not just for programmers—make is useful for any situation in which there are dependencies among a group of related files. make uses a file known as a Makefile, which is a text file that describes the dependencies and contains instructions that tell make what to do. A Perl programmer who writes a module creates a file called Makefile.PL that comes with the module when you download it. Makefile.PL is a Perl script that uses another module, ExtUtils::MakeMaker (generally referred to as simply MakeMaker), to generate a Makefile specific to that module on your system. Before you can actually install the module, you need to decide where it should go. Modules can be installed either globally, for everyone to use, or locally, for your own use. Most system administrators install popular software, including Perl modules, to be globally available. In that case, the modules are generally installed in a branch of the lib directory with the rest of the Perl libraries. If you have root privileges or write access to the locations where Perl modules are installed on your system, you can proceed by moving the downloaded module file to the correct directory and running gunzip and tar to unpack it. 
Then cd to the module directory and check any README or INSTALL files, check the MANIFEST file to be sure everything is there. If all is well, you can run the following to complete the installation: % perl Makefile.PL % make % make test % make install If you're on a Win32 platform and are using Mingw32, do the following: C:\modulename-version> perl Makefile.PL C:\modulename-version> dmake C:\modulename-version> dmake test C:\modulename-version> dmake install It's possible that you'll need to customize Makefile.PL before running it. If so, see the discussion of ExtUtils::MakeMaker in Chapter 8, "Standard Modules". Or, if you know the MakeMaker options that you'd like to add to Makefile.PL, you can add these options on the command line. A typical scenario would be on a system where you've installed a precompiled version of Perl, and the CC and LD options in Config.pm don't match your programming environment; thus, Perl modules won't build correctly. To solve this problem, you can do the following: % perl Makefile.PL CC=gcc LD=gcc If you are going to install the module locally (for example, if you don't have permission to install globally or you want to test it locally before installing it for general use), you need to pass a PREFIX argument to Perl when you run Makefile.PL to generate the Makefile. This argument tells MakeMaker to use the directory following PREFIX as the base directory when installing the module. For example, to install a module in the directory /home/mydir/Perl/Modules, the PREFIX argument would look like this: % perl Makefile.PL PREFIX=/home/mydir/Perl/Modules Then follow the remaining steps, as above: % make % make test % make install The module is now available, but when you write Perl code to use the module, there's another detail to take care of. Since Perl looks in system-wide directories as specified in the special array @INC, it won't find local modules unless you tell it where they are. 
Instead, you'll receive an error message such as the following: Can't locate <ModuleName>.pm in @INC. BEGIN failed--compilation aborted. Thus, if you installed the module in /home/mydir/Perl/Modules, you need to tell Perl to look in that location with the command use lib 'path': #!/usr/local/bin/perl -w use lib '/home/mydir/Perl/Modules'; use ModuleName; Prior to Perl 5.005, ActiveState's Perl for Win32 did not support MakeMaker. If you are running Perl 5.004 (or earlier), you should upgrade because the absence of MakeMaker prevents you from installing and using most current Perl modules. While some modules can be installed manually, this is not suggested, since it's likely that something will be forgotten, and the module won't work correctly! You should follow all module documentation to determine which installation technique is the proper one, so that everything will be okay. With 5.6 and later, you can use MakeMaker to install the modules, or you can use the Perl Package Manager that comes with ActivePerl. To install a module using MakeMaker, follow the procedure described earlier for installing when you are running the standard distribution, replacing make with nmake or dmake as appropriate. The Perl Package Manager (PPM) provides a command-line interface for obtaining and installing Perl modules and extensions. To run PPM, connect to CPAN and type: perl ppm.pl The PPM prompt appears, and you can begin to enter PPM commands. The available commands are: If you are just getting and installing one or a few modules, it's not a big problem to download the module's tarball and run through the build process manually. But if you don't want to cope with the brute-force approach when dealing with large module installations (such as LWP and the CPAN bundle), there is an easier way—you can use the CPAN module.
The CPAN module (CPAN.pm) can be used interactively from the command line to locate, download, and install Perl modules and their dependencies, or to identify modules and authors. CPAN.pm was designed to automate the installation of Perl modules; it includes searching capabilities and the ability to retrieve files from one or more of the mirrored CPAN sites and unpack them in a dedicated directory. To run the CPAN module interactively, enter: % perl -MCPAN -e shell The first time you use the CPAN module, it takes you through a series of setup questions and writes CPAN::Config if you run the above as root or your administrative user. If the above is run as a user who does not have administrative permissions, CPAN.pm determines who you are and writes MyConfig.pm in a subdirectory of your home directory (defaults to ~/.cpan/CPAN/MyConfig.pm). After that, whenever you use the CPAN module for downloading other modules, it uses the .cpan directory as the general build and cache directory, saved as cpan_home in the configuration file. If ReadLine support is available (i.e., Term::ReadKey and Term::ReadLine are installed), you can use command history and command completion as you enter commands. When the module runs and is ready for commands to be entered, you'll see the prompt: cpan> You can then enter h to get a brief help message, or just start entering commands. The commands are all methods in the CPAN::Shell package. For commands that can operate on modules, bundles, authors, or distributions, CPAN.pm treats arguments containing a slash (/) as distributions, arguments beginning with Bundle:: as bundles, and everything else as modules or authors. The following is a listing of the interactive CPAN commands. ? Displays brief help message. Same as h command. ! perl-code evals a Perl command. a [authorlist] Searches for CPAN authors. 
Arguments can be strings that must be matched exactly or regular expressions, which must be enclosed in slashes and are matched in case-insensitive fashion. With no arguments, it returns a list of all authors, by CPAN ID. With arguments, it returns a list of authors if there is more than one that matches the criteria, or it returns additional information if a single author is returned. String and regular expression arguments can be combined in the same command. cpan> a /^nv/ LWALL Author NVPAT (Nathan V. Patwardhan) Author LWALL (Larry Wall. Author of Perl. Busy man.) cpan> a /^nv/ Author id = NVPAT FULLNAME Nathan V. Patwardhan autobundle [bundlelist] Writes a bundle file containing a list of all modules that are both available from CPAN and currently installed within @INC. The file is written to the Bundle subdirectory of cpan_home with a name that contains the current date and a counter—for example, Snapshot_1998_04_27_00.pm. You can use this file as input to the install command to install the latest versions of all the modules on your system: perl -MCPAN -e 'install Bundle::Snapshot_1998_04_27_00' b [bundlelist] Searches for CPAN bundles. Arguments are the same as for the a command, except they specify bundles. With a single argument, b displays details about the bundle; with multiple arguments, it displays a list. clean [arglist] Does a make clean in the distribution file's directory. arglistcan include one or more modules, bundles, distributions, or one of the values r or u to reinstall or uninstall. d [distriblist] Displays information about module distributions for the distribution(s) specified in distriblist. Arguments are the same as for the a command. Displays details for a single argument, or a list if the output consists of multiple distributions. force method [arglist] Takes as a first argument the method to invoke, which can be one of make, test, or install, and executes the command from scratch for each argument in arglist. 
The arguments may be modules or distributions. Be warned that force also allows you to install modules that have failed some or all of the module tests. If installing modules that have failed their tests bothers you, then you shouldn't use force. h Displays brief help message. Same as ?. i [arglist] Displays information about the arguments specified in arglist, which can be an author, module, bundle, or distribution. Arguments and output are the same as for a. install [arglist] Installs the arguments specified in arglist, which can be modules or distributions. Implies test. For a distribution, install is run unconditionally. For a module, CPAN.pm checks to see if the module is up-to-date, and if so, prints a message to that effect and does not do the install. Otherwise, it finds and processes the distribution that contains the module. look arg Takes one argument, which is a module or distribution, gets and untars the distribution file if necessary, changes to the appropriate directory, and opens a subshell process in that directory. m [arglist] Displays information about modules. Arguments are the same as for the a command. Displays details for a single module, or a list if there is more than one in the output. make [arglist] Unconditionally runs a make on each argument in arglist, which can be a module or a distribution. For a module, CPAN.pm finds and processes the distribution that contains the module. o type [option] [value] Sets and queries options. 
Takes the following arguments:
- build_cache: Size of the cache for directories to build modules
- build_dir: Locally accessible directory to build modules
- index_expire: Number of days before refetching index files
- cpan_home: Local directory reserved for this package
- gzip: Location of the external gzip program
- inactivity_timeout: Breaks an interactive Makefile.PL after this many seconds of inactivity (set to 0 to never break)
- inhibit_startup_message: If true, does not print the startup message
- keep_source: If set, keeps source in a local directory
- keep_source_where: Where to keep source
- make: Location of the external make program
- make_arg: Arguments to always pass to make
- make_install_arg: Same as make_arg, for make install
- makepl_arg: Arguments to always pass to perl Makefile.PL
- pager: Location of the external more program (or other pager)
- tar: Location of the external tar program
- unzip: Location of the external unzip program
- urllist: Arrayref of nearby CPAN sites (or equivalent locations, such as a CD-ROM)
- cache_metadata: Uses a serializer to cache metadata
- prerequisites_policy: Sets the behavior for handling module dependencies; options are follow (install prerequisites automatically), ask, and ignore
- scan_cache: Controls the cache-scanning behavior; options are atstart and never
- wait_list: An arrayref that contains the wait server(s) to try
q Quits the CPAN module shell. r Recommendations for reinstallation. With no argument, lists all distributions that are out-of-date. With an argument, tells you whether that module or distribution is out-of-date. readme arglist Finds and displays the README file for modules or distributions in arglist. recompile Runs the full make/test/install cycle over all installed dynamically loadable modules with force in effect. Useful for completing a network installation on systems after the first installation, in which the CPAN module would otherwise declare the modules already up-to-date. reload arg Reloads the index files or re-evals CPAN.pm itself.
arg may be index or cpan. test Runs the make test command. u Lists all uninstalled distributions.
https://docstore.mik.ua/orelly/perl3/perlnut/ch02_04.htm
Given two complex numbers in the form a1 + ib1 and a2 + ib2, the task is to add them. Complex numbers are numbers which can be expressed in the form "a + ib", where "a" and "b" are real numbers and i is the imaginary unit, the solution of the equation x^2 = -1; since no real number satisfies that equation, i is called imaginary. Input a1 = 3, b1 = 8 a2 = 5, b2 = 2 Output Complex number 1: 3 + i8 Complex number 2: 5 + i2 Sum of the complex numbers: 8 + i10 Explanation (3+i8) + (5+i2) = (3+5) + i(8+2) = 8 + i10 Input a1 = 5, b1 = 3 a2 = 2, b2 = 2 Output Complex number 1: 5 + i3 Complex number 2: 2 + i2 Sum of the complex numbers: 7 + i5 Explanation (5+i3) + (2+i2) = (5+2) + i(3+2) = 7 + i5 Declare a struct for storing the real and imaginary parts. Take the input and add the real parts and imaginary parts of the two complex numbers. Start Declare a struct complexnum with the following elements 1. real 2. img In function struct complexnum sumcomplex(struct complexnum a, struct complexnum b) Step 1→ Declare a struct complexnum c Step 2→ Set c.real as a.real + b.real Step 3→ Set c.img as a.img + b.img Step 4→ Return c In function int main() Step 1→ Declare and initialize struct complexnum a = {1, 2} and b = {4, 5} Step 2→ Declare and set struct complexnum c as sumcomplex(a, b) Step 3→ Print the first complex number Step 4→ Print the second complex number Step 5→ Print the sum of both from c.real, c.img Stop
#include <stdio.h>
//structure for storing the real and imaginary
//parts of a complex number
struct complexnum{
   int real, img;
};
struct complexnum sumcomplex(struct complexnum a, struct complexnum b){
   struct complexnum c;
   //Adding up two complex numbers
   c.real = a.real + b.real;
   c.img = a.img + b.img;
   return c;
}
int main(){
   struct complexnum a = {1, 2};
   struct complexnum b = {4, 5};
   struct complexnum c = sumcomplex(a, b);
   printf("Complex number 1: %d + i%d\n", a.real, a.img);
   printf("Complex number 2: %d + i%d\n", b.real, b.img);
   printf("Sum of the complex numbers: %d + i%d\n", c.real, c.img);
   return 0;
}
If you run the above code it will generate the following output − Complex number 1: 1 + i2 Complex number 2: 4 + i5 Sum of the complex numbers: 5 + i7
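As a quick cross-check of the componentwise arithmetic, Python's built-in complex type performs the same addition:

```python
a = complex(3, 8)   # 3 + i8
b = complex(5, 2)   # 5 + i2
s = a + b           # real parts and imaginary parts add independently
print(s)            # → (8+10j)
```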
https://www.tutorialspoint.com/program-to-add-two-complex-numbers-in-c
package BioWords; use strict; use Tracer; =head1 BioWords Package Microbiological Word Conflation Helper =head2 Introduction This object is in charge of managing keywords used to search the database. Its purpose is to ensure that if a user types something close to the correct word, a usable result will be returned. A keyword string consists of words separated by delimiters. A I<word> is an uninterrupted sequence of letters, semidelimiters (currently only C<'>) and digits. A word that begins with a letter is called a I<real word>. For each real word we produce two alternate forms. The I<stem> represents the root form of the word (e.g. C<skies> to C<ski>, C<following> to C<follow>). The I<phonex> is computed from the stem by removing the vowels and equating consonants that produce similar sounds. It is likely a misspelled word will have the same phonex as its real form. In addition to computing stems and phonexes, this object also I<cleans> a keyword. I<Cleaning> consists of converting upper-case letters to lower case and converting certain delimiters. In particular, bar (C<|>), colon (C<:>), and semi-colon (C<;>) are converted to a single quote (C<'>) and period (C<.>) and hyphen (C<->) are converted to underscore (C<_>). The importance of this is that the single quote and underscore are considered word characters by the search software. The cleaning causes the names of chemical compounds and the IDs of features and genomes to behave as words when searching. Search words must be at least three characters long, so the stem of a real word with only three letters is the word itself, and any real word with only two letters is discarded. In addition, there is a list of I<stop words> that are discarded by the keyword search. These will have an empty string for the stem and phonex.
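The cleaning step described above is a simple character mapping. The module itself is Perl; this Python sketch just illustrates the stated rules (bar, colon, and semicolon become a single quote; period and hyphen become underscore; upper case becomes lower case):

```python
def clean(keyword):
    # | : ;  ->  '      . -  ->  _      A-Z -> a-z
    return keyword.lower().translate(str.maketrans("|:;.-", "'''__"))

print(clean("fig|ID.1-A"))  # → fig'id_1_a
```

After cleaning, the quote and underscore count as word characters, so an ID like the one above is treated as a single searchable word.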
Note that the stemming algorithm differs from the standard for English because of the use of Greek and Latin words in chemical compound names and genome taxonomies. The algorithm has been evolving in response to numerous experiments and is almost certainly not in its last iteration. The fields in this object are as follows. =over 4 =item stems Hash of the stems found so far. This is cleared by L</AnalyzeSearchExpression>, so it can be used by clients to determine the number of search expressions containing a particular stem. =item cache Reference to a hash that maps a pure word to a hash containing its stem, a count of the number of times it has occurred, and its phonex. The hash is also used to keep exceptions (which map to their predetermined stem) and stop words (which map to an empty string). The cache should only be used when the number of words being processed is small. If multiple millions of words are put into the cache, it causes the application to hang. =item stopFile The name of a file containing the stop word list, one word per line. The stop word file is read into the cache the first time we try to stem a pure word. Once the file is read, this field is cleared so that we know it's handled. =item exceptionFile The name of a file containing exception rules, one rule per line. Each rule consists of a space-delimited list of words followed by a single stem. The exception file is read into the cache the first time we try to stem a pure word. Once the file is read, this field is cleared so that we know it's handled. =item cacheFlag TRUE if incoming words should be cached, else FALSE. =item VOWEL The list of vowel characters (lower-case). This defaults to the value of the compile-time constant VOWELS, but may be overridden by the constructor. =item LETTER The list of letter characters (lower-case). This defaults to the value of the compile-time constant LETTERS, but may be overridden by the constructor. 
All of the vowels should be included in the list of letters. =item DIGIT The list of digit characters (lower-case). This defaults to the value of the compile-time constant DIGITS, but may be overridden by the constructor. =item WORD The list of all word-like characters. This is the union of the letters and digits. =back We allow configuration of letters, digits, and vowels; but in general the stemming and phonex algorithms are aware of the English language and what the various letters mean. The main use of the configuration strings is to allow flexibility in the treatment of special characters, such as underscore (C<_>) and the single quote (C<'>). The defaults have all been chosen fairly carefully based on empirical testing, but of course everything is subject to evolution. =head2 Special Declarations =head3 EMPTY The EMPTY constant simply evaluates to the empty string. It makes the stemming rules more readable. =cut use constant EMPTY => ''; =head3 SHORT The SHORT constant specifies the minimum length for a word. A word shorter than the minimum length is treated as a stop word. =cut use constant SHORT => 3; =head3 VOWELS String containing the characters that are considered vowels (lower case only). =cut use constant VOWELS => q(aeiou_); =head3 LETTERS String containing the characters that are considered letters (lower case only). =cut use constant LETTERS => q(abcdefghijklmnopqrstuvwxyz_); =head3 DIGITS String containing the characters that are considered digits (lower case only). =cut use constant DIGITS => q(0123456789'); =head3 new my $bw = BioWords->new(%options); Construct a new BioWords object. The following options are supported. =over 4 =item exceptions Name of the exception file, or a reference to a hash containing the exception rules. The default is to have no exceptions. =item stops Name of the stop word file, or a reference to a list containing the stop words. The default is to have no stop words. 
=item vowels List of characters to be treated as vowels (lower-case only). The default is a compile-time constant. =item letters List of characters to be treated as letters (lower-case only). The default is a compile-time constant. =item digits List of characters to be treated as digits (lower-case only). The default is a compile-time constant. =item cache If TRUE, then words will be cached when they are processed. If FALSE, the cache will only be used for stopwords and exceptions. The default is TRUE. =back =cut sub new { # Get the parameters. my ($class, %options) = @_; # Get the options. my $exceptionOption = $options{exceptions} || "$FIG_Config::sproutData/Exceptions.txt"; my $stopOption = $options{stops} || "$FIG_Config::sproutData/StopWords.txt"; my $vowels = $options{vowels} || VOWELS; my $letters = $options{letters} || LETTERS; my $digits = $options{digits} || DIGITS; my $cacheFlag = (defined $options{cache} ? $options{cache} : 1); my $cache = {}; # Create the BioWords object. my $retVal = { cache => $cache, cacheFlag => $cacheFlag, stopFile => undef, exceptionFile => undef, stems => {}, VOWEL => $vowels, LETTER => $letters, DIGIT => $digits, WORD => "$letters$digits" }; # Now we need to deal with the craziness surrounding the exception hash and the stop word # list, both of which are loaded into the cache before we start processing anything # serious. The exceptions and stops could be passed in as hash references, in which case # we load them into the cache. Alternatively, they could be file names, which we save # to be read in when we need them. So, first, we check for an exception file name. if (! ref $exceptionOption) { # Here we have a file name. We store it in the object. $retVal->{exceptionFile} = $exceptionOption; } else { # Here we have a hash. Slurp it into the cache. for my $exceptionWord (keys %{$exceptionOption}) { Store($retVal, $exceptionWord, $exceptionOption->{$exceptionWord}, 0); } } # Now we check for a stopword file name. if (! 
ref $stopOption) { # Store it in the object. $retVal->{stopFile} = $stopOption; } else { # No file name, so slurp in the list of words. for my $stopWord (@{$stopOption}) { Stop($retVal, $stopWord); } } # Bless and return the object. bless $retVal, $class; return $retVal; } =head2 Public Methods =head3 Stop $bio->Stop($word); Denote that a word is a stop word. =over 4 =item word Word to be declared as a stop word. =back =cut sub Stop { # Get the parameters. my ($self, $word) = @_; Trace("$word is a stop word.") if T(4); # Store the stop word. $self->{cache}->{$word} = {stem => EMPTY, phonex => EMPTY, count => 0 }; } =head3 Store $bio->Store($word, $stem, $count); Store a word in the cache. The word will be mapped to the specified stem and its count will be set to the specified value. The phonex will be computed automatically from the stem. This method can also be used to store exceptions. In that case, the count should be C<0>. =over 4 =item word Word to be stored. =item stem Proposed stem. =item count Proposed count. This should be C<0> for exceptions and C<1> for normal words. The default is C<1>. =back =cut sub Store { # Get the parameters. my ($self, $word, $stem, $count) = @_; # Default the count. my $realCount = (defined $count ? $count : 1); # Get the phonex for the specified stem. my $phonex = $self->_phonex($stem); # Store the word in the cache. $self->{cache}->{$word} = { stem => $stem, phonex => $phonex, count => $realCount }; } =head3 Split my @words = $bio->Split($string); Split a string into keywords. A keyword in this context is either a delimiter sequence or a combination of letters, digits, underscores (C<_>), and isolated single quotes (C<'>). All letters are converted to lower case, and any white space sequence inside the string is converted to a single space. Prior to splitting the string, certain strings that have special biological meaning are modified, and certain delimiters are converted. This helps to resolve some ambiguities (e.g.
which alias names use colons and which use vertical bars) and makes strings such as EC numbers appear to be singleton keywords. The list of keywords we output can be rejoined and then passed unmodified to a keyword search; however, before doing that the individual pure words should be stemmed and checked for spelling. =over 4 =item string Input string to process. =item RETURN Returns a list of normalized keywords and delimiters. =back =cut sub Split { # Get the parameters. my ($self, $string) = @_; # Convert letters to lower case and collapse the white space. Note that we use the "s" modifier on # the substitution so that new-lines are treated as white space, and we take precautions so that # an undefined input is treated as a null string (which saves us from compiler warnings). my $lowered = (defined($string) ? lc $string : ""); $lowered =~ s/\s+/ /sg; # Connect the TC prefix to TC numbers (lower-case, since the string has already been folded). $lowered =~ s/tc ((?:\d+|-)(?:\.(?:\d+|-)){3})/tc_$1/g; # Trim the leading space (if any). $lowered =~ s/^ //; # Fix the periods in EC and TC numbers. Note here we are insisting on real # digits rather than the things we treat as digits. We are parsing for real EC # and TC numbers, not generalized strings, and the format is specific. $lowered =~ s/(\d+|\-)\.(\d+|-)\.(\d+|-)\.(\d+|-)/$1_$2_$3_$4/g; # Fix non-trailing periods. $lowered =~ s/\.([$self->{WORD}])/_$1/g; # Fix non-leading minus signs. $lowered =~ s/([$self->{WORD}])[\-]/$1_/g; # Fix interior vertical bars and colons $lowered =~ s/([$self->{WORD}])[|:]([$self->{WORD}])/$1'$2/g; # Now split up the list so that each keyword is in its own string. The delimiters between # are kept, so when we're done everything can be joined back together again. Trace("Normalized string is -->$lowered<--") if T(4); my @pieces = map { split(/([^$self->{WORD}]+)/, $_) } $lowered; # The last step is to separate spaces from the other delimiters.
my @retVal; for my $piece (@pieces) { while (substr($piece,0,1) eq " ") { $piece = substr($piece, 1); push @retVal, " "; } while ($piece =~ /(.+?) (.*)/) { push @retVal, $1, " "; $piece = $2; } if ($piece ne "") { push @retVal, $piece; } } # Return the result. return @retVal; } =head3 Region1 my $root = $bio->Region1($word); Return the suffix region for a word. This is referred to as I<region 1> in the literature on word stemming, and it consists of everything after the first non-vowel that follows a vowel. =over 4 =item word Lower-case word whose suffix region is desired. =item RETURN Returns the suffix region, or the empty string if there is no suffix region. =back =cut sub Region1 { # Get the parameters. my ($self, $word) = @_; # Declare the return variable. my $retVal = ""; # Look for the R1. if ($word =~ /[$self->{VOWEL}][^$self->{VOWEL}](.+)/i) { $retVal = $1; } # Return the result. return $retVal; } =head3 FindRule my ($prefix, $suffix, $replacement) = BioWords::FindRule($word, @rules); Find the appropriate suffix rule for a word. Suffix rules are specified as pairs in a list. Syntactically, the rule list may look like a hash, but the order of the rules is important, so in fact it is a list. The first rule whose key matches the suffix is applied. The part of the word before the suffix, the suffix itself, and the value of the rule are all passed back to the caller. If no rule matches, the prefix will be the entire input word, and the suffix and replacement will be an empty string. =over 4 =item word Word to parse. It should already be normalized to lower case. =item rules A list of rules. Each rule is represented by two entries in the list-- a suffix to match and a value to return. =item RETURN Returns a three-element list. The first element will be the portion of the word before the matched suffix, the second element will be the suffix itself, and the third will be the replacement recommended by the matched rule. 
If no rule matches, the first element will be the whole word and the other two will be empty strings. =back =cut sub FindRule { # Get the parameters. my ($word, @rules) = @_; # Declare the return variables. my ($prefix, $suffix, $replacement) = ($word, EMPTY, EMPTY); # Search for a match. We'll stop on the first one. for (my $i = 0; ! $suffix && $i < $#rules; $i += 2) { my $len = length($rules[$i]); if ($rules[$i] eq substr($word, -$len)) { $prefix = substr($word, 0, length($word) - $len); $suffix = $rules[$i]; $replacement = $rules[$i+1]; } } # Return the results. return ($prefix, $suffix, $replacement); } =head3 Process my $stem = $biowords->Process($word); Compute the stem of the specified word and record it in the cache. =over 4 =item word Word to be processed. =item RETURN Returns the stem of the word (which could be the original word itself). If the word is a stop word, returns a null string. =back =cut sub Process { # Get the parameters. my ($self, $word) = @_; # Verify that the cache is initialized. my $cache = $self->_initCache(); # Declare the return variable. my $retVal; # Get the word in lower case and compute its length. my $lowered = lc $word; my $len = length $lowered; Trace("Processing \"$lowered\".") if T(4); # Check to see what type of word it is. if ($lowered =~ /[^$self->{WORD}]/) { # It's delimiters. Return it unchanged and don't record it. $retVal = $lowered; } elsif ($len < SHORT) { # It's too short. Treat it as a stop word. $retVal = EMPTY; } elsif (exists $cache->{$lowered}) { # It's already in the cache. Get the cache entry. my $entry = $cache->{$lowered}; $retVal = $entry->{stem}; # If it is NOT a stop word, count it. if ($retVal ne EMPTY) { $entry->{count}++; } } elsif ($len == SHORT) { # It's already the minimum length. The stem is the word itself. $retVal = $lowered; # Store it if we're using the cache. if ($self->{cacheFlag}) { $self->Store($lowered, $retVal, 1); } } else { # Here we have a new word.
We compute the stem and store it. $retVal = $self->_stem($lowered); # Store the word if we're using the cache. if ($self->{cacheFlag}) { $self->Store($lowered, $retVal, 1); } } # We're done. If the stem is non-empty, add it to the stem list. if ($retVal ne EMPTY) { $self->{stems}->{$retVal} = 1; Trace("\"$word\" stems to \"$retVal\".") if T(3); } else { Trace("\"$word\" discarded by stemmer.") if T(3); } # Return the stem. return $retVal; } =head3 IsWord my $flag = $biowords->IsWord($word); Return TRUE if the specified string is a word and FALSE if it is a delimiter. =over 4 =item word String to examine. =item RETURN Returns TRUE if the string contains no delimiters, else FALSE. =back =cut sub IsWord { # Get the parameters. my ($self, $word) = @_; # Test the word. my $retVal = ($word =~ /^[$self->{WORD}]+$/); # Return the result. return $retVal; } =head3 StemList my @stems = $biowords->StemList(); Return the list of stems found in the last search expression. =cut sub StemList { # Get the parameters. my ($self) = @_; # Return the keys of the stem hash. my @retVal = keys %{$self->{stems}}; return @retVal; } =head3 StemLookup my ($stem, $phonex) = $biowords->StemLookup($word); Return the stem and phonex for the specified word. =over 4 =item word Word whose stem and phonex are desired. =item RETURN Returns a two-element list. If the word is found in the cache, the list will consist of the stem followed by the phonex. If the word is a stop word, the list will consist of two empty strings. =back =cut sub StemLookup { # Get the parameters. my ($self, $word) = @_; # Declare the return variables. my ($stem, $phonex); # Get the cache. my $cache = $self->{cache}; # Check the cache for the word. if (exists $cache->{$word}) { # It's found. Return its data. ($stem, $phonex) = map { $_->{stem}, $_->{phonex} } $cache->{$word}; } else { # It's not found. Compute the stem and phonex. 
my $lowered = lc $word; $stem = $self->Process($lowered); $phonex = $self->_phonex($stem); } # Return the results. return ($stem, $phonex); } =head3 WordList my $words = $biowords->WordList(); Return a list of all of the words that were found by L</AnalyzeSearchExpression>. Stop words will not be included. Because the list could potentially contain millions of words, it is returned as a list reference. =cut sub WordList { # Get the parameters. my ($self) = @_; # Get the cache. my $cache = $self->{cache}; # Declare the return variable. my $retVal; # Extract the desired words from the cache. $retVal = [ grep { $cache->{$_}->{count} } keys %{$cache} ]; # Return the result. return $retVal; } =head3 PrepareSearchExpression my $searchExpression = $bio->PrepareSearchExpression($string); Convert an incoming string to a search expression. The string is split into pieces, the pieces are stemmed and processed into the cache, and then they are rejoined after certain adjustments are made. In particular, words without an operator preceding them are prefixed with a plus (C<+>) so that they are treated as required words. =over 4 =item string Search expression to prepare. =item RETURN Returns a modified version of the search expression with words converted to stems, stop words eliminated, and plus signs placed before unmodified words. =back =cut sub PrepareSearchExpression { # Get the parameters. my ($self, $string) = @_; # Declare the return variable. my $retVal = ""; # Analyze the search expression. my @parts = $self->AnalyzeSearchExpression($string); # Now we have to put the pieces back together. At any point, we need # to know if we are inside quotes or in the scope of an operator. my ($inQuotes, $activeOp) = (0, 0); for my $part (@parts) { # Is this a word? if ($part =~ /[a-z0-9]$/) { # Yes. If no operator is present, add a plus. if (! $activeOp && ! $inQuotes) { $retVal .= "+"; $activeOp = 0; } } else { # Here we have one or more operators.
We process them # individually. for my $op (split //, $part) { if ($op eq '"') { # Here we have a quote. if ($inQuotes) { # A close quote turns off operator scope. $inQuotes = 0; $activeOp = 0; } else { # An open quote puts us in quote mode. Words inside # quotes do not need the plus added, but the # quote does. $inQuotes = 1; $retVal .= "+"; } } elsif ($op eq ' ') { # Spaces detach us from the preceding operator. $activeOp = 0; } else { # Everything else puts us in operator scope. $activeOp = 1; } } } # Add this part to the output string. $retVal .= $part; } # Return the result. return $retVal; } =head3 AnalyzeSearchExpression my @list = $bio->AnalyzeSearchExpression($string); Analyze the components of a search expression and return them to the caller. Statistical information about the words in the expression will have been stored in the cache, and the return value will be a list of stems and delimiters. =over 4 =item string Search expression to analyze. =item RETURN Returns a list of words and delimiters, in an order corresponding to the original expression. Real words will have been converted to stems and stop words will have been converted to empty strings. =back =cut sub AnalyzeSearchExpression { # Get the parameters. my ($self, $string) = @_; # Clear the stem list. $self->{stems} = {}; # Normalize and split the search expression. my @parts = $self->Split($string); # Declare the return variable. my @retVal; # Now we loop through the parts, processing them. for my $part (@parts) { my $stem = $self->Process($part); push @retVal, $stem; Trace("Stem of \"$part\" is \"$stem\".") if T(4); } # Return the result. return @retVal; } =head3 WildsOfEC my @ecWilds = BioWords::WildsOfEC($number); Return a list of all of the possible wild-carded EC numbers that would match the specified EC number. =over 4 =item number EC number to process. =item RETURN Returns a list consisting of the original EC number and all other EC numbers that subsume it. 
=back =cut sub WildsOfEC { # Get the parameters. my ($number) = @_; # Declare the return variable. It contains at the start the original # EC number. my @retVal = $number; # Bust the EC number into pieces. my @pieces = split /\./, $number; # Put it back together with hyphens. for (my $i = 1; $i <= $#pieces; $i++) { if ($pieces[$i] ne '-') { my @wildPieces; for (my $j = 0; $j <= $#pieces; $j++) { push @wildPieces, ($j < $i ? $pieces[$j] : '-'); } push @retVal, join(".", @wildPieces); } } # Return the result. return @retVal; } =head3 ExtractECs my @ecThings = BioWords::ExtractECs($string); Return any individual EC numbers found in the specified string. =over 4 =item string String containing potential EC numbers. =item RETURN Returns a list of all the EC numbers and subsuming EC numbers found in the string. =back =cut sub ExtractECs { # Get the parameters. my ($string) = @_; # Find all the EC numbers in the string. my @ecs = ($string =~ /ec\s+(\d+(?:\.\d+|\.-){3})/gi); # Get the wild versions. my @retVal = map { WildsOfEC($_) } @ecs; # Return the result. return @retVal; } =head2 Internal Methods =head3 _initCache my $cache = $biowords->_initCache(); Ensure the cache is initialized. If exception and stop word files exist, they will be read into memory and used to populate the cache. A reference to the cache will be returned to the caller. =cut sub _initCache { # Get the parameters. my ($self) = @_; # Check for a stopword file. if ($self->{stopFile}) { # Read the file. my @lines = Tracer::GetFile($self->{stopFile}); Trace(scalar(@lines) . " lines found in stop file.") if T(3); # Insert it into the cache. for my $line (@lines) { $self->Stop(lc $line); } # Denote that the stopword file has been processed. $self->{stopFile} = EMPTY; } # Check for an exception list. if ($self->{exceptionFile}) { # Read the file. my @lines = Tracer::GetFile($self->{exceptionFile}); Trace(scalar(@lines) . " lines found in exception file.") if T(3); # Loop through the lines.
for my $line (@lines) { # Extract the words. my @words = split /\s+/, $line; # Map all of the starting words to the last word. my $stem = pop @words; for my $word (@words) { $self->Store($word, $stem, 0); } } # Denote that the exception file has been processed. $self->{exceptionFile} = EMPTY; } # Return the cache. return $self->{cache}; } =head3 _stem my $stem = $biowords->_stem($word); Compute the stem of an incoming word. This is an internal method that does not check the cache or do any length checking. =over 4 =item word The word to stem. It must already have been converted to lower case. =item RETURN Returns the stem of the incoming word, which could possibly be the word itself. =back =cut sub _stem { # Get the parameters. my ($self, $word) = @_; # Copy the word so we can mangle it. my $retVal = $word; # Convert consonant "y" to "j". $retVal =~ s/^y/j/; $retVal =~ s/([aeiou])y/$1j/g; # Convert vowel "y" to "i". $retVal =~ tr/y/i/; # Compute the R1 and R2 regions. R1 is everything after the first syllable, # and R2 is everything after the second syllable. my $r1 = $self->Region1($retVal); my $r2 = $self->Region1($r1); # Compute the physical locations of the regions. my $len = length $retVal; my $p1 = $len - length $r1; my $p2 = $len - length $r2; # These variables will be used by FindRule. my ($prefix, $suffix, $ruleValue); # Remove the genitive apostrophe. ($retVal, $suffix, $ruleValue) = FindRule($retVal, q('s') => EMPTY, q('s) => EMPTY, q(') => EMPTY); # Process latin endings. ($prefix, $suffix, $ruleValue) = FindRule($retVal, us => 'i', um => 'a', ae => 'a'); # Latin endings only apply if they follow a consonant. if ($prefix =~ /[^aeiou]$/) { $retVal = "$prefix$ruleValue"; } # Convert plurals to singular. ($prefix, $suffix, $ruleValue) = FindRule($retVal, sses => 'ss', ied => 'i', ies => 'i', s => 's'); if ($ruleValue eq 'i') { # If the prefix length is one, we append an "e".
if (length $prefix <= 1) { $ruleValue .= "e" } } elsif ($ruleValue eq 's') { # Here we have a naked "s" at the end. We null it out if the prefix ends in a # consonant or an 'e'. Nulling it will cause the "s" to be removed. if ($prefix =~ /[^aiou]$/) { $ruleValue = EMPTY; } } # Finish the singularization. The possibly-modified rule value is applied to the prefix. # If no rule applied, this has no effect, since the prefix is the whole word and the # rule value is the empty string. $retVal = "$prefix$ruleValue"; # Catch the special "izing" construct. ($prefix, $suffix, $ruleValue) = FindRule($retVal, izing => 'is'); $retVal = "$prefix$ruleValue"; # Convert adverbs to adjectives. ($prefix, $suffix, $ruleValue) = FindRule($retVal, eedli => 'ee', eed => 'ee', ingli => EMPTY, ing => EMPTY, edli => EMPTY, ed => EMPTY); # These rules only apply in limited circumstances. if ($ruleValue eq 'ee') { # The "ee" replacement only applies if it occurs in region 1. If it does not # occur there, then we put the suffix back. if (length($prefix) < $p1) { $ruleValue = $suffix; } } elsif ($suffix) { # Here the rule value is the empty string. It only applies if there is a # vowel in the prefix. if ($prefix !~ /[aeiou]/) { # No vowel, so put the suffix back. $ruleValue = $suffix; } else { # The prefix is now the whole word, because the rule value is the empty # string. Check for ending mutations. We may need to add an "e" or # remove a doubled letter. ($prefix, $suffix, $ruleValue) = FindRule($prefix, at => 'ate', bl => 'ble', iz => 'ize', bb => 'b', dd => 'd', ff => 'f', gg => 'g', mm => 'm', nn => 'n', pp => 'p', rr => 'r', tt => 't'); } } # Apply the modifications. $retVal = "$prefix$ruleValue"; # Now we get serious. Here we're looking for special suffixes.
($prefix, $suffix, $ruleValue) = FindRule($retVal, ational => 'ate', tional => 'tion', enci => 'ence', anci => 'ance', abli => 'able', entli => 'ent', ization => 'ize', izer => 'ize', ation => 'ate', ator => 'ate', alism => 'al', aliti => 'al', alli => 'al', fulness => 'ful', ousness => 'ous', ousli => 'ous', ivness => 'ive', iviti => 'ive', biliti => 'ble', bli => 'ble', logi => 'log', fulli => 'ful', lessli => 'less', cli => 'c', dli => 'd', eli => 'e', gli => 'g', hli => 'h', kli => 'k', mli => 'm', nli => 'n', rli => 'r', tli => 't', alize => 'al', icate => 'ic', iciti => 'ic', ical => 'ic'); # These only apply if they are in R1. if ($ruleValue && length($prefix) >= $p1) { $retVal = "$prefix$ruleValue"; } # Conflate "ence" to "ent" if it's in R2. ($prefix, $suffix, $ruleValue) = FindRule($retVal, ence => 'ent'); if ($ruleValue && length($prefix) >= $p2) { $retVal = "$prefix$ruleValue"; } # Now zap "ful", "ness", "ative", and "ize", but only if they're in R1. ($prefix, $suffix, $ruleValue) = FindRule($retVal, ful => EMPTY, ness => EMPTY, ize => EMPTY); if (length($prefix) >= $p1) { $retVal = $prefix; } # Now we have some suffixes that get deleted if they're in R2. ($prefix, $suffix, $ruleValue) = FindRule($retVal, ement => EMPTY, ment => EMPTY, able => EMPTY, ible => EMPTY, ance => EMPTY, ence => EMPTY, ant => EMPTY, ent => EMPTY, ism => EMPTY, ate => EMPTY, iti => EMPTY, ous => EMPTY, ive => EMPTY, ize => EMPTY, al => EMPTY, er => EMPTY, ic => EMPTY, sion => 's', tion => 't', alli => 'al'); if (length($prefix) >= $p2) { $retVal = $prefix; } # Process the doubled L. ($prefix, $suffix, $ruleValue) = FindRule($retVal, ll => 'l'); $retVal = "$prefix$ruleValue"; # Check for an ending 'e'. $retVal =~ s/([$self->{VOWEL}][^$self->{VOWEL}]+)e$/$1/; # Return the result. return $retVal; } =head3 _phonex my $phonex = $biowords->_phonex($word); Compute the phonetic version of a word. 
Vowels are ignored, doubled letters are trimmed to singletons, and certain letters or letter combinations are conflated. The resulting word is likely to match a misspelling of the original. This is an internal method. It does not check the cache and it assumes the word has already been converted to lower case. =over 4 =item word Word whose phonetic translation is desired. =item RETURN Returns a more-or-less phonetic translation of the word. =back =cut sub _phonex { # Get the parameters. my ($self, $word) = @_; # Declare the return variable. my $retVal = $word; # Handle some special cases. For typed IDs, we remove the type. For # horrible multi-part chemical names, remove everything in front of # the last underscore. if ($word =~ /_([$self->{LETTER}]+)$/ && length($1) > SHORT) { $retVal = $1; } elsif ($word =~ /^[$self->{LETTER}]+'(.+)$/ && length($1) > SHORT) { $retVal = $1; } # Convert the pesky sibilant combinatorials to their own private symbol. $retVal =~ s/sch|ch|sh/S/g; # Convert PH to F. $retVal =~ s/ph/f/g; # Remove silent constructs. $retVal =~ s/gh//g; $retVal =~ s/^ps/s/; # Convert soft G to J and soft C to S. $retVal =~ s/g(e|i)/j$1/g; $retVal =~ s/c(e|i)/s$1/g; # Convert C to K, S to Z, M to N. $retVal =~ tr/csm/kzn/; # Singlify doubled letters. $retVal =~ tr/a-z//s; # Split off the first letter. my $first = substr($retVal, 0, 1, ""); # Delete the vowels. $retVal =~ s/[$self->{VOWEL}]//g; # Put the first letter back. $retVal = $first . $retVal; # Return the result. return $retVal; } 1;
http://biocvs.mcs.anl.gov/viewcvs.cgi/Sprout/BioWords.pm?revision=1.3&view=markup&pathrev=rast_rel_2008_12_18
by Michael S. Kaplan, published on 2005/04/03 02:05 -05:00, original URI: Sometimes I am a real pain in the you-know-what. You can ask people on my team, they will probably agree. In fact, if any of them are reading I will not hold it against them if they wanted to verify this point. :-) Anyway, I was going to tell you a story here, about how whining got a feature added. It is mostly from memory, though I did dig up my old newsgroup post from my archives.... A few years ago (back in November of 2000), I believe a little while before the original 1.0 version of the .NET Framework shipped, a problem came up that I was involved with. You see, a customer had been using the System.Globalization namespace's HijriCalendar class on a machine that had Arabic - Saudi Arabia as its default user locale and Hijri was his Windows calendar choice (meaning that its CurrentCulture was ar-SA and that HijriCalendar was his CultureInfo object's Calendar). He noticed that when he changed the "day advance" setting of the Hijri calendar in Regional Options, applications like Outlook and Word had no problem picking up the change instantly. Yet his managed application had to be shut down and restarted to pick up the change. Wanting to figure out why, he sent me a piece of mail asking me about it. Now I was working for the NLS team at the time (I believe this was before we were really GIFT, and Julie was a dev. lead who I reported to directly rather than indirectly through two other people). I was working on the Microsoft Layer for Unicode. I felt kind of weird about going to people on the team during working hours and trying to get special treatment for bugs that had nothing to do with what I was being paid to do there, so after asking this guy for permission I posted the issue in the C# newsgroup (microsoft.public.dotnet.csharp.general): Attn: to someone at Microsoft? 
:-) BACKGROUND: The Hijri calendar is a 100% lunar calendar that, in addition to all of its rules, requires a religious authority to spot the new moon for the month to start. Clearly, no algorithm can ever determine this with 100% accuracy, which is why WINDOWS supports (in the control panel) a way to offset the date based on that sighting. This is supported by all Arabic localized and enabled versions of Windows, and all language versions of Windows 2000 and Whistler. Applets such as the Windows date on the system tray and applications such as Outlook support it. However, the Hijri date support that was in COM and is in the CLR, and the HijriCalendar class does not support getting this offset value. In addition, unless I do not have my druthers about me, HijriCalendar does not support it either. You can confirm with people such as Assem Hijazi that this is important; Hijri dates need to respect this small bow to the fact that you cannot algorithmically determine the proclamations of religious authorities. Hijri dates need to adjust for this reg. key and ideally you should be able to at least GET (if not SET) this parameter. If the CLR's System.Globalization class is the next iteration of Windows NLS support, then it's time to support the things that have been missing from programming languages since VS first shipped. :-) If anyone needs more info on how this info is stored in the registry, let me know. The developer who owned the feature (who is actually now my manager and who has hopefully forgiven me for how I did all this!) came by to ask me about the message. It was all a little embarrassing since he did not really understand why I would feel uncomfortable, but he let that pass after I explained (or maybe he just thought I was being weird, which I guess I was). Then, after I explained how this all came about, he explained to me why it was actually a really hard bug to fix (he was aware of the fact that I had never even seen the source code at that point).
Those other applications "respected" the settings change by listening for the WM_SETTINGCHANGE message that Regional Options sends. But the .NET Framework had no message loop to listen for this message, and thus no way to really respond to it. A developer could create a brand new CultureInfo object to pick up the change, but any existing object would not get it, and there was no way to reset a CultureInfo, especially CurrentCulture.

At this point, I got really stubborn. I told him how crucial this functionality was. And it was not just about the Hijri calendar stuff; any update in Regional Options would not be picked up, even though the very common end user pattern here is to notice that a setting is wrong while she is in an application, to fix the incorrect setting in Regional Options, and then to go back to the application and see the update. I had written applications myself that do this, and like I said, all of the big applications like Outlook handled things correctly. It would really make the .NET Framework look bad if nothing were done to improve the situation, since there was really no good remedy for the problem. Doing nothing would mean no way to even get parity with Windows 95!

But my future boss was really worried about the cost of trying to add such a feature so late in the game. And he is right, that is a very real concern. At the time, though, I was external enough to the .NET Framework team that I was not really looking at it that way (remember that I was not being paid to do anything in .NET at all; I was focused on just the user scenario). So I kept pushing. I pointed out how COM had actually fixed this bug in their own code, so much so that in the Windows 95/98/NT 4.0 days they created a hidden window (class name of "OLEAUT32") to listen for the message (in Windows Me and beyond they started listening for it as a thread message instead). Now this would be a big change, yes. But if not that, I asked, couldn't something be done to help here?
I finally pointed out one thing: the way the code was now, even if I were developing a WinForms application and I actually received the WM_SETTINGCHANGE message, I would be powerless to do anything about it!

I swear I saw the lightbulb light up on top of his head when I said that. He said he would think about what I was saying and excused himself. (Anticipating a question some might have: I did not bill for the time that I was talking to him about the problem!)

And very soon after that he added the ClearCachedData method, so that any time a developer listened for and subsequently heard the WM_SETTINGCHANGE message, she could reset the object and pick up the change. And a WinForms application could have parity with a Win32 one (note that even on Win32 you would have to write code to listen for the message, so true parity was really accomplished, since a developer could in either case choose not to update if they did not want to).

So, that's the story of how whining got ClearCachedData added. :-)

Just this last March, in an article I wrote for MSDN Magazine, I gave the sample code you could use to accomplish all that in a WinForms application....

But lest you think that you, as a developer using the .NET Framework who runs into a problem, would not have the same ability to whine and get changes made or bugs fixed or suggestions heard, use that link I have that points to the MSDN Product Feedback Center. Microsoft developers, testers, and program managers see the feedback and bugs posted there directly in the bug database, and they take the feedback seriously.
If you look at some of the issues that have been fixed after being reported there, you will know that you have the same kind of direct access to the people who can solve the problems, whether they are international or not (I myself have fixed two bugs reported on the site for Whidbey RTM: String comparison (and sorting) for sr-SP-Latn (Serbian) culture is incorrect and CompareInfo.Name is incorrect for zh-CN and ka-GE alternative sort orders, and a ton of other bugs and suggestions have been posted there, across all aspects of the .NET Framework!).

Or you could always post the info here in Sorting It All Out, or on the BCL Team's, Achim Ruopp's, or Brad Abrams's blog. The advantage to one of our blogs is that you may get someone to post about it; the advantage to posting to the MSDN Product Feedback Center is that you get to track the issue and have people vote on it. And voting is something that can in some cases make all the difference in the world if you have either (a) a lot of people who agree with you or (b) a lot of developer friends you can bribe with Twinkies or beer!

This post brought to you by "۾" (U+06fe, ARABIC SIGN SINDHI POSTPOSITION MEN)
4.12-stable review patch. If anyone has any objections, please let me know.

------------------

From: Miklos Szeredi <mszeredi@redhat.com>

commit 5d6d3a301c4e749e04be6fcdcf4cb1ffa8bae524 upstream.

Commit 0b6e9ea041e6 ("fuse: Add support for pid namespaces") broke
Sandstorm.io development tools, which have been sending FUSE file
descriptors across PID namespace boundaries since early 2014.

The above patch added a check that prevented I/O on the fuse device file
descriptor if the pid namespace of the reader/writer was different from the
pid namespace of the mounter. With this change passing the device file
descriptor to a different pid namespace simply doesn't work. The check was
added because pids are transferred to/from the fuse userspace server in the
namespace registered at mount time.

To fix this regression, remove the checks and do the following:

1) the pid in the request header (the pid of the task that initiated the
filesystem operation) is translated to the reader's pid namespace. If a
mapping doesn't exist for this pid, then a zero pid is used. Note: even if
a mapping would exist between the initiator task's pid namespace and the
reader's pid namespace the pid will be zero if either mapping from
initiator's to mounter's namespace or mapping from mounter's to reader's
namespace doesn't exist.

2) The lk.pid value in setlk/setlkw requests and getlk reply is left alone.
Userspace should not interpret this value anyway. Also allow the
setlk/setlkw operations if the pid of the task cannot be represented in the
mounter's namespace (pid being zero in that case).

Reported-by: Kenton Varda <kenton@sandstorm.io>
Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
Fixes: 0b6e9ea041e6 ("fuse: Add support for pid namespaces")
Cc: Eric W. Biederman <ebiederm@xmission.com>
Cc: Seth Forshee <seth.forshee@canonical.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 fs/fuse/dev.c  | 13 +++++++------
 fs/fuse/file.c |  3 ---
 2 files changed, 7 insertions(+), 9 deletions(-)

--- a/fs/fuse/dev.c
+++ b/fs/fuse/dev.c
@@ -1222,9 +1222,6 @@ static ssize_t fuse_dev_do_read(struct f
 	struct fuse_in *in;
 	unsigned reqsize;
 
-	if (task_active_pid_ns(current) != fc->pid_ns)
-		return -EIO;
-
  restart:
 	spin_lock(&fiq->waitq.lock);
 	err = -EAGAIN;
@@ -1262,6 +1259,13 @@ static ssize_t fuse_dev_do_read(struct f
 	in = &req->in;
 	reqsize = in->h.len;
+
+	if (task_active_pid_ns(current) != fc->pid_ns) {
+		rcu_read_lock();
+		in->h.pid = pid_vnr(find_pid_ns(in->h.pid, fc->pid_ns));
+		rcu_read_unlock();
+	}
+
 	/* If request is too large, reply with an error and restart the read */
 	if (nbytes < reqsize) {
 		req->out.h.error = -EIO;
@@ -1823,9 +1827,6 @@ static ssize_t fuse_dev_do_write(struct
 	struct fuse_req *req;
 	struct fuse_out_header oh;
 
-	if (task_active_pid_ns(current) != fc->pid_ns)
-		return -EIO;
-
 	if (nbytes < sizeof(struct fuse_out_header))
 		return -EINVAL;
 
--- a/fs/fuse/file.c
+++ b/fs/fuse/file.c
@@ -2180,9 +2180,6 @@ static int fuse_setlk(struct file *file,
 	if ((fl->fl_flags & FL_CLOSE_POSIX) == FL_CLOSE_POSIX)
 		return 0;
 
-	if (pid && pid_nr == 0)
-		return -EOVERFLOW;
-
 	fuse_lk_fill(&args, file, fl, opcode, pid_nr, flock, &inarg);
 	err = fuse_simple_request(fc, &args);
Okay! Down to business... actually our hobby. You know stamp collectors have cool stamps. So that is their hobby and their talent. My hobby is Borland Turbo Assembler 4.1 and Borland Turbo C...

Here is a link to my most recent game for FREE! It is done all in 16-bit Borland Turbo C 2.01 mixed with Borland Turbo Assembler 4.1: XP16BIT It was designed to run in any 32-bit OS such like...

Get out of here Satan! All you do is destroy people's lives. Jesus Christ only came to save lives!!!

p8086
model small
codeseg
startupcode
mov al, 19
mov ah, 0
int 10h
continue:

No in honour of my real dad Nguyen Binh Thuy who took engineering and has much engineering knowledge like thermal dynamics etc. In honour of my real mom Huong...

To my friend working at Microsoft... You accuse me of lying about my experiences before. I have NEVER once lied about my experiences. I went to Lehigh Carbon Community College in the state of Pennsylvania from 1990 to approximately...

From what I remember of Calculus (I'm not sure of Differential Equations) it is like sum to infinity, area under a curve, etc. That is not too hard to do in Borland Turbo C 2.01. The only problem is...

8086 instructions I've found the above link. I think it should be somewhat compatible with the assembly language for the 8086 chip set. The official Borland Turbo Assembler 4.1 and the official...

Okay 2 pages hehe

For your reference: I've completed the whole course 630 pages of text and now I'm condensing everything within 1 page for you on this website. So if you have a problem running any of the programs just let me know and I'll try to help.

...then you have to link it with:

TLINK filename.ext

Once you TASM and TLINK you should now have filename.exe All you have to do is type:

filename

Jesus!
The way you compile is you type:

TASM filename.ext

filename is your program name; ext is the extension you saved it with (Windows usually defaults to txt). Now for our simple purpose we will attempt to run the above program that puts a dot on the screen. So try to enter the program in the text editor of your choice. Some people prefer to use the text...

I forgot to mention you need a "tlink.exe" too from VETUSWARE.COM - the biggest free abandonware collection in the universe if they have it.

Okay this is the easy part. If you have Windows you can use Notepad (my favorite) to enter your assembly language in. Notepad should be available from Microsoft Windows XP to Microsoft Windows 10. ...

Notice it says x86 emulator. It just means 8086/8087 emulator too built-in. I know. I know you hate DOSBox 0.74. Well, I'm giving you the free link and better get it while it lasts for free: DOSBox, an x86 emulator with DOS Once you have DOSBox 0.74 you are almost...

Jesus Christ! If you really look you will notice that the 8086/8087 chip deals with interrupts to accomplish all its tasks. When you see "int" it means activate interrupt.

Okay so here we go. The 8086/8087 chip has a graphics mode. If you really wanted to program a video game then this is where it starts.

p8086
model small
dataseg
udataseg
codeseg
...

I am talking to my cousin from time to time since my cousin likes to follow me around!

You are certainly right about some things? I write programs that work. I expect that a real world situation is where a program just works! If you nitpick you may never get the job done on time and...

Jesus Christ!. ...

#include <math.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <conio.h>

unsigned long playerone()
{
    int dice, risk1 = 0;
    unsigned long score;
Python is one of the most widely used languages in the world, known for its clean syntax and beginner friendliness. In this post, I'm sharing some lesser-known features of Python that I recently came to know about.

1. Python can return multiple values

Python can return multiple values in a single statement. This doesn't mean that in other languages like C# or Java it is impossible to return multiple values; such languages have to either return them as an array or use special keywords.

def calc(a, b):
    return a+b, a-b, a*b, a/b

add, sub, mul, div = calc(5, 5)
print('5 + 5 is: ', add)
print('5 - 5 is: ', sub)
print('5 * 5 is: ', mul)
print('5 / 5 is: ', div)

That's interesting. Isn't it?

2. The Zen of Python

Tim Peters, a major contributor to the Python community, wrote this poem to highlight the philosophies of Python. If you type in "import this" in your Python IDLE, you'll find it.

3. Swapping made simple

How will you swap the values of two variables? Will you use a third variable like this for swapping?

a = 5
b = 6
c = a  # c is now 5
a = b  # a is now 6
b = c  # b is now 5

Let me show you a shorter way to do this in Python.

a = 5
b = 6
a, b = b, a

4. else with for and while

Will you believe me if I say that you can use an else block with for and while loops in Python? But you have to…

for i in range(2):
    print(i)
else:
    print("From else")

# Output
# 0
# 1
# From else

And with a while loop:

a = 0
while(a < 2):
    print(a)
    a += 1
else:
    print("From else")

5. String concatenation

Python will concatenate two string literals if they are separated by a space.

s = "Hello" "World"
print(s)

# Output
# HelloWorld

6. Try to import braces, and you will get an error

Have you ever tried to use braces, as in C, instead of indentation in Python? If not, try to import them from the __future__ package. You will get an error like this.

from __future__ import braces

# Output
# SyntaxError: not a chance

7. Index of a list

We can use the enumerate function to access elements of a list along with their indices.
a = ['One', 'Two', 'Three']
for index, value in enumerate(a):
    print(index, value)

# Output
# 0 One
# 1 Two
# 2 Three

8. You can access private members of a class

Usually, private members are not directly accessible by any object or function outside the class. Only the member functions or the friend functions can access the private data members of a class. But in Python, we can use an alternate way to access private member functions of any class with this syntax.

Object_name._Class_name__private_method()

Here's an example.

class SeeMee:
    # Public method
    def youcanseeme(self):
        return 'you can see me'

    # Private method
    def __youcannotseeme(self):
        return 'you cannot see me'

check = SeeMee()
print(check._SeeMee__youcannotseeme())

# Output
# you cannot see me

9. Antigravity module

Open Python IDLE, import the antigravity module, and see what happens.

import antigravity

10. Keys should be unique

Keys of a dictionary should be unique by equivalence. Check this example.

languages = {}
languages[5.5] = "Java"
languages[5.0] = "PHP"
languages[5] = "Python"

print(languages[5.0])

# Output
# Python

What happened to PHP? Was it replaced by Python?
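To answer that closing question: since 5 == 5.0 and hash(5) == hash(5.0), the int key 5 and the float key 5.0 land in the same dictionary slot, so the last assignment simply overwrote the value stored under 5.0. A minimal demonstration (the variable names are just for illustration):

```python
# Dict keys collide when they compare equal AND hash equally.
print(5 == 5.0)              # True
print(hash(5) == hash(5.0))  # True

languages = {}
languages[5.5] = "Java"
languages[5.0] = "PHP"       # stored under the key 5.0
languages[5] = "Python"      # 5 == 5.0, so this overwrites "PHP"

print(len(languages))        # 2 -- only the keys 5.5 and 5.0/5 remain
print(languages[5])          # Python
print(languages[5.0])        # Python (same entry)
```

Note that the dictionary keeps the first key object it saw (5.0) while replacing its value, which is why PHP disappears instead of a new entry appearing.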
As a data scientist who did not study computer science or software development in school, I’ve had to teach myself numerous skills on the fly (e.g., GitHub, the drawbacks of using for loops all over the place). Software engineering skills were particularly hard to learn in this fashion, as the wide array of approaches and tools–many ill-suited to the particular needs of a data scientist–made it hard to identify a limited set of tools and techniques that could achieve my needs. This difficulty is why I was particularly excited when professors at the University of Washington–including Jake VanderPlas, author of the wonderfully thorough Python Data Science Handbook–posted the lectures to their new course Software Engineering for Data Scientists online. This post summarizes the topics in the course that I wish I had found practical guides for earlier: building Python packages, writing stylish code, and debugging. While I largely ignore the lectures on version control (1, 2), the iPython notebook, procedural Python, and software design, those lectures are all good introductions to those topics. Building Python Packages A Python module is a file that contains definitions and statements that can expose classes, functions, and global variables. A Python package is simply an installable group of Python modules. Packages structure the Python namespace by allowing you to use dotted module names; for example, after importing sklearn, you access the RandomForestClassifier class in sklearn’s ensemble module by sklearn.ensemble.RandomForestClassifier. Popular Python packages like NumPy or Scikit-learn can be downloaded and installed through conda and pip, Python’s package management system, although packages can be installed from local files as well.
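The same dotted-name structure shows up in the standard library, so it can be sketched without installing scikit-learn: xml is a package, xml.etree a subpackage, and ElementTree a module inside it, exactly analogous to sklearn.ensemble.RandomForestClassifier.

```python
# 'xml' is a package, 'xml.etree' a subpackage, and 'ElementTree'
# a module inside it -- the dots mirror the directory layout on disk.
import xml.etree.ElementTree

# The fully dotted name reaches a class inside the module, just like
# sklearn.ensemble.RandomForestClassifier does in scikit-learn.
elem = xml.etree.ElementTree.Element("root")
print(type(elem).__name__)  # Element

# Each level of the dotted path is its own module object:
print(xml.etree.__name__)   # xml.etree
```

Each package directory is marked by an `__init__.py` file, which is what makes the dotted imports above work (more on that file below).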
(A word of warning for those who watch this lecture: While the Building Python Packages lecture is a great reference, if you’re like me, you will spend ~15 minutes at the end of the video futilely yelling at your computer, pleading for Jake to click the “Sync” button that has popped up on the screen five times and would solve his issue if he’d just notice the button, dang it! The perils of live coding. Spoiler alert: He eventually clicks the button.) As with most software engineering tasks, there are many workable options someone building a Python package can choose from. In particular, Jake makes specific choices about how to implement continuous integration and unit testing. Continuous integration (CI) services automatically run a specified series of commands each time the code in a GitHub repository is changed. While most frequently used to automate code testing, CI services can automate many other tasks, including updating a website (by generating HTML files from Markdown and uploading them to S3) or uploading a Python package to PyPI, Python’s package index. Travis is the most widely used CI service today, and it’s the service used by Jake in the lecture. AppVeyor is the other CI service you’ll frequently see GitHub repositories using; it’s frequently described as “Travis for Windows” since it can be used to automatically generate Windows binaries for packages. Both of these services are the source of the build passing/failing stickers you frequently see on GitHub repositories like scikit-learn’s (which, at the time of writing, is passing according to both Travis and AppVeyor!): Unit testing is the process of individually testing units of your code piece by piece. One option for unit testing is Python’s unittest package, which is covered in the course’s unit testing lecture. 
However, Python’s unittest package is rather cumbersome to use; having been designed with an eye toward Java’s unit testing software, it uses a class inheritance structure that requires the use of lots of boilerplate code. As a result, it’s become trendy to use simpler testing software built atop unittest. For the purposes of building a Python package, Jake decided to instead use nose, a Python package that uses unit tests to “sniff” out code errors and describes itself as “nicer testing for Python.” Other testing tools that you might run across include the pytest package, which can run both nose- and unittest-style test suites, and domain-specific software like engarde, which helps run tests on pandas dataframes. Having chosen to use nose and Travis, the steps to creating a Python package out of a GitHub repository are:
- Create a folder with the same name as your GitHub repo that will contain your package’s modules. Example: Scikit-learn has a “sklearn” folder in their repo.
- Put an __init__.py file inside this folder. This file (which can be empty) will run whenever you import the Python package. Any variables or functions created in this file will be available in the package’s namespace, allowing you to do things like importing submodules into the namespace. Example: Scikit-learn’s __init__.py file defines a __version__ variable that can be accessed by sklearn.__version__. (The double underscores here are a Python convention to emphasize that these files/variables typically shouldn’t be directly accessed by developers.)
- Put a folder called tests inside this folder that will contain your unit tests. This folder should also have an __init__.py file, which can also be blank.
- For each file filename.py in the main folder of your package, create a file test_filename.py in this tests folder containing all of the functions that test the behavior of filename.py’s code. To work with the nose testing package, each of these functions’ names should begin with test_.
(Although this is the way Jake sets it up, note that there is some flexibility here; scikit-learn, for example, has separate test folders inside each of its modules.)
- For more information on how to write out these tests, watch Ned Batchelder’s PyCon talk on Getting Started Testing.
- To make your package installable, place a setup.py file in the root directory of your GitHub repo. Shablona, a template for small Python projects, contains a setup.py file that can be configured with your package’s information.

Those five steps will allow you to turn a group of Python modules into an installable Python package that you can run tests on. To run your tests, you have two options:
- The Manual Way: At the command line, run nosetests path_to_project_folder. (You may need to install nose first by running conda install nose or, if you aren’t using Anaconda, pip install nose.)
- The Automatic Way: Use Travis for continuous integration by putting a configured version of the travis.yml file in Shablona in the root directory of your GitHub repository. After going to Travis’s site and linking your repository, the travis.yml file will cause Travis to automatically run nosetests on your code anytime your GitHub repository is updated.

The simplest way to install your package is to navigate to the root directory of your project on the command line and run python setup.py install. After doing that, Python will be able to import your package by name on your computer. Alternatively, if you configure the travis.yml file in Shablona with your PyPI username and password, you can deploy your package to PyPI and then install it via pip.

Stylish Programming

Good code shouldn’t just work. Good code should be easily understood by others who will need to work with it–including your future self. Given that, writing stylish code isn’t just fashionable; it serves an important purpose, helping others easily build upon your work.
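Before moving on to style, the test layout from the packaging steps above can be sketched in miniature. Suppose the package folder contains a module utils.py (a hypothetical name) with one function; the matching tests/test_utils.py would then hold plain test_-prefixed functions that nose can collect:

```python
# --- mypackage/utils.py (hypothetical module under test) ---
def add(a, b):
    """Return the sum of a and b."""
    return a + b


# --- mypackage/tests/test_utils.py (hypothetical test module) ---
# nose collects any function whose name starts with test_; plain
# assert statements are all that is needed -- no class boilerplate.
def test_add_integers():
    assert add(2, 3) == 5

def test_add_concatenates_strings():
    assert add("foo", "bar") == "foobar"

# The same tests also run without nose:
if __name__ == "__main__":
    test_add_integers()
    test_add_concatenates_strings()
    print("all tests passed")
```

Running nosetests from the project root would then discover and run both functions automatically.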
A few of the most useful style tips from the lecture are summarized below; for more details and additional tips, flip through the examples in the lecture notes. While these tips are geared toward Python, the principles work across languages.
- Avoid putting so-called “magic numbers,” numbers that do not have a clear meaning, in your code. Instead, create a variable with an informative name that is equal to the number. Even if you only use the variable once, your code will be much more readable.
- Classes are always named in CamelCase. Functions are always named in lower_case_with_underscores_when_needed.
- Begin the name of a function with a verb-y name. Common choices: compute, get/set, find, is/has/can, add/remove, first/last. This has the nice benefit of suggesting what the function will return, whether a number, string, boolean, or nothing.
- A good docstring has the following parts:
  - A single-sentence summary of the function
  - A paragraph or two that elaborates on that summary (if necessary)
  - A description of each of the function’s parameters
  - A description of each object that the function returns
  When in doubt, look at scikit-learn (or your favorite well-maintained open source project) for examples of great docstrings.
- Organize your imports into three sections: first, packages in the standard Python library; second, third-party packages; third, application-specific packages. If you’re feeling like an overachiever, alphabetize the imports to make it easier to scan and see if a particular package is being used. Example:

import os
import sys

import pandas as pd
import seaborn as sns

from . import myutils

- Put two blank lines around both top-level functions and classes to help them stand out when skimming the code.
- The pep8 package tests your code for compliance with Python’s PEP8 style guide. The Travis CI routine run in Shablona uses flake8 to test code quality, a package that runs both pep8 and pyflakes, which checks for non-PEP8 kinds of code errors.
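Putting several of these tips together (a verb-y name, no magic numbers, and a docstring with all four parts), a function might look like the following; the function and its parameters are invented for illustration, written in the NumPy-style docstring format that scikit-learn uses:

```python
import statistics

def find_outliers(values, threshold=3.0):
    """Find the indices of outliers in a sequence of numbers.

    An observation counts as an outlier when its absolute distance
    from the mean exceeds a given number of standard deviations, so
    the cutoff is a named parameter rather than a magic number.

    Parameters
    ----------
    values : sequence of float
        The observations to screen.
    threshold : float, optional
        Number of standard deviations beyond which a value is
        considered an outlier (default 3.0).

    Returns
    -------
    list of int
        Indices of the outlying observations (empty if none).
    """
    mean = statistics.mean(values)
    spread = statistics.pstdev(values)
    if spread == 0:
        return []
    return [i for i, v in enumerate(values)
            if abs(v - mean) > threshold * spread]

print(find_outliers([1, 1, 1, 1, 100], threshold=1.0))  # [4]
```

The verb "find" signals that the function returns something (here a list), and the docstring sections can be rendered automatically by documentation tools.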
Debugging

When I first started having to debug code, I frequently used print() statements to print out variables and check whether they were what I thought they were. Using the Python debugger (pdb) is much preferable to that. Simply insert the following line where you would like to inspect the state of various variables:

import pdb; pdb.set_trace()

This will run the code up to the line where you called pdb.set_trace() and then open a prompt where you can interactively look at the state of various variables, test the output of simple expressions, and step through the code following that statement. Pdb’s interactive prompt works particularly nicely in an iPython notebook, where it opens up below the active cell.

Keyboard shortcuts

Frequently using keyboard shortcuts is a hallmark of a productive programmer, but–for me, at least–it takes time to break inefficient keyboard habits and replace them with better ones. Here are a few of my favorite keyboard shortcuts covered in the lectures that frequently save me time. These shortcuts work in iPython Notebooks and, for the most part, at the command line and in text editors like Sublime Text 2. (See the “Keyboard Shortcuts” link in the help menu of the iPython notebook for more timesavers.)

Control + a: Move to beginning of line
Control + e: Move to end of line
Control + d: Delete the next character (aka reverse backspace)
Command + [: Indent the current line (or highlighted set of lines) one space to the left
Command + ]: Indent the current line (or highlighted set of lines) one space to the right
Command + /: Comment out the current line (or highlighted set of lines)

Stray Observations

- For more resources about software engineering for data scientists, see Trey Causey’s popular blog post on the topic.
- According to Jake, the shablona package is named for the Hebrew word for “template.” Google Translate is silent on the matter, and searching for “Shablona Hebrew” only reveals that there is a font called “Shablona” as well.
- Shablona also contains the structure needed to use Sphinx to automatically generate documentation for your code.
- In addition to unit testing, you may also run across regression testing and integration testing. Regression testing has nothing to do with statistical regressions; instead, regression tests check that steps which once generated a bug no longer generate that bug. Integration testing (the most relevantly named type of testing) tests the interactions between different units of code.
- You can remind yourself that the Unix command grep works with regular expressions if you recall that it stands for globally search a regular expression and print. (This, apparently, was the one new thing I learned from the course’s brief overview of the command line.)
- Unrelated, yet nice-to-look-at, header photo c/o David Melchor Diaz
Hi, I am using Notepad++ b/c it's faster than using the Eclipse IDE, so how do I add JUnit to my path variables? I'm using Windows 7 64-bit. Here is the error I get when I try to run my school assignment that uses JUnit in the Windows command prompt:

error: package org.junit does not exist

Here are the import statements I am using:

import static org.junit.Assert.*;
import org.junit.Before;
import org.junit.Test;

Any help appreciated, my assignment is due tomorrow at noon! In the meantime I am going to download Eclipse and try that, but I'd still like to figure out how to get it to work just using Notepad++.
#include <lazyinitthreaded.h>

Helper class for multithreaded lazy initialization. A typical use case is a method which initializes data on first call.

Performance note: Declaring a global LazyInitThreaded as a static member of a class will degrade its performance because the compiler will guard its access with a slow and unnecessary mutex. To avoid this, move the global state outside of the class.

By default, cancellation of the caller will be forwarded to the lambda passed to Init(), and after a ThreadCancelledError Init() will retry the initialization. THREADSAFE.

Initializes an object by calling the specified method (and does nothing if the object has already been initialized). The method #fn will be executed by a job and is free to execute long-running multithreaded code for initialization. Waiting threads will either participate in the initialization or go to sleep; they will not idle unnecessarily. If the initialization failed once and #retryOnFailure was false (the default), all following Init() calls will return the initial error. THREADSAFE.

Resets an object by calling the specified method. Does nothing if the object has already been reset. If the initialization failed, the optional reset method won't be invoked. THREADSAFE.

Returns whether the object has already been initialized. THREADSAFE.
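The pattern this class implements (run an expensive initializer exactly once, let racing threads block rather than spin, and cache the result) is not specific to the maxon API. As a rough sketch of the same idea, not the maxon implementation, and with all names invented for illustration:

```python
import threading

class LazyInit:
    """Sketch of lazy one-time initialization across threads.

    The first caller of get() runs the init function; threads that
    arrive while it is running block on the lock instead of idling,
    and every later call returns the cached value without locking.
    """

    def __init__(self, init_fn):
        self._init_fn = init_fn
        self._lock = threading.Lock()
        self._initialized = False
        self._value = None

    def get(self):
        if self._initialized:      # fast path once initialization is done
            return self._value
        with self._lock:           # racing threads sleep here, not spin
            if not self._initialized:
                self._value = self._init_fn()
                self._initialized = True
        return self._value

call_count = []

def expensive_init():
    call_count.append(1)           # record how many times we actually ran
    return "shared resource"

lazy = LazyInit(expensive_init)
threads = [threading.Thread(target=lazy.get) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(lazy.get(), len(call_count))  # shared resource 1
```

The double check of the flag (once without the lock, once with it) is what keeps the initializer from running more than once while still making the post-initialization path cheap; the C++ class additionally handles cancellation and retry-on-failure, which this sketch omits.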
This book is organized, more or less, to follow the evolution of a zone and its administrator. Chapters 1 and 2 discuss Domain Name System theory. Chapters 3 through 6 help you decide whether to set up your own zones, then describe how to go about it, should you choose to. Chapters 7 through 11 describe how to maintain your zones, integrate zone data with Active Directory, configure hosts to use your name servers, plan for the growth of your zones, create subdomains, and secure your name servers. Chapters 12 through 16 deal with common problems, management tools, and troubleshooting tools.

Here's a more detailed, chapter-by-chapter breakdown:

Chapter 1 provides a little historical perspective and discusses the problems that motivated the development of DNS. It presents an overview of DNS theory.

Chapter 2 goes over DNS theory in more detail, including the DNS namespace, domains, and name servers. We also introduce important concepts such as name resolution and caching.

Chapter 3 covers how to choose and acquire your DNS software if you don't already have it and what to do with it once you've got it; that is, how to figure out what your domain name should be and how to contact the organization that can delegate your domain to you.

Chapter 4 details how to set up your first two name servers, including creating your name server database, starting up your name servers, and checking their operation.

Chapter 5 deals with DNS's MX record, which allows administrators to specify alternate hosts to handle a given destination's mail. The chapter covers mail-routing strategies for a variety of networks and hosts, including networks with firewalls and hosts without direct Internet connectivity.

Chapter 6 explains how to configure a Windows resolver.

Chapter 7 describes the periodic maintenance administrators must perform to keep their domains running smoothly, such as checking name server health and authority.

Chapter 8 covers how to design the namespace for your Active Directory forest, how to use application partitions for zone storage, and how to enable secure dynamic updates. The chapter ends with a description of the various resource records used by domain controllers.

Chapter 9 covers how to plan for the growth and evolution of your domain, including how to get big and how to plan for moves and outages.

Chapter 10 explores the joys of becoming a parent domain. We explain when to become a parent (i.e., create subdomains), what to call your children, how to create them (!), and how to watch over them.

Chapter 11 goes over name server configuration options that can help you tune your name server's performance, secure your name server, and ease administration.

Chapter 12 shows the ins and outs of the most popular tools for doing DNS debugging, including techniques for digging obscure information out of remote name servers.

Chapter 13 examines dnscmd and other command-line utilities that can be used for configuring, managing, and updating the Microsoft DNS Server.

Chapter 14 details how to program with Microsoft's WMI DNS provider. This chapter includes examples of reading and modifying name server configurations and updating zone data using scripts written in VBScript and Perl.

Chapter 15 covers many common DNS problems and their solutions and then describes a number of less common, harder-to-diagnose scenarios.

Chapter 16 ties up all the loose ends. We cover DNS wildcards, special configurations for networks that connect to the Internet through firewalls, and hosts and networks with intermittent Internet connectivity via dial-up.

Appendix A contains a byte-by-byte breakdown of the formats used in DNS queries and responses as well as a list of commonly used resource record types.

Appendix B covers migrating from an existing BIND 4 name server to the Microsoft DNS Server.

Appendix C lists the current top-level domains in the Internet domain namespace.
I am creating a new site to replace an existing one and want to preserve URLs that people have bookmarked and that appear in search engines. My solution is to create a route that takes a historical URL and passes it along to the controller of my choice, with a new id looked up in a table. This seems to work in theory, but I am stuck on something. Here is my approach. I want to take the URL:

    /category/123

look up 123 in a table, which will return a new id for my new catalog, and send the user to a different controller with id 456 (or whatever the looked-up value is).

I have a method that is called in the route and will return a value. The problem is passing the supplied id (named :cid in my code below). I can't seem to pass the value that is in the URL. Everything else works. I can hard-code the parameter to the catmap method and the route works.

routes.rb:

    def catmap(cat)
      list = { '60' => '2', '61' => '3' }
      return list[cat]
    end

    class ApplicationController < ActionController::Base
      …
      map.connect 'category/:cid', :controller => 'list', :action => 'model', :id => catmap(cid)
      …
    end
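For what it's worth, the table lookup itself can be sketched in plain Ruby (table contents are the poster's example values). The sticking point is that routes.rb is evaluated once at boot, so an expression like :id => catmap(cid) never sees the per-request :cid; the commented-out alternative below, with a hypothetical "legacy" action, does the lookup at request time instead:

```ruby
# Mapping of legacy category ids to new catalog ids, mirroring the
# poster's catmap method. Note the hash lookup is list[cat] with
# square brackets; list(cat) would be a method call.
LEGACY_IDS = { '60' => '2', '61' => '3' }

def catmap(cid)
  LEGACY_IDS[cid]  # returns nil for ids with no mapping
end

# Routes are defined at boot, before any request exists, so the lookup
# has to happen inside a controller action. One way (action name
# "legacy" is a hypothetical addition, not from the original post):
#
#   map.connect 'category/:cid', :controller => 'list', :action => 'legacy'
#
#   def legacy
#     redirect_to :action => 'model', :id => catmap(params[:cid]),
#                 :status => :moved_permanently  # 301 keeps bookmarks and
#   end                                          # search engines up to date
```

A permanent (301) redirect is the usual choice here, since the stated goal is preserving bookmarked and search-indexed URLs.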