text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringclasses
91 values
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
way we can make this better, and all high quality bundles do this: set the event name as a constant, instead of just having this random string. It's even a bit cooler than it sounds. In the Event directory, create a new class: KnpULoremIpsumEvents. If your bundle dispatches events, you should typically have one class that has a constant for each event. It's a one-stop place to find all the event hook points. Make this class final... which isn't too important... but in general, you should considering making any class in a shareable library final, unless you do want people to be able to sub-class it. Using final is always a safe bet and can be removed later. Anyways, add const FILTER_API = '', go copy the event name and paste it here. Now, of course, replace that string in the controller with KnpULoremIpsumEvents::FILTER_API. So, this is nice! Though, the reason I really like this is that it gives us a proper place to document the purpose of this event: why you would listen to it and the types of things you can do. But the coolest part is this: add @Event(), and then inside double quotes, put the full class name of the event that listeners will receive. In other words, copy the namespace from the event class, paste it here and add \FilterApiResponseEvent. What the heck does this do? On a technical level, absolutely nothing! This is purely documentation. But! Some systems - like PhpStorm - know to parse this and use it to help us when we're building event subscribers. We'll see exactly what I'm talking about in a minute. But, it's at least good documentation: if you listen to this event, this is the event object you should expect. And... we're done! I'm not going to write a test for this, but I do at least want to make sure it works in my project. Move back over to the application code. Inside src/, create a new directory called EventSubscriber. Then, a new class called AddMessageToIpsumApiSubscriber. Like all subscribers, this needs to implement EventSubscriberInterface. Then I'll go to the Code -> Generate menu, or Command + N on a Mac, select Implement Methods, and add getSubscribedEvents. Before we fill this in, I want to make sure that PhpStorm is fully synchronized with how our bundle looks - sometimes the symlink gets stale. Right click on the vendor/knpuniversity/lorem-ipsum-bundle directory, and click "Synchronize". Cool: now it will definitely see the new event classes. When it's done indexing, return an array with KnpULoremIpsumEvents::FILTER_API set to, how about, onFilterApi. Ready for the magic? Thanks to the Symfony plugin, we can hover over the method name, press Alt + Enter and select "Create Method". Woh! It added the onFilterApi method for me and type-hinted the first argument with FilterApiResponseEvent! But, how did it know that this was the right event class? It knew that thanks to the @Event() documentation we added earlier. Inside the method, let's say $data = $event->getData() and then add a new key called message set to, the very important, "Have a magical day". Finally, set that data back on the event with $event->setData($data). That is it! Thanks to Symfony's service auto-configuration, this is already a service and it will already be an event subscriber. In other words, go refresh the API endpoint. It, just, works! Our controller is now extensible, without the user needing to override it. Dispatching events is most commonly done in controllers, but you could dispatch them in any service. Next, let's improve our word provider setup by making it a true plugin system with dependency injection tags and compiler passes. Woh.
https://symfonycasts.com/screencast/symfony-bundle/event-docs
CC-MAIN-2020-16
refinedweb
626
74.9
I want to display all directories, that do not contain files with a specific file ending. Therefore I tried using the following code: find . -type d \! -exec test -e '{}/*.ENDING' \; -print In this example I wanted to display all directories, that do not contain files with the ending .ENDING, but this does not work. .ENDING Where is my mistake? The shell expands the *, but in your case there's no shell involved, just the test command executed by find. Hence the file whose existence is tested, is literally named *.ENDING. * *.ENDING Instead you should use something like this: find . -type d \! -execdir sh -c 'test -e {}/*.ENDING' \; -print This would result in sh expanding *.ENDING when test is executed. Source: find globbing on UX.SE -c: line 0: syntax error near unexpected token sh: line 0: test: foo1/bar.ENDING: binary operator expected ENDING sh: -c: line 0: syntax error near unexpected token Here's a solution in three steps: temeraire:tmp jenny$ find . -type f -name \*ENDING -exec dirname {} \; |sort -u > /tmp/ending.lst temeraire:tmp jenny$ find . -type d |sort -u > /tmp/dirs.lst temeraire:tmp jenny$ comm -3 /tmp/dirs.lst /tmp/ending.lst comm -3 <(find . -type f -name \*ENDING -exec dirname {} \; |sort -u) <(find . -type d |sort -u) Here we go! #!/usr/bin/env python3 import os for curdir,dirnames,filenames in os.walk('.'): if len(tuple(filter(lambda x: x.endswith('.ENDING'), filenames))) == 0: print(curdir) Or alternately (and more pythonic): #!/usr/bin/env python3 import os for curdir,dirnames,filenames in os.walk('.'): # Props to Cristian Cliupitu for the better python if not any(x.endswith('.ENDING') for x in filenames): print(curdir) Bonus DLC Content! The (mostly) corrected version of the find command: find . -type d \! -exec bash -c 'set nullglob; test -f "{}"/*.ENDING' \; -print test: ...: binary operator expected next(filter(lambda x: x.endswith('.ENDING'), filenames)) next(x for x in filenames if x.endswith('.ENDING')) if not any(x.endswith('.ENDING') for x in filenames) False I'd do it in perl personally #!/usr/bin/perl use strict; use warnings; use File::Find; #this sub is called by 'find' on each file it traverses. sub checkdir { #skip non directories. return unless ( -d $File::Find::name ); #glob will return an empty array if no files math - which evaluates as 'false' if ( not glob ( "$File::Find::name/*.ENDING" ) ) { print "$File::Find::name does not contain any files that end with '.ENDING'\n"; } } #kick off the search on '.' - set a directory path or list of directories to taste. # - you can specify multiple in a list if you so desire. find ( \&checkdir, "." ); Should do the trick (works on my very simplistic test case). Inspired by Dennis Nolte's and MikeyB's answers, I came up with this solution: find . -type d \ \! -exec bash -c 'shopt -s failglob; echo "{}"/*.ENDING >/dev/null' \; \ -print 2>/dev/null It works based on the fact that if the failglob shell option is set, and no matches are found, an error message is printed and the command is not executed. if the failglob shell option is set, and no matches are found, an error message is printed and the command is not executed. By the way, that's why stderr was redirected to /dev/null. /dev/null Heres a find one-liner. find ./ -type d ! -regex '.*.ENDING$' -printf "%h\n" | sort -u Edit: Oops, wont work either. find . -type d '!' -exec sh -c 'ls -1 "{}"|egrep -q "*ENDING$"' ';' -print q in egrep is for quiet q With egrep you can swap the regex that you need egrep ls -1 "{}" outputs the file names from the find command ls -1 "{}" By posting your answer, you agree to the privacy policy and terms of service. asked 8 months ago viewed 526 times active 3 months ago
http://serverfault.com/questions/605468/find-directories-where-files-with-a-specific-ending-are-missing?answertab=oldest
CC-MAIN-2015-11
refinedweb
635
77.33
I have one poll table having with question filed and each question have 4 answer which is has_many relation ship with answer table class Poll < ActiveRecord::Base has_many :answer end My answer model class Answer < ActiveRecord::Base belongs_to :poll, :class_name => "Poll", :foreign_key => "question_id" end My active admin form is form :html => {:validate => true} do |f| f.inputs "Polls" do f.input :question f.has_many :answer, :sortable => :priority do |ff| ff.input :question[] end f.input :status,as: 'hidden',:input_html => { :value => f.object.status.present? ? f.object.status : 'Unpublished' } f.input :position,as: 'hidden',:input_html => { :value => Poll.count().to_i + 1} end f.actions end I want to show only 4 answer textbox into my form, how can i do that Add the following to your Polls Controller- def new @poll = Poll.new 4.times do @poll.answers.build end end
http://databasefaq.com/index.php/answer/126/ruby-on-rails-gem-activeadmin-activeadmin-show-4-textbox-in-active-admin-using-has-many-relationship
CC-MAIN-2019-18
refinedweb
139
53.27
. Is 0x10 a hexadecimal literal or a binary literal? How do we differentiate between the two in the versions prior to c++14? Hex. Binary literals start with 0b. HOW CAN WE PRINT OR RETURN VALUES IN ANY OTHER NO. SYSTEM THAN DECIMAL. See this article. Literal constants (usually just called “literals”) are values inserted directly into the code. They are constants because you can’t change their values. For example: int x = 5; But isnt this a variable that i can change anytime? im sry, im confused in this one, literal constants are variables too? why they are called constant then? Please help me 🙁 5 is the literal, not x (x is a variable). 5 has the value 5. You can’t change that. I see, thanks Was also confused. I would suggest something like that: bool myNameIsAlex = true; // boolean variable ‘myNameIsAlex’ is assigned a boolean literal constant ‘true’ int x = 5; // integer variable x is assigned an integer literal constant 5 std::cout << 2 * 3; // 2 and 3 are integer literals Updated per your suggestion. Looks like someone has had a bad time with magic numbers 🙂 In the code fragment on hexidecimal below on line 15 you have the incorrect binary value for B. it should be 1011 not 1010 Thanks, fixed. Typo towards the end of section "Literal constants": "By default, floating point literal constant have a type of double." "Constant" should be plural. Fixed, thanks. Hi Alex, There are many vital pieces of information about "Binary" in section 2.1 & 2.3 (… all data on a computer is just a sequence of bits….….) In this section, Octal and Hexadecimal are introduced…but I don’t know what they are for? Thanks, Have a nice day. As I mention in the lesson, octal is hardly ever used, so you can basically forget about it. Hexadecimal values are used a lot though, mainly because two hexadecimal values cover 8 bits, which is a byte. Therefore, when we talk about the contents of a memory address (which are a byte), instead of representing those contents as 8 binary digits, it’s much easier to represent them as 2 hexadecimal digits. I believe you meant to use "ways" in the following line: While boolean and integer literals are pretty straightforward, there are two different way to declare floating-point literals: thanks - appreciate all your hard work! Thanks for the typo notification. Fixed! does gcc support C++14? Yes, some newer versions do. But you have to pass it a flag to enable that functionality (-std=c++14 or -std=c++1y, depending on gcc version) I’m CLICKING ON EVERY AD! You’re welcome. Well that’s one way to live your life. another is to read through the comments trying to find ones you can write pithy replies to :)) Hello, I tried to do a converter decimal/hexadecimal or decimal/octal with his code : But it seems that using 0x01 and 0number doesn’t work if you write it in the consol. Is their any way to create a converter like that ? Yes, use this instead: I don’t really understand the definition of litteral, because in math I found this definition : "Literal numbers are the letters which are used to represent a number." But here it seems to have a totally different signification, but which ? Literals in programming are values typed directly into the code. For example: 4 6.0f “hello” Your syntax highlighter certainly makes a mess of the infix apostrophes in binary literals…I guess it’s not C++14 compliant 🙂 Yeah, it definitely butchers those. Hopefully it’ll get fixed in a future version. alex, thx for all these best tuts. i want to ask u, are u now going to write tuts for some other platforms also or not??….it will be best for me if u will write javaEE. Nope. I barely have enough time to write these tutorials and answer questions, let alone try and do this for two languages. 🙂 I’m confused about the float type default of double. Why isn’t the default of float set to float? It seems weird to have to use a suffix to specify that the type should be float for a type that’s already been defined as float. Where does the double come in? Are you asking why for the following: 4.0 isn’t assumed to be a float since it’s assigned to a float variable? I presume because: * C doesn’t have type inference (C++11 does, but this was inherited from C). * Type inference works from right to left, not left to right. * This gives the programmer has explicit control over what 4.0 means (type double) regardless of what’s on the left-hand side of the literal. Why 10 instead of 12? Because numbers are stored in decimal, and 12 octal = 10 decimal. I think it should be: Because numbers are being printed in… (or something) They actually stored in binary. Good call. Updated the wording. Thanks for the tutorial HI there Alex, I am a bit confused as to why one needs to type in the suffixes of the data types? They are not necessary right? The suffixes of the data types tell C++ how to interpret a literal. For example, if you type in 5.2, C++ knows this is a floating point literal, but it doesn’t know whether you meant a float or a double. So it assumes double. If you wanted/needed 5.2 to be a float, you’re better off specifying the literal as 5.2f, so C++ knows you meant a float, not a double. Otherwise, if you do this: C++ will convert 5.2 from a double to a float before assigning to variable f, and you may lose some precision. Suffixes are only needed if the default type for a literal isn’t sufficient for your needs. Generally, when using literals, it’s a good idea to ensure your literal has the same type as the variable it’s being assigned to, to minimize the chance of something unexpectedly going wrong somewhere. A question: why FF in hexadecimal is 255 in decimal? That’s simple hexa coding, you definitely should re-read about it. "F" in hexa stands for a "15" in decimal (1,2,3,4,5,6,7,8,9,A,B,C,D,E,F), so when you have 0xFF to convert it to decimal: 0xFF = F * 16^1 + F * 16^0 = 15 * 16 + 15 = 240 + 15 = 255. Alejandro Hi Alex Can you add ";" to your examples? Done. Thanks for pointing that out. A good thing to have in mind counting in oct and hex: oct is base 8, so starting the count is 0, 1, 2, 3, 4, 5, 6, 7 all 8 digits were used, so now we add 1 to the left: 10, 11, 12, 13, 14, 15, 16, 17 20, 21, 22, 23, 24, 25, 26, 27 etc. same for hex, base 16: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F add 1 left: 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 1A, 1B, 1C, 1D, 1E, 1F this principle is for decimal as well, just an easy way to count… I think you made a typo here above code should be -Kanchana I sure did. Thanks for noticing. The last example seems to have wrong comments: should be: Thanks for the tutorial ! Yup, you’re correct. Thanks for noticing! Above you have: Octal is base 8 — that is, the only digits available are: 0, 1, 2, 3, 4, 5, 6, 7, 8. That would be base 9, though, right? Yup! Good catch. Octal is base 8 - because the digits available are : 0,1,2,3,4,5,6,7. Hi! I’m not totally new to C++, but new to this course. (Yes, I’m calling it a “course” ’cause it’s better than all other C++ teaching I’ve come across in the past - THANK YOU SO MUCH!) Anyhow… I’ve been digging through the tutorials very seriously. All of the stuff is really well explained and if a piece of information happens to be missing, it is usually covered by the comments. However, here (in 2.8), I’m a bit lost… Can you explain to me, why we’d want to add an “L” to long nValue2 = 5L; or an f to float fValue = 5.0f; …? They have already been declared as being “long” and “float”… So what’s the point here…??? Thx! There aren’t a whole lot of reasons you’d need to specify the L prefix. But lets say you had two functions: void doSomething(int); void doSomething(long); if you called doSomething(5), you’d get the int version instead of the long version. Using doSomething(5L) would get you version that takes the long parameter. There are probably other obscure examples. The f prefix is used more commonly, because floating point numbers have weird truncation/rounding issues. Consider the following code: This prints “not equal”! Why? When 0.67 gets assigned to f, it gets truncated to 0.670000017. When you compare that truncated value to the double value 0.67, it doesn’t match! This one works as expected: Hi Alex, I’m new to c++. My question may be dump to you but what’s the difference between “int = 5” or “int == 5”. there is no such thing like “int = 5” or “int == 5? “int” is a keyword used to define variables type thanks a lot dear. this is very nice tutorial.. i m not new to c++ also not proficient but getting good concepts from this tutorial Alex: Ref.: it is a good idea to try to avoid using literal constants that aren’t 0 or 1. So I should use only literal constants that are 0 or 1? As for above. Generally I only use the literal constant 0 -- anything else is generally defined as a symbolic constant. Literal constants are literal numbers inserted into the code. They are constants because you can’t change their values. int x = 5; // 5 is a literal constant I don’t understand this, you can change 5 to 6, how is it unchangable? If you use the number 6 instead of 5, you are using a different literal, not changing the value of a literal. In other words, literals are constants because the symbol 5 always has the value 5. You can’t change the symbol 5 to the value 6, or any other number. When declaring an integer variable that isn’t const, the value of the variable x can be reassigned to, later on. int x = 5; // declares x as an integer variable and assigns 5 to x x = 2; // 2 is now assigned to x, instead of 5 x = 4; // 4 is now assigned to x, instead of 2 int x = 5; // declares x as an integer variable and assigns 5 to x x = 2; // 2 is now assigned to x, instead of 5 x = 4; // 4 is now assigned to x, instead of 2 In the above example, x is declared and the number 5 is assigned to x. We can then assign another number to x later on, which will then change the value of x. We can check that the values have changed by printing them to the console: using namespace std; int x = 5; cout << x << endl; // the value of x displayed here is 5 x = 2; cout << x << endl; // the value changes to 2 x = 4; cout << x << endl; // now it is 4 using namespace std; int x = 5; cout << x << endl; // the value of x displayed here is 5 x = 2; cout << x << endl; // the value changes to 2 x = 4; cout << x << endl; // now it is 4 This outputs: 5 2 4 5 2 4 However when we declare a variable as const, and assign a value to it, we cannot assign another value later on: const int x = 5; // declares x as an constant integer variable and assigns 5 to x x = 2; // compiler error, as we cannot assign another value to x const int x = 5; // declares x as an constant integer variable and assigns 5 to x x = 2; // compiler error, as we cannot assign another value to x Although the compiler comes up with an error “you cannot assign to a variable that is const”, this is misleading, as you can assign (initialize) a literal constant to a variable only once. Consequently its value remains constant throughout the entire program. P.S. I’m also a newbie learning C++, but I’m just getting this logic from what Alex has written in the past, along with some practise on Visual C++ 2010 Express! P.P.S. I’m loving these tutorials Alex 😉 Name (required) Website
http://www.learncpp.com/cpp-tutorial/28-literals/
CC-MAIN-2017-26
refinedweb
2,159
71.75
Tools for administering Active Directory Posted by: Richard Siddaway It was pointed out in a comment that in my series of posts on administering Active Directory (started with and the posts coming forward) I hadn’t actually discussed the tools I was using in the posts – in the spirit of better now then never I’ll put that right. There are two areas of Active Directory administration we should think about: - Data – users, groups, OUs, computers (possibly plus domains and forests) - Service – sites, subnets, site links, replication, schema Up to now I have concentrated on the data – mainly user administration. I will expand to other areas as we proceed. Any one who has spent any time administering AD soon becomes familiar with AD Users and Computers and the associated tools. These are great for doing the odd ad hoc job but for bulk investigation or processing we need scripts. In years gone by this would have been VBScript but the world has moved on and PowerShell is the scripting tool of choice for the savvy Windows administrator. If you need to get started with PowerShell look a the books available at. PowerShell in a month of lunches is a great starter. PowerShell in Practice and PowerShell and WMI will extend that knowledge to actually using PowerShell in the real world. Now that we’ve decided PowerShell is our admin tool of choice – how do we work with AD. The starting point is that there are no AD admin cmdlets built into PowerShell v2 (or v3 for that matter). However, we do have access to a number of tools that we can use through PowerShell. The Quest AD cmdlets have been available for a number of years. First issued in 2007 they are a free download in 32 and 64 bit versions from. A pdf manual is also available. They install as a snapin on your workstation. These cmdlets have a good coverage of AD data administration and also include PKI and Quest’s Active Roles administration. They have a number of advantages – especially the fact that you don’t have to install anything on your domain controllers and that they work with AD versions from Windows 2003 to Windows 2008 R2 out of the box. The main draw back is that they are non-Microsoft which is a big negative in some organisations. Microsoft introduced a set of cmdlets for administering AD and a provider with Windows 2008 R2. If your domain controllers are running an earlier version of WIndows you can download versions of the cmdlets for Windows 2003 and Windows 2008 from the links provided at ttp://blogs.msdn.com/b/adpowershell/archive/2009/09/18/active-directory-management-gateway-service-released-to-web-manage-your-windows-2003-2008-dcs-using-ad-powershell.aspx. The Microsoft cmdlets work through a web service that runs on the domain controllers. This is installed by default on Windows 2008 R2 but needs a specific install for older versions. I find this a drawback for legacy versions as I like to keep my domain controllers as clean as possible. If your domain is Windows 2008 R2 then install the RSAT tools on your workstation and you will get the AD cmdlets and provider. Having looked at the Microsoft provider in a fair amount of detail recently I have to admit that it is better than I thought. I don’t like the navigational requirement to use ou = xxx to determine path but it is liveable with especially in scripts. Scripting has a venerable tradition for AD administration. In PowerShell we use the [adsi] type accelerator which is a shortcut to System.DirectoryServices.DirectoryEntry. The class is a wrapper for standard ADSI access to AD. ADSI is COM based to add another level of complexity. All of these wrappers modify the objects returned to a greater or lesser extent. This can create some confusion as the methods you are used to from VBScript are available but not visible. You need to know how to use these objects to get the most from them – which is where the posts come in. Searching AD in VBScript was painful but in PowerShell we get [adsisearcher] which is a type accelerator for System.DirectoryServices.DirectorySearcher. We have seen this in a number of posts – and will see it in the future. Also available in the .NET fold for access through scripts is the System.DirectoryServices.ActiveDirectory namespace. This provides access to a number of classes that make it easier to deal with the service side – sites etc as we will see later. The latest addition with .NET 3.5 (needed for ISE and out-gridview) is System.DirectoryServices.AccountManagement. This namespace provides access to users and groups. The syntax is more complex but it supplies easy access to a number of pieces of functionality that we can’t do using other .NET classes. Finally the System.DirectoryServices.Protocols supplies access to some deep level aspects of AD – for instance we can use it to return an object from being tombstoned. This namespace is not well documented and is not easy to decipher its usage. Recommendations: I still turn to the Quest cmdlets – I’ve been using them since I was involved with the original beta testing. if you have Windows 2008 R2 you have the Microsoft cmdlets which provide analogous functionality. I would recommend using one (or better still both) sets of cmdlets. Use scripting to fill in the gaps and leave the provider alone except for specific jobs – it is very good for bulk creation of OUs! Question: I have been asked if I will be pulling all of the code I’m publishing on AD into a book. I hadn’t thought about it up to now. Is there enough interest for such a book?
http://itknowledgeexchange.techtarget.com/powershell/tools-for-administering-active-directory/
crawl-003
refinedweb
970
62.27
Making Mock Data Circular Making Mock Data Circular Join the DZone community and get the full member experience.Join For Free Was writing a test today where I was doing a Command Pattern that would allow me to contain a set of repeated invocations in a timebox. In other words: call me as many times as possible in x milliseconds. So for the test, I was using random #s, but that kept going to zero after a ton of invocations. Got out my Distribution class that allows me to say 'give me random #s with x% in range between y and z.' Once I do that, though, I have to know how many #s I am going to need, and I don't really. First, I did the usual TDD thing: write it and let it fail. Sure enough, it fails on .next() after the iterator is exhausted. Did some searches on whether there is a reset method on some specialized iterator. But then, I decided it would just make more sense to make a simple CircularIterator, like so: /** * Provides the ability to give a fixed sample, and this class will provide an * {@link Iterator} that will just repeatedly loop through the collection. * * @author Rob * * @param * @Pattern Decorator * */ public class CircularIterator implements Iterator { /** * Just using a real iterator here, after running through the set each time. */ private Iterator i; /** * The sample we want to cycle through. */ private Collection collection; public CircularIterator(Collection collection) { this.collection = collection; this.i = collection.iterator(); } /** * Will always return true, since we are just looping. */ public boolean hasNext() { return i.hasNext(); } /** * Gets the next element. If there are no more, the iterator {@link #i} is * recreated. */ public T next() { if (!i.hasNext()) i = collection.iterator(); return i.next(); } /** * Just proxies call so should work as usual. */ public void remove() { i.remove(); } } Of course, this means that you are going to just get the same #s over and over again, but in a lot of cases (like mine), that doesn't really matter: scores, confidence levels, donations, whatever. This is a good example of the Decorator pattern: it implements the interface, holds an instance of the type it's extending, and proxies on the methods it doesn't need to change. The nice part is that I can just create this, using the same declarations and then call next() for as long as I like. From Opinions expressed by DZone contributors are their own. {{ parent.title || parent.header.title}} {{ parent.tldr }} {{ parent.linkDescription }}{{ parent.urlSource.name }}
https://dzone.com/articles/making-mock-data-circular
CC-MAIN-2019-18
refinedweb
419
57.06
Programming with Python Errors and Exceptions Learning Objectives - To be able to read a traceback, and determine the following relevant pieces of information: - The file, function, and line number on which the error occurred - The type of the error - The error message - To be able to describe the types of situations in which the following errors occur: SyntaxErrorand IndentationError NameError IndexError FileNotFoundError: import errors_01 errors_01.favorite_ice_cream() --------------------------------------------------------------------------- IndexError Traceback (most recent call last) <ipython-input-1-9d0462a5b07c> in <module>() 1 import errors_01 ----> 2 errors_01.favorite_ice_cream() /Users/jhamrick/project/swc/novice/python/errors_01.pyc in favorite_ice_cream() 5 "strawberry" 6 ] ----> 7 print(ice_creams[3]) 2 (which is favorite_ice_cream()). The second shows some code in another function ( favorite_ice_cream, located in the file errors_01.py), with an arrow pointing to Line 7 7, when it tried to run the code print(ice_creams[3]). just just is that you just forgot to create the variable before using it. Item just open myfile.txt, this will fail. The correct path would be writing/myfile.txt. It is also possible (like with NameError) that you just made (without running it) try to identify what the errors are. - Run the code, and read the error message. What type of error is it? - Fix the error. seasons = ['Spring', 'Summer', 'Fall', 'Winter'] print('My favorite season is ', seasons[4])
https://cac-staff.github.io/summer-school-2016-Python/07-errors.html
CC-MAIN-2022-33
refinedweb
218
57.27
IRC log of wam on 2009-03-05 Timestamps are in UTC. 14:00:10 [RRSAgent] RRSAgent has joined #wam 14:00:10 [RRSAgent] logging to 14:00:12 [Zakim] +??P1 14:00:19 [ArtB] RRSAgent, make log Public 14:00:21 [fjh] zakim, ??P1 is fjh 14:00:21 [Zakim] +fjh; got it 14:00:46 [ArtB] ScribeNick: ArtB 14:00:49 [ArtB] Scribe: Art 14:00:52 [ArtB] Chair: Art 14:00:57 [ArtB] Date: 5 March 2009 14:01:05 [ArtB] Meeting: Widgets Voice Conference 14:01:15 [ArtB] Agenda: 14:01:25 [ArtB] Regrets: Claudio, Bryan 14:01:33 [fjh] widgets signature editors draft update 14:01:35 [fjh] 14:01:54 [Zakim] +Josh_Soref 14:02:13 [Zakim] +Jere 14:02:14 [Marcos] Marcos has joined #wam 14:02:21 [Zakim] + +47.23.69.aaaa 14:02:30 [timeless] timeless has joined #wam 14:02:33 [ArtB] Present: Art, Frederick, Josh, Jere, Marcos, Arve 14:02:42 [fjh] 14:02:43 [Zakim] +??P7 14:02:56 [ArtB] zakim, ??P7 is David 14:02:57 [Zakim] +David; got it 14:03:01 [ArtB] Present +David 14:03:07 [arve_] arve_ has joined #wam 14:03:17 [ArtB] Topic: Review and tweak agenda 14:03:17 [arve_] zakim, who is on the phone? 14:03:17 [Zakim] On the phone I see Art_Barstow, fjh, Josh_Soref, Jere, +47.23.69.aaaa, David 14:03:17 [timeless] zakim, who is on? 14:03:20 [Zakim] I don't understand your question, timeless. 14:03:25 [ArtB] AB: agenda posted March 4 - is 14:03:36 [arve_] zakim, aaaa is Arve/Marcos 14:03:36 [Zakim] +Arve/Marcos; got it 14:03:36 [drogersuk] drogersuk has joined #wam 14:03:53 [ArtB] AB: the main agenda items are Open Issues. I only want to spend a few minutes on each of them to get a sense of where we are e.g. still Open, pending inputs, can be Closed. Any detailed technical discussions should occur on public-webapps mail list. 14:04:00 [ArtB] ... Are there any change requests? 14:04:08 [ArtB] [ None ] 14:04:13 [ArtB] Topic: Announcements 14:04:20 [ArtB] AB: I don't have any urgent announcements 14:04:25 [ArtB] ... what about others? 14:04:48 [ArtB] FH: please submit comments on XML Sig 1.1 drafts 14:05:04 [ArtB] DR: I will respond to Art's BONDI 1.0 email so please look at that 14:05:29 [fjh] please review XML Signature 1.1 and XML Signature Properties FPWD 14:05:30 [fjh] 14:05:32 [ArtB] MC: I uploaded the Window Modes spec; would like to get that on the agenda 14:05:58 [ArtB] Topic: DigSig + P&C synchronization 14:06:11 [ArtB] AB: earlier this week Frederick asked me if the DigSig + P&C specs are now in synch, based on last week's discussions? 14:06:14 [fjh] 14:06:22 [ArtB] ... I believe the answer is yes. 14:06:43 [ArtB] AB: where are we on this? 14:07:01 [ArtB] MC: FHI and I talked about this 14:07:15 [ArtB] ... I think this is mostly now addressed 14:07:45 [ArtB] ... P&C has no real depedency on DigSig 14:07:55 [ArtB] s/FHI/FH/ 14:08:01 [fjh] marcos notes merged steps 4 +5, moved locating to dig sig, removed signature variable from p + c 14:08:06 [ArtB] ... I haven't completed the P&C changes yet 14:08:21 [ArtB] ... e.g. renumber some steps 14:08:22 [fjh] fjh notes revised text on locating to fit it within digsig but essence is same 14:08:42 [ArtB] FH: I had to revise the location text a bit but the logic is the same 14:09:00 [ArtB] ... Josh asked about the sorting 14:09:28 [ArtB] ... I need to think about that a bit more 14:09:45 [ArtB] JS: need to clarify diff between "9" and "009" 14:09:55 [ArtB] ... we can take this discussion to the list 14:10:03 [ArtB] FH: I agree we need more rigor here 14:10:10 [ArtB] MC: I agree too 14:10:47 [ArtB] ... need to address case sensitivity too 14:11:04 [ArtB] AB: can we point to some existing work? 14:11:31 [ArtB] FH: I don't think this is a big issue and agree we can discuss on the list 14:11:52 [ArtB] AB: what needs to be done then? 14:12:07 [ArtB] FH: I need to make a few changes to DigSig and MC needs to do a bit more on P&C 14:12:22 [abraun] abraun has joined #wam 14:12:36 [ArtB] JS: re styling, orange doesn't work well for me regarding readability 14:12:43 [ArtB] MC: I can help with that 14:13:12 [ArtB] FH: I'll take a pass at that 14:13:40 [ArtB] DR: re the ell curve issue, I have asked OMTP to provide comments by March 9 so I should have data for the WG by Mar 12 14:13:50 [ArtB] Topic: Issue-19 - Widgets digital Signatures spec does not meet required use cases and requirements; 14:13:58 [ArtB] AB: do we now consider this issue adequately addressed to close it? 14:14:05 [ArtB] ... < > 14:14:32 [fjh] zakim, unmute me 14:14:32 [Zakim] fjh should no longer be muted 14:14:35 [ArtB] AB: my gut feel here is this is now addressed and we can close it. 14:14:38 [ArtB] AB: any comments? 14:14:54 [ArtB] MC: the DigSig enumerates reqs it addresses 14:15:03 [ArtB] ... it's a bit out of sync 14:15:22 [ArtB] ... we need to sync the Reqs doc with the DigSig spec re the reqs 14:15:37 [ArtB] MC: so I think we can close it 14:15:45 [ArtB] AB: any other comments? 14:15:46 [fjh] zakim, unmute me 14:15:46 [Zakim] fjh was not muted, fjh 14:15:59 [ArtB] FH: not sure how much synching we need to do on the reqs 14:16:19 [ArtB] ... I do think we can close this issue 14:16:52 [ArtB] RESOLUTION: we close Issue #19 as the spec now adresses the original concerns 14:17:04 [ArtB] Topic: Issue-80 - Runtime localization model for widgets 14:17:05 [fjh] zakim, mute me 14:17:05 [Zakim] fjh should now be muted 14:17:15 [ArtB] AB: are there still some pending actions and input needed? 14:17:23 [ArtB] ... < > 14:17:44 [ArtB] ... what is the plan for the next couple of weeks? 14:17:55 [ArtB] MC: I added a new example to the latest ED 14:18:05 [ArtB] ... I still have some additional work on the model 14:18:12 [ArtB] ... I talked with JS earlier today 14:18:22 [ArtB] ... I'm still uneasy re the fwd slash "/" 14:18:30 [ArtB] ... we must maintain the semantics of URI 14:18:49 [ArtB] ... Need to understand if we can do it without the leading / 14:19:01 [ArtB] ... and to still have the fallback model 14:19:12 [Marcos] 14:19:42 [ArtB] AB: note there are related actions 298 and 299 14:20:09 [ArtB] AB: are there other inputs you need? 14:20:31 [ArtB] MC: by the end of the day I hope to have something to share with Jere and Josh 14:21:23 [ArtB] JK: I will review it later and send comments 14:22:19 [ArtB] AB: we need not just Editors but technical contributors too 14:22:36 [ArtB] DR: it would be helpful if MC could identify areas where Bryan can help 14:23:12 [ArtB] AB: any other comments on #80? 14:23:17 [ArtB] ... we will leave that open for now 14:23:26 [ArtB] Topic: Issue-82 - potential conflict between the XHTML <access> and Widget <access> element. 14:23:43 [ArtB] AB: What, if anything, should be done? 14:23:58 [ArtB] ... < > 14:24:16 [ArtB] MC: re last Topic, Jere, please consider XML Base when you review the new inputs 14:24:36 [ArtB] JK: yes, good point and that should be reflected in the spec 14:25:09 [ArtB] MC: this can be conceived of as a virtual file system at the conceptual level 14:25:25 [ArtB] JK: don't want the spec to specify a file system 14:25:44 [ArtB] MC: agree; I was just using that as part of my mental model 14:26:14 [JereK] I thought it was just shuffling URLs also in impl 14:26:25 [ArtB] AB: re #82 was not discussed in Paris 14:26:30 [ArtB] ... what are people thinking? 14:26:43 [ArtB] MC: I think we can close this since we are using a separate namespace 14:26:46 [ArtB] Arve: agree 14:26:54 [ArtB] AB: other comments? 14:26:59 [ArtB] AB: I completely agree 14:27:20 [timeless] "namespaces will save us ;-)" 14:27:28 [ArtB] AB: propose we close this with a resolution of "we address this by defining our own namespace" 14:27:35 [ArtB] AB: any objections to this proposal? 14:27:36 [JereK] or "believe in namespaces or not" :) 14:27:55 [ArtB] RESOLUTION: close Issue #82 - we address by defining our own namespace 14:28:08 [ArtB] Topic: Issue-83 - Instantiated widget should not be able to read digital signature. 14:28:25 [ArtB] AB: What is the status of this issue and is this against P&C spec of DigSig spec? 14:28:31 [ArtB] ... < > 14:28:44 [ArtB] AB: did you create this Marcos? 14:28:55 [fjh] q+ 14:28:58 [ArtB] MC: yes. It was raised by Marcoss 14:29:06 [fjh] zakim, unmute me 14:29:06 [Zakim] fjh should no longer be muted 14:29:14 [Marcos] s/Marcoss/Mark 14:29:29 [ArtB] FH: this issues identifies an potential attack 14:29:57 [ArtB] AB: is this something we must address in v1? 14:30:09 [ArtB] MC: yes. Need a 1-liner in the DigSig spec 14:30:22 [ArtB] FH: I don't quite understand the issue though 14:30:27 [ArtB] MC: me neither 14:30:37 [ArtB] FH: we already have some security consids 14:30:53 [ArtB] ... I recommend we get some more information from Mark 14:31:24 [ArtB] AB: so we need to get more info from Mark? 14:31:27 [ArtB] MC: yes 14:31:39 [ArtB] FH: I don't understand the real threat scenario 14:31:43 [Zakim] + +45.29.aabb 14:32:10 [ArtB] MC: me neither 14:32:14 [ArtB] JS: same with me 14:32:33 [ArtB] FH: I would close this and ask Mark to provide more information 14:32:43 [ArtB] DR: or could leave it open until Mark responds 14:33:15 [ArtB] AB: we'll leave it open for now and I'll take an action to ping Mark for more information on the threat scenario 14:33:17 [fjh] s/would close this/suggest this be closed unless we have new information 14:33:22 [fjh] zakim, mute me 14:33:22 [Zakim] fjh should now be muted 14:33:33 [ArtB] ACTION: Barstow ask Mark to provide more information about the real threat scenario re Issue #83 14:33:44 [ArtB] Topic: Widget requirement #37 (URI scheme etc) - see e-mail from Thomas: 14:34:04 [ArtB] AB: Thomas submitted some comments against Req #37 and I don't believe we have yet responded 14:34:10 [ArtB] ... < > 14:34:17 [ArtB] AB: perhaps we should take the discussion to public-webapps and drop it from today's agenda. OK? 14:34:46 [ArtB] AB: any comments? 14:34:59 [ArtB] Topic: Open Actions 14:35:09 [ArtB] AB: last week we created about 20 Actions and about 15 are still open. 14:35:24 [ArtB] ... To continue to make good progress on our specs we need to address these actions ASAP 14:35:33 [ArtB] ... Please review the actions and address any assigned to you. 14:35:40 [ArtB] ... Also do indeed feel free to submit inputs to address others' actions 14:35:46 [ArtB] ... Widget Actions are: < > 14:36:44 [ArtB] ... Let me know if you want agenda time for any of these Actions 14:37:08 [ArtB] Topic: June f2f meeting 14:37:15 [ArtB] AB: re location, we now have three proposals: Oslo/Opera, Edinburgh/OMTP and London/Vodafone. That's certainly sufficient to close the call for hosts. 14:37:27 [ArtB] AB: re the dates, June 2-4 are preferable. 14:37:40 [ArtB] AB: it will of course be impossible to satisfy everyone's #1 priority 14:37:59 [ArtB] DR: June 2-4 conflicts with OMTP meeting 14:38:25 [ArtB] AB: we should also be as Green as we can as well as to try to minimize travel costs and simplify logistics for everyone, including those attending from other continents 14:38:40 [fjh] that first week of june is not good for me 14:38:59 [ArtB] AB: are there any other conflicts with June 2-4? 14:39:10 [fjh] zakim, unmute me 14:39:10 [Zakim] fjh should no longer be muted 14:39:28 [ArtB] AB: are there any conflicts with June 9-11? 14:39:30 [fjh] zakim, mute me 14:39:30 [Zakim] fjh should now be muted 14:39:32 [abraun] there are always places in North America. I can think of one place with lots of hotels ;) 14:39:41 [ArtB] DR: not from OMTP's side 14:39:52 [ArtB] MC: that's OK with Opera 14:39:59 [ArtB] AB: anyone else 14:40:10 [ArtB] AB: it looks like June 9-11 then is best 14:40:29 [ArtB] AB: any comments about the location? 14:40:30 [timeless] abraun: there's already SJ later in the year 14:40:36 [timeless] so i think the us is out for this meeting 14:40:54 [ArtB] DR: We are happy to cede with Dan's offer to host in London 14:41:16 [ArtB] ... I think London is probably the most cost effective 14:41:32 [ArtB] JS: housing in London can be very expensive 14:41:44 [ArtB] ... I assume Edinburgh would be cheaper 14:41:55 [ArtB] ... I expect to pay for this trip out of my own pocket 14:42:13 [fjh] 14:42:36 [ArtB] Arve: lodging in London is not cheaper than Oslo 14:43:10 [ArtB] DR: London is an inexpensive hub to get to 14:43:25 [ArtB] ... i think airfare costs will dominate the overall cost of travel 14:43:34 [ArtB] MC: we can live with London 14:43:41 [ArtB] ... but want to host the next meeting 14:44:47 [ArtB] AB: any other comments? 14:44:57 [ArtB] JS: I need to check another calendar 14:45:08 [ArtB] AB: I will make a decision in a week or so 14:45:28 [ArtB] AB: the leading candidate is London June 9-11 14:45:38 [ArtB] JS: I just checked, no conflicts that week 14:45:46 [ArtB] Topic: TPAC meeting in November 14:45:55 [ArtB] AB: Charles asked everyone to submit comments about the W3C's proposed TPAC meeting in November 14:46:01 [ArtB] ... see < > 14:46:06 [ArtB] ... I think the general consensus is: a) it's too early to make a firm commitment; b) we support the idea of an all-WG meeting; c) if there are sufficient topics to discuss then we should meet that week. 14:46:19 [w3c_] w3c_ has joined #wam 14:46:35 [ArtB] ... Does that seem like a fair characterization? Does anyone have any other comments? 14:46:58 [w3c_] w3c_ has joined #wam 14:47:02 [Marcos] ? 14:47:06 [arve] did everyone, or just us get dropped from the call? 14:47:18 [timeless] just you 14:47:18 [arve] our call appears to be up, but we can't hear 14:47:22 [ArtB] AB: Charles and I need to report to the Team by the end of next week 14:47:37 [fjh] zakim, unmute me 14:47:37 [Zakim] fjh should no longer be muted 14:47:42 [fjh] q+ 14:47:47 [ArtB] AB: again that November TPAC meetingn is in Silicon Valley 14:47:52 [Zakim] -Arve/Marcos 14:48:17 [ArtB] JS: if Moz has a meeting I can piggy-back then that would increase my probability of attending 14:48:19 [Zakim] +Arve/Marcos 14:48:34 [ArtB] FH: we are tentatively meeting that week Wend to Friday 14:49:13 [ArtB] AB: I think the most we can report to the Team is "Yes, we tenatively have agreement to meet during TPAC" 14:49:35 [ArtB] Topic: Window Modes 14:49:49 [Marcos] 14:50:21 [ArtB] AB: this is Excellent Marcos! 14:50:28 [fjh] s/we are tentatively meeting that week Wend to Friday/XML Security is tentatively planning to meet at TPAC on Thursday Friday, so to avoid overlap can Widgets meet Mon and Tue 14:50:36 [ArtB] MC: give the credit to Arve :) 14:51:03 [ArtB] AB: so this captures last week's strawman? 14:51:12 [ArtB] MC: yes 14:51:22 [ArtB] Arve: it also includes some interfaces 14:51:39 [ArtB] MC: the APIs will be moved to the A&E spec 14:51:52 [ArtB] ... it will only contain the defn of the modes and the Media Queries 14:51:58 [ArtB] Present+ Benoit 14:52:03 [ArtB] BS: this is a good start 14:52:18 [ArtB] AB: anything else on this topic Marcos? 14:52:46 [ArtB] MC: we will work on this over the next few weeks and get it ready for a FPWD 14:52:59 [ArtB] AB: so a FPWD in the beginning of April? 14:53:03 [ArtB] MC: yes, that would be ideal 14:53:04 [MoZ] MoZ has joined #wam 14:53:36 [ArtB] Topic: Editorial Tasks 14:54:10 [ArtB] DR: I asked OMTP members if they can contribute 14:54:18 [ArtB] ... we have an offer from Bryan and ATT 14:54:27 [ArtB] ... they want to know specifics 14:54:54 [ArtB] AB: that's a good idea 14:55:07 [ArtB] ... I want to first talk to the editors 14:55:25 [ArtB] DR: OK. I will also see if I can get more support 14:55:47 [ArtB] AB: any other comments on this topic? 14:56:07 [ArtB] Topic: Anything Else 14:56:28 [ArtB] DR: I just responded to Art's BONDI Release Candidate e-mail 14:56:42 [ArtB] ... we have extended the comment period to March 23 14:56:51 [ArtB] ... the comments should all be public 14:57:32 [ArtB] JS: I tried to submit feedback and I ran into problems with OMTP's web site 14:57:48 [ArtB] ... it would be really good if the comments could be sent to a mail list 14:57:59 [ArtB] DR: if you send me the comments that would be good 14:58:05 [ArtB] JS: OK; will do but not this week 14:58:54 [ArtB] AB: is the URI of the public comment archive available? 14:59:05 [ArtB] DR: yes Nick sent it to public-webapps 14:59:26 [ArtB] DR: depending on the comments we will determine our next step 14:59:36 [ArtB] ... the next OMTP meeting is the following week 14:59:44 [ArtB] AB: thanks for the update David 14:59:49 [ArtB] AB: anythign else? 14:59:55 [ArtB] AB: Meeting Adjourned 15:00:00 [Zakim] -David 15:00:01 [Zakim] -fjh 15:00:02 [Zakim] - +45.29.aabb 15:00:03 [Zakim] -Jere 15:00:03 [Zakim] -Josh_Soref 15:00:07 [ArtB] RRSAgent, make minutes 15:00:07 [RRSAgent] I have made the request to generate ArtB 15:00:08 [Zakim] -Arve/Marcos 15:00:58 [JereK] JereK has left #wam 15:02:34 [Zakim] -Art_Barstow 15:02:36 [Zakim] IA_WebApps(Widgets)9:00AM has ended 15:02:37 [Zakim] Attendees were Art_Barstow, fjh, Josh_Soref, Jere, +47.23.69.aaaa, David, Arve/Marcos, +45.29.aabb 15:03:10 [ArtB] RRSAgent, bye 15:03:10 [RRSAgent] I see 1 open action item saved in : 15:03:10 [RRSAgent] ACTION: Barstow ask Mark to provide more information about the real threat scenario re Issue #83 [1] 15:03:10 [RRSAgent] recorded in
http://www.w3.org/2009/03/05-wam-irc
CC-MAIN-2014-23
refinedweb
3,471
72.19
Routing Tasks¶ Note Alternate routing concepts like topic and fanout may not be available for all transports, please consult the transport comparison table. - Basics - AMQP Primer - Routing Tasks Basics¶ Automatic routing¶ The simplest way to do routing is to use the CELERY_CREATE_MISSING_QUEUES setting (on by default). With this setting on, a named queue that is not already defined in CELERY_QUEUES will be created automatically. This makes it easy to perform simple routing tasks. Say you have two servers, x, and y that handles regular tasks, and one server z, that only handles feed related tasks. You can use this configuration: CELERY_ROUTES = {'feed.tasks.import_feed': {'queue': 'feeds'}} With this route enabled import feed tasks will be routed to the “feeds” queue, while all other tasks will be routed to the default queue (named “celery” for historical reasons). Now: from kombu import Exchange, Queue CELERY_DEFAULT_QUEUE = 'default' CELERY_QUEUES = ( Queue('default', Exchange('default'), routing_key= ghettoq does CELERY_DEFAULT_QUEUE = 'default' CELERY_QUEUES = ( Queue('default', routing_key='task.#'), Queue('feed_tasks', routing_key='feed.#'), ) CELERY_DEFAULT_EXCHANGE = 'tasks' CELERY_DEFAULT_EXCHANGE_TYPE = 'topic' CELERY_DEFAULT_ROUTING_KEY = 'task.default' CELERY_QUEUES is a list of Queue instances. If you don’t set the exchange or exchange type values for a key, these will be taken from the CELERY_DEFAULT_EXCHANGE and CELERY_DEFAULT_EXCHANGE_TYPE settings. To route a task to the feed_tasks queue, you can add an entry in the CELERY_ROUTES setting: CELERY CELERY_QUEUES = ( Queue('feed_tasks', routing_key='feed.#'), Queue('regular_tasks', routing_key='task.#'), Queue('image_tasks', exchange=Exchange('mediatasks', type='direct'), routing_key='image.compress'), ) If you’re confused about these terms, you should read up on AMQP. See also In addition to the AMQP Primer below, there’s Rabbits and Warrens, an excellent blog post describing queues and exchanges. There’s also AMQP in 10 minutes*: Flexible Routing Model, and Standard Exchange Types. For users of RabbitMQ the RabbitMQ FAQ could be useful as a source of information. metadata – are likely to see these terms used a lot in AMQP related material. Exchanges, queues and routing keys.¶ - CELERY_QUEUES to work (except if the queue’s auto_declare setting is set to False). Here’s an example queue configuration with three queues; One for video, one for images and one default queue for everything else: from kombu import Exchange, Queue CELERY_QUEUES = ( Queue('default', Exchange('default'), routing_key='default'), Queue('videos', Exchange('media'), routing_key='media.video'), Queue('images', Exchange('media'), routing_key='media.image'), ) CELERY_DEFAULT_QUEUE = 'default' CELERY_DEFAULT_EXCHANGE_TYPE = 'direct' CELERYcard, which polls for new messages on the queue (which is alright for maintainence tasks, for services you has not been acknowledged and consumer channel is closed, the message will be delivered to another consumer. Note the delivery tag listed in the structure above; Within a connection channel, every received message has a unique delivery tag, This tag is used to acknowledge the message. Also note that delivery tags are CELERY_QUEUES setting. Here’s an example queue configuration with three queues; One for video, one for images and one default queue for everything else: default_exchange = Exchange('default', type='direct') media_exchange = Exchange('media', type='direct') CELERY_QUEUES = ( Queue('default', default_exchange, routing_key='default'), Queue('videos', media_exchange, routing_key='media.video'), Queue('images', media_exchange, routing_key='media.image') ) CELERY_DEFAULT_QUEUE = 'default' CELERY_DEFAULT_EXCHANGE = 'default' CELERY_DEFAULT_ROUTING_KEY = 'default' Here, the CELERY_DEFAULT_QUEUE will be used to route tasks that doesn’t have an explicit route. The default exchange, exchange type and routing key will be used as the default routing values for tasks, and as the default values for entries in CELERY_QUEUES. Specifying task destination¶ The destination for a task is decided by the following (in order): - The Routers defined in CELERY_ROUTES. - The routing arguments to Task.apply_async(). - Routing related attributes defined on the Taskitself. It is considered best practice to not hard-code these settings, but rather leave that as configuration options by using Routers; This is the most flexible approach, but sensible defaults can still be set as task attributes. Routers¶ A router is a class that decides the routing options for a task. All you need to define a new router is to create a class with a route_for_task method: class MyRouter(object): def route_for_task(self, task, args=None, kwargs=None): if task == 'myapp.tasks.compress_video': return {'exchange': 'video', 'exchange_type': 'topic', 'routing_key': 'video.compress'} return None If you return the queue key, it will expand with the defined settings of that queue in CELERY_QUEUES: {'queue': 'video', 'routing_key': 'video.compress'} becomes –> {'queue': 'video', 'exchange': 'video', 'exchange_type': 'topic', 'routing_key': 'video.compress'} You install router classes by adding them to the CELERY_ROUTES setting: CELERY_ROUTES = (MyRouter(), ) Router classes can also be added by name: CELERY_ROUTES = ('myapp.routers.MyRouter', ) For simple task name -> route mappings like the router example above, you can simply drop a dict into CELERY_ROUTES to get the same behavior: CELERY_ROUTES = ({'myapp.tasks.compress_video': { 'queue': 'video', 'routing_key': 'video.compress' }}, ) The routers will then be traversed in order, it will stop at the first router returning a true value, and use that as the final route for the task. Broadcast¶ Celery can also support broadcast routing. Here is an example exchange broadcast_tasks that delivers copies of tasks to all workers connected to it: from kombu.common import Broadcast CELERY_QUEUES = (Broadcast('broadcast_tasks'), ) CELERY_ROUTES = {'tasks.reload_cache': {'queue': 'broadcast_tasks'}} Now the tasks.reload_tasks task will be sent to every worker consuming from this queue. Broadcast & Results Note that Celery result does not define what happens if two tasks have the same task_id. If the same task is distributed to more than one worker, then the state history may not be preserved. It is a good idea to set the task.ignore_result attribute in this case.
http://docs.celeryproject.org/en/latest/userguide/routing.html
CC-MAIN-2015-32
refinedweb
908
54.83
17 August 2012 09:54 [Source: ICIS news] SINGAPORE (ICIS)--Taiwan-based CPC Corp has issued a tender to sell two parcels of 3,000 tonnes orthoxylene (OX) each for first-half September and second-half September loading, a company source said on Friday. The cargoes are to be FOB (free on board) ?xml:namespace> The tender, issued on Thursday, is seeking bids by 21 August, which will remain valid until 17:00 hours CPC was scheduled to shut its No.3 paraxylene (PX)/ OX for maintenance in The No.2 unit has a nameplate capacity of 260,000 tonnes/year of PX and 40,000 tonnes/year of
http://www.icis.com/Articles/2012/08/17/9587868/taiwans-cpc-issues-tender-to-sell-6000-tonnes-of-ox.html
CC-MAIN-2015-06
refinedweb
109
69.11
Blueprints and Views¶ A view function is the code you write to respond to requests to your application. Flask uses patterns to match the incoming request URL to the view that should handle it. The view returns data that Flask turns into an outgoing response. Flask can also go the other direction and generate a URL to a view based on its name and arguments. Create a Blueprint¶ A Blueprint is a way to organize a group of related views and other code. Rather than registering views and other code directly with an application, they are registered with a blueprint. Then the blueprint is registered with the application when it is available in the factory function. Flaskr will have two blueprints, one for authentication functions and one for the blog posts functions. The code for each blueprint will go in a separate module. Since the blog needs to know about authentication, you’ll write the authentication one first. import functools from flask import ( Blueprint, flash, g, redirect, render_template, request, session, url_for ) from werkzeug.security import check_password_hash, generate_password_hash from flaskr.db import get_db bp = Blueprint('auth', __name__, url_prefix='/auth') This creates a Blueprint named 'auth'. Like the application object, the blueprint needs to know where it’s defined, so __name__ is passed as the second argument. The url_prefix will be prepended to all the URLs associated with the blueprint. Import and register the blueprint from the factory using app.register_blueprint(). Place the new code at the end of the factory function before returning the app. def create_app(): app = ... # existing code omitted from . import auth app.register_blueprint(auth.bp) return app The authentication blueprint will have views to register new users and to log in and log out. The First View: Register¶ When the user visits the /auth/register URL, the register view will return HTML with a form for them to fill out. When they submit the form, it will validate their input and either show the form again with an error message or create the new user and go to the login page. For now you will just write the view code. On the next page, you’ll write templates to generate the HTML form. @bp.route('/register', methods=('GET', 'POST')) def register(): if request.method == 'POST': username = request.form['username'] password = request.form['password'] db = get_db() error = None if not username: error = 'Username is required.' elif not password: error = 'Password is required.' elif db.execute( 'SELECT id FROM user WHERE username = ?', (username,) ).fetchone() is not None: error = 'User {} is already registered.'.format(username) if error is None: db.execute( 'INSERT INTO user (username, password) VALUES (?, ?)', (username, generate_password_hash(password)) ) db.commit() return redirect(url_for('auth.login')) flash(error) return render_template('auth/register.html') Here’s what the register view function is doing: @bp.routeassociates the URL /registerwith the registerview function. When Flask receives a request to /auth/register, it will call the registerview and use the return value as the response. If the user submitted the form, request.methodwill be 'POST'. In this case, start validating the input. request.formis a special type of dictmapping submitted form keys and values. The user will input their usernameand Validate that usernameand passwordare not empty. Validate that usernameis not already registered by querying the database and checking if a result is returned. db.executetakes a SQL query with ?placeholders for any user input, and a tuple of values to replace the placeholders with. The database library will take care of escaping the values so you are not vulnerable to a SQL injection attack. fetchone()returns one row from the query. If the query returned no results, it returns None. Later, fetchall()is used, which returns a list of all results. If validation succeeds, insert the new user data into the database. For security, passwords should never be stored in the database directly. Instead, generate_password_hash()is used to securely hash the password, and that hash is stored. Since this query modifies data, db.commit()needs to be called afterwards to save the changes. After storing the user, they are redirected to the login page. url_for()generates the URL for the login view based on its name. This is preferable to writing the URL directly as it allows you to change the URL later without changing all code that links to it. redirect()generates a redirect response to the generated URL. If validation fails, the error is shown to the user. flash()stores messages that can be retrieved when rendering the template. When the user initially navigates to auth/register, or there was an validation error, an HTML page with the registration form should be shown. render_template()will render a template containing the HTML, which you’ll write in the next step of the tutorial. This view follows the same pattern as the register view above. @bp.route('/login', methods=('GET', 'POST')) def login(): if request.method == 'POST': username = request.form['username'] password = request.form['password'] db = get_db() error = None user = db.execute( 'SELECT * FROM user WHERE username = ?', (username,) ).fetchone() if user is None: error = 'Incorrect username.' elif not check_password_hash(user['password'], password): error = 'Incorrect password.' if error is None: session.clear() session['user_id'] = user['id'] return redirect(url_for('index')) flash(error) return render_template('auth/login.html') There are a few differences from the register view: The user is queried first and stored in a variable for later use. check_password_hash()hashes the submitted password in the same way as the stored hash and securely compares them. If they match, the password is valid. sessionis a dictthat stores data across requests. When validation succeeds, the user’s idis stored in a new session. The data is stored in a cookie that is sent to the browser, and the browser then sends it back with subsequent requests. Flask securely signs the data so that it can’t be tampered with. Now that the user’s id is stored in the session, it will be available on subsequent requests. At the beginning of each request, if a user is logged in their information should be loaded and made available to other views. @bp.before_app_request def load_logged_in_user(): user_id = session.get('user_id') if user_id is None: g.user = None else: g.user = get_db().execute( 'SELECT * FROM user WHERE id = ?', (user_id,) ).fetchone() bp.before_app_request() registers a function that runs before the view function, no matter what URL is requested. load_logged_in_user checks if a user id is stored in the session and gets that user’s data from the database, storing it on g.user, which lasts for the length of the request. If there is no user id, or if the id doesn’t exist, g.user will be None. To log out, you need to remove the user id from the session. Then load_logged_in_user won’t load a user on subsequent requests. Require Authentication in Other Views¶ Creating, editing, and deleting blog posts will require a user to be logged in. A decorator can be used to check this for each view it’s applied to. def login_required(view): @functools.wraps(view) def wrapped_view(**kwargs): if g.user is None: return redirect(url_for('auth.login')) return view(**kwargs) return wrapped_view This decorator returns a new view function that wraps the original view it’s applied to. The new function checks if a user is loaded and redirects to the login page otherwise. If a user is loaded the original view is called and continues normally. You’ll use this decorator when writing the blog views. Endpoints and URLs¶ The url_for() function generates the URL to a view based on a name and arguments. The name associated with a view is also called the endpoint, and by default it’s the same as the name of the view function. For example, the hello() view that was added to the app factory earlier in the tutorial has the name 'hello' and can be linked to with url_for('hello'). If it took an argument, which you’ll see later, it would be linked to using url_for('hello', who='World'). When using a blueprint, the name of the blueprint is prepended to the name of the function, so the endpoint for the login function you wrote above is 'auth.login' because you added it to the 'auth' blueprint.
https://flask.palletsprojects.com/en/1.0.x/tutorial/views/
CC-MAIN-2020-16
refinedweb
1,374
59.8
On Wed, 2005-06-29 at 21:58 +1200, Simon Kitching wrote: <snip> > > > The breakage is more when people refer to it in config files. Or > > > possible call > > > LogFactory.setAttribute(..., > > > "org.apache.commons.logging.impl.Log4JLogger"); > > > > that's the point: i was wondering whether we could preserve > > compatibility by having some mechanism that guesses which implementation > > to use. (we'll need something like this to support dynamic binding.) > > The obvious solution is just to provide > /** @deprecated */ > Log4JLogger extends Log4J12Logger {} i've come round to thinking that this is probably the most reasonable solution. > > > We also need to consider that Log4J 1.3 is likely to be released within > > > the lifetime of this release of commons-logging. If we provide a class > > > called "Log4JLogger" that is not compatible with it, I'm concerned we > > > will cause more pain than by simply removing that class. Remember that > > > every person we would have "broken" by not including Log4JLogger would > > > get broken anyway if they try to use Log4J 1.3, as Log4JLogger is not > > > compatible with Log4J 1.3: we have a choice of breakage only: > > > * message "Log4JLogger does not exist" (if we remove the class) > > > * message "Log4JLogger cannot be initialised: log4j 1.2 not found" > > > (if we include it) > > > > by removing support for log4jlogger, we break support for existing > > deployments: it's no longer a drop in replacement. anyone upgrading to > > the new log4j version is going to expect a lot of pain including having > > to upgrade. > > I don't believe "a lot of pain" is a fair assessment. They upgrade to > log4j 1.3, and run their app. sorry: didn't explain myself very well. i meant that anyone wanted to upgrade an enterprise class application running in a container should probably expect quite a bit of pain when they upgrade to an incompatible version of a key library (such as log4j). my concern is for users who are just upgrading JCL (whilst maintaining their existing version of log. > If they just use standard discovery then the new log4j 1.3 is > discovered, and all works fine. that's certainly the way i'd like things to work :) > If they use a commons-logging.properties file, a service file, or a > system property to specify Log4JLogger class then they get the message > Class not found: Log4JLogger does not exist > so they update the relevant config file and all works fine. > > If they have code that does > LogFactory.setAttribute("o.a.c.l.i.Log4JLogger") > then they get the message > Class not found: Log4JLogger does not exist > and have to either update their code, or create a .java file containing > public class Log4JLogger extends Log4J12Logger {} > and drop the relevant .class file into their app. removing log4jlogger seems likely to cause a lot of pain for those just upgrading JCL without really getting much gain. this release is a big improvement on the old and i'd really prefer not to give people any reason not to upgrade. > The last is the only case that is at all awkward, and the fix still > takes less than 10 minutes and requires no immediate code changes > (though the original code should of course be updated sometime). it's not that it's particularly awkward: it's the fact that the release isn't (in practical terms) compatible with the last one. > > > Given that we're going to break their apps either way (due to log4j's > > > binary incompatibility) it seems sensible to do it right from the code > > > point of view. > > > > i'm not sure i parse this correctly. could you expand? > > When a user uses a config file or call to setAttribute to specify > Log4JLogger as the adapter class: > * If we drop Log4JLogger from the jar we cause the problems > listed above. > * If we include a Log4JLogger that only works with Log4J 1.2 > then when they put log4j 1.3 in their classpath their app > will also stop working. Ok, this is only triggered once they > upgrade to log4j1.3 rather than when they upgrade to JCL 1.1, > but it's going to happen eventually. let the future take care of itself :) maybe log4j 1.3 will be compatible with 1.2, maybe incompatible. i'm definitely in favour of adopting the change to log412logger as soon as possible whether 1.3 gets released or not. we deprecate log4jlogger and make it very clear about this future strategy in the release notes. but don't force the huge number of JCL users to change configuration files today without warning because something might happen tomorrow. > So we break such user code either way. What the user needs to do to fix > their app in either case is pretty similar. But the latter solution has > us shipping a junk class in JCL. the problem of junk classes is one of releases why shipping the optional jar was proposed. many (most?) users of JCL don't care if there are a few extra classes in there for backwards compatibility and some are very glad of them. > > > Alternatively we could provide the Log4JLogger that is currently in SVN > > > which is compatible with both log4j versions. But it is only compatible > > > when we compile it against log4j1.3-alpha6 or later, and I'm really not > > > keen on releasing a Log4JLogger class compiled like that. I'm wavering > > > on the idea of releasing a Log4J13Logger compiled like that; depends how > > > confident the log4j team are that they will stick with their API change > > > plan. Currently, however, there seems to be confusion over when/if they > > > are going to change the Category/Logger class hierarchy so I think > > > there's still the possibility of further API changes from the log4j team > > > in which case we should NOT include log4j1.3 support (ie Log4J13Logger) > > > in the next release. > > > > i think that we need to work on log4j support and think about the right > > way to make this change. however, until there is a full release it would > > be wrong to include support within the core jar. > > > > i wouldn't object to support within an optional jar or ask those > > requiring support to use the source release. we could then release a new > > version once a full log4j 1.3 release is available. > > Agreed. I personally wouldn't even offer an optional jar; we can just > provide the source and let people compile it themselves as the fix is > just one class. there are a number of classes in a similar situation. moving them into the optional jar seems to me a bit like deprecating them. > And obviously we can get ready when log4j 1.3 RCs start, so JCL can be > released on the same day. +1 might be good to coordinate release candidates - robert --------------------------------------------------------------------- To unsubscribe, e-mail: commons-dev-unsubscribe@jakarta.apache.org For additional commands, e-mail: commons-dev-help@jakarta.apache.org
http://mail-archives.apache.org/mod_mbox/commons-dev/200506.mbox/%3C1120080329.5021.53.camel@knossos.elmet%3E
CC-MAIN-2017-43
refinedweb
1,147
64.61
May 28, 2019 11:14 AM|AndreySignet|LINK Greetings! I use IIS to run my localhost. I have a form on the website where a user enters some data, then PHP gets these data and should run Python script. To run it I try to use following commands: <?php shell_exec("python script_folder/test.py"); ?> and <?php exec("python script_folder/test.py"); ?> but nothing happens. Python script is trivial: import time print("It works!") time.sleep(3) I guess there is an issue with permissions. Possibly a user that connects website doesn't have privilege to use cmd. Could you please help how to proceed? May 29, 2019 02:06 AM|Jalpa Panchal|LINK Hi AndreySignet, In my opinion use IIS 10 and which type of user is? Regards, Jalpa. May 29, 2019 09:29 AM|Jalpa Panchal|LINK No need to delete iis 6. just use iis 10 and add site in iis10. May 29, 2019 09:36 AM|Jalpa Panchal|LINK Is your site is working well before hosting in IIS? May 29, 2019 09:54 AM|Jalpa Panchal|LINK check that you give all permission to iis_iusr and iusr to run the script. You could also refer below post for more detail: 9 replies Last post May 29, 2019 09:54 AM by Jalpa Panchal
https://forums.iis.net/t/1242149.aspx?IIS6+win+10+cmd
CC-MAIN-2020-40
refinedweb
217
77.03
I decided to test-compile a class that I'm making and I'm getting strange errors which i've never seen before in some very simple code. I'm posting the three files here with errors numbered and their occurences commented. My compiler is Dev-C++ 4.9.9.1. If anyone has any ideas please let me know. ERRORS: 1 expected unqualified-id before "using" 2 expected `,' or `;' before "using" 3 `cout' undeclared (first use this function) 4 `endl' undeclared (first use this function) 5 ISO C++ forbids declaration of `vector' with no type 6 expected `;' before '<' token Code:MAIN: #include <cstdlib> #include <iostream> #include "HashTable.h" using namespace std; // <- Error 1 and 2 int main() { HashTable H; cout << "End of Program." << endl; // <- Error 3 and 4 return EXIT_SUCCESS; } HASHTABLE.H: #ifndef _HashTable_H_ #define _HashTable_H_ class HashTable { public: HashTable(int size = 101); private: vector<char*> array; // <- Error 5 and 6 } #endif HASHTABLE.CPP: #include <iostream> #include <vector> #include "HashTable.h" using namespace std; HashTable::HashTable(int size) { array.resize(size); }
http://cboard.cprogramming.com/cplusplus-programming/62189-errors-simple-hash-table-class.html
CC-MAIN-2014-41
refinedweb
172
64.61
freebl PRNG hashes netstat and /dev/urandom data rather than just using /dev/urandom RESOLVED FIXED in 3.11.3 Status P2 normal People (Reporter: aaronl, Assigned: julien.pierre) Tracking ({perf}) Firefox Tracking Flags (Not tracked) Details Attachments (2 attachments, 3 obsolete attachments) In security/nss/lib/freebl/unix_rand.c, bytes are read from /dev/urandom and from the output of netstat to seed the random number generator. I do not see why this code invokes netstat in the first place. If the OS has a decent random device, it should be a much better source for entropy than processes like netstat. In fact, on OS' where /dev/urandom is well-implemented, I don't see why Mozilla should even use its own PRNG. On Linux, /dev/urandom is a similar PRNG to the one used in Mozilla, but its entropy comes from things like interrupts. The Mozilla PRNG uses /dev/urandom and netstat as entropy sources. Why not just read from /dev/urandom when random bytes are required, when running on an OS with a cryptographically secure random device? Doing so would not only be faster, but should be even more secure because nothing is really gained by hashing output from /dev/urandom a second time, and netstat output has questionable randomness. Assigned the bug to Nelson. Assignee: wtc → nelsonb I see no bug here. The code is presently working as intended. The code works, and produces correct results, whether the OS has a well-implemented /dev/urandom or not. There are ***x OSes that have no /dev/urandom. It has been reported that there are ***x OSes that have a bad implementation of /dev/urandom. How does mozilla know whether an OS's implementation of /dev/urandom is a good one? Should mozilla assume that all /dev/urandom implementations are good? Priority: -- → P4 Target Milestone: --- → Future Can this bug be marked invalid? In comment 2 I asked a non-rhetorical question. I expected the answer to be something like "Yes, for <my favorite platform> mozilla/NSS should assume that /dev/urandom is present and well implemented.". I'm not opposed to changing the code so that, on platforms that we believe normally have good secure /dev/urandom implementations, NSS will attempt to use /dev/urandom, and if that succeeds, it will skip the netstat step. Submittor, is that what you want? Linux is the only kernel I am familiar enough with to advocate using its RNG device. What port maintainers would be good CC candidates for this bug? QA Contact: bishakhabanerjee → jason.m.reid QA Contact: jason.m.reid → libraries This has become a hot issue because the libldap library in Solaris 10 uses NSS, and can be transparently invoked in unknowing applications. If an application has for example enforced a no-forking policy through pthread_atfork handler, but calls some LDAP library calls that invokes NSS, we try to fork netstat, even though /dev/urandom is available. We just don't need to do that. We still have a requirement to suport Solaris 8, which doesn't have /dev/urandom . We still need an entropy source on Solaris 8. For that case, I have written new code that uses libkstat. Patch forthcoming. Assignee: nelson → julien.pierre.bugs OS: Linux → Solaris Priority: P4 → P1 Hardware: PC → Sun Target Milestone: Future → 3.11.3 Version: unspecified → 3.11.2 In the libkstat case, I am feeding all kernel statistics to the PRNG, 4 KB at a time. This can be about 800 - 1200 KB of data on the machines I have tested. I don't have time to sort through which parts really change, but many do. I have benchmarked that this code path takes between 65 and 100% of the CPU and wall time of forking netstat, in other words, it is no worse, and sometimes better, depending on the machiine. I came to that conclusion by timing a script that executes certutil -L on a blank DB 100 times. I tried both on Sparc and x86 . Attachment #235522 - Flags: superreview?(wtchang) Attachment #235522 - Flags: review?(nelson) Comment on attachment 235522 [details] [diff] [review] On Solaris, use only /dev/urandom if it is available. If not, use libkstat The last #ifdef SOLARIS block in your patch should be moved forward, before the #ifdef DO_PS block. It is best if it can be combined with the #ifdef BSDI block. Comment on attachment 235522 [details] [diff] [review] On Solaris, use only /dev/urandom if it is available. If not, use libkstat These are just some minor problems. But since there are many, I want to ask you to submit a new patch. In freebl you should use PORT_Assert or PR_ASSERT (they are synomyms). The libc assert is controlled by the NDEBUG macro, which is different from the macros that control PR_ASSERT. Declare all the variables and functions static. >+#include "/usr/include/kstat.h" This should be #include <kstat.h>. entropy_collected should not be a global variable. It should be a local variable in RNG_kstat and passed as argument to CollectEntropy. Is this based on some sample code? This doesn't look like your coding style. Attachment #235522 - Flags: superreview?(wtchang) → superreview- Wan-Teh, This was written from scratch, it wasn't sample code. I switched from assert to PORT_Assert, as well as from malloc/free to PORT_Alloc/PORT_Free . I made both entropy_collected and entropy_buf static locals in RNG_kstat. I fixed the include path. Regarding comment 8 and the ifdef, I had put it at the end rather than near the BSDI ifdef, because if I did the later, I would get a compiler warning about unreachable statement in the call to safe_popen . I agree it makes more logical sense to group the BSDI and SOLARIS ifdefs, but getting rid of the compiler warning requires an extra "ifndef SOLARIS" on the netstat forking code, which I have included in this patch. Attachment #235522 - Attachment is obsolete: true Attachment #235586 - Flags: superreview?(nelson) Attachment #235586 - Flags: review?(wtchang) Attachment #235522 - Flags: review?(nelson) Comment on attachment 235586 [details] [diff] [review] Update Julien, this patch has a bug, and I want to suggest some changes. So please submit a new patch. 1. In general I don't like leaving debug printf statements in the code. 2. I suggest that max_entropy_len be renamed entropy_buf_size to more clearly indicate its relation to entropy_buf. 3. In CollectEntropy, we have: >+ PRUint32 buffered; ... >+ buffered += tocopy; 'buffered' is read uninitialized. Change += to =. 4. Also in CollectEntropy, we have: >+ if (inlen) { >+ buffered += CollectEntropy(inbuf, inlen, entropy_buf, >+ entropy_collected); >+ } >+ return buffered; You really like recursion. I find it harder to understand. Tail recursions can be easily converted into a loop. 5. In RNG_kstat, we have >+ kstat_t* ksp = NULL; ... >+ total += CollectEntropy((char*)ksp, sizeof(struct kstat), >+ entropy_buf, &entropy_collected); Is struct kstat the same as kstat_t? Since I am not familiar with the contents of kstat_t, I can't really review the two CollectEntropy calls other than that they look correct. 6. It would be nice to add a one-line comment to summarize what CollectEntropy and RNG_kstat do. 7. In RNG_SystemInfoForRNG, we have: >+ if (!bytes) { >+ /* On Solaris 8, /dev/urandom isn't available, so we use libkstat. */ >+ RNG_kstat(); >+ } >+ return; Unless you are worried about the cost of calling RNG_kstat(), I suggest you always call it (i.e., remove if (!bytes)), so that the code is exercised on your development machine. >+#ifndef SOLARIS >+ /* As mentioned above, we never want to fork netstat on Solaris */ > fp = safe_popen(netstat_ni_cmd); > if (fp != NULL) { > while ((bytes = fread(buf, 1, sizeof(buf), fp)) > 0) > RNG_RandomUpdate(buf, bytes); > safe_pclose(fp); > } >+#endif Seems like you can emulate the code for the "ps" command and prepare for other platforms that don't want to run the "netstat" command: #ifndef SOLARIS #define DO_NETSTAT 1 #endif ... #ifdef DO_NETSTAT fp = safe_popen(netstat_ni_cmd); ... #endif Attachment #235586 - Flags: review?(wtchang) → review+ Comment on attachment 235586 [details] [diff] [review] Update I have some minor quibbles with this patch. 1) rather than seeing all the new code be "ifdef solaris" and "ifndef solaris" which emphasizes the platform-specific nature of the code, I'd rather see it use "feature test macros", e.g. "if defined(DO_NETSTAT)" and "if !defined(DO_NETSTAT)", where the macro (#define) name is the name of a feature, not a platform, and then have other #ifdefs that conditionally define that feature test macro, or not. That way, the Netstat code becomes a feature, to be considered on a platform by platform basis, not a platform specific hack, and it is more obvious that other platforms may wish to incorporate it, too. I think all the conditionally compiled features for obtaining entropy should be coded that way. 2) I'd like to see the code structured in such a way that there is some run time check that ensures that in no build, at no time, is it possible for this function to "fall out" the bottom without having succesfully gotten any entropy from any of its sources. Perhaps we should have a counter of bytes of entropic data recieved/digested, and at the bottom of the function abort() if that number is zero. The code should be structured not to bail out early, but to conditionally do subsequent parts only if previous ones failed, so that it always gets to the bottom of the function and always tests to see if any was succesfully gotten. If the code is structured that way, then the next person to come along with another platform specific hack is likely to follow the established pattern in the code. We REALLY don't want this function ever returning without succeeding! Attachment #235586 - Flags: superreview?(nelson) → review- Oh, I meant to add, that doing kstat should be a separate feature with a separate feature test macro, from doing netstat. The code might be: #ifdef solaris #define DO_KSTAT 1 #under DO_NETSTAT #endif and the kstat code would be conditioned by #ifdef DO_KSTAT and the netstat code by #ifdef DO_NETSTAT Wan-Teh, In response to comment 11 : 1-4 : I will resolve these in an upcoming patch. 5 : yes, kstat_t is the same as struct kstat . But I will change the code for consistency to use kstat_t in both places. 6 : will do 7 : the cost of RNG_kstat is about the same or lower than safe_popen, but it is still non-zero . I don't see a good reason to execute it if we have /dev/urandom . I tested the code on my machine by adding an environment variable to always trigger it. I also have a Solaris 8 box which doesn't have /dev/urandom that I tested on, and I verified it took this code path. We also have one nightly QA box, mintchip, which is running Solaris 8, so this code will be exercised. 8 and Nelson's comment 12 1) and 13 : I will add a DO_NETSTAT feature macro, the safe_popen code is shared between several platforms. However, for kstat, this code is completely specific to Solaris, so it belongs within an #ifdef SOLARIS . And I wouldn't call it a hack. unix_rand.c is full of platform-specific code surrounded by platform #ifdefs . Even the source file itself is conditionally compiled - we have separate mac_rand.c, os2_rand.c and win_rand.c. Nelson, In response to comment 12 : 1 : see the end of my response to Wan-Teh above 2 : I agree that the code should not return without having collected some entropy. However, this is a general problem that is common to all platforms well beyond the scope of this particular fix. RNG_SystemInfoForRNG returns void, so there is no possibility for softoken to check if anything went wrong with the RNG during C_Initialize . That is a serious problem, but it needs to be fixed for all platforms in a separate bug. For now, my upcoming patch will change RNG_kstat to not be a void function, and return the number of entropy bytes actually collected, which will make it easier to convert its caller to a non-void function in the future. - remove kprintf statements - rename max_entropy_len to max_entropy_buf_len - initialize buffered to zero - replace recursion with a while loop - replace struct kstat with kstat_t - add one-liners for CollectEntropy and RNG_kstat - add and use DO_NETSTAT feature macro - make RNG_kstat return number of entropy bytes and assert that it is non-zero (the caller returns void, so this is the most I can do in this bug). Wan-Teh, the doc on libkstat is available at . Attachment #235586 - Attachment is obsolete: true Attachment #236154 - Flags: superreview?(wtchang) Attachment #236154 - Flags: review?(nelson) Comment on attachment 236154 [details] [diff] [review] Update with feedback from Nelson and Wan-Teh This patch is all OK except for the behavior of function CollectEntropy. It should never call RNG_RandomUpdate with zero bytes of input. And I think it should call RNG_RandomUpdate 0, 1 or 2 times (at most) per call. The variable names "buffered" and "entropy_collected" seem to be interchanged. That is, "entropy_collected" counts the amount in the buffer, and "buffered" counts the amount input to the PRNG. Seems backwards to me, but that's not a huge deal. I just have to remember that buffered means collected by PRNG and entropy_collected means buffered. :-/ Consider if CollectEntropy gets called for the first time with a inlen==12k. The code will call RNG_RandomUpdate with zero bytes immediately, then it will loop, copying 4KB at a time into a buffer, and calling RNG_RandomUpdate once each time in the loop. In that case, it seems to me that (a) it shouldn't call RNG_RandomUpdate with zero bytes, and (b) it can call RNG_RandomUpdate just once to swallow the whole 12KB input in a single shot. No need to break it up into smaller bites first. CollectEntropy can do the job in just two calls to RNG_RandomUpdate, with no zero length calls. Let me suggest this algorithm psuedo code: if (input len >= max buf len) if (buffer is not empty) RNG_RandomUpdate what's in the buffer mark buffer emtpy RNG_RandomUpdate the whole input return while (there remains unconsumed input) append as much of the unconsumed input into the buffer as will fit. if (buffer is full) RNG_RandomUpdate what's in the buffer mark buffer emtpy return That while loop will execute 0, 1 or 2 times and will call RNG_RandomUpdate 0 or 1 times. Attachment #236154 - Flags: review?(nelson) → review- Nelson, I have done a fair amount of benchmarking with the code. I found that buffering 4KB at a time gave indistinguishable performance from having a single big buffer (4 megabytes) allocated upfront, buffering all the kstat data into it, and feeding it to the RNG in one call. I benchmarked that by simply changing the size of my buffer. Given this finding, I'm inclined to keep the code as simple as possible, even if there are possible algorithmic improvements that don't result in an actual difference. That said, the fact that my code called RNG_RandomUpdate with a zero-length was a bug, which I will correct. This patch contains several changes : 1) More comments 2) CollectEntropy and RNG_kstat are both changed to return a SECStatus . This is to ease future work in bug 350798 to check for success 3) CollectEntropy has an extra PRUint32 *total_fed argument which gets incremented 4) RNG_kstat has an extra PRUInt32* fed argument to return the number of bytes of entropy that it generated and fed to the RNG 5) CollectEntropy now only feeds data to the RNG if the buffer is full, and will never pass a zero length to RNG_RandomUpdate 6) CollectEntropy local variable buffered renamed to processed 7) CollectEntropy argument entropy_collected renamed to entropy_buffered. Same for RNG_kstat local Attachment #236154 - Attachment is obsolete: true Attachment #236342 - Flags: superreview?(wtchang) Attachment #236342 - Flags: review?(nelson) Attachment #236154 - Flags: superreview?(wtchang) Comment on attachment 236342 [details] [diff] [review] update r=nelson Please file a separate bug about the fact that RNG_SystemInfoForRNG can fail silently in optimized builds. Attachment #236342 - Flags: review?(nelson) → review+ Nelson, Thanks for the review. I think your request from comment 19 is covered under bug 350798, which I filed yesterday . Comment on attachment 236342 [details] [diff] [review] update r=wtc. Please remove the 4 extraneous semicolons after closing curly braces. Just search for "};" in the file and remove the ";". The following are optional suggested changes and comments. You can ignore them, especially 7, 8, and 9. 1. max_entropy_buf_len should be renamed entropy_buf_len. The length of entropy_buf is fixed, so "max" doesn't make sense. 2. In this comment: >+/* Buffer entropy data, and feed it to the RNG, 4 KB at a time. >+ * Returns error if RNG refuses data. Change "4 KB" to "max_entropy_buf_len bytes". Change "RNG refuses data" to "RNG_RandomUpdate fails". 3. Since entropy_collected has been renamed entropy_buffered, perhaps CollectEntropy should also be renamed BufferEntropy? 4. In CollectEntropy, the local variable 'processed' is not necessary. You can update *total_fed directly, i.e., replace processed += tocopy; by *total_fed += tocopy; at the end of the while loop. 5. In RNG_kstat, you have: >+ if (-1 == kstat_read(kc, ksp, NULL)) { and >+ if (kstat_close(kc)) { It would look nicer to use the same method to test for the failure of kstat_read and kstat_close. 6. The local variable 'total_fed' in RNG_kstat is not necessary. RNG_kstat can actually pass 'fed' through to the two CollectEntropy calls (instead of &total_fed). 7 >+ if (!bytes) { >+ /* On Solaris 8, /dev/urandom isn't available, so we use libkstat. */ >+ PRUint32 kstat_bytes = 0; >+ if (SECSuccess != RNG_kstat(&kstat_bytes)) { >+ PORT_Assert(0); >+ }; >+ bytes += kstat_bytes; >+ PORT_Assert(bytes); >+ } Since 'bytes' is 0, you can just pass &bytes instead of &kstat_bytes to RNG_kstat. Since the type of 'bytes' is size_t, this change would require propagating that type to the 'fed' parameter of RNG_kstat and possibly the 'total_fed' parameter of CollectEntropy. 8. RNG_kstat can call RNG_RandomUpdate directly. It doesn't need to buffer the entropy in CollectEntropy. Another way to say this is that if the CollectEntropy function is a good idea, then we should use it throughout the RNG_SystemInfoForRNG function. For example, the while loop that steps through all the environment variables in the 'environ' array is very similar to your RNG_kstat function. It should use the same method (either buffered or non-buffered) as RNG_kstat to feed its data to RNG_RandomUpdate. 9. RNG_SystemInfoForRNG already has a buffer 'buf'. Ideally you should be able to use that buffer instead of 'entropy_buf' in RNG_kstat. Attachment #236342 - Flags: superreview?(wtchang) → superreview+ Wan-Teh, Thanks for the review. 1) This is a leftover from my Pascal days that I will probably never get rid of. Speaking of which, this code would have been so much cleaner in that language which permits declaration of not only local variables, but also local functions, thus avoiding the need for any pointers to locals in this case. I fixed this. 3) Yes, I had actually considered that, but decided that maybe 1 or 2 symbols should be unchanged from the previous patch ;) But you are correct, so I made this change. 4) This change simplifies the code, so I made it too. 5) From the man pages for kstat_read and kstat_close : RETURN VALUES Upon successful completion, kstat_read() and kstat_write() return the current kstat chain ID (KCID). Otherwise, they return -1 and set errno to indicate the error. Upon successful completion, kstat_close() returns 0. Other- wise, -1 is returned and errno is set to indicate the error. For kstat_close, the only correct value is zero, so checking for if (kstat_close()) is best IMO. Even if the failure case is only supposed to be -1, if there is a bug, it could be something else, and my code detects that. For kstat_read, there is no unique known good value, so I have to check for -1 . So, I prefer to keep the 2 different checks here. 6) Only if *fed is initialized to 0, which it isn't guaranteed to be . An extra *fed = 0 statement is needed to replace total_fed = 0 . However, looking at this, I found that *fed wasn't getting reset at all if kstat_open failed upfront and the function returned. So I have changed the code to initialize *fed first . I also added a NULL-pointer check for the fed argument. 7) This change is not necessary. 8) Buffering the entropy data makes sense only if there may be lots of small chunks, as there happens to be always the case for the kstat structures themselves, and some of the data they point to. The code would need to be reviewed for each platform to see where it makes sense to buffer. The buffering could be moved into RNG_RandomUpdate itself, if there was an RNG_RandomFinish or RNG_RandomFlush to go with it. 9) That is not necessary and ties the code a lot more to its caller. I checked this patch in to the tip . Checking in config.mk; /cvsroot/mozilla/security/nss/lib/freebl/config.mk,v <-- config.mk new revision: 1.16; previous revision: 1.15 done Checking in unix_rand.c; /cvsroot/mozilla/security/nss/lib/freebl/unix_rand.c,v <-- unix_rand.c new revision: 1.19; previous revision: 1.18 done And NSS_3_11_BRANCH : Checking in config.mk; /cvsroot/mozilla/security/nss/lib/freebl/config.mk,v <-- config.mk new revision: 1.15.2.1; previous revision: 1.15 done Checking in unix_rand.c; /cvsroot/mozilla/security/nss/lib/freebl/unix_rand.c,v <-- unix_rand.c new revision: 1.17.10.2; previous revision: 1.17.10.1 done Status: NEW → RESOLVED Last Resolved: 13 years ago Resolution: --- → FIXED P2 is more appropriate, since only a subset of particularly messed-up applications suffer from the use of fork. Priority: P1 → P2
https://bugzilla.mozilla.org/show_bug.cgi?id=182758
CC-MAIN-2019-18
refinedweb
3,597
64.51
On Thu, Apr 20, 2006 at 10:26:09AM +0200, Arjan van de Ven wrote:> > > You a> given. For example if your mount got lazy umounted (like hal probably> does) then it's a floating mount not one tied to any tree going to the> root of any namespace.So, running with your "lazy unmounted" example for a bit.The patch I proposed changes how d_path behaves when the task has chrooted relative to it's namespace. So in your scenario what would calling d_pathon a dentry report (for !chrooted and chrooted) without this patch ?I can't tell if you are claiming there is a fundamental problem calling d_path*period* in this scenario. If so, I'd appreciate a little more concrete detail in the way of an actual example, this is a bit hand-wavy.Or that you are just saying another version of "pathames are crap" which I'm not sure if appropos to this patch itself.If it's the former, I'll happily go off and write some code to test yourassertion and it's ramifications if I can better understand what the actualassertion is :-)ThanksTony-To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.kernel.orgMore majordomo info at read the FAQ at
http://lkml.org/lkml/2006/4/20/179
CC-MAIN-2014-15
refinedweb
219
61.67
REST API Microversion Support¶ We need a way to be able to introduce changes to the REST API to both fix bugs and add new features. Some of these changes are backwards incompatible and we currently have no way of doing this. Problem description¶ As a community we are really good at evolving interfaces and code over time via incremental development. We’ve been less good at giant big bang drops of code. The Nova API has become sufficiently large, and constantly growing through new extensions, that it’s not likely to be able to ever do a new major version of the API because of the impact on users and overhead for developers of supporting multiple implementations. At the same time the current situation where we allow innovation in the API through adding extensions, has grown to the point where we now have extensions to extensions, under the assumption that the extension list is a poor man’s versioning mechanism. This has led to large amounts of technical debt. It prevents us from making certain changes, like deprecating pieces of the API that are currently non sensible or broken. Or fixing other areas where incremental development has led to inconsistencies in the API which is confusing for new users. We must come up with a better way that serves the following needs: Makes it possible to evolve the API in an incremental manner, which is our strength as a community. Provides backwards compatibility for users of the REST API. Provides cleanliness in the code to make it less likely that we’ll do the wrong thing when extending or modifying the API. A great interface is one that goes out of it’s way to makes it hard to use incorrectly. A good interface tries to be a great interface, but bends to the realities of the moment. Use Cases¶ Allows developers to modify the Nova API in backwards compatible way and signal to users of the API dynamically that the change is available without having to create a new API extension. Allows developers to modify the Nova API in a non backwards compatible way whilst still supporting the old behaviour. Users of the REST API are able to decide if they want the Nova API to behave in the new or old manner on a per request basis. Deployers are able to make new backwards incompatible features available without removing support for prior behaviour as long as there is support to do this by developers. Users of the REST API are able to, on a per request basis, decide which version of the API they want to use (assuming the deployer supports the version they want). Proposed change¶ Design Priorities: How will the end users use this, and how to we make it hard to use incorrectly How will the code be internally structured. How do we make it: Easy to see in the code that you are about to break API compatibility. Make it easy to make backwards compatible changes Make it possible to make backwards incompatible changes Minimise code duplication to minimise maintenance overhead How will we test this both for unittests and in integration. And what limits does that impose. Versioning¶ For the purposes of this discussion, “the API” is all core and optional extensions in the Nova tree. Versioning of the API should be a single monotonic counter. It will be of the form X.Y where it follows the following convention: X will only be changed if a significant backwards incompatible API change is made which affects the API as whole. That is, something that is only very very rarely incremented. Y when you make any change to the API. Note that this includes semantic changes which may not affect the input or output formats or even originate in the API code layer. We are not distinguishing between backwards compatible and backwards incompatible changes in the versioning system. It will however be made clear in the documentation as to what is a backwards compatible change and what is a backwards incompatible one. Note that groups of similar changes across the API will not be made under a single version bump. This will minimise the impact on users as they can control changes that they want to be exposed to. A backwards compatible change is defined as one which would be allowed under the OpenStack API Change Guidelines A version response would look as follows GET / { "versions": [ { "id": "v2.1", "links": [ { "href": "", "rel": "self" } ], "status": "CURRENT", "version": "5.2" "min_version": "2.1" }, ] } This specifies the min and max version that the server can understand. min_version will start at 2.1 representing the v2.1 API (which is equivalent to the V2.0 API except for XML support). It may eventually be increased if there are support burdens we don’t feel are adequate to support. Client Interaction¶ A client specifies the version of the API they want via the following approach, a new header: X-OpenStack-Nova-API-Version: 2.114 This conceptually acts like the accept header. This is a global API version. Semantically this means: If X-OpenStack-Nova-API-Version is not provided, act as if min_version was sent. If X-OpenStack-Nova-API-Version is sent, respond with the API at that version. If that’s outside of the range of versions supported, return 406 Not Acceptable. If X-OpenStack-Nova-API-Version: latest (special keyword) return max_version of the API. This means out of the box, with an old client, an OpenStack installation will return vanilla OpenStack responses at v2. The user or SDK will have to ask for something different in order to get new features. Two extra headers are always returned in the response: X-OpenStack-Nova-API-Version: version_number, experimental Vary: X-OpenStack-Nova-API-Version The first header specifies the version number of the API which was executed. Experimental is only returned if the operator has made a modification to the API behaviour that is non standard. This is only intended to be a transitional mechanism while some functionality used by cloud operators is upstreamed and it will be removed within a small number of releases.. The second header is used as a hint to caching proxies that the response is also dependent on the X-Openstack-Compute-API-Version and not just the body and query parameters. See RFC 2616 section 14.44 for details. Implementation design details¶ On each request the X-OpenStack-Nova-API-Version header string will be converted to an APIVersionRequest object in the wsgi code. Routing will occur in the usual manner with the version object attached to the request object (which all API methods expect). The API methods can then use this to determine their behaviour to the incoming request. Types of changes we will need to support: * Status code changes (success and error codes) * Allowable body parameters (affects input validation schemas too) * Allowable url parameters * General semantic changes * Data returned in response * Removal of resources in the API * Removal of fields in a response object or changing the layout of the response Note: This list is not meant to be an exhaustive list Within a controller case, methods can be marked with a decorator to indicate what API versions they implement. For example: @api_version(min_version='2.1', max_version='2.9') def show(self, req, id): pass @api_version(min_version='3.0') def show(self, req, id): pass An incoming request for version 2.2 of the API would end up executing the first method, whilst an incoming request for version 3.1 of the API would result in the second being executed. A version object is passed down to the method attached to the request object so it is also possible to do very specific checks in a method. For example: def show(self, req, id): .... stuff .... if req.ver_obj.matches(start_version, end_version): .... Do version specific stuff .... .... stuff .... Note that end_version is optional in which case it will match any version greater than or equal to start_version. Some prototype code which explains how this work is available here: The validation schema decorator would also need to be extended to support versioning @validation.schema(schema_definition, min_version, max_version) Note that both min_version and max_version would be optional parameters. A method, extension, or a field in a request or response can be removed from the API by specifying a max_version: @api_version(min_version='2.1', max_version='2.9') def show(self, req, id): .... stuff .... If a request for version 2.11 is made by a client, the client will receive a 404 as if the method does not exist at all. If the minimum version of the API as whole was brought up to 2.10 then the extension itself could then be removed. The minimum version of the API as a whole would only be increased by a consensus decision between Nova developers who have the ovehead of maintaining backwards compatibility and deployers and users who want backwards compatibility forever. Because we have a monotonically increasing version number across the whole of the API rather than versioning individual plugins we will have potential merge conflicts like we currenty have with DB migration changesets. Sorry, I don’t believe there is any way around this, but welcome any suggestions! Client Expectations¶ As with system which supports version negotiation, a robust client consuming this API will need to also support some range of versions otherwise that client will not be able to be used in software that talks to multiple clouds. The concrete example is nodepool in OpenStack Infra. Assume there is a world where it is regularly connecting to 4 public clouds. They are at the following states: - Cloud A: - min_ver: 2.100 - max_ver: 2.300 - Cloud B: - min_ver: 2.200 - max_ver: 2.450 - Cloud C: - min_ver: 2.300 - max_ver: 2.600 - Cloud D: - min_ver: 2.400 - max_ver: 2.800 No single version of the API is available in all those clouds based on the ancientness of some of them. However within the client SDK certain basic functions like boot will exist, though might get different additional data based on version of the API. The client should smooth over these differences when possible. Realistically this is a problem that exists today, except there is no infrastructure to support creating a solution to solve it. Alternatives¶ One alternative is to make all the backwards incompatible changes at once and do a major API release. For example, change the url prefix to /v3 instead of /v2. And then support both implementations for a long period of time. This approach has been rejected in the past because of concerns around maintance overhead. REST API impact¶ As described above there would be additional version information added to the GET /. These should be backwards compatible changes and I rather doubt anyone is actually using this information in practice anyway. Otherwise there are no changes unless a client header as described is supplied as part of the request. Other end user impact¶ SDK authors will need to start using the X-OpenStack-Nova-API-Version header to get access to new features. The fact that new features will only be added in new versions will encourage them to do so. python-novaclient is in an identical situation and will need to be updated to support the new header in order to support new API features. Developer impact¶ This will obviously affect how Nova developers modify the REST API code and add new extensions. Implementation¶ Dependencies¶ This is dependent on v2.1 v2-on-v3-api spec being completed. Any nova spec which wants to make backwards incompatible changes to the API (such as the tasks api specification) is dependent on on this change. As is any spec that wants to make any API change to the v2.1 API without having to add a dummy extension. JSON-Home is related to this though they provide different services. Microversions allows clients to control which version of the API they are exposed to and JSON-Home describes that API allowing for resource discovery. Testing¶ It is not feasible for tempest to test all possible combinations of the API supported by microversions. We will have to pick specific versions which are representative of what is implemented. The existing Nova tempest tests will be used as the baseline for future API version testing. Documentation Impact¶ The long term aim is to produce API documentation at least partially automated using the current json schema support and future JSON-Home support. This problem is fairly orthogonal to this specification though.
https://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/api-microversions.html
CC-MAIN-2019-35
refinedweb
2,096
62.98
Changing the GPS frequency and configuring which NMEA sentences the gps posts. Hello, I have been browsing the gps lib library but could not find any API for updating the GPS frequency and configuring the NMEA sentences the gps posts Has anyone done this. I need it to set to 5Hz. What I know from other gps's (like the one from ada fruit), is that at that speed you need to disable some of the nmea sentences in the gps itself otherwise, the 9800 baudrate won't be able to keep up displaying all information. Which brings me to the next topic: the PMTK sentences for configuring the L-76-L. Does anybody has the documentation of the L-76-L on which the registers to use to build up the PMTK sentences? I couldn't find them on Many thanks in advance - this.wiederkehr last edited by this.wiederkehr @fsergeys said in Changing the GPS frequency and configuring which NMEA sentences the gps posts.: GALILEO and GALILEO Full satellites Do you know what the difference between the two is? Also I wonder what is the default configuration for the tracking system? I think the application note regardig I2c mentions to perform a dummy write after each command prior to get back to reading see my example here: and the sequence chart in the application note: <- requires login. and here is the code to perform checksum testing yourself: requires the sentence to be a bytes object. def _checksum(self, sentence): l = sentence[0] for c in sentence[1:]: l = l ^ c return l def _test_checksum(self, sentence): if self._checksum(sentence[1:-3]) == int(sentence[-2:], 16): return True return False @combaindeft Got it implemented yesterday and works like a charm. Hope it helps you print("GPS module configuration") #Stop logging to local flash of GPS self.stoplog = "$PMTK185,1*23\r\n" self.i2c.writeto(GPS_I2CADDR, self.stoplog) # Use GPS, GONASS, GALILEO and GALILEO Full satellites self.searchmode = "$PMTK353,1,1,1,1,0*2B\r\n" self.i2c.writeto(GPS_I2CADDR, self.searchmode) # Only out RMC messages self.rmc = "$PMTK314,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0*29\r\n" self.i2c.writeto(GPS_I2CADDR, self.rmc) # Increase output rate to 5Hz self.fivehz = "$PMTK220,200*2C\r\n" self.i2c.writeto(GPS_I2CADDR, self.fivehz) # Set the speed threshold self.speedth = "$PMTK386,0.4*39\r\n" self.i2c.writeto(GPS_I2CADDR, self.speedth) 2 more TIPS: - This is a neat tool to calculate you Hex sums: - If you readout the NMEA messages and you're looking for the end of the sentence, then only look for \r not \r\n. For some reason, if the GPS logs all messages, the end of each NMEA sentence is ended with \r\n, but not if only RMC messages are outputted. (That one kept me scratching my head for a while ...) def getNMEASentence(self): nmea = b'' while True: nmea += self._read().lstrip(b'\n\n').rstrip(b'\n\n') #print(nmea) matchObj = re.search(b'\$G[P,N]+RMC',nmea) if matchObj: matchStr = matchObj.group(0) rmc_idx = nmea.find(matchStr.encode('ascii')) rmc = nmea[rmc_idx:] e_idx = rmc.find(b'\r') if e_idx >= 0: rmc = rmc[:e_idx].decode('ascii') nmea = nmea[(rmc_idx + e_idx):] gc.collect() break else: continue return rmc (The code is not beautiful, but we are prototyping now ...) @combaindeft I have 50% of the code, but since we voted for another feature to be implemented first. I haven't finished it. Planned for half of march. I'll keep you posted. - combaindeft last edited by How has this gone? Curious, as I'm looking into the same issue(s). @livius I had not noticed these documents, but then I discovered there are actually more spec files after you register on the quectel website. Then I found this document documenting all the pmtk's As soon as I have my boards, I will tweak the library code and maybe reuse some of this code. Thx This post is deleted! @fsergeys look better at full spec on page 21 there is sample
https://forum.pycom.io/topic/2449/changing-the-gps-frequency-and-configuring-which-nmea-sentences-the-gps-posts/3
CC-MAIN-2019-35
refinedweb
691
67.25
Amazon CloudWatch Concepts The following terminology and concepts are central to your understanding and use of Amazon CloudWatch: Namespaces A namespace is a container for CloudWatch metrics. Metrics in different namespaces are isolated from each other, so that metrics from different applications are not mistakenly aggregated into the same statistics. There is no default namespace. You must specify a namespace for each data point you publish to CloudWatch. You can specify a namespace name when you create a metric. These names must contain valid XML characters, and be fewer than 256 characters in length. Possible characters are: alphanumeric characters (0-9A-Za-z), period (.), hyphen (-), underscore (_), forward slash (/), hash (#), and colon (:). The AWS namespaces use the following naming convention: AWS/. For example, Amazon EC2 uses the service AWS/EC2 namespace. For the list of AWS namespaces, see AWS Namespaces. Metrics Metrics are the fundamental concept in CloudWatch. A metric. The data points themselves can come from any application or business activity from which you collect data. AWS services send metrics to CloudWatch, and you can send your own custom metrics to CloudWatch. You can add the data points in any order, and at any rate you choose. You can retrieve statistics about those data points as an ordered set of time-series data. has a time stamp, and (optionally) a unit of measure. When you request statistics, the returned data stream is identified by namespace, metric name, dimension, and (optionally) the unit. For more information, see View Available Metrics and Publish Custom Metrics. Time Stamps point was received. Time stamps are dateTime objects, with the complete date plus hours, minutes, and seconds (for example, 2016-10-31T23:59:59Z). For more information, see dateTime. Although it is not required, we recommend that you use Coordinated Universal Time (UTC). When you retrieve statistics from CloudWatch, all times are in UTC. CloudWatch alarms check metrics based on the current time in UTC. Custom metrics sent to CloudWatch with time stamps other than the current UTC time can cause alarms to display the Insufficient Data state or result in delayed alarms. Metrics Retention. For example, if you collect data using a period of 1 minute, the data remains available for 15 days with 1-minute resolution. After 15 days this data is still available, but is aggregated and is retrievable only with a resolution of 5 minutes. After 63 days, the data is further aggregated and is available with a resolution of 1 hour. CloudWatch started retaining 5-minute and 1-hour metric data as of 9 July 2016. Dimensions A dimension is a name/value pair that uniquely identifies a metric. You can assign up to 10 dimensions to a metric. Every metric has specific characteristics that describe it, and you can think of dimensions as categories for those characteristics. Dimensions help you design a structure for your statistics plan. Because dimensions are part of the unique identifier for a metric, whenever you add a unique name/value pair to one of your metrics, you are creating a new variation of that metric. AWS services that send data to CloudWatch attach dimensions to each metric. You can use dimensions to filter the results that CloudWatch returns. For example, you can get statistics for a specific EC2 instance by specifying the InstanceId dimension when you search for metrics. For metrics produced by certain AWS services, such as Amazon EC2, CloudWatch can aggregate data across dimensions. For example, if you search for metrics in the AWS/EC2 namespace but do not specify any dimensions, CloudWatch aggregates all data for the specified metric to create the statistic that you requested. CloudWatch does not aggregate across dimensions for your custom metrics. Dimension Combinations CloudWatch to use for aggregation. For example, suppose that you publish four distinct metrics named ServerStats in the DataCenterMetric namespace with the following properties: Dimensions: Server=Prod, Domain=Frankfurt, Unit: Count, Timestamp: 2016-10-31T12:30:00Z, Value: 105 Dimensions: Server=Beta, Domain=Frankfurt, Unit: Count, Timestamp: 2016-10-31T12:31:00Z, Value: 115 Dimensions: Server=Prod, Domain=Rio, Unit: Count, Timestamp: 2016-10-31T12:32:00Z, Value: 95 Dimensions: Server=Beta, Domain=Rio, Unit: Count, Timestamp: 2016-10-31T12:33:00Z, Value: 97 If you publish only those four metrics, you can retrieve statistics for these combinations of dimensions: Server=Prod,Domain=Frankfurt Server=Prod,Domain=Rio Server=Beta,Domain=Frankfurt Server=Beta,Domain=Rio You can't retrieve statistics for the following dimensions or if you specify no dimensions: Server=Prod Server=Beta Domain=Frankfurt Domain=Rio Statistics Statistics are metric data aggregations over specified periods of time. CloudWatch provides statistics based on the metric data points provided by your custom data or provided by other AWS services. Units Each statistic has a unit of measure. Example units include Bytes, Seconds, Count, and Percent. For the complete list of the units that CloudWatch supports, see the MetricDatum data type in the Amazon CloudWatch API Reference. You can specify a unit when you create a custom metric. If you do not specify a unit, CloudWatch uses None as the unit. Units help provide conceptual meaning to your. If you have two otherwise identical metrics with different units, two separate data streams are returned, one for each unit.. You, you should select a period that aligns to how the metric is stored. For more information about metrics that support sub-minute periods, see High-Resolution Metrics. When you retrieve statistics, you can specify a period, start time, and end time. These parameters determine the overall length of time associated with the statistics. The default values for the start time and end time get you the last hour's worth of statistics. The values that you specify for the start time and end time determine how many periods CloudWatch returns. For example, retrieving statistics using the default values for the period, start time, and end time returns an aggregated set of statistics for each minute of the previous hour. If you prefer statistics aggregated in ten-minute blocks, specify a period of 600. For statistics aggregated over the entire hour, specify a period of 3600. Periods are also important for CloudWatch alarms. When you create an alarm to monitor a specific metric, you are asking CloudWatch to compare that metric to the threshold value that you specified. You have extensive control over how CloudWatch makes that comparison. Not only can you specify the period over which the comparison is made, but you can also specify how many evaluation periods are used to arrive at a conclusion. For example, if you specify three evaluation periods, CloudWatch compares a window of three data points. CloudWatch only notifies you if the oldest data point is breaching and the others are breaching or missing. For metrics that are continuously emitted, CloudWatch doesn.. For large datasets, you can insert a pre-aggregated dataset called a statistic set. With statistic sets, you give CloudWatch the Min, Max, Sum, and SampleCount for a number of data points. This is commonly used when you need to collect data many times in a minute. For example, suppose you have a metric for the request latency of a webpage. It doesn't make sense to publish data with every webpage hit. We suggest that you collect the latency of all hits to that webpage, aggregate them once a minute, and send that statistic set to CloudWatch. Amazon CloudWatch doesn't differentiate the source of a metric. If you publish you to get the statistics for minimum, maximum, average, and sum of all requests across your application. Percentiles A percentile indicates the relative standing of a value in a dataset. For example, the 95th percentile means that 95 percent of the data is lower than this value and 5 percent of the data is higher than this value. Percentiles help you get a better understanding of the distribution of your metric data. You can use percentiles with the following services: Amazon EC2 Amazon RDS Kinesis Application Load Balancer Elastic Load Balancing API Gateway you are monitoring the CPU utilization of your EC2 instances to ensure that your customers have a good experience. If you monitor the average, this can hide anomalies. If you monitor the maximum, a single anomaly can skew the results. Using percentiles, you can monitor the 95th percentile of CPU utilization to check for instances with an unusually heavy load.). Percentile statistics are not available for metrics when any of the metric values are negative numbers. CloudWatch needs raw data points to calculate percentiles. If you publish data using a statistic set instead, you can only retrieve percentile statistics for this data when one of the following conditions is true: The SampleCount of the statistic set is 1. The Min and the Max of the statistic set are equal. Alarms You can use an alarm to automatically initiate actions on your. You can also add alarms to dashboards. Alarms invoke actions for sustained state changes only. CloudWatch alarms Publish Custom Metrics. For more information, see Creating Amazon CloudWatch Alarms and Create an Alarm from a Metric on a Graph.
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch_concepts.html
CC-MAIN-2018-26
refinedweb
1,518
55.24
How do I call a batch file in jython? thanks!! Here test, subprocess module functions OK in Jython, as it should: Test batch file batch.bat @echo off echo Hello, from batch file Test in Jython prompt: >>> import subprocess >>> subprocess.call("batch.bat") Hello, from batch file 0 >>> Edited 6 Years Ago by pyTony: n/a Hi, I'm getting the error InputError: no module named subprocess is there another way of doing the task?? You might want to update to the latest version of Jython. This one works just fine on my Windows7 machine ... """jy_hello_bat.py run a batch file using Jython The Windows batch file 'hello.bat' is: echo Hello, from batch file tested with jython2.5.2 """ import subprocess subprocess.call('hello.bat') In my experience most of my normal Python26 code seems to work with Jython 2.5.2 as well. Jython is actually a real nice way for Java programmers to enjoy Python syntax, and for Python programmers to get used to a little Java. Edited 6 Years Ago by vegaseat: n/a This might be simpler ... """jy_hello_bat2.py run a batch file using Jython The Windows batch file 'hello.bat' is: echo Hello, from batch file tested with jython2.5.2 """ import os os.system('hello.bat') ...
https://www.daniweb.com/programming/software-development/threads/365126/calling-batch-files-in-jython
CC-MAIN-2017-39
refinedweb
213
76.32
Hello, I'm having some unexpected results with the function LAPACKE_dgeqrf. Apparently I'm unable to get the appropriate QR decomposition at some cases, I'm rather obtaining a QR decomposition with some unexpected vector orientations for the orthogonal matrix Q. Here is a MWE of the problem: #include <stdio.h> #include <stdlib.h> #include "mkl.h" #define N 2 int main() { double *x = (double *) malloc( sizeof(double) * N * N ); double *tau = (double *) malloc( sizeof(double) * N ); int i, j; /* Pathological example */ x[0] = 4.0, x[1] = 1.0, x[2] = 3.0, x[3] = 1.0; printf("\n INITIAL MATRIX\n\n"); for (i = 0; i < N; i++) { for (j = 0; j < N; j++) { printf(" %3.2lf\t", x[i*N+j]); } printf("\n"); } LAPACKE_dgeqrf ( LAPACK_ROW_MAJOR, N, N, x, N, tau); printf("\n R MATRIX\n\n"); for (i = 0; i < N; i++) { for (j = 0; j < N; j++) { if ( j >= i ){ printf(" %3.2lf\t", x[i*N+j]); }else{ printf(" %3.2lf\t", 0.0); } } printf("\n"); } LAPACKE_dorgqr ( LAPACK_ROW_MAJOR, N, N, N, x, N, tau); printf("\n Q MATRIX\n\n"); for (i = 0; i < N; i++) { for (j = 0; j < N; j++) { printf(" %3.2lf\t", x[i*N+j]); } printf("\n"); } printf("\n"); return 0; } With this example, the output I get is: INITIAL MATRIX 4.00 1.00 3.00 1.00 R MATRIX -5.00 -1.40 0.00 0.20 Q MATRIX -0.80 -0.60 -0.60 0.80 However, the expected QR decomposition would be: R MATRIX 5.00 1.40 0.00 0.20 Q MATRIX 0.80 -0.60 0.60 0.80 I have found this problem with other Initial matrices as well. Thanks in advance, Paulo Link Copied There is no problem. Just as (-2) X 3 and 2 X (-3) are both acceptable factorizations of -6, some columns of Q and the corresponding rows of R may have their signs flipped. Hello mecej4, thanks for the reply, yeap, I know that the given factorization is acceptable. My point (and I probably should have mentioned that explicitly in the description of the problem) is that it is not the expected factorization obtained typically by the gram-schimidt process. It may seem irrelevant, but in the particular application I'm interested it is very important that the directions of the orthonormalized column vectors of Q are preserved, so as the diagonal elements of R are positive. So, in other terms, is it possible to force the library to obtain the expected QR decomposition by GS? Thank you The Lapack routines ?geqrf() do not use Gram-Schmidt or Modified Gram-Schmidt. In fact, after calling ?geqrf() the input matrix has been overwritten by the Householder reflectors that were produced by the factorization. In other words, Q is not stored in the usual matrix convention, but as a sequence of reflectors from which, if desired, one can calculate the usual representation of Q by calling ?orgqr(). However, in many algorithms one does not want Q explicitly, but wishes to obtain the product of Q and another matrix, using ?ormqr(). If you really wish to obtain Q explicitly and insist on a convention (e.g., all diagonal elements of R should be positive, as you specified), it is easy to flip the signs of the corresponding columns of Q and rows of R to suit. I see, I did that already, but I was hoping it could be done by the library, so I wouldn't need the extra loop for flipping the signs Unfortunately, I do need the Q matrix explicitly and also need its columns to be aligned so all diagonal elements of R are positive. Anyway, thank you very much for your help.
https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Problems-with-dgeqrf-and-dorgqr/td-p/1109138
CC-MAIN-2021-31
refinedweb
627
63.9
This page contains an archived post to the Java Answers Forum made prior to February 25, 2002. If you wish to participate in discussions, please visit the new Artima Forums. Draw llines and preserve them to see them later Posted by Kishori Sharan on July 07, 2001 at 2:47 PM Hi There are a few problems in your code.1. When you call repaint() method from inside the loop in aa() method then all previously drawn lines will beersed in paint() method. This is why you see only one line.2. In your for loop you are increasing counter by 100 ( i = i + 100 ). However, your canvas is only 200 in height. Therefore, you won't be able to see any lines except first two even if they are not erased.Solution:1. Move your line drawing code from aa() method to paint() method.2. Just call repaint() method from aa() which in turn will call paint() method , which will draw all your lines.3. Change the loop counter increments so that all your lines will fit in 200 X 200 canvas. Or, increase the size of canvas to accomodate all your lines co-ordinates. I am attaching the modified code for your reference. ThanksKishori /////////////// Test.javaimport java.awt.*; public class Test extends Canvas { Frame F = new Frame(); int x1,x2,y1,y2; public Test() { F.setSize(300,300); F.setVisible(true); setSize(200,200); setVisible(true); F.add(this); } public void aa() { repaint(); } public void paint(Graphics g) { x1=0; x2=100; for ( int i = 0 ; i < 100 ; i += 3 ) { y1 = i; y2 = i ; g.drawLine(x1,y1,x2,y2); } } public void update(Graphics g) { paint(g); } public static void main (String[] args) { Test t= new Test(); t.aa(); }}
http://www.artima.com/legacy/answers/Jul2001/messages/52.html
CC-MAIN-2016-50
refinedweb
291
81.53
How to Create a Template in C++ In C++, templates enable you to generalize a class or function so that it doesn’t use a particular data type. You can use templates in C++ to create a function or class that can handle all kinds of data types. Creating the Template - Declare the template by typing text below. myType, TemplateName, variable1, and variable2 can be named anything you want. myType will represent a type that hasn't been identified, TemplateName will represent the name of your template, and variable1 and variable2 will represent the variables to be used in your function template. Note that you can have as much variables as you want. template <class myType> myType TemplateName(myType variable1, myType variable2) { }; - Begin programing your code within the brackets. It should involve the variables you defined (in the example above they would be variable1 and variable2). Here's a sample template: template <class T> T GetMax (T a, T b) { T result; result = (a>b)? a : b; return (result); } Note that you have to use T whenever you make a variable. Calling the Template - Create some variables in the main body of your code. - Type your template name and the type of the variable within angle brackets. In we were using integers with our example, we would type: int i=2, j=3, k; k=TemplateName<int>(i,j); [edit]Tips - Most programmers will use a T to define the type. - In the example, you may have noticed you needed to use T (the type variable) to define a variable. This is because you don't know the type of data that will be put in, so it must be a variable. - When you use T as a parameter for the type, you don't have to include <type> after you call the function; the compiler can find out automatically what the type is. - Some good uses of templates include linked lists (a class with a pointer to another of itself), smart pointers (pointers managed by the class) and functions/container classes which manipulate types of different sizes. - Templates can take values as arguments as well as types. For example, you can something like this: template <class type, int value_as_argument>
http://www.wikihow.com/Create-a-Template-in-C%2B%2B
crawl-002
refinedweb
370
69.41
Scrollable List of Buttons Environment: VC6 SP2, NT4, Win95 My application has a feature which allows a user to define one or more "commands". The commands become buttons whose numbers can grow without limit. My users prefer this scrollable list of buttons over a combobox and "go" button implementation (often see on a Web sites) because: - It's less clicking - They can see more than one command at a time Don't fret too much about using an entire grid implementation to do something as trivial as a list of buttons. The grid's executable footprint is quite small. In addition, if you would like to change fonts, change text colors, add an image to the button, etc. you can call the appropriate grid functions to make this happen. To UseGet CGridCtrl Source. Create a new project in VC++. Create a new subdirectory within the project folder named "GridCtrl". Download the latest version of the Grid Control and place all source files within your new GridCtrl folder. Add these files to your VC++ project. Add Button List Source. Download the button list source files and add to the VC++ project. Add the Button List Control to your Dialog. Use VC++ dialog editor to create a custom control whose class name is: MFCGridCtrl. Edit your Dialog Header File: #include "BtnListCtrl.h" ... // Dialog Data ... CBtnListCtrl m_BtnListCtrl; Edit your Dialog Implementation File: // register the control per CGridCtrl requirements void CMyDlg::DoDataExchange(CDataExchange* pDX) { CDialog::DoDataExchange(pDX); //{{AFX_DATA_MAP(CMyDlg) //}}AFX_DATA_MAP ... DDX_GridControl(pDX, IDC_BTNLIST, m_BtnListCtrl); } // add message handling for button clicks BEGIN_MESSAGE_MAP(CMyDlg, CDialog) //{{AFX_MSG_MAP(CMyDlg) //}}AFX_MSG_MAP ... ON_NOTIFY(GVN_COLUMNCLICK, IDC_BTNLIST, OnBtnClick) END_MESSAGE_MAP() ... void CMyDlg::OnBtnClick( NMHDR* pNMHDR, LRESULT* pResult) { GV_DISPINFO *pgvDispInfo = (GV_DISPINFO *)pNMHDR; GV_ITEM *pgvItem = &pgvDispInfo->item; if( pgvItem->row > -1 ) // -1 indicates clicked on control but not button { TRACE( "Clicked Btn Row=%i", pgvItem->row); } *pResult = 0; } // Initialize the button list class BOOL CMyDlg::OnInitDialog() { ... // TODO: Add extra initialization here m_BtnListCtrl.InitializeButtons(); return TRUE; // return TRUE unless you set the focus to a control } // set the button labels anytime after initialization ... CStringArray StringArrayLabels; StringArrayLabels.Add( "Btn Label"); m_BtnListCtrl.WriteButtonLabels( StringArrayLabels); Downloads Download demo project (includes exe and Grid source)- 109 Kb History Bigger font size and images in the buttonsPosted by ptolomeoo on 10/21/2008 11:29am Hello, I am a completely noob. I wanted to put a bigger font size in the buttons, and insert a little icon on a side of those buttons but couldn't figure out how to do that. Could anyone throw a hint on this issue? Thanks in advance. I could figure out about the font size...Posted by ptolomeoo on 10/21/2008 11:53am Hello. I could figure out how to increase the font size :-) Just doin: (pCell->lfFont).lfHeight=30; in this function (in GridCtrl.cpp) CGridCell* CGridCtrl::CreateCell(int nRow, int nCol) { CGridCell* pCell = new CGridCell; if (!pCell) return NULL; // Make format same as cell above if (nRow > 0 && nCol >= 0 && nCol < m_nCols) pCell->nFormat = GetItemFormat(nRow-1, nCol); // Make font default grid font memcpy(&(pCell->lfFont), &m_Logfont, sizeof(LOGFONT)); (pCell->lfFont).lfHeight=30; return pCell; }Reply
http://www.codeguru.com/cpp/controls/controls/lists,treesandcombos/article.php/c2129/Scrollable-List-of-Buttons.htm
CC-MAIN-2017-13
refinedweb
514
57.87
Registrars, registries, registrants… When registering a domain name*), we can choose from a lot of registrars. Simply put, registrars are the middlemen who interact with registries on behalf of registrants. Got that? Well, the terminology is a bit confusing, so let’s start with some definitions [Abley, 2003]. - “registrant” - the organization or person responsible for a domain; - “registrars” - the middlemen who interact with registries on behalf of registrants; - “registry” - the organization which maintains the register and publishes the zone; - “register” - the data that is maintained by the registry. For a comprehensive listing of all ICANN-Accredited Registrars, see this overview from ICANN. The domain registration process Because there are so many registrars, guidelines and policies are important. In a presentation from Joe Abley (ISC) you will see in detail what’s exactly going on after you hit that submit button and order your favorite domain name. Presentation by Joe Abley during APRICOT 2003 In his presentation, Joe talks about EPP: the Extensible Provisioning Protocol. This is one of the technical protocols that Registrars will use during the registration of domain names. EPP defined: EPP is an application layer client-server protocol for the provisioning and management of objects stored in a shared central repository. Specified in XML, the protocol defines generic object management operations and an extensible framework that maps protocol operations to objects.. Current status of EPP (March 2004) A complete set of EPP documents from the Provisioning Registry Protocol Working Group has now made it to RFC. These RFCs describe the EPP protocol in technical detail. EPP meets and exceeds the requirements for a generic registry registrar protocol as described in RFC 3375: Generic Registry-Registrar Protocol Requirements (Sep) In case you’d like to know more about this topic, you will find these resources helpful as well: *) This article describes the process for registering some of the major generic toplevel domains (gTLD), such as .com, .net, and .org. Terminology and procedures mentioned in the documents are not necessarily the same for ccTLD domain names. And in fact, not even for all gTLDs. Thanks for this remark, Chris! This all assumes... ... you are registering under a generic top-level domain like .com. Registering a country code domain like .co.uk, .de or whatever is a little different. In the UK at least, the terminology is a little different (we don't have "registrars") and you don't have to be ICANN approved to sell .uk domains to the public. Every namespace *is* different... This makes it difficult to come to commonly held understandings in the domain name system, technical or otherwise. I wrote a draft a few years ago that I update once in a while that attempts to make some sense of the terminology for people in a way that takes all of the different implementations into account. The most recent (but expired) version of this draft can be found here: How long will it take after I make the changes to my DNS will the they take effect?
http://www.oreillynet.com/onlamp/blog/2004/03/behind_the_name_the_domain_nam.html
crawl-002
refinedweb
502
53
WE MOVED TO ZULIP => <= I’m a bit confused with Dry Struct…. It used to be in drytypes: Dry::Types::Struct now its on its own so change to Dry::Struct, right? What stupid thing am I doing here: class CreateClientForm < Dry::Struct module Types include Dry::Types.module end attribute :name, Types::Form::String end which appears to not find “String” constant… specs in dry struct do: attribute :name, ‘coercible.string’ is this how I should change things? Maybe this should be Types::Coercible::String ?? that seems to work… hmmm I just assumed all ‘form’ values here so use that, but perhaps I should read more closely descriptions of each…. …in trying to understand option :account def unique?(attr_name, value) and how it’s params line up with schema.with(object: user_account).call(input) 1) is that ‘object’ key special? I thought it would be ‘account’ 2) i guess that the ‘unique?’ method gets curried, so ‘value’ comes in… but a little down” required(:email).filled(scoped_unique?: :email, scope?: { active: true }) seems to pass in as a key “scope?:” but receive it as a positional arg… ok i guess… if i have it straight Just to report back… seems like that doc is wrong. Probably was specialised for ‘account’ then generalised to ‘object’ but not fully. I think it should be something like: option :record def unique?(attr_name, value) record.class .where.not(id: record.id) .where(attr_name => value) .empty? end module Types include Dry::Types.module end class Address < Dry::Struct attribute :suburb, Types::Strict::String end class Customer < Dry::Struct attribute :name, Types::Strict::String attribute :address, Address end Customer.new(name: "George", address: {suburb: "Test Town”}) # => #<Customer name="George" address=#<Address suburb="Test Town">> name.unique_within_account?(account_id) def unique_within?(first, second, the_value) name.unique_within?(“foo”, “bar”), firstwould be ”foo”, secondwould be ”bar” def scoped_unique?(attr_name, scope, value) .filled(prdicate_name?: :args_go_here)
https://gitter.im/dry-rb/chat?at=587441cd074f7be763afa312
CC-MAIN-2020-24
refinedweb
314
68.67
int rfio_open (const char *path, int flags, int mode); Under Linux, for large files: #define _LARGEFILE64_SOURCE #include <sys/types.h> #include "rfio_api.h" int rfio_open64 (const char *path, int flags, int mode); For large files, under other systems: #include <sys/types.h> #include "rfio_api.h" int rfio_open64 (const char *path, int flags, int mode); O_RDONLY Open for reading only O_WRONLY Open for writing only O_RDWR Open for reading and writing O_NDELAY Do not block on open O_APPEND Append on each write O_CREAT Create file if it does not exist O_TRUNC Truncate size to 0 O_EXCL Error if create and file exists O_LARGEFILE When size can be superior to 2GB-1. See NOTES mode specifies the permission bits to be set if the file is created. Opening a file with O_APPEND set causes each write on the file to be appended to the end. If O_TRUNC is specified and the file exists, the file is truncated to zero length. If O_EXCL is set with O_CREAT, then if the file already exists, the open returns an error. This can be used to implement a simple exclusive access locking mechanism. If O_EXCL is set and the last component of the pathname is a symbolic link, the open will succeed even if the symbolic link points to an existing name. If the O_NDELAY flag is specified and the open call would result in the process being blocked for some reason (for example waiting for a carrier on a dial-up line), the open returns immediately. The first time the process attempts to perform IO on the open file, it will block (not currently implemented). On systems that support the Large Files, O_LARGEFILE in rfio_open allows files whose sizes cannot be represented in 31 bits to be opened.
http://www.makelinux.net/man/3/R/rfio_open64
CC-MAIN-2013-48
refinedweb
293
67.69
for connected embedded systems Managing Widgets in Application Code We recommend that you create your application's UI in PhAB -- it's easier than doing it in your code. However, if the interface is dynamic, you'll probably have to create parts of it "on the fly." This chapter includes: Creating widgets Creating a widget in your application code is a bit more work than creating it in PhAB. That's because PhAB looks after a lot of the physical attributes for you, including size, location, and so on. If you create the widget in your code, you'll have to set these resources yourself. To create a widget in your code, call PtCreateWidget(). The syntax is as follows: PtWidget_t *PtCreateWidget( PtWidgetClassRef_t *class, PtWidget_t *parent, unsigned n_args, PtArg_t *args ); The arguments are: - class - The type of widget to create (e.g. PtButton) - parent - The parent of the new widget. If this is Pt_DEFAULT_PARENT, the new widget is made a child of the default parent, which is the most recently created container-class widget. If parent is Pt_NO_PARENT, the widget has no parent. - n_args - The number of elements in the args array. - args - An array of PtArg_t structures (see the Photon Library Reference) that store your settings for the widget's resources. These settings are like the ones used for PtSetResources(); see the Manipulating Resources in Application Code chapter. You can specify the default parent (used if the parent argument to PtCreateWidget() is Pt_DEFAULT_PARENT) by calling PtSetParentWidget(). To assign a widget to a different container, call PtReparentWidget(). Here are a few things to note about widgets created in application code: - The widget isn't realized until the container widget is realized. If the container is already realized, you can call PtRealizeWidget() to realize the new widget. - If you create a widget in a PhAB module and then destroy the module, the widget is destroyed, too. The next time the module is created, it will appear as it was specified in PhAB. - If you save a global pointer to the widget, make sure you reset it to NULL when the widget is destroyed. This can easily be done in the widget's Pt_CB_DESTROYED callback. Failing to reset the global pointer (and check it before using it) is a frequent source of problems with widgets created in code. Ordering widgets The order in which widgets are given focus depends on the order in which they were created or on the widget order specified in PhAB (see "Ordering widgets" in the Creating Widgets in PhAB chapter). The backmost widget is the first in the tab order; the frontmost widget is the last. If you're creating widgets programmatically, you can create them in the order in which you want them to get focus, or you can use these functions to change the order: - PtWidgetInsert() - Insert a widget in the widget family hierarchy - PtWidgetToBack() - Move a widget behind all its brothers - PtWidgetToFront() - Move a widget in front of all its brothers Alternatively, you can use a widget's Pt_CB_LOST_FOCUS callback (defined by PtBasic) to override the tab order by giving focus to another widget. In the lost-focus callback, use PtContainerGiveFocus() to give focus to the desired widget, and return Pt_END from the callback to prevent focus from being given to the original target of the focus change. Working in the widget family The following functions can be used to work with the widget family hierarchy, and may be useful in setting the focus order: - PtChildType() - Determine the relationship between two widgets - PtFindDisjoint() - Return the nearest disjoint parent widget - PtFindFocusChild() - Find the closest focusable child widget - PtFindGuardian() - Find the widget responsible for another widget's actions - PtGetParent() - Find the nearest parent widget that matches the specified class - PtGetParentWidget() - Return the current default widget parent - PtNextTopLevelWidget() - Get a pointer to the next top-level widget - PtValidParent() - Identify a valid parent for a widget - PtWidgetBrotherBehind() - Get the brother behind a widget - PtWidgetBrotherInFront() - Get the brother in front of a widget - PtWidgetChildBack() - Get the child that's farthest back in a container - PtWidgetChildFront() - Get the child at the very front of a container - PtWidgetFamily() - Traverse the widget hierarchy from back to front - PtWidgetParent() - Get a widget's parent - PtWidgetSkip() - PtWidgetTree() - Walk the widget tree from front to back - PtWidgetTreeTraverse() - Walk the widget family hierarchy from front to back Callbacks You can add and remove callbacks in your code as well as from PhAB -- just watch for differences between the two types! Adding callbacks An application registers callbacks by manipulating the widget's callback resources. The Photon widget classes employ a naming convention for these resources -- they all begin with Pt_CB_. Callbacks can be added to the callback list kept by these resources using PtAddCallbacks() to add several callback functions to the list or PtAddCallback() to add just one. In either case, the first two arguments to the function are the widget and the name of the callback resource to be augmented. The remaining arguments depend on which function is used. The third argument to PtAddCallbacks() is an array of callback records. Each record contains a pointer to a callback function and the associated client data pointer that will be passed to the callback function when it's invoked. Each of these callback records is copied to the widget's internal callback list. For example, we might want to have the application perform some action when the user selects (i.e. presses) a button. The PtButton widget class provides the Pt_CB_ACTIVATE callback resource for notifying the application when the button has been pressed. To create the widget and attach a callback function to this callback resource, we'd have to use code like this: { PtWidget_t *button; int push_button_cb( PtWidget_t *, void *, PtCallbackInfo_t *); PtCallback_t callbacks[] = { {push_button_cb, NULL} }; ... button = PtCreateWidget(PtButton, window, 0, NULL); PtAddCallbacks(button, Pt_CB_ACTIVATE, callbacks, 1); } where push_button_cb is the name of the application function that would be called when the user presses the button. The PtCallback_t structure is used to define lists of callbacks; for details, see the Photon Widget Reference. When adding only one callback function to the callback list (as in this case), it's simpler to use PtAddCallback(). This function takes the pointer to the callback function as the third argument, and the client data pointer as the final argument. The above code fragment could be written more concisely as: { PtWidget_t *button; int push_button_cb( PtWidget_t *, void *, PtCallbackInfo_t *); button = PtCreateWidget(PtButton, window, 0, NULL); PtAddCallback(button, Pt_CB_ACTIVATE, push_button_cb, NULL); } You can also give an array of callback records as the value for the callback resource when using argument lists in conjunction with PtCreateWidget() or PtSetResources(). Since the callback list is an array, you should specify the array's base address as the third argument to PtSetArg(), and the number of elements as the final argument. In this case, the callback records are added to the current callback list, if there is one. This gives us another way to specify the callback for the above example: { PtArg_t arg[5]; int push_button_cb( PtWidget_t *, void *, PtCallbackInfo_t *); PtCallback_t callbacks[] = { {push_button_cb, NULL} }; ... PtSetArg(&args[0], Pt_CB_ACTIVATE, callbacks, 1); PtCreateWidget(PtButton, window, 1, arg); } Each of these methods has its advantages. PtAddCallback() is of course simple. PtAddCallbacks() is more efficient when there are several callbacks. Using PtSetArg() and passing the result to PtCreateWidget() allows the widget creation and callback list attachment to be performed atomically. Callback invocation When called, the callback function is invoked with the following parameters: - PtWidget_t *widget - The widget that caused the callback function to be called, i.e. the one on which the action took place. - void *client_data - Application-specific data that was associated with the callback when it was registered with the widget. - PtCallbackInfo_t *call_data - A pointer to a PtCallbackInfo_t structure (see the Photon Widget Reference) that holds data specific to this invocation of the callback. It relates to the reason the callback was called and may have data specific to the callback's behavior. The PtCallbackInfo_t structure is defined as: typedef struct Pt_callback_info { unsigned long reason; unsigned long reason_subtype; PhEvent_t *event; void *cbdata; } PtCallbackInfo_t; The elements of PtCallbackInfo_t have the following meaning: - reason -- indicates the reason the callback was called; this is normally set to the name of the callback resource whose callback list has been called. - reason_subtype -- indicates a particular callback type associated with the reason; for most callbacks, this value is zero. - event -- a pointer to a PhEvent_t structure (see the Photon Library Reference) that describes the Photon event that caused the callback to be invoked. - cbdata -- call data that is specific to the callback resource that caused the callback function to be called. For more information, see the descriptions of the callbacks defined for each widget in the Widget Reference. Removing callbacks You can remove one or more callbacks from a callback list associated with a widget resource using the PtRemoveCallbacks() and PtRemoveCallback() functions. PtRemoveCallbacks() takes an array of callback records as an argument and removes all the callbacks specified by it from the callback list. PtRemoveCallback() removes just one callback function from the callback list. Both functions take the widget as the first argument and the widget resource as the second argument. To remove the callback from the button we've created above, we could do this: int push_button_cb( PtWidget_t *, void *, PtCallbackInfo_t *); PtCallback_t callbacks[] = { {push_button_cb, NULL} }; PtRemoveCallbacks(button, Pt_CB_ACTIVATE, callbacks, 1); or this: int push_button_cb( PtWidget_t *, void *, PtCallbackInfo_t *); PtRemoveCallback(button, Pt_CB_ACTIVATE, push_button_cb, Both the callback function pointer and the client data pointer are important when removing callbacks. Only the first element of the callback list that has both the same callback function and the same client data pointer will be removed from the callback list. Examining callbacks You can examine the callback list by getting the value of the appropriate callback list resource. The type of value you get from a callback list resource is different from the value used to set the resource. Although this resource is set with an array of callback records, the value obtained by getting the resource is a pointer to a list of callback records. The type of the list is PtCallbackList_t. Each element of the list contains a cb member (i.e. the callback record) and a next pointer (which points to the next element of the list). The following example shows how you can traverse through the Pt_CB_ACTIVATE callback list for widget to find all instances of a particular callback function, cb: ... PtCallbackList_t *cl; PtGetResources(widget, Pt_CB_ACTIVATE, &cl, 0); for ( ; cl; cl = cl->next ) { if ( cl->cb.func == cb ) break; } Event handlers You can add and remove event handlers (raw and filter callbacks) in your application code as well as in PhAB -- however, there are some differences between the two types. Adding event handlers As with callbacks, you can also set or examine event handlers by performing a set or get directly on the event handler resource. The following resources of PtWidget let you specify handlers for Photon events: For more information about these callback resources, see the Photon Widget Reference. The set operation requires an array of event handler records of type PtRawCallback_t. These are similar to the callback records mentioned above, having event_mask, event_f, and data fields. The event mask is a mask of Photon event types (see PhEvent_t in the Photon Library Reference) indicating which events will cause the callback function to be invoked. The event_f and data members are the event handler function and client data, respectively. A get operation yields a PtRawCallbackList_t * list of event handler records. As with callback lists, the list contains two members: next and cb. The cb member is an event handler record. You can add Pt_CB_RAW event handlers using either the PtAddEventHandler() or PtAddEventHandlers() function. You can add Pt_CB_FILTER event handlers using either the PtAddFilterCallback() or PtAddFilterCallbacks() function. The arguments to PtAddEventHandler() and PtAddFilterCallback() are: - widget - Widget to which the event handler should be added. - event_mask - Event mask specifying which events should cause the event handler to be called. - event_f - Event-handling function. - data - A pointer to pass to the event handler as client data. The arguments to PtAddEventHandlers() and PtAddFilterCallbacks() are: - widget - Widget to which the event handlers should be added. - handlers - Array of event handler records. - nhandlers - Number of event handlers defined in the array. Removing event handlers You can remove Pt_CB_RAW event handlers by calling either PtRemoveEventHandler() or PtRemoveEventHandlers(). You can remove Pt_CB_FILTER event handlers by calling either PtRemoveFilterCallback() or PtRemoveFilterCallbacks() The parameters to PtRemoveEventHandler() and PtRemoveFilterCallback() are: - widget - Widget from which the event handler should be removed. - event_mask - Event mask specifying the events the handler is responsible for. - event_f - Event-handling function. - data - Client data associated with the handler. This looks for an event handler with the same signature -- i.e. the same event_mask, data and event_f -- in the widget and removes one if it's found. The parameters to PtRemoveEventHandlers() and PtRemoveFilterCallbacks() are: - widget - Widget from which the event handlers should be removed. - handlers - Array of event-handler records. - nhandlers - Number of event handlers defined in the array. As with PtRemoveEventHandler() and PtRemoveFilterCallback(), an event handler is removed only if it has the exact same signature as one of the event handler specifications in the array of event handler records. Event handler invocation When invoked, event handlers receive the same arguments as callback functions, i.e. the parameters are: - the widget that received the event (widget) - the client data associated with the event handler (client_data) - the callback information associated with the particular event (info). Event handlers return an integer value that the event handler must use to indicate whether or not further processing should be performed on the event. If the event handler returns the value Pt_END, this indicates that no further processing is to be performed on the Photon event, and the event is consumed. The event member of the info parameter contains a pointer to the event that caused the event handler to be invoked. You should check the type member of this event to determine how to deal with the event. It will be one of the event types specified in the event_mask given when the event handler was added to the widget. To retrieve the data associated with the particular event, call the PhGetData() with the pointer to the event as a parameter. This will return a pointer to a structure with the data specific to that particular event type. This structure's type depends on the event type. Widget styles Widget class styles let you customize or modify a widget's appearance, size, and behavior at runtime. They also let multiple looks for a single type of widget exist at the same time. Essentially, a widget class style is a collection of methods and data that define the look and feel of instances of the widget class. Each widget class has a default style, but you can add or modify an arbitrary number of additional styles at any time. You can even modify the default style for a class, changing the look and feel of any instances of that class that are using the default style. Each instance of a widget can reference a specific style provided by its class. You can change the style that any widget is using whenever you want. Each style has a set of members, including a name for the style and functions that replace or augment some of the widget class's methods. Methods are class-level functions that define how the widget initializes itself, draws itself, calculates its extent, and so on. For more information about methods, see the Building Custom Widgets guide. The members of a style are identified by the following manifests: - Pt_STYLE_DRAW - The address of a function that's called whenever any widget that's using this style needs to draw itself. - Pt_STYLE_EXTENT or Pt_STYLE_SIZING - The address of a function that whenever a widget that's using this style is moved, resized, or modified in some fashion that may require the widget to move or resize (change in widget data). This function is responsible for setting the widget's dimension to the appropriate values. - Pt_STYLE_ACTIVATE - The address of a function that's called whenever a widget is created that defaults to this style, and whenever a widget's style is changed from some other style to this one. This function is the place to put manipulation of a widget's control surfaces, the addition of callbacks, or the setting of resources (to override the widget's defaults). - Pt_STYLE_CALC_BORDER - The address of a function that's responsible for reporting how much space is required to render the widget's edge decorations and margins. - Pt_STYLE_CALC_OPAQUE - The address of a function that's responsible for calculating the list of tiles that represents the opaque areas of a widget. This list is used to determine what needs to be damaged below this widget when it's modified. - Pt_STYLE_DEACTIVATE - The address of a function that's called whenever a widget using this style is either being destroyed or is switching to a different style. - Pt_STYLE_NAME - The name of the style. - Pt_STYLE_DATA - A pointer to an arbitrary data block for the style's use. For details about the members, see PtSetStyleMember(). The following functions let you create and manipulate the widget class styles: - PtAddClassStyle() - Add a style to a widget class - PtCreateClassStyle() - Create a class style - PtDupClassStyle() - Get a copy of a widget class style - PtFindClassStyle() - Find the style with a given name - PtGetStyleMember() - Get a member of a style - PtGetWidgetStyle() - Get the style that a widget is currently using - PtSetClassStyleMethods() - Set multiple members of a style from an array - PtSetStyleMember() - Set a member of a style - PtSetStyleMembers() - Set multiple members of a style from a variable-length argument list - PtSetWidgetStyle() - Set the current style for a widget Some of these functions require or return a pointer to a PtWidgetClassStyle_t structure. Don't access the members of this structure directly; call PtGetStyleMember() instead. This example creates a style called blue and some buttons. Note that your widgets can use a style before you've added the style to the class or even before you've created the style. When you do create the style and add it to the class, any widgets that use the style are updated immediately. #include <Pt.h> PtWidget_t *win, *but; PtWidgetClassStyle_t *b; use_blue_style( PtWidget_t *widget, void *data, PtCallbackInfo_t *cbinfo) { /* This callback sets the current style for the given widget instance. If you haven't attached the blue style to the class, there shouldn't be any change in the widget's appearance. */ PtSetWidgetStyle (widget, "blue"); return Pt_CONTINUE; } int attach_blue_style( PtWidget_t *widget, void *data, PtCallbackInfo_t *cbinfo) { /* This callback adds the style to the widget class. If you've clicked on one of the "Use blue style" buttons, the style of all buttons should change. */ PtAddClassStyle (PtButton, b); return Pt_CONTINUE; } int main() { PhArea_t area = {{0,50},{100,100}}; PtArg_t argt[10]; PtStyleMethods_t meth; PtCallback_t cb = {use_blue_style, NULL}; PtCallback_t cb2 = {attach_blue_style, NULL}; int unsigned n; /* Initialize the methods for the style. */ meth.method_index = Pt_STYLE_DRAW; meth.func = blue_draw; PtInit(NULL); /* Create the window. */ PtSetArg (&argt[0], Pt_ARG_DIM, &area.size, 0); win = PtCreateWidget (PtWindow, NULL, 1, argt); /* Create some buttons. When you click on one of these buttons, the callback makes the widget instance use the blue style. */ n = 0; PtSetArg (&argt[n++], Pt_ARG_TEXT_STRING, "Use blue style", 0); PtSetArg (&argt[n++], Pt_CB_ACTIVATE, &cb, 1); but = PtCreateWidget (PtButton, NULL, n, argt); PtSetArg (&argt[0], Pt_ARG_TEXT_STRING, "Use blue style also", 0); PtSetArg (&argt[n++], Pt_ARG_POS, &area.pos, 0); but = PtCreateWidget (PtButton, NULL, n, argt); /* Create another button. When you click on it, the callback attaches the blue style to the widget class. */ n = 0; PtSetArg (&argt[n++], Pt_ARG_TEXT_STRING, "Attach blue style", 0); PtSetArg (&argt[n++], Pt_CB_ACTIVATE, &cb2, 1); PtSetArg (&argt[n++], Pt_ARG_POS, &area.pos, 0); area.pos.y = 85; but = PtCreateWidget (PtButton, NULL, n, argt); /* Copy the default style to make the blue style. Replace the drawing member of the new style. */ b = PtDupClassStyle (PtButton, NULL, "blue"); PtSetClassStyleMethods (b,1,&meth); PtRealizeWidget (win); PtMainLoop(); return EXIT_SUCCESS; } Photon hook Photon provides a mechanism for you to allow a block of user code to be pulled in and executed during the initialization of Photon applications. This functionality is most frequently used to customize widget styles, allowing you to change the appearance and behavior of widgets without having to re-compile, re-link, or otherwise reconstruct executables. PtInit() looks for a DLL, PtHook.so, in the search path, and executes the symbol for PtHook() in the DLL. Multi-hook You can use the pt_multihook.so DLL and rename it as PtHook.so to load one or several DLLs, pointed to by the PHOTON_HOOK environment variable. If PHOTON_HOOK points to a DLL, that DLL is loaded and its PtHook() function is executed. If PHOTON_HOOK points to a directory, each DLL in it is loaded and its PtHook() function executed. Example PtHook.so - the pt_multihook: #include <stdio.h> #include <stdlib.h> #include <dlfcn.h> #include <dirent.h> #include <photon/PtHook.h> static int hookit( const char *hookname, PtHookData_t *data ) { void *handle; if ( ( handle = dlopen( hookname, 0 ) ) == NULL ) return -1; else { PtHookF_t *hook; if ( ( hook = (PtHookF_t*) dlsym( handle, "PtHook" ) ) == NULL || (*hook)( data ) == 0 ) dlclose( handle ); return 0; } } int PtHook( PtHookData_t *data ) { const char *hookname; DIR *dir; if ( ( hookname = getenv( "PHOTON_HOOK" ) ) != NULL && hookit( hookname, data ) != 0 && ( dir = opendir( hookname ) ) != NULL ) { struct dirent *de; while ( ( de = readdir( dir ) ) != NULL) if ( de->d_name[0] != '.' ) { char path[512]; if ( (unsigned) snprintf( path, sizeof(path), "%s/%s", hookname, de->d_name ) < sizeof(path) ) hookit( path, data ); } closedir( dir ); } return Pt_CONTINUE; } The PtHook function, declared in Photon/PtHook.h, looks like this: int PtHook( PtHookData_t *data ); PtHookData_t has at least these members: - int size - The size of the PtHookData_t structure. - int version - The version of the Photon library that loaded the DLL. The function can return Pt_END to ensure the DLL is not unloaded by PtInit(), or return Pt_CONTINUE to ensure DLL is unloaded. Setting widget styles using the Photon Hook Here is a simple example of changing widget styles using the Photon Hook. The following code changes the fill for all buttons to blue, based on the previous widget style example. To compile this code, use: cc -shared button_sample.c -o PtHook.so Place the PtHook.so in the search path to change the button style for all Photon applications. You can get the search path with getconf _CS_LIBPATH. #include <Pt.h> static void (*button_draw)(PtWidget_t *widget, PhTile_t const *damage ) = NULL; PtHook (void *data) { PtStyleMethods_t button_meth = { Pt_STYLE_DRAW, blue_draw }; PtWidgetClassStyle_t *button_style = PtFindClassStyle( PtButton, NULL ); button_draw = button_style->draw_f; PtSetClassStyleMethods( button_style, 1, &button_meth ); return( Pt_END ); }
http://www.qnx.com/developers/docs/6.4.0/photon/prog_guide/wgt_code.html
crawl-003
refinedweb
3,748
52.09
completely separate Model from View in MVC? Or How the Model Updates the View? Hello Experts, I'm Building a Swing Application but i'm having a problem trying to implement MVC, I searched a lot about this but all explanations or examples i found has to be missing something. Based on my understanding in an MVC environment the View and the Model have to be completely decoupled ... meaning the we shouldn't import any classes from the Model into the View or vice versa (Correct?) And i did this already but in some situations i find my self in a position that i must import a class from the model into the View. For example ... The app i'm building is a POS application. So in the View i have a table ... this table displays the items been sold. Now when user choose an item he calls a method in the Controller which in turn calls a method in the model to query the database for that item gets all the information and then add it to a List<Product>. The view then calls another method in the Controller to get that List and update the JTable with the new list (to show the items on the View JTable). But here is my problem the List ... to show the item's info on the JTable a List is not sufficient instead i need a List<Product> to be able to get each product column value. And there you go now i must import Product which is POJO i created in the model that holds the product attributes. I hope i explained it well ... i wish i could support this question with my codes but i'm pretty sure it'll confuse you more than it'll help ... I'm working on it. Note: I know how to reach the Model From the View (Using Controller methods) ... But what i really can't understand is how the Model updates the view. Thank you for your time Gado Answers - eudriscabrera-JavaNet Member Posts: 214 Bronze Badge Hi, I need more information about your case. But in the mean time, you need to create a custom TableModel. JTable only show the data, but all operations that you want to perform with the data must be through the TableModel. Here, a simple example. Explanation about the example: Sorry it is into Spanish Language Hi eudriscabrera ... Thank you for your response. I do have an AbstractTableModel for my table. The problem i'm having is how to get the Product value from the Model to the View (TableModel) without importing any of the Model classes. Ok let me rephrase the question ... How can i update the View's TableModel after changing the Model? I know how to reach the Model From the View (Using Controller methods) ... But what i really can't understand is how the Model updates the view. I'm sorry but i don't know Spanish. Thank you Gado - eudriscabrera-JavaNet Member Posts: 214 Bronze Badge I used to use a method to set the data to the TableModel, in this way, I can update the TableModel when the data change. For Example:… public class TableModelCar extends AbstractTableModelStandard { public TableModelCar() { super(new String[]{"Model", "Brand", "Year", "Color"}); } public Object getValueAt(int rowIndex, int columnIndex) { ...... } } } When you open the window for the first time, you must initialize the TableModel. private void initTableModel() { jTable1.setModel(new TableModelCar()); jTable1.setRowSelectionAllowed(true); if (jTable1.getRowCount() > 1) { jTable1.setRowSelectionInterval(0, 0); } } If your model change and you must update the TableModel TableModelCar tmCar = (TableModelCar) jTable1.getModel(); tmCar.setData(list); jTable1.setModel(tmCar);
https://community.oracle.com/tech/developers/discussion/comment/14371000/
CC-MAIN-2022-33
refinedweb
606
65.42
Supplementary Characters and UTF-16 Encoding In the past, all Unicode characters could be held by 16 bits, which is the size of a char (2 bytes), because those values ranged from 0 to FFFF(0 to 65,535). When the unification effort started in the 1980s, a fixed 2-byte width code was more than sufficient to encode all characters used in all languages in the world, with room to spare for future expansion-or so everyone thought at the time. In 1991, Unicode 1.0 was released, using slightly less than half of the available 65,536 code values. Java was designed from the ground up to use 16-bit Unicode characters, which was a major advance over other programming languages that used 8-bit(1 byte) characters. Unfortunately, over time, the inevitable happened. Unicode grew beyond 65,536 characters, primarily due to the addition of a very large set of ideographs used for Chinese, Japanese, and Korean. Now, the 16-bit char type is insufficient to describe all Unicode characters. We need a bit of terminology to explain how this problem is resolved in Java, beginning with Java SE 5.0. A code point is a code value that is associated with a character in an encoding scheme. In the Unicode standard, code points are written in hexadecimal and prefixed with U+, such as U+0041 for the code point of the letter A. Unicode has code points that are grouped into 17 code planes. The first code plane, called the basic multilingual plane(BMP), consists of the “classic” Unicode characters with code points U+0000(0) to U+FFFF(65,535). Sixteen additional planes, with code points U+10000(65,536) to U+10FFFFF(1,114,111), hold the supplementary characters. The UTF-16 encoding is a method of representing all Unicode code points in a variable-length code. The characters in the basic multilingual plane are representing as 16-bit values(2 bytes), called code units. The supplementary characters are encoded as consecutive pairs of code units. Each of the values in such an encoding pair falls into a range of 2048 unused values of the basic multilingual plane, called the surrogates area(U+D800(55296) to U+DBFF(56319) for the first code unit(high surrogate), U+DC00(56320) to U+DFFF(57343) for the second code unit(low surrogate)). This is rather clever, because you can immediately tell whether a code unit encodes a single character or whether it is the first or second part of a supplementary character. For example, the CJK(Chinese, Japanese and Korean) character for the set of integers 𨄗 has code point U+28117(164119) and is encoded by the two code units U+D860(55392) and U+DD17(56599). UTF-16 Encoding Algorithm To encode U+28117(164119) (𨄗) to UTF-16 : - Subtract 0x10000(65536) from the code point, leaving 0x18117(98583). - For the high surrogate, shift right by 10 (divide by 0x400(1024)), then add 0xD800(55296), resulting in 0x0060(96) + 0xD800(55296) = 0xD860(55392). - For the low surrogate, take the low 10 bits (remainder of dividing by 0x400(1024)), then add 0xDC00(56320), resulting in 0x0117(279) + 0xDC00(56320) = 0xDD17(56599). UTF-16 Decoding Algorithm To decode U+28117(164119) (𨄗) from UTF-16 : - Take the high surrogate 0xD860(55392) and subtract 0xD800(55296), then multiply by 0x400(1024), resulting in 0x0060(96) * 0x400(1024) = 0x18000(98304). - Take the low surrogate 0xDD17(56599) and subtract 0xDC00(56320), resulting in 0x0117(279). - Add these two results together 0x18117(98583), and finally add 0x10000(65536) to get the final decoded UTF-32 code point, 0x28117(164119). Supplementary Character Handling Methods The Character class encapsulates the char data type. For the J2SE release 5, many methods were added to the Character class to support supplementary characters. The static toChars(int codepoint) method converts the specified character (Unicode code point) to its UTF-16 representation stored in a char array. If the specified code point is a BMP (Basic Multilingual Plane or Plane 0) value, the resulting char array has the same value as codePoint. If the specified code point is a supplementary code point, the resulting char array has the corresponding surrogate pair. The static toCodePoint(int highs, int lows) method convert and returns the specified parameters to its supplementary code point value. Program public class Javaapp { public static void main(String[] args) { char getchar[] = Character.toChars(164119); System.out.println("164119 -> "+String.valueOf(getchar)); char setchar[] = new char[2]; setchar[0] = 55392; setchar[1] = 56599; System.out.println("164119 -> "+String.valueOf(setchar)); int getcodepoint = Character.toCodePoint(setchar[0],setchar[1]); System.out.println("𨄗 -----> "+getcodepoint); } }
https://hajsoftutorial.com/java-supplementary-characters-utf-16-encoding/
CC-MAIN-2019-22
refinedweb
778
52.09
14 June 2012 17:48 [Source: ICIS news] (updates with Canadian and Mexican chemical railcar traffic data) ?xml:namespace> Canadian chemical railcar loadings for the week totalled 11,557, compared with 11,068 in the same week in 2011, the Association of American Railroads (AAR) said.The previous week, ended 2 June, saw a year-on-year decline of 10.6% in chemical shipments, partly because of the freight rail strike at Canada's second largest rail carrier that ended on 1 June when government back-to-work legislation went into effect. From 1 January to 9 June, Canadian chemical railcar loadings were down by 7.7% year on year to 246,641. The AAR said weekly chemical railcar traffic in US chemical railcar traffic fell by 1.2% year on year for the week ended 9 June, marking its fifth decline in a row and the 15th decline so far this year. There were 29,308 chemical railcar loadings last week, compared with 29,653 in the corresponding week of 2011, the AAR said. In the previous week, ended 2 June, US weekly chemical railcar loadings fell 4.5% year on year.
http://www.icis.com/Articles/2012/06/14/9569692/canada-weekly-chemical-rail-traffic-recovers-after-strike.html
CC-MAIN-2014-41
refinedweb
193
63.7
Hello all, Long time reader, first time poster here. I've been a web programmer for some time now but really never got into any scripting languages, I guess you could say I was just a web designer. Either way I want to write a little web server in Python to just serve up some pages to get to know more about the language. I've decided I want to write for GET, HEAD, OPTIONS(cause it would be good to know how to put that kind of feature in) and TRACE...no POST yet because it appears to be more difficult. I started with a tutorial that taught me: import socket host = '' port = XXXX c = socket.socket(socket.AF_INET, socket.SOCK_STREAM) c.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) c.bind((host, port)) c.listen(1) while 1: csock, caddr = c.accept() cfile = csock.makefile('rw', 0) line = cfile.readline().strip() cfile.write('HTTP/1.0 200 OK\n\n') cfile.write('<html><head><title>Welcome %s!</title></head><body>The Server Is Working!!!</body></html>') cfile.close() csock.close() Then the only problem was that's not serving an actual HTML file just a variable written to the screen. So then I got to this: from BaseHTTPServer import HTTPServer, BaseHTTPRequestHandler class RequestHandler(BaseHTTPRequestHandler): def _writeheaders(self): self.send_response(200) self.send_header('Content-type', 'text/html') self.end_headers() def do_HEAD(self): self._writeheaders() def do_GET(self): self._writeheaders() self.wfile.write(""" <HTML> <HEAD> <TITLE>Test Page</TITLE> </HEAD> <BODY>Test!!! </BODY> </HTML> """) serveraddr = ('', XXXX) srvr = HTTPServer(serveraddr, RequestHandler) print("Server online and active") srvr.serve_forever() But again, I can't seem to understand how to serve out the *.html files themselves. Reader's Digest version: How do I use any of my code, or some other code you can provide, to add in the ability to serve the index.html file that's in my folder with the server. Thanks! -Ray-
https://www.daniweb.com/programming/software-development/threads/408978/python-web-server
CC-MAIN-2020-24
refinedweb
321
60.72
- 13 Musk-Ox or Muskoxen of the Arctic Muskoxen have it rough! Life is hard for the males, for the females and for the calves. Musk-ox live in the Arctic and have been introduced to a few other locations. Photos used with permission. - 15 My Mighty Miss Duck I chased the dog away and cried my way across the lawn to the fragile, feathery body. She was so still. But wait! Had I possibly seen a slight movement? Could the duck be playing dead? - 10 Delicious Fruit and Veggie Smoothies -- Too Much Per Day can cause Fatty Liver Disease For many of us, nutritious smoothies are an essential part of our daily diet. Are we limiting the amount we have and are we aware of the damage too many carbs wreak upon our liver and blood system? - 4 H.O. Model Railroaders, Imagination and Model Railroads. I wonder why many men and women enjoy creating model railways. Is it because it's a way to use one's imagination just like in childhood? We writers do it, too, so I am not surprised if it is so. - 8 Canada Geese of Calgary, Alberta -- I will Always Remember You. There was a sharp decline in the number of Canada Geese from 1900 - 1966. Researchers set up programs which increased the number of Canada Geese. Problems now abound. Many geese no longer migrate. - 6 The Ugly Hornworm Larvae and the Amazing Moths They Become Green, huge caterpillars in Arizona gardens have many enemies not the least of whom are the gardeners. If these hornworm caterpillars survive and pupate, they become moths with hummingbird skills. - 10 Model Railroad Displays by Three Clubs in Scottsdale, Arizona McCormick-Stillman Railroad Park in Scottsdale, AZ is a great place to visit. If you are a model railroader, you will especially enjoy the Model Railroad Building. It's air-conditioned; no worries. - 15 For the Pleasure of His Company Adopting a dog or cat is a lifelong commitment, the animal rescue groups tell us. The first couple weeks can be very trying and you may want to give up. Keep going. Love is the key. - 12 Endangered Red-Capped Parrot of Australia As a native species of Australia, Red Caps are protected under the Wildlife Conservation Act, 1950, but under the DEC Act, they can be shot during open season. Their crime? Trying to obtain food. - 8 Been in Love? New Pop Music Video -- Always on my Mind Turn up the volume and enjoy this happy and fun music video by young musical artist, Clay Williams, who usually writes Rock and Alternative Rock. This song is Pop Rock. I'm glad he ventured over. - 10 Hope Amid Tragedy: My Brother and Christian and Stephanie Nielson On August 16, 2008, their plane went down. I am writing this to celebrate the life of my departed brother, Doug, and to introduce you to the inspiring book Stephanie Nielson wrote: Heaven is Here. - 13 Adieu, Saskatchewan -- Gorgeous Photos Used with Permission Impressions of my 2012 visit to Saskatchewan to live near my parents and again for five months in 2013. Beautiful in summer, unforgiving in winter. Hardy people thrive there, less hardy -- leave. - 12 Magnificent Magpies Magpies in their colorful navy blue and white splendor are entertaining and resourceful. Also they are fierce protectors of their babies and will raid other birds' nests to provide for their young. - 14 Re-reading an Old Journal Entry and a Beloved Poem. Who wrote this beautiful poem which nudges all parents to realize the priorities of their days? Alice E. Chase did. She was also the author of 'Who Will Take Grandma?' read by Walter Brennan. - 27 Embroideries of Trish Burr -- True Inspirations Trish Burr is the epitome of the well-known phrase, 'Practice Makes Perfect'. Her delicately embroidered birds, flowers and portraits are in a class all their own. Have a look at some of these gorgeous... - 6 One Lonely, Cold Horse Consider the many lonely horses in the summer breezy acreages, each one standing alone, their owner coming to see them infrequently. Consider one of those lonely horses in a freezing wind, in a snowstorm in the night... - 25 Some of My Favorite Things I read Fastfreta's hub today -- which she wrote many months ago -- naming her favorite things. It was so fun. I've now written a list of my favorite things. You might enjoy making one, too. - 14 Chaplin Lake, Saskatchewan -- A Safe Haven For Migratory Birds When you hear flocks of geese flying overhead and you look skyward to see them in all of their magnificence -- do you feel utter awe, so much so that tears well up in your eyes? - 4 How to Build an H.O. Model Train Layout -- All Aboard for Part 2 There is the old way of building an H.O. model railroad and now -- there is the new, easy way. The realistic landscaping, however, cannot be purchased in a box. Learning H.O. landscaping is fun. - 15 How to Take Care of Gouldian Finches Gouldians are smart little creatures and inquisitive, but they are not the kind of bird that will sit on your shoulder and put their head down for you to scratch them on the neck -- like a parrot might do. However, if... - 21 Great-Grandfather Kinneard, One of the Earliest Pioneers of Saskatchewan Our Kinneards or Kinnairds of County Antrim, Ireland were descendants of the Kinnairds of Scotland who descended from Radulphus de Kinnaird, a Norman. Our James S. Kinneard sailed to Canada in 1881. - 22 Gwana, my Big Green Iguana Gwana was big -- at least four feet long from the tip of his nose to the end of his tail -- the last time I lifted him from the closet shelf to place him in his bed. He never once hurt me with that tail of his. - 30 Little Screech Owl and our Three-Legged Boxer Owls are actually my least favorite bird because they eat little mammals, but this little screech owl we encountered many nights in Southern Arizona was such a strange little character, he earned my attention and... - 25 How to Apply False Eyelashes Like a Pro Applying false eyelashes is easy when you know the pitfalls to avoid and the safeguards to take. Learn the easy application process from a former College of Esthetics and Fashion Merchandising owner. Success comes... - 37 A Haiku Farewell to my Feral Cat Friend There are hundreds of caretakers of feral cats on the isle of Maui. There are hundreds of thousands of feral cats there. This is a tribute to just one of the feral cats I have loved. Named him Red. - 18 Mommy, Please Listen. Writing and being a good mother is a balancing act. I shed a tear but enjoyed writing this. Spoiler: In case you miss it, there is a miscommunication between a mom and her young daughter in this tale. - 16 Etiquette and Civility Will Always Matter in Your Life and Mine When we treat others with respect, when we are comfortable, making others comfortable and when we lead by example, our world and the world around our little sphere of influence is a better place. - 2 Insulin Potentiation Therapy (IPT) Minimizes the Destructiveness of Chemotherapy Dr. Garcia pioneered this method in 1930. It is being implemented around the world. Already 110 doctors are trained in this alternative method. - 26 When and How to Train a Parrot to Talk How do you train a parrot to talk? The method to teach or at least encourage a parrot to mimic what you say is easy and repetitious. Parrots are thinkers. But more than this.... - 44 Me and This Old House Who started the rumor that there is a generation gap? There is no such thing. Just like an old house, the appearance belies the fun, experiences and wisdom the house knew. Love your age and wisdom. - 40 Colored Pencil Art Painting Is a New and Respected Art Medium Colored pencils have come a long way. High-quality, pure color products now enable dedicated colored pencil artists to create fine art. It's a whole new medium. - 24... - 20 Nine-Month-Old Pomeranian Puppy When this sweet little Pomeranian glided into my life like a summer's dream, I knew she was not mine to keep. But she was mine to love. - 28 Travels with Kitty: Little Cat Sat Beside Me in the Airplane Have you ever traveled by air with your cat in cabin with you? If not, here's my tale. I hope it helps you. - 15 Oodles and Oodles of Poodles I missed a photograph of a lifetime. There were a thousand romping poodles in the sky over Honolulu in October, 2010. They were in the clouds and I was in a jet coming in for a landing. - 19 Horse Races in Heaven Majestic, intelligent, magnificent and competitive race horses -- inspire awe even in those of us who have never been to a race track. They put up with us mere mortals so that they can do what they are born to do: RUN. - 24 A Little Cake with my Icing Now that we are empty nesters, we like to eat our dessert first and the main meal second. When I bake a cake, I always mix plenty of icing. Who really needs the cake, anyway? Isn't it the icing we're after? You have... - 14 All About the Amazing Little Seahorse -- with Pillows to Match I enjoyed creating these seahorses. Seahorses are one of the most unique characters on earth. The male seahorse carries the offspring to full term and delivers it. Very chivalrous of him! - 25 Biggest Repository of Genealogical Records on Earth -- and free to access. familysearch.org is a unique genealogical research tool. It is the largest repository of genealogical data in the world and it is entirely free to use. - 22 Orange Juice, a Heavenly Beverage Do you fresh-squeeze your oranges? Do you only choose organically grown oranges or are you a risk-taker and choose any old kind that is on sale? I do a bit of both. - 43 I Am Hansen, a Three-Legged Purebred Boxer and Survivor No one knows how Hansen survived alone in the desert heat. He is gentle. Perhaps he fed on an occasional dead carcass of starved cattle. Priscilla found him, sick and scared with valley fever. - 14 Bird Images of Hawaii Cattle egrets are everywhere on the Hawaiian Islands. Other birds to see on the islands are Japanese White-Eyes, Mynahs, Herons, Peacocks and the beautiful common Junglefowl. - 10 How Do the French Say Bougainvillea? A lighthearted critique of how English-speakers on the Hawaiian Islands pronounce the lovely Bougainvillea flower's name. - 10 Dioramas of H.O. Train Layouts Are you a H.O. running layout kind of person or a diorama lover? Do you like to make your model train layouts look absolutely realistic or do you add a touch of whimsy? Whatever your railroading pleasure, this hobby... - 20 Feral Cats: The Sun Rose Again on Maui Prayer, teamwork and effort helped get more cats trapped, neutered (or spayed) and returned to their dusty habitation. I love 'em all. The feral cat problem on Maui is barely diminished for Leona's and my efforts but... - 30 Book Review, "The World According to Monsanto: Pollution, Corruption, and Control of our Food Supply." Genetically modified food is in our daily life whether we know about it or not. It is better to know what's going on than not to know. I highly recommend the 2010 thoroughly researched book, "The World According... - 15 Tropical Fish at the Maui Ocean Center at Ma'alaea Tropical marine life at the Maui Ocean Center is colorful, exciting and watching you as you watch them. It's a great place to spend a morning or an afternoon. - 30 A Book Review: Excitotoxins, the Taste that Kills. Did you know a package of processed food can have very misleading words such as "natural flavor" or "seasoning" -- so you won't suspect you're about to eat aspartame or MSG? - 11 Feral Cats of Maui -- Look 'Em In The Eye The trap, neuter, return (TNR) program is alive and well on Maui but feral cats of Maui are innumerable and in need of care. I try to remind myself -- You haven't failed unless you give up. - 0 Family History Research and Two 1917 Poems By getting to know our kindred and ancestors, we get to know ourselves a little better, too. I have included two humorous poems from the early 1900's in this hub. Great Aunt Edith had a sense of humor, so she left... - 23 Maui Humane Society and Abandoned Dogs The Maui Humane Society does its best with the resources it has. There are simply too many people on Maui not neutering or spaying their animals. Can you or your family foster some animals from the Maui Humane Society? - 51 Cone Shells of Hawaii Can Be Deadly Seashells of Hawaii can be beautiful. Most are harmless, but the cone shell which comes in many varieties is not harmless. Cone shells can be very dangerous to your health -- even fatal. Get to know some of the shapes... - 25 Family History: Interview your Grandparents, One at a Time The life story of my mother, the life story of my father and the life stories of my two sets of grandparents are all precious possessions of mine. Their life experiences resonate in the day-to-day experiences I have. ... - 18 Is it Fair to Love your First Grandchild Most? Is it fair to love your first grandchild most? I think not. It's not fair to the other grandchildren that Grandchild Number One got a headstart on your heart. - 78 Big Centipedes in Hawaii -- How to Survive Them Centipedes almost an inch wide and eleven or twelve inches long? Commonplace in Hawaii. It's very diffiult to kill them because they have a strong skeletal frame and an even stronger will. - 18 Helen M. Stevens, Hand-Embroidery Artist Helen M. Stevens is a well-known United Kingdom embroidery artist. Her books are beautifully illustrated and plainly instructive. Links are provided in this article so you can visit her site. Helen M. Stevens also... - 15 Peacocks, the Royal Birds of Maui Peacocks on Maui are a common sight. Yee's Mango Orchard in South Maui is the home of dozens of these regal birds. There was a Peacock Crossing sign on the road to protect them. Someone stole it. - 28 Crazy Chicken Lady of Maui Feral chickens and their turquoise counterparts, the male junglefowl, of the Hawaiian Islands live and die daily in our little corner of the world on Maui, but one little lady does her best to keep her feathery friends... - 7 Cattle Egrets, Hawaiian Style Cattle Egrets are a common sight on the islands of Hawaii. They have a sense of humor, I think, and they say a pleasant 'Howzit?' from the floral medians while we humans are stuck in traffic. - 27 What To Do For the Homeless in Hawaii Homelessness is on the rise across the nation. The State of Hawaii is no exception. Stereotyping the homeless has always been an ignorant and insensitive thing to do. Solutions to the problem exist but take a concerted...
http://hubpages.com/@pamelakinnairdw
CC-MAIN-2017-04
refinedweb
2,567
74.08
Tres Seaver wrote: >>>>I really appreciate your effort in all other cases, but in this case I >>>>think its not a simplification. >>> >>> >>. > > > That "rule" is somewhat debatable (the debate seems ongoing). Advertising Well, it's a rule (my rule :)), the question is whether it's one that we officially want to use ;). Yep, the debate is ongoing. Unfortunately, I don't have time for the debate. I need to solve problems. Like, for example, explaining someone in a book that <utility /> registers a utility, except when you're doing factories, DAV interfaces, vocabularies, etc. etc. Of course, they're all looked up the same way, namely with getUtility. Well, that's just great. > I think we could argue the following equally well: if you find a > directive unuseful, *just don't use it*. Register *new* directives > (perhaps in a new namespace, if you want to reuse the names) which do > your "simpler / cleaner" thing. Ok. Let's call that the Tres rule :). I'd just say: Use Python constructs until your unuseful directives become useful again. I find <utility /> extremely useful. > Deprecation is not always a reasonable model, given disagreement about > the value of the simplification. Deprecation wasn't objected for the other directives (the top-level ones), so at least for the majority of the directives in question, deprecation and eventual removal does seem to be a reasonable model. Philipp _______________________________________________ Zope3-dev mailing list Zope3-dev@zope.org Unsub:
https://www.mail-archive.com/zope3-dev@zope.org/msg04919.html
CC-MAIN-2017-51
refinedweb
241
57.98
In the last post, we had set up our ReactJS development environment, installed the ReactJS dependencies and had a starter app ready. Let's move forward. Components React is made up of components. Remember the app.js file? That is the root component. Components must return JSX. JSX gives us the ability to write HTML in React. Or, just think of HTML on steroids. The last line of the file mentions an export. export default App; Exporting the App component to where? Entry point Note that there is an index.js file inside the src directory. import App from './App'; This line is importing the App component. What else is going on here? ReactDOM.render( <React.StrictMode> <App /> </React.StrictMode>, document.getElementById('root') ); Our React app is rendering the App component. And our React app is being bound to the element with id root on the webpage. What webpage is that? In the public directory, there is a file named index.html. And there we have the element with id root. <div id="root"></div> In conclusion, the App component is the root component, all other components will be rendered inside the App component. Headers, footers, navbars, content, listings, all of these will be components. First Component Let's get down to creating our very own component. We'll start with a header. Create a new directory inside the src directory. Name it Components. Create a Header.js file inside this directory. The first line needs to import the React library. import React from 'react' Define the component function. Here, let's just return "GalleryApp" enclosed in <header> tags. function Header(){ return ( <header> GalleryApp </header> ) } Finally export the component. export default Header It should look like this. Now, to render this component. In app.js, import the component. import Header from './Components/Header' And edit the App component to render the Header component. function App() { return ( <div> <Header /> </div> ); } Your browser should have hot-reloaded, showing you the header. It really is that simple. But Riju, you say. Does it also have to be that ugly? Of course not, click here to get to the next post where we look at styling things up a bit.
http://python.instructive.club/?reactjs_2
CC-MAIN-2022-05
refinedweb
366
71.71
Investors eyeing a purchase of K12 Inc (Symbol: LRN) stock, but tentative about paying the going market price of $22.71/share, might benefit from considering selling puts among the alternative strategies at their disposal. One interesting put contract in particular, is the December put at the $20 strike, which has a bid at the time of this writing of $1.45. Collecting that bid as the premium represents a 7.2% return against the $20 commitment, or a 12.1% annualized rate of return (at Stock Options Channel we call this the YieldBoost ). Selling a put does not give an investor access to LRN K12 Inc sees its shares fall 13% and the contract is exercised (resulting in a cost basis of $18.55 per share before broker commissions, subtracting the $1.45 from $20), the only upside to the put seller is from collecting that premium for the 12.1% annualized rate of return. Below is a chart showing the trailing twelve month trading history for K12 Inc, and highlighting in green where the $20.
https://www.nasdaq.com/articles/commit-purchase-k12-20-earn-121-annualized-using-options-2014-05-15
CC-MAIN-2022-27
refinedweb
177
64.3
BAM! This is an exciting day; today we are releasing WCF RIA Services V1.0 SP1 Beta and WCF RIA Services Toolkit October 2010. These download links are also available from. Side-by-side installation with WCF RIA Services V1.0 is not supported. If you already have RIA Services installed on your machine, you should: You’ll notice that our SP1 Beta installer for the product is much faster now. The Toolkit installer is still slow, but we’ll be fixing that soon too. This release has a bunch of bug fixes; it is a Service Pack after all. But showing a list of obscure bugs is not very exciting, so instead, let’s look at the more notable updates we’ve made. Please keep in mind that this is a Beta release, however it does have a Go-Live license. This feature is also known as “Support for Complex Types” on the WCF RIA Services wish list. This was the 3rd-most requested feature, and we now have full support for complex types. If you have an Entity that has complex type properties, we will generate corresponding types on the client and serialize them seamlessly. The generated types will derive from a new ComplexObject class that we’ve introduced in our framework. ComplexObject values are fully change-tracked and validated as part of your entities. Another top request on the wish list was to “Allow multiple DomainServices to share the same Entity.” We’ve delivered on this one too. In fact, we marked this item as “completed” awhile back, so it hasn’t been able to accrue any votes recently. I remember seeing this as the 1st and 2nd most requested item up until the time we completed it. Now you can logically segment your DomainService classes more freely. With our initial V1.0 release, many of you found that a DataForm bound to an EntitySet or an EntityCollection did not support the Add or Remove buttons. This was a difficult cut to make in V1.0, so I’m pleased to announce that with V1.0 SP1, this is now supported. Silverlight 4 introduced the ICollectionViewFactory interface, with support integrated into DataGrid and DataForm, and both EntitySet and EntityCollection now implement that interface to allow the Add/Remove features to light up. Be sure to keep an eye on Kyle McClellan’s blog for more information on this feature. You’ll hear us refer to it as ICollectionViewFactory or ICVF. Our initial V1.0 release was an English-only product. Regardless of your Visual Studio language, our entire product was always English. With our V1.0 SP1 release, we are now localized into 9 additional languages, matching the same list of languages supported by Visual Studio 2010. Our product (not the Toolkit) is localized for the following languages: (based on your Visual Studio language) The benefits you’ll see from this are: I will be starting a blog post series on localization in the near future, so stay tuned for more information on this topic. Deepesh Mohnani has a little more about this on his blog. With our V1.0 release, our code generation was based on CodeDom and relatively locked down. While you could provide a custom CodeProcessor through the [DomainIdentifier] attribute, you were forced to take the CodeDom model that we generated and manipulate it. We know that many of you wanted to hook into the code generation to change what gets generated, and we want to make this easier. To address this, we have a two-part solution. First, within the product, we have abstracted our code generation away from CodeDom and implemented a MEF-based mechanism for you to bring your own code generator. Our CodeDom-based code generator now implements a simple IDomainServiceClientCodeGenerator interface and returns the generated code as a string. Other code generators can be registered using the [DomainServiceClientCodeGenerator] attribute. You now have a means for creating a new code generator from scratch and plugging it into our build-time code generation. But there’s a catch… you cannot derive from our code generator or otherwise use it in any way when you do that. That’s where step two comes in. Second, within the Toolkit, we have introduced a Text Template (T4) based code generator called Microsoft.ServiceModel.DomainServices.Tools.TextTemplate.CSharpGenerators.CSharpClientCodeGenerator. This class resides in a new Microsoft.ServiceModel.DomainServices.Tools.TextTemplate assembly. The intent is for you to derive from the CSharpClientCodeGenerator class and override its methods within your own T4 template classes. You’ll find that we have scores of virtual methods at your disposal so that you can easily tweak tiny portions of the generated code without having to take full control of the template. Quality Disclaimer: The Text Template code generator is in an experimental state. In fact, the feature was just checked in about 3 hours before we uploaded the MSI. There has been very little testing done on this feature, but from what we’ve seen most mainline scenarios work. We will be soliciting feedback on these early bits, so please let us know if you encounter issues. Also in the Toolkit, we have added a new Microsoft.ServiceModel.DomainServices.WindowsAzure assembly. This assembly contains a TableDomainService class that allows you to very easily expose your Windows Azure Table Storage entities using a DomainService. This allows your client application to be completely ignorant of your Azure use on the server, as we provide the same level of client-side LINQ support over this domain service as we do over Entity Framework, Linq to SQL, or POCO domain services. Kyle McClellan developed this feature and he’ll be blogging more about it. Quality Disclaimer: The Windows Azure Table Storage domain service was also a late addition to our Toolkit, so it’s also in preview state with limited testing having been done at this point. Several members of the WCF RIA Services team have blogs and/or are on Twitter. Keep your eyes open for more information about this release, as there is just a ton of stuff packed in. Here are some links for you. Wednesday, October 27, 2010 8:26 PM Congratulations on your release. Good stuff here. Very nice additions folks, keep up the great work! The obvious question -- what is the state of a ViewModel friendly DataDomainSource?(looks like really great stuff, from my perspective, especially the T4 extensibility). @RobertWe are working on the ViewModel-friendly DomainDataSource. Be sure to follow Kyle McClellan's blog as he's been talking about it.-Jeff Regarding the new feature "Allow multiple DomainServices to share the same Entity" Is it possible to share the same entities between DomainServices and Entity Framework ObjectStateManager? Nice! The Complex Object support is huge. Can we pass complex objects as parameters to query methods now as well? The addtional complex type support is handy. Uninstalling RIA services/toolkit is a pain. The control panel uninstall never worked for me.After I manually deleted all things RIA from the registry (ala Peter B's article plus tool kit stuff) I could run the WCF RIA group's installers. The control panel uninstall still didn't work. Brian,Can you share what happens when you try to uninstall through the control panel? Any idea why it's failing?We have made the SP1 installer upgrade-friendly, so here forward, you'll be able to upgrade instead of uninstall/reinstall.The Toolkit MSI is not yet upgrade-friendly though.-Jeff why don't you support complex query parameter? Just like this:public IQueryable<Customer>(CustomerQuery query){} @Jason,I don't think you can share entity types between EF models in any way. I'm not much of an EF guru though.-Jeff @Hax,Great question! I've logged this as a request for us to review. I'll let you know what comes of it.Thanks a bunch for the early feedback!-Jeff First of all: I like the new features. It's good to see you're closely working with the community to prioritize new features. Regarding bug fixes, unfortunately this one does not seem to be fixed:'s specific to RIA Services and apparently a problem of the client libraries (works with a normal WCF servivce). Usually it leads to a hang + SL plug-in crash in Firefox. I found that as of version 7, it also results in misleading error messages in Chrome (no crash there though). Still works in IE, including IE9 beta, and without RIA flavor. The corresponding connect entry is here:connect.microsoft.com/... I get an 'does not conform to the required signature ...' compile error if my complex object has an [Key] Attribute. When I remove this attribute I can use the object as parameter for an Invoke operation. I hope you will fix this bug in the final release. Peter,This is by design. Complex objects cannot have Key properties. If you have a Key the class gets treated as an Entity.Thanks for trying out the new release!Jeff Mister Goodcat,Thanks for bringing that issue to my attention. I've followed up with our test team to ensure we were investigating it.-Jeff @Hax,We cannot support complex objects as parameters to query methods because we'd then need to be able to serialize them into the URI parameters when invoking the query, which isn't really feasible.Hope this helps,Jeff Hi Jeff,RIA Services rocks! Thanks so much for contributing to such a great toolkit.I have 1 question re: the SP1 beta: try as I might, I can't get the code generator to generate the appropriate metadata classes when I have a complex type in my EDM. It simply ignores the presence of the complex type completely and acts like it's not present as a property on the entity.Am I missing something? Do I somehow have the previous (non SP1 Beta) version still running?Thanks,Dave @DaveS,Are you talking about when you use the Domain Service item template where you select your entities and select whether or not to generate metadata classes? I don't think we've added any handling of complex types in that wizard (which I will confirm and log as a feature request).Thanks for checking out the SP1 Beta bits so soon!-Jeff Pardon my ignorance but these bits (SP1 + Toolkit) only need to be installed on a developer workstation, correct? I'm half tempted to leverage this in a production initiative but I'm trying to make sure I wouldn't be leaving myself too exposed with that kind of decision. I tried out the new complex types behavior in VB.Net by hand crafting POCO classes to be returned from a domain service invoke operation. If those complex types are in another namespace (same assembly), the generated code in my Silverlight application correctly generates code for those classes but the generated invoke operations have broken namespace references to them. I had to add the correct namespace reference as a project level import as a workaround. This appears to be because the imported namespace is incorrect. @Matt,Thanks for the details here. This sounds like a bug; we'll investigate right away.Thanks again,Jeff @Matt,Regarding your deployment question...The RIA Services assemblies that are referenced in the Web project need to be available on the server too. If your server does not have V1 RTM installed on it and you are bin-deploying, you can just continue to bin-deploy.If you previously used the SERVER=true option to install the RiaServices.msi on your server, then you would need to upgrade the server installation to V1 SP1 Beta as well.I hope this helps,Jeff Handley To Jeff: I think complex query parameter is very important for us.You can post query parameter to RIA servicejust like create or update method.Expect your help, thnaks very much! Jeff, I see what you're saying. If you installed WCF RIA Services 1.0 on the server then you can't override that with SP1 bin deploying because it won't take the updated server assemblies (already loaded). You can either then run the updated MSI or you can uninstall and go to bin deploying. You can't mix both approaches. Hi again Jeff,Thanks for the info regarding generation of metadata classes for complex types. That query really arose from trying to make sure I wasn't "missing" something for another issue I'm having:I'm using the RIA-generated DomainService classes in an ASP.NET app, calling them via the DomainDataSource control. However, if the entities that are returned from the GetXXX method in the DomainService include a property that's a ComplexType, what I'm finding is that the DomainDataSource doesn't re-populate the values in the ComplexType property following an Edit/Update operation.Ie, when the DomainDataSource initially retrieves the values to bind to my GridView, the child property values are present in the ComplexType property. Again, when the GridView is re-bound on an Edit operation, the values are still there. But then when the GridView Update operation occurs, the child property values for the ComplexType property aren't being re-populated - eg in the DomainDataSource's Updating event, if I examine the e.ChangeSetEntry.Entity or e.ChangeSetEntry.OriginalEntity, the ComplexType property is initialised to a new instance of its type, but its child properties are all just their defaults, not their "actual" values.I've tried setting [RoundTripOriginal()]on the child properties of the ComplexType as I noticed I needed to do this for other (non-bound) properties, but that doesn't appear to help.I've also tried setting [RoundTripOriginal()] on the parent ComplexType property, but in that case I get an exception "This complex object is already attached to another object" - which seems to possibly imply that something (the DomainDataSource?) is attempting to attach the same instance of the ComplexType to the original and current entities.Any light you can shed on this would be most appreciated - is this just something I shouldn't be attempting?Thanks so much for your help,Dave Hi Jeff,Regarding my last message, I also tried this exact scenario by downloading the "ComplexTypesSample" (code.msdn.microsoft.com/.../ProjectReleases.aspx). It seems to work in a Silverlight app OK, but using exactly the same EDM in an ASP.NET app using the ASP.NET DomainDataSource results in the issues above.So maybe I'm just expecting too much on the ASP.NET support side of things?Thanks for your help,Dave @DaveS - Yeah, we didn't update the ASP.NET DomainDataSource control to support Complex Types.I will file a work item saying it is affecting you and we'll consider it for a future release.Thanks,Jeff We are working on a very large Silverlight application and we really need complex types to be able to be sent to query methods. What we do currently is mark the method [Query(HasSideEffects = true)]and then on in Silverlight side we serialize the complex object and send it up to the RIA query method as a string then deserialize it in the RIA service. This is a big pain in the butt but it works however why can't you automate that?This gets around the query string issue you mentioned above. @Troy - I understand. Unfortunately, not all complex types can be easily (de)serialized to/from a string. I am glad it's working in your scenario, but before we could add this support, it would need to work in all complex type scenarios.Making it only work in Query POST scenarios (with HasSideEffects = true) does make sense though, as the URI length limitations aren't an issue there. Although my proxy classes are created I get the errors below. Also on some machines I get the messages that list the service classes being created on others I don't what gives ? Could not load file or assembly 'System.Windows, Version=2.0.5.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e' or one of its dependencies. The system cannot find the file specified.C:\Program Files (x86)\MSBuild\Microsoft\Silverlight\v4.0\Microsoft.Ria.Client.targets(304,5): warning : The following exception occurred creating the MEF composition container:C:\Program Files (x86)\MSBuild\Microsoft\Silverlight\v4.0\Microsoft.Ria.Client.targets(304,5): warning : Unable to load one or more of the requested types. Retrieve the LoaderExceptions property for more information.C:\Program Files (x86)\MSBuild\Microsoft\Silverlight\v4.0\Microsoft.Ria.Client.targets(304,5): warning : The default code generator will be used. Generating proxy classes for the DomainService 'Dating.Server.Data.Services.AnewluvAuthenticationDomainService'. Generating proxy classes for the DomainService 'Dating.Server.Data.Services.DatingService'. Generating proxy classes for the DomainService 'Dating.Server.Data.Services.EmailService'. Generating proxy classes for the DomainService 'Dating.Server.Data.Services.PostalDataService'. @olaThat sounds like Silverlight 4 isn't installed on the machine properly or something. Check in your Silverlight project that the System.Windows reference is okay. You might even want to remove and re-add the reference.-Jeff yep you were right Jeff, I removed and re-added the references to system.windows for both my RIA project and silverlight project and I am not getting those MEF errors anymore.thanks-ola lawalFreelance SLV/Sharepoint/Net DeveloperOla_lawal@yahoo.com Is it still in beta ? When will SP1 version be released ? Yes, it's still in Beta. We haven't finalized our dates for the RTM release of our SP1. We are striving for a very high quality bar and when we've hit that bar, we'll determine the RTM release dates. At the download site you should specify that WCF RIA Services must be uninstalled first. I was really scratching my head until I found your post. Dear Jeff,Please assist in uninstalling previous versions of RIA Services. The problem i was having was:- when creating a new silverlight navigation project i could not see that check box that says RIA Services- I tried to uninstall the 1.RIA services toolkit, 2. RIA Services from control panel but every time i uninstall it disappears from the list of programs in control panel but when i open control panel again they appear back. Its really frustrating. Same problem here. I uninstall the :"WCF ria services toolkit" from: Control Panel>"Unistall a Program" only to find it back when I return or restart to the Windows 7: "Uninstall or change a Program" Window. Obviously I need to get rid of some registry keys but witch one? I'm also having the same issue - I can't uninstall the old RIA Services beta from the Add/Remove Programs list (this is in XP). Therefore I can't install the latest beta.Please advise... Jeff,using SL4 Complex Types to fetch data from db via stored proc is great new method, However, is there a way to update/insert via Complex Types? Please let me know if it's possible.BTW, Great work, I think the RIA Team rocks.Please email me if you can.Tommy Where is DomainServiceClientCodeGener
http://jeffhandley.com/archive/2010/10/27/RiaServicesV1SP1Beta.aspx
CC-MAIN-2016-07
refinedweb
3,203
57.47
Here is another post of vfs based union mount implementation. Union mount provides the filesytem namespace unification feature. Unlike the traditional mounts which hide the contents of the mount point, the union mount presents the merged view of the mount point and the mounted filesytem. These patches were originally by Jan Blunck. The current patches are for 2.6.21-mm1. The main change from the previous post is a different implementation of union mount readdir which is heavily inspired by the unionfs' implementation. The earlier version had two serious drawbacks: It worked only for filesystems which had flat file directories and it used to build and destroy readdir cache for every readdir() call. This version has addressed both of these shortcomings. The code is still in an experimental stage and the intention of posting this now is to get some initial feedback about the design and the future directions about how this should be taken forward. You can find more details about union mount in the documentation included in the patchset. Kindly review and let us know your comments. Regards, Bharata. - To unsubscribe from this list: send the line "unsubscribe linux-fsdevel" in the body of a message to majordomo <at> vger.kernel.org More majordomo info at
http://article.gmane.org/gmane.linux.file-systems/14912
crawl-002
refinedweb
209
64.1
From: Jack, Paul <pjack@sfaf.org> > > I'm not sure a two transformer approach is all thats needed. Collections > > have three basic types of method: > > - input (add/set) > > - query (indexOf, contains, equals, remove) > > - output (get, toArray) > > The Predicate/Transform proposal covers only the input group at the > moment. > > > > A 'two transformer' approach would cover the output group of methods (but > > would require another 7 classes to do it). This is perfectly possible, but > > naming would be interesting :-) > > Well, we could eliminate the need for 1-transformer implementations by > providing an "identity" transform, that doesn't actually alter the object. A cunning plan... > And it seems that the "query" group of functions would just use the > "input" transform...here's what the code looks like in my head: > > public boolean contains(Object value) { > value = inputTransformer.transform(value); > return wrapped.contains(value); > } > > public Object get(int index) { > Object result = wrapped.get(index); > return outputTransformer.transform(result); > } > > public Object set(int index, Object value) { > value = inputTransformer.transform(value); > Object result = wrapped.set(index, value); > return outputTransformer.transform(result); > } The input and output I'm happy with. Its the queries I wasn't so sure about. I guess that they are a kind of input... And in the String interning example that was suggested the queries would need to be transformed. Hmm, I'm talking myself into this ;-) > > I have looked at the ProxyMap. It is suitable for use by the Predicate and > > Transform classes as it provides protected access to the map, but no > public > > method to access it. Thus ProxyList and ProxySet would also be useful. > > However, that would still only cover 3 of the 7 collections! > > It's true, we'd be adding six public classes to the API... > > > At the moment I'm pausing on the implementation of the predicate/transform > > classes until things clear. > > Well the Predicate implementations, at any rate, appear uncontroversial, and > would be extremely useful. Great, I'll try to manage these sometime in this week (without the Proxy* classes). Stephen -- To unsubscribe, e-mail: <mailto:commons-dev-unsubscribe@jakarta.apache.org> For additional commands, e-mail: <mailto:commons-dev-help@jakarta.apache.org>
http://mail-archives.apache.org/mod_mbox/commons-dev/200204.mbox/%3C019501c1efcb$7579ca40$284d18d4@oemcomputer%3E
CC-MAIN-2018-22
refinedweb
358
51.75
ZenDMD Tip - Refresh DeviceSearch Catalog From Zenoss Wiki This is the approved revision of this page, as well as being the most recent. There are times when you might need to refresh the DeviceSearch catalog. For me this happens mostly around groups, although there are tons of other scenarios. From time to time I will notice that a group has a member that it shouldn't. Looking at the list of groups from the Device overview page, the group is not listed, but when looking at the group through the Infrastructure tab, it shows up. Additionally I'm unable to delete it. This little snippet will refresh the catalog and has generally fixed this issue for me: from Products.ZCatalog.ProgressHandler import StdoutHandler dmd.Devices.deviceSearch.refreshCatalog(clear=1,pghandler=StdoutHandler()) commit() Warning: Never refresh the global_catalog using this method. Always use zencatalog.
http://wiki.zenoss.org/ZenDMD_Tip_-_Refresh_DeviceSearch_Catalog
CC-MAIN-2019-47
refinedweb
143
58.08
Scan memory for variable Someday ago I was playing one game. And as I not so often playing games. I would like to change some variables in memory like ammo quantity or health. May be it is not very interesting to play game with "cheating" but there is much more interest to play with program. In such play can help scanmem Here is example of program that will help us to learn how to use scanmem: #include <stdio.h> #include <stdlib.h> unsigned int secret_dw = 1000; //variable to search unsigned int tmp;//for input variable int main() { int i; while ( secret_dw != -1 ) { scanf("%u",&tmp); printf("secret_dw was %u \n",secret_dw); secret_dw = tmp; tmp = 0; // This is to prevent from detecting tmp variable position } printf("\bExit\n"); return 0; } here only two variables one secret_dw for value that we will search and second one tmp to save input. Also tmp will zeroed if not then we will find tmp and secret_dw. compile example with make and run ./example And in parallel run $ scanmem `pidof example` scanmem version 0.11 Copyright (C) 2009,2010 Tavis Ormandy, Eli Dupree, WANG Lu Copyright (C) 2006-2009 Tavis Ormandy scanmem comes with ABSOLUTELY NO WARRANTY; for details type `show warranty'. This is free software, and you are welcome to redistribute it under certain conditions; type `show copying' for details. info: maps file located at /proc/1801/maps opened. info: 5 suitable regions found. Please enter current value, or "help" for other commands. As we searching 4 byte value of uint we defining it by setting up option 0> option scan_data_type int32 Now we ready to start our game. At beginning we know our secret_dw value it is 1000 but we will not use it. Type 1 in example secret_dw was 1000 in scanmem 0> 1 info: 01/05 searching 0x8049000 - 0x804a000...........ok info: 02/05 searching 0xb763d000 - 0xb763e000...........ok info: 03/05 searching 0xb7787000 - 0xb778a000...........ok info: 04/05 searching 0xb77a7000 - 0xb77a9000...........ok info: 05/05 searching 0xbf9d4000 - 0xbf9f5000...........ok info: we currently have 58 matches. As we can see 58 matches. WooHoo. Now type '1000'in example 1000 secret_dw was 1 in scanmem 58> 1000 ..........info: we currently have 2 matches. only 2 now scanmem has also many built in commands you can see them when type help. One of them is 'list'. Use it. 2> list [ 0] 0x8049680, 1000, [I32 ] [ 1] 0xbf9f2dd8, 1000, [I32 ] Here is list of matched variables. Number,address,value,size. By address we see that our variable is with number 0. 2> set 0=999 info: setting *0x8049680 to 0x3e7... 2> list [ 0] 0x8049680, 1000, [I32 ] [ 1] 0xbf9f2dd8, 1000, [I32 ] Now our variable is with value 999. When you type list it may be little bit confusing that values is the same. Go in example 12 secret_dw was 999 Yes. We have changed our variable. Our goal is completed. Scanmem webpage scanmem[1] Source contains programm outputs and example code. Links Downloads scan_memory.tar.gz - 2KiB -
http://main.lv/writeup/scan_memory_for_variable.md
CC-MAIN-2021-43
refinedweb
498
76.32
Hans Bowman1,083 Points I don't understand what they want here, help please I'm really mixed up and don't know what they want me to do. please help def square(number): return number * number return square function_name(square) = 3 def result(): 3 Answers Eric McKibbin11,456 Points Hi Hans, You're a little mixed up with your syntax, but you're on the right track! To start with, you've created the correct function for part one of the challenge def square(number): return number * number When they're asking you to "Under the function definition, call your new function and pass it the argument 3." and store it in a result variable, you don't need to use "def" to create your result variable. I'm pretty new to python myself, but I think def is a keyword for defining functions only. Variables can just be created, they don't need a keyword. You call a function by using its name, followed by the brackets containing any arguments it needs. In this case you're passing it an argument of 3, so you put three in the brackets. When python runs, it will turn square(3) into whatever your function returns, but we want to store the returned value in a variable, so we just write the variable name, then = square(3) def square(number): return number * number result = square(3) Hans Bowman1,083 Points Thank you, this was really helpful! Sorry for taking up your time :)
https://teamtreehouse.com/community/i-dont-understand-what-they-want-here-help-please
CC-MAIN-2020-10
refinedweb
251
65.05
Getting Started With Laravel Views Views hold the presentation logic of a Laravel application. It is served separately from the application logic using laravel's blade templating engine. Passing data from a controller to a view is as simple as declaring a variable and adding it as a parameter to the returned view helper method. There is no shortage of ways to do this. We will create a SampleController class that will handle our logic php artisan make:controller SampleController Here is a sample controller in app/Http/Controllers/SampleController.php class SampleController extends Controller { /** * pass an array to the 'foo' view * as a second parameter. */ public function foo() { return view('foo', [ 'key' => 'The big brown fox jumped over the lazy dog' ]); } /** * Pass a key variable to the 'foo view * using the compact method as * the second parameter. */ public function bar() { $key = 'If a would chuck can chuck wood,'; return view('foo', compact('key')); } /** * Pass a key, value pair to the view * using the with method. */ public function baz() { return view('foo')->with( 'key', 'How much woood would a woodchuck chuck.' ); } } Passing Data To Multiple Views This is all fine and dandy. Well it is until you try passing data to many views. More often than not, we need to get some data on different pages. One such scenario would be information on the navigation bar or footer that we be available across all pages on your website, say, the most recent movie in theatres. For this example, we will use an array of 5 movies to display the latest movie (the last item on the array) on the navigation bar. For this, I will use a boostrap template to setup the navigation bar in resources/views/app.blade.php. <nav class="navbar navbar-inverse"> <div class="container-fluid"> <!-- Brand and toggle get grouped for better mobile display --> Maniac</a> </div> <!-- Collect the nav links, forms, and other content for toggling --> <div class="collapse navbar-collapse" id="bs-example-navbar-collapse-1"> <ul class="nav navbar-nav"> <li><a href="foo">Foo</a></li> <li><a href="bar">Bar</a></li> <li><a href="baz">Baz</a></li> </ul> <ul class="nav navbar-nav navbar-right"> <li><a href="#">latest movie title here</a></li> </ul> </div> <!-- /.navbar-collapse --> </div> <!-- /.container-fluid --> </nav> The latest movie text on the far right will however be replaced with a title from our movie list to be created later on. Let's go ahead and create our movie list on the homepage. Routes View all the routes in app/Http/routes.php Route::get('/', 'SampleController@index'); Route::get('/foo', 'SampleController@foo'); Route::get('/bar', 'SampleController@bar'); Route::get('/baz', 'SampleController@baz'); We are just creating four simple routes. Controller View the controller in app/Http/Controllers/SampleController.php /** * Return a list of the latest movies to the * homepage * * @return View */ public function index() { $movieList = [ 'Shawshank redemption', 'Forrest Gump', 'The Matrix', 'Pirates of the Carribean', 'Back to the future', ]; return view('welcome', compact('movieList')); } View See latest movie views in resources/views/welcome.blade.php @extends('app') @section('content') <h1>Latest Movies</h1> <ul> @foreach($movieList as $movie) <li class="list-group-item"><h5>{{ $movie }}</h5></li> @endforeach </ul> @endsection It goes without saying that my idea of latest movies is skewed, but we can overlook that for now. Here is what our homepage looks like now. Awesome! We have our movie list. And now to the business of the day. Update Index Controller Method We will assume that Back to the future, being the last movie on our movie list, is the latest movie, and display it as such on the navigation bar. /** * Return a list of the latest movies to the * homepage * * @return View */ public function index() { $movieList = [ 'Shawshank redemption', 'Forrest Gump', 'The Matrix', 'Pirates of the Carribean', 'Back to the future', ]; $latestMovie = end($movieList); return view('welcome', compact('movieList', 'latestMovie')); } Error In Shared View File We now have Back to the future as our latest movie, and rightfully so because Back to the Future 4 was released a week from now in 1985. I cannot make this stuff up. This seems to work. Well until you try navigating to other pages (Try one of foo, bar, baz) created earlier on. This will throw an error. Fixing Error in Shared View This was expected and by now you must have figured out why this happened. We declared the latest movie variable on the index method of the controller and passed it to the welcome biew. By extension, we made latestMovie available to the navigation bar BUT only to views/welcome.blade.php. When we navigate to /foo, our navigation bar still expects a latestMovie variable to be passed to it from the foo method but we have none to give. There are three ways to fix this Declare the latestMovievalue in every other method, and in this case, the movieListtoo. It goes without saying we will not be doing this. - Place the movie information in a service provider's boot method. You can place it on App/Providers/AppServiceProvideror create one. This quickly becomes inefficient if we are sharing alot of data. /** * Bootstrap any application services. * * @return void */ public function boot() { view()->share('key', 'value'); } - Take advantage of Laravel View composers Enter Left:. -Laravel documentation While it is possible to get the data in every controller method and pass it to the single view, this approach may be undesirable. View composers, as described from the laravel documentation, bind data to a view every time it is rendered. They clean our code by getting fetching data once and passing it to the view. Creating A New Service Provider Since Laravel does not include a ViewComposers directory in it's application structure, we will have to create our own for better organization. Go ahead and create App\Http\ViewComposers We will then proceed to create a new service provider to handle all our view composers using the artisan command php artisan make:provider ComposerServiceProvider The service provider will be visible in app/Providers Add the ComposerServiceProvider class to config/app.php array for providers so that laravel is able to identify it. Modify the boot method in the new Provider by adding a composer method that extends view() /** * Bootstrap the application services. * * @return void */ public function boot() { view()->composer( 'app', 'App\Http\ViewComposers\MovieComposer' ); } Laravel will execute a MovieComposer@compose method every time app.blade.php is rendered. This means that every time our navigation bar is loaded, we will be ready to serve it with the latest movie from our view composer. While MovieComposer@compose is the default method to be called, you can overwrite it by specifying your own custom method name in the boot method. view()->composer('app', 'App\Http\ViewComposers\MovieComposer@foobar'); Creating MovieComposer Next, we will create our MovieComposer class <?php namespace App\Http\ViewComposers; use Illuminate\View\View; class MovieComposer { public $movieList = []; /** * Create a movie composer. * * @return void */ public function __construct() { $this->movieList = [ 'Shawshank redemption', 'Forrest Gump', 'The Matrix', 'Pirates of the Carribean', 'Back to the future', ]; } /** * Bind data to the view. * * @param View $view * @return void */ public function compose(View $view) { $view->with('latestMovie', end($this->movieList)); } } The with method will bind the latestMovies to the app view when it is rendered. Notice the we added Illuminate\View\View . Cleaning Up The Controller with MovieComposer We can now get rid of the latestMovie variable and actually remove it from the compact helper method in SampleController@index. public function index() { $movieList = [ 'Shawshank redemption', 'Forrest Gump', 'The Matrix', 'Pirates of the Carribean', 'Back to the future', ]; return view('welcome', compact('movieList')); } We can now access the latest movie on the navigation bar in all our routes. Optimizing Our Code With A Repository While we seem to have solved one issue, we seem to have created another. we now have two movieList arrays. one in the controller and one in the constructor method in MovieComposer. While this may have been caused by using an array as a simple data source, it may be a good idea to fix it, say, by creating a movie repository. Ideally, the latestMovie value would be gotten from a database using Eloquent. Check out the github repo for this tutorial to see how I created a Movie Repository to get around the redudancy as shown below in MovieComposer and SampleController. public $movieList = []; /** * Create a movie composer. * * @param MovieRepository $movie * * @return void */ public function __construct(MovieRepository $movies) { $this->movieList = $movies->getMovieList(); } /** * Bind data to the view. * * @param View $view * @return void */ public function compose(View $view) { $view->with('latestMovie', end($this->movieList)); } public function index(MovieRepository $movies) { $movieList = $movies->getMovieList(); return view('welcome', compact('movieList')); } Conclusion It is possible to create a view composer that is executed when all views are rendered by replacing the view name with an asterisk wildcard view()->composer('*', function (View $view) { //logic goes here }); Notice that instead of passing a string with the path to MovieComposer, you can also pass a closure. You can as also limit the view composer to a finite number of views, say, nav and footer view()->composer( ['nav', 'footer'], 'App\Http\ViewComposers\MovieComposer' );
https://scotch.io/tutorials/sharing-data-between-views-using-laravel-view-composers
CC-MAIN-2017-39
refinedweb
1,527
52.39
Hide Forgot Description of problem: Libvirt prevents performing a live migration when the destination libvirt connection is a unix socket. In KubeVirt, we need the ability to tunnel a live migration through a unix socket to another network namespace which then connects to the destination libvirt. From the libvirtd perspective in the source environment, it looks like we're attempting to migrate the domain to the same host since the destination connection is a local unix socket. Version-Release number of selected component (if applicable): How reproducible: 100% Steps to Reproduce: 1. On the source node, create a socat connection that forwards all connections on a unix socket to a remote libvirtd instance. Example. socat unix-listen:local-destination-socket,reuseaddr,fork tcp:192.168.66.102:22222 2. Attempt to perform a live migration with the local unix socket 'local-destination-socket' as the destination. Example: virsh migrate --copy-storage-all --tunnelled --p2p --live --xml domain.xml my-vm qemu+unix:///system?socket=local-destination-socket Actual results: error "Attempt to migrate guest to the same host" Expected results: successful migration This particular error happens because there is no server part in the URI. However removing that [1] is not enough to work around this, even though it's not blocked by anything. The real problem I'm facing wvwn with that logic "fixed" is that remote driver is no longer consulted for the URI. I have an idea where to look next, will update this BZ whenever I find something. In the meantime, would you be so kind and tried reproducing this, but instead of a socket just listen on a tcp port? Something along the lines of (written from head, not checked for errors): socat tcp-listen:12345 tcp:192.168.66.102:22222 and then: virsh migrate --copy-storage-all --tunnelled --p2p --live --xml domain.xml my-vm qemu+tcp://127.1:12345/system Thanks. [1] "In the meantime, would you be so kind and tried reproducing this, but instead of a socket just listen on a tcp port? Something along the lines of (written from head, not checked for errors): socat tcp-listen:12345 tcp:192.168.66.102:22222" Without testing I know that will work. As a workaround for this tunneling issue, we're doing the equivalent of this. ---------------------- socat tcp-listen:127.0.0.1:12345,reuseaddr,fork unix-connect:local-destination-socket socat unix-listen:local-destination-socket,reuseaddr,fork tcp:192.168.66.102:22222 ---------------------- qemu-tcp://127.0.0.1:12345/system then works as a destination uri for a migration even though we're proxying that connection through a unix socket. This method tricks libvirt into working, but impacts performance. Vladik, David, does this issue still exist? If so should this be fixed for CNV? >Vladik, David, does this issue still exist? If so should this be fixed for CNV? The issue still exists. We have a work around as I described in a previous comment, but the work around adds additional buffers and copying during the migration. We're essentially adding another leg to the copy to get around this. Please note that when using --tunelled prevents from using new qemu features. While we should probably allow using a unix socket for migration using --tunelled is strongly discouraged even for normal use cases. (In reply to Peter Krempa from comment #5) > Please note that when using --tunelled prevents from using new qemu > features. While we should probably allow using a unix socket for migration > using --tunelled is strongly discouraged even for normal use cases. The primary reason tunnelled exists in the first place was that QEMU's migration stream was clear text. Tunnelling over libvirtd's connection enabled use of TLS / SSH for encryption. QEMU now has native support for TLS, so the primary reason for tunnelled migration goes away. This is good because tunnelled migration was always inherantly inefficient creating extra data copies and latency. QEMU 4.0 is about to get support for using multiple sockets for migration. This will be a major performance boost on high capacity network links. Libvirt's tunnelling feature is limited to a single connection. So essentially for modern QEMU tunnelling via libvirt should always be avoided as it is harmful to performance due to its archtecture & impl. > Please note that when using --tunelled prevents from using new qemu features. While we should probably allow using a unix socket for migration using --tunelled is strongly discouraged even for normal use cases. The tunneled flag no longer exists in our implementation. Even without tunneling, it doesn't appear we can specify the destination as a unix socket. The same thing can be said for the qemu connections for disk block migration. Everything (all connections) need to be over unix sockets in our environment. I have _something_ working, but my guess is that the information in this BZ is not up to date. Could you tell me how are you actually running the migration? I cannot simply modify the example from the description by just removing the --tunneled and --p2p flags as that would only use the socket for the libvirt connection and not the qemu one. I need to make sure I replicate your setup and usage to make it work exactly how you expect, but I cannot find the information in the kubevirt codebase. Also don't forget that if --copy-storage-all is requested, the storage migration is done via NBD via a separate connection, so that one probably also needs to be converted to unix socket. Hey, Here's the function that actually issues the migration client command to libvirt. [1] This involves 3 connections for us now port 22222 is what we're exposing for libvirt port 49152 direct migration port port 49153 block migration port We are currently proxying all three of those connections through a unix socket ourselves. Ideally the goal is to remove that extra hop and have libvirt/qemu expose those connections as unix sockets directly for us. Hopefully that provides more context to the situation as it has evolved today. 1. Just for the sake of completeness and early review, I am trying to eliminate a workaround that looks roughly like this (situation before the fix): 1) I setup a tunnel between libvirt daemons, it listens on a unix socket (already available with no patches needed, just use `qemu+unix:///system?socket=/tmp/test-sock-driver` as the destination URI), a command that I am using is: i) on the source: `socat -v unix-listen:/tmp/test-sock-driver,fork tcp:destination:12344` ii) on the destination: `socat -v tcp-listen:12344,reuseaddr,fork unix:/var/run/libvirt/libvirt-sock` 2) I setup a tunnel between QEMU processes for port 49152 with commands: i) on the source: `socat -v tcp-listen:49152,reuseaddr,fork tcp:destination:12345` ii) on the destination: `socat -v tcp-listen:12345,reuseaddr,fork tcp:localhost:49152` 3) I setup a tunnel for QEMUs disk migration (NBD protocol): i) on the source: `socat -v tcp-listen:49153,reuseaddr,fork tcp:destination:12346` ii) on the destination: `socat -v tcp-listen:12346,reuseaddr,fork tcp:localhost:49153` Note that the fact the actual communication between source and destination is still done over network exists only because it is easier for me to test between two remote libvirts (at least for now) and is not supposed to reflect what virt-launcher/virtwrap is doing. That channel is not what is important here. The command that I am using to migrate in this case is: virsh migrate machinename --verbose --compressed --live --copy-storage-all 'qemu+unix:///system?socket=/tmp/test-sock-driver' tcp:localhost The `tcp:localhost` in the end only tells libvirt not to try to connect to the destination hostname/address, but rather just locally for everything. Please verify this resembles your current workaround, at least roughly. What I am trying to reach in the end is the same thing, but instead of the tunnels listening on TCP port they are listening on UNIX sockets. These tunnels need not be created if both daemons are running on the same host as the only thing to make that work is to just make sure the directory with the sockets is mounted in the same place for both source and destination daemon. The setup with non-local daemons could look like this: 1) The setup for the tunnel between libvirt daemons is the same: i) on the source: `socat -v unix-listen:/tmp/test-sock-driver,fork tcp:destination:12344` ii) on the destination: `socat -v tcp-listen:12344,reuseaddr,fork unix:/var/run/libvirt/libvirt-sock` 2) To setup a tunnel between QEMU processes listen on some UNIX socket and forward it the same way as (1): i) on the source: `socat -v unix-listen:/tmp/test-sock-qemu,fork tcp:destination:12345` ii) on the destination: `socat -v tcp-listen:12345,reuseaddr,fork unix:/tmp/test-sock-qemu` 3) To setup a tunnel for QEMUs disk migration (NBD protocol) listen on some other UNIX socket and forward it the same way as (1): i) on the source: `socat -v unix-listen:/tmp/test-sock-nbd,fork tcp:destination:12346` ii) on the destination: `socat -v tcp-listen:12346,reuseaddr,fork unix:/tmp/test-sock-nbd` The command that to migrate the machine after this is implemented could look like this: virsh migrate machinename --verbose --compressed --live --copy-storage-all 'qemu+unix:///system?socket=/tmp/test-sock-driver' unix:/tmp/test-sock-qemu --disks-socket /tmp/test-sock-nbd or (in case you prefer proper URI for the `desturi` parameter): virsh migrate machinename --verbose --compressed --live --copy-storage-all 'qemu+unix:///system?socket=/tmp/test-sock-driver' unix://?socket=/tmp/test-sock-qemu --disks-socket /tmp/test-sock-nbd Please check that this end result is acceptable for you. As far as I understand this would solve two major issues for you: 1) No need for a tunnel in case the daemons are running on the same host, just making sure the sockets are (bind) mounted properly (the libvirt socket needs to be bind-mounted somewhere else or forwarded). 2) No need for a network access from libvirt daemon, no matter whether migrating to the same host or to a remote node (in this case the wrapper would still need to forward it, probably through its own network). I would love if these could be checked before I am done with the implementation. Please see comment #12, thanks. Hey, yes this does look accurate to me. I think you're on the right track. As soon as you feel like you have something worth us testing, it would be worthwhile for us to drop an early build of your libvirt work into kubevirt and attempt to remove the tcp proxies to see if we hit any surprises. Thanks. I found out you also use peer2peer migration unconditionally. My guess is that it's because you used to use tunnelled migration which was since removed. I'll try to support both peer2peer and direct, but maybe switching to direct makes more sense for you (the libvirt connection to the other side is not initiated by the daemon, but the client). > I'll try to support both peer2peer and direct, but maybe switching to direct makes more sense for you (the libvirt connection to the other side is not initiated by the daemon, but the client). oh, good observation. I'm not 100% sure what the implications might be for us to switch to direct. It would be great if you can support both. If you find yourself in a situation where technically it's difficult to support peer2peer, we can look further into the implications of using direct on our side. I have a first version posted here: (In reply to David Vossel from comment #16) Well, the only difference between p2p and direct is who is connecting to the destination daemon. By dropping peer2peer you could remove one socket forward between the daemons. For more info see and which should, hopefully, illustrate the difference. About the early build, you can grab the current tree here: I'm not sure what build you'd like, so let me know and I'll do my best to help with that. >About the early build, you can grab the current tree here: I'm not sure what build you'd like, so let me know and I'll do my best to help with that. your branch is fine. This Dockerfile [1] checks out your branch, and builds libvirt within a container that we can use as the base image for the kubevirt VMIs. I'll see if we can get someone tasked with kubevirt integration using your dev branch as a starting point. I'm not sure how quickly this can be done though. 1. Patches for v2 were sent on the mailing list, the gitlab branch is also updated. The differences are in the URI specification, now it is enough to use "unix:///path/to/socket" for both destination URI and the disks URI (which used to be a disks socket). All is explained in the documentation (virsh manpage for "migrate" command, "migration" web page with example use case and documentation for the new typed parameter itself), I can provide accurate links after the code is merged (the artifacts build in the CI are not directly accessible since it is only in my branch now). Here is the link to the v2: Pushed upstream with commit v6.7.0-58-gf51cbe92c0d8: commit f51cbe92c0d84e29ea1f158ad544a4d69ec1cee3 Author: Martin Kletzander <mkletzan@redhat.com> Date: Wed Sep 2 12:06:12 2020 +0200 qemu: Allow migration over UNIX socket Setting Upstream keyword per comment 25. Try to verify with libvirt-6.6.0-8.module+el8.3.1+8648+130818f2.x86_64 Steps: 1. On src host: # socat unix-listen:/tmp/sock,reuseaddr,fork tcp:<dest host>:16509 # socat unix-listen:/tmp/sock-49152,reuseaddr,fork tcp:<dest host>:49152 2. On dest host: Start libvirtd-tcp.socket, set auth_tcp=none' and restart libvirtd.service. 3. Migrate a vm from src to dest host, it reported error as below: # virsh migrate avocado-vt-vm1 qemu+unix:///system?socket=/tmp/sock --live --verbose --migrateuri unix:///tmp/sock-49152 error: Failed to connect socket to '/tmp/sock-49152': Permission denied Audit log on src host: type=AVC msg=audit(1604999472.307:1736): avc: denied { connectto } for pid=148647 comm="rpc-worker" path="/tmp/sock-49152" scontext=system_u:system_r:svirt_t:s0:c735,c952 tcontext=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 tclass=unix_stream_socket permissive=0 4. Change the migrateuri to unix:///var/lib/libvirt/qemu/sock-49152, it will report different error: # virsh migrate avocado-vt-vm1 qemu+unix:///system?socket=/tmp/sock --live --verbose --migrateuri unix:///var/lib/libvirt/qemu/sock-49152 error: Failed to connect socket to '/var/lib/libvirt/qemu/sock-49152': Permission denied Libvirtd log on dest host: 2020-11-10 09:21:22.972+0000: 512585: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"id": "libvirt-370", "error": {"class": "GenericError", "desc": "Failed to bind socket to /var/lib/libvirt/sock-49152: Permission denied"}}] 5. Change migrateuri back to unix:///tmp/sock-49152, and set selinux to permissive on src host, try migration again: # virsh migrate avocado-vt-vm1 qemu+unix:///system?socket=/tmp/sock --live --verbose --migrateuri unix:///tmp/sock-49152 error: operation failed: migration out job: Unable to write to socket: Broken pipe Hi Martin Could you help to check above comment? The "permission denied is weird because libvirt definitely labels them. It might be that the listening socket has its own label separate from the label on the socket on the filesystem (how it happens in some cases), but also because it looks like the avc denial comes from `rpc-worker`. Anyway, even though forwarding the socket to a tcp address of another machine, that does not work for any other connection than the libvirt one. Basically because you said that the migration should happen over a particular socket then both source and destination expect that (destination listens on a socket, not the port which you picked). Could you please check that if you start socket listening on a socket in the same way (the other way being just a `readline` for example) that you can use that socket as an output for chardev device? Because both of them are being labelled the same way, so I think if one does not work, then the other one should not either. I tried with following steps, please check: 1. Start a socat to listen on a unix socket: # socat unix-listen:/var/lib/libvirt/qemu/f16x86_64.agent,reuseaddr,fork tcp:<host ip>:49152 2. Start a guest with unix socket of connect mode: # virsh dumpxml avocado-vt-vm1 ... <channel type="unix"> <source mode="connect" path="/var/lib/libvirt/qemu/f16x86_64.agent" /> <target type="virtio" /> <address type="virtio-serial" controller="0" bus="0" port="2" /> </channel> ... # virsh start avocado-vt-vm1 error: Failed to start domain avocado-vt-vm1 error: internal error: process exited while connecting to monitor: 2020-11-11T06:47:31.199945Z qemu-kvm: -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/f16x86_64.agent: Failed to connect socket /var/lib/libvirt/qemu/f16x86_64.agent: Permission denied 3. Check audit log: type=AVC msg=audit(1605077691.158:2144): avc: denied { connectto } for pid=165747 comm="qemu-kvm" path="/var/lib/libvirt/qemu/f16x86_64.agent" scontext=system_u:system_r:svirt_t:s0:c1,c78 tcontext=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 tclass=unix_stream_socket permissive=0 4. I also checked the selinux context of the unix socket file during vm starts, it was labeled correctly: # ll /var/lib/libvirt/qemu/f16x86_64.agent -Z srwxr-xr-x. 1 qemu qemu system_u:object_r:svirt_image_t:s0:c1,c78 0 Nov 11 01:45 /var/lib/libvirt/qemu/f16x86_64.agent Change the unix socket directory to /tmp or /var/lib/libvirt, got same result as above Hi David I'm a little confused by the requirement of CNV. According to the comment 0 of the bug, my understanding about the requirement is as: On src host : socat unix-listen:local-socket-1,reuseaddr,fork tcp:<dest host>:22222 socat unix-listen:local-socket-2,reuseaddr,fork tcp:<dest host>:49152 socat unix-listen:local-socket-3,reuseaddr,fork tcp:<dest host>:49153 Then do migration: virsh migrate [--copy-storage-all] [--p2p] --live --xml domain.xml my-vm qemu+unix:///system?socket=local-socket-1 --migrateuri unix://local-socket-2 [--disks-uri unix://local-socket? Yes, the simulated setup will need socat on both src and destination. I know it looks strange, but we're tunneling a migration over unix sockets which are streamed across a TCP connection. The real world reasoning behind this is that in CNV the container libvirt is running in does not have network access. All we have for communication is unix sockets. So we're tunnelling out of that libvirt container with the unix socket and then streaming in another environment that unix socket over tcp. so it conceptually looks like this. ---------- SOURCE NODE ------ | ------ DESTINATION NODE ----- Libevirt <-> unix socket <-> TCP <-> unix socket <-> libvirt Created attachment 1729265 [details] Script for testing the forwarding So the issue with SELinux was the socket labeling one. The socket needs to be labeled and the label is different from the one that is on the file that represents the socket. Since socat does not have this functionality, I wrote a small script that forwards the data in a very simple manner, but also supports setting the context the way it is supposed to be done. So you can run it like so on the source host: ./pmsocat.py unix2tcp -c system_u:system_r:svirt_socket_t:s0 /tmp/test.sock destination 12345 or even more times if you are testing nbd as well. You can also run it on the destination instead of socat if you want: ./pmsocat.py tcp2unix 12345 /tmp/test.sock I found that it also performs better when testing migration with multiple parallel connections (virsh migrate --parallel), I even tried big numbers of parallel connections and it worked very good. Feel free to ask if you need any more help with this peculiar BZ. Also, I plan to document this more closely, but if you want to track that as well, then please create another BZ so that a simple doc change does not stall this BZ for no reason. Thank you. (In reply to Martin Kletzander from comment #37) > Created attachment 1729265 [details] > Script for testing the forwarding Thanks, I will try with your script. Hi Martin I tried your script, some of the functions are not supported with python3.6(the latest python version on RHEL8), I need some time to modify it, and I will be very grateful if you can provide a script for python3.6 And I did some test with selinux disabled on src host, and met two problems: 1. Migration is stuck at 1% when test with --parallel and without --copy-storage-all(while it works well when test with both --parallel and --copy-storage-all) 2. Migration with both --copy-storage-all and --tls failed: # virsh -k0 migrate avocado-vt-vm1 qemu+unix:///system?socket=/tmp/sock --live --verbose --migrateuri unix:///tmp/sock-49152 --p2p --postcopy --compressed --comp-methods xbzrle --copy-storage-all --disks-uri unix:///tmp/sock-49153 --tls error: internal error: unable to execute QEMU command 'nbd-server-start': TLS is only supported with IPv4/IPv6 Steps for problem 1: 1). On src host: # socat unix-listen:/tmp/sock,reuseaddr,fork tcp:dell-per740-04.dell2.lab.eng.bos.redhat.com:16509 # socat unix-listen:/tmp/sock-49152,reuseaddr,fork tcp:dell-per740-04.dell2.lab.eng.bos.redhat.com:49152 2). On dest host: # socat tcp-listen:49152,reuseaddr,fork unix:/tmp/sock-49152 3). Do migration with --parallel: # virsh migrate avocado-vt-vm1 qemu+unix:///system?socket=/tmp/sock --live --verbose --p2p --migrateuri unix:///tmp/sock-49152 --parallel Migration: [ 1 %] 4). Check migration progress after a while, I found the data transferred stayed 2.025MiB forever. # virsh domjobinfo avocado-vt-vm1 Job type: Unbounded Operation: Outgoing migration Time elapsed: 483192 ms Data processed: 2.025 MiB Data remaining: 1022.133 MiB Data total: 1.008 GiB Memory processed: 2.025 MiB Memory remaining: 1022.133 MiB Memory total: 1.008 GiB Dirty rate: 0 pages/s Page size: 4096 bytes Iteration: 1 Postcopy requests: 0 Constant pages: 2064 Normal pages: 639 Normal data: 2.496 MiB Expected downtime: 300 ms Setup time: 9 ms 5). Check the opened sockets on both hosts. Src host: # netstat -tuxnap|grep 49152 tcp 0 300328 10.16.218.250:59848 10.16.218.252:49152 ESTABLISHED 176024/socat tcp 0 270584 10.16.218.250:59852 10.16.218.252:49152 ESTABLISHED 176025/socat unix 2 [ ACC ] STREAM LISTENING 828327 172796/socat /tmp/sock-49152 unix 3 [ ] STREAM CONNECTED 801621 176025/socat /tmp/sock-49152 unix 3 [ ] STREAM CONNECTED 801620 176024/socat /tmp/sock-49152 Dest host: # netstat -tuxnap|grep 49152 tcp 0 0 0.0.0.0:49152 0.0.0.0:* LISTEN 518422/socat tcp 231616 0 10.16.218.252:49152 10.16.218.250:59848 ESTABLISHED 518971/socat tcp 242080 0 10.16.218.252:49152 10.16.218.250:59852 ESTABLISHED 518973/socat unix 2 [ ACC ] STREAM LISTENING 4980248 518943/qemu-kvm /tmp/sock-49152 unix 3 [ ] STREAM CONNECTED 4993667 518943/qemu-kvm /tmp/sock-49152 unix 3 [ ] STREAM CONNECTED 4993669 518943/qemu-kvm /tmp/sock-49152 6). Try to cancel migration by Ctrl+C or "virsh domjobabort avocado-vt-vm1", the status stays at "cancelling" forever. 2020-11-11 11:20:15.113+0000: 175406: info : qemuMonitorJSONIOProcessLine:240 : QEMU_MONITOR_RECV_REPLY: mon=0x7ffaf8003020 reply={"return": {"expected-downtime": 300, "status": "cancelling", "setup-time": 9, "total-time": 819084, "ram": {"total": 1082859520, "postcopy-requests": 0, "dirty-sync-count": 1, "multifd-bytes": 2105216, "pages-per-second": 0, "page-size": 4096, "remaining": 1071783936, "mbps": 0, "transferred": 2123799, "duplicate": 2064, "dirty-pages-rate": 0, "skipped": 0, "normal-bytes": 2617344, "normal": 639}}, "id": "libvirt-1655"} I also have another question: Is it needed to provide an option to specify the unix socket path on dest host in case the path on src host is not available on dest host? Created attachment 1729364 [details] Script for python3.6 (In reply to Fangge Jin from comment #39) I tried rewriting it to 3.6, please have a look and try it. ad 1) I'll have to check that out.. (In reply to Martin Kletzander from comment #41) > Created attachment 1729364 [details] > Script for python3.6 > > (In reply to Fangge Jin from comment #39) > I tried rewriting it to 3.6, please have a look and try it. > > ad 1) I'll have to check that out. 1) I tested with your script, migration with --parallel can succeed occasionally but still get stuck at most of time, the error printed by the script is as below: Task exception was never retrieved future: <Task finished coro=<forward() done, defined at ./pmsocket-3.6.py:11> exception=ConnectionResetError(104, 'Connection reset by peer')> Traceback (most recent call last): File "./pmsocket-3.6.py", line 24, in forward yield from writer.drain() File "/usr/lib64/python3.6/asyncio/streams.py", line 329, in drain raise exc File "./pmsocket-3.6.py", line 15, in forward data = yield from reader.read(_args.blocksize) File "/usr/lib64/python3.6/asyncio/streams.py", line 634, in read yield from self._wait_for_data('read') File "/usr/lib64/python3.6/asyncio/streams.py", line 464, in _wait_for_data yield from self._waiter File "/usr/lib64/python3.6/asyncio/selector_events.py", line 714, in _read_ready data = self._sock.recv(self.max_size) ConnectionResetError: [Errno 104] Connection reset by peer > >). 2) I wonder whether kubevirt will need to use TLS NBD for disk migration. > > . 3) Now the path of the unix socket file is hard-coded to be same on src and dest host. Should we provide an option(e.g. --listen-uri) so user can specify different path to listen on dest host? . 3) Not really, it would only complicate things. This is going to be used mostly by containers where you'll probably bind-mount the directory/socket anyway. Outside of containers it is still possible to bind-mount them, but I do not see a reason for having different paths on source and destination, especially when the forwarder is in charge of creating the socket where they want (i.e. you can just create a new directory just for the sockets). If anything, keeping the names in sync is actually more error proof. I checked once more that --parallel without --copy-storage-all (with nfs-backed disks) is working fine. I just have problems with the python 3.6 script and I think it might be related to asyncio not being as function in python36 as it is in 3.7 and 3.8. Feel free to write anything simpler and smaller for the forwarding or just run a fedora container in which you bind mount the socket directory so that you can run the script in there (it would actually be way closer to what CNV is going to do IMHO). Feel free to create BZs for the error on NBD TLS migration and for documenting the selinux rules needed for this to work. I have patches for that already. (In reply to Martin Kletzander from comment #43) > . > Thanks, I will try with container >. If CNV is not using NBD TLS, then I think the current test result is acceptable. So I won't file new BZs for this. Another issue I met: Dst libvirtd crashed when --disks-uri contains on valid schema, do you prefer a new BZ or address it in this BZ?: # virsh migrate avocado-vt-vm1 qemu+unix:///system?socket=/tmp/test.sock --live --verbose --compressed --copy-storage-all --disks-uri fjeifeke error: End of file while reading data: Input/output error (gdb) bt #0 0x00007ff42e73cd4e in __strcmp_avx2 () from /lib64/libc.so.6 #1 0x00007ff3e876579c in qemuMigrationDstStartNBDServer (tls_alias=0x0, nbdURI=0x5584fc300e70 "fjeifeke", nbdPort=0, migrate_disks=0x0, nmigrate_disks=0, listenAddr=<optimized out>, vm=0x5584fc2be6a0, driver=0x7ff3cc107830) at ../../src/qemu/qemu_migration.c:403 #2 qemuMigrationDstPrepare, def=<optimized out>, origname=<optimized out>, st=<optimized out>, protocol=<optimized out>, port=<optimized out>, autoPort=<optimized out>, listenAddress=<optimized out>, nmigrate_disks=<optimized out>, migrate_disks=<optimized out>, nbdPort=<optimized out>, nbdURI=<optimized out>, migParams=<optimized out>, flags=<optimized out>) at ../../src/qemu/qemu_migration.c:2726 #3 0x00007ff3e87676a7 in qemuMigrationDstPrepareD, uri_in=<optimized out>, uri_out=<optimized out>, def=<optimized out>, origname=<optimized out>, listenAddress=0x0, nmigrate_disks=<optimized out>, migrate_disks=<optimized out>, nbdPort=<optimized out>, nbdURI=<optimized out>, migParams=<optimized out>, flags=<optimized out>) at ../../src/qemu/qemu_migration.c:3030 #4 0x00007ff3e87cf850 in qemuDomainMigratePrepare3Params (dconn=0x7ff410004010, params=<optimized out>,=0x7ff4279a7818, cookieoutlen=0x7ff4279a780c, uri_out=0x7ff4180025a0, flags=2113) at ../../src/qemu/qemu_driver.c:12536 --Type <RET> for more, q to quit, c to continue without paging-- #5 0x00007ff4329d031c in virDomainMigratePrepare3Params (dconn=dconn@entry=0x7ff410004010, params=0x7ff418004ff0,=cookieout@entry=0x7ff4279a7818, cookieoutlen=0x7ff4279a780c, uri_out=0x7ff4180025a0, flags=2113) at ../../src/libvirt-domain.c:4871 #6 0x00005584fa3adda4 in remoteDispatchDomainMigratePrepare3Params (ret=0x7ff418002f10, args=0x7ff418002ee0, rerr=0x7ff4279a78e0, msg=0x5584fc311120, client=<optimized out>, server=<optimized out>) at ../../src/remote/remote_daemon_dispatch.c:5610 #7 remoteDispatchDomainMigratePrepare3ParamsHelper (server=<optimized out>, client=<optimized out>, msg=0x5584fc311120, rerr=0x7ff4279a78e0, args=0x7ff418002ee0, ret=0x7ff418002f10) at ./remote/remote_daemon_dispatch_stubs.h:8789 #8 0x00007ff43289cdfc in virNetServerProgramDispatchCall (msg=0x5584fc311120, client=0x5584fc306050, server=0x5584fc2be080, prog=0x5584fc312810) at ../../src/rpc/virnetserverprogram.c:430 #9 virNetServerProgramDispatch (prog=0x5584fc312810, server=server@entry=0x5584fc2be080, client=client@entry=0x5584fc306050, msg=msg@entry=0x5584fc311120) at ../../src/rpc/virnetserverprogram.c:302 #10 0x00007ff4328a49ac in virNetServerProcessMsg (srv=srv@entry=0x5584fc2be080, client=0x5584fc306050, prog=<optimized out>, msg=0x5584fc311120) at ../../src/rpc/virnetserver.c:137 #11 0x00007ff4328a4e1c in virNetServerHandleJob (jobOpaque=0x5584fc2f6160, opaque=0x5584fc2be080) at ../../src/rpc/virnetserver.c:154 #12 0x00007ff4327192fb in virThreadPoolWorker (opaque=<optimized out>) at ../../src/util/virthreadpool.c:163 #13 0x00007ff4327181d7 in virThreadHelper (data=<optimized out>) at ../../src/util/virthread.c:233 #14 0x00007ff42edcb14a in start_thread () from /lib64/libpthread.so.0 #15 0x00007ff42e6e0f23 in clone () from /lib64/libc.so.6 Two more issues: 1. Can't disk migration go on rdma transport? And why? # virsh migrate avocado-vt-vm1 qemu+unix:///system?socket=/tmp/test.sock --live --verbose --compressed --copy-storage-all --disks-uri rdma://192.168.100.6:56891 --listen-address 10.16.218.252 error: invalid argument: Unsupported scheme in disks URI: rdma 2.--tls-destination doesn't take effect for disk migration, I think this is a bug, please confirm: # virsh migrate avocado-vt-vm1 qemu+unix:///system?socket=/tmp/test.sock --live --verbose --compressed --p2p --migrateuri tcp://10.16.218.252:49156 --bandwidth 200 --tls --tls-destination <dest hostname> --copy-storage-all --disks-uri tcp://192.168.100.6:49156 error: internal error: unable to execute QEMU command 'blockdev-add': Certificate does not match the hostname 192.168.100.6 # virsh migrate avocado-vt-vm1 qemu+unix:///system?socket=/tmp/test.sock --live --verbose --compressed --migrateuri tcp://10.16.218.252:49156 --bandwidth 200 --tls --tls-destination <dest hostname> --copy-storage-all error: internal error: unable to execute QEMU command 'blockdev-add': Certificate does not match the hostname 10.16.218.252 For documentation about --disks-uri: " This can be tcp://address:port to specify a listen address (which overrides --listen-address for the disk migration) " How about change it to "which overrides both --migrate-uri and --listen-address for the disk migration" to make it more accurate? (In reply to Fangge Jin from comment #48) Good point. I'll change that as well. (In reply to Fangge Jin from comment #47) > 2.--tls-destination doesn't take effect for disk migration, I think this is > a bug, please confirm: I filed a separate bug for this issue as it is not related to current bug: Bug 1901394 - --tls-destination doesn't take effect for disk migration Another small issue: When --migrateuri uses unix, and --disks-uri is not specified, migration will fail. How about add a logic in code so an more clear error(e.g. --disks-uri must be specified when unix is used for --migrateuri) can be printed in such situation. But I think it is not an important issue, feel free to give your opinion. # virsh migrate avocado-vt-vm1 qemu+unix:///system?socket=/tmp/test.sock --live --verbose --p2p --migrateuri unix:///var/lib/libvirt/qemu/test-49151.sock --copy-storage-all error: internal error: unable to execute QEMU command 'nbd-server-start': address resolution failed for /var/lib/libvirt/qemu/test-49151.sock:49152: Name or service not known The issue 1) in comment 42 can still be reproduced(about 20% reproducible) when I do migration on two fedora hosts each with a containerized libvirtd, while it can NOT be reproduced when I do migration on one single fedora host with two containerized libvirtd(using bind mount, no packet forwarding). Maybe something during packet forwarding. I guess I can verify this bug with two containerized libvirtd running on one fedora host, right? Dst qemu log: 2020-11-26T10:23:07.648598Z qemu-kvm: failed to receive packet via multifd channel 0: multifd: received packet magic 5145564d expected 11223344 2020-11-26 10:23:19.604+0000: shutting down, reason=failed 2020-11-26T10:23:19.605835Z qemu-kvm: terminating on signal 15 from pid 11076 (/usr/sbin/libvirtd) Hi Martin Please check comment 46, 47, 51 and 52. Thanks. I'll fix most of them in this BZ, thanks for the thorough testing. Regarding comment #52 I think those failures are happening simply because the forwarding script is just very, very simple. By eliminating the the script from the equation you test really what should be tested - the feature added in this BZ, so I think that is also more appropriate. I tried to test migration with --parallel by using the migration proxy of kubevirt, and got same error as comment 52. Maybe it is due to the fault of qemu-kvm? Kvm QE told me that combination of "--parallel" and other feature is not supported by now. So I will just treat this as a known issue for now. Hi David Migration over unix socket with "--parallel"(multifd in qemu-kvm) will meet error(see comment 52, comment 55), is "--parallel" used in kubevirt? If not, I will just treat this as a known issue and won't block the verification of this bug. Hey, we currently do not use the parallel flag. We are under some pressure to optimize migration times, which means it's possible the parallel option might be something we consider to help increase bandwidth at some point. I agree this shouldn't block this work though since we aren't utilizing the parallel functionality now. Sorry for not getting back to this earlier, here are some updates and answers (and some of them repeated, so that it is summed up in one place): (In reply to Fangge Jin from comment #46) This looks like it has been an issue even before. But I have a patch for that, might as well include them with the other ones for this BZ. (In reply to Fangge Jin from comment #47) ad 1) No idea, you could ask someone who was adding support for rdma, but since I am not that familiar with it I'm not sure if there is some specific reason for that. From libvirt's point of view it is probably because QEMU does not support it. (In reply to Fangge Jin from comment #48) Fixed in another batch of patches. (In reply to Fangge Jin from comment #50) Yes, it is, but we need QEMU for that, more info in the bug you created. (In reply to Fangge Jin from comment #51) Yeah, that's not a big deal I think. It even is mentioned in the docs kind of. But I agree it would be nice to have a check for that. (In reply to Fangge Jin from comment #52) I'm quite sure that's because of the script I wrote. I wanted to quickly cook up something, so I did it in python, but I am not that very well versed in handling multiple I/O from python, so there might be some mishaps there. It might be possible to set the socket create label, create the socket and then spawn socat with fd: and give it the FD, although I *think* that would treat it just as a character stream and not a socket (i.e. not doing accept() on it). If it works with bind-mounted directories on a single host, then it is most certainly fine. You can then try it even with just RHEL so that the test does not depend on Fedora or anything with different component versions. Fixes posted upstream: Pushed upstream, the additional 5 commits are: commit 9e93d87c00e65211c584769bf27e7cdb74bd6df2 Author: Martin Kletzander <mkletzan@redhat.com> Date: Wed Nov 18 14:05:25 2020 +0100 docs: Document SELinux caveats when migrating over UNIX sockets commit 511013b57b50da7c800967cd990f8ae1ad5fa948 Author: Martin Kletzander <mkletzan@redhat.com> Date: Wed Nov 25 00:19:41 2020 +0100 qemu: Tweak debug message for qemuMigrationSrcPerformPeer2Peer3 commit 5db1fc56022642e610c911efd28f3a931279e917 Author: Martin Kletzander <mkletzan@redhat.com> Date: Sun Dec 13 15:49:29 2020 +0100 qemu: Fix possible segfault when migrating disks commit b17eb7344606dcbe3ec6eee702009c93e46e4d8d Author: Martin Kletzander <mkletzan@redhat.com> Date: Sun Dec 13 22:27:33 2020 +0100 docs: Slightly alter disks-uri description in virsh man commit 68164892fe6f5d1b5e4fbd4fe1a02d14c1384096 Author: Martin Kletzander <mkletzan@redhat.com> Date: Wed Dec 16 11:34:50 2020 +0100 qemu: Extra check for NBD URI being specified Verified with libvirt-client-6.6.0-11.module+el8.3.1+9196+74a80ca4.x86_64 Test scenarios are uploaded in attachment. Created attachment 1739897 [details] Test scenarios Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (virt:8.3 bug fix and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.
https://bugzilla.redhat.com/show_bug.cgi?id=1638889
CC-MAIN-2021-49
refinedweb
6,372
54.63
Printing High and Low I need help accessing the high and low values for a current data and N in the past. Code: print(self.datas[0].low) print(self.datas[0].high) Output: <backtrader.linebuffer.LineBuffer object at 0x0000016E46240BC8> <backtrader.linebuffer.LineBuffer object at 0x0000016E46240B08> Also is aside from the documentation is there a list of the keywords explaining what the different methods do? - Yaroslav Horyslavets last edited by If you need help you must send all of your code, because i am not magic and can't understand where are you try write these lines:) Thanks, I am new to this, here is the link to my files, it is pretty simple since I am starting out. See lines 109 and 110 Thanks @ThisGuySold said in Printing High and Low: print(self.datas[0].low) print(self.datas[0].high) Try using : print(self.datas[0].low[0]) # For current bar, and print(self.datas[0].low[-1]) # For previous bar an d-2, -3 etc for further back. I run the code with the changes ([I have updated the drive also), I placed a debug point (breakpoint) on line 112. This is my results after running. 2000-01-11, SELL CREATE, 24.99 2000-01-11, 24.35 2000-01-11, 25.52 But from my csv file 24.35 should be 27.36 25.52 should be 28.69 Is there something about how backtrader runs that I am missing here? Here is the code from your file. if not self.position: #If we dont have a postition we will then buy if self.dataclose[0] < self.dataclose[-1]: # current close less than previous close if self.dataclose[-1] < self.dataclose[-2]: # previous close less than the previous previous close # BUY, BUY, BUY!!! (with all possible default parameters) self.log('BUY CREATE, %.2f' % self.dataclose[0]) self.order = self.buy() #we store the order status else: #Else if we have a position then we need to specify when to sell #Here we choose 5 days after if len(self) >= (self.bar_executed + self.params.exitbars): self.log('SELL CREATE, %.2f' % self.dataclose[0]) #console log self.order = self.sell() self.log(self.datas[0].low[0]) self.log(self.datas[0].high[0]) a1 = 1 Your code self.log(self.datas[0].low[0]) self.log(self.datas[0].high[0]) are in a different part of the if statement, so you will get a different bar's data. Please include your code next time, thanks. Oh okay, Thanks for letting me know, So which part do I need to be to access to the current high, low values? Also suppose I am anywhere in the script is there a way to retrieve the current high, low values? Thanks. Your code is right, but in the wrong level, it is contained in a second if statement so you willl only get the value when that if condition is met. Just put it on the first line, better when together. def next(): self.log("High {:.2f}, Low {:.2f}".format(self.datas[0].high[0], self.datas[0].low[0])) Results in: 2005-02-02 High 36.34, Low 35.29 2005-02-03 High 35.67, Low 35.00 2005-02-04 High 35.30, Low 34.71 2005-02-07 High 35.19, Low 34.36 2005-02-08 High 34.91, Low 34.32 2005-02-09 High 34.66, Low 33.45 2005-02-10 High 33.72, Low 32.47 2005-02-11 High 34.70, Low 33.31 2005-02-14 High 34.41, Low 33.78 Great, Thanks, I appreciate it!
https://community.backtrader.com/topic/2504/printing-high-and-low
CC-MAIN-2022-40
refinedweb
609
79.46
Problem with sending Email from jBossAnkit Garg Jan 21, 2009 2:33 AM I want to know how to send mails from jBoss. A google search revealed something which confused me. It said that mail sending is done through a bean java:/email or something. This confused me. I am using javax.mail.Transport to send the mail. The code is like this Properties props = new Properties(); Session mailConnection = Session.getInstance(props, null); Transport tr = mailConnection.getTransport("smtp"); tr.connect("127.0.0.1","username","pass"); Now it gives an exception when when the connect method is called as- javax.mail.MessagingException: Could not connect to SMTP host: 127.0.0.1, port: 25; nested exception is: java.net.SocketException: Software caused connection abort: connect Now there are a few things I am not sure of. First of all I think the username and password given when calling connect method are those defined in the mail-service.xml in jBoss. Also is mailing on directly in jBoss or do I have to start it by configuring it. I would be thankful to anyone who could help... (I have also posted this on coderanch.com but not getting any reply there so cross posted it here) 1. Re: Problem with sending Email from jBossAndre Ehrlich Jan 21, 2009 9:37 AM (in response to Ankit Garg) Hi! In one of my lasts projects I wrote an MDB, which reads text messages from a queue and sends mails to some my mail account. Maybe this helps you a little. First of all, you have to set up the JBoss mail service in server//deploy/mail-service.xml. You need to specify the connection properties to the mail server which is used. Secondly you write the code to send a mail, for example: // who should receive the mail Address[] to = InternetAddress.parse("duke@java.sun.com,homer@simpson.net", false); // create a simple text message javax.mail.Message message = new MimeMessage(session); message.setFrom(); message.setRecipients(javax.mail.Message.RecipientType.TO, to); message.setSubject("Mail subject goes here"); message.setSentDate(new Date()); message.setText("java mail rocks!"); // ... and send it Transport.send(message); That's it it :-) If you do not specify the "from" property, the default value defined in mail-service.xml is used. Hope that helped! Keep asking, if you have any problems. cheers, Andre 2. Re: Problem with sending Email from jBossAnkit Garg Jan 22, 2009 6:20 AM (in response to Ankit Garg) j-n00b thanks for the help. But the main problem is still there. How to configure the mail-service.xml file. As far as I read on internet, if I use jBoss mail service, then I have to acquire the mail session using a JNDI lookup of java:Email right?? Also I have to provide a provider in the mail-service.xml. Now I don't know which provider I can use. My localhost provider seems not to work. So I need an external provider. Can you tell me any free providers from which I can test my code?? 3. Re: Problem with sending Email from jBossAndre Ehrlich Jan 22, 2009 8:38 AM (in response to Ankit Garg) Sorry, I forgot to send the snippet where the mail session is injected. So let's do it step by step. As provider you could use any mail server, you have access to (including GMX, gmail or whatever). If you don't have one available, you can install your own. I use JES (Java Email Server) for that purpose. It is a real simple and easy to use server (only 2 config files!), which is quite perfect for testing issues. Assuming you have JES running, your mail-service.xml could look like this: <?xml version="1.0" encoding="UTF-8"?> <server> <mbean code="org.jboss.mail.MailService" name="jboss:service=Mail"> <attribute name="JNDIName">mail/MailSession</attribute> <!-- mail server login data, not required for JES --> <attribute name="User">homer</attribute> <attribute name="Password">simpson</attribute> <attribute name="Configuration"> <configuration> <property name="mail.store.protocol" value="pop3"/> <property name="mail.transport.protocol" value="smtp"/> <!-- who receives the mail, if no recipient is specified --> <property name="mail.user" value="nobody"/> <!-- Change to the mail server --> <property name="mail.pop3.host" value="localhost"/> <!-- Change to the SMTP gateway server --> <property name="mail.smtp.host" value="localhost"/> <!-- The mail server port --> <property name="mail.smtp.port" value="25"/> <!-- who is the sender of the mail, if none is specified --> <property name="mail.from" value="mailmaster@asdf.de"/> <!-- Enable debugging output from the javamail classes --> <property name="mail.debug" value="false"/> </configuration> </attribute> <depends>jboss:service=Naming</depends> </mbean> </server> At least, this config works for me :-) Back in your application, you can use the earlier code snippet in an EJB to send a mail. There is no need to look up the mail session or something, this is done by JBoss via Dependency Injection. Here is a simple example: package ae; import java.net.InetAddress; import java.util.Date; import javax.annotation.Resource; import javax.ejb.ActivationConfigProperty;; import javax.mail.Address; import javax.mail.Session; import javax.mail.Transport; import javax.mail.internet.InternetAddress; import javax.mail.internet.MimeMessage; import org.apache.log4j.Logger; @MessageDriven(activationConfig = { @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"), @ActivationConfigProperty(propertyName = "destination", propertyValue = "queue/q2m") }) public class Queue2MailBean implements MessageListener { private static final Logger log = Logger.getLogger(Queue2MailBean.class); @Resource(mappedName = "mail/MailSession") private Session session; public void onMessage(Message inMessage) { try { // text message if (inMessage instanceof TextMessage) { log.info("processing incomming TextMessage"); TextMessage textMsg = (TextMessage) inMessage; int number = textMsg.getIntProperty("asdf"); sendMail("andre@asdf.de", "Received number " + number); } // other message types are currently not supported! else { log.warn("Message of wrong type: " + inMessage.getClass().getName()); } } catch (JMSException e) { log.error("Q2M: Error while receiving JMS message!", e); } } private void sendMail(String recipient, String text) { try { Address[] to = InternetAddress.parse(recipient, false); // create message javax.mail.Message message = new MimeMessage(session); message.setFrom(); message.setRecipients(javax.mail.Message.RecipientType.TO, to); message.setSubject(Queue2MailBean.class.getSimpleName()); message.setSentDate(new Date()); message.setText(text); // Send message Transport.send(message); } catch (Exception e) { log.error(e, e); } } } My Bean reads TextMessages from a Queue named "q2m" (must be deployed manually or via a "-service.xml" contained in the ejb-jar. Within the text messages, there is an int property stored under key "asdf", which is forwarded to my JES mail address. The sendMail() method uses the MailSession injected by JBoss (see @Resource) to send the mail using the default FROM attribute. If you don't like to use Dependency Injection, you can lookup the mail session yourself: session = (Session) new InitialContext().lookup("Mail"); Please note, that this does not work outside of the server VM, because the mail session is bound to the "java" namespace, see serverlog: 2009-01-22 14:21:38,187 INFO [org.jboss.mail.MailService] (main) Mail Service bound to java:/Mail. Are there any questions left? cheers, Andre 4. Re: Problem with sending Email from jBossAnkit Garg Jan 28, 2009 8:53 AM (in response to Ankit Garg) Thanks j-n00b for the wonderful explanation. I'll try this and tell you if I get any problems. Thanks again :)
https://developer.jboss.org/thread/80816
CC-MAIN-2017-47
refinedweb
1,205
53.17
MEMPHIS DAILY APPEAL SATURDAY. JULY 10, 1886. MEMPHIS APPEAL. SATURDAY, I I JILT 10, 1886. ORATIF1IKU laniCATIOKN. On Tuesday next the Democratic primary elections will be held to -lect delegates to the County Conven tion. The interest which the people are manifesting in the election of delegate! presigrsa g'orious victory in Augutt. Democrats in every part of the county are taking a deep in terest in the content. The people of the city wards and civil districts are holding meetings for the purpose of selecting the best men to be votad for aa delegates. In no former contest hns there he en so much interest mnni fested. Heretofore only a few votes were pilled in tho election of dele gates, but the indications are that on Tuesday half the party vote will be polled in the primary elections. Tho people of Shelby county are law abid ing, nnd they have aworn in their hearls tl.ut tho laws passed for the protection of society shall be enforced. The pulpit and the press are united in urging the nomination of the liEBt men, and this feeling ani mates the people in every part of the county. There will be a general turnout on Tuesday, and thus a fa'r test of tho will of tho Democrat y will be ascertained. Demo crats who fail ti intereat themselves in the election of delegates on Tuesday will have no right to complain if the result does not suit them. Let the bett men In each ward and civil dis trict be prerent and take an active part. A full vote at the primary will give us a good convention, which will nominate a ticket that will carry tve county by a largo majority. The first atep in the direction of success or faiU tire will be taken on Toes Jay. Let it be In the right direction. KUHNIt UKABFINM AUAIN. KiiHsia is the mott dangerous power in tho world. O.her countries have learned moic or less to rrgard the pres ervation of penes aa the beat national policy. Ituuia alone followp, at this day of changed ideas, the maxims of barbaric and semi civilized nations. It seeks aggrandisement by making war and by seizing tho possessions of others the good old plan, they shall tako who mny, and they may keep who cm. It is ever In mischief; now it bullies Turkey, then it worrioj the people of the Ilalknns, then it is sub duing B3ine hordo in the deserts, and it is continunlly extending its pro vinces and adding to it already for midable power, llu'gria is kept in continual hot water, owing to the Czar's dislike of Alex ander, it) King. Rim ian agents are reported as swarming hi Macedonia. The peop'e uf Servla are in a state of Bemi-rebellion against their King, Milan, whom they compelled to fight Bulgaria and now refuse to meet the expense by opposing the payment of the war tnxej. They nro lees patient than wo, who tamely submit to pay war taxes a quarter of a century after the war is over. That Russia is help ing the disturbance is likely, While thus making or encouraging disturb ance at the very doord sp of Turkey, Russia is mercilessly dunning the Sul tan to pay money owing for indemnity it incurred the latt time Russia In vadrd it. At tho conclusion of the Crimean war Russia agreed to keep its Jleet out of tho Black tea ; come yeara ago she announced that sho would no longer observe thnt pat of tho treaty, and sailed her warships there as before. At her last war with Turkey shesoized theouly good harbor in it, Batouin; the treaty of Rerlin unsigned thnt port to Russia on its engagement that the place should ha a free port, open to all nations. filio has now broken that pledge, nnd has made Cntoum a close poll. Hho is alto massing her troops ard is expected ti make war on Aus tria, while tho cloture of Baton in is believed to be aimed at Knglnud be cause thnt country favored Bii'gvln in opposition to Russia. It loofah u, time that the l'uoiieau luliuis, lor tbe'rown s f ty, ahou'd unite to put down this bloodthinty, treaty break jnglpower, and putting it down as they put down Bonapaite when ho disre garded hit engagements. Russia wants serving as we Bhould like to aerve tho vast estates smuggled by the monopolists dividing tip. THE REM tl)T FOR NrKIKKH. A better demaud for merchandise and the proipect of a good fall trade has increared the demand for labor. This, with the miction that follows struggle, has caused a lull in the label ngitation. It is but a lull, and not a cessation; to believe otherwise will he to err. Tho hst thing that can be done during the lull is to prepare against the renewal of agitation by providing a safety valve that will rn ducs tho number and decreoaa the virulence of strikes. Thosa who have had labor troubles to deal with much longer than th a United fstatcs have found conciliation and arbitration tho only means of satisfying discontent and allaying etrile. Conciliation is brought about by the mesting together of the representatives of the employers and the employed, who together discuss diflerences, and where possible make mutual con cessions and so arrive at a fair decis ion, acceptable to both parties. If the ' process of conciliation fail arbitration is resorted to. The lmdon corre spondent of the Hjfcton Admliter writes to that paper the result of In vestigations be has been making as to the construction and working of the reconciliation and arbitration bodies. He has received valuable aid from the president of the Board of Trade, Mr. Mundella. Mr. Thomas Burt, for merly a miner, now a member of Parliament, has supplied him with in .formation reporting lubor dispute settlements, in which ltiista ed that in the north of England coal trade op to twenty yeara ago strikes were very numerous, but for many years pant they have been practically unknown. When diflerences arise at any paitica lar mine, six representatives each from employers and men meet tcgetber with an independent person as chair man, who has the casting vota in t&ta ol a "tie." It hss resulted as one set tled point of this calm way of settling disputes, that the best way of arrang ing wagfs is by the sliding scale, wages rising or falling accord ng to the au thor'zed quotations of the colling price of the material produced. Where tho nature of the business admits of this plan, it is found more eatisfactory than any other. In same trade departments in this country the scale plan has been adopted. In the north of Eng land they have no permanent board of arbitration, but many think one de sirable. Only ones of late years in Northumberland has there been a strike, tho employers refusing t9 agree to arbitration. When arbitration Is resorted to, each side ap points two arbitrators. If these fail to agree they appoint an umpire, whose decision is final. In that country and in Durham there bave been only eight resorts to arbitration since 18711. The awards there were Invariably accepted, not a mino Stop ping work for a day. In other parts of England there aro some variations cf detail from this northern system, but the general mode of proceeding Is about the same everywhere. There is an investigation of the circumstances in dispute, points of agreement are brought oat, and where employers and men aro on amicable terms, fair c oncost ion settles the rest. Where there is a general question of the rise or fall of wages, the proceedings are conducted with special care and do liberation, and usually without undue exci'oment. The men continue J their work while the repre sentatives settle the matter. If no settlement can bo arrived a', then a strike Is reported to. In ono district tho judge of a court is the umpire. When conciliation fails the men who know having been unable to decide the ma'ttr, they ask the man who does not know to decide it for them. While there is no angry or excited feeling mnong ourselves, and men and omployers are working in harmony, our manufacturing and transportation people Bhould agree upen come (iich plan es the English bave found to an swer, so as to avoi J future stiikes and tumults. WATEH. Fkrtinint to the water qucition, which has been opened nunin by President Latham, comes the news that th s corner stone of tho new wa'er works of Amsterdam is shoitlytobe laid, much of the work having been done. Tho water is taken from the river Vocht at a point three miles from the site of tho pumping and fil tering works. "At this point," mys the IOihIoq Timet correspondent, "the water will be lifted into deposit ing reservoiis having a united ca pacity of 18,000,000 gallons. After haying m filtered through filter beds of novel construction, the water will pss into a series of covered pure water reservoirs. From these it will be pumped up a stundplpe 232 feet in bight, and convoyed tlier.ee through two lines of parallel mains, twenty seven inches and twenty-four inches in diameter, respectively, into tho city of Amsterdam, a distance of about seven miles, The separate system of Biipply will involvo the laying of up ward of 1-0 miles of distributing pipes, including the exceptionally difficult work of crossing u:.dor no fewer than 100 of the uiuals with which Amsterdam abounds, some of which are of grot width and depth." H w ill thus he seen thnt to gst pure water and cl nr much of costly work has to be done. But in the saving of life ami health it is worth every cent of II. I'LEIUO PS Ell MOM A. Why the t'nllle Bill Fnllrd to ls I ho Holms. CiiicA(.(i, III., July 9, In conse quence of the prospective adjourn ment of Congrees without considering the bill to authorise the Bureau of Animal Industry to extirpate p'euro pneumonia, the llrtrdrrt' ilivette, of this city, charges the Hon. Wm. 11 Hatch, tho Chairman of tbo llouss Committee on Agriculture, with bav ins obstructed its passago and delayed action upon it until too In'e to sunire consideration. Ilia lion. IK W.Minuli. rre i lent of the ConsolidatodNationiil Cattle lirower' Association Bnd tho Horse and Cattle Growers' Association of America, who wasone of a onnn.t- tee maintained at Waslilnitton to nor feet and urge tholegMiiti in, publishes a c:rd in the Urtedtrs' Uuztlte of this week to the etlect that no open hos tility aa encountered in W aldington, ami tuat the committee only tailed in its mitmon through the policy of delay adopted by tin Chair man of the llouee Committee on Agtieulinra. lhe (la&tte contends that the uisappcintiutnt among cattle men is very g.eat m con sequence. It sla'es that an appropri a'iuii was ma le in one of the appro- prUtion bills authorizing the purchase and destruction of annua actually disced and to revent tho spread of ttie disease from any one St to to atiothtr: but tlM this is not satisfac tory, as there is no authority given for compulsory inspection or timiran- line, or lor preventing the sprfiid of lne Uir-eaee lnsule a Male, ana no ma chinery or details provided for under taking the work of extirpation in a systematic and tuect.ve way. yalrrlons llrire. Jackson, Mien., July 9. Military chc'ei here are excited over the dis appearance of Alexander Brown, jr., second lieutenant of Company 1). He was one of a c omm-tt'e appoin'ed to pay the bills incurred by the celebra tion of the Fourth of July, and had about fl'JH) in his pcssi B.ion. He has not been seen since Wednesday lie wa the founder ol the defunct Doily Telegram, In which he sunk con sult rdiie money, His relatives f ar foul play, THE BRITISH ELECTIONS. THE CONSERVATIVE GAISS IN CREASED BY LATER RETURNS. The Number of Unionists So Far Returned 319, Against 210 GladstonlnnB. Low now, July 9. Up to 2 o'clock this afternoon 202 Conservative, fifty four Unionists, 133 Gladstonians and sixtv-soven Parnelitcs had been elected to tho House of Commons. 'I he Torle have won Chippenham, Wiltshire, Maldon and Essex from the Liberals, Lord Henry Bruce defeating Bannister Fletcher (Uladatonian) in the former, and C. W. Gray beating E. B. Barnard (Gladstinian) in the latter. The two accersions make the total Unionist gains thiity-three. The Tories aid today carrying the English connt es by sweeping ma jorities. The Unionists to-day succeeded in returning Inverness buns, selecting Robert Binnatyne Fin'ay; Forfair shire, where they reelected James William Barclay; Falkirk burgs elect ing W. P. 8inc:air, and Hartlepool, they re-electlr g Thomas Richardson. M. Conway (Parnellite) baa been re elected tor Noith Le'trim, and T. It Gill (Parnellite) for South Louth. At 4 o'clock this afternoon the To ries had electfd 2(33 candidates, the Unionists fifty-four, the Gladstonians 133 and the ParnHllitas seventy. The Tories say they are confident of elect ing 320 candidates. Mr. Gladstone telegraph?, with refer ence to the I Huh question: "Wales nnd Scotland have seen their duty quickly. Eng'and will have ti learn hern, but slowly and painfully." The total Unionist vote polled np to 6 p.m. today was 1,01(1,281, and the totd Glad-ftordan polled wan t) l!l,-VJ2. Herbert ( ilad-tone, speaking at tbo Liberal Club this evening, said it wns strongly probable tbat there would be another elect. oa within twelve months. Mr. Sobnadhort writes: "Tho tide Ins turned Conservative, but there will be ano her election in six months." The Earl of Aberdeen, lora Lieu tenant of Ireland, has intimated that he expects to leave Ireland on the change of government. At midnight tno torn manner oi Unionists returned is 319, and of (ilad- Btmiaus 210. The Dailxi Neit " says that If Mr. Gladstone li nils his part v inn minor- l'y in Parliament lie will itouutte'S re sign. (torn Pvnrl ll. PAnis. Ju'y 9. The death of Cora Pearl, wno expired ye?tardav, was caused by cancer, Btie died in com plete poverty 4 hnlrrn In llnlf. Uom k July 9. The cholera returns for to Jav are Blind si, 127 new costs, seventy deaths ; Gatiano, fifty-two new cusps, twenty-two dta lis: Fontano, forty seven new . rases, foriy one deaths. Minister Grmaldl is visiting and Bii:coiing the BurJererH. Brilliant Ititnqnrt lo Mr. needier. London, July 9 Mr. Gillag gave a brilliant banquet to tho Rev. Henry Ward needier this evening at trie Metropolitan Hotel. United Slates Minister 1 no'ps. Justice etamey Mathews. Com til General Waller, Dr. Parker, the Rev. Mr. Haines, Canon Fleming, and a distinguished com pany, wore present, Mr. Beecher, who was in line nenttn ana spirits, ma le an eloquent speech which wns enthusimtioilly applauded. He made no rrferance to Ireland. Mr. Beecher will t'oliver his first lecture at Exeter Hail on the 19th inst., the subject be ing ' The R:ign of ilie Comaioa Peo plo." The app'ication for seats is 0 '.ormous. Moling the Tori of Balomn. St PuTiiasiicRO, July 9 It is offi cially s'nted that the cloBing of the port of Batonm does not constitute a violation ol the Berlin treaty. Ba t mm wts undo a free port under the inlliienee of circums'nnces which have entirely changed; tho present condition of tin nll'iirj of the poit is or erouB on the Tr.'BMiry. Tne cus toms cordon on the land side is pre judicial to thn materiftl nnd commer cial development of Batonm and the district incorporated with liuseia alter the Russn Turkish war, and the naph tha t'ado, which is an important one for TrnnBcaiHMs a and foreign con sumers, has been Beriou-dy afi'ojted. The people als complain nt the octroi duties. Considering all ttioie c rcum- s'auces, Kiis ia cannot O7erlook tho fact that article allot tl'0 Uorlin treaty, is an exception, inasmuch as it was not the result of any understanding, but of a free and spontaneous declaration that KtiKsm was willing to make Hutou m a free port. Too ndvnntages which wore then contemplated on a guarantee of the contiaUing powers cannot longer be considered, bs, sinee tho abolition of Caucasian transit, Batonm has ceoaad to be an entrepot for foreleu eoods between Europe and Persia, and tins only retained the import trade, .therefore, external in terests no longer induce Rusiia to con tinue to ninke sicntices to the detri ment of the country around Bntoum. Kight years' experience has shown the injury result. ng from making Bitoum a five no.t. There is no reason to doubt the necessity of ending the ar rangement. SPOUTING NEWS. nwlitrrd Oil' on Aocouul of Ilml V rHllirr. Chicago, III., July 9. In conse quo; ceof bad weather today's racesat Washington Paik bavo been dictated oil'. The rains today had so much cflert on the track thai owne s wdre afrnid to mako ent. ieu f r tomorrow. With a continuation of the weather as it is torigbt, however, tlurs will probably be wood lacing. K.i'rici nnd weiyh's: Fi rt A'cnv . O n e in i le. W i t li ro w ( 1 03 ) , Ira K. B-id i (10-11, Margo (107). &Toni ii'iic'. Ons mile and a quar ter. Lijeio (US). This will be a walk over. Krlra liver Selling purse, seven e'ghths of a mile. Kutries will close at noon S -turday. Third Jiiirf. The Columbia stake. Mile and three-qnarteis. Binnttte (1111, Grv Cloud (110), Lir..ie Dwver ( IDS), Modesty (118), Luckv B (122), Volants (US). Fourth KKf. Mile beats. John Snibvan (112), Lotipo (1221, Ktlie H. (110) llooibiaek (113), Biduv Bowling (110), Bicvet (113), Irish Pat (113), H.podale(U0.) Fifth A'air. Steeplechase. Ascoli (IV)). Jim Carl sle (1-M), Sun Mar (140), Rnshbrook (112), Birepalns (140), Claude Brauiion (BCi). R ry O'Moira (H7), R ck (140), Ernpiie (130). llrlgblon llrarh Karra. Bhiohtok Beach, N. Y., July 0. Fir ftact. Three qiiartnra ol a mile. Joe Murray won by hull a length; Marsh Redon second, Adela third Time-1 :17. Second Iiact. fselling race, three quarters of a mile. Charlie Russell won by half a length ; Goblin second, Emraet third. Time 1 :17. Third toe. Selling race, eeven eighths of a mill-. Berlin won by half a length; Herry Rnesell second, Lord Coleridne third. Time-1 :29J. Fourth Race. Selling race, seven eighths of a mile. Broughton won by a length ; Miller second, Keokuk third. Time 1 :30. Fifth Race The Btulevard stakes for two year olds, three quarters of a mi'e. Daly Oak won by half a length; Daphine second, Magyar third. Time 1:17. SLith Race. Handicap, one mile and an e ghih. Ernest won by half a ltngth; Kentinpton second, Malaria bird. Time-1 :57j. Seventh Race. One mile. King Vic tor won by a length and a half; Santa Clans second, Lookout third. Time :46J. "BOB LOOSE V" IN BROWNSVILLE. lie la the Hun the Democrat Hhonld nominate for Uovernor. Brownsville State l)emcrat: Our readers will find elsewhere in the States Demoirat today an interesting personal sketch of Col. Robert F. Looney, of Memphis, Wet Tennes see's candidate for Governor. He vie ited Brownsville this week, when be met many of our citizens, who ex tended him not only a cordial wel come, but gave him substantial assur ance cf hearty snppoit in his canvaes. Col. Looney has visited various portions of the State, and every where be has been received with the kindest consideration. Bs ing the only candidate from Wist Tennossee, and in every way fitted for the high position, be inures as one of themrst formidable candidates now in the race. His own coutity, Shelby, with its seventy-six vote", will go into the convent'on solid for Col. Looney being instructed to vote for him so long as his name remains before that body and with his own eoction prac tically unanimous and with a Urge following from Middle Tennesee and some strength in E tat Tennessee, he wi l, at the outset, hs the tt oogestman presented. Col. Loooney is no chronic office seeker. He never held office but ttill he bas always eivon his time, energy and means to the Democratic party. Ho believes that the tendency of the Rmublican party is to foster monopolies and thus to make thn rich richer and the poor poorer, and that tho Deniosratx party is the party for the people. The party cou'd not se lect a better man than Col. Looney es its standard b.arer in the coming race. Ha is able, worthy, eloquent and ablo to cope with the Republican nominee. Fire at llitvcrlilll, Man. JIavkiuiii.l, Mass., July 9. Fire broke out in Longfellow & Co.'e dining room in the Saltonttall block, No 65 Merrimack street this afternoon. The flames quickly eprcad to Sheldon & Sargent's clothing store, the Pacific Tea Company's ttoro, G. F. Cleveland & Co.'s ahoe store aud C. C. Morse & Sons' bookstore in tho fame block, and completely destroyed the con tents of three of them. The upper floors wore occupied as offices, and all their contents were destroyed. Loss, $00 000: insurance, about half. While this fire was in progress another, of incendiary ori gin, was ditcoveicd in the Delaware Houee stablo, which, together with some other wocden buddings, was consumed. Loss about $10,000. Ilrl I'roteatlna Hla Innoceuae. Knoxvii.i.k, Tknn., July 9. Jack Lambert, a painter by trade, was exe cuted at Charleston, N. C-, today, in the presence of several thousand peo p'e, for the murder of Dick WiLjon, twenty months ao, in J.ickson county Lambert had been drinking heavily the dny of the ki ling, and had a grudge against Wileou. Ha left a s'a'cment pr-t sting his innocence and charging another parson witli the mu rder. Nenantlon In Nrw York. New Yobk, July 9. At 1 o'clock to night a woman 50 years of age ran through Suydtra street, Brooklyn, shouting "murder," nnd pursued by thre men. At the cprner of Elm and Suydam streets she fell dead and her partners eecared. The woman's ldft wrist was frightfully gashed. No clew to her identity nor to lnr asndants is known. Nlnbned 1 1 In Noil. Baltimokb. Mo., July 9 Henry Myers, an agad shoemaker, quairoled touight with his ton, Henry, ir., over a small amount of money. The eon 8 ruck his father and the latter picked np a knife nnd stabb d bis Bin, killing him. The futhcr gave himsall to the police. The Iw Torn Hop t'rop. Utica, N. Y., July 9. Report from Montgomery county t the Jlrald show that the bop crop in that county will be almoat a total failure. Mauy growers say tbat it will not pay to pick the vines. They are confident that good hops will command 2o cents, or more. Republican Knllllcailon Storting at Hruwnavllfe, lUPICUL TO THK APPEAL. 1 BaowitgviLLK, Tknn., July 9. The Hon. J. J. Litt'eton and otherj will a (Id ret a a Republican ratification meet inn here tomorrow. Committees of roception, etc., hnve bf en appointed to reeeive the delegntes to the Domiicintio C-mgresnional Con vention, to meet here September Nth. In the Drar ! Day. We ditler in creed and politics, but wo are a unit all the fame on the de sirableness of a flue head rf hair. If yon mourn the loss of this blessing and ornament, a bottle or two of Barker's Hair Balsam will make you look as you did in the dear Id days. It is worth trying. The only standard 50 centa article for the hair. Died In Hnrrllile Agony. Kaksas City, Mu , July 9 Milton lnns, a fa'in Itibi rer from Southern Miwourl, di.'d in great rgony from hy drophobia at the polictt etation, in lb s city, this af'ernoon. A mad i-tme was ai'P'ied hut night and apparently took eh't, hut bs owner sa il the pa tient had come t'-o ltn. Lpnobobo's perfume, Edenis Lundborg's perfume, Alpine Vlole. Lundborg's perfume, Lily of the Valley. Lnndborg'e perfume, Marchal Kiel Ruse. nrrllna to be Manchtared. Momi.E, Ala , Jn y 9. M. D. Wick eraham, nominate;! for Stat Auditor, mil) (i, ne;al (1. M. D.iskin, nnmin:it?d for AsH.xi'e Jndge of the Supreme C'-urt, by tne Republican Slate Ex ecottve Con ni t e at Hirmhwhhuo yectordxy, ata e tliat they were not candidates, arid ar no' caudidates for Iheto or any other offices. DEMMDED HIGHER WAGES THE AUGUSVA (OA.) COTTON HILLS CLOSED ON Acconnt of a Strike by the Em ployesThe President's State mentLabor Notes. Augusta, Qa., July 9.- The hands in the picking room ol the Augusta factory s:ruck today for an advance of ten per cent, in wagp, President I'henisey having replied to their de mands tba . he could not grant the cd vanre, that trie mill has lost in two years and a ha f nearly $100,000, and it was impos-ihle, w.'thout further loss to tbe stockholders, to increase tbe wages of operatives. He si) a: ''To ask us at this time to advance wages would be to isk to con tinue indefinitely, not non pay ment of dividends, but a pro cess of consuming the permanent investment of the company, for w- tell you sincerely that tbe earnings of the ompany will not hear any in crease of wages. Master Workman Maynardier as ei ts tbat the strike in the Augusta factory was n"t ordered by the Knights of Labor, He says be did cot know tbe picker hanos had a grievance, and tht he is opposed to strikes, In cous-quence of tbe ttnke the mill shutdown this eft r roon and will be clcscd tomcrrow The strike throws over 600 hands out of employment. Ten Ilonr a Itnjr'a Work. Chicago, III., July 9. The 1200 employes of the Rook Island shops, in the Town of Lake, have been notified that the ten hour rule wilt go int effect Monday, and it is undeiBt od thit tbe wages will be p'Oportionately increased. Tre increase from eight to ten hours a day is siid lo have been oidered on a count of the great pres sure of business. Some of the employes are dis.-ati.iied with the arrangement. Strikers Acquitted In Texas. Galveston, Tax , July 9. A e pecial fiom Paltstine to the Newt Beys: In the County Court this week six of the la'e striker were acquitted of unlaw fully assembling aud noting, and the County Attorney nolle proreqnied twenty othor cases. Thn partita ac quitted have eha'g-a pending agaimt toein in the Dist.ict Court lor killing an engineer and obs tructing ti attic. The Window Diana Workers. riTTsnuBa, Pa., Ju'y 9. The mo:-t impoitant matter c iinidoted at the Convention of Window CLvs Workera today was the question of reducit g the aseessment for the deferse fund. The E n ern ddlegatrs favored the re duc'.icn, but the meauire was defeat ed by the Wes ern delegates. Ihe fund amounted to about $25,000 latt la it year. Penuaylvanla IMInlng Troubles. Pittsbuko, Pa., July 9. At a large mass mee'i :g nf the miners t mployed alorgthe Baltimore and Oh'ora lrcad, at Scott Haven tcday, it was decided to foree a'l rf the operatives to sign the Columbia cca'e, by makiig tue strike gsieral. A committee was ap pointed to call upon the miners work ing and pereuale tlietn to come cut for the demand. Tbe managers of Scott'smines offered their men 55 cent per ton f ir a year's contract, but the men refused to accept the proposition. Important Action t'oncernlDar Fire insurance. New York, July 9. The Commercial Bulletin says that the New Yoik fire underwriters have jutt agreed upon important action concermrg fire in surance. All the companies and agents doing business in New York and Brooklyn bave signed an agree ment to stiblisn raMngs on all prop erly in the metropolitan district and to reduce b-okerj commissions to 10 percent. The lest s'gna'ure wa3 ob tained by the committee th s morning. LIITLE ROCK, ARK. The Tate Plantation Troubles at an End. 8PI0UL TO THI APPIAL.I Littlb It -CK, Ark , July 9. News from the Tata place today is that peace has b en re-established and the men hnve returned to work, a man named Walker, n-s'd'ng six miles out on th Ta-e r. ad, fou..d a nota pinned to h s gate ro.-t 1'uenday warning him that if bo did not quit objecting to the Kniglit of Labor talaing to hia hands he would be taken in hand and put where Sntritf Wo.'then could not got at him. lie de iros that officer's pr i tection. Th ncto was evidently the work t.f some of the i egroes there on a strike, and Mr. Roberts feels per fectly eisy over the mat er. TUB NEGRO GILL shot bv Deputy Sheriff Kinkead on the Tats place Monday, and brought to th s c'ty nnd placed in jail, I ad a pr'imiiia y hearing today and was (lifcharged. Sheriff Woithen and Deputy Kinkead were examined, both witne s s giving an acc unt of the prieoner'a act ons the morning he was shot, and w Inch led to the shooting, but Judse Y e-er thought the negro was e xc.ired ht ihe time, tnd he b ing the only one hurt, and tl;o trouble over which bo received his injury now being settled, thought he could with eafe'y li t him go. The action of the Juetice will be generally indoised. Will rue for Damage. Dic.hv, N. 8 , July 9.-Ni tico was of cu'tonis of this port, t. day, by tho Att rn y of Josse L.ruis, owner oi tne (iien(estr, Mass, fidiing schooner David J. AdamP, seizal by the Canadian nnrhorities ln.t M-ty, i-.n 1... y...l.1 timiln an H' t.w.ll li IIP ag.nnst him (Vie's) in the court nt D g.iy, h ying his damngos t $12,000 reMihii.g frjiu the seizure. It is un derwood tnet the Canadian Govern ment has abandoned 1 er case agiinst the Ad ms, and it is rujiored t'oat her reicasd lias ueen oroer. u. Emcllement Over the Whlnhy Qnes tlou at Atlanta. Atlanta, Ga., Ju'y 9. When the t il 'i,.nn. Iiipnuii prnireil in this county, Jane Itt'th, Bi'veral wholeea'e lueusts were in tore, b min iu iun three mo .ths. The Kimh.a'l House J.-mpary secu ed an inte'i tt in one ai d rppued a room for the sale cf liq iors and beer by the quart. One otner fiim did likewise and gre t ex-citm-nt was created. A thousand miiD ci niro wtsd aronndihe Kimball House 10 dis' ubs tben at'er. Appli cation was made for an injunction re strainiiig th police aad a temporary order a . granted for a hearing on the 17tn. Meai.time, the sell ng has been rasumed. jonlsvllle Vameat. rurtoxi rcinoc. ' Foundations, col'.awi'Ji'aad baild. ngs subject to overaorinoaldibs ecii strncted with LouierUlgCisiuiU Xi u the standard. icC ormick MACHINERY FITTINGS, ENGINEERS' SUPPLIES. OEGILL BROTHERS & CO. HARDWARE AND MACHINERY. PotMh Victim. Cored by 8. S. 8. S.-S.S. vs. POTASH. I hare had blood poison for ten jnt. I know I have taken one hundred bottles of Iodide of potaah In that time, but it aid me no good. Last summer my face, neck, body and Umba were covered with sores, and I con Id scarcely use my arms on acconnt of rheu matism In my shoulders. I took S. 8. H., and it has done me more good than All other medi cines 1 havo taken. My face, body and neck are perfectly clear and clean, and my rheu matism Is entirely Bone. I weighed 116 poands when I began the medicine, snd I now weigh 169 pounds. Vy first bottle helped me greatly, and gsre me an appetite like a strong man. I would sot be WlUwut S. S. 8. lor several times Its weight in gold. . (j ,C. B. MITCym, W. ad SU Ferry, New TortrTv; UNIVERSITY OF M1SSIPPI AID HIE BOARD OF TRUSTEE V MISTISAiEMENT. How Ihfy Ilnve done From Bnd to Worse Tne Folly er a Dual Njs tem of Government. To the Editors of the Appeal: Oifobd, Miss., July 8. As your paper h?s a vary wide escalation in Mississippi, I nhh to call attention to an article which lately appeared in it in regard to tbe rxtraordinary action of the Board of Trustees. Perhaps the woid extraordinary wruld be inap propriate to tliis board, though it would apDly io any ottier. I do not know whether to cdl it an experi mental or an e.tperimtnling b ard. Our corernots seem ta bave thought that lawytrj alone were cumpetent to or ganize and conduct an ins'.itntion of learcicg, and they have tried tbe ex periment of making our body of tuis- e' s adore corporation d lawyer), l be ieve that by tbissnus arcidenial side wind, two pt-rsonp, not lawjer', ha7e been drilled mto tbe board, and one of them is rf ported to have ntd es toon as he learnea the material of whi'jh the body was compoEed. Now lawyers are very ntcastary perrons, si to speak, ii dispensable in some casee, our Governors rjera to think in all catcs yi t they would hadly Belrct them as arch tect', or n'anagers of a farm or railroad coramisrioners, or indeed intiu-t to them any of thtir private interests except their busicess before the courts. This experiment of our Governors i f iutruS'tinn our inhtitu tions of learning to lawyers alone has been a most egr. gioua failure, f s is ap parent t) a 1 ni :n. As the boaid was an experiment it was but natural that it should be an experimenting body. l.Vrr open to innovation?, and regard lers of the reasons of experience, ihty tried the "beiictk" p'an built cabins, sent nut missionaries, and set all the machintry in motion to produce a failure. Then thty wete eaten up with the coeducation sjfitem, which has become 'Vmall by degri es and beautifully leps." The girls are invited to Oxford, but the boys are not wel come in Columbus. Than came an experiment of reducing te salaries of, tie prifesso s, in orJer that their efficiency mipht he onbanced. The re sult of this experiment was the expul sion of all the proles. ors in a body, and en experimeDtal rer raniz 'tion. The only paiadel cife i the Keely mo'or. which claims wonderful occult pgwer of ueefnlnew, but thus far has only developed failures. I may a 111 ict you wi h another exper mental ar tice. msBissiri'i. Intcrunllonnl IllinetalilNt Eeaaoe. Cincinnati, O., July 9. At a meet ing of the Intjrna'ional BimeWlitt Le gue.held here yof.tt-rday, the Hon. William 8. Groesheck p'i siding, the following resolution was unanimously adopted: Rttolvtd, That the compulsuy coin age of silver by the United States, under the Bland law, as a measure to na ore si vr to its old historic posi tion, is now, af'er eight years of trial, a demonstrated failure. Therefore we, as bimetallism, ask that tbe coin use of silver dol ars by tho United bta'es be suspended, waiting concur rent action among ihe great commer cial ii a ions, as the only means of se curing tt o purpose of this league, namely the restoration of both gold and silver to their proper rlaces as full legal tender money, with free coinage. Telephone Unit at I'hlcnao. Cuicaoo, III. July 9. Jndge Blod geit entered a decree this morning in the raw of the Am?riran Boll Tele phone Company against Fredeiick W. Baird and Frank Dillon. The court hold that (he Bell patn.ts cf March 7, 1876, and Jamnry 30, 1877, were valid, ai d tbat tho deiHndants had infringed the 6fili clsi'ii of tbe firs', and third andtixihand eighth claims of the last pitent. Toe cmiphiinants hav ing waived a right to a mietiT t.i de cide dimnges, the couit 8'iFBed a pomina' Cue of $1 and pcrpetudly en joined the do'endants from furlher use of tho claims net lorih. Aril ice to MottieiM. Mrs. Winslow's Soothing Svrup ahould always be used when children are cutting teeth, it relieves the l.ttle siiflerers at once; it produces natural, quiet sleep by relieving the child from pain, and the little cherub awakes aa . Twenty five cents a bottle. The Panama Canal Scheme. Paws, July 9. M. de Lsps bas reqnt s;ed Prime Mini sti r de Fr? cinet to withdiaw tbe I'anan a canal lottery bi'l, but he reserves the right of ap pealing to the vublic to subscribe to a special iseue cf Panama canal shares. infer. The Panama Canal Com pany has decided to itsue bonds in stead of raining lottery loan. lowers I "J CATJTION. Consumtri ihould not confuse our Sptrtjl; witA Vu numerout tmitaiioni, rubttituttt, potath and mercury mixturet which an got Un up to $eU, nol on their oim merit, but on the merit of our remtiy. An imitation U alvayi a fraud and a cheat, and they thrive only at they can iteatfrom the article imitated. TreatUe on Blood and Sldn DUeatet mailed fret. For tale by all druggUU. " TUB SWIFT SPEC1FW CO., Drawer 3, Atlanta, Ga. FROM THE PEOPLE. Politics anil Boycotts. To the Editor! of the Appeal : Why wan not Mr. Hook nominated by the Republicans? Too much boy cott; and tbe Republican party of ShelDy desorvps the thsnks cf all con servative citizens fcr putting its foot upen such prescriptive and intoler ant doctrine. Freemen will not allow any associa ion to diitite terms of la bor between employeis and operatives ; and it Is a source of sincere gratifica tion to find the more intelligent and C3DSfrvat;ve of the Knights of Labor promptly and boldly ignor ing this odious dogma of sime of tbe fanit c3 of tbat crder. Their action was timely, too, as several courts have ju;t decided that boycotting is a crime against the rich's, privileges and liber ties cf the people egiicst whom these crtzy lerdais would ftait this engine of cowardice and oppression. Shame I ishamel eternal shame upon any so ciely that will say to amanwiih a hungry family, "You can work if you w.ll agree to work on nur terms; but if you wieh to contract with Mr. A. on t9rms tbat may tuit yen both, then we will do all we can to keep you and your family hun cry." Let these chiefs of the boycot'era look over int) Arkansas, and see what their negro allies are doing there. They are on a "strike," and declare that they ill not work tbe crops unless at h'ghor wage, and tbat no one else shell -tlo bo. That's the boycotters style, is it not ? Sup pose the d' etrine was practically car ried out What would become of the crops over there? How would the negroes iive except by depredation and pil'age? Ate the boycotters o Memphis ready to encourage tbe ne groes to continue tbo strike? Are they leady to snd aid to tbe negroes, to keep the wolf from the dcor while the gras nnd the weeds are growing and a'l hands hungry? Now's the time for the boycotters of Memphis to stand by their colors. NEWS IN BRIEF. Indianapolis, Ind., July 9. f?nm Archer was banged at Shoals at 1:13 o'c'ock this aftetnoon. Cleveland, O , July 0. T!.e conven tion of sewer pipe manufacturers closed a two day's se sion at the Forest City House this afternoon. Pittsburg, Pa., July 9. Tbe em ployes of all the blast furnaces in this dis riot have decided t denund aa advance in wags of 20 per cent. The matter will probably be arbitrated. Omaha, Neb., July 9. The Bee will tomorrow print reports fn-m thirty coumies in Nebraska and Western Iowa, which show tbat tbe drouth is proving damaging to both small guiin and corn. The dry spell has extended all over the 8 ale, no rain having fallen in some portions fir more tban a month. ADDITIONAL 1JIVEKS. Cincinnati, O., July 9. Night River 16 feet 10 inches on the gauge and tailing. Weather c'ondy and hot. Saw Orleans, July 9. Night Ar rived: U.S. snugboat Chattahoocbie, Louisville. Departed : City ofNat'.hez, St. Louis. Louisville. July 9. Night River rising, with 8 feot in the canal and & feet on the falle. Weather clear and hot. Business dull. St. Loois, July 9. Night Biver fallen 2 inches, and stint's 12 feel 6 inches on the gauge. Weather clear and very hot. Arrived: Annie P. Sil ver, New Orleans. Nodsfaiturcs of regular packer. Caibo, July 9. Night River 23 feet 2 ' incheB on tho gauge and fal ing. Weather fair and Lot. Arrived: Iron Ag and tow, Memphis, norm; City of St. Louis, St. Louis, 0 p.m. Departed: Iron Age and tow, St. Louis, 0 p.m. Sennallun In thf rrrnrh Chamber or liennltrn. Paris, July 9.-In the Chamber of Deputies todav a man.who is supposed to be insane, fired a flint from a re volver. The bullet passed elope to the bead of the President of the Ol amber. The man was arres'ed. When ques tionedasto bis motive he said he wished to attract the attention of the pjhlieto hinmisTy. TamsmY PitiMRTMitNT, July 2, 1886. SEALEU PROPOSAL will be received at the Cu'tom House building, Memphis, Term .un'il 12 o'clock, nom. Monday, July a;, 188fi, for supplying inel, gas, ice. water, and miscellaneous articles reauired theretor durinn the fiscal year ending June 30, 18s7. blank forms and detailed nforraation may be had upon application to the Custodian of the building. , . Bidders for supplying fuel will be required to deposit ten (10' 1 er cent, of the amount of tbeir bid as a guarantee of good faith. The Department reserves the right to re ject any or all bids, or parts of any bid, and. to waive defect,. g jrAIRCIIILD, Acting Secretary. SOOKCtt Printers, Blank Book Manufacturers xml | txt
http://chroniclingamerica.loc.gov/lccn/sn84024448/1886-07-10/ed-1/seq-4/ocr/
CC-MAIN-2017-04
refinedweb
7,290
72.87
public class CostCenter { public string DepartmentID { get; set; } public string ModelName { get; set; } } List<CostCenter> DList = new List<CostCenter>(); DList = (from x in ctx.Inventory where x.CostCenterID != null select new CostCenter { ModelName = x.CostCenterID + "-" + x.ModelName, DepartmentID = SqlFunctions.StringConvert((double)(x.CostCenterID ?? 1)) }).Distinct().ToList(); Are you are experiencing a similar issue? Get a personalized answer when you ask a related question. Have a better answer? Share it in a comment. From novice to tech pro — start learning today. ModelName = x.CostCenterID.Value What is the actual datatype of CostCenterID in the database? Your casting and conversions in the Select seem entirely unnecessary. You've already eliminated any row where the CostCenterID is null, so why is the null coalesce (i.e. ??) necessary? It’s our mission to create a product that solves the huge challenges you face at work every day. In case you missed it, here are 7 delightful things we've added recently to monday to make it even more awesome. where x.CostCenterID.HasValue cannot implicitly convert type 'int? ' to 'int' I've tried several ways, but don't know how to get around it. CostCenterID allows nulls. Current Code Open in new window
https://www.experts-exchange.com/questions/28367800/Dealing-with-Null-values-in-a-database-when-using-linq.html
CC-MAIN-2018-13
refinedweb
199
52.97
When you declare a method as virtual, you are basically laying the path for it to be overridden. By default, or if there is no declaration, methods are non-virtual. Only virtual methods can be overridden. using System; class Music { public virtual void Play() { Console.WriteLine("I am virtual and I will be overidden."); } } class Pop : Music { public override void Play() { Console.WriteLine("My parent method is a virtual method and so I will be printed out."); } } class Test { static void Main() { Pop pop = new Pop(); pop.Play(); Console.ReadKey(); } } When you declare a member as virtual method, this method can be changed by an overriding member in a derived class. During run time, the program will look for a method in the most derived class for a similar method with the override keyword. When pop.Play() in line 22 is invoked, the method with the override keyword will be called.
https://codecrawl.com/2014/08/29/csharp-virtual-method/
CC-MAIN-2018-43
refinedweb
151
66.74
Things used in this project Story In this project we will use the Adafruit Starter Pack for Windows 10 IoT Core on Raspberry Pi 2 components to make a speaking light sensor. This will show how to use an MCP3008 Analog Digital Converter (ADC) chip to interface the Pi2 to three analog components. Two variable resistors (potentiometers) and one CdS photocell. Hardware setup Connect the Raspberry Pi2 to the breadboard and the other components as per the Fritzing diagram below. Note: While setting up the circuit, make sure your MCP3008 chip is oriented correctly. The chip has a half-moon shape marker along with a dot on one side. This should be oriented as shown in the diagram below. Optional If you have a pair of headphones with a 1/8" jack or a set of powered speakers with a 1/8" jack you can connect them to the Pi2 audio output jack to hear the prompts from the speech system. Code You can download the code starting project from and we will lead you through the addition of the code needed to talk to the web service and get your pin on the map. What map? Open up "Lesson_204\StartSolution\Lesson_204.sln" and open the mainpage.xaml.cs file. We have filled in a few methods as a starting point for you in this solution. If you want to jump ahead you can find a solution with all the code completed at:"Lesson_204\FullSolution\Lesson_204.sln" Add the following lines at the top of the MainPage class. // Use for configuration of the MCP3008 class voltage formula const float ReferenceVoltage = 5.0F; // Values for which channels we will be using from the ADC chip const byte LowPotentiometerADCChannel = 0; const byte HighPotentiometerADCChannel = 1; const byte CDSADCChannel = 2; // Some strings to let us know the current state. const string JustRightLightString = "Ah, just right"; const string LowLightString = "I need a light"; const string HighLightString = "I need to wear shades"; // Some internal state information enum eState { unknown, JustRight, TooBright, TooDark}; eState CurrentState = eState.unknown; // Our ADC Chip class MCP3008 mcp3008 = new MCP3008(ReferenceVoltage); // The Windows Speech API interface private SpeechSynthesizer synthesizer; // A timer to control how often we check the ADC values. public Timer timer; Now add these lines to the MainPage constructor to setup the windows speech synthesizer and the ADC chip. // Create a new SpeechSynthesizer instance for later use. synthesizer = new SpeechSynthesizer(); // Initialize the ADC chip for use mcp3008.Initialize(); Now add these lines to the OnNavigatedTo method. This will setup a timer callback which will call our code once per second on a different thread. If you do not want to add a pin onto the map, remove MakePinWebAPICall(); protected override void OnNavigatedTo(NavigationEventArgs navArgs) { Debug.WriteLine("MainPage::OnNavigatedTo"); MakePinWebAPICall(); // We will check for light level changes once per second (1000 milliseconds) timer = new Timer(timerCallback, this, 0, 1000); } Now we have the timer callback being called lets fill it out. private async void timerCallback(object state) { Debug.WriteLine("\nMainPage::timerCallback"); if (mcp3008 == null) { Debug.WriteLine("MainPage::timerCallback not ready"); return; } // The new light state, assume it's just right to start. eState newState = eState.JustRight; // Read from the ADC chip the current values of the two pots and the photo cell. int lowPotReadVal = mcp3008.ReadADC(LowPotentiometerADCChannel); int highPotReadVal = mcp3008.ReadADC(HighPotentiometerADCChannel); int cdsReadVal = mcp3008.ReadADC(CDSADCChannel); // convert the ADC readings to voltages to make them more friendly.); // Let us know what was read in. Debug.WriteLine(String.Format("Read values {0}, {1}, {2} ", lowPotReadVal, highPotReadVal, cdsReadVal)); Debug.WriteLine(String.Format("Voltages {0}, {1}, {2} ", lowPotVoltage, highPotVoltage, cdsVoltage)); // Compute the new state by first checking if the light level is too low if (cdsVoltage < lowPotVoltage) { newState = eState.TooDark; } // And now check if it too high. if (cdsVoltage > highPotVoltage) { newState = eState.TooBright; } // Use another method to determine what to do with the state. await CheckForStateChange(newState); } We have filled in most of the CheckForStateChange code for you but you want to add the call to the TextToSpeech helper method. // Use another method to wrap the speech synthesis functionality. await TextToSpeech(whatToSay); Now for the fun part of the speech API, making it talk! Modify the TextToSpeech method and add these lines. async () => { SpeechSynthesisStream synthesisStream; // Creating a stream from the text which can be played using media element. // This API converts text input into a stream. synthesisStream = await synthesizer.SynthesizeTextToStreamAsync(textToSpeak); // start this audio stream playing media.AutoPlay = true; media.SetSource(synthesisStream, synthesisStream.ContentType); media.Play(); } MCP3008.cs This is the class which will wrap the ADC functionality. First we will store the value of the reference voltage when we construct the new object. public MCP3008(float referenceVolgate) { Debug.WriteLine("MCP3008::New MCP3008"); // Store the reference voltage value for later use in the voltage calculation. ReferenceVoltage = referenceVolgate; } Then we will fill in the Initialize method to setup communication with the SPI bus controller. try { // Setup the SPI bus configuration var settings = new SpiConnectionSettings(SPI_CHIP_SELECT_LINE); // 3.6MHz is the rated speed of the MCP3008 at 5v settings.ClockFrequency = 3600000; settings.Mode = SpiMode.Mode0; // Ask Windows for the list of SpiDevices // Get a selector string that will return all SPI controllers on the system string aqs = SpiDevice.GetDeviceSelector(); // Find the SPI bus controller devices with our selector string var dis = await DeviceInformation.FindAllAsync(aqs); // Create an SpiDevice with our bus controller and SPI settings; } Now we will fill in the ReadADC method to actually read a value from the MCP3008 chip. public int ReadADC(byte whichChannel) { byte command = whichChannel; command |= MCP3008_SingleEnded; command <<= 4;; } And finally add a helper method which will be used to convert the returned ADC value (in units) into a voltage. public float ADCToVoltage(int adc) { return (float)adc * ReferenceVoltage / (float)Max; } Calibration Run the code and set the breadboard in a normally lit area. Look at the output window for the voltages being read by the ADC chip from the two potentiometers and the photo cell. The first number is the value being read from the low adjustment pot the second is the high adjustment pot and the third is the value currently being read at the photo cell. Turn the low boundary potentiometer, watching the value of the first number change. Adjust the pot until the voltage is a bit (at lease .2 volts) lower than the value of the third number. Now turn the high boundary pot, watching the value of the second number. You want this to be a bit (once again at least .2 volts) higher than the value of the third number. This has now configured a boundary zone where the value is "just right". Operation With the pots set this way if you shade the photocell with your hand the output should say "I need a light" and if you have connected the optional headphone / speaker you should hear the Pi2 speech. Removing your shade will have it change to "Ah, just right" (and speech). Shining a light on the sensor will change to "I need to wear shades" (with speech again). MainPage::timerCallback Read values 159, 324, 181 Voltages 0.7771261, 1.583578, 0.884653 MainPage::TextToSpeech Ah, just right MainPage::timerCallback Read values 159, 324, 149 Voltages 0.7771261, 1.583578, 0.7282503 MainPage::TextToSpeech I need a light MainPage::timerCallback Read values 159, 324, 372 Voltages 0.7771261, 1.583578, 1.818182 MainPage::TextToSpeech I need to wear shades References You can find out more about the Windows SpeechSynthesis API's at: Information on the MCP3008 ADC chip can be found at: Usage of the SPI interface: Schematics Code Project code for Lesson 204!
https://www.hackster.io/windows-iot/bright-or-not-cc55d7
CC-MAIN-2017-13
refinedweb
1,262
56.15
. 1. Modifying the model Modifying the model is a normal task in big projects. In this case posts could have an owner to distinguish the author of an article or to ensure only the creator of a post can edit the post. Let’s open models.py file inside posts folder and add a new field below publication_date: owner = models.ForeignKey(auth.models.User) Note how User is no more than a model provided by the auth application of the Django framework. With models.foreignKey we are adding a one-to-many relationship between Post and User entities. The model containing the foreignKey is the part of the relationship with only one of the other entities. This creates automatically a virtual property in the other part of the relationship but let’s return on this in a while. We could verbalize the relationship as: A Post is related with only one User As we know the nature of this relationship we can state: A Post belongs to only one User We are changing the model by adding a new field so the database should be updated. To do this automatically we use south. First type: $ ./manage.py schemamigration posts --auto The parameter auto makes south to try to discover automatically what happen to the model. In this case we are adding a field and south realized about this. The field can not be null but we are not providing any default value so the migration procedure is asking the developer what to do now. Choose 2nd option to specify a value manually and press enter. Now you are in a python terminal and you must provide a literal. Introduce number 1 (User 1 is the administrator user you create in the other tutorial). Now its time to apply the migration: $ ./manage.py migrate posts Run the server and go to the administrator URL to check how all the posts now contain a new field called owner. The new form field includes a green plus icon which allows the administrator to add a new user to the database without moving outside the post editing view. Once created, it is bound to the current post. Try adding some users to the database. Use the admin panel to edit their details and fulfill their names and last names and create new posts with those owners. 2. Improving post administration We can improve how to manage the posts in the administrator associating an administration model at the time we register the model. To do this, edit the admin.py file of posts application and leave it as follows: from posts.models import Post from django.contrib import admin class PostAdmin(admin.ModelAdmin): list_display = ('title', 'excerpt', 'publication_date', 'owner') list_filter = ['publication_date', 'owner'] date_hierarchy = 'publication_date' search_fields = ['title', 'content', 'owner__username', 'owner__first_name', 'owner__last_name'] prepopulated_fields = { 'machine_name' : ('title', ) } admin.site.register(Post, PostAdmin) The new class PostAdmin is an administrator model which controls how the list of all posts and the view for creating / editing a post should look. Note how the admin model is not bound (despite of the name convention) to any actual model until line 15 where we pass the model and the associated admin model. By overriding some fields we can change the information to show in the admin website or the behavior for the administration forms. Some of these fields are: - list_display establishes which fields will be shown in the list of posts. - list_filter adds a date filter to the right of this list. - date_hierarchy enables a navigable date hierarchy on the specified date. It is placed just over the list, below the search box. - search_fields points to the list of fields where the text introduced in the search box will be looked for. - prepopulated_fileds is used to populate automatically some fields in function to other fields. Observe line 11. The ‘__’ notation is used to reach fields from other models. In this case we want to search into the user name, first name and last name of the owner of the post. Thus, owner__first_name is equivalent to owner.first_name and owner__last_name to owner.last_name. A list of all options you can specify in an administration model is provided in the Django documentation. Run the server an try the administration panel for Posts. Observe how in the list of all posts you can click on the column headers to set the sorting criteria. Sorting is limited to static fields, no order can be specified on calculated fields like excerpt. 3. Adding commentaries The next version of Django has marked comment framework as deprecated so let’s anticipate this by implementing a custom one ourselves. We want commentaries for our posts. A commentary has a content, an author, a publication date and it’s related with the post which the commentary belongs to. So, go inside the posts application folder and open models.py. Add a new model as follows: class Commentary(models.Model): post = models.ForeignKey(Post) content = models.TextField() publication_date = models.DateTimeField(auto_now_add=True) author = models.CharField(max_length=50, default=u'The invisible man') def __unicode__(self): return self.owner + u'@' + unicode(self.post) class Meta: verbose_name_plural = u'commentaries' ordering = [u'-publication_date'] The field post is a foreign key setting one-to-many relationship between Commentary and Post entities: A Commentary belongs to only one Post As you see again, the foreign key is in the part of the relationship which has only one of the other entities. Note you have not modified Post model but, as a consequence of this foreign key, a virtual property is added to the Post class to represent the set of Commentary entities the Post is related to. This property is named commentary_set. We have to update the database before continuing so, in a terminal, run: $ ./manage.py schemamigration posts --auto $ ./manage.py migrate posts A form to add commentaries Forms are set of fields to be fulfilled with some constrains. We need to build a form with almost the same fields (some of them, like publication_date are filled automatically) so we are using a base class ModelForm able to inspect a model and to create the proper fields for the form. Inside posts application folder create a new file named forms.py and add the following content: # -*- encoding: utf-8 -*- from django import forms from posts.models import Commentary class AddCommentaryForm(forms.ModelForm): class Meta: model = Commentary fields = ('owner', 'content') Overriding the fields property we are indicating explicitly what fields we want to provide to fulfill. The order is important because it is the order in which the fields will be shown. We don’t want the user to complete publication_date nor post fields. The first one is set automatically meanwhile the second should be provided automatically because it is the post we are commenting on. The post view We are going to create a view to display a solely post and its commentaries. Open views.py and import the class TemplateView as well as AddCommentaryForm we created before. from django.views.generic import ListView, TemplateView from posts.forms import AddCommentaryForm Now add this code at the end of the file: class PostDetails(TemplateView): template_name = 'postdetails.html' def post(self, request, *args, **kwargs): return self.get(request, *args, **kwargs) # Overriding def get_context_data(self, **kwargs): context = super(PostDetails, self).get_context_data(**kwargs) post = self.get_post(kwargs['slug']) form = self.get_form(post) context.update({'form':form, 'post':post}) return context # Helper def get_post(self, slug): return Post.objects.get(pk=slug) # Helper def get_form(self, post): if self.request.method == 'POST': form = AddCommentaryForm(self.request.POST) if form.is_valid(): commentary = form.save(commit=False) post.commentary_set.add(commentary) else: return form return AddCommentaryForm() The TemplateView is a fully customizable view with the only intention to provide a template as answer. The template can be set by overriding the template_name property. In this case, we provide the template postdetails.html which we will create in the next section. By providing HTTP verbs we enable the view to handle different kind of requests. By default, only GET requests are allowed. The default behavior is to call get_context_data() method and send the result to the template as the context object we talk in the first part of the tutorial. As we need the view to handle POST requests, we provide the method post() in line 4. The method is quite simple and just delegates on get() method. By default get_context_data() returns a dictionary with the key params which contains the parameters supplied on the query string of the request. We need to include the post the user is visiting and the commentary form so we override get_context_data() in line 8. First let the default behavior takes place (line 9) and then get the post the user is visiting from the URL (line 11) and the form for commentaries (line 12). Finally updates the context object with the form and the post and return the new context. The method get_post() accepts the slug provided in the URL. In the next section we will create a URL schema to identify which part of the URL is the slug. Once we know the slug we use it to retrieve the only object whose public key (pk in line 20) is the slug. The get() method of a collection tries to retrieve a unique entity. If there is some ambiguity and more than one fulfill the constrains specified as parameters, an exception is raised. The method get_form() accepts a post to use in case of handling a POST request for a commentary. We need to distinguish two scenarios: - We are visiting a post (GET request) - We are sending a commentary (POST request) In any case (except if the form is invalid) we will return an empty form (line 32) ready to receive another commentary but if we are handling a valid POST request, we must add the commentary to the post. Fore so, in case we are attending a POST request (line 24), we try to construct a new AddCommentaryForm populated by the contents of the POST payload (line 25). If this form is valid (line 26) save the form to produce a commentary object (line 27). Parameter commit set to false is necessary as we are not providing the post field through the form so we cannot write the object to the database. Saving the form with commit set to false produce the Commentary object but avoid writing it into the database. In the other hand, if the form is invalid, the validation process put errors inside the form object so we return the invalid form. As I commented before, each time a one-to-many relationship is created, a virtual property named from the relationship is created in the other part of the relationship. In this case the property is part of the Post class and it’s called commentary_set. In line 28 we access this set and add it the new commentary. In this way we modify the relationship from the other end: instead of setting the post in the commentary, we add the commentary to the post. URL schema for posts Now we have the view, lets think about the URL schema. The URL schema is referred as the set of patters you use to specify how to build the URLs to navigate your site. It does not refer to the URLs but how we build the URLs. The schema we present here is simple and follows some REST conventions: - /posts/ – refer to the collection of posts - /posts/<machine_name_of_the_post>/ – refer to the post with the provided machine name Django uses named groups and regular expressions to recognize parts of an URL. So the regular expressions for former URLs are: r'^/posts/$' r'^/posts/(?P<slug>[a-zA-Z0-9_-]+)/$ The first one says something like: A string that start with ‘/’, followed by ‘posts’ and ending with ‘/’ The second one means: A string starting with ‘/’, followed by ‘posts/’, with a sub-string of at least 1 character of letters, numbers, underscores or hyphens named ‘slug’ and ending with / We could add these two patterns to alacola/urls.py in the same way we did in the first part of the tutorial but let’s split the URL patterns in such a way the part related with posts goes inside posts application. We ignore ‘posts/’ part of the URL because it could be named in several other ways and focus on the behavior after the slash: - No machine_name is provided, display the list of posts - A machine_name is provided, display the post with that machine_name So go into the posts application folder and add a new file named urls.py. Leave it like: # -*- encoding: utf-8 -*- from posts.views import PostList, PostDetails from django.conf.urls import patterns, include, url urlpatterns = patterns('', url(r'^$', PostList.as_view(), name='postlist'), url(r'^(?P<slug>[a-zA-Z0-9_-]+)/$', PostDetails.as_view(), name='postdetails'), ) Before continuing, pay attention to name parameters. They allow to provide a name to the URL in order to refer the pattern when needed. Now go to djangoblog/urls.py and leave it like: from django.conf.urls import patterns, include, url from django.views.generic.base import RedirectView #)), url(r'^$', RedirectView.as_view(url='/posts/')), url(r'^posts/', include('posts.urls')), ) We have changed some things since the last edition. Observe line 19. When accesing root URL, instead of using a custom view, we use a Django-provided generic view RedirectView to cause a redirection to /posts/ URL. When accessing an URL starting with /posts/ (line 20) we say Django to use the patterns inside posts application by indicating its urls module. Note how in line 20, the pattern does not say «ending with /» as we need to cover both scenarios: when a machine name is provided and when it’s not. When using include() function, not the complete URL is passed to the included patterns: the already matching part of the URL is stripped out first. So when /posts/an-interesting-article/ is requested, the /posts/ part matches the pattern in line 20 and the string an-interesting-article/ is passed to the patterns in posts/urls.py matching the pattern in line 7. This sub-string is then grouped following the regular expression group rules so an-interesting-article is given the name slug. Finally the dispatcher creates a PostDetails object and pass a HttpRequest object and a dictionary with the names of the groups as keys and the respective contents of each group as values. If you run the server, try going to the root URL and you will be redirected to the post list. If you try to access a post knowing its machine_name you will get a TemplateDoesNotExists error because there is no template yet. The post template At last, it’s time to write the template showing post details. Let’s first tweak the post list to include how many commentaries has a post. Edit postlist.html inside templates folder in the djangoblog application folder. Go and replace the line saying: <!-- Here will be commentaries --> With this new code: <p> <a href="{{ post.get_absolute_url }}#comments"> <span class="badge badge-info">{{ post.commentary_set.count }}</span> commentar{{ post.commentary_set.count|pluralize:"y,ies" }} </a> </p> Now replace the post header by: <h2><a href="{{ post.get_absolute_url }}">{{ post.title }}</a></h2> The attribute get_absolute_url of any Django model is a convenient way to get the unique URI for the object. We need to add it to the Post model so open models.py inside posts application and add the following method to Post class: def get_absolute_url(self): from django.core.urlresolvers import reverse return reverse('postdetails',kwargs={ 'slug':self.machine_name }) Do you remember the name parameter of URL pattern. Let’s refresh your memory: url(r'^(?P<slug>[a-zA-Z0-9_-]+)/$', PostDetails.as_view(), name='postdetails'), So with reverse() function you can provide a URL name and parameters for the named groups to rebuild the URL. This is what we are just doing here. Now revisit the post list and check how headers and commentaries redirect to the proper URL. Go to templates directory inside djangoblog application, create a new file named postdetails.html and add this content: {% extends "base.html" %} {% block title %} {{ post.title }} {% endblock %} {% block content %} <article class="post"> <h1>{{ post.title }}</h1> <section> {{ post.content }} </section> <aside> <p class="label label-info"> Published on <time>{{ post.publication_date|date:"r" }}</time> </p> <p> <span class="badge badge-info">{{ post.commentary_set.count }}</span> commentar{{ post.commentary_set.count|pluralize:"y,ies" }} </p> </aside> <h2 id="comments">Leave a commentary</h2> <section> <form action="{% url "postdetails" slug=post.machine_name %}" method="POST"> {% csrf_token %} {{ form.as_p }} <input type="submit" value="Send" /> </form> {% for commentary in post.commentary_set.all %} <article class="well"> <p>{{ commentary.owner }} said on <time>{{ commentary.publication_date|date:"r" }}</time>:</p> <blockquote>{{ commentary.content }}</blockquote> </article> {% if not forloop.last %} <hr/> {% endif %} {% empty %} <p class="label label-info">No comments at the moment.</p> {% endfor %} </section> </article> {% endblock %} New tags and filters are coming! To include a form in a template the simplest way, use form.as_p attribute. You can use other approaches to include the form in a table or fully control how the fields must to be displayed. The point is that you need to add the form tag and the submit button manually. Do not forget to provide the action URL and method! Pay attention to line 23. You can find the first new tag: url. The tag is very similar to reverse() function, it takes the name of an URL pattern and the content for the named groups and rebuild the absolute URL. This way we indicate we want to POST to the post’s URI. We could have used post.get_absolute_url but I want to show the url tag because it is a very useful tag for when we are not dealing with objects URIs. The second new tag is csrf_token. It is part of the CSRF protection and must be included in every POST form. It acts like a signature in such a way Django can recognize the post has been sent from the Django application and it has not been forged by an attacker. Tag for is an old friend but now we use the empty (line 36) tag inside to specify the behavior when the collection is empty. In the other hand, if tag is new. It is used to test conditions and take decisions in the same way you use if in the code. You can use several boolean operators. Inside a loop you can access loop variables such as last or first that are true if it is the last or first iterations. Now you have postdetails.html so you can run your server and access some post. Try to add a new commentary and see what happen. 4. Managing commentaries We are adding Commentary administration by registering it into the Admin site so open admin.py, ensure your are importing Commentary model and add this controller: from posts.models import Commentary class CommentaryAdmin(admin.ModelAdmin): list_display = ('owner', 'post', 'publication_date') list_filter = ['publication_date', 'owner'] search_fields = ['owner', 'content', 'post__title'] admin.site.register(Commentary, CommentaryAdmin) Try to run the server and add some commentaries and you’ll realize adding commentaries is a little bit annoying. It could be better if we could modify commentaries at the same time we manage posts. This is possible by adding inline administrator models. Inline administrator models allow editing models inside a parent model. So, add the CommentaryInline model to admin.py and modify PostAdmin to override inlines property: class CommentaryInline(admin.StackedInline): model = Commentary class PostAdmin(admin.ModelAdmin): list_display = ('title', 'excerpt', 'publication_date', 'owner') list_filter = ['publication_date', 'owner'] date_hierarchy = 'publication_date' search_fields = ['title', 'content', 'owner__username', 'owner__first_name', 'owner__last_name'] prepopulated_fields = { 'machine_name' : ('title', ) } inlines = [CommentaryInline] Now got to admin again, go to an existing post and test the new way of adding, editing or removing commentaries. 5. Enabling search To finish, we are going to reinforce these concepts with a new form but at the same time I’m going to introduce you to context processors. You will add a search box. The search form must be available to all pages in the site so it would be convenient to have some mechanism to add the form to any context sent to a template automatically instead of modify every single view in our project. Context processors are the mechanisms we are looking for. They are functions that receive a request and return a dictionary. This dictionary updates the context being sent to a template. Unfortunately the settings containing context processors is not in settings.py by default so you need to add it manually. Luckily, the Django documentation show the default value so open settings.py and add this to the end of the", ) Overview Think about the complete functionality: we are going to provide a search form for our whole site in the title bar. When clicking on search we are looking for posts containing the string into the search box and display them into a new page. Thus, we need: - An URL schema for search – let it be /search/?query=<search string> - A template to display the results - A view where to perform the actual search and get the results - A place in the blog to display the search box - A context processor to add the search form to all views - A form to be displayed as the search box In the future, we could look for other entities rather than only posts so we will add this functionality to djangoblog application instead of posts. From bottom to top… The search box form This time the search box is not based on any model so we are going to create a basic form inheriting directly from Form. Add a new forms.py file inside djangoblog application and add this content: # -*- encoding: utf-8 -*- from django import forms class SearchForm(forms.Form): query = forms.CharField(min_length=3, required=False) Here we can see how a plain form is defined. It has no much logic, we simply put the fields as attributes a set some constrains in field constructors. In this case a simple char field with a minimum length of three characters. The context processor Now we need to insert the form in any context so add a context_processors.py file to djangoblog application and leave it like: # -*- encoding: utf-8 -*- from djangoblog.forms import SearchForm def searchform(request): if request.method == 'GET': form = SearchForm(request.GET) else: form = SearchForm() return { 'searchform' : form } Now add the context processor to the tuple of context processors you add to settings.py before. The new item is: "djangoblog.context_processors.searchform", So with this context processor we are adding an empty form if posting or a filled one if getting. The search box template This is pretty simple, we are going to modify base.html inside templates folder in djangoblog application to include the search box form. The only difference is now we are not using as_p method but controlling how to display each field. Replace the line with the title of the blog: <a class="brand" href="#">My Django blog</a> with the following alternative content: <a class="brand pull-left" href="/">My Django blog</a> <form action="{% url "search" %}" method="GET" class="navbar-search pull-right"> <input type="search" name="{{ searchform.query.name }}" value="{{ searchform.query.value }}" class="search-query" placeholder="Search"/> </form> You can see how I provide the widget to display as search box manually only consulting the searchform property of the context when getting the name and the value. This way you have full control over the widget and its properties. There is no send button, pressing enter suffices. The results view More logic is needed to ask the database for those posts containing the query string. Add a views.py file in djangoblog application and add the following code: # -*- encoding: utf-8 -*- from django.views.generic import ListView from djangoblog.forms import SearchForm from posts.models import Post class SearchResults(ListView): template_name = "searchresults.html" # Override def get_queryset(self): if self.request.method == 'GET': form = SearchForm(self.request.GET) if form.is_valid(): query = form.cleaned_data['query'] results = Post.objects.filter(content__icontains=query) return results return Post.objects.none() As you can see this view is another ListView but this time we have overridden (line 5) the get_queryset() method to display only those posts which content contains the query ignoring the letter case (line 10). If form is invalid (line 8) or we are not processing a GET request (line 6), return the empty collection (line 13). This code deserves a deeper explanation of what are we doing. Before querying the database, the view checks if it’s handling a GET (line 6) request. If so, it tries to build a new SearchForm (line 7). Now look at line 9, we are assigning query from cleaned_data dictionary of the form. When a form is created, the data in the query string or POST content is converted to objects of proper type following the form definition. After validation, the valid values are stored in the cleaned_data member. Let’s explain more about querying a collection. In Django, accessing members of a model is done by performing operations over Model.objects member. We’ve already seen the method get() to access an object but if we want to get more than one instance then we have to use filter() instead. Other methods are available as well such as all() or exclude(). These methods return queryset instances and are chainables. Queryset are not tuples nor lists, they act like cursors inside the database so the memory consumption is low. You can pass named parameters to some methods such as filter(). Normally, parameters have the same name as some model field over which the method should act. A suffix starting with __ (double underscore) indicates a test and the value assigned to the parameter represents the value to test. If no test is provided, it is an equality test. Do you remember when using get()? Post.objects.get(pk=slug) This means: Gets the unique post whose public key is equal to slug Now using filter(): Post.objects.filter(content__icontains=query) Which means: Only those posts whose content contains (ignore case) query The results view We are almost done. We need a template to show the results so open templates folder and add a file named searchresults.html to the directory. Add these lines: # -*- encoding: utf-8 -*- from django import forms class SearchForm(forms.Form): query = forms.CharField(min_length=3, required=False) {% extends "base.html" %} {% block title %}Search results{% endblock %} {% block content %} <h1>Search results:</h1> {% for post in object_list %} <h2><a href="{{ post.get_absolute_url }}">post.title</a></h2> {% if not forloop.last %} <hr/> {% endif %} {% empty %} <p class="label label-warning">No results for your query.</p> {% endfor %} {% endblock %} The URL schema The last remaining thing to do is to bind the URL pattern with the view. Edit urls.py from djangoblog and add another entry with the following content: url(r'^search/', SearchResults.as_view(), name="search"), We are done. Start the server and try to find some text you know it is in some post and a random string to check what happen when there is no results. Search improvement using Q objects Before finishing, let’s improve a little bit the search mechanism. It should be great if, instead of looking for the whole string inside the post content, we looked for each word returning those posts containing all the words in the title or in the content. Modify the SearchResults.get_queryet() method in views.py inside djangoblog application to make it look like: def get_queryset(self): from django.db.models import Q if self.request.method == 'GET': form = SearchForm(self.request.GET) if form.is_valid(): query = form.cleaned_data['query'] words = query.split(' ') qobjects = [Q(content__icontains=w) | Q(title__icontains=w) for w in words] condition = reduce(lambda x,y: x & y, qobjects) results = Post.objects.filter(condition) return results return Post.objects.none() In few words, Q objects are used to build WHERE clauses in SQL. Two Q objects can be combined by using | or & in Python in the same way you use OR or AND in SQL. The explanation for the following code is, starting from line 5, as follows: - Get the query from the form. - Get the words in a list by breaking the string in the white spaces. - Make a list of Q objects to look for each word in content or title by combining using OR two separate Q objects. - Reduce the former list by combining all the Q conditions with AND - Filter by that condition - Return the results And now yes, we are done! Conclusions The long walk to here has covered a lot of Django concepts including advanced queries using Q objects. Even, there is lot of features you can not miss so I encourage you to read Django documentation. As in Python, before doing things by yourself, look for the solution first. Django is a very mature framework and it is probably a solution already exists or there is a partial solution you can extend. This tutorial does not cover a very important topic when working with Django: deploying Django. But there are lots of resources out there. I recommend you the official documentation again and the site. It has been a long way to this point. At last, the Django 1.5 in a nutshell series is done… or not? I’m thinking in adding a bonus chapter to these tutorials in order to cover REST APIs and AJAX. I will be partly motivated by the Python mentoring program I’m involved in. Let’s see. Finally, I’m very happy with my first incursions to English posts. Now its time to continuing translations. I’m thinking do it in visiting order. As I realized with these posts, it is very difficult just translating posts, more if I’ve learned new things since the time I wrote them the first time. Hence, I think the new coming posts will be better. See you in the next translation! Please send feedback to commentaries! Thanks a lot.
https://unoyunodiez.wordpress.com/2013/04/04/django-in-a-nutshell-ii/
CC-MAIN-2022-27
refinedweb
5,025
57.16
Matthias Dorfner wrote: > 1. some problems creating my xml file with the > correct double quoted string in the namespace, here one example: > > <?xml version='1.0' encoding='UTF-8'?> > <request xmlns: xmlns: > > I need exactly this output but not single quoted(' -> "). Here's the code The type of quotes used on attribute values during serialization depends on the serializer. If you use the .toxml() method to serialize the document, you'll get double quotes, I believe. It won't pretty-print though. print doc.toxml() <?xml version="1.0"?><request xmlns: xml.dom.ext.Print and PrettyPrint haven't been updated in ages. Well, they've been updated, but only in 4Suite (where they manifest as Ft.Xml.Domlette.Print and PrettyPrint), not PyXML. In PyXML they're hard-coded to use single quotes as attribute delimiters. I asked on the list a while back if there was any interest in bringing 4Suite's Print/PrettyPrint implementation into PyXML and didn't get much response. It's the kind of thing where if I want to see it done, I have to submit patches. > I use to create this one: > > dom = xml.dom.minidom.getDOMImplementation() > doc = dom.createDocument(None, "request", None) > > #Get the document element > msg_elem = doc.documentElement > > #Create an xmlns attributes on the root element > msg_elem.setAttributeNS(EMPTY_NAMESPACE, "xmlns:xsi", > "") > msg_elem.setAttributeNS(EMPTY_NAMESPACE, "xsi:noNameSpaceSchemaLocation", > "Handle.xsd") This isn't related to your quoting problem, but you are using the wrong namespaces. Instead of EMPTY_NAMESPACE you need to do it like this: msg_elem.setAttributeNS(XMLNS_NAMESPACE, 'xmlns:xsi', '') msg_elem.setAttributeNS('', 'xsi:noNameSpaceSchemaLocation','Handle.xsd') > 2. I want so post the above created xml structure to a webserver, but I > need to convert this first to a string, how can I do this? PrettyPrint > allows only writing this DOM structure to a file. Or is it possible to > correctly read out this xml file? A "file" in Python is any file-like object, basically anything with a .read() and maybe also .write() method for byte strings. This includes the sys.stdin, stdout, stderr streams. So to pretty-print to the screen, you could just do PrettyPrint(doc, sys.stdout) And to print to a buffer rather than an external file: f = cStringIO.StringIO() PrettyPrint(doc, f) To read from that buffer into a string, s: s = f.getvalue() Alternatively: f.reset() s = f.read() To read from an external file: f = open('yourfile', 'rb') s = f.read() f.close() Always be sure to call close() on your file-like objects when you're done reading or writing to them. (though not really necessary on sys.stdin, stdout, stderr) I don't know what web API you're using, but if you have access to an object representing the HTTP request, it might have a method that reads from a stream, in which case you could do something like PrettyPrint(doc, request.stream) to print directly to into the request object. Mike
https://mail.python.org/pipermail/xml-sig/2006-June/011529.html
CC-MAIN-2016-44
refinedweb
490
69.48
Servlet::Http::Cookie - HTTP cookie class my $cookie = Servlet::Http::Cookie->new($name, $value); my $clone = $cookie->clone(); my $comment = $cookie->getComment(); $cookie->setComment($comment); my $domain = $cookie->getDomain(); $cookie->setDomain($domain); my $seconds = $cookie->getMaxAge(); $cookie->setMaxAge($seconds); my $name = $cookie->getName(); my $path = $cookie->getPath(); $cookie->setPath($path); my $bool = $cookie->getSecure(); $cookie->setSecure($bool); my $value = $cookie->getValue(); $cookie->setValue($value); my $version = $cookie->getVersion(); $cookie->setVersion(); This class represents addCookie() method, which adds fields to getCookies() method. Serveral. Constructs an instance with the specified name and c hanged after creation with setValue(). By deafult, cookies are created according to the Netscape cookie specification. The version can be changed with setVersion(). Parameters: the name of the cookie the value of the cookie Throws: if the cookie name contains illegal characters or is one of the tokens reserved for use by the cookie specification Returns a copy of the object. Returns the comment describing the purpose of the cookie, or undef if the cookie has no comment. Returns the domain name for the cookie, in the form specified by RFC 2109. Returns the maximum age of the cookie, specified in seconds. The default value is -1, indicating that the cookie will persist until client shutdown. Returns the name of the cookie. The name cannot be changed after creation. Returns the path on the server to which the browser returns the cookie. The cookie is visible to all subpaths on the server. Returns true if the cookie can only be sent over a secure channel., or false if the cookie can be sent over any channel. Returns the value of the cookie. Returns the version of the cookie specification complied with by the cookie. Version 1 complies with RFC 2109, and version 0 complies with the original cookie specification drafted by Netscape. Cookies provided by a client use and identify the client's cookie version. Specifies a comment that describes the cookie's purpose. Comments are not supported by Version 0 cookies. Parameters: the comment Specifies the domain within which this cookie should be presented. The form of the domain name is specified by RFC 2109. A domain name begins with a dot (.foo.com), which means that the cookie is visible to servers in that domain only (, but not). By default, cookies are only returned to the server that sent them. Parameters: the domain name within which the cookie is visible client exits. A zero value causes the cookie to be deleted. Parameters: the maximum age of the cookie in seconds Specifies a server namespace for which the cookie is visible. The cookie is visible to all the resources at or beneath the URI namespace specified by the path. A cookie's path must include the servlet that set the cookie in order to make the cookie visible to that servlet. Parameters: the uri path denoting the visible namespace for the cookie Indicates to the if the cookie must be sent only over a secure channel or if it can be sent over any channel. The default is false. Parameters: a flag specifying the security requirement for cookie transmission Assigns a new value to a cookie after the cookie is created. If a binary value is used, Base64 encoding the value is suggested. With version 0 cookies, values should not contain white space, brackets, parentheses, equals signs, commas, double quotes, slashes, question marks, at signs, colons and semicolons. The behavior of clients in response to empty values is undefined. Parameters: the new value of the cookie Sets the version of the cookie specification with which the cookie complies. Version 0 complies with the original Netscape cookie specification, Version 1 complies with RFC 2109. The default is 0. Parameters: the version number of the supported cookie specification Brian Moseley, bcm@maz.org
http://search.cpan.org/~ix/libservlet-0.9.2/lib/Servlet/Http/Cookie.pm
CC-MAIN-2015-32
refinedweb
633
56.66
/* Declarations for getopt. Copyright (C) 1989, 1990, 1991,. This file was modified slightly by Ian Lance Taylor, November 1992, for Taylor UUCP, and again in June, 1995. */ #ifndef _GETOPT_H #define _GETOPT_H 1 #ifdef __cplusplus extern "C" { #endif /* Ian Lance Taylor <ian@airs.com> added the following defines for Taylor UUCP. This avoids reported conflicts with system getopt definitions. */ #define getopt gnu_getopt #define optarg gnu_optarg #define optind gnu_optind #define opterr gnu_opterr /* { const char *name; /* has_arg can't be an enum because some compilers complain about type mismatches in all the code that assumes it is an int. */ int has_arg; int *flag; int val; }; /* Names for the values of the `has_arg' field of `struct option'. */ enum _argtype { no_argument, required_argument, optional_argument }; #ifndef P /* On some systems, <stdio.h> includes getopt.h before P is defined by uucp.h, and the -I arguments cause this version of getopt.h to be included. Work around that here. */ #define P(x) () #define UNDEFINE_P #endif extern int getopt P((int argc, char *const *argv, const char *shortopts)); extern int getopt_long P((int argc, char *const *argv, const char *shortopts, const struct option *longopts, int *longind)); extern int getopt_long_only P((int argc, char *const *argv, const char *shortopts, const struct option *longopts, int *longind)); /* Internal only. Users should not call this directly. */ extern int _getopt_internal P((int argc, char *const *argv, const char *shortopts, const struct option *longopts, int *longind, int long_only)); #ifdef UNDEFINE_P #undef P #undef UNDEFINE_P #endif #ifdef __cplusplus } #endif #endif /* _GETOPT_H */
http://opensource.apple.com/source/uucp/uucp-11/uucp/getopt.h
CC-MAIN-2014-49
refinedweb
246
65.93
The QPimDelegate class provides an abstract delegate for rendering multiple lines of text for a PIM record. More... #include <QPimDelegate> This class is under development and is subject to change. Inherits QAbstractItemDelegate. Inherited by QContactDelegate and QTaskDelegate. The QPimDelegate class provides an abstract delegate for rendering multiple lines of text for a PIM record. QPimDelegate draws an item with the following features. All text lines will be elided if they are too wide. Much like QAbstractItemDelegate, this class is not intended to be used directly. Subclasses are expected to override some or all of the customization functions providing the above pieces of information or style settings. See also QAbstractItemDelegate, QContactDelegate, QPimRecord, and Pim Library. Enumerates the ways that the background of an item can be rendered by the drawBackground() function. See also backgroundStyle(). Enumerates the ways that the header and value pairs of the subsidiary lines can be rendered. See also subTextAlignment(). Constructs a QPimDelegate, with the given parent. Destroys a QPimDelegate. Returns a value that indicates how the background of this item should be rendered. The default implementation returns QPimDelegate::SelectedOnly, and ignores the supplied index and option parameters. See also drawBackground(). Returns the size hint for the whole delegate, which will include the space required for decorations, if any. The returned value is calculated by adding any space required for decorations to the given textSizeHint parameter. The default implementation returns the supplied textSizeHint, and ignores the supplied index and option parameters. Attempts to return a font that is similar to start but has the given size difference of step. If no matching font for the given step value is found, it will try increasingly larger/smaller fonts (depending on whether step was originally positive or negative). If no suitable font is found after trying different sizes, the original font start will be returned. Paints the background of the item. The rectangle to paint in (using p) should be obtained from option (option.rect) and the index of the item to paint. The default implementation fetches the background style to paint by calling backgroundStyle() for the given option and index. Paints any decorations, and assigns to the given lists of rectangles, which the caller can then use to align painted text, if required. The rtl argument is a convenience parameter that is true if the decorations should be painted right-to-left, or false if they should be painted left-to-right. This may affect which side of the rectangle a decoration is painted on. The rectangle to paint in (using p) should be obtained from option (option.rect) and the index of the item to paint. This function should return (in the leadingFloats and trailingFloats parameters) lists of rectangles that the rendered text will be wrapped around. Rectangles on the leading side of the text should be returned in leadingFloats, and rectangles on the trailing side should be returned in trailingFloats. This allows some flexibility in deciding whether decorations should be drawn behind or beside the text. The default implementation does not draw anything, and returns two empty lists. See also textRectangle(). Paints the foreground of the item. The rectangle to paint in (using p) should be obtained from option (option.rect) and the index of the item to paint. This function is called after painting all other visual elements (background, decorations, text etc) and could be used to apply a transparent effect to the rendered items. The default implementation does not paint anything. Returns the font to use for painting the main label text of the item for the given index and style option option. The default behavior is to return the font of the style option option, modified for bold rendering, and to ignore index. Returns the string to be displayed as the main text element of the delegate. The default implementation returns the DisplayRole of the supplied index, and ignores option. Paints the item specified by index, using the supplied painter and style option option. The default implementation will first draw the background and decorations, then the text items, and finally, any foreground items. All the drawing is accomplished using basic methods in this class. Reimplemented from QAbstractItemDelegate. See also drawBackground(), drawDecorations(), mainFont(), secondaryFont(), secondaryHeaderFont(), subTextAlignment(), textRectangle(), and drawForeground(). Returns the font to use for painting the subsidiary value texts of the item, for the given index and style option option. The default return value is a font that is at least two point sizes smaller than the font of the style option option. The supplied index is ignored in this case. See also mainFont() and secondaryHeaderFont(). Returns the font to use for painting the subsidiary header texts of the item, for the given index and style option option. The default return value is a bold font that is at least two point sizes smaller than the font of the style option option. The supplied index is ignored in this case. See also mainFont() and secondaryFont(). Returns the delegate's size hint for a specific index index and item style option. Reimplemented from QAbstractItemDelegate. See also decorationsSizeHint(). Returns the alignment of the header and value pairs of the subsidiary lines of text. The default implementation returns QPimDelegate::Independent, and ignores the supplied index and option parameters. Returns the list of subsidiary lines to render. This is stored in a list of pairs of strings, where the first member of the pair is the "header" string, and the second member of the pair is the "value" string. If either member of the pair is a null QString (QString()), then the subsidiary line is considered to consist of a single string that will take up the entire line. You can specify an empty QString (QString("")) if you wish to have blank space. The default implementation returns an empty list, and ignores the supplied index and option parameters. Returns a hint for the number of subsidiary lines of text to render for an item, which is used to calculate the sizeHint of this delegate. The default implementation obtains the actual list of subsidiary lines of text to render with the supplied option and index, and returns the size of this list. This method should be overridden if it can be slow to retrieve the list of subsidiary lines but fast to estimate the number of lines, for example, if all items are rendered with two subsidiary lines of text, but each subsidiary line of text requires a database lookup. See also subTexts(). Returns the rectangle that a line of text should be rendered in, given the following parameters. Note that the drawDecorations() function returns lists of rectangles that are RTL independent (e.g. leading and trailing instead of left and right). The lists of rectangles passed to this function are in absolute terms (left and right) - for an LTR language, they are equivalent, but for an RTL language the two lists will be exchanged. This function is used while rendering each line of text, including any subsidiary lines. See also drawDecorations().
https://doc.qt.io/archives/qtopia4.3/qpimdelegate.html
CC-MAIN-2019-18
refinedweb
1,163
56.86
1. Barry owns a 60% interest in an S corporation that earned $150,000 in 2011. He also owns 60% of the stock in a C corporation that earned $150,000 during the year. The S corporation distributed $30,000 to Barry, and the C corporation paid dividends of $30,000 to Barry. How much income must Barry report from these businesses? (Points : 5) $0 income from the S corporation, and $30,000 income from the C corporation $90,000 income from the S corporation, and $30,000 income from the C corporation $90,000 income from the S corporation, and $0 income from the C corporation $30,000 income from the S corporation, and $30,000 of dividend income from the C corporation None of the above 3. (TCO 1) Dawn and William form Orange Corporation. Dawn transfers equipment worth $475,000 (basis of $100,000) and $25,000 cash to Orange Corporation for 50% of its stock. William transfers a building and land worth $525,000 (basis of $200,000) for 50% of Oranges stock and $25,000 cash. Discuss the result of these transfers.(Points : 5) Dawn recognizes a gain of $375,000; William recognizes a gain of $325,000. Dawn recognizes a gain of $25,000; William recognizes no gain. Neither Dawn nor William recognizes gain. Dawn recognizes no gain; William recognizes a gain of $25,000. None of the above 4. (TCO 1) Kevin owns 100% of the stock of Cardinal Corporation. In the current year, Kevin transfers an installment obligation (basis of $30,000 and fair market value of $200,000) for additional stock in Cardinal worth $200,000. Which gain, if any, will Kevin recognize on the transfer? (Points : 5) Kevin recognizes no taxable gain on the transfer. Kevin has a taxable gain of $170,000. Kevin has a taxable gain of $180,000. Kevin has a basis of $200,000 in the additional stock he received in Cardinal Corporation. None of the above 5. (TCO 1) Earl and Mary form Yellow Corporation. Earl transfers property ($200,000 and value of $1,600,000) for 50 shares in Yellow Corporation. Mary transfers property (basis of $80,000 and value of $1,480,000) and agrees to serve as manager of Yellow for 1 year; in return, Mary receives 50 shares of Yellow. The value of Marys services is $120,000. With respect to the transfers, _____. (Points : 5) Mary will not recognize gain or income Earl will recognize a gain of $1,400,000 Yellow Corporation has a basis of $1,480,000 in the property it received from Mary Yellow will have a business deduction of $120,000 for the value of the services Mary will render None of the above 6. (TCO 11) Harold, a calendar-year taxpayer subject to a 35% marginal tax rate, claimed a charitable contribution deduction of $18,000 for a sculpture that the IRS later valued at $10,000. Which is the applicable overvaluation penalty? (Points : 5) $0 $560 $2,800 $3,500 7. (TCO 11) The privilege of confidentiality applies to a CPA tax preparer concerning the clients information relative to _____. (Points : 5) financial accounting tax accrual work papers a tax research memo used to determine an amount reported on the tax return building a defense against a penalty assessed for the use of a tax shelter building a defense against a charge brought by the SEC None of the above 8. (TCO 2) Staff Inc., has taxable income of $10 million this year. Which is the maximum DPAD tax savings for this C corporation? (Points : 5) $0 $204,000 $210,000 $306,000 $900,000 10. (TCO 3) As of January 1, Spruce Corporation has a deficit in accumulated E & P of $37,500. For the tax year, current E & P (all of which accrued ratably) is $20,000 (prior to any distribution). On July 1, Spruce Corporation distributes $25,000 to its sole, noncorporate shareholder. The amount of the distribution that is a dividend is _____.(Points : 5) $0 $20,000 $25,000 $37,500 None of the above 1. (TCO 3) Walnut Corporation, a calendar-year taxpayer, has taxable income of $110,000 for the year. In reviewing Walnuts financial records, you discover the following occurred this year. Federal income taxes paid: $25,000 Net operating loss carryforward deducted currently: $25,000 Gain recognized this year on an installment sale from a prior year: $12,000 Depreciation deducted on tax return (ADS depreciation would have been $8,000): $15,000 Interest income from Wisconsin state bonds: $37,000 Walnut Corporations current E & P is _____. (Points : 5) $73,000 $138,000 $142,000 $166,000 None of the above 3. (TCO 4) Five years ago, Eleanor transferred property she had used in her sole proprietorship to Blue Corporation for 1,000 shares of Blue Corporation in a transaction that qualified exchange treatment. With respect to the redemption, Eleanor will have a _____. (Points : 5) $195,000 capital gain $220,000 capital gain $195,000 dividend $220,000 dividend None of the above 4. (TCO 4) Cardinal Corporation has 1,000 shares of common stock outstanding. John owns 400 of the shares, Johns father owns 300 shares, Johns daughter owns 200 shares, and Redbird Corporation owns 100 shares. John owns 70% of the stock in Redbird Corporation. How many shares is John deemed to own in Cardinal Corporation under the attribution rules of 318? (Points : 5) 400 600 700 1,000 None of the above 5. (TCO 5) One of the tenets of U.S. tax policy is to encourage business development. Which Code section does not support this tenet? (Points : 5) 351, which allows entities to incorporate tax-free 1031, which allows the exchange of stock of one corporation for stock of another 368, which allows for tax-favorable corporate restructuring through mergers and acquisitions 381, which allows the target corporations tax benefits to carry over to the successor corporation All of the above 6. (TCO 5) The French Corporation has assets valued at $1 million (adjusted basis of $700,000). There are mortgages of $250,000 associated with these assets. Accent Corporation acquires all of Frenchs assets by exchanging $800,000 of its voting stock, and assumes $200,000 of Frenchs liabilities. French distributes the Accent stock and remaining liabilities to its shareholders in exchange for their French stock, and then liquidates. Which, if any, statement is correct? (Points : 5) This restructuring qualifies as a Type A reorganization. This restructuring qualifies as a Type C reorganization. The restructuring is taxable because liabilities cannot be distributed to shareholders in a tax-free reorganization. Accent recognizes a $50,000 gain on the restructuring. None of the above 8. (TCO 6) How are the members of a consolidated group affected by computations related to E & P? (Points : 5) E & P is computed solely on a consolidated basis. Consolidated E & P is computed as the sum of the E & P balances of each of the group members. Members E & P balances are frozen as long as the consolidation election is in place. Each member keeps its own E & P account. 9. (TCO 6) Which corporation is not eligible for consolidated return status? (Points : 5) Tax-exempt charitable corporations Insurance companies Corporations formed outside the United States Partnerships All of the above 10. (TCO 6) Members of a controlled group share all but which tax attribute? (Points : 5) The lower tax rates on the first $75,000 of taxable income The $40,000 AMT exemption The 179 depreciation amount allowed All of the above 1. (TCO 2) During the current year, Pet Palace Company had operating income of $510,000 and operating expenses of $400,000. In addition, Pet Palace had a long-term capital gain of $30,000. How does Lucinda, the sole owner of Pet Palace Company, report this information on her individual income tax return under the following assumptions? (I) Pet Palace is a proprietorship, and Lucinda does not withdraw any funds from the company during the year. (II) Pet Palace is an LLC, and Lucinda does not withdraw any funds from the company during the year. (III) Pet Palace is an S corporation, and Lucinda does not withdraw any funds from the company during the year. (IV) Pet Palace is a regular corporation, and Lucinda does not withdraw any funds from the company during the year.(Points : 30) (TCO 11) Congress has set very high goals as to the number of Forms 1040 that should be filed electronically. Summarize the benefits of e-filing from the perspectives of both the taxpayer and the government.(Points : 30) TCO 5) Shelton Corporation and Davis Corporation want to join forces as one corporation because their businesses are complementary. They would like the resulting corporation to have a new name, because both of them have been involved in high profile lawsuits due to environmental issues. Shelton is a manufacturer with a basis in its assets of $2 million (value of $2.9 million) and liabilities of $500,000. Davis is a distributor of a variety of products including those of Sheltons. Its basis in its assets is $1.2 million (value of $2 million) and it has liabilities of $400,000. Given these facts, which type of reorganization would you suggest for Shelton and Davis? (Points : 30) (TCO 6) In a federal consolidated tax return group, who is responsible to pay the tax liabilitythe parent, the subsidiaries, or both? How are these tax-payable amounts determined? (Points : 30)
https://ru.scribd.com/document/305922016/ACCT-424-Midterm-Exam-2
CC-MAIN-2019-30
refinedweb
1,593
64.1
Creating PHP Classes Basics. Creating PHP classes - Do one of the following: - In the Project Tool Window, select the directory in which you want to create a new class, and then choose on the main menu. - Right-click the corresponding directory and select New from the context menu. - Press Alt+Insert. - Choose PHP Class. The Create New PHP Class dialog box opens. - In the Name text box, type the name of the class to be created. PhpStorm automatically fills in the specified name in the File Name field. - Specify the namespace to create the class in. By default, the Namespace field shows the namespace that corresponds to the folder from which the class creation was invoked. You can choose <global namespace> from the drop-down drop-down list, specify the template from which to create the file. The available options are as follows: - The set of PhpStorm bundled templates: - Your own set of manually created file templates having the phpfile extension. To set such a template as a default, select the Use as a default template checkbox. The default template will be selected automatically the next time you invoke the creation of a new class. - Choose the file extension from the drop-down list. When you click OK, a new class is created according to the selected template, with the specified namespace declaration added automatically.
https://www.jetbrains.com/help/phpstorm/2018.2/creating-php-classes.html
CC-MAIN-2018-30
refinedweb
225
65.01
deno.land / x / ky@v0.30.0 / readme.md Ky is a tiny and elegant HTTP client based on the browser Fetch API Ky targets modern browsers and Deno. For older browsers, you will need to transpile and use a fetch polyfill and globalThis polyfill. For Node.js, check out Got. For isomorphic needs (like SSR), check out ky-universal. It's just a tiny file with no dependencies. fetch ky.post()) $ npm install ky import ky from 'ky'; const json = await ky.post('', {json: {foo: true}}).json(); console.log(json); //=> `{data: '🦄'}` With plain fetch, it would be: class HTTPError extends Error {} const response = await fetch('', { method: 'POST', body: JSON.stringify({foo: true}), headers: { 'content-type': 'application/json' } }); if (!response.ok) { throw new HTTPError(`Fetch error: ${response.statusText}`); } const json = await response.json(); console.log(json); //=> `{data: '🦄'}` If you are using Deno, import Ky from a URL. For example, using a CDN: import ky from ''; The input and options are the same as fetch, with some exceptions: credentialsoption is same-originby default, which is the default in the spec too, but not all browsers have caught up yet. Returns a Response object with Body methods added for convenience. So you can, for example, call ky.get(input).json() directly without having to await the Response first. When called like that, an appropriate Accept header will be set depending on the body method used. Unlike the Body methods of window.Fetch; these will throw an HTTPError if the response status is not in the range of 200...299. Also, .json() will return an empty string if the response status is 204 instead of throwing a parse error due to an empty body. Sets options.method to the method name and makes a request. When using a Request instance as input, any URL altering options (such as prefixUrl) will be ignored. Type: object Type: string Default: 'get' HTTP method used to make the request. Internally, the standard methods ( GET, PUT, PATCH, HEAD and DELETE) are uppercased in order to avoid server errors due to case sensitivity. Type: object and any other value accepted by JSON.stringify() Shortcut for sending JSON. Use this instead of the body option. Accepts any plain object or value, which will be JSON.stringify()'d and sent in the body with the correct header set. Type: string | object<string, string | number | boolean> | Array<Array<string | number | boolean>> | URLSearchParams Default: '' Search parameters to include in the request URL. Setting this will override all existing search parameters in the input URL. Accepts any value supported by URLSearchParams(). Type: string | URL A prefix to prepend to the input URL when making the request. It can be any valid URL, either relative or absolute. A trailing slash / is optional and will be added automatically, if needed, when it is joined with input. Only takes effect when input is a string. The input argument cannot start with a slash / when using this option. Useful when used with ky.extend() to create niche-specific Ky-instances. import ky from 'ky'; // On const response = await ky('unicorn', {prefixUrl: '/api'}); //=> '' const response2 = await ky('unicorn', {prefixUrl: ''}); //=> '' Notes: prefixUrland inputare joined, the result is resolved against the base URL of the page (if any). inputare disallowed when using this option to enforce consistency and avoid confusion about how the inputURL is handled, given that inputwill not follow the normal URL resolution rules when prefixUrlis being used, which changes the meaning of a leading slash. Type: object | number Default: limit: 2 methods: get put head delete options trace statusCodes: 408 413 429 500 502 503 504 maxRetryAfter: undefined An object representing limit, methods, statusCodes and maxRetryAfter fields for maximum retry count, allowed methods, allowed status codes and maximum Retry-After time. If retry is a number, it will be used as limit and other defaults will remain in place. If maxRetryAfter is set to undefined, it will use options.timeout. If Retry-After header is greater than maxRetryAfter, it will cancel the request. Delays between retries is calculated with the function 0.3 * (2 ** (retry - 1)) * 1000, where retry is the attempt number (starts from 1). Retries are not triggered following a timeout. import ky from 'ky'; const json = await ky('', { retry: { limit: 10, methods: ['get'], statusCodes: [413] } }).json(); Type: number | false Default: 10000 Timeout in milliseconds for getting a response, including any retries. Can not be greater than 2147483647. If set to false, there will be no timeout. Type: object<string, Function[]> Default: {beforeRequest: [], beforeRetry: [], afterResponse: []} Hooks allow modifications during the request lifecycle. Hook functions may be async and are run serially. Type: Function[] Default: [] This hook enables you to modify the request right before it is sent. Ky will make no further changes to the request after this. The hook function receives request and options as arguments. You could, for example, modify the request.headers here. The hook can return a Request to replace the outgoing request, or return a Response to completely avoid making an HTTP request. This can be used to mock a request, check an internal cache, etc. An important consideration when returning a request or response from this hook is that any remaining beforeRequest hooks will be skipped, so you may want to only return them from the last hook. import ky from 'ky'; const api = ky.extend({ hooks: { beforeRequest: [ request => { request.headers.set('X-Requested-With', 'ky'); } ] } }); const response = await api.get(''); Type: Function[] Default: [] This hook enables you to modify the request right before retry. Ky will make no further changes to the request after this. The hook function receives an object with the normalized request and options, an error instance, and the retry count. You could, for example, modify request.headers here. If the request received a response, the error will be of type HTTPError and the Response object will be available at error.response. Be aware that some types of errors, such as network errors, inherently mean that a response was not received. In that case, the error will not be an instance of HTTPError. You can prevent Ky from retrying the request by throwing an error. Ky will not handle it in any way and the error will be propagated to the request initiator. The rest of the beforeRetry hooks will not be called in this case. Alternatively, you can return the ky.stop symbol to do the same thing but without propagating an error (this has some limitations, see ky.stop docs for details). import ky from 'ky'; const response = await ky('', { hooks: { beforeRetry: [ async ({request, options, error, retryCount}) => { const token = await ky(''); request.headers.set('Authorization', `token ${token}`); } ] } }); Type: Function[] Default: [] This hook enables you to modify the HTTPError right before it is thrown. The hook function receives a HTTPError as an argument and should return an instance of HTTPError. import ky from 'ky'; await ky('', { hooks: { beforeError: [ error => { const {response} = error; if (response && response.body) { error.name = 'GitHubError'; error.message = `${response.body.message} (${response.statusCode})`; } return error; } ] } }); Type: Function[] Default: [] This hook enables you to read and optionally modify the response. The hook function receives normalized request, options, and a clone of the response as arguments. The return value of the hook function will be used by Ky as the response object if it's an instance of Response. import ky from 'ky'; const response = await ky('', { hooks: { afterResponse: [ (_request, _options, response) => { // You could do something with the response, for example, logging. log(response); // Or return a `Response` instance to overwrite the response. return new Response('A different response', {status: 200}); }, // Or retry with a fresh token on a 403 error async (request, options, response) => { if (response.status === 403) { // Get a fresh token const token = await ky('').text(); // Retry with the token request.headers.set('Authorization', `token ${token}`); return ky(request); } } ] } }); Type: boolean Default: true Throw an HTTPError when, after following redirects, the response has a non-2xx status code. To also throw for redirects instead of following them, set the redirect option to 'manual'. Setting this to false may be useful if you are checking for resource availability and are expecting error responses. Note: If false, error responses are considered successful and the request will not be retried. Type: Function Download progress event handler. The function receives a progress and chunk argument: progressobject contains the following elements: percent, transferredBytesand totalBytes. If it's not possible to retrieve the body size, totalByteswill be 0. chunkargument is an instance of Uint8Array. It's empty for the first call. import ky from 'ky'; const response = await ky('', { onDownloadProgress: (progress, chunk) => { // Example output: // `0% - 0 of 1271 bytes` // `100% - 1271 of 1271 bytes` console.log(`${progress.percent * 100}% - ${progress.transferredBytes} of ${progress.totalBytes} bytes`); } }); Type: Function Default: JSON.parse() User-defined JSON-parsing function. Use-cases: bournepackage to protect from prototype pollution. reviveroption of JSON.parse(). import ky from 'ky'; import bourne from '@hapijs/bourne'; const json = await ky('', { parseJson: text => bourne(text) }).json(); Type: Function Default: fetch User-defined fetch function. Has to be fully compatible with the Fetch API standard. Use-cases: fetchimplementations like isomorphic-unfetch. fetchwrapper function provided by some frameworks that use server-side rendering (SSR). import ky from 'ky'; import fetch from 'isomorphic-unfetch'; const json = await ky('', {fetch}).json(); Create a new ky instance with some defaults overridden with your own. In contrast to ky.create(), ky.extend() inherits defaults from its parent. You can pass headers as a Headers instance or a plain object. You can remove a header with .extend() by passing the header with an undefined value. Passing undefined as a string removes the header only if it comes from a Headers instance. import ky from 'ky'; const url = ''; const original = ky.create({ headers: { rainbow: 'rainbow', unicorn: 'unicorn' } }); const extended = original.extend({ headers: { rainbow: undefined } }); const response = await extended(url).json(); console.log('rainbow' in response); //=> false console.log('unicorn' in response); //=> true Create a new Ky instance with complete new defaults. import ky from 'ky'; // On const api = ky.create({prefixUrl: ''}); const response = await api.get('users/123'); //=> '' const response = await api.get('/status', {prefixUrl: ''}); //=> '' Type: object A Symbol that can be returned by a beforeRetry hook to stop the retry. This will also short circuit the remaining beforeRetry hooks. Note: Returning this symbol makes Ky abort and return with an undefined response. Be sure to check for a response before accessing any properties on it or use optional chaining. It is also incompatible with body methods, such as .json() or .text(), because there is no response to parse. In general, we recommend throwing an error instead of returning this symbol, as that will cause Ky to abort and then throw, which avoids these limitations. A valid use-case for ky.stop is to prevent retries when making requests for side effects, where the returned data is not important. For example, logging client activity to the server. import ky from 'ky'; const options = { hooks: { beforeRetry: [ async ({request, options, error, retryCount}) => { const shouldStopRetry = await ky(''); if (shouldStopRetry) { return ky.stop; } } ] } }; // Note that response will be `undefined` in case `ky.stop` is returned. const response = await ky.post('', options); // Using `.text()` or other body methods is not supported. const text = await ky('', options).text(); Exposed for instanceof checks. The error has a response property with the Response object, request property with the Request object, and options property with normalized options (either passed to ky when creating an instance with ky.create() or directly when performing the request). The error thrown when the request times out. It has a request property with the Request object. Sending form data in Ky is identical to fetch. Just pass a FormData instance to the body option. The Content-Type header will be automatically set to multipart/form-data. import ky from 'ky'; // `multipart/form-data` const formData = new FormData(); formData.append('food', 'fries'); formData.append('drink', 'icetea'); const response = await ky.post(url, {body: formData}); If you want to send the data in application/x-www-form-urlencoded format, you will need to encode the data with URLSearchParams. import ky from 'ky'; // `application/x-www-form-urlencoded` const searchParams = new URLSearchParams(); searchParams.set('food', 'fries'); searchParams.set('drink', 'icetea'); const response = await ky.post(url, {body: searchParams}); Content-Type Ky automatically sets an appropriate Content-Type header for each request based on the data in the request body. However, some APIs require custom, non-standard content types, such as application/x-amz-json-1.1. Using the headers option, you can manually override the content type. import ky from 'ky'; const json = await ky.post('', { headers: { 'content-type': 'application/json' } json: { foo: true }, }).json(); console.log(json); //=> `{data: '🦄'}` Fetch (and hence Ky) has built-in support for request cancellation through the AbortController API. Read more. Example: import ky from 'ky'; const controller = new AbortController(); const {signal} = controller; setTimeout(() => { controller.abort(); }, 5000); try { console.log(await ky(url, {signal}).text()); } catch (error) { if (error.name === 'AbortError') { console.log('Fetch aborted'); } else { console.error('Fetch error:', error); } } ky-universal. ky-universal. Either use a test runner that can run in the browser, like Mocha, or use AVA with ky-universal. Read more. Make sure your code is running as a JavaScript module (ESM), for example by using a <script type="module"> tag in your HTML document. Then Ky can be imported directly by that module without a bundler or other tools. <script type="module"> import ky from ''; const json = await ky('').json(); console.log(json.title); //=> 'delectus aut autem </script> got See my answer here. Got is maintained by the same people as Ky. axios? r2? kymean? It's just a random short npm package name I managed to get. It does, however, have a meaning in Japanese: A form of text-able slang, KY is an abbreviation for 空気読めない (kuuki yomenai), which literally translates into “cannot read the air.” It's a phrase applied to someone who misses the implied meaning. The latest version of Chrome, Firefox, and Safari. Polyfill the needed browser globals or just use ky-universal. Version Info
https://deno.land/x/ky@v0.30.0/readme.md
CC-MAIN-2022-27
refinedweb
2,337
59.6
On 8/18/06, Christoph Kiehl <kiehl@subshell.com> wrote: > Nicolas wrote: > > > Why not use > > EffectiveNodeType<> > > > > *reregisterNodeType<> > > > > *(NodeTypeDef<> > > > > ntd) from the NodeTypeRegistry?. > > Because of three things: > > 1. NodeTypeRegistry is no part of the public API, but JackrabbitNodeTypeManager > is to my understanding. > 2. My nodetype definitions are in CND-Format. If I want to change my node types > I could parse them with CompactNodeTypeDefReader. But as far as I can see there > is no possibility to get the namespaces defined in the CND file and register > them before my nodetypes. > 3. A lot of people are asking for such a feature and I think my proposed > solution is an easy way for everyone to change nodetypes. unfortunately supporting node type modifications requires more than just adding a method... see cheers stefan > > Cheers, > Christoph > >
http://mail-archives.apache.org/mod_mbox/jackrabbit-dev/200608.mbox/%3C90a8d1c00608180700o676d96c2y8ff6685b6c6fd711@mail.gmail.com%3E
CC-MAIN-2014-52
refinedweb
132
58.58
This course will introduce you to SQL Server Core 2016, as well as teach you about SSMS, data tools, installation, server configuration, using Management Studio, and writing and executing queries. There's a good chance you came across this article because you Googled for a PHP error message, but learning how to READ the errors yourself will make you a faster, more efficient developer, so here's a breakdown of the most common errors you'll come across in PHP development, and how to immediately find and fix the problem yourself. GENERAL SYNTAX ERRORS This error happens all the time, and the "unexpected" character will change depending on your situation. To understand this error, imagine you're reading a book. Without realizing it, your brain is connecting all these different words in the book together to form sentences and ideas. You read a description like "red apple" and your brain says, "These words are in the correct order and 'red' is referring to the color of the 'apple'." This process of reading something and then analyzing the words into purposes ("red" describing "apple", which is an object) is called parsing. Now let's say you were reading this sentence: "George stepped out of the shadows and then he purple." Your brain stops and says, "What the... purple? The way the sentence was going, I expected the next word to be some kind of action or verb... but 'purple' doesn't belong there!" This is exactly what PHP is saying when you get this error. Basically, it's reading and parsing your code, and it's expecting to see something in your code, but instead, it finds some character that it didn't expect - something that doesn't quite make sense, given the code that comes BEFORE the unexpected character. Let's say we get this error: Parse error: syntax error, unexpected ';' in your_script_file.php on line 2 Opening up your_script_file.php, we look at line 2 and see this: $my_variable = ("123"; We then locate the "unexpected" character, which is the semicolon ; and then work BACKWARDS from right to left from there. So PHP is expecting something, and it wasn't a semicolon, so we need to find out what it WAS expecting. We know the semicolon ends a line of code, so PHP obviously doesn't think the line of code is completely finished just yet. This means we look for things that started but did not end. In this case, we can see that there's an unmatched, opening parentheses before our string. So PHP is expecting that parentheses to close before it can finish the line of code. So the fix would be to add a ")" right before the semi-colon: $my_variable = ("123"); Or, if the parentheses isn't needed, then another solution would be to remove the opening parentheses: $my_variable = "123"; Either way, we're ensuring that there is no unfinished code before the line ends. In some cases, PHP might be able to tell you specifically what it is expecting. Let's say we get this common variation of the same error: Parse error: syntax error, unexpected '=', expecting ']' in your_script_file.php on line 2 We go to line 2 in that file and see: $my_array[123 = "Hello World"; So we find the unexpected "=" character and again we work backwards, looking at the code that is leading up to it, and we see that we're in the middle of specifying the index / key of an array using the square brackets. We've got the opening bracket [ and a value of 123 for our index / key. PHP would reasonably expect that you're ready to give it the closing square bracket ], but instead it finds the = sign, so it says, "Hey, you forgot your ']' character before you moved onto the next part of the code!" The vast majority of these "unexpected" errors can be quickly and easily resolved by locating the unexpected character and then working backwards to see what PHP is expecting instead of the character that it found. In a few cases, PHP will sometimes suggest the wrong thing. For example, take this line of code: foreach($my_array as $key => val) It results in: Parse error: syntax error, unexpected ')', expecting '[' in... If we follow the basic instructions, we can start at the ")" at the end of the line and then work backwards and see that we forgot a $ sign in front of our "val" variable name. However, PHP gave a slightly-misleading error message because it thought that maybe you were trying to write some more advanced PHP code. It's rare that you get a misleading suggestion for the expected character, but it happens. Let's look at one more common variation of the error: Parse error: syntax error, unexpected end of file in your_script_file.php on line 4 If we go look at line 4, it's just the final line in the script - there's no code! But in almost every case, when you get this error, it's because you have an opening "{" curly brace that doesn't end, like this: foreach($my_array as $key => $val) { Alternatively, if we see this error: Parse error: syntax error, unexpected '}', expecting end of file in... ...it almost always means we have too many closing } curly braces, like this: foreach($my_array as $key => $val) { }} Just remember, start by finding the unexpected value, and then work backwards and you'll find the problem every time. This is one of the most bizarre error messages because it's the Hebrew word for "double colon" (according to what I read once upon a time). Essentially, there's a variety of ways that this error can pop up, but it's usually due to a missing character of some kind. Think of it as another "unexpected character" type of error, because it is usually due to PHP seeing some code that could potentially be valid if the next part of the code was "::" (followed by something else). More often than not, the scope resolution (the "::" you might use for something like referencing a static class property, like MyClass:$some_property) is actually not really what you were going for, and chances are that you just forgot a $ sign or you didn't close a curly brace around a variable embedded in a string. For example: $x = "hello"; echo "{$x world!"; You can see that the valid, expected code should have been echo "{$x} world!", but that the } character was missing. Again, fix this how you would fix the other errors about unexpected characters - find the line number and start working backwards from the end of that line. Let's move onto some other common errors... THE UNDEFINED ERRORS These are all pretty straightforward errors, so this will be a quick section. This is an easy one. It simply means you tried to use a variable that doesn't exist. For example: $My_variable_name = "Hello world!"; echo $my_variable_name; Notice that the variable names do not match - the first line has an upper-case "M" in the variable name "My" instead of "my". So the first line is creating one variable and the second line is echo-ing a completely different variable which just happens to have a SIMILAR name. Usually, when you have this error, it's because you have a typo in one of your variable names, or maybe you're incorrectly assuming that a bit of code has been run that defines a variable. This is a similar issue to the undefined variable error, with similar causes. Let's take a look at a code example: $example_array = array( "hello" => "world", "How" => "are you?" ); echo $example_array["how"]; Again, we see a case-sensitive problem - the echo is trying access an array index called "how", but our array uses "How" with an uppercase "H", so the line with echo will fail and display that notice. If you ever get this error and you are CONVINCED that the index exists, then you can use print_r() to dump the contents of the array, which can often lead to understanding the reason. Usually, this happens because you've stored a function in some file and it hasn't been included, or you were removing some code but didn't clean it all up completely, or maybe you have a typo in your function name. No matter what, it's because you're calling a function that doesn't exist. Example: function do_my_bidding() { } domybidding(); // Throws the error because we're missing the _ characters in the name. This is almost always the result of a missing $ in front of a variable. For example: $hello = "world"; echo hello; // Notice: Use of undefined constant hello - assumed 'hello' in... If PHP just sees something in the middle of code that looks like it should be treated like a value, then it will try to figure out whether you wanted a string, a constant, or a variable. Without a $ sign, it throws out the possibility of it being a variable, so it then checks to see if there's a constant with that name. If there's no constant, then it throws the warning and then assumes that you wanted the string, so in the above example, it would echo out the word "hello". HEADER / HTTP ERRORS To understand this, you need to understand some basics about how HTTP works. When a browser sends a request to the web server, it gets back a response that is made up of 2 parts. The first part is the HTTP header and the second part is the body / content. The header contains some basic information about the response, such as how long the response content is, the date the content was last modified, the server type, the date/time on the server, and it can also contain instructions to the browser, such as telling the browser to store some cookies or maybe redirect to another page. The content, on the other hand, is the normal stuff you see when you view the source of the page. For a web page, the content section of the response would contain HTML. For an image, the content section would contain the raw image bytes, and so on. The rules for HTTP say that the HTTP headers all have to be together, and that a blank line will indicate the end of the headers and the start of the content. So a response basically looks like: HTTP Header HTTP Header HTTP Header <blank line> HTTP Content... Now, when you use the header() line in PHP, or if you're using session_start(), then you're trying to tell PHP to modify/add the appropriate HTTP headers. However, the moment that any NON-header content is rendered (even a single white space or a blank line), then PHP assumes that you are finished modifying the headers, and so it will send the headers back to the browser, and then wait for you to finish the content. So once those headers are sent, you can't change them anymore, and THAT is what this error is about. Basically, to solve this, you have to find the non-header content and remove it, or re-order your code, or use a more drastic solution like output buffering (ob_start()) so that nothing is sent to the browser until everything is finished. Reproducing this issue is easy: <?php header("X-My-Header-1: hello"); // This will work okay. ?> <?php header("X-My-Header-2: world"); // This will cause the error. ?> When we ended the first PHP block, we had some white space between that point and the start of the next PHP block, and that counts as CONTENT, so PHP will send (flush) the headers after the first one and when you try to modify the headers the second time, it will give you the warning since the headers have already been finished and sent. Again, you can use ob_start() to enable output buffering as a quick fix: <?php ob_start(); header("X-My-Header-1: hello"); // This will work okay. ?> <?php header("X-My-Header-2: world"); // This will now work. ?> ...but you should really review the pros and cons of output buffering before doing this. At a glance, output buffering will use up more memory and the browser won't be able to do any sort of incremental loading because it won't get anything at all from the server until ALL the headers and HTML content of the page has been generated. Usually the more appropriate fix is to simply eliminate the whitespace: <?php header("X-My-Header-1: hello"); // This will work okay. header("X-My-Header-2: world"); // This will work okay, too. ?> OBJECT-ORIENTED PROGRAMMING ERRORS These two are almost the same error. They both occur when you try to use a regular variable as if it were an object: $my_var = "Hello world"; echo $my_var->foo; // Notice: Trying to get property of non-object error echo $my_var->bar(); // Fatal error: Call to a member function bar() on a non-object So if you get that error and you THINK that $my_var should be an object, use something like var_dump() on it to inspect the value prior using it. Chances are that the variable may have been overwritten (e.g. it WAS an object but then some code changed it to a string or something). The "Call to a member function" error can also be due to a bad mysqli query when you're using the object-oriented style of mysqli (see further below). The most common root cause for this is either copying a function from a class or using $this in a static function/method, since $this is a special variable that can ONLY be used within the methods of a class instance: class Foo { public $message = "I'm foo!"; public function Hello() { echo $this->message; } public static function World() { echo $this->message; } } $x = new Foo(); $x->Hello(); // I'm foo! Foo::World(); // Fatal error: Using $this when not in object context Bottom line, you should never use the variable $this unless you're inside a class method. This is a mostly simple one. You can only have one class with a given name, so this will produce the error: class Foo { } class Foo // Fatal error: Cannot declare class Foo, because the name is already in use in... { } Now, there is a slight exception to the rule. PHP supports namespaces, so you can have two different namespaces that have the same class name. If your application makes use of namespaces and you get this error, then you have two classes with the same name within the same namespace. One common cause of this error is using include() or require() instead of include_once() or require_once(). If you include() or require() a file containing a class definition more than once, then it's going to cause this problem. For example: myclass.php class MyClass { ... } index.php include("myclass.php"); // <-- This defines MyClass include("myclass.php"); // <-- This will trigger the problem since MyClass is already defined This isn't necessarily an object-oriented error, but it's the same root cause as the above error. The only difference is that instead of having the same class name defined twice, you have the same function name defined twice. That's it. DATABASE ERRORS Database errors can be a little tricky, since many of them are alike and have some SMALL differences in their messages that make a BIG difference in their meaning. The vast majority of the time, this is because your query failed. If query executed by mysql_query() or mysqli_query() ends up failing (e.g. an invalid table, or invalid syntax, etc...). Example: $conn = mysqli_connect("localhost","user","pass","db"); $results = mysqli_query($conn,"SELECT * FROM non_existent_table"); // The query fails, so $results is now false instead of a valid result $row = mysqli_fetch_assoc($results); // This produces the warning. Note that if you use the object-oriented style of the mysqli extension, then an invalid query will still return a false result, so when you try to call the fetch_assoc() method on the result (which is the boolean "false") , you will get the error about calling a member function on a non-object, since false is not an object: $conn = new mysqli("localhost","user","pass","db"); $results = $conn->query("SELECT * FROM non_existent_table"); // The query fails, so $results is now false instead of a valid result $row = $results->fetch_assoc(); // This produces the "Call to a member function fetch_assoc() on boolean" error. This is an extremely common error these days, and it's almost always because people are updating their code to use the new mysqli extension instead of the old mysql extension. PHP can support multiple different database connections at the same time, and the old mysql extension syntax didn't force users to pick a specific database connection. If you didn't specify which database you wanted to query, the mysql extension would simply pick the last one you connected to: $connA = mysql_connect("serverA","user","pass"); $connB = mysql_connect("serverB","user","pass"); mysql_query("UPDATE some_table SET field=value"); In that example, server B would receive the UPDATE query, since it was the last one connected. This could be a serious problem if you wanted that UPDATE query to go to server A! The new mysqli extension requires you to specify the connection as the FIRST parameter of your mysqli_query(), so if you just do a search-and-replace on your code to change all instances of "mysql_" to "mysqli_", you will end up with this kind of code that produces the error: $connA = mysqli_connect("serverA","user","pass"); $connB = mysqli_connect("serverB","user","pass"); mysqli_query("UPDATE some_table SET field=value"); // Warning: mysqli_query() expects at least 2 parameters, 1 given The fix is simple - just specify the database connection as your first parameter: $connA = mysqli_connect("serverA","user","pass"); $connB = mysqli_connect("serverB","user","pass"); mysqli_query($connA, "UPDATE some_table SET field=value"); Now PHP knows exactly which server to send the query to! You can also avoid this by using the object-oriented style of coding that the mysqli extension offers: $connA = new mysqli("serverA","user","pass","db"); $connB = new mysqli("serverB","user","pass","db"); $connA->query("UPDATE some_table SET field=value"); MEMORY ERRORS There's really only one common memory error, which is: You can probably imagine why this occurs - PHP has run out of memory. To be clear, this is NOT saying that your computer has run out of memory. PHP has its own internal memory limit, which is defined in the php.ini file and can be overridden in many cases with ini_set(). This limit used to be 8MB a long time ago, then it was upped to 16MB in PHP 5.2.0. Nowadays, the default memory_limit setting in PHP 7 and higher is 128MB. However, many popular software packages, like Wordpress, will have their own default limits that they will try to set, which might override that limit (WP has a 40MB default limit for regular sites, and 64MB default limit for multi-sites). A common beginner's question is, "My computer has many gigabytes of RAM - why doesn't PHP just use it?" Now, the common root causes of these errors stem from developers who aren't thinking in advance about how much memory their code might use up. For example, take this simple line of code: $contents = file_get_contents("logfile.csv"); It reads the contents of the "logfile.csv" into memory, into a string variable called $contents. This works fine if logfile.csv is relatively small, like 100 megabytes or less. But what if logfile.csv is something that grows every day without anything that limits its size? Let's say that over time, it eventually grows to be 1 gigabyte in size, and suddenly, your PHP script starts throwing these memory errors, even though nobody has changed any code! It's because the file has finally reached a size where it is simply too big to be read into PHP's memory without exceeding PHP's memory limit. A temporary solution for this problem can be to either: However, never just apply this quick fix and walk away. If you're suddenly getting this error, increasing the memory limit means that it will probably happen again later. Eventually you will reach a point where you can't increase it any further, and by that time, you'll likely have a HUGE data problem on your hands. So if you're going to increase the memory limit in PHP, make sure you immediately start looking into a more permanent solution to the problem. For example, usually there's no point in reading an entire file into memory (unless you are a professional who knows exactly why you would do so). It's often more efficient to read a file in chunks. For example, if the previous example's logfile.csv was 2 gigabytes in size, you could read through the entire file like this: $fp = fopen("logfile.csv","r"); while(!feof($fp)) { $line = fgetcsv($fp); } fclose($fp); ...and chances are that you would never exceed 100k of memory usage, because the $line variable gets overwritten in each loop, so you would never have more than just one line of data in memory at any given time (and using fseek and ftell to jump around a file instantly can address most performance concerns with reading large files). Another root cause is trying to keep too much data in memory and/or not taking measures to properly cast it. For example, let's say that the logfile.csv file just contains a million numbers, like this: logfile.csv 123 124 125 126 ...etc... If you used a file loop to simply read each line into an array, like this: $numbers = array(); $fp = fopen("logfile.csv","r"); while(!feof($fp)) { $numbers[] = fgets($fp); } fclose($fp); ...then you would end up with a million strings inside your array, like "123", "124", etc... For all PHP knows, it is simply a coincidence that all of the strings happen to contain numbers. A number in a string takes up more memory than the number itself. Unfortunately, PHP's simplicity comes at the cost of overall memory efficiency, so if you've worked in other languages where an integer takes up just a few bytes... well that's not the case here. Pretty much any and every variable has quite a bit of overhead that eats up close to 50 bytes, not counting the value itself. However, there's still a lot of efficiencies to be gained by casting a numeric string to an integer, for example. You can easily cut your memory usage in HALF or more by doing this in many cases. This can be pretty important if you're working with LOTS of data that has to be held in memory, but that's another article for another time. IMPORTANT NOTE: The 64-bit version of PHP increases the amount of overhead for each variable, compared to the 32-bit version. So if you're moving from 32-bit to 64-bit (e.g. you're doing an upgrade from PHP 5 to PHP 7), don't just copy your old php.ini over, with its old memory limit. If you copy it over, make sure you increase the memory limit (if it's set) by about 30% to be safe. OTHER ERRORS PHP takes care of a lot of data type conversions for you. For example, if you had the number 123 or the boolean value of true, you could echo them out like this: $var = 123; echo $var; // Works fine and outputs "123" $var = true; echo $var; // Works fine and outputs "1" Whenever it CAN convert a simple value to a string, it will do it automatically when you try to treat that value as if it were a string. However, not every data type can be represented as a simple string. So when it comes to any of the more complex data types like objects, resources, or arrays, PHP will do its best, but will usually throw an error or display something unhelpful: $var = array(); echo $var; // Notice: Array to string conversion in... $var = fopen("some_existing_file","r"); echo $var; // Resource id ## $var = new stdClass(); echo $var; // Catchable fatal error: Object of class stdClass could not be converted to string in... And there you have it. There will be other variations and errors that you might come across, but these are by far the most common ones that I see every day.
https://www.experts-exchange.com/articles/32160/How-to-Fix-Just-About-Every-Common-PHP-Error.html
CC-MAIN-2019-04
refinedweb
4,087
66.17
01 August 2011 10:57 [Source: ICIS news] SINGAPORE (ICIS)--?xml:namespace> With two fires in one week and the sixth major plant accident over the past year, the local government might require FPC to carry out an extensive shutdown of its Mailiao complex for safety checks, market sources said. The Taiwanese government may also require FPC to ensure that it can fulfil domestic requirements for propylene before it can resume exporting, market source said. In response to the supply disruptions, traders in However, buyers said the offers were very high. ($1 = €0.69) Please visit the complete ICIS plants and projects database For more information
http://www.icis.com/Articles/2011/08/01/9481328/Taiwans-Formosa-Plastics-suspends-propylene-offers-after.html
CC-MAIN-2014-41
refinedweb
106
59.23
How to use the Microsoft Outlook Object Library to create an Outlook contact in Visual Basic .NET This article was previously published under Q313787 This article has been archived. It is offered "as is" and will no longer be updated. This article describes how to use the Microsoft Outlook Object Library to create a contact in Microsoft Visual Basic .NET. SUMMARY This article assumes that you are familiar with the following topics: - Outlook Object Model (OOM) - Programming with Visual Basic .NET To use the Microsoft Outlook Object Library to create a contact in Microsoft Visual Basic .NET, follow these steps: MORE INFORMATION - Start Microsoft Visual Studio .NET. - On the File menu, point to New, and then click Project. - Under Project Types, click Visual Basic Projects. - Under Templates, click Console Application, and then click OK. By default, Module1.vb is created. - Add a reference to the Microsoft Outlook 10.0 Object Library. To do this, follow these steps: - On the Project menu, click Add Reference. - On the COM tab, click Microsoft Outlook 10.0 Object Library, and then click Select. - In the Add References dialog box, click OK to accept your selections. If you receive a prompt to generate wrappers for the libraries that you selected, click Yes. - In the code window for Module1, replace all the code with: 'TO DO: If you use the Microsoft Outlook 11.0 Object Library, uncomment the following line.'Imports Outlook = Microsoft.Office.Interop.OutlookImports System.ReflectionModule Module1 Sub Main() ' Create an Outlook application. Dim oApp As Outlook.Application = New Outlook.Application() ' Get the namespace and the logon. Dim oNS As Outlook.NameSpace = oApp.GetNamespace("MAPI") ' TODO: Replace the "YourValidProfile" and "myPassword" with 'Missing.Value if you want to log on with the default profile. oNS.Logon("YourValidProfile", "myPassword", True, True) ' Create a new contact item. Dim oCt As Outlook.ContactItem = oApp.CreateItem(Outlook.OlItemType.olContactItem) oCt.Display(True) 'Modal ' Set some common properties. oCt.FullName = "David Pelton" oCt.Title = "Student" oCt.Birthday = Convert.ToDateTime("10/1/1982") oCt.CompanyName = "Fourth Coffee " oCt.Department = "PSS" oCt.Body = "Test Body" oCt.FileAs = "David" oCt.Email1Address = "abc@hotmail.com" oCt.BusinessHomePage = "" oCt.MailingAddress = "12345 Bellevue" oCt.BusinessAddress = "56789 1st. Redmond, WA 98001" oCt.OfficeLocation = "666 Office" oCt.Subject = "Hello World" oCt.JobTitle = "Engineer" ' Save it to the Contacts folder. oCt.Save() ' Display the created contact item. 'oCt.Display(True) ' Log off. oNS.Logoff() ' Clean up. oApp = Nothing oNS = Nothing oCt = Nothing End SubEnd Module - Search for "TO DO" in the code, and then modify the code for your environment. - Press F5 to build and to run the program. - Verify that the contact has been created. For more information about the security-enhancing features of Outlook 2002, click the following article number to view the article in the Microsoft Knowledge Base: REFERENCES vbnet oom vb.net Properties Article ID: 313787 - Last Review: 12/07/2015 08:15:51 - Revision: 2.4 Microsoft Visual Basic .NET 2002 Standard Edition, Microsoft Visual Basic .NET 2003 Standard Edition, Microsoft Outlook 2002 Standard Edition, Microsoft Office Outlook 2003 - kbnosurvey kbarchive kbhowtomaster KB313787
https://support.microsoft.com/en-us/kb/313787
CC-MAIN-2017-04
refinedweb
510
54.59
I am currently using matplotlib.pyplot from matplotlib import pyplot as plt import numpy as np A=np.matrix("1 2 1;3 0 3;1 2 0") # 3x3 matrix with 2D data plt.imshow(A, interpolation="nearest") # draws one square per matrix entry plt.show() grid_x = np.array([0.0, 1.0, 4.0, 5.0]) # points on the x-axis grid_x = np.array([0.0, 2.5, 4.0, 5.0]) # points on the y-axis (grid_x[i], grid_y[j]) (grid_x[i+1], grid_y[j+1]) A[i,j] imshow pcolormesh np.mgrid[0:5:0.5,0:5:0.5] import matplotlib.pyplot as plt from matplotlib.patches import Rectangle import matplotlib.cm as cm from matplotlib.collections import PatchCollection import numpy as np widths = np.tile(np.diff(grid_x0)[np.newaxis], (len(grid_y0)-1, 1)).flat heights = np.tile(np.diff(grid_y0)[np.newaxis].T, (1, len(grid_x0)-1)).flat fig = plt.figure() ax = fig.add_subplot(111) ptchs = [] for x0, y0, w, h in zip(grid_x2, grid_y2, widths, heights): ptchs.append(Rectangle( (x0, y0), w, h, )) p = PatchCollection(ptchs, cmap=cm.viridis, alpha=0.4) p.set_array(np.ravel(A)) ax.add_collection(p) plt.xlim([0, 8]) plt.ylim([0, 13]) plt.show() Here is another way, using image and R-tree and imshow with colorbar, you need to change the x-ticks and y-ticks (There are alot of SO Q&A about how to do it). from rtree import index import matplotlib.pyplot as plt import numpy as np eps = 1e-3 grid_x3 = grid_x1[1:, 1:].flat grid_y3 = grid_y1[1:, 1:].flat fig = plt.figure() rows = 100 cols = 200 im = np.zeros((rows, cols), dtype=np.int8) grid_j = np.linspace(grid_x0[0], grid_x0[-1], cols) grid_i = np.linspace(grid_y0[0], grid_y0[-1], rows) j, i = np.meshgrid(grid_j, grid_i) i = i.flat j = j.flat idx = index.Index() for m, (x0, y0, x1, y1) in enumerate(zip(grid_x2, grid_y2, grid_x3, grid_y3)): idx.insert(m, (x0, y0, x1, y1)) for k, (i0, j0) in enumerate(zip(i, j)): ind = next(idx.intersection((j0-eps, i0-eps, j0+eps, i0+eps))) im[np.unravel_index(k, im.shape)] = A[np.unravel_index(ind, A.shape)] plt.imshow(im) plt.colorbar() plt.show()
https://codedump.io/share/HGxq025hD31T/1/pyplotimshow-for-rectangles
CC-MAIN-2016-50
refinedweb
374
64.98
Versioning apps built with Kinvey As a mobile developer, it is important to manage multiple active versions of an app and push periodic updates without breaking older clients. In addition to client changes, developers often need to account for differences in backend APIs between versions. These could be due to several factors, such as changes in the data model or business logic. Kinvey client libraries allow you to specify a version for your app. The library sends this version to the Kinvey backend with each request using the X-Kinvey-Client-App-Version header. On the backend, we provide you with utilities to write business logic that is conditional based on the version of the client making the request. This tutorial illustrates how to use Kinvey's app versioning support, with sample solutions to the following common problems: A new column is added to a collection in version 2.0 of your application. Since the version 1.x clients do not know about this field, updates from version 1 clients erase the new field and break data integrity. The data type of a column changes between versions 2.0 and 2.1. Such a change may introduce unwanted behavior as well as potential app crashes. Version Format Versions are represented in Kinvey as strings. We strongly recommend (but do not require) using version strings that conform to the pattern major.minor.patch, where all values are integers and minor and patch are optional. Examples of version strings specified in this format include: "1.1.5", "2.6", "3". Setting Version on the Client Depending on the SDK type, you can set the client app version either on instance level or on request level. Android The setClientAppVersion() method on the Client class allows you to set the app version. import com.kinvey.android.Client; … final Client mKinveyClient = new Client.Builder(your_app_key, your_app_secret , this.getApplicationContext()).build(); mKinveyClient.setClientAppVersion("1.1.5"); iOS The clientAppVersion property of Options struct allows you to set the app version. An instance of Options is used as a global configuration attached to Client. The following sample code shows how to set your version in iOS. let options = Options(clientAppVersion: "1.0.0") store.find(options: options) { switch $0 { case .success(let results): print(results) case .failure(let error): print(error) } } JavaScript The Kinvey.appVersion option allows you to set the client app version globally. // on initialization Kinvey.init({ appKey: '<appKey>', appSecret: '<appSecret>' appVersion: '1.1.5' }); // on an existing instance Kinvey.appVersion('1.1.5'); You can override the instance-level version in individual requests, as shown in the code snippet below. // Set the client app version per request Kinvey.DataStore.find('rooms', null, { appVersion: '1.1.5' }); REST If you rely on the Kinvey REST API to connect to your app backend, you can choose to include the X-Kinvey-Client-App-Version header in any request. POST /blob/:appKey HTTP/1.1 Host: baas.kinvey.com Content-Type: application/json X-Kinvey-Client-App-Version: 1.1.5 Authorization: [user credentials] { "_filename": "myFilename.png", "_acl": { ... }, "myProperty": "some metadata", "someOtherProperty": "some more metadata" } Business Logic On the backend side, versioning is handled by your server-side logic. The requestContext module, available in both Flex and Business Logic, provides a clientAppVersion namespace, which includes APIs to read the version of the client app making the request. Assume you've got an app that lets users manage and book conference rooms. Conference rooms are stored with an ID and a name in the rooms collection. Users can add new rooms from the app or update (e.g. rename) existing ones. In version 2.0 of this app, we added a new column to the collection to indicate the capacity of a room. Version 1.x clients do not know about this column, as a result any entities saved from 1.x will erase previously stored values for capacity, effectively setting them to 0. The following code "fixes" requests from version 1.x clients. For rooms already in the collection, it copies the known capacity to the request. For new rooms, it sets a default capacity of 10. Business Logic code: function onPreSave(request, response, modules) { var context = modules.requestContext; var rooms = modules.collectionAccess.collection('rooms'); //The client making this request is older than release 2.0 if (context.clientAppVersion.majorVersion() < 2){ //find the room that matches the _id in the request rooms.findOne({"_id":request.body._id}, function(err, result){ if(err){ // database query error, return an error response // and terminate the request response.error(err); } else { if(!result){ //No room was found, set a default capacity //before saving the request request.body.capacity = 10; response.continue(); } else { //room was found, set the capacity in the request //to the capacity of the room we found request.body.capacity = result.capacity; response.continue(); //continue to save } } } ); } else { //Client version is 2.0 or above response.continue(); } } Flex code: function roomsPreSave(context, complete, modules) { var versionContext = modules.requestContext.clientAppVersion; var rooms = modules.dataStore().collection('rooms'); // The client making this request is older than release 2.0 if (versionContext.majorVersion() < 2) { // Find the room that matches the _id in the request rooms.findById(context.body._id, function(err, result) { if (err) { // Database query error // End the request chain with no further processing complete().setBody(err).runtimeError().done(); } else { if (!result) { // No room was found, set a default capacity context.body.capacity = 10; // Continue up the request chain complete().next(); } else { // Room was found, set the capacity in the request // to the capacity of the room we found context.body.capacity = result.capacity; // Continue up the request chain complete().next(); } } }); } else { //Client version is 2.0 or above // Continue up the request chain complete().next(); } } With this hook, a request from a version 1.0 client that does not include the capacity column will be saved to the collection with it: // Original request { "id": "52f22d694609ba980401dd56", "name": "Conference Room A" } // Modified request { "id": "52f22d694609ba980401dd56", "name": "Conference Room A", "capacity": 10 } Let's now consider the bookings collection, which records room reservations. Until version 2.0 of our app, we represented the conference room in a booking by its name, as shown below: { "user" : "johndoe", "bookingTime": "1396010640", "bookUntil": "1396011600", "room": " } While developing version 2.1, we have realized that it is much better to represent a booked room by an entire JSON object. As a result, we want to change the booking to look like this: { "user" : "johndoe", "bookingTime": "1396010640", "bookUntil": "1396011600", "room": { "id": "52f22d694609ba980401dd56", "name": "Conference Room A", "capacity": 10 }, "_acl": { "creator": "kid_TVcFFbFV1f" }, "_kmd": { "lmt": "2014-03-28T12:46:44.322Z", "ect": "2014-03-28T12:46:44.322Z" }, "_id": "53356f34403f26fb020fd2b7" } This change will pose a problem to older clients, because they were coded to expect the room as a string, not as a JSON object. To solve this, we can write a hook that attaches to the bookings collection and sends data to clients in the format they expect. Business Logic code: function onPostFetch(request, response, modules) { var context = modules.requestContext; var majorVersion = context.clientAppVersion.majorVersion(); var minorVersion = context.clientAppVersion.minorVersion(); if((majorVersion >= 2) && (minorVersion >= 1)){ //version 2.1 and above response.continue(); } else{ //all versions prior to 2.1 var bookings = response.body; //loop over all the bookings in the response for (var i=0; i < bookings.length; i++){ //return the value of "name" for the room response.body[i].room = bookings[i].room.name; } response.continue(); } } Flex code: function roomsPostFetch(context, complete, modules) { var versionContext = modules.requestContext.clientAppVersion; var majorVersion = versionContext.majorVersion(); var minorVersion = versionContext.minorVersion(); if((majorVersion >= 2) && (minorVersion >= 1)){ // Version 2.1 and above complete().next(); } else { // All versions prior to 2.1 var bookings = context.body; // Loop over all the bookings in the response for (var i=0; i < bookings.length; i++){ // Return the value of "name" for the room context.body[i].room = bookings[i].room.name; } complete().next(); } } With this hook, our older clients will continue working, while newer clients will receive data in the enhanced format. Best Practices We recommend following these best practices for your Kinvey apps: Set the client version to 1.0.0 from your first release, even if you don't anticipate breaking changes in the future. Continue to mark each subsequent release with an incremental version, so that you are able to identify the version of any client making a request. When adding conditional logic for versioning, remember to update all code paths that interact with a collection. You may have hooks or endpoints that interact with the data directly. Any such code will also have access to modules.requestContext.clientAppVersionbecause it is automatically available in all business logic contexts.
https://devcenter.kinvey.com/xamarin/tutorials/app-versioning
CC-MAIN-2019-47
refinedweb
1,440
51.75
"License"); 007 * you may not use this file except in compliance with the License. 008 * * $Id: CoroutineManager.java 468653 2006-10-28 07:07:05Z minchau $ 020 */ 021 package org.apache.xml.dtm.ref; 022 023 import java.util.BitSet; 024 025 import org.apache.xml.res.XMLErrorResources; 026 import org.apache.xml.res.XMLMessages; 027 028 029 /** 030 * <p>Support the coroutine design pattern.</p> 031 * 032 * <p>A coroutine set is a very simple cooperative non-preemptive 033 * multitasking model, where the switch from one task to another is 034 * performed via an explicit request. Coroutines interact according to 035 * the following rules:</p> 036 * 037 * <ul> 038 * <li>One coroutine in the set has control, which it retains until it 039 * either exits or resumes another coroutine.</li> 040 * <li>A coroutine is activated when it is resumed by some other coroutine 041 * for the first time.</li> 042 * <li>An active coroutine that gives up control by resuming another in 043 * the set retains its context -- including call stack and local variables 044 * -- so that if/when it is resumed, it will proceed from the point at which 045 * it last gave up control.</li> 046 * </ul> 047 * 048 * <p>Coroutines can be thought of as falling somewhere between pipes and 049 * subroutines. Like call/return, there is an explicit flow of control 050 * from one coroutine to another. Like pipes, neither coroutine is 051 * actually "in charge", and neither must exit in order to transfer 052 * control to the other. </p> 053 * 054 * <p>One classic application of coroutines is in compilers, where both 055 * the parser and the lexer are maintaining complex state 056 * information. The parser resumes the lexer to process incoming 057 * characters into lexical tokens, and the lexer resumes the parser 058 * when it has reached a point at which it has a reliably interpreted 059 * set of tokens available for semantic processing. Structuring this 060 * as call-and-return would require saving and restoring a 061 * considerable amount of state each time. Structuring it as two tasks 062 * connected by a queue may involve higher overhead (in systems which 063 * can optimize the coroutine metaphor), isn't necessarily as clear in 064 * intent, may have trouble handling cases where data flows in both 065 * directions, and may not handle some of the more complex cases where 066 * more than two coroutines are involved.</p> 067 * 068 * <p>Most coroutine systems also provide a way to pass data between the 069 * source and target of a resume operation; this is sometimes referred 070 * to as "yielding" a value. Others rely on the fact that, since only 071 * one member of a coroutine set is running at a time and does not 072 * lose control until it chooses to do so, data structures may be 073 * directly shared between them with only minimal precautions.</p> 074 * 075 * <p>"Note: This should not be taken to mean that producer/consumer 076 * problems should be always be done with coroutines." Queueing is 077 * often a better solution when only two threads of execution are 078 * involved and full two-way handshaking is not required. It's a bit 079 * difficult to find short pedagogical examples that require 080 * coroutines for a clear solution.</p> 081 * 082 * <p>The fact that only one of a group of coroutines is running at a 083 * time, and the control transfer between them is explicit, simplifies 084 * their possible interactions, and in some implementations permits 085 * them to be implemented more efficiently than general multitasking. 086 * In some situations, coroutines can be compiled out entirely; 087 * in others, they may only require a few instructions more than a 088 * simple function call.</p> 089 * 090 * <p>This version is built on top of standard Java threading, since 091 * that's all we have available right now. It's been encapsulated for 092 * code clarity and possible future optimization.</p> 093 * 094 * <p>(Two possible approaches: wait-notify based and queue-based. Some 095 * folks think that a one-item queue is a cleaner solution because it's 096 * more abstract -- but since coroutine _is_ an abstraction I'm not really 097 * worried about that; folks should be able to switch this code without 098 * concern.)</p> 099 * 100 * <p>%TBD% THIS SHOULD BE AN INTERFACE, to facilitate building other 101 * implementations... perhaps including a true coroutine system 102 * someday, rather than controlled threading. Arguably Coroutine 103 * itself should be an interface much like Runnable, but I think that 104 * can be built on top of this.</p> 105 * */ 106 public class CoroutineManager 107 { 108 /** "Is this coroutine ID number already in use" lookup table. 109 * Currently implemented as a bitset as a compromise between 110 * compactness and speed of access, but obviously other solutions 111 * could be applied. 112 * */ 113 BitSet m_activeIDs=new BitSet(); 114 115 /** Limit on the coroutine ID numbers accepted. I didn't want the 116 * in-use table to grow without bound. If we switch to a more efficient 117 * sparse-array mechanism, it may be possible to raise or eliminate 118 * this boundary. 119 * @xsl.usage internal 120 */ 121 static final int m_unreasonableId=1024; 122 123 /** Internal field used to hold the data being explicitly passed 124 * from one coroutine to another during a co_resume() operation. 125 * (Of course implicit data sharing may also occur; one of the reasons 126 * for using coroutines is that you're guaranteed that none of the 127 * other coroutines in your set are using shared structures at the time 128 * you access them.) 129 * 130 * %REVIEW% It's been proposed that we be able to pass types of data 131 * other than Object -- more specific object types, or 132 * lighter-weight primitives. This would seem to create a potential 133 * explosion of "pass x recieve y back" methods (or require 134 * fracturing resume into two calls, resume-other and 135 * wait-to-be-resumed), and the weight issue could be managed by 136 * reusing a mutable buffer object to contain the primitive 137 * (remember that only one coroutine runs at a time, so once the 138 * buffer's set it won't be walked on). Typechecking objects is 139 * interesting from a code-robustness point of view, but it's 140 * unclear whether it makes sense to encapsulate that in the 141 * coroutine code or let the callers do it, since it depends on RTTI 142 * either way. Restricting the parameters to objects implementing a 143 * specific CoroutineParameter interface does _not_ seem to be a net 144 * win; applications can do so if they want via front-end code, but 145 * there seem to be too many use cases involving passing an existing 146 * object type that you may not have the freedom to alter and may 147 * not want to spend time wrapping another object around. 148 * */ 149 Object m_yield=null; 150 151 // Expose??? 152 final static int NOBODY=-1; 153 final static int ANYBODY=-1; 154 155 /** Internal field used to confirm that the coroutine now waking up is 156 * in fact the one we intended to resume. Some such selection mechanism 157 * is needed when more that two coroutines are operating within the same 158 * group. 159 */ 160 int m_nextCoroutine=NOBODY; 161 162 /** <p>Each coroutine in the set managed by a single 163 * CoroutineManager is identified by a small positive integer. This 164 * brings up the question of how to manage those integers to avoid 165 * reuse... since if two coroutines use the same ID number, resuming 166 * that ID could resume either. I can see arguments for either 167 * allowing applications to select their own numbers (they may want 168 * to declare mnemonics via manefest constants) or generating 169 * numbers on demand. This routine's intended to support both 170 * approaches.</p> 171 * 172 * <p>%REVIEW% We could use an object as the identifier. Not sure 173 * it's a net gain, though it would allow the thread to be its own 174 * ID. Ponder.</p> 175 * 176 * @param coroutineID If >=0, requests that we reserve this number. 177 * If <0, requests that we find, reserve, and return an available ID 178 * number. 179 * 180 * @return If >=0, the ID number to be used by this coroutine. If <0, 181 * an error occurred -- the ID requested was already in use, or we 182 * couldn't assign one without going over the "unreasonable value" mark 183 * */ 184 public synchronized int co_joinCoroutineSet(int coroutineID) 185 { 186 if(coroutineID>=0) 187 { 188 if(coroutineID>=m_unreasonableId || m_activeIDs.get(coroutineID)) 189 return -1; 190 } 191 else 192 { 193 // What I want is "Find first clear bit". That doesn't exist. 194 // JDK1.2 added "find last set bit", but that doesn't help now. 195 coroutineID=0; 196 while(coroutineID<m_unreasonableId) 197 { 198 if(m_activeIDs.get(coroutineID)) 199 ++coroutineID; 200 else 201 break; 202 } 203 if(coroutineID>=m_unreasonableId) 204 return -1; 205 } 206 207 m_activeIDs.set(coroutineID); 208 return coroutineID; 209 } 210 211 /** In the standard coroutine architecture, coroutines are 212 * identified by their method names and are launched and run up to 213 * their first yield by simply resuming them; its's presumed that 214 * this recognizes the not-already-running case and does the right 215 * thing. We seem to need a way to achieve that same threadsafe 216 * run-up... eg, start the coroutine with a wait. 217 * 218 * %TBD% whether this makes any sense... 219 * 220 * @param thisCoroutine the identifier of this coroutine, so we can 221 * recognize when we are being resumed. 222 * @exception java.lang.NoSuchMethodException if thisCoroutine isn't 223 * a registered member of this group. %REVIEW% whether this is the 224 * best choice. 225 * */ 226 public synchronized Object co_entry_pause(int thisCoroutine) throws java.lang.NoSuchMethodException 227 { 228 if(!m_activeIDs.get(thisCoroutine)) 229 throw new java.lang.NoSuchMethodException(); 230 231 while(m_nextCoroutine != thisCoroutine) 232 { 233 try 234 { 235 wait(); 236 } 237 catch(java.lang.InterruptedException e) 238 { 239 // %TBD% -- Declare? Encapsulate? Ignore? Or 240 // dance widdershins about the instruction cache? 241 } 242 } 243 244 return m_yield; 245 } 246 247 /** Transfer control to another coroutine which has already been started and 248 * is waiting on this CoroutineManager. We won't return from this call 249 * until that routine has relinquished control. 250 * 251 * %TBD% What should we do if toCoroutine isn't registered? Exception? 252 * 253 * @param arg_object A value to be passed to the other coroutine. 254 * @param thisCoroutine Integer identifier for this coroutine. This is the 255 * ID we watch for to see if we're the ones being resumed. 256 * @param toCoroutine Integer identifier for the coroutine we wish to 257 * invoke. 258 * @exception java.lang.NoSuchMethodException if toCoroutine isn't a 259 * registered member of this group. %REVIEW% whether this is the best choice. 260 * */ 261 public synchronized Object co_resume(Object arg_object,int thisCoroutine,int toCoroutine) throws java.lang.NoSuchMethodException 262 { 263 if(!m_activeIDs.get(toCoroutine)) 264 throw new java.lang.NoSuchMethodException(XMLMessages.createXMLMessage(XMLErrorResources.ER_COROUTINE_NOT_AVAIL, new Object[]{Integer.toString(toCoroutine)})); //"Coroutine not available, id="+toCoroutine); 265 266 // We expect these values to be overwritten during the notify()/wait() 267 // periods, as other coroutines in this set get their opportunity to run. 268 m_yield=arg_object; 269 m_nextCoroutine=toCoroutine; 270 271 notify(); 272 while(m_nextCoroutine != thisCoroutine || m_nextCoroutine==ANYBODY || m_nextCoroutine==NOBODY) 273 { 274 try 275 { 276 // System.out.println("waiting..."); 277 wait(); 278 } 279 catch(java.lang.InterruptedException e) 280 { 281 // %TBD% -- Declare? Encapsulate? Ignore? Or 282 // dance deasil about the program counter? 283 } 284 } 285 286 if(m_nextCoroutine==NOBODY) 287 { 288 // Pass it along 289 co_exit(thisCoroutine); 290 // And inform this coroutine that its partners are Going Away 291 // %REVIEW% Should this throw/return something more useful? 292 throw new java.lang.NoSuchMethodException(XMLMessages.createXMLMessage(XMLErrorResources.ER_COROUTINE_CO_EXIT, null)); //"CoroutineManager recieved co_exit() request"); 293 } 294 295 return m_yield; 296 } 297 298 /** Terminate this entire set of coroutines. The others will be 299 * deregistered and have exceptions thrown at them. Note that this 300 * is intended as a panic-shutdown operation; under normal 301 * circumstances a coroutine should always end with co_exit_to() in 302 * order to politely inform at least one of its partners that it is 303 * going away. 304 * 305 * %TBD% This may need significantly more work. 306 * 307 * %TBD% Should this just be co_exit_to(,,CoroutineManager.PANIC)? 308 * 309 * @param thisCoroutine Integer identifier for the coroutine requesting exit. 310 * */ 311 public synchronized void co_exit(int thisCoroutine) 312 { 313 m_activeIDs.clear(thisCoroutine); 314 m_nextCoroutine=NOBODY; // %REVIEW% 315 notify(); 316 } 317 318 /** Make the ID available for reuse and terminate this coroutine, 319 * transferring control to the specified coroutine. Note that this 320 * returns immediately rather than waiting for any further coroutine 321 * traffic, so the thread can proceed with other shutdown activities. 322 * 323 * @param arg_object A value to be passed to the other coroutine. 324 * @param thisCoroutine Integer identifier for the coroutine leaving the set. 325 * @param toCoroutine Integer identifier for the coroutine we wish to 326 * invoke. 327 * @exception java.lang.NoSuchMethodException if toCoroutine isn't a 328 * registered member of this group. %REVIEW% whether this is the best choice. 329 * */ 330 public synchronized void co_exit_to(Object arg_object,int thisCoroutine,int toCoroutine) throws java.lang.NoSuchMethodException 331 { 332 if(!m_activeIDs.get(toCoroutine)) 333 throw new java.lang.NoSuchMethodException(XMLMessages.createXMLMessage(XMLErrorResources.ER_COROUTINE_NOT_AVAIL, new Object[]{Integer.toString(toCoroutine)})); //"Coroutine not available, id="+toCoroutine); 334 335 // We expect these values to be overwritten during the notify()/wait() 336 // periods, as other coroutines in this set get their opportunity to run. 337 m_yield=arg_object; 338 m_nextCoroutine=toCoroutine; 339 340 m_activeIDs.clear(thisCoroutine); 341 342 notify(); 343 } 344 }
http://xalan.apache.org/xalan-j/apidocs/src-html/org/apache/xml/dtm/ref/CoroutineManager.html
CC-MAIN-2018-34
refinedweb
2,277
54.12
Search... FAQs Subscribe Pie FAQs Recent topics Flagged topics Hot topics Best topics Search... Search within OO, Patterns, UML and Refactoring: Tim Cooke Campbell Ritchie Ron McLeod Liutauras Vilda Jeanne Boyarsky Sheriffs: Junilu Lacar Rob Spoor Paul Clapham Saloon Keepers: Tim Holloway Tim Moores Jesse Silverman Stephan van Hulst Carey Brown Bartenders: Al Hobbs Piet Souris Frits Walraven Forum: OO, Patterns, UML and Refactoring Using Adapter pattern to improve pluggability Junilu Lacar Sheriff Posts: 16767 281 I like... posted 1 month ago Number of slices to send: Optional 'thank-you' note: Send Relevant Links Adapter Pattern : Number Names exercise on Cyber-Dojo: Global Day of CodeRetreat (GDCR): TL;DR After working on the Number Names exercise on Cyber Dojo, I ended up with a fairly comprehensive set of tests for it. I wanted to share my test with others so they could use it to exercise their own implementations. I wanted to make it as easy as possible for them to come up with their own implementations and be able to quickly plug in to my test code with minimal changes on either side. I used the Adapter Pattern to achieve this. The Question Do you have any other ideas on how I might be able to achieve the goal of pluggability of different implementations without requiring implementations that implement a specific interface? That is, the foreign implementation only needs to take an int as input and return a String as output to be able to get plugged in to this test class. Skip directly to "The Code" section below if you want to get right down to business. The Back Story I am a member of an Agile Technical Coach community of practice (CoP) at work and we've been doing exercises in TDD and mob programming. I thought the Number Names exercise would give us plenty of things to study and learn from as coaches and developers. This was one of the problems I got to work on last week during Global Day of CodeRetreat 2021 . The link above takes you to where we ended up during GDCR. The road getting there wasn't pretty as you might be able to tell if you go back and look at the different iterations the code went through. That led me to working on the problem on my own and trying to do Test-Driven Development (TDD) on it. It wasn't pretty either but I got a lot farther than the group did; I actually wrote a complete solution that supported all positive int values and zero, which seemed like one of a few special cases I dealt with. I came up with a recursive solution to the problem. Others will surely come up with different implementations but I'd like to be able to reuse this test to check their implementations. The API to their solutions are likely going to be different and incompatible with my test though. Forcing other people to extend my own implementation is a bit onerous and puts the burden of making things work with my test on them. That's not going to work well. On the other hand, I don't want to have to change this test significantly just so I can run it against a different implementation. This is where the Adapter Pattern can help. By having an adapter that can wrap a foreign implementation and adapt its interface to the interface my test expects, I only have to change a single line in the test to use a different implementation. The Code I'm still tweaking this code but this is what I have at the moment: import org.junit.jupiter.api.DisplayNameGeneration; import org.junit.jupiter.api.DisplayNameGenerator; import org.junit.jupiter.api.Nested; import org.junit.jupiter.api.Test; import org.junit.jupiter.params.ParameterizedTest; import org.junit.jupiter.params.provider.CsvSource; import java.util.function.Function; import static org.junit.jupiter.api.Assertions.assertEquals; @DisplayNameGeneration(DisplayNameGenerator.ReplaceUnderscores.class) class NumberToWordsTest { class ImplementationAdapter extends NumberToWords { final Function<Integer, String> adaptee; ImplementationAdapter(Function<Integer, String> adaptee) { this.adaptee = adaptee; } @Override public String nameFor(int i) { return adaptee.apply(i); } } NumberToWords spellOut = /*** Change the following line to use a different implementation ***/ // new NumberToWords(); // Java version new ImplementationAdapter((num) -> new NumberNames().nameOf(num)); // Kotlin version @ParameterizedTest(name = "{0} -> [{1}]") @CsvSource({ // less than 20 "0, zero", "1, one", "2, two", "9, nine", // teens "10, ten", "11, eleven", "12, twelve", "19, nineteen", // tens only "20, twenty", "30, thirty", "90, ninety", // tens with ones "21, twenty one", "31, thirty one", "99, ninety nine" }) void less_than_100(int number, String expectedName) { assertEquals(expectedName, spellOut.nameFor(number)); } @Nested class from_100_onwards { @ParameterizedTest(name = "{0} -> [{1}]") @CsvSource({ "100, one hundred", "200, two hundred", "900, nine hundred", }) void only_hundreds(int number, String expectedName) { assertEquals(expectedName, spellOut.nameFor(number)); } @ParameterizedTest(name = "{0} -> [{1}]") @CsvSource({ "101, one hundred and one", "888, eight hundred and eighty eight", "999, nine hundred and ninety nine" }) void hundreds_and_N(int number, String expectedName) { assertEquals(expectedName, spellOut.nameFor(number)); } @ParameterizedTest(name = "{0} -> [{1}]") @CsvSource({ "200, two hundred", "201, two hundred and one", "2_000, two thousand", "2_001, two thousand and one", "2_000_000, two million", "2_000_001, two million and one", "2_000_000_000, two billion", "2_000_000_001, two billion and one", "2_002_002_001, two billion two million two thousand and one" }) void recursive(int number, String expectedName) { assertEquals(expectedName, spellOut.nameFor(number)); } @ParameterizedTest(name = "{0} -> [{1}]") @CsvSource({ "2_001, two thousand and one", "2_000_001, two million and one", "2_000_000_001, two billion and one", "2_000_001_000, two billion one thousand", // <<<< wanted: two billion and one thousand "2_001_000_000, two billion one million", // <<<< wanted?: two billion and one million "2_001_000, two million one thousand", // <<<< wanted?: two million and one thousand "2_002_002_001, two billion two million two thousand and one", "2_000_002_001, two billion two thousand and one" }) void special_cases(int number, String expectedName) { assertEquals(expectedName, spellOut.nameFor(number)); } @Test void max_integer() { assertEquals("two billion one hundred and forty seven million four hundred and eighty three thousand six hundred and forty seven", spellOut.nameFor(Integer.MAX_VALUE)); } } } Junilu Lacar Sheriff Posts: 16767 281 I like... posted 1 month ago Number of slices to send: Optional 'thank-you' note: Send So, say for example you had an implementation you decided to call NumberConverter and the method name you settled on was toWords , in order to use your implementation, I'd just have to import your class and change line 31 to: new ImplementationAdapter((num) -> new NumberConverter().toWords(num)); // another implementation Even if you decided to use a static method instead of an instance method, I'd still have to only write: new ImplementationAdapter((num) -> NumberConverter.toWords(num)); // static method implementation Don't get me started about those stupid light bulbs . reply reply Bookmark Topic Watch Topic New Topic Boost this thread! Similar Threads Numbers to Word coversion Duplicate "If" Logic In An Algorithm can anyone solve this ? Word Conversion Asmt 1.4 Suggestion More...
https://www.coderanch.com/t/747196/engineering/Adapter-pattern-improve-pluggability
CC-MAIN-2022-05
refinedweb
1,142
50.67
Created on 2013-11-25 09:01 by vstinner, last changed 2018-07-04 14:56 by xflr6. This issue is now closed. subprocess.Popen has a race condition on Windows with file descriptors: if two threads spawn subprocesses at the same time, unwanted file descriptors may be inherited, which lead to annoying issues like "cannot delete a file because it is open by another process". For the issue #19575 for an example of such bug. Since Windows Vista, a list of handle which should be inherited can be specified in CreateProcess() using PROC_THREAD_ATTRIBUTE_HANDLE_LIST with STARTUPINFOEX. It avoids the need to mark the handle temporarly inheritable. For more information, see: The purpose of this issue is to avoiding having to call CreateProcess() with bInheritHandles parameter set to TRUE on Windows, and avoid calls to self._make_inheritable() in subprocess.Popen._get_handles(). Currently, bInheritHandles is set to TRUE if stdin, stdout and/or stderr parameter of Popen constructor is set (to something else than None). Using PROC_THREAD_ATTRIBUTE_HANDLE_LIST, handles don't need to be marked as inheritable in the parent process, and CreateProcess() can be called with bInheritHandles parameter set to FALSE. UpdateProcThreadAttribute() documentation says that "... handles must be created as inheritable handles ..." and a comment says that "If using PROC_THREAD_ATTRIBUTE_HANDLE_LIST, pass TRUE to bInherit in CreateProcess. Otherwise, you will get an ERROR_INVALID_PARAMETER." Seriously? What is the purpose of PROC_THREAD_ATTRIBUTE_HANDLE_LIST if it does not avoid the race condition? It's "just" to not inherit some inheritable handles? In Python 3.4, files and sockets are created non-inheritable by default, so PROC_THREAD_ATTRIBUTE_HANDLE_LIST may not improve anything :-/ I. Though Python has taken measures to mark handles as non-inheritable there is still a possible race due to having to create inheritable handles while creating processes with stdio pipes (subprocess). Attached is a Patch that implements subprocess.Popen(close_fds=True) with stdio handles on Windows using PROC_THREAD_ATTRIBUTE_HANDLE_LIST, which plugs that race completely. I implemented this by adding the attribute STARTUPINFO._handleList, which when passed to _winapi.CreateProcess, will be passed to CreateProcess as a PROC_THREAD_ATTRIBUTE_HANDLE_LIST. subprocess.py can than use this attribute as needed with inherit_handles=True to only inherit the stdio handles. The STARTUPINFO._handleList attribute can also be used to implement pass_fds later on. Though the exact behavior of how to convert a file descriptor list to a handle list might be a bit sensitive, so I left that out for now. This patch obviously doesn't support Windows XP but Python 3 doesn't support XP anymore either. Implementing pass_fds on Windows is a problem if Popen has to implement the undocumented use of the STARTUPINFO cbReserved2 and lpReserved2 fields to inherit CRT file descriptors. I suppose we could implement this ourselves in _winapi since it's unlikely that the data format will ever change. Just copy what the CRT's accumulate_inheritable_handles() function does, but constrained by an array of file descriptors. Second version of the patch after review by eryksun. Please pay attention to the hack in _execute_child due to having to temporarily override the handle_list if the user supplied one. As for pass_fds: as you noted, it has it's own share of complexities and issues and I think it's best to leave it to a separate patch/issue. Python already has a multiprocessing module which is able to pass handles (maybe also FD? I don't know) to child processes on Windows. I found some code in Lib/multiprocessing/reduction.py: - duplicate() - steal_handle() - send_handle() But the design doesn't really fit the subprocess module, since this design requires that the child process communicates with the parent process. On UNIX, fork()+exec() is used, so we can execute a few instructions after fork, which allows to pass an exception from the child to the parent. On Windows, CreateProcess() is used which doesn't allow directly to execute code before running the final child process. The PEP 446 describes a solution using a wrapper process, so parent+wrapper+child, 3 processes. IMHO the best design for subprocess is really PROC_THREAD_ATTRIBUTE_HANDLE_LIST. I dislike adding a lpAttributeList attribute: it's too close to the exact implementation of Windows may change in the future. I would prefer a more high level API. Since the only known use case today is to pass handles, I propose to focus on this use case: add a new pass_handles parameter to Popen, similar to pass_fds. I see that your patch is able to set close_fds to True on Windows: great job! It would be a great achievement to finally fix this last known race condition of subprocess on Windows! So thank you for working on this! > As for pass_fds: as you noted, it has it's own share of complexities and issues and I think it's best to leave it to a separate patch/issue. pass_fds would be "nice to have", but I prefer to stick first to the native and well supported handles on Windows. For me, using file descriptors on Windows is more a "hack" to be able to write code working on Windows and UNIX, but since it's not natively supported on Windows, it comes with own set of issues. IMHO it's better supported to work on handles. I removed previous_handle_list in _execute_child since I noticed subprocess already clobbers the other attributes in startupinfo anyhow. I figured there will be some discussion about how to pass the handle list, so here's my two cents: * subprocess already exposes a bit of Windows specific flags like creationflags and STARTUPINFO. * Windows doesn't really break it's API in backwards incompatible ways often (Heck it barely breaks it ever, which is why we have so many Ex functions and reserved parameters :P). * The _winapi module tries to expose WinAPI functions as is. So I implemented this as an internal attribute on STARTUPINFO, in the first version, since I wasn't sure we want this exposed to users, but I still wanted to try and mimic the original WinAPI functions internally. The lpAttributeList is a change requested by eryksun that brings it even closer to WinAPI and exposes it for further extension with additional attributes. > Python already has a multiprocessing module which is able to pass > handles (maybe also FD? I don't know) to child processes on > Windows. Popen doesn't implement the undocumented CRT protocol that's used to smuggle the file-descriptor mapping in the STARTUPINFO cbReserved2 and lpReserved2 fields. This is a feature of the CRT's spawn and exec functions. For example: fdr, fdw = os.pipe() os.set_inheritable(fdw, True) os.spawnl(os.P_WAIT, os.environ['ComSpec'], 'cmd /c "echo spam >&%d"' % fdw) >>> os.read(fdr, 10) b'spam \r\n' We don't have to worry about implementing fd inheritance so long as os.spawn* uses the CRT. Someone that needs this functionality can simply be instructed to use os.spawn. > I dislike adding a lpAttributeList attribute: it's too close to > the exact implementation of Windows may change in the future. If you're going to worry about lpAttributeList, why stop there? Aren't dwFlags, wShowWindow, hStdInput, hStdOutput, and hStdError also too close to the exact implementation? My thoughts when suggesting this were actually to make this as close to the underlying API as possible, and extensible to support other attributes if there's a demand for it. Passing a list of handles is atypical usage, and since Python and subprocess use file descriptors instead of Windows handles, I prefer isolating this in a Windows structure such as STARTUPINFO, rather than adding even more confusion to Popen's constructor. > Since the only known use case today is to pass handles In the review of the first patch, I listed 3 additional attributes that might be useful to add in 3.7: IDEAL_PROCESSOR, GROUP_AFFINITY, and PREFERRED_NODE (simplified by the fact that 3.7 no longer supports Vista). Currently the way to set the latter two is to use the built-in `start` command of the cmd shell. > I propose to focus on this use case: add a new pass_handles parameter > to Popen, similar to pass_fds. This is a messy situation. Python 3's file I/O is built on the CRT's POSIX layer. If it had been implemented directly on the Windows API using handles, then pass_fds would obviously use handles. That's the current situation with socket module because Winsock makes no attempt to hide AFD handles behind POSIX file descriptors. Popen's constructor accepts file descriptors -- not Windows handles -- for its stdin, stdout, and stderr arguments, and the parameter to control inheritance is named "close_fds". It seems out of place to add a "pass_handles" parameter. I have read some of and it documents quite a bunch of nastiness with PROC_THREAD_ATTRIBUTE_HANDLE_LIST in Windows Vista/7. Windows is so much fun sometimes :P Essentially console handles in Windows before Windows 8 are user mode handles and not real kernel handles. Those user mode handles are inherited by a different mechanism than kernel handles and regardless of PROC_THREAD_ATTRIBUTE_HANDLE_LIST, and if passed in PROC_THREAD_ATTRIBUTE_HANDLE_LIST will cause it to fail in weird ways. Those user mode console handles have the lower two bits set. The lower two bits in Windows are reserved for tagging such special handles. Also in all versions you can't pass in an empty handle list, but a list with just a NULL handle works fine. See: I attached a version of the patch with a hack around those issues based on what I read, but I can't test that it actually fixes the issues since I don't have a Windows Vista or 7 system around. It's been a while since this got any attention... In case you didn't get notified by Rietveld, I made a couple suggestions on your latest patch. Also, if you wouldn't mind, please update the patch to apply cleanly to 3.7 -- especially since STARTUPINFO now has an __init__ method. Added the 4th version after review by eryksun (In rietveld). Added the 5th version after another review by eryksun (In rietveld). Oh LOL!!! I missed the fact that Python finally moved to GitHub! Rebased the patch on top of the Git master XD (And removed accidentally committed code... sorry...) I still submitted as a patch since I don't know if the infrastructure handles moving a patch to a PR well :P OK Rietveld definitely punted on the git patch (I guess it's only for the old Mercurial repo, I don't think it actually even support Git...) I will try re-submitting the patch as a PR so that it can be reviewed easily. GitHub PR bit rotting away... :P Just a friendly reminder :) I. We. The PR has been sitting there for quite a while now... New changeset b2a6083eb0384f38839d3f1ed32262a3852026fa by Victor Stinner (Segev Finer) in branch 'master': bpo-19764: Implemented support for subprocess.Popen(close_fds=True) on Windows (#1218) Copy of my comment on the PR. > Merged from master... Again... Hopefully this won't end up missing 3.7 entirely... 😔 Oops sorry, I wanted this feature but I didn't follow closely the PR. I don't know well the Windows API, so I didn't want to take the responsability of reviewing (approving) such PR. But I see that @zooba and @gpshead approved it, so I'm now confortable to merge it :-) Moreover, AppVeyor validated the PR, so let me merge it. I prefer to merge the PR right now to not miss the Python 3.7 feature freeze, and maybe fix issues later if needed, before 3.7 final. Thank you @segevfiner for this major subprocess enhancement. I really love to see close_fds default changing to True on Windows. It will help to fix many corner cases which are very tricky to debug. Sorry for the slow review, but the subprocess is a critical module of Python, and we lack of Windows developers to review changes specific to Windows. Thank you Sergev Finer for finishing the implementation of my PEP 446. Supporting to only inherit a set of Windows handles was a "small note" my PEP 446, mostly because I didn't feel able to implement the feature, but also because we still supported Windows versions which didn't implement this feature (PROC_THREAD_ATTRIBUTE_HANDLE_LIST) if I recall correctly. Thanks Eryk Sun, Gregory P. Smith and Steve Dower for the reviews and help on getting this nice feature into Python 3.7! AFAIU, this change broke the following usage of subprocess on Windows (re-using a subprocess.STARTUPINFO instance to hide the command window): import os, subprocess STARTUPINFO = subprocess.STARTUPINFO() STARTUPINFO.dwFlags |= subprocess.STARTF_USESHOWWINDOW STARTUPINFO.wShowWindow = subprocess.SW_HIDE # raises OSError: [WinError 87] # in the second loop iteration starting with Python 3.7 for i in range(2): print(i) with open(os.devnull, 'w') as stderr: subprocess.check_call(['attrib'], stderr=stderr, startupinfo=STARTUPINFO) AFAICT, this works on Python 2.7, 3.4, 3.5, and 3.6 Sebastian, the problem in this case is that startupinfo.lpAttributeList['handle_list'] contains the duplicated standard-handle values from the previous call, which were closed and are no longer valid. subprocess.Popen has always modified STARTUPINFO in place, including dwFlags, hStdInput, hStdOutput, hStdError, and wShowWindow. This update follows suit to also modify lpAttributeList in place. This issue is closed. Please create a new issue if you think Popen should use a deep copy of startupinfo instead, to allow callers to reuse a single STARTUPINFO instance. Or the new issue could propose only to document the existing behavior. Thanks Eryk. Done:
https://bugs.python.org/issue19764
CC-MAIN-2019-22
refinedweb
2,253
64
Get the latest Cisco news in this December issue of the Cisco Small Business Monthly Newsletter I have an SR520-EF-K9 and I am running CCA version 3.0(1). I cannot get my server to bind to a specific IP address. Also, if I do bind a MAC address in the DHCP, I have had to literally reset my router to factory defaults. What is the CLI commands to bind a specific computer on a given network to a specific IP? I talked with Gaurav Sood [gasood@cisco.com] and got a decent answer. # configure terminal (config) # ip dhcp pool uniquePoolName (dhcp-config) # host 192.168.75.5 255.255.255.0 (dhcp-config) # client-identifier 0100.14BF1CFF.28 # (MAC address is 00:14:BF:1C:FF:28) (dhcp-config) # exit (config) # exit # show run | sec ip dhcp pool ip dhcp pool inside import all network 192.168.75.0 255.255.255.0 default-router 192.168.75.1 ip dhcp pool downstairs host 192.168.75.5 255.255.255.0 client-identifier 0100.14bf.1cff.28 # write mem NOTES: 1) You need a unique pool name for each assignment (minor detail left out of the documentation). 2) The 192.168.75.5 and 255.255.255.0 can be adjusted to the VLAN you want it on.
https://community.cisco.com/t5/small-business-routers/sr520-cca-dhcp-bindings-problem/m-p/1758436
CC-MAIN-2020-10
refinedweb
222
68.97
I've mentioned in my previous posts that Haxe and Javascript (well more Typescirot) are very similar syntax wise. However, if you've come from javascript to Haxe you'll notice a few odd things with the syntax that don't make sense. The creator of Haxe tried to keep to language as simple as possible even though you can do some really complex things with it if you've come from C# some of these might make sense, but for js devs, there are a few weird differences. I've listed some below. 1 - Constructors methods This was quite confusing to me at first but the more I wrote in Haxe the more it's implementation made sense. // Javascript class MyAwesomeClass { constructor() { // Stuff to initialize } } // Haxe class MyAwesomeClass { public function new() { // Stuff to initialize } } 2 - All variables are var (well almost) Again I would have to agree with the Haxe way on this. Javascript gets a bit confusing when creating variables, especially in classes. Sometimes you use this, sometimes you can use var, let or const depending on if you want a constant scoped variable or not, and sometimes, you don't write anything. In Haxe you only need to remember one keyword var. // Javascript class MyAwesomeClass { outsideMethod; constructor() { this.outsideMethod = 10; } myAwesomeMethod() { const constantInsideMethod = 15; let insideMethod = 10; } } // Haxe class MyAwesomeClass { var outsideMethod:Int; public function new() { outsideMethod = 10; } function myAwesomeMethod() { final constantInsideMethod:Int = 15; var insideMethod:Int = 10; } } 3. The overwrite keyword Overriding an inherited method is something I've never done in javascript but do quite often in Haxe so I'm not sure if the js example I've written below will work. // Javascript class MySecondAwesomeClass extends MyAwesomeClass { myAwesomeMethod() { var newVariable = 200; super.myAwesomeMethod(); } } // Haxe class MySecondAwesomeClass extends MyAwesomeClass { override myAwesomeMethod() { var newVariable:Int = 200; } } 4. Package instead of export This is a really small change that you probably would have figured out without this article but I'll put it here nevertheless. // Javascript export default class MySecondAwesomeClass extends MyAwesomeClass { myAwesomeMethod() { var newVariable = 200; super.myAwesomeMethod(); } } // Haxe package; // This should be the first line in the file class MySecondAwesomeClass extends MyAwesomeClass { override myAwesomeMethod() { var newVariable:Int = 200; } } 5. Different array methods There are probably loads other default methods that are different in Haxe and Javascript but the array methods, in my opinion, are used a lot so it's good to know they're slightly different in Haxe. // Typescript class MyThirdAwesomeClass { testArrayMap():Array<number> { var myArray:Array<number> = [0, 1, 2]; return myArray.map(function(number:number, index:number) { return number + index; }); } } // Haxe // the mapi method won't work without adding using Lambda outside of this class class MyThirdAwesomeClass { function testArrayMap():Array<Int> { var myArray:Array<Int> = [0, 1, 2]; return myArray.mapi(function(Index:Int, Number:Int) { return Number + Index; }); } } 6. The Using keyword This is what I meant by the using keyword in the previous example. There's only a Haxe example for this one because you can't do it in Javascript. // Haxe using Lambda; class MyThirdAwesomeClass { function testArrayMap():Array<Int> { var myArray:Array<Int> = [0, 1, 2]; return myArray.mapi(function(Index:Int, Number:Int) { return Number + Index; }); } } It's a bit difficult to explain if you haven't used it before but essentially if you have created a static method in one class and want to use it in another there are two ways you could do it this way. import Lambda; // usage Lambda.mapi(myArray, ...) Or this way: using Lambda // usage myArray.mapi(...) If you import with using the static method can be applied directly to the variable as if it's a method that belongs to it. 7. For Loops There's a pretty cool way of doing incrementing for loops kind of with the spread syntax in javascript. // Javascript for (let i = 0; i < 10; i++) { console.log(i); } // Haxe for (i in 0...9) { trace(i); } There's also a cool little shorthand you can do with Haxe" // Haxe for (i in 0...9) trace(i); 8. Arrow functions There were introduced properly in Haxe 4 (same as the final keyword) so you won't see it in many of the examples online, it's slightly different from Javascripts implementation but not massively. // Javascript () => console.log("Arrow function in Javascript"); // Haxe () -> trace("Arrow function in Haxe"); - Destructuring I really love this feature in Javascript and do it whenever I can, sadly there isn't an easy way to do this in Haxe :( // Javascript const obj = { id: 1, name: 'Fizz'}; const {id, name} = obj; console.log(id); console.log(name); // Haxe final obj = { id: 1, name: 'Fizz'}; var newId; var newName; switch obj { case { id: newId, name: newName }: trace(newId); trace(newName); } - The spread operator Again this is something I love using in Javascripts that once again doesn't have an equivalent in Haxe 4. // Javascript const arr1 = [0, 1, 2]; arr1 = [...arr1, 3, 4, 5]; console.log(arr1); //Haxe final arr1 = [0, 1, 2]; final arr2 = [3, 4, 5]; final newArr = arr1.concat(arr2); trace(newArr); Conclusion So as you can see there are loads of similarities between Haxe and Javascript syntax wise, (also Haxe and Actionscript, and Java/C#). But there are a few small things that might trip you up if you're coming from JS. Hopefully, this article will help you out with that. Sources Posted on by: Richard Oliver Bray Front end developer @OctopusLabs. Making games in Haxeflixel @hello_lightbulb. Novice photographer and player of video games. Discussion
https://practicaldev-herokuapp-com.global.ssl.fastly.net/richardbray/10-syntax-differences-between-haxe-and-javascript-1838
CC-MAIN-2020-40
refinedweb
915
61.26
0 Hi, I was coding the following program.. // Program takes work hours and rate per hour and gives total // salary after tax, if work hours are over 40, its increased by 1.5x. #include <iostream> #include <conio.h> using namespace std; int main() { double hr_work, hr_rate, taxed_total, total; cout <<"Enter Hours Worked (Zero to Quit): "; cin >> hr_work; cout <<"\nEnter Hourly Rate: "; cin >> hr_rate; while(hr_work>0 && hr_rate>0){ total = hr_work*hr_rate; taxed_total = total - (0.18 * total); cout <<"\nGross Wage = "<<total; cout <<"\nGross Wage after Tax Deduction = "<<taxed_total; cout <<"\n\n---------------------------------\nEnter Hours Worked (Zero to Quit): "; cin >> hr_work; cout <<"\nEnter Hourly Rate: "; cin >> hr_rate; } getch(); return 0; } Now how do i make the program do the following: If the the number Hour entered it : 45 , then the following should happen: 40 + (5 x 1.5) = 47.5 and that 47.5 should be then multiplied with hourly rate and so on.. so any help is appreciated:-/
https://www.daniweb.com/programming/software-development/threads/157078/help-required-with-salary-program
CC-MAIN-2017-13
refinedweb
156
70.13
The React.Component class is one of the two ways — along with functional components — that you can create React components. The React.Component API offers more features that help in tweaking your component’s behavior. Jump ahead: - Mounting - Updating - Unmounting constructor() render() setState() componentDidMount() componentDidUpdate() componentWillUnmount() shouldComponentUpdate componentDidCatch() static getDerivedStateFromProps() static getDerivedStateFromError() getSnapshotBeforeUpdate() - Class properties Component lifecycle The component lifecycle in a class component can be defined as a collection of methods that can be accessed to run some code during various stages of the component itself. The component lifecycle methods can be grouped into three phases: Mounting Under the hood, React uses a virtual DOM, an in-memory representation of the DOM that is being rendered in the browser. The virtual DOM is a copy of the DOM that can be updated without using any of the DOM APIs. Updates to the virtual DOM tell React to compare the current version of the virtual DOM to the previous version. Once React knows which virtual DOM objects have changed, React then updates only those objects in the real DOM using ReactDOM. For example: ReactDOM.render(<App />, domContainer); The mounting phase is the process of creating instances and DOM nodes corresponding to React components and inserting them into the DOM. The following are the lifecycle methods available in the mounting phase, called in this order: constructor() static getDerivedStateFromProps() render() componentDidMount() Updating When a component is added to the DOM — i.e., the mounting process — that component is still stored in the memory so that React is aware whenever the state changes. The updating phase can be defined when React detects changes to a component’s state or props and re-renders it. The following are the lifecycle methods available in the updating phase, called in this order: static getDerivedStateFromProps() shouldComponentUpdate() render() getSnapshotBeforeUpdate() componentDidUpdate() Unmounting The unmounting phase is when components that are no longer needed are destroyed/removed from the DOM. This phase is particularly useful for performing cleanup actions such as removing event listeners, canceling network requests, or cleaning up previous subscriptions. The following is the one lifecycle method available in the unmounting phase: componentWillUnmount() You can check out this interactive and accessible diagram that explains when each lifecycle method is called. Ready for a look under the hood? Take a deep dive into React’s call order post-Hooks. Lifecycle methods Next, we’ll review the various lifecycle methods available to a React class component. constructor() The constructor for a React component is called before it is mounted. Code that you want to run when the component is being initialized should be placed here, and that could be initializing local state or binding event handlers. Even though you can initialize the state in a constructor, you should not call the setState() function in it. constructor(props) { super(props); // Don't call this.setState() here! this.state = { counter: 0 }; this.handleClick = this.handleClick.bind(this); } It’s important to add the super(props) line in every constructor method before any other line because it assigns the this value to the props. This is because, by definition, the super() function refers to the parent class constructor, which, in this case, is React.Component. If you were to try to access the component’s props without calling super(props), you’d get an undefined error. So, calling super(props) allows you to access this in a constructor, which, in turn, allows you to access the props in the constructor. Note: You don’t need to implement a constructor for your class component if you don’t initialize a state or bind methods. render() The render() method is the only required method in the component lifecycle. It is called during the mounting and updating phases of the component lifecycle. The render() method returns any of the following: - React elements created using JSX - Fragments, which allow you to rerun multiple elements - Strings and numbers, which are rendered as text nodes in the DOM - React Portals, which provide a to way render children into a DOM node that exists outside the DOM hierarchy of the parent component React encourages that the render() method be pure, with no side effects — i.e., the method will always return the same output when the same input is passed. This means that you should not call the setState() function in the render() method. render() { // Don't call this.setState() here! return ( <p>Hello</p> ) } If you need to modify the state of the component, that can be done in componentDidMount() or in other lifecycles. Take a more nuanced look at React’s render method. Click here to review 8 approaches for conditional rendering. setState() The setState() function is used to update the state of a component and inform React that the component needs to be re-rendered with the updated state. Using setState() is the only way in which a component’s state can be updated. Whenever you make a call to update the state using setState(), it’s important to note that it is an asynchronous request and not an immediate command. This means that React might delay the state update to the component or batch-update multiple setState() requests for performance purposes. You can use the setState() function by either passing an object to setState() or by passing an updater function, explained in detail below. Update state with an object When you call the setState() function and pass an object, React shallow-merges the object you provide into the current state — i.e., if you have an existing state (like the code snippet below) and call this.setState({hasCountStarted: true}), only the value for hasCounterStarted will be modified. this.state = { counter: 0, hasCounterStarted: false } Update state with a function You can also update the state by passing an updater function to the setState() function. The updater function contains references to the current state and props at the time the state update is taking place. (state, props) => stateChange This method is particularly useful when you want to update the state with values that depend on the current state. See the example below. As mentioned above, because setState() is an asynchronous request, i.e., the update could be delayed, it might be tricky to read this.state and rely on its updated value. One way to ensure you’re getting the updated state is by using the callback method that the setState() function accepts. The callback method is an optional second parameter that will be executed once setState() is completed and the component is re-rendered. See the example below. You can further simplify state management with batched updates — see how. componentDidMount() componentDidMount() is called immediately — and only once — after the render() method is done, i.e., after the component has been mounted and inserted into the DOM tree. It is usually a good place to make external API calls, add event listeners, or set up subscriptions for the component. You can call the setState() function safely in this lifecycle. Calling the setState() function will lead to an extra rendering, but that’s fine because it will happen before the page is updated by the browser. If you set up event listeners or subscriptions in the componentDidMount() method, don’t forget to remove them in the componentWillUnmount() method. See the example below on how to fetch external data in componentDidMount() and updating the state with that data. componentDidUpdate() componentDidUpdate(prevProps, prevState, snapshot) is called in the updating phase of the component lifecycle. It is called after the re-rendering of a component and is commonly used for updating the DOM in response to changes to the prop or state. The method receives the following arguments: prevProps– the previous propvalue prevState– the previous statevalue snapshot– this value is only available if your component uses the getSnapshotBeforeUpdate()method in the component lifecycle The setState() function can also be called here as long as it’s wrapped in a condition to check for the state/prop changes from the previous state/prop. This is done to prevent an infinite loop of renders. A typical use case for the componentDidUpdate() method is when you need to make an API call only on the condition that the previous and current state are not the same. componentDidUpdate (prevProps, prevState) { if (prevState.user.id !== this.state.user.id) { // Make your API call here... } } The componentDidUpdate() method is always called after each render in the component lifecycle, except when the shouldComponentUpdate() method returns false. componentWillUnmount() componentWillUnmount() is called when a component is about to be destroyed and removed from the DOM. This method is useful for canceling network requests, removing event listeners, and cleaning up subscriptions that might have been set up in componentDidMount(). componentWillUnmount() { // Cancel network requests, event listeners or subscriptions here... clearInterval(this.timer); this.chatService.unsubscribe(); } Learn how the lifecycle methods above can integrate with the useEffect Hook. shouldComponentUpdate() shouldComponentUpdate(nextProps, nextState) is a component lifecycle method that’s used to determine whether a component should be updated/re-rendered. By default, it returns a value of true because components will re-render themselves whenever their props or state changes. However, you can make the method return false if you’d like the component to be re-rendered only when particular conditions are met. It receives two arguments, nextProps and nextState, which are useful for carrying out the check of whether the component should be updated by doing a comparison with the current prop and state values. shouldComponentUpdate(nextProps, nextState) { if (this.state.user.id === nextState.user.id) { return false; } } As an example, in the code block above, the component will not get updated/re-rendered if the current user.id in the state has the same value in nextState. This method gets called for every render except the initial render, and it’s also important to note that returning false doesn’t prevent inner child components from re-rendering when their state changes. componentDidCatch() React recently introduced a concept called error boundaries to handle errors in React components. Error boundaries are React components that help to catch JavaScript errors anywhere in their child component tree and then proceed to log the error information/display a fallback UI with the error information. This can be very helpful for debugging. You can think of error boundaries as the JavaScript catch block but only for React components. So how are error boundaries created? A class component becomes an error boundary if it has either of the componentDidCatch() or getDerivedStateFromError() methods (or both of them). The componentDidCatch(error, info) method is called after an error occurs in a child component. It receives the error and info arguments; error contains the error message that was thrown, and info is an object with a componentStack property containing the component stack trace information. class ErrorBoundary extends React.Component { constructor(props) { super(props); this.state = { hasError: false }; } componentDidCatch(error, info) { // Once this method is invoked, set the hasError state to be true this.setState({hasError: true }); } render() { if (this.state.hasError) { // You can render any custom fallback UI return <h1>Oh no! Something went wrong.</h1>; } // if not, just render the child component return this.props.children; } } Error boundary components are used by placing them at the top of your app’s component tree. For example: <ErrorBoundary> <SomeProvider> <App /> </SomeProvider> </ErrorBroundary> static getDerivedStateFromProps() static getDerivedStateFromProps(props, state) is a component lifecycle method called just before the render() method. It receives parameters of props and state and returns an object to update the state or a value of null. It is called whenever a component receives new props and is useful for cases in which the state depends on changes in props over time. static getDerivedStateFromProps(props, state) { if (props.user.id !== state.user.id) { return { isNewUser: true, user: props.user, }; } // Return null to indicate no change to the state. return null; } As an example, in the code block above, the getDerivedStateFromProps() method returns a new state only if the user.id from the props is not the same as the state, i.e., the state depends on the changes to the props. static getDerivedStateFromError() static getDerivedStateFromError(error) is a lifecycle method that is called after an error has been thrown by a descendant component. It receives the error from the component and then returns a value to update the state. Like the componentDidCatch() method, it is used in error boundary components and can be used in place of componentDidCatch(). class ErrorBoundary extends React.Component { constructor(props) { super(props); this.state = { hasError: false }; } static getDerivedStateFromError(error) { // Do something with the error here... console.error(error) // Update state so the next render will show the fallback UI. return { hasError: true }; } render() { if (this.state.hasError) { // You can render any custom fallback UI return <h1>Oh no! Something went wrong.</h1>; } // if not, just render the child component return this.props.children; } } getSnapshotBeforeUpdate() getSnapshotBeforeUpdate(prevProps, prevState) is a lifecycle method called right before the DOM is updated. It allows you to get information from the DOM just before it changes. The method should either return a snapshot value or null. The returned value is always passed as a parameter to componentDidUpdate(). A typical use case for getSnapshotBeforeUpdate() is when you’d like to remember the scroll position of a particular element on a page just before the DOM is updated. Let’s say there’s a comment section on a page with a defined height, and we’d like to automatically scroll down to the new comment once it’s added. The CodeSandbox below demonstrates how to do that using getSnapshotBeforeUpdate(). You can also implement getSnapshotBeforeUpdate() with Hooks — yes, really. Read it here. Class properties These are the properties available to a React class component. props props is short for properties. They are used to pass data between React components. In a React class component, props can be accessed by using this.props. state The state in a React component contains data specific to that component, which may change over time. It is user-defined and always a plain JavaScript object. The state in a React component can be accessed by using this.state, and the values of that state can be modified by using the setState() function. It’s important to note that you should never modify the state directly using this.state; rather, use setState() instead. defaultProps defaultProps is a property on the class component that allows you to set the default props for the class. The defaultProps value is used whenever expected props are not passed. class CustomButton extends React.Component { // ... } // Set default props outside the class component CustomButton.defaultProps = { color: 'blue' }; So if the <CustomButton /> component is used somewhere else without the props.color being specified, it’s going to fall back to what was defined as the defaultProps value. displayName The displayName property is a string that can be useful for debugging your React component. By default, it is the name of the function or class that defines the component. However, you can set it explicitly if you’d like to display a different name..
http://blog.logrocket.com/react-reference-guide-react-component/
CC-MAIN-2020-40
refinedweb
2,506
55.13
std::fill_n From cppreference.com < cpp | algorithm Revision as of 09:39, 19 September 2013 by 217.95.4.245 (Talk) Assigns the given value value to the first count elements in the range beginning at first if count > 0. Does nothing otherwise. Parameters Return value Complexity Exactly count assignments, for count > 0. Possible implementation Example The following code uses fill_n() to assign -1 to the first half of a vector of integers: Run this code #include <algorithm> #include <vector> #include <iostream> int main() { std::vector<int> v1{0, 1, 2, 3, 4, 5, 6, 7, 8, 9}; std::fill_n(v1.begin(), 5, -1); for (std::vector<int>::iterator it = v1.begin(); it != v1.end(); ++it) { std::cout << *it << " "; } std::cout << "\n"; } Output: -1 -1 -1 -1 -1 5 6 7 8 9
http://en.cppreference.com/mwiki/index.php?title=cpp/algorithm/fill_n&oldid=65560
CC-MAIN-2015-14
refinedweb
133
56.96
BMC CMDB 19.11 Tip To stay informed of changes to this space, place a watch on this page. Related topics Known and corrected issues Videos The following table lists topics that contain videos that supplement or replace the text-based documentation. These videos are valid for BMC CMDB version 19.11. CMDB Impact Simulator The simulation usually gets completed in seconds, but it may take longer depending on the complexity of the graph. By default, an impact simulation record is stored for 60 minutes. You can set the time for which the simulation is stored in memory by using the Time for SIM Cache Removal setting in CMDB core configurations. For more information, see Impact Simulator configuration settings. For information about running an impact simulation, see Running an impact simulation. The Graphwalk API that is used to retrieve the CIs and its relationships always returns the complete graph, which means that all the CIs in the tree between the causal CI and the top impacted CI are returned. The Recon ID is the same for a CI found by multiple discovery sources and populated in multiple datasets. Therefore, a search based solely on the Recon ID may return multiple instances. Whereas, the Instance ID uniquely identifies the CI across all datasets in BMC CMDB. You can fetch the Instance ID by calling the getInstances REST API. Its URL is GET /cmdb/v1.0/instances/{datasetId}/{namespace}/{className}. If the Recon ID is the qualifier, include the Recon ID along with the dataset in the qualification to return the Instance ID. For more information, see Endpoints in the REST API. CMDB Search The options available are Quick Search and Advanced Search. Quick Search is the basic search, which helps you search for CIs quickly, based on the parameters you specify. With Advanced Search, can use template queries or build your own custom query. Yes, you can search for CIs that are Marked As Deleted, by performing the following steps: - On the Search Widget, select Quick Search. - Select the dataset and class. - In the Attribute field, select MarkAsDelete. - In the AttributeValue field, enter 1, to show all CIs that have the MarkAsDelete attribute set as True. Yes, Wildcard % is supported in search. To search for a parameter type the string. By default, the wildcard character % is appended automatically to the end of the search string, while is the search is being run. The search returns all values starting with the search string you entered. For example, if you enter the string as Comp, the search returns values like Computer, Comp12, ComputerSystem, and so on. However, if you want to search for CIs containing a particular string, you can place % anywhere in the search string to use as a wildcard. For example, %system returns results like ComputerSystem, Computer_System, AdminSystem, and so on. CMDB Explorer This error is displayed if you click a CI's link to view its details, but the CI has already been deleted. However, you can still view the details of a deleted CI. An example of the Integrity page from which the link to the CI details can be accessed is shown in the following figure. When large datasets are displayed on CMDB Explorer, the canvas may look cluttered with too many CIs. You can group all the CIs that you do not want to view into a custom group. Then you can view only those CIs of your interest. A custom group can contain CIs of different classes. Perform the following steps to create a custom group. - Click Drag Mode to switch to Select Mode. - Hold your left mouse button on the canvas and draw your selection to include CIs that you want to group. - Click Group Selected CIs. The Custom group is created. - Click Drag Mode again. a. You cannot add an already existing group of CIs to a custom group. b. CIs that are children of another group cannot be added to a custom group. Yes, you can use Filters to include or exclude CIs from being displayed in CMDB Explorer. In BMC CMDB 9.1 and earlier versions, there were separate options to show or hide the selected classes. In the new CMDB UI, filtering is simplified and you can achieve the same results by checking only those classes and relationships that you want to view on the Explorer canvas. The Filtering options are as shown in the following figure. When you create a filter, you can also determine whether that filter can be used by others or only by you. If Is Global is ON, the filter is saved as a shared filter and is available to everyone. If Is Global is OFF , the filter is saved as a Personal filter and it is available only to you. No, you cannot collapse an expanded node. However, you can either group similar CIs or group CIs of your interest by creating a custom group. Use Drag Mode on CMDB Explorer to create a custom group. - Click Drag Mode to switch to Select Mode. - Hold your left mouse button on the canvas and draw your selection to include CIs that you want to group. - Click Group Selected CIs. The Custom group is created. - Click Drag Mode again. BMC CMDB Reconciliation Engine Reconciliation Engine is a component installed with the BMC CMDB. The main function of the Reconciliation Engine is the creation of a production dataset that contains accurate data from all available sources. Data in the production dataset is then used by consuming applications. For more information, see the following resources: Before you run a Reconciliation Engine (RE) job, perform the following: - Perform CDM metadata diagnostics - Run the CDMChecker utility to: - Detect invalid customization - Detect CDM corruption - For information about running this utility, see Using the CDMChecker tool. - For information about troubleshooting the errors, see Issues that might arise during the CDMChecker pre-upgrade check. - Perform data diagnostics - Run the cmdbdiag utility to check if data conforms with the model defined in BMC Atrium Configuration Management Database (BMC Atrium CMDB) and correct it. For information about using this utility, see Verifying your data model using the cmdbdiag program. - Perform Reconciliation Engine-specific data diagnostics - Run the Data Analyzer tool to check whether the data follows the RE-specific guidelines. BMC highly recommends you to run this tool before running any RE job. See Using CMDB Data Analyzer to Investigate CMDB Data Issues. - Perform manual checks for RE jobs - Make sure that all the classes and attributes used in standard or custom rules of all the RE jobs exist in your CDM data model. If a certain job is imported in your CDM, the job might fail. - Make sure that the data sets (source, target and precedence data set) referred in all the RE jobs also exist in the BMC_Dataset class. - Check whether indexes are set - During upgrade, for optimal RE performance, ensure that the out-of-the-box indexes set up on BMC.CORE:BMC_BaseElement and BMC.Core:BMC_BaseRelationship forms.Discrepancies, if any, are logged in the Atrium Installer log file. The following image displays an example message: Watch the video on Reconciliation Engine best practices: Configure recommended settings Ensure that the System-level settings and Job-level settings are configured as per the recommendations. See the following images for recommended settings: System-level settings Job-level settings For more information, see Configuring Reconciliation Engine system parameters . You should consider changing the default configuration settings in the following scenarios: - If you are dealing with large volume of data, increase the value of the Definition Check Interval field. - If you need specific log details, ensure that you change the log level to Debug. (Change the default log value only when you need and reset it back to default log level when you are done). Additionally, set the Maximum Job Log File Size value to 50000 KB. - If the RE Job is not responding for a long period, set Job Idle Time value to the desired time other than the default value (0). Standard identification and precedence rules simplify the process of creating reconciliation jobs. Standard rules use defaults for Identify and Merge activities and automate the creation of reconciliation jobs. The standard rules work with all classes in the Common Data Model (CDM) and BMC extensions. They identify each class using attributes that typically have unique values, and they merge based on precedence set for BMC datasets. Standard reconciliation jobs always use these standard rules, but you can also use standard rules when you create your own reconciliation job. You must use standard rules: - When configuration items are published into the CMDB or when the data is pulled from the CMDB. - To uniquely identify attributes of the discovered configuration items. A set of rules that are created as per your specific business requirements and are not part of the standard rules are termed as custom rules. You can use custom rules for CI classes. The following is an example scenario in which you might want to create custom rules: If you have extended the Common Data Model by adding a new attribute, which is specific to your requirement and which might not be populated by any discovery tool. You might want to define custom rules to identify your assets based on this attribute. For more information about using custom rules, see Creating an identification ruleset for reconciliation. Standard identification and precedence rules simplify the process of creating reconciliation jobs. A reconciliation job is provided with a standard rule sets of identification and precedence rules. These standard rules work with all classes in the Common Data Model (CDM) and BMC extensions. They identify each class using attributes that typically have unique values, and they merge based on precedences set for BMC datasets. Standard reconciliation jobs always use these standard rules, but you can also use them when you create a custom reconciliation job. For more information, see Standard identification and precedence rules for reconciliation jobs. Watch the following video: For more information, see Configuring standard precedence rules for reconciliation . - Do not set a Reconciliation Engine ID manually in any dataset. - Do not manually delete data, which is marked as soft delete. - Do not delete datasets manually. You must also need to clean up dataset references from Reconciliation Engine, Normalization Engine etc. For most reconciliation activities, you can specify a qualification set for the purpose of restricting the instances that participate in an activity. Qualification sets, which are reusable between activities, are qualification rules that each select certain attribute values. Any instance that matches at least one qualification in a set can participate in an activity that specifies the qualification set. For example, you might create a qualification set that selects instances that were discovered within the last 24 hours and have the domain "Frankfurt" if your company just opened a Frankfurt office and you are reconciling its discovered CIs for the first time. - For more information, see Using Qualification Sets in reconciliation activities. - For information about conventions to be used in qualification of Reconciliation Engine activity, see Qualification conventions for reconciliation activities. The CMDB Data Analyzer is a collection of tools that enable you to perform data analysis and also identify data inconsistencies in any CMDB dataset. For more information, see Using CMDB Data Analyzer to Investigate CMDB Data Issues . As a best practice, BMC recommends that you must reconcile only those CIs that have been normalized or those that do not require normalization. To reconcile the appropriate CIs, enable the Process Normalized CIs Only option in the Job Editor. To reconcile manually created CIs, perform the following steps: - Create a custom job. - Define the job for the classes in which you populate the data. - Identify the CIs. - Ensure that you adhere to the identification rules for the discovered jobs so that your CI matches with the discovered CIs. - Ensure that you populate at least those columns/attributes correctly that are defined in the identification rules (This helps in correct identification of the CI with the existing CI) - Set a lower precedence value for the manual dataset, but set a highest precedence to the manually edited columns/attributes of the dataset. CIs are partially merged when the merge process fails due to data issues. See the following flowchart that explains the checks to be performed if the CIs are partially merged: Run the CMDB Data Analyzer utility to find duplicate CIs. See Using CMDB Data Analyzer to Investigate CMDB Data Issues . Run Reconciliation Engine's (RE) Identification activity across multiple datasets. For example, open the BMC.CORE:BaseElement form and enter the RE ID value and search. If a CI having the same RE ID is found in multiple datasets, it means that the same CI exists in those datasets. Run the CMDB Data Analyzer utility to find orphan CIs. See Using CMDB Data Analyzer to Investigate CMDB Data Issues . The following table lists the common RE issues and provides access to relevant resources and workaround to these issues. To migrate the RE jobs, you must: - Export the RE job data using the Reconciliation Console. See Exporting reconciliation definitions. - Import the ARX data using BMC Remedy Data Import tool. See Enabling the Data Import utility in the BMC Remedy AR System online documentation. The Reconciliation Engine process terminates because: - The AR System server or database is busy and is unable to process the requests coming from RE. - The AR System server is either not responding or not reachable. In the above scenarios, RE awaits for a response from the AR System server and then terminates itself. For more information, see the Knowledge Article. Perform the following: - Check whether the job schedule is deleted or modified. - Check whether the Application Pending form has an entry for that job. - If there is no entry for that job, enable the AR API, SQL and Filter logs. - If the entry exists, check the AR Dispatcher log. - Check whether RE is registered with the AR Dispatcher. - Check the number of entries in the Application Pending form. - If there are many entries in the form and if you do not need them, delete those entries from the form. - Check whether the RE job entry is in the Queued state in the RE job run. If yes, delete the entry from the Queued state. - If you are using RE in a server group environment, ensure that the Reconciliation-Engine-Suspended flag is set to False (F). Note: You cannot set this value manually. Perform the following: - Enable server-side logs. Edit the log4j.properties file located in the <AR_Installation_Home>\midtier\WEB-INF\classes folder. Add the following lines: and - (BMC Atrium Core 7.6.04 or earlier): Edit the <AR_HOME>\ARSystem\midtier\WEB-INF\flex\services-config.xml file. (BMC Atrium Core 8.0 or later): Edit the <AR_HOME>\ARSystem\midtier\WEB-INF\AtriumWidget\flex\services-config.xml file. In the services-config.xml file, modify the logging level from Warn to Debug. For example, change the following line: <target class="flex.messaging.log.ConsoleTarget" level="WARN"> To <target class="flex.messaging.log.ConsoleTarget" level=“DEBUG"> - Enable client-side logs. For information about enabling client-side logs, see BMC Atrium Core Console client-side logging. See this knowledge article .
https://docs.bmc.com/docs/ac1911/home-896324739.html
CC-MAIN-2021-17
refinedweb
2,535
56.45
Long non-wrapping text if an older version is in Ubuntu archives Bug Description software-center trunk r2404, Ubuntu 11.10 beta 1. Go to <http:// 2. Click on the Ubuntu download link, and choose "Open in Ubuntu Software Center". What you see: ------------ An older version of "nautilus-dropbox" is available in your normal software channels. Only instal ------------ That is, two strings are concatenated into one ribbon, and they don't wrap. What you should see: ------------ Standalone software package ------------ A minimal improvement would be just to make the existing text wrap. Separately, we should remove the "An older version ... is available" text altogether. We should advertise newer versions, not older versions. (Or if we should advertise older versions, we should figure out why and how.) <https:/ Related branches the linked branch has the simple line-wrap fix. @mpt, there are a few strings in the softwarecenter/ @property def warning(self): # FIXME: use more concise warnings if deb_state == DebPackage. if deb_state == DebPackage. Could be of interest as far as rewording the warnings or designing an alternative dialog. This bug was fixed in the package software-center - 5.0.1 --------------- software-center (5.0.1) oneiric; urgency=low [ Michael Vogt ] * softwarecenter/ - Fix i18n bug in the error string for the reviews. This adds two new strings for a rare error message in the UI that was previously not translatable. Thanks to David Planella * softwarecenter/ - when adding a new database (e.g. on reinstall-previous purchases) trigger a "reopen" to ensure that the db docids are reinitialized * apt-xapian- - do not crash if a apt.Package. * softwarecenter/ - only show the frame with new apps if we actually have information about new applications (LP: #862382) [ Robert Roth ] * softwarecenter/ - fix crash in clear_model() (LP: #863233) [ Gary Lasker ] * debian/control: - add dependency on python- at startup (LP: #829067) * softwarecenter/ softwarecen softwarecen softwarecen - display the correct license type for commercial apps as specified via the software- [ Matthew McGowan ] * lp:~mmcg069/software-center/bug855666: - add missing linewrap (LP: #855666) * lp:~mmcg069/software-center/bug858639 : - fix crash when data can not be parsed from the remote reviews server LP: #858639 -- Michael Vogt <email address hidden> Wed, 05 Oct 2011 11:24:05 +0200 Status changed to 'Confirmed' because the bug affects multiple users.
https://bugs.launchpad.net/ubuntu/+source/software-center/+bug/855666
CC-MAIN-2019-18
refinedweb
374
55.54
On Tue, Sep 03, 2002 at 06:52:46PM +0200, Christopher Lenz wrote: > Hi Stefan, > > Stefan Heimann wrote: > [...] > >The idea is great. What I don't like is that the matcher has to deal > >with collection the information. I think its better to separate the > >collection of information and the matching algorithm. What about > > > >public interface Matcher { > > > > // .. > > > > /** > > * Returns the content handler that is used to collect > > * the information which is needed for matching. > > */ > > public ContentHandler getInformationCollector(); > >} > > > > That makes a lot of sense, although I'd just call the method > Matcher.getContentHandler(). ok. I don't know how much you have written, but with that content handler it should be possible the integrate the changes I've made in RulesBase and ExtendedBaseRules (to support pattern with namespaces) quickly. We could write two new classes that have the same matching semantic than RulesBase and ExtendedBaseRules, but with namespace support. Most of the time I spent on the patch was on rewritting the match methods, because when you are dealing with qualified names, its not possible to implement it with only with string matching. Bye, Stefan -- Stefan Heimann | Brandensteinstr. 5 | D-79110 Freiburg | -- To unsubscribe, e-mail: <mailto:commons-dev-unsubscribe@jakarta.apache.org> For additional commands, e-mail: <mailto:commons-dev-help@jakarta.apache.org>
http://mail-archives.us.apache.org/mod_mbox/commons-dev/200209.mbox/%3C20020903171642.GA6193@honk%3E
CC-MAIN-2019-35
refinedweb
213
56.55
Can the Albacore nuspec task resolve all dependencies automatically? Can the Albacore nuspec task resolve all required dependencies for a solution? When I have multiple projects with changing dependencies, it takes a lot of effort to update the rakefile. Could this be automated? desc 'create the nuget package' nuspec do |nuspec| nuspec.id = 'myprojectid' nuspec.version = '1.2.3' nuspec.authors = 'Jon Jones' nuspec.description = 'my-project is a collection of utilties' nuspec.title = 'my-project' nuspec.dependency <magic here> end A manual solution would be to go through the package files and solve it manually. Has anyone written anything automated? source to share I realize this is an old question, but seeing that it has no answer, it might help someone looking for the same thing. I am currently working on some Rake tasks to further automate the generation of nuspec files in the normal / offline way, so I will update this post later with the final solution. To answer the question though, here's a little ruby function that pulls dependencies from the packages.config file for a given project in a solution. def GetProjectDependencies(project) path = "#{File::dirname project.FilePath}/packages.config" packageDep = Array.new if File.exists? path packageConfigXml = File.read("#{File::dirname project.FilePath}/packages.config") doc = REXML::Document.new(packageConfigXml) doc.elements.each("packages/package") do |package| dep = Dependency.new dep.Name = package.attributes["id"] dep.Version = package.attributes["version"] packageDep << dep end end packageDep end And the Dependency class is used: class Dependency attr_accessor :Name, :Version def new(name, version) @Name = name @Version = version end end This method takes an instance of "project" and grabs the dependencies / versions from the package.config file for that project. As I said, I will post a more complete solution shortly, but this is a good starting point for everyone if they need it. EDIT: Sorry, it took me so long to post the final version of this, but here's a link to the gist containing the sample code I'm currently using for several projects. Basically, I wrap the data in the Project class and populate the dependencies on the package.config. As a bonus, it also adds dependencies on links between projects (parses the project file). There are classes / logic as well as an example nuspec task. source to share Of course, there is nothing in the Albacore project that is doing it right now. It would be interesting to see that Mitchell's solution is configured and possibly reversed. I'm going to move the code into the gist , open an " issue " and work on it from the side :) source to share
https://daily-blog.netlify.app/questions/1890509/index.html
CC-MAIN-2021-21
refinedweb
439
58.58
[ Usenet FAQs | Search | Web FAQs | Documents | RFC Index ] Search the FAQ Archives Search the FAQ Archives - From: Tom Cikoski <splinter@panix.com> Newsgroups: comp.protocols.snmp Subject: comp.protocols.snmp SNMP FAQ Part 1 of 2 Date: Wed, 2 Jul 2003 19:10:30 +0000 (UTC) Message-ID: <bdvan6$s6$1@reader1.panix.com> Reply-To: splinter@panix.com Summary: Introduction to SNMP & comp.protocols.snmp newsgroup Keywords: SNMP FAQ User-Agent: nn/6.6.5 Archive-name: snmp-faq/part1 Posting-Frequency: every few months or so Last-Modified: 2 Jul 2003 Version: 2.57 comp.protocols.snmp PART 1 of 2 FAQ - Frequently Asked Questions - FAQ Simple Network Management Protocol ---------------------------------- This 2-part document is provided as a service by and for the readers and droogs of Internet USENET news group comp.protocols.snmp and may be used for research and educational purposes only. Any commercial use of the text may be in violation of copyright laws under the terms of the Berne Convention. My lawyer can whup your lawyer. Anthology Edition Copyright 2002,2003 Thomas R. Cikoski, All Rights Reserved ------------------------------------------------------------ Please feel free to EMail corrections, enhancements, and/or additions to the Reply-To address, above. Your input will receive full credit in this FAQ unless you request otherwise. mailto:splinter@panix.com As a result of the abuses of EMail now taking place on the Internet, we have a policy of NOT providing the EMail address of individual contributors in these postings. We will continue to provide EMail addresses of commercial contributors unless requested not to. ------------------------------------------------------------- A NOTE ON WEB SITES AND URLS: THEY MAY BE OBSOLETE! Neither the contributors nor the editor of this FAQ are responsible for the stability or accuracy of any URL, Web site address, or EMail address listed herein. We take reasonable care to ensure that these data are transcribed correctly and are always open to correction. If, however, a particular URL disappears from the Web there is not much we can do about it. ------------------------------------------------------------- Please also visit our cousin newsgroup news://comp.dcom.net-management. New this month: -------------- > More of the usual stuff Note on host names and addresses: please email me with any changes to URLs, host names or IP addresses. The MIT host rtfm has an autoresponder which always replies to postings with an incorrect IP. It would be nice if every host had that, but they don't, so I need your assistance. SUBJECT: TABLE OF CONTENTS 1.00.00 FAQ PART 1 of 2: IN THIS DOCUMENT 1.01.00 --General 1.01.01 What is the purpose of this FAQ? 1.01.02 Where can I Obtain This FAQ? 1.01.03 Parlez-vous francais? 1.01.04 Why is SNMP like golf? 1.01.05 What is a droog anyway? 1.01.50 HELP ME! MY SNMP PRODUCT IS DUE NEXT WEEK! 1.01.99 This FAQ Stinks! 1.10.00 --General Questions about SNMP and SNMPv1 1.10.01 What is SNMP? 1.10.02 How do I develop and use SNMP technology? 1.10.04 How does the Manager know that its SET arrived? 1.10.10 How does an Agent know where to send a Trap? 1.10.12 Which community string does the agent return? 1.10.15 How can I remotely manage community strings? 1.10.17 What is the largest SNMP message? 1.10.30 Are there security problems with SNMP? 1.11.00 --RFC 1.11.01 What is an RFC? 1.11.02 Where can I get RFC text? 1.12.00 --SNMP Reference 1.12.01 What books are there which cover SNMP? 1.12.02 What periodicals are heavily oriented to SNMP? 1.12.03 What classes are available on the topic of SNMP? 1.12.04 What email discussion groups are available for SNMP? 1.12.05 What trade shows cater to SNMP? 1.12.06 What SNMP product User Groups are available? 1.12.07 Where can I find SNMP-related material on WWW? 1.12.08 What related mailing lists exist? 1.12.20 What related newsgroups exist? 1.12.21 Are there introductory materials? 1.13.00 --Miscellaneous 1.13.01 SNMP and Autodiscovery 1.13.02 SNMP Traps and NOTIFICATION-TYPE 1.13.03 SNMP and/versus The Web 1.13.04 SNMP and Java 1.13.05 SNMP and CORBA 1.13.06 SNMP and Visual Basic 1.13.07 SNMP and IPv6 1.13.10 SNMP and C# 1.13.12 SNMP and Perl 1.20.00 --General Questions about SNMPv2 1.20.01 What is SNMPv2? 1.20.02 What is SNMPv2*/SNMPv1+/SNMPv1.5? 1.20.03 What is SNMPv2c? 1.20.04 What the heck other SNMPv's are there? 1.22.00 --General Questions about SNMPv3 1.22.01 What is SNMP V3? 1.30.00 --RMON 1.30.01 What is RMON? 1.30.02 RMON Standardization Status 1.30.03 RMON Working Group. 1.30.04 Joining the RMON Working Group Mailing List 1.30.05 Historical RMON Records 1.30.06 RMON Documents 1.30.07 RMON2 1.40.00 --ISODE 1.40.01 What is ISODE? 1.40.02 Where can I get ISODE? 1.40.03 Is there an ISODE/SNMP mailing list? 1.50.00 --Using SNMP to Monitor or Manage 1.50.01 How do I calculate utilization using SNMP? 1.50.02 What are Appropriate Operating Thresholds? 1.50.03 Are MIBs available to monitor application traffic? 1.50.04 How can I make sense of the Interfaces Group? 1.50.10 When do I use GETBULK versus GETNEXT? 1.50.12 What free products can be used to monitor? 1.75.00 -- SNMP Engineering and Consulting 1.75.01 SNMP Engineering and Consulting Firms 2.00.00 FAQ PART 2 of 2: NOT IN THIS DOCUMENT 2.01.00 --CMIP 2.01.01 What is CMIP? 2.01.02 What books should I read about CMIP? 2.01.03 A CMISE/GDMO Mailing List 2.01.04 What is OMNIPoint? 2.02.00 --Other Network Management Protocols 2.02.01 What alternatives exist to SNMP? 2.10.00 --SNMP Software and Related Products 2.10.01 Where can I get Public Domain SNMP software? 2.11.01 Where can I get Proprietary SNMP software? 2.12.01 Where can I get SNMP Shareware? 2.13.01 Miscellaneous FTP and WWW Sources 2.14.01 What CMIP software is available? 2.15.01 SNMP and Windows NT/95/98 2.16.01 More About CMU SNMP Software 2.17.01 Miscellaneous SNMP-related Products 2.18.01 SNMP and OS/2 2.18.02 SNMP and SCO Unix 2.18.03 SNMP and Linux 2.18.04 SNMP and AS/400 2.20.01 SNMP++ 2.21.01 What is AgentX? 2.25.00 -- SNMP Engineering and Consulting 2.25.01 SNMP Engineering and Consulting Firms 2.30.00 --The SNMP MIB (Management Information Base) 2.30.01 What is a MIB? 2.30.02 What are MIB-I and MIB-II 2.30.03 How do I convert SNMP V1 to SNMP V2 MIBs? 2.30.04 How do I convert SNMP V2 to SNMP V1 MIBs? 2.30.05 What are enterprise MIBs? 2.30.06 Where can I get enterprise MIBs? 2.30.10 Can I mix SMIv1 and SMIv2 in one MIB? 2.31.01 MIB Compiler Topics 2.32.01 How can I get ______ from the _____ MIB? 2.35.01 How can I register an Enterprise MIB? 2.35.02 Where can I find Enterprise Number Assignments? 2.37.01 How Do I Create a Table Within a Table? 2.37.05 How Do I Reset MIB Counters via SNMP? 2.37.07 How can I change a published MIB? 2.38.01 How unique must MIB variable names be? 2.38.03 Explain MODULE-COMPLIANCE versus AGENT-CAPABILITIES 2.38.04 Which parts of my MIB are mandatory? 2.38.10 Can a CMIP MIB be converted to SNMP? 2.38.11 Can an SNMP MIB be converted to CMIP? 2.38.12 Can a table index value legally be zero? 2.38.14 Where can I find the _____ MIB? 2.38.20 How can I convert a MIB to XML Format? 2.38.22 What is the maximum number of entries in a table? 2.40.00 --SMI 2.40.01 What is the SMI? 2.40.02 What is SMIv2? 2.40.03 Table Indexing and SMI 2.40.04 Floating Point Numbers in SMI? 2.40.05 SMIv1 versus SMIv2? 2.45.00 --ASN.1 2.45.01 What is ASN.1? 2.45.02 Why is ASN.1 not definitive for SNMP? 2.45.05 Where can I find a free ASN.1 compiler? 2.50.00 --BER 2.50.01 How is the Integer value -1 encoded? 2.50.02 What is the Maximum Size of an SNMP Message? 2.50.05 Where can I find BER encoding rules? 2.60.00 -- Agent Behavior 2.60.01 Proper Response to empty VarBind in GetRequest? 2.60.02 Master Agent versus Proxy Agent 2.60.03 Proper Response to GET-NEXT on Last MIB Object? 2.60.10 How can I find the SNMP version of an Agent? 2.60.12 How should an agent respond to a broadcast request? 2.60.14 What does an Agent send in a trap? 2.98.00 Appendix A. Glossary 2.99.00 Appendix B. Acknowledgements & Credits 1.00.00 FAQ PART 1 of 2: 1.01.00 --General 1.01.01 SUBJECT: What is the purpose of this FAQ? This FAQ is to serve as a guide to the resources known to be available for helping you to understand SNMP, SNMPv2, and their related technologies. OSI/CMIP is touched on briefly as well because we're fair-minded folk. There is NO INTENT that this be a one-stop SNMP tutorial. There is NO INFERENCE that this is an authoritative or official document of any kind. What you see is what you get. You WILL need to read the books listed herein, maybe even some of the RFCs. You may wish to take a class as well. Just think of this as your "tourist guide book." 1.01.02 SUBJECT: Where Can I Obtain This FAQ? This FAQ is available on the WWW at: (both text and HTML formats are available) See also: You can also find the most recent Web posting via [formerly "dejanews"] and, last but not least, you can use your favorite search engine such as This FAQ is officially archived (as with all "licensed" FAQs) at rtfm.mit.edu [18.181.0.24] under /pub/usenet/news.answers as snmp-faq/part1 &/part2, or under /pub/usenet/comp.protocols.snmp as its own self (the only files in that directory). Use anonymous ftp to retrieve or send e-mail to mailto:mail-server@rtfm.mit.edu with "send usenet/news.answers/finding-sources" for instructions on FTP via e-mail. 1.01.03 SUBJECT: Parlez-vous francais? 1.01.03.01 > >Un petit conseil: Si tu postais en anglais, beaucoup plus de gens >pourraient t'aider... > Thomas Galley 1.01.03.02 Alternativement, si tu es vraiment fachez avec l'anglais ;-), poster (ou xposter avec fu2) sur fr.comp.reseaux.supervision. Steve Common 1.01.03.03 If you are in the Bayonne area and would like to forget SNMP for a few hours in a great little country hotel, try L'Auberge de Biaudos (**) RN 117 15mn from Bayonne 05 59 56 79 70 Tom Cikoski 1.01.03.04 SNMP-oriented Web Site hosted en France, avec quelques linques francaises. Pierrick Simier 1.01.04 SUBJECT: Why is SNMP like golf? > usually the fewer polls you take the better off you are > but you are sometimes lost in the woods > it helps to have a good set of tools in the bag > it helps to have good instruction > you need a few beers after a bad round 1.01.05 SUBJECT: What is a droog anyway? What's a droog? Eric Meyer <Sigh> It is sad to think that an entire new generation of SNMPers has arisen to push their elder brethern and sisteren, who have done such hard, essential pioneer work, out of the way, and are too young to have seen "Clockwork Orange". The label was actually applied to the readership of comp.protocols.snmp by a rather snide and vehement proponent of SMUX. I took it as a badge of honor. 1.01.50 SUBJECT: HELP ME! MY SNMP PRODUCT IS DUE NEXT WEEK! From time to time there appear posts in news:comp.protocols.snmp which bring a tear to the eye of the casual observer. They often have this form: "My boss told me I need to have the SNMP running on our new 100GB Muxiblaster for next week's first release. What is SNMP? Can I have it for Thursday?" Sometimes there come, in private email, messages to regulars of this newsgroup, often in this form: "Please to sending me all SNMP keywords now. Regards." Or: "Tell me [by email] how SNMP differs from TMN and CMIP." Oy! The "simple" in SNMP doesn't mean "trivial". It cannot be learned by flipping through a few emails or news posts. The "simple" in SNMP is only in contrast to protocols which are thought to be even more complex than SNMP. There is no magic solution to learning SNMP. All of us who have mastered the subject did so by 1)reading several books on the subject, 2)reading/playing with the sample code from CMU or NET-SNMP, 3)implementing several trial products over a period of months. If your boss expects SNMP miracles and will not listen to reason, either become a good liar or find a new job. Or, as David Perkins posted in recent response to a newbie: "It will take you at least 6 months or so of studying and usage to "comprehend SNMP very well". I suggest that you read a few books (more than one) on SNMP and RMON, since authors focus on different aspects of the subject area." You can find these resources listed in this FAQ and on several other Web sites devoted to SNMP. Good luck! 1.01.99 SUBJECT: This FAQ Stinks! 1.01.99.01 The material is out-of-date! A concerned reader writes: "The SNMP FAQ contains incorrect sometimes outdated information and it might therefore cause more questions than it answers. What is your policy with regard to corrections? It sometimes looks that you are just adding corrections and not removing the incorrect text. This makes the FAQ difficult to use and it keeps incorrect stuff around, which again causes confusion." "There is also an issue with relationship to other documents. For example, the SimpleTimes contains an up-to-date list of RFCs related to SNMP. The FAQ contains several more or less correct and outdated lists. I think it would be useful in cases like this to just refer to a `reliable' source instead of trying to include information which is not maintained." Editor's note: Our concerned reader is perceptive. We rely on the good will and support of our readers to notice omissions, commissions and deprecations in the FAQ, although we do try and update RFC lists from time to time. We will act on any notice from you that something ought to be changed. Please send me your corrections. URLs change often and we don't have the time to check them routinely. We also publish the large personal collections of several contributors, some of which offer conflicting details. That's the way it is with tribal documents such as this. If any error in this FAQ causes you to waste or loose precious time then you probably expected too much to begin with. Please use it with our good wishes and this disclaimer. 1.01.99.02 In what language should you post? The following exchange once took place ... A> Ich benvtige Wissen |ber SNMP und MIB und MIB II. Bin A> allerdings kein Informatik- oder A> Nachrichtentechnikstudent. Wenn einer von euch helfen kann, A> dann wdre ich sehr dankbar. B> [This is an international newsgroup, so the common language should be english.] While B has a point, we support the right of posters to ask questions in any language. Your best chance of receiving an answer, of course, is if you ask in English. For online translation, try. 1.10.00 --General Questions about SNMP and SNMPv1 1.10.01 SUBJECT: What is SNMP? The current state of the art [Ed Note: Jan 2003] is well summarized in every recent RFC which contains a MIB module:]. Juergen Schoenwaelder 1.10.01.01 The Simple Network Management Protocol is a protocol for Internet network management services. It is formally specified in a series of related RFC documents. (Some of these RFCs are in "historic" or "informational" status) RFC 1067 - A Simple Network Management Protocol (H)- Management Information Base Network Management of TCP/IP based internets RFC 1157 - A Simple Network Managment Protocol RFC 1158 - Management Information Base Network Management of TCP/IP based internets: MIB-II RFC 1161 (H)- SNMP over OSI RFC 1187 - Bulk Table Retrieval with the SNMP RFC 1212 - Concise MIB Definitions RFC 1213 - Management Information Base for Network Management of TCP/IP-based internets: MIB-II RFC 1215 (I)- A Convention for Defining Traps for use with the SNMP RFC 1224 - Techniques for Managing Asynchronously-Generated Alerts RFC 1270 (I)- SNMP Communication Services RFC 1303 (I)- A Convention for Describing SNMP-based Agents RFC 1470 (I)- A Network Management Tool Catalog RFC 1298 - SNMP over IPX (obsolete, see RFC 1420) RFC 1418 - SNMP over OSI RFC 1419 - SNMP over AppleTalk RFC 1420 - SNMP over IPX (replaces RFC 1298) [EDITOR'S NOTE: RFCs for SNMPv2 and SNMPv3 are under their respective headings.] SNMPv1 is now historic, and SNMPv3 is now standard and is described by RFCs 3410-3418 (note: 3410 is informational). Michael Kirkham 1.10.01.02 "Just a reminder that if you are new to SNMP (or know someone who is) you might want to check out my Web page at:" Tyler Vallillee Tyler Vallillee has a live SNMP site at I assume this replaced the old link in the FAQ. The page calls a missing javascript file so it gives a 404 instead of loading. Switch off Javascript and the page loads OK. I've mailed Tyler about this so hopefully it will be fixed soon. John Bradshaw 1.10.01.03 You can find the "Intro to SNMP" courtesy of the WayBack machine at: You can probably also find other long-lost URLs there, too. Phil Hord The 'Overview of SNMP' document can currently be located at - I've no idea whether this link is reliable I'm afraid. (Also referenced in FAQs 1.12.07.11 and 1.12.21.01) Bruce Coker The URL for the SNMP overview document given in FAQ section 1.10.01.03 is still active but the document is apparently no longer available from that site, or from the alternative site that the page now refers you to. I did find a copy of the document from May 2001 on the Wayback machine: Alan Levy 1.10.01.04 Concord Communications offers a free network management reference guide that includes the information you are looking for. View online or download it at Rob Tandean 1.10.02 SUBJECT: How do I develop and use SNMP technology? To deploy and use SNMP technology for management involves many steps. If you are a device vendor you need to: 1) decide what aspects of your products you want to be managable via SNMP 2) select the standard MIBs to implement (and the objects/traps within them to implement) 3) create proprietary MIB modules containing objects and traps for the management areas not covered by standard MIBs 4) Select an SNMP agent toolkit vendor 5) put instrumentation in your devices 6) following the directions from the SNMP toolkit vendor, create access routines (which some SNMP toolkit vendors call method routines) to get and set the values of from your instrumentation 7) Select an SNMP agent test package, and test your agent 8) Select an SNMP management API library 9) Write SNMP applications to manage your device If you are an end-user, you need to: 1) determine what SNMP management capability that you have in your current devices 2) determine the SNMP management capability that is available in similar devices from other vendors (in case you need to upgrade or change) 3) determine what you want to accomplish with management 4) find off the self management packages that provide the management functions you want 5) possibly upgrade or replace your current devices with one that are managable with the package you chose. 6) implement additional management functions using scripting 7) implement additional management functions using custom written code using a purchased off the self SNMP management API library 8) configure your agents and applications to talk to each other. David T. Perkins 1.10.04 SUBJECT: How does the Manager know that its SET arrived? Praveen Dulam queried: > SNMP is based on UDP. So the SNMP is not a reliable protocol. Let's say > you did the SNMP SET operation. How do we gurantee that the SNMP SET > packet reached the Agent. > > Do we need to write some application level programming to do this. Yes, the management application and the agent need to work cooperatively to take care of reliability. Note that when an agent acts upon a SET request it will send a response packet that is either a positive or negative acknowledgment (the error code tells which). So the main problem is what to do at the management station when you time out and get no response at all. If the SET operation is idempotent (i.e. a second application on top of a previous one does not change the results) then you can just re-send the SET. That would be the case if you are just storing values. But not all SET operations work that way: there may be side effects when an object is written. Mike Heard 1.10.10 SUBJECT: How does an Agent know where to send a Trap? I've noticed on the comp.protocols.snmp mailing list that the question "how does an agent know where to send traps" (short answer is "its implementation specific", long answer is, indeed, long, but has been well answered numerous times) is, indeed, a Frequently Asked Question. Any chance of adding it to your quite impressive FAQ posting? T. Max Devlin [Editor's Note: What T. Max is getting at here is that the trap destination IP address is not represented in MIB-II, so how can the agent know what it is? The answer is that most agents require an external configuration process to take place before they can be put into service, and that is how the IP address, among other interesting parameters, is set in the agent. How this setup is actually done varies among agent developers.] 1.10.12 SUBJECT: Which community string does the agent return? Holger wrote: > which community string is used in a response to a set-request and which is > used in a response to a get-request? The packet is a turn-around document with respect to the community string. The community string in the response is typically whatever it was in the request. [snip] if you were to use the community-string field for passing different information from the agent back to the manager then it would not be standard SNMP. Since you referred to "the" read/write community string let me point out that there can be multiple read and multiple write communities (although your agent/config file may constrain that in some way). You can use them to provide views of different portions of the mib for instance (but there is no v1/v2c standard for this mapping). The community string is a poor man's password scheme because it is sent unencrypted in v1/v2c packets and tries to to do the job of authentication, privacy, and views. V3 does away with it. Jim Jones [Editor's Note: I would have said "SNMPv3 offers more and better options for security and privacy in SNMP messages."] 1.10.15 SUBJECT: How can I remotely manage community strings? Paul Nye wrote: >I'm looking for a utility that enables me to change community names on >multiple devices from a single management console. > >For example, provided I have the correct SU password, I would like to be >able to identify a subnet or IP address range and the utility would query >any SNMP aware device in the range, test whether the SU/community names are >the same and if so, replace the SU password with one of my choice. Because the methodology for setting community strings is not standardized, every type of device/agent version may have a different mechanism for handling this chore. Therefore, there are no "single console" products for setting community strings. For this to be feasible, you would have to be able to differentiate every agent type, and know how that particular vendor/system/agent handles it. T. Max Devlin 1.10.17 SUBJECT: What is the largest SNMP message? George Chandy wrote: > Is there a limit to the size of messages in SNMP ? Every implementation must at least accept messages of 484 octets in size (RFC 1906). That is the lower limit you can always bet on. The upper limit basically depends on the two SNMP engines that communicate. In most cases, people try to avoid IP fragmentation as it reduces the likelihood that the message reaches its destination. [snip] Note that the only hard limit in the SNMP protocol is the number of varbinds you can have in a PDU. And that limit is 2147483647 - quite a big number if you ask me. Juergen Schoenwaelder Remember that the definitions in a MIB module are architectual, and not implementation limits. Note that the OCTET STRING data type does have a limit of 65535 octets, which will not fit in a UDP packet. Thus, there are limits imposed by the protocol and transport in addition to implementation limits of the SNMP agent or managed system. David T. Perkins 1.10.30 SUBJECT: Are there security problems with SNMP? 1.10.30.01 See 1.10.30.02 Recently there was a CERT advisory having to do with SNMPv1. The problem was that the code to process SNMP messages when it encountered malformed BER encoding, unsupported ASN.1 tags, or ASN.1 that didn't follow the format of messages did not "do the right thing". The code had programming errors which in some cases caused the code to crash the system. What the SNMP message processing code was suppose to do is increment the counter snmpInASNParseErrs and drop the message. David Perkins 1.11.00 --RFC 1.11.01 SUBJECT: What is an RFC? The letters stand for the title Request For Comment, which is the initial stage in the process for creating Internet standards. RFCs go through several stages of review and refinement before they are adopted as final by the Internet community. 1.11.02 SUBJECT: Where can I get RFC text? 1.11.02.01 On WWW: ------- Ohio State University has an extensive set of RFCs in html (browser) format. To see RFC 9898 (for example), use the following URL: ^^^^^^^ Put actual RFC number here. Simply change the RFC number in the above URL to access the correct file for your purpose. Also, for an RFC "Home Page" see 1.11.02.02 RFC-Info Simplified Help submitted by: Mark Wallace ----- Use RFC-Info by sending an information about other ways to get RFCs, FYIs, STDs, or IMRs. HELP: ways_to_get_rfcs HELP: ways_to_get_fyis HELP: ways_to_get_stds HELP: ways_to_get_imrs 5. To get help about using RFC-Info: HELP: help or HELP: topics =============================================================== 1.11.02.03 Other possible sites: merit.edu nic.ddn.mil - note: avoid using this one, it's SLOW nis.nsf.net src.doc.ic.ac.uk venera.isi.edu munnari.oz.au \___ Pacific Rim Sites use these archie.au / 1.11.02.04 Use anonymous ftp & look for rfc or pub/rfc directories above. 1.11.02.05 You can get a CD ROM with all the RFCs as of the date of the CD ROM > Info Magic > 11950 N. Highway 89 > Flagstaff, AZ > (800) 800-6613 > (520) 526-9565 > > > > Title is 'International & Domestic Standards' ($30) Mark Aubrey 1.11.02.06 In Germany and Europe, try Christian Seyb: "I also offer a CDROM with all RFCs as of the date of beginning of Aug 93." The following CDROM is available for DM 98,-- (app. $60) and contains the following software: - Linux SLS V1.03, Kernel 0.99.11 and utilities for Linux - 386BSD version 0.1 including patch-kit 0.2.4 - NetBSD version 0.8 - Utilities for 386BSD and NetBSD - The Berkely Second Networking Distribution - GNU software (gcc 2.4.5, emacs 19.17, gmake 3.68, etc) - X11R5 up to patch 25 and lots of Contributed Software - TeX version 3.14 - The Internet RFCs up to RFC1493 - News, mail and mailbox software and many utilities for Unix Issue: Aug 1993 Contact: CDROM Versand Helga Seyb Fuchsweg 86 Tel: +49-8106-302210 85598 Baldham Fax: +49-8106-302310 Germany Bbs/Fax: +49-8106-34593 Christian Seyb | | Mailbox/uucp/Fax: 08106-34593 1.11.02.07 Aloha and greetings from Cologne, Germany. Maybe it is interested for you, that the Technical University of Cologne has a good script which translate the RFCs into HTML-RFCs. So you can link between the RFCs and you can get online. You can try it by using the URL 1.11.02.08 General RFC Information Praveen 1.11.02.09 2/3 down the page is a complete list of SNMP RFCs 1.11.02.10 maintains up-to-date RFC list for network management. 1.11.02.11 Try here: I grab a copy of the RFC Index every once in a while and do my searches on that. You can get the index here (it's about 540K): Michael Fuhr 1.12.00 --SNMP Reference 1.12.01 SUBJECT: What books are there which cover SNMP? You may wish to visit for a preset search on Barnes & Noble dot Com for SNMP. A small part of each sale goes toward supporting the SNMP FAQ. 1.12.01.00 SNMP Books from Barnes & Noble dot com 1. SNMP, SNMPv2, SNMPv3, and RMON 1 and 2 William Stallings / Hardcover / Addison Wesley Longman, Inc. / December 1998 ISBN: 0201485346 To order from Barnes & Noble: 2. Understanding SNMP MIBs: With Cdrom Evan McGinnis,With David Perkins / Hardcover / Prentice Hall / September 1996 ISBN: 0134377087 To order from Barnes & Noble: 3. Windows NT SNMP: Simple Network Management Protocol James D. Murray,Deborah Russell (Editor) / Paperback / O'Reilly & Associates, Incorporated / February 1998 ISBN: 1565923383 To order from Barnes & Noble: 4. Managing Internetworks with SNMP with Cdrom Mark A. Miller,P. E. Miller / Paperback / IDG Books Worldwide / July 1999 ISBN: 076457518X To order from Barnes & Noble: 5. A Practical Guide to SNMPv3 and Network Management Dave Zeltserman / Hardcover / Prentice Hall / May 1999 ISBN: 0130214531 To order from Barnes & Noble: 6. Troubleshooting with SNMP & Analyzing MIBs Louis Steinberg / Paperback / McGraw-Hill Companies, The / June 2000 ISBN: 0072124857 To order from Barnes & Noble: 7. SNMP Network Management Paul Simoneau / Paperback / McGraw-Hill Companies, The / January 1999 ISBN: 0079130755 To order from Barnes & Noble: 8. Snmp++: An Object-Oriented Approach to Developing Network Management Applications Peter E. Mellquist,Hewlett-Packard Company / Paperback / Prentice Hall / July 1997 ISBN: 0132646072 To order from Barnes & Noble: 9. LAN Management with SNMP and RMON Gilbert Held / Paperback / Wiley, John & Sons, Incorporated / August 1996 ISBN: 0471147362 To order from Barnes & Noble: 10. SNMP Application Developers Manual Robert L. Townsend / Hardcover / Wiley, John & Sons, Incorporated / December 1997 ISBN: 0471286400 To order from Barnes & Noble: 11. Total SNMP: Exploring the Simple Network Management Protocol Sean J. Harnedy,Sean J. Harnedy / Paperback / Prentice Hall / June 1997 ISBN: 0136469949 To order from Barnes & Noble: 12. How to Manage Your Network Using SNMP: The Networking Management Practicum Marshall T. Rose,Keith McCloghrie / Paperback / Prentice Hall / September 1994 ISBN: 0131415174 To order from Barnes & Noble: 13. RMON: Remote Monitoring of SNMP-Managed LANs David T. Perkins / Hardcover / Prentice Hall / September 1998 ISBN: 0130961639 To order from Barnes & Noble: 14. SNMP V3 Survival Guide: Practical Strategies for Integrated Network Management Rob Frye,Jon Saperia / Hardcover / Wiley, John & Sons, Incorporated / January 1999 ISBN: 0471356468 To order from Barnes & Noble: 15. SNMP-Based ATM Network Management Heng Pan / Hardcover / Artech House, Incorporated / September 1998 ISBN: 0890069832 To order from Barnes & Noble: 16. SNMP: Simple Network Management Protocol: Theory and Practice, Versions 1 and 2: Theory and Practice, Versions 1 and 2 Mathias Hein,David Griffiths (Editor) / Paperback / Itcp / May 1995 ISBN: 1850321396 To order from Barnes & Noble: For a list of other books which may or may not be in print, go to 1.12.02 SUBJECT: What periodicals are heavily oriented to SNMP? 1.12.02.01 One bi-monthly newsletter is "SIMPLE TIMES". You can subscribe via email at mailto:/st-subscriptions@simple-times.org Use HELP on the Subject line for details. Also try For back issues of Simple Times, try 1.12.02.02 ConneXions, The Interoperability Report 480 San Antonio Road, Suite 100 Mountain View, CA 94040 Ph: 415-941-3399 Fx: 415-949-1779 1.12.03 SUBJECT: What classes are available on the topic of SNMP? 1.12.03.01 Softbank Forums 303 Vintage Park Drive Foster City, CA 94404 415-578-6986 EMail: onsite@interop.com 1.12.03.02 Network World Technical Seminars Ph: 800-643-4668 (direct: 508-820-7493) Fx: 800-756-9430 [Fax back line, ask for document 55] 1.12.03.03 Learning Tree International 1805 Library St Reston, VA 22090-9919 800-843-8733 or 703-709-6405 1.12.03.04 American Research Group, Inc. PO Box 1039 Cary, NC 27512 919-380-0097 1.12.03.05 Chateau Systems, Inc SNMP Training & Development 360 862-1154 Larry R. Walsh 1.12.04 SUBJECT: What email discussion groups are available for SNMP? 1.12.04.01 SUBJECT: Mailing lists for SNMPv1 "This mailing list is currently being managed with ListProcessor, v6.0c." Updates to be made include the request address. It should be: listproc@lists.psi.com The subject line is not looked at. The body of the message should contain: subscribe <list> <your email address> For the snmp list, subscribe to the list by sending a message to: listproc@lists.psi.com with a message body of: subscribe snmp <emailaccount>@<mailhostname.domain>" George Smith It appears the new valid snmpv1 mailing list address is snmp-request@lists.psi.com. However, when I tried to subscribe to snmpv2 mailing list, my email was simply not recieved by anyone. Paul Ledbetter 1.12.04.02 SUBJECT: Mailing lists for SNMPv2 "For the snmpv2 list, subscribe to the list by sending a message to: snmpv2-request@tis.com with a message body of: subscribe snmpv2 <emailaccount>@<mailhostname.domain>" George Smith [Editor's Note: Out of action? See above topic] 1.12.05 SUBJECT: What trade shows cater to SNMP? These days nearly every networking trade show in the US, and many outside the US, covers the SNMP market. The "big name" in internetworking is (their text): "NetWorld+Interop (the definitive networking event) Online registration at Phone registration and customer service: 800-962-6513 and 650-372-7079 Mail registration NetWorld+Interop c/o ZD Events PO Box 45295 San Francisco CA 94145-0295" 1.12.06 SUBJECT: What SNMP Product User Groups Are There? 1.12.06.01 HP OPENVIEW: For owners of a run time license to HP OpenView, there is the the OpenView Forum (a yearly fee is charged). OpenView users should be directed to the OpenView Forum at their web site: "You might also want to include a pointer/reference somewhere on your site for Summit Online. It's a great resource (check it out). The URL is" Rick Sturm There is an email list for the ovforum. It is very active (20-40 messages per day). to submit questions or responses: ovforum.ovforum.org To Subscribe: I think it's majordomo@ovforum.org. If you try to subscribe to ovforum.ovforum.org it will respond with subscription instructions. Matt Dougherty 1.12.06.02 SUNNET MANAGER (revised 3/95): If you wish to subscribe to snm-people, send a message to listproc@zippy.Telcom.Arizona.EDU with no subject, containing only the words: subscribe snm-people "Kent F Enders" [^^^^^^^^^^^^^^^^^^^] [Editor's note: we assume this should be your name here!] If you wish to unsubscribe from snm-people, send the message: unsubscribe snm-people For more information on using listproc, send the message: help This list is devoted to the issues revolving around the use of the SunNet Manager Software package. An anonymous FTP area is set up on Zippy.Telcom.Arizona.EDU as For those users that do not have access to ftp directly, zippy also supports ftps by mail. If you want to try it out send an email message with the word `help' in the body of the message for some instructions. Send that email message to ftpmail@zippy.telcom.arizona.edu. An archive of the mail messages sent to the list subscribers is maintained as well. To get an index of these messages send a message to listproc@zippy.telcom.arizona.edu with a single line message of: INDEX SNM-PEOPLE To remove your name from the mailing list send a one line mail message to listproc@zippy.telcom.arizona.edu. The message should contain the line: UNSUBSCRIBE SNM-PEOPLE To receive a list of the commands for the listproc send a message to listproc@zippy.telcom.arizona.edu with a message of: HELP To send a message to the list send mail to mailto:snm-people@zippy.telcom.arizona.edu 1.12.06.03 IBM NetView There is a NetView User's mailing list (not affiliated or run by Tivoli) that is a great place to learn about NetView and ask questions. Quoting from the nv-l instructions: To subscribe to the NV-L list, send mail to LISTSERV@UCSBVM.UCSB.EDU (not to NV-L nor NV-L-request), with the single line in the body of the note: SUBSCRIBE NV-L firstname lastname This list is for the discussion of NetView and all related products, platforms, usage questions, bugs, and for the dissemination of announcements and updates by members of the NetView Association. Vendors are welcome to post short announcements of products and/or services. You may want to visit the Tivoli NetView web page at: Also, the IBM NetView "red books" are a good practical source of information on NetView. Try or Brett Coley 1.12.07 SUBJECT: Where can I find SNMP-related material on WWW? [from comp.dcom.net-management...] ---------------- 1.12.07.01 it's best if you check out the following www page: it's devoted to network management and contains an excellent overview and links to all the different organisations and commitees: Andreas Weder 1.12.07.02 Re: HNMS: Re: The tkined & scotty network management system: 1.12.07.03 [Deleted] 1.12.07.04 Commercial SNMP Software (See SNMP FAQ Part 2): 1.12.07.05 [Deleted] 1.12.07.06 THIS SPACE WAS FORMERLY OCCUPIED BY A HUGE BUT UNMAINTAINED LIST OF URLS SUBMITTED IN 1996 BY BRUCE BARNETT. IT HAS BEEN REMOVED TO CONSERVE SPACE AND SINCE SO MANY OF ITS LINKS HAD BECOME OBSOLETE. TO SEE IT FOR POSSIBLE VALUE, GO TO 1.12.07.07 eg3.com is a free resource, serving the needs of designers in board-level, embedded, dsp, and realtime. We already link to Simple Network Management Protocol, as an important resource for the engineer in our free listings. Judy Perry 1.12.07.08 1.12.07.09 -- based in France & well-maintained 1.12.07.10 -- from Germany & well-maintained 1.12.07.11 Good overview of net management generalities, context into which snmp fits: Good snmp intro for tech guys (like me). I wouldn't want to talk about snmp before knowing about half this stuff. Ignore chapter 4, though, as it's basically hype for their software. But eveything else is not wasted reading. The SNMP FAQ is very outdated, I feel. Better is the UCD-SNMP FAQ that comes with the linux software! Erik Kruus 1.12.07.12 1.12.07.13 I would like to announce that a forum for users of Sniffer Technologies products is open at. Jim Moore 1.12.08 SUBJECT: What related mailing lists exist? 1.12.08.01 J. Lindsay wrote: "I have started a mailing list for those interested in web-based network and systems management. To subscribe send email to mailto:web-manage-request@qds.com with an email body of "subscribe web-manage <your email address here>" The most applicable usenet news group is news://comp.dcom.net-management. TO UNSUBSCRIBE: If you send an "unsubscribe me" message to the list itself it is almost certain your mail box will overflow with people flaming you. The list is open and unmoderated. All requests should go to: mailto:web-manage-request@qds.com 1.12.20 What related newsgroups exist? 1.12.20.01 Please also visit our cousin newsgroup news://comp.dcom.net-management. 1.12.20.02 There's a discussion group on delphi concerning Enterprise Management. The areas covered are CA Unicenter, HP OpenView, Tivoli, Platinum Tech, Enterprise Management,Trade Shows, EM User Groups, Networking Jobs, Industry Discussion, and General Discussion. Again, this is focusing on Enterprise Management. There are over 423 members to this forum as of 11/11/98. You can visit this site at.... Christopher Smiga 1.12.21 SUBJECT: Are there introductory materials? 1.12.21.01 Look for a document called "ACE-SNMP An Introductory Overview of SNMP" at, I've found it very easy to read and understand and a needed step before getting at RFCs. Alessandro Scotti 1.12.21.02 "SNMP for Dummies" at: Is also good startup reading. John J. Miller 1.13.00 --Miscellaneous 1.13.01 SUBJECT: SNMP and Autodiscovery 1.13.01.01 "Automated topology discovery is a hard problem due to the diversity of deployed systems and the wide distribution of resource information. I will briefly mention some reasons why a ping/traceroute based approach will not work : subnetting, tunneling, firewalls, virtual LANS. Your network topology discovery tool would have to extract more information like subnet masks, etc and use heuristics for "guessing" the real topology. I was a teaching assistant for the computer networks course offered in the spring at Columbia, and assigned the third class project on network topology discovery. You may want to refer to the project resources WWW page at the URL="" Alexander V. Konstantinou" 1.13.01.02 "[...]these are some of the methods that I had used 1. SNMP Broadcast on your local net; all SNMP agents respond 2. Listening for RIP, and OSPF ports, you'll get info on the routers around, and consequently the different sub-networks connected by this router. If you need a better discovery you could listen on both IP and IPX ports. 3. ICMP Router interface discovery; this will again give you information on all the router interfaces (sub-nets) 4. Once you know a sub-net and it's mask (I'm speaking about IP nets) you could issue an ICMP-echo spray to all the possible IP addresses in that range, the one's who are alive will respond. But this has to be fine tuned so that you do not swamp the network with disocvery packets. In case of IPX you issue an IPX diagnostic message spray (the counterpart of ICMP in Novell networks). 5. You could walk the routing tables (MIB2) and get information about other routers and sub-nets. You could figure out the type of the sub-net (MAC layer) by looking up "ifType" for each of the router interfaces. NOTE: The typical problems you would face is handling unnumbered router ports, and proxy-ARP issues." Mohit Tendolkar 1.13.01.03 Check out this paper: Daniel Secci 1.13.02 SUBJECT: SNMP Traps and NOTIFICATION-TYPE 1.13.02.01 - Traps I am relatively ignorant about SNMP. However, I have spent a reasonable amount of effort investigating agents, managers, the technology, and I have read most of the important RFCs. There are a bunch of related but simple, practical questions to which I cannot get a straight answer: Are SNMP traps useful in the real world? Can you depend on traps being sent across networks? Do agents repeat traps? How do you select a polling interval if there are traps you consider very important? RFC 1215 (the one with the TRAP-TYPE macro) says traps are a bad idea (well, sort of). RFC 1224 seems to describe a method to acknowledge SNMP traps and throttle them? What's the real feeling...you know, in practice? The real state of the art? Shyamal Prasad Traps are very useful to us. They let us know when a router link goes down, when network performance is degrading, when a power failure has occurred, etc. - just to name a few. You don't poll for traps - the agent just sends the traps to the network management station(s) you tell it to send them to. Now you can program the network management station to take automatic action if you so desire. For example, if one of our ethernet concentrators sends us traps on a misbehaving port we automatically do some checking and if it is a situation that could potentially take our whole segment down we automatically partition the port off of the network. I'm sure this has saved numerous network outages. Yes, agents may send repetitive traps. The way you throttle or deal with them depends on the software you use on your network management station. All that said - you cannot rely on traps alone. For example, if I die - I cannot pick up the phone and tell someone that "I am dead". Neither can a SNMP agent. Therefore it is good to poll the agents periodically just to see if they are alive and well. Blaine Owens 1.13.02.02 - NOTIFICATION-TYPE The current terminology in use in SNMP is the following: The NOTIFICATION-TYPE construct is used to define events or conditions of interest in a managed system. (In the earlier, but now obsolete, version of the SMI, the TRAP-TYPE construct was used.) SNMPv1 protocol contains the TRAP message type that is sent when an event or condition defined by a NOTIFICATION-TYPE construct occurs. SNMPv2c and SNMPv3 protocols contain the v2TRAP and INFORM message types that are sent when an event or condition defined by a NOTIFICATION-TYPE construct occurs. A v2TRAP message is not confirmed, and an INFORM message is confirmed (that is, a response message is sent back). There is no such thing as an alarm in SNMP. David T. Perkins 1.13.02.03 - Enterprise versus Generic Traps There were 6 defined traps [SNMPv1] that were considered to be common that could be generally useful for most/many SNMP agents [perhaps some more important than others]. There was also the need to let agent/MIB designers implement the idea of traps that were specific to their hardware/software/management needs. In the v1 packet there are 2 fields associated with these, the one for generic traps would be given a value of 0..5 to identify which of the generic traps the packet was related to, or 6, in which case the other [enterprise specifc] field was used to carry the information about what trap was being triggered. Plus there are was another OID field in the v1 trap packet that the manager application would get to identify the [enteprise specific] trap since different agents on different types of hardware would likely use the same values. The v1 approach was not great. With v2 packets this changes somewhat. SMIv2 MIBs traps (NOTIFICATIONS) are not identified as some integer value but rather as a node in the tree. The 6 generic traps were specified as 6 children of a parent node down under a new SNMPv2 subtree in the world-wide tree specification (with node values 1..6, not 0..5). There is no longer a special trap packet format for v2... the old v1 special fields are now v2 varbinds in a standard v2 response packet. Jim Jones 1.13.02.04 -- SNMPv1 Traps versus SNMPv2/v3 Notifications In the SNMPv1 protocol, there is a single type of operation to send an unsolicited message from an agent to a manager, which is a [v1]Trap. SMIv1 uses the TRAP-TYPE construct to define the conditions when such a message can be generated, the identification of the message, and the management information to be contained in the message. When the second version of the SNMP framework was created, it was realized that the simple model for sending unsolicited messages needed to be generalized and a few problems solved. The class of unsolicited messages was renamed to notfications, and contained two types, which are v2TRAP (an unconfirmed notification), and INFORM (a confirmed notification). (Please note that an it is incorrect to characterize v1/v2TRAPs as unreliable and INFORMs as reliable.) Due to politics at the time, INFORMS were labelled as "manager-to-manager" communication. However, this labelling has been fixed (and anyone that claims that INFORMs are "manager-to-manager" communications is living with a 1996 world view and not a present world view!) The first and second frameworks for SNMP-based management do not contain a standard mechanism to configure where to send notification nor the other details such as which type, and the security parameter values. The result has been proprietary definitions that vary in sophistication. The simplest is a table of IP addresses where to send traps (with no support for INFORMs, and other properties). The third version of the SNMP framework contains in RFC 2573 and RFC 2576 a VERY RICH mechanism for managing notification generation. David T. Perkins 1.13.03 SUBJECT: SNMP and/versus the Web 1.13.03.01 SNMP MIB Browsers for Web Software 1.13.03.01.01 Commercially Available. 1.13.03.01.01.01 MibMaster, an SNMP to HTML Gateway from Equivalence An evaluation version is available free. A fully-functional version can be purchased. 1.13.03.01.02 Public Domain. 1.13.03.02 Web Browsers as Network Agents/Managers No data available. 1.13.04 SUBJECT: SNMP and Java "If you have a Linux or Windows NT environment check out: For Java see:" Carl H. Wist Other sources: Hope this helps. Martin Cooley 1.13.04.01 Java Classes/Applets/Etc for SNMP 1.13.04.01.01 Commercially Available 1.13.04.01.01.01 AdventNet, Inc. mailto:info@adventnet.com AdventNet, Inc. 5645 Gibraltar Drive Pleasanton, CA 94588 USA Phone: +1-925-924-9500 Fax: +1-925-924-9600 1.13.04.01.01.02 SunSoft Note: the above is one place to start, but don't forget to search the Web "Another option for building SNMP agents in Java is Sun's Java Dynamic Management Kit (JDMK) product. Take a look at JDMK is based on Java Beans -- as the agent developer, all you have to do is to adhere to the Java Beans design patterns in your Java code. An SNMP MIB compiler is provided that translates an SNMP MIB definition into Java Beans, you then need to fill in the methods of the generated Beans." Dave Hendricks > ... could some one tell me how to subscribe to the JDMK mailing > list I have subscribed using the indication of the JDMK home page but I > do not think some thing happening on this list : You should subscribe by sending an email to listserv@java.sun.com containing in the body: SUBSCRIBE JDMK-FORUM <your email address> (I am not sure whether you should also remove your signature, but I guess it is safer anyway) To get a better response time, please direct your questions regarding JDMK to the JDMK-FORUM list rather than to this forum. The JDMK-FORUM archives are accessible at You might want to have a look at the Java Dynamic Management Kit at Daniel Fuchs 1.13.04.01.01.03 Gamelan Lots of links to Java sites, developers, code, etc. 1.13.04.01.01.04 [...]Furthermore, the coupling of Sun's Jini and JDMK looks promising for creating "plug-and-manage" systems Steve Common 1.13.04.01.01.05 You may want to mention Cyberons for Java -- SNMP Manager Toolkit from Netaphor This product sells for $499 per developer license and royalty-free unlimited distributions. The product also provides high level functions such as device discovery, MIB walks, columnar and row access to tabular data, etc. A programmer's guide is available online at. Shripathi Kamath Cyberons for Java SNMP Manager Toolkit version 2.0 supports SNMP v3, and includes easy-to-use classes which provide access to all v3 features. We paid a great deal of attention while designing these classes to ensure that management applications can be written to work with all versions of SNMP with minimal differences in code, and provide numerous examples to illustrate usage. Also available as a separate product is Cyberons for Java SNMP Utilities 1.0, which is a set of utilities to work with the SNMP Manager Toolkit. These utilities include a MIB compiler/loader, a MIB browser and test application. More information about these products, including a complete programmer's guide, can be obtained from Gopal Narayan 1.13.04.01.02 Public Domain 1.13.04.01.02.01 From Jan-Arendt Klingel ... "Beside MIB-Master there is the JaSCA class library (Java SNMP Control Applet). The URL is. The organisation is called Nixu Oy and is located in Finland. One of the three authors is Pekka Nikander (Pekka.Nikander@nixu.fi). [Note: This site seems to have moved to, is all in Finnish.] 1.13.04.01.02.02 There is a mailing list called "Java Network Management Mailing List" on java-nm@adventnet.com. To subscribe send an email to majordomo@adventnet.com with a body of "subscribe java-nm". There is not so much traffic on the list (maybe because of a bug in the majordomo list). 1.13.04.01.02.03 The URL seems to have disappeared. -Ed 1.13.04.01.02.04 A very nice Java tool can be found on. It's Luca Deris hot application called "Liaison", developed at the IBM Research Center in Zurich. There are SNMP and CMIS-agents to query network management data. [Note: Site reported unreachable, 11/20/98] In the next three months I will hopefully present a network management application with Java "droplets". The URL is. Remember to switch off "lock ports above 1024" at your firewall." Jan-Arendt Klingel 1.13.04.01.02.05 From "Patrick" If you are looking for creating SNMP agents in Java, you can look at : JDMK : contains a mib compiler that creates java (agent) classes from a mib. JMAPI : Java Management API (JMAPI) Java Dynamic Management Kit : JMAPI : 1.13.04.01.02.06 "JMGMT is a java implementation of a SNMP stack. It also includes source code of examples [of]. 2nd public release, now with full source code. JMGMT is a java implementation of a SNMP v1 stack. It also includes the complete source code of all classes and examples. = The JMGMT classes are free and available for download on API documentation is online at Sven Doerr 1.13.04.01.02.07 You may try MIB Designer which is a Java 1.2.2/1.3 application that will run on Unix if that Unix support one of the JREs. MIB Designer has all the features you requested and much more. It can be found at Frank Fock 1.13.05 SUBJECT: SNMP and CORBA 1.13.05.01 >I am currently using an SNMP Manager from SNMP Research on a UNIX >Solaris box and am looking for a CORBA compliant SNMP Manager. Does >anyone know of such an animal? >Dave Stephens What do you mean when you say "CORBA compliant" SNMP manager? If you mean that the SNMP manager should provide a CORBA programming interface you will find some products when you search the internet for the term "JIDM" (Joint Inter Domain Management). Werner Poeppel 1.13.05.02 I worked on such a project. SNMP and TL1 were embedded peers running on top of Corba. Esentially, the implementation for the SNMP functions made Corba service calls to get the data they needed to satisfy the SNMP request. The Corba layer abstracts the device and, thus, the SNMP/TL1/etc developers worked at a high-level. This made it fast to support new MIBs (as long as the Corba IDL was there), but at a slight/moderate cost to performance. One challenge is the style of IDL, i.e. course- or fine-grained object defs. Course grained object defs makes it easy/efficient for such things as GUIs to operate over Corba but meant SNMP had to pull a lot more data than it typically needed to satisfy a SNMP request. Also [you] have to provide "next" IDL methods otherwise it is very inefficient for SNMP getNext hooks to repeatedly make Corba calls until the "right" object is found. In sum, my impression is that if SNMP is the primary method of managing a device, then SNMP/Corba stack is questionable. If SNMP is a minor service, then maybe this is a good choice. Either way, IDL designers need to consider SNMP issues before they set the IDL into stone. I understand there are tools/stds to convert MIBs into Corba IDL, which would make it easy/efficient to stack SNMP over Corba. However, this produces fine-grained object defs which may not be suitable for Java/GUI impls that use such tools to serve as the primary mgmt interface. Lauren Heintz 1.13.06 SUBJECT: SNMP and Visual Basic 1.13.06.01 Terri Coleman wrote: > > I need to be able to write SNMP Sets and Gets from within a Visual Basic > application. Can anyone help? Maybe this package of ocx's contains what you are looking for Free trail version for download available. Bernhard Fischer 1.13.06.02 LogiSoft AR has SNMPv2 toolkit for Visual Basic that includes SNMP ActiveX control and utilities supporting v1 and v2c Look up Alan Revzin 1.13.06.03 NETAPHOR SOFTWARE, INC., has recently released Cyberons, a suite of ActiveX components for engineering and networking applications, which includes a SNMP Manager control. Please note that we do not support trap reception at this time, though this feature will be included in our next release. But if you want to perform SNMP Get, GetNext and Set operations with a real lightweight control, which requires minimal VB code in order to be functional, I think you will find the Cyberons product to be an ideal match. You can check out our free 30-day trial by downloading it from Gopal Narayan 1.13.06.04 You may want to try Mabry Software. They have an OCX that you can download from. Richard Grier 1.13.07 SUBJECT: SNMP and IPv6 > I have a question regarding SNMP and IPv6, and more particularly > SNMP v1 and IP v6. > > Can SNMP v1 be used over an IP v6 network? > > Daniel Fuchs Yes. The only thing that is missing are concrete values for the TDomain and appropriate TCs which define the address formats. This is being worked on. The latest document is available at: <> Discussions take place on the <mibs@ops.ietf.org> mailing list. > In that case how do you handle the agent-addr field of the > trap v1 PDU? (agent-addr is NetworkAddress which is IpAddress > which is OCTET STRING (SIZE(4)) which doesn't have enough room > for an IP v6 address). This not only applies to IPv6 but also to other non-IPv4 transports. In general, agent-addr is broken and the second version of the protocol operations use a trap format which does not have the agent-addr field anymore. > Now if you're agent is bilingual (or trilingual) how do you > handle trap conversion from v2 to v1 when your network is > based on IPv6? With SNMPv3, you use the engineID to identify the originator of a notification. In SNMPv2 or SNMPv1, you are lost. > Is there any RFC that specifically addresses SNMP and IPv6 ? Not really. The ID I have cited above is part of the solution. I am not sure we need much more because UDP is UDP regardless which IP version you use (except that the network layer address format changes). Juergen Schoenwaelder 1.13.10 SUBJECT: SNMP and C# >I am planning to make an SNMP manager (using C#) that will query any SNMP >agent. Currently I'm not able to find any SNMP libraries for C#. Can anyone >point me to a direction. >Also any simple C# code for SNMP will be helpful. >Rubaiyat You access SNMP devices using the SNMP provider for WMI.WMI is wrapped by the System.Management namespace classes. Please refer the folowing link for more more details installing_the_wmi_snmp_provider.asp Sreejumon Also, have a look at this snmp libraries for .NET: Kumar Gaurav Khanna I know some of you have posted looking for a C# Snmp library or one for dotnet. Anyone who is interested check out NetToolWorks, Inc. 1.13.12 SUBJECT: SNMP and Perl 1.13.12.01 SNMP::Info - Object Oriented Perl5 Interface to Network devices and MIBs through SNMP. SNMP::Info - Version 0.4 AUTHOR Max Baker ("max@warped.org") SNMP::Info was created at UCSC for the netdisco project () DESCRIPTION SNMP::Info gives an object oriented interface to information obtained through SNMP. This module lives at Check for newest version and documentation. 1.20.00 --General Questions about SNMPv2 1.20.01 SUBJECT: What is SNMPv2? SNMPv2 is a revised protocol (not just a new MIB) which includes improvements to SNMP in the areas of performance, security, confidentiality, and manager-to-manager communications. SNMPv2 Framework : The following RFCs identify the major components of SNMPv2. Historical ---------- RFC 1441 - Introduction to SNMP v2 RFC 1442 - SMI For SNMP v2 RFC 1443 - Textual Conventions for SNMP v2 RFC 1444 - Conformance Statements for SNMP v2 RFC 1445 - Administrative Model for SNMP v2 RFC 1446 - Security Protocols for SNMP v2 RFC 1447 - Party MIB for SNMP v2 RFC 1448 - Protocol Operations for SNMP v2 RFC 1449 - Transoport Mappings for SNMP v2 RFC 1450 - MIB for SNMP v2 RFC 1451 - Manager to Manger MIB RFC 1452 - Coexistance between SNMP v1 and SNMP v2 Micha Kushner adds: RFC Number Title Status -------------- RFC 1901 Introduction to Community-based SNMPv2 Experim Standard RFC 1902 SMI for SNMPv2 Draft Standard RFC 1903 Textual conventions for SNMPv2 Draft Standard RFC 1904 Conformance statements for SNMPv2 Draft Standard RFC 1905 Protocol operations for SNMPv2 Draft Standard RFC 1906 Transport mappings for SNMPv2 Draft Standard RFC 1907 MIB for SNMPv2 Draft Standard RFC 1908 Coexistence between SNMPv1 and SNMPv2 Draft Standard Wes Hardaker adds: "All SNMPv2 versions but one are historical. Only SNMPv2c is experimental, but is widely accepted as the SNMPv2 standard. Note that the other pieces of SNMPv2 (protocol, SMI, etc) are on the standards track. Only the architecture that ties them together is experimental. The SNMPv2 messaging protocol, etc, are referenced in the SNMPv3 documents, which are on the standards track at draft standard right now." 1.20.02 SUBJECT: What is SNMPv2*/SNMPv1+/SNMPv1.5/SNMP++? SNMPv2 had been announced for many months, and most of us assumed that it was accepted as the next step up from SNMPv1. That assumption was false. In fact there were several points on which the members of the IETF subcommittee could not agree. Primary among them was the security and administrative needs of the protocol. Simply put, SNMPv2*/SNMPv1+/SNMPv1.5 is SNMPv2 without the contentious pieces, but *with* the stuff everyone agrees is of value. You may wish to check for more details. === Edward M. Hourigan wrote: : I keep hearing about SNMP++. What is it? Are there any web pages : describing what it is? I believe there is a Web site with this info at : Hope this helps, John Silva also: The original SNMP++ 2.6 sources can be found at If you`re looking for a Linux/Solaris/Digital port you might try Frank Fock [Editor's Note: See also Part 2: Public Domain SNMP software] I'd like to announce availability of MG-WinSNMP SDK V1.0b6, a 32-bit implementation of WinSNMP specification. It is available under the shareware license and you are welcome to download it from the following URLs: This release of MG-WinSNMP SDK (wsnmp32.dll, a 32-bit winsnmp.dll library) by MG-SOFT Corporation has been published in order to gain compatibility with the Revision 2.5f of SNMP++, an Open Specification for Object Oriented Network Management Development Using C++ by Peter Erik Mellquist, Hewlett Packard Company. () Matjaz Vrecko [Editor's Note: See also Part 2: Public Domain SNMP software] 1.20.03 SUBJECT: What is SNMPv2c? SNMPv2c is the combination of the enhanced protocol features of SNMPv2 without the SNMPv2 security. The "c" comes from the fact that SNMPv2c uses the SNMPv1 community string paradigm for "security". 1.20.04 SUBJECT: What the heck other SNMPv's are there? 1.20.04.01 See 1.20.04.02 Unfortunately, many people are confused about the SNMP protocol versions, which are: SNMPv1 - a standard and widely used SNMPv2p - party based, now obsolete (not used) SNMPv2c - community based, "expermental", but has usage SNMPv2u - user based, experimental and not used SNMPv3 with USM - standards track, trying to get traction In SNMPv1, there was no standards-track mechanism defined that specified where to send traps, so every vendor defined their own approach. The SNMPv3 framework documents include mechanisms that can also be used in SNMPv1 and SNMPv2c. They are very complicated, but do work in specifying the targets for traps in SNMPv1 and traps and informs in SNMPv2c and SNMPv3. David T. Perkins 1.20.04.03 My advice would be to make SNMPv1 the first priority and SNMPv3 the second. I would not bother to implement SNMPv2c unless it came for free with the agent toolkit. Mike Heard 1.20.04.04 There are several varients of the SNMPv2 protocol. They are: SNMPv2p(OBSOLETE): For this version, much work was done to update the SNMPv1 protocol and the SMIv1, and not just security. The result was updated protocol operations, new protocol operations and data types, and party-based security from SNMPsec. This version of the protocol, now called party-based SNMPv2 is defined by RFC 1441, RFC 1445, RFC 1446, RFC 1448, and RFC 1449. (Note this protocol has also been called SNMPv2 classic, but that name has been confused with community-based SNMPv2. Thus, the term SNMPv2p is preferred.) SNMPv2c(experimental, but widely used): This version of the protocol is called community string-based SNMPv2. It is an update of the protocol operations and data types of SNMPv2p, and uses community-based security from SNMPv1. It is defined by RFC 1901, RFC 1905, and RFC 1906. SNMPv2u(experimental): This version of the protocol uses the protocol operations and data types of SNMPv2c and security based on users. It is defined by RFC 1905, RFC 1906, RFC 1909, and RFC 1910. SNMPv2*(experimental): This version combined the best features of SNMPv2p and SNMPv2u. (It is also called SNMPv2star.) The documents defining this version were never published as RFCs. Copies of these unpublished documents can be found at the WEB site owned by SNMP Research (a leading SNMP vendor and previously a proponent of this version). What this all means is that SNMPv2c is in current usage, whereas the other variants are only around in limited form in labs are in some versions of software that have been obsoleted. David Perkins 1.22.00 --General Questions about SNMPv3 1.22.01 SUBJECT: What is SNMP V3? 1.22.01.01 Refer to 1.22.01.02 See also: "I am happy to announce that a SimpleTimes issue on SNMPv3 is now available from the SimpleTimes Web server: The journal is available in PostScript and HTML format. New SimpleTimes issues are announced over a special mailing list. More details about the SimpleTimes project and how to subscribe to this mailing list can be found in the December 1997 issue or on the Web server. I hope you all enjoy reading this issue on SNMPv3 and I wish you all the best for 1998." Juergen Schoenwaelder Juergen later added: "You can find these links and many more on the SNMPv3 web page at:" 1.22.01.03 Micha Kushner/David Partain adds: RFC Number Title Status=PROPOSED -------------- Status = 3D PROPOSED RFC 2271 An Architecture for Describing SNMP Management Frameworks RFC 2272 Message Processing and Dispatching for the Simple Network Management Protocol (SNMP) RFC 2273 SNMPv3 Applications RFC 2274 User-based Security Model (USM) for version 3 of the Simple Network Management Protocol (SNMPv3) RFC 2275 View-based Access Control Model (VACM) for the Simple Network Management Protocol (SNMP) Status = 3DDRAFT STANDARD RFC 2570 Introduction to Version 3 of the Internet-standard Network Management Framework (Status=3DINFORMATIONAL) RFC 2571 An Architecture for Describing SNMP Management Frameworks RFC 2572 Message Processing and Dispatching for the Simple Network Management Protocol (SNMP) RFC 2573 SNMP Applications RFC 2574 User-based Security Model (USM) for version 3 of the Simple Network Management Protocol (SNMPv3) RFC 2575 View-based Access Control Model (VACM) for the Simple Network Management Protocol (SNMP) Internet- Draft Coexistence between Version 1, Version 2, and Version 3 of the Internet-standard Network Management Framework 1.22.01.04 also, for SNMPv3 implementations ... "See the list on" Simon Leinen 1.22.01.05 Bill Stallings writes: My paper, "SNMPv3: A Security Enhancement to SNMP" published in the 4th Quarter 1998 issue of the online journal IEEE Communications Surveys, is now available at. 1.22.01.06 Some pertinent excerpts from the RFC index: 1157 Simple Network Management Protocol (SNMP). J.D. Case, M. Fedor, M.L. Schoffstall, C. Davin. May-01-1990. (Format: TXT=74894 bytes) (Obsoletes RFC1098) (Also STD0015) (Status: HISTORIC) --- 3410 Introduction and Applicability Statements for Internet-Standard Management Framework. J. Case, R. Mundy, D. Partain, B. Stewart. December 2002. (Format: TXT=61461 bytes) (Obsoletes RFC2570) (Status: INFORMATIONAL) 3411 An Architecture for Describing Simple Network Management Protocol (SNMP) Management Frameworks. D. Harrington, R. Presuhn, B. Wijnen. December 2002. (Format: TXT=140096 bytes) (Obsoletes RFC2571) (Also STD0062) (Status: STANDARD) 3412 Message Processing and Dispatching for the Simple Network Management Protocol (SNMP). J. Case, D. Harrington, R. Presuhn, B. Wijnen. December 2002. (Format: TXT=95710 bytes) (Obsoletes RFC2572) (Also STD0062) (Status: STANDARD) 3413 Simple Network Management Protocol (SNMP) Applications. D. Levi, P. Meyer, B. Stewart. December 2002. (Format: TXT=153719 bytes) (Obsoletes RFC2573) (Also STD0062) (Status: STANDARD) 3414 User-based Security Model (USM) for version 3 of the Simple Network Management Protocol (SNMPv3). U. Blumenthal, B. Wijnen. December 2002. (Format: TXT=193558 bytes) (Obsoletes RFC2574) (Also STD0062) (Status: STANDARD) 3415 View-based Access Control Model (VACM) for the Simple Network Management Protocol (SNMP). B. Wijnen, R. Presuhn, K. McCloghrie. December 2002. (Format: TXT=82046 bytes) (Obsoletes RFC2575) (Also STD0062) (Status: STANDARD) 3416 Version 2 of the Protocol Operations for the Simple Network Management Protocol (SNMP). R. Presuhn, Ed.. December 2002. (Format: TXT=70043 bytes) (Obsoletes RFC1905) (Also STD0062) (Status: STANDARD) 3417 Transport Mappings for the Simple Network Management Protocol (SNMP). R. Presuhn, Ed.. December 2002. (Format: TXT=38650 bytes) (Obsoletes RFC1906) (Also STD0062) (Status: STANDARD) 3418 Management Information Base (MIB) for the Simple Network Management Protocol (SNMP). R. Presuhn, Ed.. December 2002. (Format: TXT=49096 bytes) (Obsoletes RFC1907) (Also STD0062) (Status: STANDARD) Michael Kirkham 1.30.00 --RMON 1.30.01 SUBJECT: What is RMON? ---------------- The Remote Network Monitoring MIB is a SNMP MIB for remote management of networks. While other MIBs usually are created to support a network device whose primary function is other than management, RMON was created to provide management of a network. RMON is one of the many SNMP based MIBs that are on the IETF Standards track. 1.30.02 SUBJECT: RMON Standardization Status RMON is one of the many SNMP based MIBs that are on the IETF Standards track (RFC 1310). Currently (Jan 94) RMON has two instantiations in the IETF standards process. First, RFC 1271 - a Proposed Standard, specifies the general structure of RMON and the particulars of an Ethernet based RMON agent. RFC 1513 - a Proposed Standard specifies the additional RMON groups and specifics for a Token Ring network. 1.30.03 SUBJECT: RMON Working Group. The RMON Working Group is an IETF Working Group under the Network Management Area. The WG meets periodically - usually at all IETF meetings. The WG maintains a mailing list for Questions and Comments concerning RMON. Mail List: mailto:rmonmib@cs.hmc.edu ? If no luck there, try rmonmib@cisco.com The group's charter can be found at: 1.30.04 SUBJECT: Joining the RMON Working Group Mailing List To join the RMON Working Group mailing list, send mail to: Mail List Request: mailto:rmonmib-request@jarthur.claremont.edu. DO NOT send a request to join message to the general mailing list. [Editor's Note: We have received a complaint that this request may bounce. The claremont.edu addresses may no longer be active] You may also wish to try: mailto:rmonmib-request@cisco.com (Thanks to James Stansell for the detective work.) 1.30.05 SUBJECT: Historical RMON Records There are copies of the RMON mailing list messages and meeting minutes within the IETF archive structure - available at various sites. There is also a RMON archive directory which can be accessed via anonymous ftp at: jarthur.cs.hmc.edu, directory /pub/rmon [Editor's Note: We have received a complaint that site no longer exists (or, was not at home when someone called). Anyone know if this site remains active? Is this the same place as jarthur.claremont.edu?] 1.30.06 SUBJECT: RMON Documents 1. RMON White Paper in the anonymous ftp directory at jarthur.cs.hmc.edu. There are two formats: frame and postscript. This paper was developed by members of the RMON working group prior to an Interop. It is a superficial discussion of RMON. 2. Chapter 7 in "SNMP, SNMPv2 and CMIP: The Practical Guide to Network Management Standards" by William Stallings, (c) 1993 Addison-Wesley, goes into some detail on the RMON MIB. 1.30.07 SUBJECT: RMON2 RMON2 is an IETF standards track effort. The IETF RMON working group started on the RMON2 MIB module back in the fall of 1994. It was published as RFC 2021 in January 1997. All of the leading probe vendors, including NetScout, Technically Elite, Solcom, HP, etc have probes that support it. Also, many of the networking device manufacturers including Bay Networks and 3Com have embedded RMON2 support in their products. There was an interoperability test summit in December 1997, which was attended by all of the companies above plus Cisco and Cabletron. The RMON2 specification is quite stable and ready for advancement in the standards process. Two additions are in the works to be published. They are RMON extensions for switches and an RMON extension for fast networks. The major difference [between RMON and RMON2] is that RMON provided statistics only at the data link layer, where as RMON2 provides statistics at the network and upper layers. As to the original questions from Paul Black, It is difficult to take advantange of all the features in RMON with generic tools. With RMON2, it is even more difficult. Try David T. Perkins [post edited for conciseness] 1.40.00 --ISODE 1.40.01 SUBJECT: What is ISODE? ------ ISODE (pronounced "eye-so-DEE") is an acronym for "ISO Develoment Environment". It is an implementation of SNMP which can be used as the starting point for further refinement by you. In order to use it you must agree to the conditions. This quote is from "The Simple Book", 2nd ed.: "[ISODE] is openly available but is NOT in the public domain. You are allowed and encouraged to take this software and use it for any lawful purpose. However, as a condition of use, you are required to hold harmless all contributors." Most MIB compilers seen by this editor sprang from ISODE roots. 1.40.02 SUBJECT: Where can I get ISODE? The old archive was ... 4BSD/ISODE 8.0 SNMPv2 package This distribution has moved. One place a copy can be obtained is listed below. Questions may be sent to ISODE-SNMPv2@ida.liu.se Mailing list may be subscribed by sending mail to isode-snmpv2-request@cs.utk.edu A copy of the 4BSD/ISODE 8.0 SNMPv2 package 1.40.03 SUBJECT: Is there an ISODE SNMPv2 Mailing List? Yes. To subscribe, send email to: mailto:isode-snmpv2-request@cs.utk.edu 1.50.00 --Using SNMP to Monitor or Manage 1.50.01 SUBJECT:How do I calculate utilization using SNMP? Brad Harris wrote: > We are trying to setup T-1 utilization percentage stats using ifInOctets > and ifOutOctets. MANY ANSWERS FOLLOW: 1.50.01.01 I would suggest: (DELTA(ifInOctets) + DELTA(ifOutOctets)) * 8 -------- * 100 (DELTA(sysUpTime) / 100) * 1 540 000 where DELTA(attribute) means the difference of the value of attribute between two polls. Of course, the values for ifInOctets, ifOutOctets and sysUpTime should be requested in one single PDU. Olivier Miakinen 1.50.01.02 Serial lines (including TDM systems like T1) measure interface speed as half duplex. That is, the 1.544 Megabit per second bandwidth is one way; a full duplex line actually has twice that value. 1.544 Mb for transmit, 1.544 Mb for receive. If you want the "interface utilization", then you would add outOctetcs and inOctets together, as you did, but use 3088000 for the interface speed. If you want "line utilization" (which is more valuable for typical management operations), you could use the "max" value of in or out Octets, as in the previous example. This is more useful, because the line may be at 50% utilization (using your method) and still be saturated, if all traffic is going one way. T. Max Devlin 1.50.01.03 Make sure your time delta doesn't exceed the wrap time the 32 bit MIB2 counters, ~6 Hrs for T1. Its a nice touch if ifInOctets and ifOutOctets are bound in the same PDUs. Also bind sysUpTime in each PDU so you can detect agent reload. Charlie Dellacona 1.50.01.04 T1 circuits are duplex, you have to have separate utilisation formulae for both in and out. Otherwise you run the risk of missing that your heavily utilised in one direction because the other is very light. In many configurations this is a likely situation, a short frame requesting data from a server or mainframe resulting in megabytes heading in the opposite direction. Wim Harthoorn 1.50.01.05 To make your figures mean something useful, generate incoming and outgoing utilization separately. A T1 link is full-duplex....1.544 Mbps in each direction. An organizational T1 Internet link will saturate on the incoming side while the outgoing side is less than half utilized. Your formula would indicate that the link had some extra bandwidth capability when in reality its a major bottleneck. Gary Flynn 1.50.01.06 You are missing a few subtleties of getting this exactly right. What you want to do is sample (all in one packet exchange) the values of ifInOctets, ifOutOctets, and sysUptime. Then, you sample all three again (after some interval) and use the three deltas to compute: Delta(ifInOctets)*8 -- => Input % utilization Delta(sysUptime)*154 And likewise for output. Note that there are two factors of 100 folded into the denominator (that's why 154 instead of 1540000), one since sysUptime is hundredths of a second and the second to get a percent rather than a fraction. You could also fold the 8 and 154 together as well, but that's not an integer... And be sure your Delta function properly accounts for wrapping. You should do this periodically, each time computing the deltas from the previous sample, dropping intervals that are "insane" (e.g. sysUptime has a large delta [positive or negative] compared to the wall [or monitoring system] clock). You will want to compute _both_ deltas and plot them over time as well as extracting just the maximum value. You want a sampling period that's small enough to really indicate peaks, without being so short it overloads the monitoring or monitored systems. If you can, you want to monitor both ends of the line (ifOutOctets at one end may be greater than ifInOctets at the other, in which case it's a better measure of load in that direction). Michael A. Patton 1.50.01.07 Dependent on your need for reproduction and historical tracking of the utilization and other factors such as error rates, you might want to consider purchasing a performance monitoring and reporting tool to help you through some of this. We have a tool for doing precisely what you want, and it also solves for cases of counter roll-over and sysuptime resets. Our tool is called ClearStats and is very economical and flexible. We have autodiscover and automated/scheduled reporting. Check us out at John Catalano 1.50.01.08 Dan Cox wrote: > if you look in the rmon mib and look at the description of > etherstatsoctets it tells you if you > want to get utilization that you sample etherstatsoctets at two > intervals and use this formula. I want someone to explain the formula > to me. > > Here it is > Packets * (9.6 + 6.4) + (Octets * .8) > utilization = ---------------------------------------------- > Interval * 10,000 > > I assuming this is for 10 mbps ethernet. > What is the 9.6 and 6.4? > Why do you need to know the number of packets? > What formula do you use if you are using 100 mbps ethernet? > What if it is full-duplex? In the formula, 9.6 is the interpacket gap time in micro seconds. 6.4 is the preamble+start-frame-delimiter time in micro seconds. Each time you send a packet, these are present. The 10,000 is the speed. You change this to 100,000 for 100 Mb/s ethernet. For full duplex, the formula is the same, but it applies to each channel. That is, full duplex is a point-to-point technology. If you connect nodes A and B. There are essentially two dedicated and contention free channels, one from from A to B and the other from B to A. You can compute utilization on each channel. David T. Perkins 1.50.01.09 Have a look at for a little bit of info on calculating utilization statistics. Paul Koch 1.50.01.10 Raja Kolli wrote: > How do you represent the speed for full-duplex links e.g. full-duplex 10Mb > ethernet, Should it be 10Mbps or 20Mbps? Or is there any other object (new > ifType value etc.,) that can be used to represent full-duplex operation? > Appreciate any pointers on the standrards. Page 7 of RFC 2358, Definitions of Managed Objects for the Ethernet-like Interface Types, describes how the ifSpeed object from IF-MIB should be set for full-duplex ethernet interfaces: [RFC quote deleted -- go get yourself a copy. Ed.] So, the answer to your question is that for a full-duplex 10BaseT interface ifSpeed should be 10Mbps, just as it is for a half-duplex interface. C. M. "Mike" Heard 1.50.02 SUBJECT: What are Appropriate Operating Thresholds? >We've just installed brand new PS Hubs and a SSII switch 3300 with SNMP >capabilities from 3Com, and we're managing it with the Transcend Workgroup >for Windows 6.0 application. Does anyone know which are the suitable >thresholds for both hub and switch alarms? Basicly, I'd like to know just >the more usual : Total errors, FCS errors, alignment errors, broadcast >packets, runts, collisions, undersize and oversize packets, long and short >events. > > Jorge Alaman~ac [Editor's Note: T. Max Devlin's response has been edited to fit. These out takes are noted by "[...]".] Suitable thresholds are environmentally sensitive; everybody's "correct" values area little different. The best you will get from products or info sources are more "defaults" than "best guesses", IMHO. We've found that the ideal setting for thresholds does not correlate to absolute numbers, or even typical ranges. [...] The best approach, seriously, to thresholding is to consider, not some absolute concept of the perfect network metrics, but the results of the thresholding. Essentially, you should look at a simple plot of your values over a few hours and a few days (baselining), then pick a threshold value that will result in an "appropriate" number of alerts. If you want a log-style "this is how many times this happens", you might want every peak to trip the threshold. If a more report-oriented "the occurrences happened at this time", a slightly higher value might be called for. "This is a problem, you should know about it even if you can't 'fix' it" thresholds might trigger a few times a week, and the "the network is broken; get busy" alerts should essentially be set high enough so that they never happen under typical network conditions. The real issue is not what the numbers should be, but how often you want to know about it. [...] But just so I don't leave you high and dry, here's some beginning defaults, if you insist: Total errors: <2% FCS errors: <2% Alignment errors: <1% broadcast packets: Start with 10%; bring up if you are flooded, bring down if it never triggers runts: <1%, but some systems might have much larger values under normal conditions collisions: 10% undersize and oversize packets: <1% long and short events: <1% T. Max Devlin 1.50.03 SUBJECT: Are MIBs available to monitor application traffic? George Koukoulas wrote: : I would like to find out if there are any MIBs about management : of application traffic, meaning separate management of ftp, : http, telnet, smtp, etc application traffic. There are two to-be-published MIBs that may be of interest to you. The Application Management MIB <draft-ietf-applmib-mib-11.txt> provides statistics for application or service IO channels. On top of these channels, one can have transaction streams with transaction kind specific statistics. The WWW Services MIB <draft-ietf-applmib-wwwmib-11.txt> provides a core set of statistics for Web services. It is written against an abstract document transfer protocol. Mappings to FTP and HTTP are defined in the document. Both MIBs have been approved by the IESG for publication as Prosposed Standards. They are currently sitting in the queue of the RFC editor waiting for publications as RFCs. Both MIBs are the product of the application management working group. Juergen Schoenwaelder 1.50.04 SUBJECT: How can I make sense of the Interfaces Group? You should definitely look at RFC 2863 which is the latest definition of the interfaces group. The introductionary text is very valuable in order to understand of the IF-MIB evolved over time. Juergen Schoenwaelder 1.50.04.01 > If an interface is full duplex, does that mean it can transmit at a > rate of 'ifSpeed' in each direction simultaneously, or does it mean that > the interface has 'ifSpeed' worth of bandwidth in total? > > Glenn Reesor ifSpeed should represent an estimate of the bandwidth of the interface. ifHighSpeed should be used if ifSpeed isn't large enough. Les Cargill There seems to be rough consensus on the former interpretation, i.e., that the interface can transmit at a rate of 'ifSpeed' in each direction simultaneously. Mike Heard 1.50.04.02 > If an interface is half duplex, does that mean that it can transmit > at a rate of 'ifSpeed' in each direction, but only one direction at a > time? > > Glenn Reesor There seems to be nearly universal agreement on this interpretation. In searching the IETF mail archive (specifically 1998-07.mail.aug4) I found the following two excerpts which might be helpful: On Friday, 3 Apr 1998, Gary Hanson wrote: > > Should the ifSpeed for a T1 interface be 1.54Mb or should it be 3.08Mb > > in that it can sustain 1.54Mb in both the transmit and receive > > The latest <draft-ietf-trunkmib-ds1-mib-08.txt> for the DS1-MIB is > unambiguous on this point. In section 3.1 it says to use 1544000 > for the ifSpeed for DS1 lines. On Wednesday, 20 May 1998, John Flick wrote: > > 5) How do you tell if the interface is half or full-duplex for > > both ethernet and token ring interfaces? > > The current answer for Ethernet is ifMauType. Using ifSpeed, as one > response suggested, has been used by some vendors (a survey I did a > few months ago showed about half of the responders doubled ifSpeed for > full-duplex, though most agreed that this is a kludge). The consensus > of the hubmib WG was that this should be disallowed. The hubmib WG is > currently debating whether ifMauType is adequate, or if we need to add > an object for duplex mode to either the Ethernet MIB or IF-MIB. Mike Heard 1.50.04.03 The Interfaces Group of RFC1213 has been superseded by RFC2863 and RFC2864: 2863 The Interfaces Group MIB. K. McCloghrie, F. Kastenholz. June 2000. (Format: TXT=155014 bytes) (Obsoletes RFC2233) (Status: DRAFT STANDARD) 2864 The Inverted Stack Table Extension to the Interfaces Group MIB. K. McCloghrie, G. Hanson. June 2000. (Format: TXT=21445 bytes) (Status: PROPOSED STANDARD) Although RFC1643 is a full-standard, it does not properly support 100 BaseT. It has been superseded by RFC2665, which does: 2665 Definitions of Managed Objects for the Ethernet-like Interface Types. J. Flick, J. Johnson. August 1999. (Format: TXT=110038 bytes) (Obsoletes RFC2358) (Status: PROPOSED STANDARD) Even though these are all SMIv2 MIBS, everything in them except for the Counter64 objects in the IF-MIB can indeed be implemented in an SNMPv1 agent. Mike Heard 1.50.10 SUBJECT: When do I use GETBULK versus GETNEXT? 1.50.10.01 You use GETNEXT, typically, to get selected columns from one or more rows of a table. If you want the values for columns S(1)..S(s) from columns C(1)..C(c) (where s<c)) for all rows in the table (and there are N rows), you would make N+1 GETNEXT requests. (This assumes that the varBinds for columns S(1)..S(s) will fit in a request and response message.) You use GETBULK, typically, as an optimisation of GETNEXT, and you would not typically know how many rows will be in the table. You just issue GETBULKs until you get all of the rows, just like using GETNEXT. With GETNEXT, you know when you are done when the response is not the next row. Likewise, with GETBULK. However, instead of getting a single set of extra varBinds, you get upto the the value of maxRepeaters. This behavior is called "overshoot". If the agent and manager support large max message sizes, and the maxRepeaters is large, then you will have many extra varBinds in the last reponse to GETBULK. Typically, this is not too bad. David Perkins 1.50.10.02 See RFC 1905 section 4.2.3 what the real correct behaviour of a GetBulkRequest is. Especially study the situations under which an agent returns less than the total number of repetitions. Juergen Schoenwaelder 1.50.10.03 I keep getting questions on GetBulk, and how it works and so on, and especially multi-var bind GetBulk. I'll attach an part email I've had to send out recently, which can be dumped into the FAQ. Feel free to edit/clean up as needed. [Editor's Note: See] Pete Flugstad 1.50.12 SUBJECT: What free products can be used to monitor? > I am looking for some Network Monitoring Software that will give > alerts when a device or server goes down and that will also log and > monitor snmp information for devices and be able to graph the data it > logged. > Cyberspew I use a combination of nagios (updated version of netsaint) and mrtg with rrdtool. Nagios provides complete network monitoring, including device and service availability. It will even produce graphics of the nodes in your network. Mrtg allows you to graph any snmp object on your network; the most common of which are incoming and outgoing traffic. I also use it to monitor cpu usage, disk usage and webcache requests. RRDtool is an alternative logger for mrtg, which does not generate graphs automatically, in order to save cpu time. If you do not want to go down the free route, there is program for windows that will do all of this (except you cannot choose any snmp object to graph - only traffic) from Solarwinds, called Network Performance Monitor. Their website is at. Chris 1.75.00 -- SNMP Engineering and Consulting 1.75.01 SUBJECT: SNMP Engineering and Consulting Firms [Editor's Note: Business entities named in this section should have a minimum of three years of direct experience implementing SNMP solutions at either the manager or agent node.] 1.75.01.01 Core Competence Inc. David M. Piscitello 3 Myrtle Bank Lane Hilton Head, SC 29926 Email: dave@corecom.com Phone: (843) 683-9988 Fax: (843) 689-5595 1.75.01.02 SNMP Research International, Inc. 3001 Kimberlin Heights Road Knoxville, TN 37920-9716 Ph: 865-579-3311 Fx: 865-579-6565 mailto:info@snmp.com SNMP Research provides consulting and development services in conjunction with the licensing of our products and development tools. Our customers' needs include the following: Product definition MIB design, Hardware design Implementation System integration/testing Life-cycle maintenance Our expertise and business model allows us to match our resources with customers' needs anywhere along this spectrum. 1.75.01.03 Panther Digital Corporation Danbury, CT OEM Software Engineers and Consultants panther@pantherdig.com 203 312-0349 1.75.01.04 G & H Computer Services, Inc Daytona Beach, FL 904 255-1599 904 253-1545 FAX 1.75.01.05 Prism Communications, Inc 10015 Old Columbia Road, Suite F-100 Columbia MD 21046 Tel: 410-381-1515 Fax: 410-381-8787 mailto: info@prismComm.com Prism Communications has extensive experience with the development of SNMP v1/2/3 based solutions including RMON1/2 and AgentX. Customers look to us to design enterprise MIBs, develop embedded agents, extend/develop manager frameworks, develop scripts for detailed testing. We have extensive experience with VxWorks/pSOS, Win32 and Solaris environments and are a Solutions Partner for HP OpenView. END OF PART 1, SNMP FAQ PLEASE CONTINUE WITH PART 2.
http://www.faqs.org/faqs/snmp-faq/part1/index.html
crawl-002
refinedweb
15,987
66.03
MySQL 8.0 Release Notes There is an issue for MySQL 8.0.19 installed using MySQL Installer. Then install MySQL Enterprise Firewall afterward using the instructions for manual installation (see Installing or Uninstalling MySQL Enterprise Firewall). This problem is corrected in MySQL 8.0.20. Deprecation and Removal Notes Function and Operator Notes Functionality Added or Changed MySQL now Password Management. (Bug #27733694, Bug #90169) ANALYZE TABLE statements now produce read audit events. (Bug #29625461) Audit log connect events now include any connection attributes passed by the client. Connection attribute logging is supported for new-style XML log file format and JSON format, but not old-style XML format. See Audit Log File Formats. Microsoft Windows: On Windows, the minimum version of CMake for builds from the command line is now 3.15. (Bug #30332632, Bug #96954) New FPROFILE_GENERATE and FPROFILE_USE CMake options are available for experimenting with profile guided optimization (PGO) with GCC. See the cmake/fprofile.cmake in a MySQL source distribution for information about using them. These options have been tested with GCC 8 and 9, and with Clang. Enabling FPROFILE_USE also enables WITH_LTO (link time optimization). (Bug #30089834, Bug #96314, Bug #30133324, Bug #96410, Bug #30164113, Bug #96486) Innodb_system_rows_read, Innodb_system_rows_inserted, Innodb_system_rows_deleted status variables were added for counting row operations on InnoDB tables that belong to system-created schemas. The new status variables are similar to the existing Innodb_rows_read, Innodb_rows_inserted, Innodb_rows_deleted status variables, which count operations on InnoDB tables that belong to both user-created and system-created schemas. The new status variables are useful in replication environments where relay_log_info_repository and master_info_repository variables are set to TABLE, resulting in higher row operation counts on slaves due to operations performed on the slave_master_info, slave_replay_log_info, and slave_worker_info tables, which belong to the system-created mysql schema. For a valid comparison of master and slave row operation counts, operations on tables in system-created schemas can now be excluded using the count data provided by the new status variables. Thanks to Facebook for the contribution. (Bug #27724674) Setting the hash_join optimizer switch (see optimizer_switch system variable) no longer has any effect. The same applies with respect to the HASH_JOIN and NO_HASH_JOIN optimizer hints. Both the optimizer switch and the optimizer hint are now deprecated, and subject to removal in a future release of MySQL. (Bug #30471809) Support for the YEAR(2) data type was removed in MySQL 5.7.5, leaving only YEAR and YEAR(4) as valid specifications for year-valued data. Because YEAR and YEAR(4) are semantically identical, specifying a display width is unnecessary, so YEAR(4) is now deprecated and support for it will be removed in a future MySQL version. Statements that include data type definitions in their output no longer show the display width for YEAR.. The (undocumented) UNSIGNED attribute for YEAR is also now deprecated and support for it will be removed in a future MySQL version. Error messages regarding crash recovery for XA were revised to indicate XA context to distinguish them from non-XA crash recovery messages. (Bug #30578290, Bug #97743) Previously, the server returned this error message for attempts to use LOAD DATA LOCAL with LOCAL capability disabled: The used command is not allowed with this MySQL version. This was misleading because the error condition is not related to the MySQL version. The server now returns an error code of ER_CLIENT_LOCAL_FILES_DISABLED and this message: Loading local data is disabled; this must be enabled on both the client and server side. (Bug #30375698, Bug #29377985, Bug #94396) Previously, user-defined functions (UDFs) took no account of the character set or collation of string arguments or return values. In effect, string arguments and return values were treated as binary strings, with the implication that only string arguments containing single-byte characters could be handled reliably. UDF behavior is still the same by default, but the interface for writing UDFs has been extended to enable UDFs to determine the character set and collation of string arguments, and to return strings that have a particular character set and collation. These capabilities are optional for UDF writers, who may take advantage of them as desired. See User-Defined Function Character Set Handling Of the UDFs distributed with MySQL, those associated with the following features and extensions have been modified to take advantage of the new capabilities: MySQL Enterprise Audit, MySQL Enterprise Firewall, MySQL Enterprise Data Masking and De-Identification, MySQL Keyring (the general-purpose keyring UDFs only), and Group Replication. The modification applies only where it make sense. For example, a UDF that returns encrypted data is intended to return a binary string, not a character string. Character-set capabilities for UDFs are implemented using the mysql_udf_metadata server component service. For information about this service, see the MySQL Server Doxygen documentation, available at (search for s_mysql_mysql_udf_metadata and udf_metadata_imp). Source code for the MySQL Keyring UDFs is available in Community source distributions and may be examined as examples for third-party UDF writers who wish to modify their own UDFs to be character set-aware. The INFORMATION_SCHEMA contains several new tables that expose role information: ADMINISTRABLE_ROLE_AUTHORIZATIONS: Roles the current user can grant; see The INFORMATION_SCHEMA ADMINISTRABLE_ROLE_AUTHORIZATIONS Table. APPLICABLE_ROLES: Roles applicable for the current user; see The INFORMATION_SCHEMA APPLICABLE_ROLES Table. ENABLED_ROLES: Roles enabled within the current session; see The INFORMATION_SCHEMA ENABLED_ROLES Table. ROLE_COLUMN_GRANTS: Column privileges for roles for the current user; see The INFORMATION_SCHEMA ROLE_COLUMN_GRANTS Table. ROLE_ROUTINE_GRANTS: Routine privileges for roles for the current user; see The INFORMATION_SCHEMA ROLE_ROUTINE_GRANTS Table. ROLE_TABLE_GRANTS: Table privileges for roles for the current user; see The INFORMATION_SCHEMA ROLE_TABLE_GRANTS Table. A new SECRET key type is available that is intended for general-purpose storage of sensitive data using the MySQL keyring. The keyring encrypts and decrypts SECRET data as a byte stream upon storage and retrieval. The SECRET key type is supported by all keyring plugins. See Supported Keyring Key Types and Lengths. The SIGUSR1 signal now causes the server to flush the error log, general query log, and slow query log. One use for SIGUSR1 is to implement log rotation without having to connect to the server (which to flush logs requires an account that has the RELOAD privilege).. See Unix Signal Handling in MySQL. The zstd library bundled with MySQL has been upgraded from version 1.3.3 to 1.4.3. MySQL uses the zstd library to support connection compression. (Bug #30236685) For package types for which OpenSSL shared libraries are included, they are now also included under lib/private if the package has private-to-MySQL libraries located there that need OpenSSL. (Bug #29966296) Important Change: MySQL now supports explicit table clauses and table value constructors according to the SQL standard. These have now been implemented, respectively, as the TABLE statement and the VALUES statement, each described in brief here: TABLE is equivalent to table_name SELECT * FROM , and can be used anywhere that the equivalent table_name SELECT statement would be accepted; this includes joins, unions, INSERT ... SELECT statements, REPLACE statements, CREATE TABLE ... SELECT statements, and subqueries. You can use ORDER BY with TABLE, which also supports LIMIT with optional OFFSET; these clauses function in the same way in a TABLE statement as they do with SELECT. The following two statements produce the same result: TABLE t ORDER BY c LIMIT 10 OFFSET 3; SELECT * FROM t ORDER BY c LIMIT 10 OFFSET 3; VALUES consists of the VALUES keyword followed by a series of row constructors ( ROW()), separated by commas. It can be used to supply row values in an SQL-compliant fashion to an INSERT statement or REPLACE statement. For example, the following two statements are equivalent: INSERT INTO t1 VALUES ROW(1,2,3), ROW(4,5,6), ROW(7,8,9); INSERT INTO t1 VALUES (1,2,3), (4,5,6), (7,8,9); You can also select from a VALUES table value constructor just as you would a table, bearing in mind that you must supply a table alias when doing so. Using column aliases, you can also select individual columns, like this: mysql> SELECT a,c FROM (VALUES ROW(1,2,3), ROW(4,5,6)) AS t(a,b,c);+---+---+ | a | c | +---+---+ | 1 | 3 | | 4 | 6 | +---+---+ You can employ such statements in joins, unions, subqueries, and other constructs in which you normally expect to be able to use such statements. For more information and examples, see TABLE Statement, and VALUES Statement, as well as INSERT ... SELECT Statement, CREATE TABLE ... SELECT Statement, JOIN Clause, UNION Clause, and Subqueries. (Bug #77639) Previously, it was not possible to use LIMIT in the recursive SELECT part of a recursive common table expression (CTE). LIMIT is now supported in such cases, along with an optional OFFSET clause. An example of such a recursive CTE is shown here: WITH RECURSIVE cte AS ( SELECT CAST("x" AS CHAR(100)) AS a FROM DUAL UNION ALL SELECT CONCAT("x",cte.a) FROM qn WHERE LENGTH(cte.a) < 10 LIMIT 3 OFFSET 2; ) SELECT * FROM cte; Specifying LIMIT in this fashion can make execution of the CTE more efficient than doing so in the outermost SELECT, since only the requested number of rows is generated. For more information, see Recursive Common Table Expressions. (Bug #92857, Bug #28816906) When CHECK constraints were implemented in MySQL 8.0.16, ALTER TABLE supported DROP CHECK and ALTER CHECK syntax as MySQL extensions to standard SQL for modifying check constraints, but did not support the more general (and SQL standard) DROP CONSTRAINT and ALTER CONSTRAINT syntax for modifying existing constraints of any type. That syntax is now supported; the constraint type is determined from the constraint name. MySQL now supports aliases in the VALUES and SET clauses of INSERT INTO ... ON DUPLICATE KEY UPDATE statement for the row to be inserted and its columns. Consider a statement such as this one: INSERT INTO t VALUES (9,5), (7,7), (11,-1) ON DUPLICATE KEY UPDATE a = a + VALUES(a) - VALUES(b); Using the alias new for the inserted row, you can now rewrite the statement, referring back to the row alias in the ON DUPLICATE KEY UPDATE clause, like this: INSERT INTO t VALUES (9,5), (7,7), (11,-1) AS new ON DUPLICATE KEY UPDATE a = a + new.a - new.b; Using the same row alias, and, additionally, the column aliases m and n for the columns of the inserted row, you can omit the row alias and use only the column aliases, as shown here: INSERT INTO t VALUES (9,5), (7,7), (11,-1) AS new(m,n) ON DUPLICATE KEY UPDATE a = a + m - n; The row alias must be distinct from the table name; column aliases must be distinct from one another. See INSERT ... ON DUPLICATE KEY UPDATE Statement, for more information and examples. sys schema objects have been reimplemented not to invoke the deprecated sys.format_bytes(), sys.format_time(), and sys.ps_thread_id() stored functions. Instead, they invoke the equivalent built-in SQL functions implemented in MySQL 8.0.16 that format or retrieve Performance Schema data (see Section 6, “Changes in MySQL 8.0.16 (2019-04-25, General Availability)”). sys.format_bytes(), sys.format_time(), and sys.ps_thread_id() will be removed in a future MySQL version, so applications that use them should be adjusted to use the built-in functions instead, keeping in mind some minor differences between the sys functions and the built-in functions. See Performance Schema Functions. By default, the thread pool plugin tries to ensure a maximum of one thread executing in each group at any time. The default algorithm takes stalled threads into account and may temporarily permit more active threads. The plugin now implements a new thread_pool_max_active_query_threads system variable for controlling number of active threads per group. If thread_pool_max_active_query_threads is 0, the default algorithm applies. If thread_pool_max_active_query_threads is greater than 0, it places a limit on the number of active threads per group. See Thread Pool Operation. X Plugin could not be compiled on Debian with GCC 9. The --no-as-needed linker option was added to provide a workaround for the issue. (Bug #30445201) Using X Protocol to query the Information Schema table TRIGGERS could result in errors being returned or some rows not being returned. (Bug #30318917) In MySQL 5.7.14, the mysqlx namespace parameter was introduced for X Protocol's StmtExecute request, replacing the xplugin parameter, which was therefore deprecated. X Plugin continued to support the deprecated xplugin namespace for backward compatibility. In MySQL 8.0.19, the xplugin namespace has now been removed. If the xplugin namespace is used from this release on, an error message is returned as for an unknown namespace. X Plugin's Mysqlx_stmt_execute_xplugin status variable, which counted the number of StmtExecute requests received for the xplugin namespace, is no longer used from MySQL 8.0.19. Microsoft Windows: Previously, the system ( \!) command for the mysql command-line client worked only for Unix systems. It now works on Windows as well. For example, system cls or \! cls may be used to clear the screen. (Bug #11765690, Bug #58680) JSON: When using JSON_SCHEMA_VALID() to specify a CHECK constraint on a table containing one or more JSON columns and experiencing a validation failure, MySQL now provides detailed information about the reasons for such failures. A new error ER_JSON_SCHEMA_VALIDATION_ERROR_WITH_DETAILED_REPORT is implemented containing this information, which can be viewed in the mysql client by issuing SHOW WARNINGS when an INSERT statement is rejected by the server. For more information and examples, see JSON_SCHEMA_VALID() and CHECK constraints. For more general information, see also CHECK Constraints. Display width specification for integer data types was deprecated in MySQL 8.0.17, and now statements that include data type definitions in their output no longer show the display width for integer types, with these exceptions: The type is TINYINT(1). MySQL Connectors make the assumption that TINYINT(1) columns originated as BOOLEAN columns; this exception enables them to continue to make that assumption. The type includes the ZEROFILL attribute.. (Bug #30556657, Bug #97680) Replication connections to a replication slave, and Group Replication connections for distributed recovery, now have full client side configuration options for the TLSv1.3 protocol. In MySQL releases where TLSv1.3 support was available but these configuration options were not available, if TLSv1.3 was used for these connection types, the client in the connection (the replication slave or the Group Replication joining member that initiated distributed recovery) could not be configured. This meant that the server in the connection (the replication master or the Group Replication existing member that was the donor for distributed recovery) had to permit the use of at least one TLSv1.3 ciphersuite that is enabled by default. From MySQL 8.0.19, you can use the configuration options to specify any selection of ciphersuites for these connections, including only non-default ciphersuites if you want. The new configuration options are as follows: Group Replication system variables group_replication_recovery_tls_version and group_replication_recovery_tls_ciphersuites. group_replication_recovery_tls_version specifies a list of permitted TLS protocols for connection encryption for the client instance (the joining member) in the distributed recovery connection. group_replication_recovery_tls_ciphersuites specifies a list of permitted ciphersuites when TLSv1.3 is used for that connection. A MASTER_TLS_CIPHERSUITES option on the CHANGE MASTER TO command, to specify a list of TLSv1.3 ciphersuites permitted by the replication slave for the connection to the replication master. (The CHANGE MASTER TO command already had a MASTER_TLS_VERSION option to specify the permitted TLS protocol versions for the connection.) (Bug #29960735) Debian packages now contain more general systemd support that better supports manual mysqld execution. (Bug #29702050, Bug #95163) The Group Replication plugin interacts with MySQL Server using internal sessions to perform SQL API operations. Previously, these sessions counted towards the client connections limit specified by the max_connections server system variable. If the server had reached this limit when Group Replication was started or attempted to perform an operation, the operation was unsuccessful and Group Replication or the server itself might stop. From MySQL 8.0.19, Group Replication's interactions with MySQL Server use a new component service that handles the internal sessions separately, which means that they do not count towards the max_connections limit and are not refused if the server has reached this limit. (Bug #29635001) Duplicate key error information was extended to include the table name of the key. Previously, duplicate key error information included only the key value and key name. Thanks to Facebook for the contribution. (Bug #28686224, Bug #925308) When the mysql client operates in interactive mode, the --binary-as-hex option now is enabled by default. In addition, output from the status (or \s) command includes this line when the option is enabled implicitly or explicitly: Binary data as: Hexadecimal To disable hexadecimal notation, use --skip-binary-as-hex (Bug #24432545) MySQL now supports datetime literals with time zone offsets, such as '2019-12-11 10:40:30-05:00', '2003-04-14 03:30:00+10:00', and '2020-01-01 15:35:45+05:30'; these offsets are respected but not stored when inserting such values into TIMESTAMP and DATETIME columns; that is, offsets are not displayed when retrieving the values. The supported range for a timezone offset is -14:00 to +14:00, inclusive. Time zone names such as 'CET' or 'America/Argentina/Buenos_Aires', including the special value 'SYSTEM', are not supported in datetime literals. In addition, in this context, a leading zero is required for an hour value less than 10, and MySQL rejects the offset '-00:00' as invalid. Datetime literals with time zone offsets can also be used as parameter values in prepared statements. As part of this work, the allowed range of numeric values for the time_zone system variable has been changed, so that it is now also -14:00 to +14:00, inclusive. For additional information and examples, see The DATE, DATETIME, and TIMESTAMP Types, and MySQL Server Time Zone Support. (Bug #83852, Bug #25108148) From MySQL 8.0.19, compression is supported for messages sent over X Protocol connections. Connections can be compressed if the server and the client agree on a compression algorithm to use. By default, X Protocol announces support for the deflate, lz4, and zstd compression algorithms. You can disallow any of these algorithms by setting the new mysqlx_compression_algorithms system variable to include only the ones you permit. X Protocol always allows uncompressed connections if the client does not request compression during capability negotiation. Note that X Protocol's list of permitted compression algorithms operates independently of the list of compression algorithms announced by MySQL Server, and X Protocol does not fall back to using MySQL Server's compression settings. You can monitor the effects of message compression for X Protocol using new X Plugin status variables. For multithreaded slaves (replication slaves on which slave_parallel_workers is set to a value greater than 0), setting slave_preserve_commit_order=1 ensures that transactions are executed and committed on the slave in the same order as they appear in the slave's relay log, preserving the same transaction history on the slave as on the master. Previously, this setting required binary logging and slave update logging to be enabled on the slave, with the associated execution costs and disk space requirements. Now, slave_preserve_commit_order=1 can be set on a slave with no binary log and no slave update logging. This enables you to preserve commit order on the slave, and avoid gaps in the sequence of transactions, without the overhead of binary logging. A limitation to preserving the commit order on the slave can occur if statement-based replication is in use, and both transactional and non-transactional storage engines participate in a non-XA transaction that is rolled back on the master. Normally, non-XA transactions that are rolled back on the master are not replicated to the slave, but in this particular situation, the transaction might be replicated to the slave. If this does happen, a multithreaded slave without binary logging does not handle the transaction rollback, so the commit order on the slave diverges from the relay log order of the transactions in that case. The MySQL 8.0.18 release introduced the ability to specify a PRIVILEGE_CHECKS_USER account for a replication channel (using a CHANGE MASTER TO statement), against which MySQL makes privilege checks when replicated transactions are applied. The use of a PRIVILEGE_CHECKS_USER account helps secure a replication channel against the unauthorized or accidental use of privileged or unwanted operations. The use of row-based binary logging is strongly recommended when replication channels are secured with privilege checks. In MySQL 8.0.19, a new setting REQUIRE_ROW_FORMAT is added for replication channels, which makes the channel accept only row-based replication events. You can specify REQUIRE_ROW_FORMAT using a CHANGE MASTER TO statement to enforce row-based binary logging for a replication channel that is secured with privilege checks, or to increase the security of a channel that is not secured in this way. By allowing only row-based replication events, REQUIRE_ROW_FORMAT prevents the replication applier from taking actions such as creating temporary tables and executing LOAD DATA INFILE requests, which protects the replication channel against some known attack vectors. Row-based binary logging ( binlog_format=ROW) must be used on the replication master when REQUIRE_ROW_FORMAT is set. Group Replication already requires row-based binary logging, so from MySQL 8.0.19, Group Replication's channels are automatically created with REQUIRE_ROW_FORMAT set, and you cannot change the option for those channels. The setting is also applied to all Group Replication channels on upgrade. mysqlbinlog has a new --require-row-format option, which enforces row-based replication events for mysqlbinlog's output. The stream of events produced with this option would be accepted by a replication channel that is secured using the REQUIRE_ROW_FORMAT option.. To avoid issues when migrating data directories between case-sensitive and case-insensitive file systems, delimiter strings are now lowercase on all_names setting to ensure case-insensitivity. For example, if a table partition is created with the name PART_1, the tablespace name and file name are generated in lowercase: schema_name. table_name#p# part_1 table_name#p# part_1.ibd During upgrade, MySQL now checks and modifies if necessary: Partition file names on disk and in the data dictionary to ensure lowercase delimiters and partition names. Partition metadata in the data dictionary for related issues introduced by previous bug fixes. InnoDB statistics data for related issues introduced by previous bug fixes. During tablespace import operations, partition tablespace file names on disk are checked and modified if necessary to ensure lowercase delimiters and partition names. References: See also: Bug #26925260, Bug #29823032, Bug #30012621, Bug #29426720, Bug #30024653. Support was added for efficient sampling of InnoDB data for the purpose of generating histogram statistics. The default sampling implementation used by MySQL when storage engines do not provide their own requires a full table scan, which is costly for large tables. The InnoDB sampling implementation improves sampling performance by avoiding full table scans. The sampled_pages_read and sampled_pages_skipped INNODB_METRICS counters can be used to monitor sampling of InnoDB data pages. See Histogram Statistics Analysis. Important Change: Character set resolution has been changed for the following string functions: REPLACE( str, from_str, to_str) SUBSTRING_INDEX( str, delim, count) TRIM([{BOTH | LEADING | TRAILING} [ remstr] FROM] str) Previously, character set information for all arguments to these functions was aggregated, which could lead to results that were not well formed. This also caused issues with LPAD(), which assumes that both input and output are well formed. Now each of the three listed functions always uses the character set employed by str, and converts all other arguments to this character set at execution time; if any such conversion fails, the function returns an error. (Bug #30114420) References: This issue is a regression of: Bug #28197977. Important Change: Subquery materialization no longer requires strict matching of inner and outer types. Different types can now be materialized when one of the following conditions is true: The inner type is numeric (since there is always a way to cast the outer type to a number) The inner type is temporal (since there is always a way to cast the outer type to a temporal) Both types are strings (Bug #13960580) NDB Cluster: Password masking was incomplete for some NDB logging options. (Bug #97335, Bug #30453137) InnoDB: Initialization of certain internal data structures at startup depend on internal variables derived from the max_connections setting. InnoDB failed to resize the internal data structures when the max_connections setting was modified after startup using SET PERSIST. (Bug #30628872) InnoDB: os_file_get_parent_dir warnings were encountered when compiling MySQL with GCC 9.2.0. (Bug #30499288, Bug #97466) InnoDB: An attempt to access a large object (LOB) value using a null reference raised an assertion failure. To prevent this issue form occurring, a check was added to determine if LOB references are null before they are accessed. (Bug #30499064) InnoDB: An assertion failure occurred after upgrading the data directory. Prepared XA transaction were still present, which prevented undo tablespaces from being upgraded. Undo tablespaces containing prepared transaction changes must remain active until all prepared XA transactions are committed or rolled back. Prepared XA transactions also prevented the completion of an explicit undo tablespace truncation operation after a restart. (Bug #30489497) InnoDB: Attempting to upgrade a MySQL 5.7 instance on Linux with uppercase table names (partitioned or otherwise) to MySQL 8.0 on macOS raised an assertion failure. Partition file format changes in MySQL 8.0 prevented migration of the data directory to a different platform, and the lower_case_table_names setting was changed at upgrade time, which can cause an upgrade failure. Instead of a failure occurring under these circumstances, an error is now reported. (Bug #30450968, Bug #30450979) InnoDB: On macOS, a failure occurred when attempting to upgrade a MySQL 5.7 instance with uppercase table names to MySQL 8.0. Uppercase table names were not normalized to lowercase. The following errors were reported: Table is not found in InnoDB dictionary and Error in fixing SE data errors. (Bug #30450944) InnoDB: On Windows, a failure occurred when attempting to upgrade a MySQL 5.7 instance with uppercase partitioned table names to MySQL 8.0. Opening the table returned a null pointer, which caused a segmentation fault when closing the table. (Bug #30450918) InnoDB: On Windows, a mysqld exception was raised when attempting to upgrade a MySQL 5.7 instance with uppercase partitioned table names to MySQL 8.0. (Bug #30447790) InnoDB: On Windows, a failure occurred when attempting to upgrade a MySQL 5.7 instance containing general tablespace defined with an uppercase name to MySQL 8.0. The following errors were reported: Error in fixing SE data and Failed to Populate DD. (Bug #30446798) InnoDB: Introduction of local minitransactions (mtrs) in LOB-related code resulted in an assertion failure during recovery. (Bug #30417719) InnoDB: A failure occurred when attempting to upgrade a MySQL 5.7 instance on Windows with uppercase partitioned table names to MySQL 8.0 on Linux. Partition file format changes in MySQL 8.0 prevented migration of the data directory to a different platform. Instead of a failure, an error is now reported. (Bug #30411118) InnoDB: Updating the same compressed LOB data repeatedly caused the tablespace file to increase in size. (Bug #30353812) InnoDB: When the temptable_max_ram limit was reached, the TempTable storage engine incorrectly reported an out-of-memory error instead of falling back to disk-based storage. (Bug #30314972, Bug #96893) InnoDB: After importing an encrypted table and restarting the server, the following error was returned when attempting to access the table: ERROR 3185 (HY000): Can't find master key from keyring, please check in the server log if a keyring plugin is loaded and initialized successfully. The tablespace key was not written to disk after it was encrypted with the destination master key. (Bug #30313734) InnoDB: The internal InnoDB dict_create_foreign_constraints() function that parsed SQL statements and performed foreign key related DDL checks was removed. The function became redundant with introduction of the data dictionary in MySQL 8.0 and the subsequent relocation of foreign key related DDL checks to the SQL layer. Removal of the dict_create_foreign_constraints() function also addressed the following foreign key issues: Spaces around dots (“.”) in a fully qualified referenced table name were not permitted by the InnoDB parser. Adding a foreign key and removing partitioning in the same ALTER TABLE statement was not permitted. The InnoDB parser did not detect that the new table version was no longer partitioned. A foreign key constraint could not reference a table inside a schema named “AUX”. The function that parsed referenced table names did not recognize that special names such as AUX are encoded. Conditional comments in foreign key definitions were ignored. Additionally, a check was added to the SQL layer to detect attempts to create multiple foreign keys of the same name on a table at an early stage in the execution of an ALTER TABLE statement. (Bug #30287895, Bug #22364336, Bug #28486106, Bug #28703793, Bug #16904122, Bug #92567, Bug #11754659, Bug #46293) InnoDB: A comparison function found two records to be equal when attempting to merge non-leaf pages of a spatial index. The function was unable to handle this unexpected condition, which resulted in a long semaphore wait and an eventual assertion failure. (Bug #30287668) InnoDB: A locally acquired latch required for freeing a large object (LOB) page could have caused a deadlock if a subsequent caller attempted to acquire a latch for the same page before the page was freed. Similarly, a latch taken on a compressed or uncompressed LOB during a rollback related operation could have caused a deadlock due to a latching order issue. (Bug #30258536) References: This issue is a regression of: Bug #29846292. InnoDB: A race condition between a purge thread that was purging a compressed LOB page and an update thread that is using a delete-marked record caused an assertion failure. (Bug #30197056). (Bug #30190199, Bug #30190227, Bug #20644698, Bug #76142) InnoDB: A purge operation failed when attempting to purge a LOB value larger than the buffer pool. (Bug #30183982) InnoDB: Update operations that moved externally stored LOB data to inline storage failed to mark the old LOB data as purgeable. (Bug #30178056, Bug #96466) InnoDB: Index key part sort order information was not stored to the .cfg metadata file used by ALTER TABLE ... IMPORT TABLESPACE operations. The index key part sort order was therefore assumed to be ascending, which is the default. As a result, records could be sorted in an unintended order if one table involved in the import operation is defined with a DESC index key part sort order and the other table is not. To address this issue, the .cfg file format was updated to include index key part sort order information. (Bug #30128418) InnoDB: Criteria used by the btr_cur_will_modify_tree() function, which detects whether a modifying record needs a modifying tree structure, was insufficient. (Bug #30113362) InnoDB: Startup was slow on instances with a large number of tables due the tablespace file scan that occurs at startup to retrieve space IDs. A multithreaded scan was only initiated if the number of tablespace files exceed 50,000, and three tablespace pages were read to retrieve a space ID. To improve startup times, additional threads are now allocated for the tablespace file scan, and only the first tablespace page is read to retrieve a space ID. If a space ID is not found on the first page of the tablespace, three pages are read to determine the space ID, as before. (Bug #30108154, Bug #96340) InnoDB: Startup failed on a case insensitive file system with an error indicating that multiple files were found for the same tablespace ID. A file path comparison did not recognize that innodb_data_home_dir and datadir paths were the same due to the paths having different lettercases. (Bug #30040815) InnoDB: A storage engine error occurred when accessing the mysql.innodb_index_stats and mysql.innodb_table_stats persistent optimizer statistics tables after upgrading a MySQL 8.0.13 instance on Linux with partitioned tables and a lower_case_table_names=1 setting to MySQL 8.0.14 or MySQL 8.0.15. The persistent optimizer statistics tables contained duplicate entries. (Bug #30012621) References: This issue is a regression of: Bug #26925260. InnoDB: CREATE TABLESPACE failed with an error indicating that the tablespace already exists. The error was due to the failure of a preceding CREATE TABLESPACE operation where the DDL failed but related changes were not rolled back due to rollback being disabled prior to transaction commit. Rollback is now disabled after the transaction commits successfully. (Bug #29959193, Bug #95994) InnoDB: Changed pages belonging to imported tablespaces were not being tracked. (Bug #29917343) InnoDB: Renaming of full-text search auxiliary tables during upgrade failed due to a tablespace name conflict when upgrading from MySQL 5.7 to MySQL 8.0 on a case-insensitive file system. (Bug #29906115) InnoDB: Rollback of an INSERT operation that inserted a LOB value larger than a buffer pool caused a deadlock. (Bug #29846292) InnoDB: A code regression was addressed by prohibiting unnecessary implicit to explicit secondary index lock conversions for session temporary tables. (Bug #29718243) InnoDB: A tablespace import operation raised an assertion when the cursor was positioned on a corrupted page while purging delete-marked records. Instead of asserting when encountering a corrupted page, the import operation is now terminated and an error is reported. (Bug #29454828, Bug #94541) InnoDB: Delete marked rows were able to acquire an external read lock before a partial rollback was completed. The external read lock prevented conversion of an implicit lock to an explicit lock during the partial rollback, causing an assertion failure. (Bug #29195848) InnoDB: Throughput stalled under a heavy workload with a small innodb_io_capacity_max setting, a single page cleaner thread, and multiple buffer pool instances. (Bug #29029294) InnoDB: After a server exit that occurred while an undo tablespace truncation operation was in progress, warning messages were printed at startup stating that doublewrite pages could not be restored for undo tablespace pages. The warning messages are no longer printed for undo tablespaces that are being truncated. (Bug #28590016) InnoDB: In read-only mode ( innodb_read_only=ON), SHOW CREATE TABLE output did not include information about foreign key constraints. (Bug #21966795, Bug #78754) Partitioning: When upgrading a database with a subpartitioned table from MySQL 8.0.16 or lower and then executing ALTER TABLE ADD COLUMN, an assertion or error would occur. (Bug #30360695, Bug #97054) Partitioning: During upgrade of partitioned tables from MySQL 5.7 to 8.0, when a prefix key was used by the partitioning function, the prefix length was ignored, and the full column length was considered instead. Consequently, the table might incorrectly be rejected from being upgraded because its partition field length was found to be too large. (Bug #29941988, Bug #95921) Partitioning: ALTER TABLE ... EXCHANGE PARTITION could cause indexes to become corrupted. This was due to the fact that the server assumed that the order in which an index is created in a partitioned table is the same as that of the table which is not partitioned. This led to the wrong index data being exchanged. (Bug #29706669) Replication: When a member is joining or rejoining a replication group, if Group Replication detects an error in the distributed recovery process (during which the joining member receives state transfer from an existing online member), it automatically switches over to a new donor, and retries the state transfer. The number of times the joining member retries before giving up is set by the group_replication_recovery_retry_count system variable. The Performance Schema table replication_applier_status_by_worker displays the error that caused the last retry. Previously, this error was only shown if the group member was configured with parallel replication applier threads (as set by the slave_parallel_workers system variable). If the group member was configured with a single applier thread, the error was cleared after each retry by an internal RESET SLAVE operation, so it could not be viewed. This was also the case for the output of the SHOW SLAVE STATUS command whether there were single or multiple applier threads. The RESET SLAVE operation is now no longer carried out after retrying distributed recovery, so the error that caused the last retry can always be viewed. (Bug #30517160, Bug #30517172) Replication: An assertion was raised when privilege checks were carried out for a replication channel if the slave had more columns in the relevant table than the master. The check now references the number of columns in the event, rather than in the table definition. (Bug #30343310) Replication: When a replication group member leaves a group, either because STOP GROUP_REPLICATION was issued or due to an error, Group Replication now stops the binary log dump thread so that the former group member cannot send unwanted binary log data to the members that have remained in the group. (Bug #30315614) Replication: Replication connection parameters that are held in the mysql.slave_relay_log_info table are now preserved in the event of a server crash or deliberate restart after issuing RESET SLAVE but before issuing START SLAVE. This action applies to the PRIVILEGE_CHECKS_USER account setting for replication privilege checks (introduced in MySQL 8.0.18) and the REQUIRE_ROW_FORMAT setting (introduced in MySQL 8.0.19). Note that if relay_log_info_repository=FILE is set on the server (which is not the default and is deprecated), replication connection parameters are not preserved in this situation. (Bug #30311908) Replication: When a replication channel is secured by specifying a PRIVILEGE_CHECKS_USER account, which should not have ACL privileges, a GRANT statement that is replicated to the channel causes the replication applier to stop. In this situation, the behavior was correct but an assertion was being raised. The assertion has now been removed. (Bug #30273684) Replication: When Group Replication was started following either provisioning with a cloning operation, execution of RESET MASTER, or removal of a partial transaction from the relay log, RESET SLAVE ALL was used internally to clear any unwanted state on the server. However, in MySQL 8.0.18, this caused any PRIVILEGE_CHECKS_USER account that was specified for a Group Replication channel to be removed. RESET SLAVE is now used instead, which does not remove the account. (Bug #30262225) Replication: For multithreaded replication slaves, setting slave_preserve_commit_order=1 now preserves the order of statements with an IF EXISTS clause when the object concerned does not exist. Previously, these updates might have committed before transactions that preceded them in the relay log, which might have resulted in gaps in the sequence of transactions that have been executed from the slave's relay log. (Bug #30262096) Replication: When privilege checks were carried out for a replication channel, the permissions required for setting the session value of the sql_require_primary_key system variable were not being checked. The check is now carried out. (Bug #30254917) Replication: A memory leak could occur when a failed replication group member tried to rejoin a minority group and was disallowed from doing so. (Bug #30162547, Bug #96471) Replication: When a group member rejoins a replication group, it begins the distributed recovery process by checking the relay log for its group_replication_applier channel for any transactions that it already received from the group, and applying these. The joining member then initiates state transfer from an existing online member, which might begin with a remote cloning operation. Previously, the group_replication_applier channel was not explicitly stopped when a remote cloning operation was started, so it was possible that the applier might still be applying existing transactions at that time, which might lead to errors. The group_replication_applier channel is now stopped before a remote cloning operation is requested, and restarted when the distributed recovery process moves on to state transfer from a donor's binary log. (Bug #30152028) Replication: If STOP GROUP_REPLICATION was issued while the member's XCom port was blocked, the XCom thread hung and the shutdown did not complete. XCom is now terminated in this situation. (Bug #30139794) Replication: When Group Replication is running in single-primary mode, and a new primary server is elected, the messages logged at this time now provide the newly elected primary server's gtid_executed set, and the set of GTIDs retrieved by the replication applier. (Bug #30049310) Replication: The slave status logs mysql.slave_relay_log_info (the relay log info log) and mysql.slave_worker_info (the slave worker log) are now copied from the donor to the recipient during a local or remote cloning operation. The slave status logs hold information that can be used to resume replication correctly after the cloning operation, including the relay log position from which to restart replication, the PRIVILEGE_CHECKS_USER account setting, and the new REQUIRE_ROW_FORMAT setting. Note that the relay logs themselves are not copied from the donor to the recipient, only the information about them that is held in these tables. Also note that if relay_log_info_repository=FILE is set on the server (which is not the default and is deprecated), the slave status logs are not cloned; they are only cloned if TABLE is set. Before this patch, the following replication-related behaviors occurred on a replication slave that had been provisioned by a cloning operation: The default replication channel would fail to start if it was the only channel on the slave, because it was considered to be not initialized due to the missing relay log information. Any PRIVILEGE_CHECKS_USER account setting that had been applied to replication channels on the donor was absent and had to be respecified. Replication channels that used GTID auto-positioning (as specified by the MASTER_AUTO_POSITION option on the CHANGE MASTER TO statement) were able to resume replication automatically. Replication channels that used binary log file position based replication (as specified by the MASTER_LOG_FILE and MASTER_LOG_POS options on the CHANGE MASTER TO statement) had to have the MASTER_LOG_FILE and MASTER_LOG_POS options reapplied manually before restarting replication in order to resume correctly. If the channels were configured to start replication automatically at server startup, without the options reapplied they would attempt to start replication from the beginning. They were therefore likely to attempt to replicate data that had already been copied to the slave by the cloning operation, causing replication to stop and possibly corrupting the data on the slave. With this patch, the following replication-related behaviors now occur on a replication slave that has been provisioned by a cloning operation: The default replication channel can now always start after the cloning operation if it is configured to do so. All channels now have the donor's PRIVILEGE_CHECKS_USER account setting and REQUIRE_ROW_FORMAT setting. Replication channels that use GTID auto-positioning (as specified by the MASTER_AUTO_POSITION option on the CHANGE MASTER TO statement) are still able to resume replication automatically. For Group Replication channels, which use GTID auto-positioning, an internal equivalent of the RESET MASTER statement is now used to ensure that replication resumes optimally. Replication channels that use binary log file position based replication now have the correct MASTER_LOG_FILE and MASTER_LOG_POS options in place after cloning. Because the relay logs themselves are not cloned, these channels now attempt to carry out the relay log recovery process, using the cloned relay log information, before restarting replication. For a single-threaded slave ( slave_parallel_workers is set to 0), relay log recovery should succeed in the absence of any other issues, enabling the channel to resume replication correctly. For a multithreaded slave ( slave_parallel_workers is greater than 0), relay log recovery is likely to fail because it cannot usually be completed automatically, but an informative error message is issued, and the data will not be corrupted. (Bug #29995256, Bug #30510766) Replication: An internal deadlock could occur on a multi-threaded replication slave when the relay_log_space_limit system variable was set to limit the size of relay logs on the slave, and the coordinator thread acquired locks related to this limit and to the end position of the log. (Bug #29842426) Replication: If a replication group member stops unexpectedly and is immediately restarted (for example, because it was started with mysqld_safe), it automatically attempts to rejoin the group if group_replication_start_on_boot=on is set. Previously, if the restart and rejoin attempt took place before the member's previous incarnation had been expelled from the group, the member could not rejoin. Now in this scenario, Group Replication automatically uses a Group Communication System (GCS) feature to retry the rejoin attempt for the member 10 times, with a 5-second interval between each retry. This should cover most cases and allow enough time for the previous incarnation to be expelled from the group, letting the member rejoin. Note that if the group_replication_member_expel_timeout system variable is set to specify a longer waiting period before the member is expelled, the automatic rejoin attempts might still not succeed. (Bug #29801773) Replication: If a replication slave was set up using a CHANGE MASTER TO statement that did not specify the master log file name and master log position, then shut down before START SLAVE was issued, then restarted with the option --relay-log-recovery set, replication did not start. This happened because the receiver thread had not been started before relay log recovery was attempted, so no log rotation event was available in the relay log to provide the master log file name and master log position. In this situation, the slave now skips relay log recovery and logs a warning, then proceeds to start replication. (Bug #28996606, Bug #93397) macOS: On macOS, configuring MySQL with -DWITH_SSL=system caused mysql_config output to incorrectly include internal CMake names for the static SSL libraries. (Bug #30541879, Bug #97632) macOS: Builds on macOS with Ninja could fail with an error trying to create a symbolic link multiple times. (Bug #30368985) Microsoft Windows; JSON: On Windows platforms, memory used for a multi-valued index was not released after the table containing it was dropped. (Bug #30227756) Microsoft Windows: On Windows, -DWITH_SSL=system failed to find the installed OpenSSL headers if Strawberry Perl was installed. (Bug #30359287) Microsoft Windows: On Windows, the -DWITH_SSL=system option did not work if the path name leading to the system OpenSSL libraries contained a space. This is now handled. Also, -DWITH_SSL=yes is treated like -DWITH_SSL=system, as on other platforms. (Bug #30261942, Bug #96739) Microsoft Windows: MSVC 2019 produced garbled source file names for compilation errors. A workaround in the CMake configuration was implemented to correct for this. (Bug #30255096, Bug #96720) JSON: Updating a value in a JSON column by replacing a character string element with a binary string containing the same byte sequence as the utf8mb4 representation of the character string had no effect. The root cause of this issue was a change in the behavior of comparisons between JSON strings and JSON opaque values introduced by the implementation of multi-valued indexes in MySQL 8.0.17, previous to which, JSON strings and JSON opaque values were never considered equal. After the change, they were considered equal if their binary data matched. An analysis of this change showed that it was not needed; in addition, the new behavior conflicted with the existing documentation for comparisons of JSON values. This issue is fixed by restoring the original behavior. (Bug #30348554) JSON: A view that used JSON_TABLE() did not preserve the character set in which JSON path arguments were encoded. This meant that, if the view was evaluated with a different character set in effect from the one in which it was defined, it could produce wrong results. This is fixed by ensuring that JSON_TABLE() preserves the original character set in such cases. (Bug #30310265) JSON: Adding a functional index on a JSON column changed the collation used for comparing strings, causing the result returned by the same query selecting the column to differ from that obtained without the index. (Bug #29723353) JSON: If the first argument to JSON_TABLE() was const during the execution of a stored procedure, but not during preparation, it was not re-evaluated when a statement was subsequently executed again, causing an empty result to be returned each time following the first execution of the procedure. (Bug #97097, Bug #30382156) JSON: In some cases, such as when a query uses FORCE INDEX, the cost of reading the table is DBL_MAX; this was rounded up to 2e308 when printed, which is too large for the JSON parser, so that it was not possible to extract parts of the optimizer trace using a query such as JSON_EXTRACT(trace, '$**.table_scan') FROM INFORMATION_SCHEMA.OPTIMIZER_TRACE. Now in such cases, values greater than 1.5e308 are rounded down and printed as 1e308 instead. (Bug #96751, Bug #30226767) After upgrading from MySQL 5.7 to MySQL 8.0, a CLONE INSTANCE operation failed with the following error: ERROR 3862 (HY000): Clone Donor Error: 1016 : Can't open file: './undo001'. The upgrade process left behind orphaned in-memory undo tablespaces. Thanks to Satya Bodapati for the contribution. (Bug #30602218, Bug #97784, Bug #30239255, Bug #96637) The thread_pool plugin used display widths in definitions for integer columns of Performance Schema tables. This resulted in warnings written to the error log because integer column display widths are now deprecated. (Bug #30597673) The MySQL optimizer's hash join algorithm uses the join buffer to store intermediate results. If this buffer overflows, the server uses a spill-to-disk algorithm, which writes one of the hash join operands to a temporary file, to handle this gracefully. If one of the operands was a table that was a member of a pushed join operation, this strategy conflicted with the pushed join requirement for all child result rows to use nested-loop reads whenever one of their pushed join ancestors was the current row in the join evaluation, which could in some cases result in incorrect query results being returned. (Bug #30573733) Access to the INFORMATION_SCHEMA.VIEWS table was not properly restricted to the correct user. (Bug #30542333) When creating hash values used for lookups during a hash join, the server did not respect the PAD SPACE attribute, meaning that 'foo' and 'foo ' did not match when using a PAD SPACE collation. This is fixed by padding all strings up to the same length as the longest possible string, where the longest possible string is deduced from the data type length specifier N in CHAR( or N) VARCHAR(. (Bug #30535541) N) When retrieving large result sets containing DECIMAL columns from a secondary engine, conversion of the column values to strings for transport over the text protocol acted as a bottleneck. The performance of the functions responsible for such conversions has been improved in some cases by as much as 50%, as reflected in internal testing. (Bug #30528427) When the FORMAT_PICO_TIME() function was invoked to process several rows, once a NULL argument was found in a row, every result after that was set to NULL. (Bug #30525561) When a Performance Schema event was timed, the event duration reported in events_ tables could be xxx NULL instead of 0 for events where the timer start and end values are equal. (Bug #30525560) Adding a LIMIT clause to a parenthesized query suppressed locking clauses within the parentheses. For example, this query would not lock the table: (SELECT ... FOR UPDATE) LIMIT ...; Adding a LIMIT clause outside of a parenthesized query is intended to override a LIMIT clause within the parentheses. However, the outer LIMIT suppressed ORDER BY within the parentheses as well. For example, for this query, the ORDER BY was suppressed: (SELECT ... ORDER BY ... LIMIT a) LIMIT b; Now inner locking and ORDER BY clauses are not suppressed by an outer LIMIT clause. (Bug #30521098, Bug #30521803) When optimizer extracts conditions on constant tables for early evaluation, it does not include WHERE conditions that are expensive to evaluate, including conditions involving stored functions. When the extracted condition evaluated to true because it involved only const tables, the entire WHERE condition was incorrectly removed. Now in such cases, a check for expensive conditions is performed prior to any removal of the WHERE condition. (Bug #30520714) When a lateral materialized derived table used DISTINCT, the derived table was not rematerialized for each outer row as expected. (Bug #30515233) EXPLAIN ANALYZE did not work correctly with a common table expression using WITH RECURSIVE. (Bug #30509580) The GNU gold loader could cause memory exhaustion on some platforms. Now it is used by default only on Intel 64-bit platforms. (Bug #30504760, Bug #96698) Some Linux platforms experienced high overhead with EXPLAIN ANALYZE due to use of a system call by libstdc++ instead of clock_gettime(). (Bug #30483025) On Solaris 11.4, the LDAP authentication plugins could not be built. (Bug #30482553) Queries that used the MEMBER OF() operator were not always handled correctly. (Bug #30477993) Boost compilation failed under Visual Studio due to a Boost workaround for a VC++ 2013 bug that has since been fixed. The workaround is now patched for Boost compilation with MySQL. (Bug #30474056, Bug #97391) When retrieving large result sets containing many integers from a secondary engine, conversion of the integers to strings for sending over the text protocol could act as a bottleneck. To avoid this problem, the performance of internal functions performing such conversions has been improved. (Bug #30472888) Docker packages were missing the LDAP authentication plugins. (Bug #30465247) Corrected a typo in a mysys/my_handler_errors.h error message. Thanks to Nikolai Kostrigin for the contribution. (Bug #30462329, Bug #97361) A GTID table update while innodb_force_recovery was enabled caused a debug assertion failure. (Bug #30449531, Bug #97312) MySQL failed to compile against Protobuf 3.10. (Bug #30428543, Bug #97246) Buffered log lines during system startup could be lost. (Bug #30422941, Bug #97225) If the mysql.user system table was renamed, the server could exit. (Bug #30418070) Revoking a role specified with no host name could cause a server exit. (Bug #30416389) When determining whether to pull out a semijoin table when other tables inside the semijoin depended on this table, only those semijoin tables which were base tables were considered; those in nested joins were ignored. (Bug #30406241) References: See also: Bug #12714094, Bug #11752543, Bug #43768. The AppArmor profile on Ubuntu platforms was not able to read the OpenSSL configuration. (Bug #30375723) Some Fedora 30 packages had missing obsoletes information that could cause problems upgrading an existing MySQL installation. (Bug #30348549, Bug #96969) Altering only the default encryption in an ALTER SCHEMA statement caused the schema default character set and collation to be reset to the system defaults. (Bug #30344462, Bug #96994) Columns declared with both AUTO_INCREMENT and DEFAULT value expressions (a nonpermitted combination) could raise an assertion or cause a server exit. (Bug #30331053) SHOW GRANTS for an anonymous user could result in a server exit under some conditions. (Bug #30329114) GREATEST() and LEAST() did not always handle time values correctly. (Bug #30326848) References: This issue is a regression of: Bug #25123839. The list of subpartitions in partition objects was not serialized and therefore not included in serialized dictionary information (SDI). To address this issue, support was added for serialization and deserialization of subpartition dictionary information. The patch for this bug also includes minor SDI code refactoring and format changes. Due to the format changes, the SDI version number was incremented. (Bug #30326020, Bug #96943) Following execution of ANALYZE TABLE, the optimizer trace for a given query differed when another query was executed previously to it, but also after the ANALYZE TABLE. (Bug #30321546) innodb_buffer_pool_instances was not initialized correctly at server startup if it had been set using SET PERSIST or PERSIST_ONLY. (Bug #30318828) A low max_allowed_packet value caused the following error: ERROR 1153 (08S01) at line 1: Got a packet bigger than 'max_allowed_packet' bytes. The error message was revised to indicate the minimum required max_allowed_packet value for cloning operations. (Bug #30315486, Bug #96891) An assertion could be raised when server code tried to send to clients an error code intended to be written to the error log. These instances are fixed by sending a code intended to be sent to clients. (Bug #30312874) CREATE VIEW did not always succeed when the body of the view definition contained a join and multiple subselects. (Bug #30309982) References: This issue is a regression of: Bug #25466100. Dependency information for SLES 12 RPM packages was incorrect, causing MySQL installation failure. (Bug #30308305) When restoring GEOMETRY data from hash join chunk files to a GEOMETRY column, the server did not copy the data to the column, but instead stored a pointer to the data, which resided in a temporary buffer, meaning that the GEOMETRY column pointed to random data as soon as this buffer was reused. Now, the server always copies the data from this buffer into the GEOMETRY column when executing a hash join. (Bug #30306279) Some ALTER TABLE operations using the COPY algorithm did not handle columns with expression default values properly. (Bug #30302907, Bug #96864) The CONV() function did not always handle returning the proper number of characters correctly. (Bug #30301543) Parser recursion checks were insufficient to prevent stack overflow. (Bug #30299881) The removal of a subquery because the condition in which it occurred was always false was expected to be performed during resolution, but when the subquery did not involve any tables, the server executed it while resolving it. This resulted in the failure of a subsequent check to confirm that the subquery was only being resolved and not yet optimized. Now in such cases, the server also checks to see whether the subquery was already executed. (Bug #30273827) For debug builds, attempts to add to an empty temporary table a column with an expression default that was not valid raised an assertion. (Bug #30271792) Construction of the iterator tree may yield a non-hierarchical structure; this can happen when, for example, b and c from a LEFT JOIN b LEFT JOIN c also make up the right side of a semijoin. The iterator executor solves this by adding a weedout on top of the entire query, which means that is is also necessary to iterators interacting with row IDs that they need to store and restore them. This was not done in all such cases, causing wrong results. Now the addition of a top-level weedout is always communicated to the iterators as soon as it is known that this is being done, before any affected iterators are constructed. (Bug #30267889) Foreign key-handling code duplication between the SQL layer and the data dictionary was eliminated. A side effect is that some error messages now are more informative and clear. (Bug #30267236, Bug #96765) During startup, the server could handle incorrect option values for persisted variables improperly, resulting in a server exit. (Bug #30263773) In some queries involving materialized semijoins, when using the iterator executor, conditions were evaluated outside the materialization, causing inefficient query plans to be used and sometimes also producing wrong results. (Bug #30250091) ALTER TABLE statements that renamed a column used in CHECK constraints could result in an incorrect error message. (Bug #30239721) For SELECT statements, an INTO clause prior to a locking clause is legal but the parser rejected it. (Bug #30237291, Bug #96677) var_name FLUSH TABLES WITH READ LOCK caused a deadlock when a LOCK INSTANCE FOR BACKUP statement was previously executed within the same session and there was a concurrent ALTER DATABASE statement running in another session against the same database specified (implicitly or explicitly) for the FLUSH TABLES WITH READ LOCK statement. (Bug #30226264) Slow query logging could result in a server exit for connections that did not use the classic client/server protocol. (Bug #30221187) A statement that added a foreign key without an explicit name failed when re-executed as a prepared statement or in a stored program with an unwarranted duplicate foreign key name error. (Bug #30214965, Bug #96611) References: This issue is a regression of: Bug #30171959. With multiple sessions executing concurrent INSERT ... ON DUPLICATE KEY UPDATE statements into a table with an AUTO_INCREMENT column but not specifying the AUTO_INCREMENT value, inserts could fail with a unique index violation. (Bug #30194841, Bug #96578) Client programs could load authentication plugins from outside the plugin library. (Bug #30191834, Bug #30644258) When switching between table scans and index lookups, AlternativeIterator did not reset the handler, which could lead to assertion failures. (Bug #30191394) Setting open_files_limit to a large value, or setting it when the operating system rlimit had a value that was large but not equal to RLIM_INF could cause the server to run out of memory. As part of this fix, the server now caps the effective open_files_limit value to the the maximum unsigned integer value. (Bug #30183865, Bug #96525) References to fully qualified INFORMATION_SCHEMA tables could fail depending on the lettercase in which INFORMATION_SCHEMA was specified. (Bug #30158484) Slow queries with an execution time greater than 35 days could cause corruption of the mysql.slow_log system table requiring a REPAIR TABLE operation. (Bug #30113119, Bug #96373) MySQL did not support sending systemd notification messages to a socket specified using the NOTIFY_SOCKET environment variable, if the variable named an abstract namespace socket. (Bug #30102279) Using SET PERSIST_ONLY to set a boolean system variable to a numeric value resulted in the server being unable to restart. (Bug #30094645, Bug #30298191, Bug #96848) A fix for a previous issue combined two TABLE_LIST constructors in an unfortunate way. One of these created a TABLE_LIST object from a TABLE object representing a temporary table. Previously, the table name was made the same as the alias; this was changed to copying the name from the TABLE object. Due to the fact that, for a temporary table, the table name is a file path, it was possible to exceed the limit for MDL_key names, leading to a failed assertion. Fixed by reintroducing dedicated constructors which behave in the manner that they did prior to the fix. (Bug #30083125) References: This issue is a regression of: Bug #27482976. For UNIX_TIMESTAMP() errors occurring within stored functions, the number of fractional seconds for subsequent function invocations could be incorrect. (Bug #30034972, Bug #96166) When a common table expression contained a nondeterministic expression (such one that used RAND()) and the common table expression was referenced more than once in the outer query, it was merged in some cases. This caused the common table expression to return a different result for each reference. Now in such cases, the common table expression is not merged, but rather is materialized instead. (Bug #30026353) In debug build of MySQL started on Linux with a lower_case_table_names=1 setting, discarding a tablespace for a partitioned table after an in-place upgrade from MySQL 8.0.16 caused a serious error. The partition tablespace name stored in the data dictionary was invalid, and the metadata lock key prepared for the partition tablespace in MySQL 8.0.17 did not match the key stored in the mysql.tablespaces table. (Bug #30024653) KILL QUERY could kill the statement subsequent to the one intended. (Bug #29969769) With lower_case_table_names=2, SHOW TABLES could fail to display tables with uppercase names. (Bug #29957361) The error message reported when attempting to upgrade tables with invalid expressions for generated columns did not provided sufficient information. The error message now includes the generated column name and the expression used to create the generated column. (Bug #29941887, Bug #95918) Attempting to display an unresolvable view could result in a server exit rather than an error. (Bug #29939279) Incorrect checking of temporal literals for CREATE TABLE statements could lead to a server exit. (Bug #29906966, Bug #95794) Writing unexpected values to the mysql.global_grants system table could cause a server exit. (Bug #29873343) The LAST_EXECUTED value in the INFORMATION_SCHEMA.EVENTS table was incorrectly reported in UTC, not in the event time zone. (Bug #29871530, Bug #95649) With keyring_encrypted_file_password set on the command line at server startup, the password value could be visible to system utilities. (Bug #29848634) Changing the lower_case_table_names setting when upgrading from MySQL 5.7 to MySQL 8.0 could cause a failure due to a schema or table name lettercase mismatch. If lower_case_table_names=1, table and schema names are now checked by the upgrade process to ensure that all characters are lowercase. If table or schema names are found to contain uppercase characters, the upgrade process fails with an error. For related information, see Preparing Your Installation for Upgrade. (Bug #29842749, Bug #95559) Attempting to spawn a thread for a parallel read operation while system resources were temporary unavailable raised system error. (Bug #29842749, Bug #95559) With a LOCK TABLES statement in effect, a metadata change for the locked table could cause Performance Schema or SHOW queries for session variables to hang in the opening_tables state. (Bug #29836204, Bug #92387) A SELECT using a WHERE condition of the form resulting in an impossible range led to an unplanned exit of the server. (Bug #29770705) A AND ( B OR C [OR ...]) For JSON-format audit logging, the id field now may contain values larger than 65535. Previously, with heaving logging activity, more than 65536 queries per second could be executed, exceeding the 16 bits permitted for id values. (Bug #29661920) An incomplete connection packet could cause clients not to properly initialize the authentication plugin name. (Bug #29630767) Out-of-memory errors from the parser could be ignored, resulting in a server exit. (Bug #29614521) On Linux, an assertion could be raised when the Performance Schema file instrumentation was disabled and re-enabled. (Bug #29607570) For a column defined as a PRIMARY KEY in a CREATE TABLE statement, a default value given as an expression was ignored. (Bug #29596969, Bug #94668) The TABLE_ENCRYPTION_ADMIN privilege, added in MySQL 8.0.16, was incorrectly granted to the system-defined mysql.session user during upgrade. (Bug #29596053, Bug #94888) For connections encrypted with OpenSSL, network I/O at the socket level was not reported by the Performance Schema. Also, network I/O performed while the server was in an IDLE state was not reported by the Performance Schema. (Bug #29205129, Bug #30535558, Bug #97600) When a query used a subquery that was merged into the outer query block (due to a semijoin transformation or merge of a derived table), and the subquery itself contained a subquery with an aggregate function with an aggregation query block that differed from its base query block, the query could sometimes fail to return any rows unless executed a second time or preceded with FLUSH TABLES. This was because, when merging, the information regarded tables used and the aggregation information for the aggregate function was not updated properly. In the case which raised this bug report, this meant that the comparison operation containing a scalar subquery was regarded as const-for-execution and therefore the range optimizer attempted to evaluate it, and the scalar subquery contained a MIN() function referring to an outer reference which had not yet been read. Thus, when the aggregator object was populated, it was based on uninitialized data, leading to unpredictable results. (Bug #28941154) Changing the mandatory_roles system variable could cause SHOW GRANTS in concurrent sessions to produce incorrect results. (Bug #28699403) Failure of keyring_aws initialization caused failure of SSL socket initialization. (Bug #28591098) Under certain conditions, enabling the read_only or super_read_only system variable did not block concurrent DDL statements executed by users without the SUPER privilege. (Bug #28438114, Bug #91852) For slow query logging, the Slow_queries was not implemented unless the slow query log was enabled, contrary to the documentation. (Bug #28268680, Bug #91496) The current GROUP BY plan is improved so that every gap attribute is allowed to have a disjunction of equality predicates. Predicates from different attributes must still be conjunctive to each other in order to take advantage of this enhancement. Our thanks to Facebook for this contribution. (Bug #28056998, Bug #15947433) In some cases, BIGINT arguments to the FLOOR() and CEILING() functions were resolved as the wrong type. (Bug #27125612) mysqlpump exits rather than dumping databases that contain an invalid view, by design, but it also failed if an invalid view existed but was not in any of the databases to be dumped. (Bug #27096081) Foreign key information is now retrieved from the data dictionary, not from InnoDB. (Bug #25583288) Foreign key definitions used in CREATE TABLE and ALTER TABLE statements for InnoDB tables were ignored if the statements were wrapped in conditional comments (such as /*!50101 ... */ or /*! ... */). (Bug #21919887, Bug #78631) The --log-raw option is now available at runtime as the log_raw system variable. The system variable is set at startup to the option value, and may be set at runtime to change password masking behavior. (Bug #16636373, Bug #68936) EXPLAIN ANALYZE did not execute subqueries in the SELECT list, and thus did not take them into account in its calculations of time or cost. (Bug #97296, Bug #30444266) An inner scalar subquery containing an outer reference did not return the same result using a nested set of SELECT expressions on the right hand side as when using a single SELECT that was equivalent. (Bug #97063, Bug #30381092) A materialized subquery could yield different results depending on whether it used an index. (Bug #96823, Bug #30289052) When a query terminated due to exceeding the time specified using the MAX_EXECUTION_TIME hint, the error produced differed depending on the stage of the query. In particular, if the query terminated during a filesort, the error raised was ER_FILSORT_ABORT, even though in such cases the query should always exit with ER_QUERY_TIMEOUT. This made it unnecessarily difficult to trap such errors and to handle them correctly. This fix removes the error codes ER_FILSORT_ABORT and ER_FILESORT_TERMINATED. (Bug #96577, Bug #30186874) If a stored procedure had a parameter named member or array, and it had been defined without quoting the parameter names, the database in which it was defined could not be upgraded to 8.0.17 or 8.0.18. (Bug #96288, Bug #30084237) References: See also: Bug #96350, Bug #30103640. When a function such as COALESCE() or IFNULL() was passed a BIGINT column value, casting a negative return value from this function to UNSIGNED unexpectedly yielded zero. Our thanks to Oleksandr Peresypkin for this contribution. (Bug #95954, Bug #29952066) EXPLAIN output showed Select tables optimized away for a query using MAX() on an indexed column, but if MAX() on the same column was called in a user function, it showed Using index instead. (Bug #94862, Bug #29596977)
https://docs.oracle.com/cd/E17952_01/mysql-8.0-relnotes-en/news-8-0-19.html
CC-MAIN-2020-05
refinedweb
11,690
51.58
* A Compare two directories Waze Mvarume Greenhorn Joined: Mar 14, 2004 Posts: 3 posted Mar 15, 2004 01:46:00 0 Hi guys, I am really cracking my head over this one. If you have two dirs, how can you compare the two? ie. does the second dir match the template? I am looking at passing the two directories as arguments at command line. I can do the simple part like recursively going thru directory and subdirectories. Any ideas??? Below is my code so far. Thanks in advance Waze import java.io.*; import java.util.*; public class Dir3 { public static void listPath (File path) { File filesTemplate[]; // list of files in template directory File files2Bread[]; // list of files in template directory // create list of files in the template dir filesTemplate = path.listFiles(); // create list of files in the 2Bread dir files2Bread = path.listFiles(); // Sort with help of Collections API Arrays.sort (filesTemplate); Arrays.sort (files2Bread); //print out only directories - TEMPLATE for (int i=0, n=filesTemplate.length; i < n; i++) { //System.out.print(i +" ");//testing if (filesTemplate[i].isDirectory()){ //put dirs into array //File dirs[]; //dirs[i] =files[i]; /* Would like to do the matching with members of second array, files2Bread[] ,here. to check if above file exists in our template. For now I can just print out the directory if it exists */ System.out.println(filesTemplate[i].toString() + " exists"); } if (filesTemplate[i].isDirectory()) { // recursively descend dir tree listPath(filesTemplate[i]); } } } public static void main (String args[]) { File path1, path2; if(args.length==0){ System.err.println("Please specify a directory name"); return; } path1 = new File(args[0]); //user specified path2 = new File(args[1]); //user specified listPath(path1); listPath(path2); } } [ March 15, 2004: Message edited by: Waze Waze ] *Added code tags for easier reading -- Jason* [ March 15, 2004: Message edited by: jason adam ] jason adam Chicken Farmer () Ranch Hand Joined: May 08, 2001 Posts: 1932 posted Mar 15, 2004 09:17:00 0 A couple of things: First, welcome to the Ranch! However, we do have a Naming Policy that states not obviously fictitious names. Most people don't have the same first and last name. You don't have to use your real last name, just something that "looks" real. Please change your display name in your profile. Thanks! As far as your question, that depends on how you want to compare the files. Are you looking for files with the same name, same size, both, same data, etc? If you are just looking for file names, you can get the name of the current file and do an equals() for all the files in the array list. Personally I would deal with them as a collection and do a contains(). But if you want to also look at size, data, etc. then you have a bit more code to write. Stan James (instanceof Sidekick) Ranch Hand Joined: Jan 29, 2003 Posts: 8791 posted Mar 15, 2004 19:41:00 0 If you have two lists sorted by name, you can do something like this: get first A get first B while more if A < B process "file in A only" logic get next A else if A > B process "file in B only" logic get next B else if A = B process "files match" logic get next A get next B end while See how it figures out if a file is in A only or B only? You might write a couple lists of files, mostly matching and a few unique and try the algorithm. In "files match" you might do more digging to compare dates, sizes, contents, etc. This doesn't show how to handle the ends of the list. Let's add a bit more detail. Say when we "get A" and there are no more files in the A list we set a flag A-Done. get first A get first B while (not A-Done) and (not B-Done) if (A < B) or (B-Done) do A only else if (A > B) or (A-Done) do B only else if (A == B) do match end while To get really elaborate, back in the 80s I wrote a Pascal program that let you specify the action in the "do A only" and B only and match routines. There are 14 choices, something like: A-Only: Copy from A to B or Delete A B-Only: Copy from B to A or Delete B Both, A is newer: Copy from A to B, Copy from B to A, Delete A, Delete B Both, B is newer: (same choices) Both, dates match: (same choices) I still use the program and I think I've actually used all those choices over the years. Hope that was interesting. [ March 15, Waze Mvarume Greenhorn Joined: Mar 14, 2004 Posts: 3 posted Mar 16, 2004 02:14:00 0 Thanks a lot for the great responses guys. Yes, I am looking for files with same name only. I haven't worked with lists and collections before so I am gonna read on on that and then try your suggestions. Right now I am trying to use the equals() method in a test method (see below) to see how I can match the string elements in arrays. Will post an update soon. My apologies I think I am too "green"!! void doSearch(){ String[] list1 = {"Sisco","Peter","Edgar","Kenneth","Haroon","Kill","Bill"}; String[] list2 = {"Mpho","Lerato","Edgar","Kenneth","Haroon","Tatenda","Bill"}; Arrays.sort(list1); Arrays.sort(list2); for (int i=0;i<list1.length;i++){ for (int j=0;j<list2.length;j++){ //check in second list if (list2[j].equals(list1[i])){ System.out.println(list2[j] + " " + " matches "+list1[i]); } /*if (!list2[j].equals(list1[i])){ System.out.println(list2[j] + " " + "has no match "+list1[i]); }*/ } } }//end doSearch method jason adam Chicken Farmer () Ranch Hand Joined: May 08, 2001 Posts: 1932 posted Mar 16, 2004 09:41:00 0 In your above example, for me it would be preferable to use collections. You would have something like: List list1 = Arrays.asList( array1 ); List list2 = Arrays.asList( array2 ); Iterator iter = list1.iterator(); while( iter.hasNext() ) { String item = (String)iter.next(); if( list2.contains( item ) ) { System.out.println( item + " is in both lists!" ); } } However, when dealing with files, the API says that equals compares the abstract pathname of the two files (and the contains() method utilizes the equals() method). I think that this means unless the two files are in the exact same location, they won't be equal. You'd need to separate out the actual file name, minus the path, and compare those. Might just be easier to use the arrays and call getName() on each. I agree. Here's the link: subject: Compare two directories Similar Threads How to only list sub directories in JList? Problem in inserting tree in scrollpane Deleteing the directory. Problem in File Browser Recursive algorithm doubt All times are in JavaRanch time: GMT-6 in summer, GMT-7 in winter JForum | Paul Wheaton
http://www.coderanch.com/t/276616/java-io/java/Compare-directories
CC-MAIN-2014-52
refinedweb
1,168
80.72
While Loop in C++ Hey guys, Today we would be learning what a while loop is and how it works in C++. So read this article… What is a while loop? A while loop is a keyword that creates a loop until the required condition is met. It first checks the condition and if it is valid, only then it will execute the code. Syntax: while (condition) { //code } The most important part of this is understanding when to use a while loop and when to use a do-while loop and how they are different. Syntax of Do-while: do{ //code } while(condition); Well, there are 3 notable differences: 1) While checks the condition and then executes the code whereas do-while first executes the code and then checks the condition. 2) There is a semicolon present at the end of do-while but no semicolon after a while. 3) Do-while executes 1 time more compared to while. Example of while loop in C++ is given below: while((i%3)!=0) { cout<<i<<" "; i++; } Assuming i starts from 1, Output would be 1 2 3 would not be present because 3%3 is 0 and hence the loop never executes. Dry Run of the above example: i i%3!=0 Output i++ 1 True 1 2 2 True 1 2 3 3 False DNE DNE Hence o/p= 1 2 Code Sample in C++: Take a look at the following code: #include <iostream> using namespace std; int main() { cout<<"Enter a limit\n"; int i; cin>>i; int lim = 1; while(lim<=i) { cout<<"* "; lim++; } } Let us take a look at it line by line. cout<<"Enter a limit\n"; int i; cin>>i; First, we are asking the user to enter a particular value i.e. the limit. We are creating variable i of type int and storing the inputted number in i. int lim = 1; Now we are defining our counter i.e. lim of type int which will iterate from 1 to the value of i while(lim<=i) { cout<<"* "; lim++; } Using the syntax, we are printing a star until the value of lim reaches i. Once it does cross the value of i, the loop would terminate and the program continues till the end. Hence the output would be: Enter a limit 5 * * * * * For better understanding take a look at this dry run: i lim while(lim<=i) Output lim++ 5 1 True * 2 5 2 True * * 3 5 3 True * * * 4 5 4 True * * * * 5 5 5 True * * * * * 6 5 6 False DNE DNE Hence o/p = * * * * * And that covers what a while loop is. I hope you understood it and were able to execute it by yourself. If you have any doubts, feel free to ask in the comments section below. Also, read: How to Terminate a Loop in C++?
https://www.codespeedy.com/while-loop-in-cpp/
CC-MAIN-2020-50
refinedweb
477
75.24
Details Issue Links - blocks JSPWIKI-461 Release 3.0.0-incubating-alpha-1 - Closed - depends upon JSPWIKI-36 Transfer JSPWiki 2.6 source to Apache SVN - Closed - is depended upon by JSPWIKI-39 Refactor all utility classes to the util-package - Resolved - is related to JSPWIKI-303 JSPWiki-API library creation - Open - relates to JSPWIKI-464 JSPWiki authentication support for TextOutputCallback (display login messages on Login.jsp) - Closed Activity Updated dependency. I am not at all sure about moving revision histories, as those contain code under LGPL. Can we do that? This has the potential to break all existing the plugins. Yes, it will break all of the existing plugins. Which is why we want to do this in a relatively controlled fashion during a major upgrade. When would be an appropriate moment to do this ? What approach ? I guess it requires some coordination before committing ? I think we should wait for a while until we're fairly certain that we don't have to merge a lot of stuff from 2.8 branch anymore. And after Andrew has merged in the Stripes branch (because I'm sure he would hate having to merge all those changes back and forth). Some comments on the .api package: - I do NOT like the idea of creating a separate org.apache.jspwiki.api package. That seems like one abstraction too far. It is awkward and unnnatural, and it just looks funny. You do not see this in slf4j or the J2EE servlet packages. What you see (in those examples) are interfaces everywhere, and sub-packages called .spi, or .impl or similar that contain the actual implementations. - I DO like the idea of "coding to the interface" – making sure we create well-defined interfaces that substitute for what are, today, concrete classes. So, I think we make should change our approach slightly. First, the current interface classes in the proposed org.apache.jspwiki.api package should be moved to org.apache.jspwiki. We should start replacing concrete implementations in the code with the interfaces immediately. Janne has already done this with WikiPage; the next one should be WIkiContext (the one in the top-level com.ecyrd.jspwiki package is the correct one to use. Second, the contract we should follow, at least in the top-level org.ecyrd.jspwiki package, should be this: any interfaces are guaranteed to be the "permanent" API. Concrete implementations (example: JCRWikiPage) will be in .spi or .impl subpackages. Third, we should consider creating a separate source folder (proposed name: api) that creates a separate jspwiki-api.jar. Concrete implementations go in the regular jspwiki.jar. I would be happy to make the modifications to the Ant script and Eclipse workspace files to do this. The advantage of this approach is that it would make moving to the apache namespace fairly easy: just change com.ecyrd.jspwiki to org.apache.jspwiki, in most cases. There would need to be some cases where callers might need to use a factory class instead of constructors, but those should be fairly easy to figure out. I disagree. I personally hate the ".spi" -paradigm, since it results in a situation where pretty much everything is a mess and could or could not be stable. It also makes it very, very complicated to figure out the API. And you never quite know exactly which APIs you are supposed to trust on, since we've got also a bunch of interfaces which might or might not be stable. We do need interfaces for our own abstractions without revealing them to the developer too, and if we say that interfaces are stable, then well, then we'll end up doing a lot of casting. Putting everything in a single package which we agree to be stable makes it clear for everyone what we can and cannot do with it. It is not necessarily clear to the developer which JAR file his class comes from, so he might be surprised to find out that half of the interfaces he is coding on just are not reliable. We can, of course, put them in a separate source directory as well to ease management, but I would still keep them in the dedicated package. In addition, the dedicated package allows separate version numbering tracking for API and the core code. I also dislike the fact that we've got stuff in the "root" directory (which quite a few people have also pointed out on the mailing lists) - it's really just a collection of miscallaneous classes about which we don't know where to put them. I would greatly prefer if they were moved into a ".core" package to signify what they really mean. This is quite commonly done in large projects. It is most definitely not an abstraction; they're the same interface classes, but in a different package. Abstraction implies layering, which is not true. There is nothing very complicated in renaming the packages, so making it easier is not a very strong argument. Janne – let's start with the things we agree on. I agree that there are a few classes in the top-level (ecyrd) package that need to be moved to sub-packages. Moving, for example, TextUtil to the .util package was a good thing. There will be others. So, let's agree that is a good thing. But it's not really the core issue here. I also agree that renaming is fairly trivial. My point was that we strive for simplicity where we can get it. A straight rename from com.ecyrd.jspwiki to org.apache.jspwiki would be a good goal, even though in practice any third party will need to do some work. I'd just prefer to make it easier. As for creating a separate source folder for the API classes, it sounds like this is something we are both indifferent about, but are open to. Now, to the core issue. I am not sure how you can argue that today's top-level package is a mess, with too much stuff in it – and then suggest what we really need is a single "stable" .api package with – guess what? – lots of stuff in it. Other stuff, to be sure, but still things that are only related in the sense that we declare them to be stable. How do you manage what goes into the top-level package, and where do you stop? Does FilterException belong there? No, it probably belongs in the .filter package. As for additions, let's say we decide UserDatabase needs to be a "stable" API. Do you move that to the .api package? What about the RSS generators? The WikiActionBean interface? After a while that .api package will get pretty crowded too. See what I mean? You're just replacing one bag of unrelated stuff with another. By contrast, the "package hierarchy-plus-interface" paradigm, supplemented by additional .impl subpackages, is a well-understood convention. The package hierarchy itself gives developers contextual information. I do not understand how "saying interfaces are stable" could be a bad thing. They are supposed to be stable. This is what Bloch has been saying for years: I gave you two specific examples of projects that used this convention: slf4j and the J2EE packages. Let me give you 2 more: the J2SE itself (e.g., Collections API), and Stripes. Can you give me one example of a Java project that uses the approach you suggest? Janne, I'm not going to be able to comment conclusively until I have a chance to actually download and try out the 3.0 code, but from what I can see I have a sneaking suspicion that a large number of plugins I've developed and perhaps the entire edifice of the Assertion Framework will become impossible to migrate forward. There seem to be so many radical changes to the design, to the API approach, and while all this is in mind of being improvements, the complexity of JSPWiki 3.0 compared to previous versions for the developer is going to be either a real turn-on for some (who have enormous amounts of time to invest and love tinkering) and a complete turn-off for others. I'm frankly going to have to find a huge amount of time to simply digest what has happened, and as you may guess that's been short lately. I know you think my comments in this regard are just sour grapes, but I'm hoping (and I say this sincerely) that 3.0 hasn't become impossible for me to use, both in terms of breaking compatibility with previous extensions I've developed, in terms of the potential impossibility of implementing things that used to be simple (JSPs are simple), and in terms of the learning curve of technologies now required to develop for JSPWiki. I'm not talking here about renaming classes. Of course, there are separate issues for those migrating from previous versions and those coming new to 3.0. There's the changes in installation requirements for deployers. I really love the fact that JSPWiki is a really simple install, and I know that's been an enormous selling point for the project. By contrast, installing DSpace is a pain for a lot of people because it requires knowledge and installation of PostgresQL or MySQL. Not a big deal for some but an impenetrable wall for others. A lot of people are driven away by complexity. Now, I also know that we've already committed to this course. It just seems that the project is heading down the road of becoming so complex that it loses the appeal of being capable of being hacked by anyone who doesn't understand some rather complex technologies. What I'm saying is that I fear you may have lost me with 3.0. That may be my own failing, but you might consider that there are others like me. Where the project can remain simple (and I know some of the changes you're considering are simplifications, by adding someone else's complexity) I'm all for it. Thanks for listening. Murray Murray - you are right. You shouldn't comment until you've actually checked out what has been done. We would rather deal with facts rather than suppositions, fears and assumptions. Whether MySQL is more difficult to install DSpace or vice versa is completely and utterly irrelevant with respect to JSPWiki. You're just assuming that we're going to make things difficult for you. This is wrong. As you well know, making things simple is very complicated and difficult and time-consuming, and that is the discussion you are seeing. Murray, you are a committer to this project. You know very well the price of forking - you cannot assume that we'll keep everything the same to please you. If you had lots of invasive code you wanted to share, you should've discussed it with everyone else, and then started to commit it - because that is what committers do. Please stop assuming the worst and just start helping us, mmkay? We are all under time pressure, but if you can find the time to write long rants, you have the time to check out the code. Andrew, using J2SE and/or J2EE as examples is not valid, since they are specs and they are implemented by applications in a completely different hierarchy. For example, Tomcat implements the J2EE API using its own org.apache.catalina -hierarchy, and Sun implements J2SE in com.sun.* -hierarchy. Same goes with e.g. JCR, which is specified in the "javax.jcr." -hierarchy, but the actual implementation could be in "org.apache.jackrabbit." This provides a clear, visual separation for people when they are using the "official" API classes, and when they are using the implementation-specific classes. Remember, anything under "java.*" is official and won't change. We can sub-package the org.apache.jspwiki.api -package as needed, e.g. org.apache.jspwiki.api.acl to put the ACL stuff in it. It would essentially be a "root" package, not a generic grab-bag of things. We can design it to be a smart hierarchy with things grouped together logically and still keep it independent of any refactorings we do with the implementation classes. This would not be true if they were all under the same hierarchy. However, if we keep both the implementations and interfaces in the same package, logically and visually these are going to be equivalent for the developer. And, if we put all the implementations under .impl, then we have to put all of our 350+ classes under a hierarchy structure which starts with impl, essentially adding a layer of pain to us JSPWiki dev team, just so that we can keep a handful of classes in the main hierarchy. With the interfaces in a separate package, it is clear to the developer immediately when he sees the import-statements on the top of the file whether he is depending on internal classes.. Janne, I have previously commented on this project in ways I thought were productive, and your reaction has at times been very negative. I'm sorry you thought my message was a rant, as it certainly wasn't intended as such. I wasn't trying to suggest that you or anyone else in the project has bad judgment, has made bad decisions, etc. Far from it. I have a great deal of respect for the people involved, and the quality of the work is always very high. I have not assumed the worst, and I tried to make that clear in my message: I stated that I was basing my understanding on what I have read of the messages passing through jspwiki-dev, as I do read each one. As for forking, the substantial amount of code that I've written that extends the functionality was not written as a fork, as it relies entirely on the existing APIs and existing base classes (e.g., WikiContext). I have not contributed that code because I currently do not have that legal option. I continue to explore the possibility of releasing it with management but that timeline and priority is not up to me. We just had a contract change and the question is again in the air (though I have at least verbal assurances from my manager on releasing IP, for what that's worth). I was reacting to what looks to be some very substantive changes in 3.0, both with the APIs (things that were classes are turning into APIs and vice-versa) as well as backend changes that will take me a lot of time to deal with since they also break the existing provider API (from what I can see). As you know I'm quite interested in using priha but even that will take a lot of time to deal with since I'll likely be integrating it with other systems. Both this message and previous ones were simply pleas for simplicity: when and where there is an option for a simpler design I am simply asking that the simpler approach be taken. When there is an option to completely break previous APIs that those decisions be taken with the knowledge that this will have a very substantial impact on existing users, some a lot more than others. I happen to be a somewhat special case because I have done a lot of extension work, but all that work is based on APIs or base classes that look to change or disappear. If that's not a cause for concern amongst this project team I'm not sure what might be. I'm probably not the only person to have used JSPWiki as part of an enterprise system. Anyone else that has may find migrating to 3.0 difficult. But please don't consider this a rant. I am not angry. I am not trying to spread fear, merely have a conversation. I hope to be able to fully test out the code against the work I've done previously, but as a lot of people my time available to do this is short. I'm hoping to have more in the new year. We've had a rather surreal last six months. And as to message length, I'm a very fast typist. Murray Janne, Your previous comment was very helpful. It illuminated several important points, which I either failed to understand, or you failed to communicate. That's why we have these discussions, right? To make our points clear, so that we may find common ground. In particular, I failed to appreciate your intent to keep an entire package tree under the .api package. My impression was that there would not be a hierarchy there, and that everything that developers needed to rely on would go into one flat namespace. That seemed messy. Ok. It's good that that's not what you meant. Now, I could fence with you a bit more about the fact that J2SE and J2EE are, in fact, NOT specifications and are actually APIs, but would distract from where we need to go, which is to figure out what we need to do. So, let's see if we can agree on the following principles: 1. Specifying a public API – in an Apache package namespace – is a good and worthy goal 2. Implementing the API – in a separate, "private" package namespace is desirable 3. The namespaces should be distinct enough that developers should be able to tell what APIs are "public" and which ones are "private" 4. It follows that the public interface and private implementations should not be in the same package hierarchy (your logic was persuasive...) 5. The public interface and private implementations may be in the same source folder (though I'd like to see them separated...) With me so far? Ok, good. Here's what I propose: - Private implementations live on in the "com.ecyrd.jspwiki" package namespace - Public APIs move to the "org.apache.jspwiki" package namespace - We do not create a "org.apache.jspwiki.api" package – just snip the .api subpackage and move everything up a level - Create parallel packages under "org.apache.jspwiki.api" as needed. For example: .auth In essence, this creates two trees, one that is org.apache.jspwiki and the other, which is com.ecyrd.jspwiki (what we have now). The two would have parallel subpackage structures. Personally, I don't think we need to keep these in separate source folders, although we might want to package them into separate jars (jspwiki-api.jar + jspwiki-impl.jar?) at build-time. I think this is a nice solution. It has the benefit of being simple. The trick, of course, will be figuring out what types need to migrate from one tree to the other. Judicious use of the Extact Interface... refactoring tool will become an important skill. Reasonable? I should have mentioned: in terms of emphasis, the public API will be more weighted towards interfaces, while the private implementations will be more weighted towards concrete classes. In general, the (mostly) interfaces in the org.apache.jspwiki API namespace should require fewer methods than the corresponding classes that we have today in com.ecyrd.jspwiki. We should strive for "clawing back" the interface to its minimal essence. For example, if you look at my WikiContext interface for 3.0, you'll see that it loses about a half-dozen methods compared to the 2.8-era WikiContext class. It has no hasAccess() method, and the horrible Command methods are gone. So, as we extract interfaces and pull things over to the public API namespace, we should be mindful of implementation-level methods that have crept in to suit the needs of one or two callers. Any method that just a small handful of callers should probably be regarded as an implementation method, and omitted from the public API. >. If you're in Apache, you're in the apache name space: org.apache.jspwiki or org.apache.whateverWeChangeTheNameToNext. As far as I know, we're not talking about multiple independent implementations of a standard or common API. So having an org.apache.wiki interface doesn't sound right. But let's not continue on the thread of discussing keeping the com.ecyrd.jspwiki name space in Apache. That is a non-starter. To keep the api separate from the impl, you might consider defining two projects: the API and the impl project. Each would build its own jar file but from a programming perspective, you would only ever import the api packages and use the API jar. Just thinking out loud. Craig: good to know what our constraints are with respect to naming. Sounds like you agree 2 jars is the right solution: 1 for the API and one for the private implementation. If both public APIs and "private" implementation code must be under the org.apache namespace, then I am not sure that it makes a lot of sense to have both a .jspwiki package and a .wiki package (for example). However, what I do not want is to have the package that the general public uses to be four levels deep. Having developers needing to use org.apache.jspwiki.api just sounds wrong to me. This is my non-starter. If anything, I'd prefer to invert things, so that the public API namespace is org.apache.jspwiki, and the private namespace is off of that: org.apache.jspwiki.private. We could put 'em in different source folders too, so that IDE developers (basically, just about everyone) have clearly delineated working areas. Perfect? No. But if all top-level options other than org.apache are closed off to us, then this may be the best we can do, while still accomplishing the desired separation between public APIs and private implementations. Janne, what do you think? > However, what I do not want is to have the package that the general public uses to be four levels deep. Having developers needing to use org.apache.jspwiki.api just sounds wrong to me. This is my non-starter. Just so I'm clear what I'm hearing: you could have the public api be org.apache.jspwiki with sub-packages as deep as you like; and the implementation org.apache.jspwiki.impl or org.apache.jspwiki.private similarly with sub-packages to separate different parts of the project. In two separate jar files. Of course, you could always file a JSR and get the javax.wiki name space. <ducks now> Craig: yes, you've summarized my proposal correctly. We might wait until JSPWiki 6 to file the JSR, though. The Ruby on Rails folks have taught an important lesson: convention over configuration. I think this is very powerful, and Stripes uses the same paradigm as well by defaulting to certain package naming conventions for its ActionBeans: in fact, they force you to have an extra level of depth in your package naming by assuming your ActionBeans lie in the ".stripes." or ".action." packages. In the same vein, I think the separation of the API classes into a separate hierarchy of their own does make sense by encoding the fact that they are API classes directly into the package names themselves. Much like the "plugin" in the package name means that these are plugins. Now, I am not entirely certain whether moving practically all of our classes (save a few odd stranglers, perhaps ten or so for 3.0) into a .private -namespace makes sense either. It is certainly not the simple solution that Andrew alluded to above, and will probably confuse developers. Think about the two different messages: "You may use anything in the api-package, and that will be stable." vs "You may use anything under org.apache.jspwiki.* EXCEPT for the packages under private, and all those will be stable (except those aforementioned packages)" There's a subtle difference, and I would much rather go for the first one than the second one. Easier to communicate, easier to code, easier to understand and easier to build scripts for. And, while creation of an "impl" -package might keep third party developers happier, it will annoy our current core developer base by introducing an extra layer. After all, we have to deal with those 350+ classes all the time. However, we could make a cosmetic change by creating a JSPWiki API subproject, call it "jspwiki-api". It's really all about switching a period to a dash, but it does not create any further nesting levels (though since most developers these days do use IDEs, I don't think any extra levels are really a problem). So, we would have "org.apache.jspwiki" with the implementation, and "org.apache.jspwiki-api" for the API. Other possibility is to treat these in reverse, but I don't really like the "impl" or "private" names. There would need to be a better naming convention. I guess my main objection is that since the aim of this project is to produce the wikiengine itself, not the API, I really don't like the idea that we give the API the first-class citizen treatment and force the actual implementation to some second-tier packages. The org.apache.jspwiki -package should really be about the JSPWiki engine, not an abstract API class set. If we had set out to do to create an API definition, then yes, then obviously we would do things differently, but I don't think API creation has ever been more than just an issue on this tracker. If we wish to have a top-level package for the API, then perhaps a subproject with own dedicated package name might make most sense. Or - as a funny thought - the API package could be managed outside Apache (e.g. under the org.jspwiki-hierarchy). This is not unprecedented; Apache has on several occasions provided reference implementations to JCP packages, where the API classes are not maintained by Apache (though quite often it is the exact same people). Janne – I thought the general goal of any API design would be to treat the API as a first-class citizen, and not the implementation. That is why I proposed org.apache.jspwiki as the one where the API would live. It seems fairly clear that you and I are not going to agree. I confess I do not understand who the API package is aimed at, why they need it, and how it makes things less confusing for them rather than more. There has been no spontaneous developer uprising calling for a separate API package, and the one developer (other than me) who has weighed in on it – Murray – doesn't seem to like it. Because we do not agree, it seems that the status quo should hold. In other words: continued good class and interface design, appropriate refactorings for 3.0, and continued conversion of concrete classes to interfaces + factories. (Example: what we've done with WikiContext). More of the same, but better. That said, we do have to rename the package structure to org.apache.jspwiki. Therefore, I recommend we simply make a straight 1:1 rename from com.ecyrd.jspqiki to org.apache.jspwiki, and kill the API package. Let's vote on it. There have been several calls (over many years) to separate the rendering engine from the core, and unless we isolate the implementation and the API, it's simply not possible. Since there seems to be some perception that my comments are overly critical I'll preface this with the note that my entire motivation for writing this is in hope that I'll be able to continue to use JSPWiki 3.0 following the migration. There are a number of factors that effect that, the biggest being time available. [Apologies for the length.] As Andrew has alluded, I'm not one who appreciates complexity for complexity' sake. I doubt Janne is either and I'm assuming his motivations for this are as he has stated. Reading over his comments above (27/Dec/08 03:26 AM) I think I agree with most of them, particularly that our priority is to produce the WikiEngine. I certainly don't require an API to that, which would only limit my abilities. [And *please* don't create private packages or begin using dashes in package names. Ugh.] In the case of the JAXP API I can see the rationale for separating the API from the implementations, since the latter are derived from a number of disparate code bases. But in this case we are developing a wiki, which traditionally has been one of the simplest software applications, and we're developing an implementation that is: (a) likely to be the main if only implementation; (b) has no prior API commitments; and (c) is going to break all previous implementations, given it is going to (by design) recommend and/or require that all previous extensions (frameworks, plugins, etc.) use the new implementation since that implementation is guaranteed to change (with at least a package name change). This latter point might seem to suggest the need for an API, but that would also be satisfied by a relatively stable implementation (which has been the status quo). A package name change would maintain that stability more than changes to the fundamental architecture. I must confess that I don't know why we really need a separate API package. For the past few years I've been able to follow the changes to the implementation as it has gradually gone from version to version. Would such an API be any more stable? Really? I'm probably a good test case on how much access to the innards of the engine will be available via the API. While Janne is rightfully critical of my armchair statements, not having attempted to migrate my Assertion Framework to 3.0, at this point I'd be a fool to do so even if I had the time. It's a bit of a chicken and an egg problem, as I'd much prefer to wait until the 3.0 code has stabilised before doing that, yet I'd be the first to note any deficiencies for extension classes as I hit those limitations, so any API 1.0 would likely need to become an API 1.1 as I found places in the API that needed opening. While I'd really hate to lock the project into the 2.x code, I'm going to have a tough time convincing management of what is (to an outside observer) going to look like architectural changes with few if any user benefits. I.e., I'd be spending a large amount of time coding to remain with the 3.0 code base with little obvious benefit in the outward application. I'd have to rewrite all of the custom plugins (which includes all the CeryleWikiPlugins plus others not publicly available), the Assertion Framework, and (on the side, unrelated to paid work) all of the integration code used to make JSPWiki work embedded in Ceryle (if that will even be still possible, which remains to be seen since they're fairly tightly integrated). I don't know how much work that'd be but it hardly sounds trivial. Merely changing the package names is by comparison almost a trivial exercise (accomplished e.g., via a sed script). While not wanting to sound sour about 3.0 you might begin to see that my perspective on this is based on looking ahead at what sounds like a great deal of work (with little in-house support for it), which is why I'm a bit worried. I am enthusiastic as I've said before about Priha since I can likely justify that to my boss and perhaps even use that in other places. But wholesale architectural changes for what is an wiki project seem to me to be overkill, at least for 3.0. If put to a vote I'd agree with Andrew's recommendation that we simply rename the packages for now. If there's a need for an API for the rendering engine, we make a simple API for that within the package hierarchy. I don't see the need for a separate package for that API. If an API really simplifies things for me, well, then great; I just don't yet see that. In a nutshell, this is again simply a call for simplicity where possible. Murray PS. While this may seem tangential to any arguments here, my time on this is until sometime in the coming year very limited, as like many here I've got a busy project schedule. I've basically got to apply to my boss if I'm going to spend a substantial amount of time doing the migration, which means I need to convince him of the need. The Assertion Framework was "unfunded" by a rather useless director who just resigned, so following his happy disappearance I've got to re-apply in the new year to get the project re-funded, and the amount of time required will have a lot to do with the decision, given the organisation has a lot of pressing needs that I'm qualified to satisfy. I don't want to fork the code, or be forced into using a fork, but I can say that looking at our organisation for the next three years we're going to be under a very restrained budget. We're also moving out of the current building during a renovation with all that entails for a . I simply won't have the time to do large changes. I know that none of this is anyone else's concern but I doubt I'm the only one in a similar situation. Maybe I am and I'm therefore ignorable. I don't know. Murray, perhaps you don't realize this but you will have to rewrite all your plugins for 3.0 anyway, regardless of this discussion. The move to JCR means that most of the WikiEngine/PageManager/AttachmentManager/ReferenceManager APIs will be gone, period. My aim with the API package was to create a set of interfaces which could be used to create a set of functionality to prevent from this from occurring in the future. Something with sane design, preferably, as opposed to the ad-hoc design of WikiEngine. But hey, obviously people don't want APIs or any promises of stability. So be it then. It'll just be harder on everyone in the long run. Try to understand - if it is within the regular package hierarchy, I do not wish to make any promises about it being stable, just like with any of our current APIs, none of which are declared stable. We do not currently have an API, we have just a set of classes on which you can hack. And declaring some APIs stable within the regular class hierarchy is simply a non-starter for me. Janne, Yes, I have realized I would have to rewrite all of the existing plugins for 3.0. That represents about 45 plugins (of the publicly available ones; there are more private ones) and more than 150 classes, many of which will be affected. I assume that support for a lot of these in 3.0 will disappear, as I can't realistically rewrite that many plugins. Yes, I might be able to solicit some help. The Assertion Framework is about 20 plugin classes and a lot of tie-ins to the WikiEngine, PageManager, events code, etc. The TagManager would likewise need wholesale revision. I'm hoping all of my higher-priority classes can be migrated, otherwise I'll be forced to stay with 2.x. I really don't want to be the Fletcher Christian of JSPWiki, believe me. We all know what it was like on Pitcairn Island for the survivors. While we've not had interfaces, I've treated any of the *Manager classes as relatively stable entities, and in the past been able to maintain currency with the trunk through a constant migration. As I wrote, this was a relatively stable migration, and yes, it would have been mitigated had there been APIs. I have assumed that going to JCR meant a wholesale upending of any previous managers, but I have also assumed (perhaps wrongly) that the basic functionality of the manager classes would be relatively easy to recreate via some bridging manager classes. Are you saying there'd be no PageManager at all in 3.0? If there is, or there's something akin to it, it should be relatively easy to replicate the functionality via a bridging PageManager class that I can use. As for using factory classes rather than constructors, hey, I'm all for that. I don't mind rewriting stuff when it truly is a simplification but doesn't alter the fundamental structure and functionality. As you say, JSPWiki has been a set of classes on which you can. And I have hacked a great deal and now I must pay. As for not permitting APIs within the regular class hierarchy, why is that a non-starter? Aesthetics? I've resisted making arguments based on aesthetics during this discussion. I may not be the world's cleanest coder, but of the ~200K lines of code in Ceryle l've got interfaces and implementations in the same hierarchy, and I've got them in separate hierarchies. With versioning the interfaces over time (over seven years) I'm not sure that either has made a substantial difference. I've had to make changes in the interface based on changing project needs. Where the API was located would have made little difference. Interfaces vary in stability too. But yes, I understand the difference and I don't think I disagree with you in the basics. A guarantee would be nice, but it would be only a relative guarantee and (to my mind) shouldn't come at the expense of significantly complicated code or package structure. Factories are by nature a bit more complicated, but god have I seen a variety of different approaches to that, some extravagantly complicated, others almost as simple as using a constructor. What about the idea of creating a single package org.apache.jspwiki.api.*, and placing within that the WikiEngine and all of the existing manager classes as interfaces (bridges to previous managers)? How different is that from what you're proposing? Why do the APIs need to be in some other hierarchy? [I don't have a problem with Foo and FooImpl being in the same package hierarchy or even the same package if Foo is labeled as a stable interface with a version number, where FooImpl is the default implementation.] You keep writing things that I agree with, then you write something that seemingly contradicts that agreement. Perhaps I'm just misunderstanding you. Yes, we're just having a conversation towards finding a good solution to this. Thanks for your continued patience. Murray, that is exactly what I am proposing - having a separate org.apache.jspwiki.api -package, and placing a number of existing manager classes into it as interfaces. Or to be precise, a mostly equivalent set of APIs; JCR requires some modifications which would make it complicated to support the existing set of APIs - mainly because we need to carry state information in a WikiContext, which the old API signatures do not mostly support. I would also like us to start using WikiNames wherever useful. The idea behind this is largely the same as with our existing package structure - there is nothing technical in preventing everything to be in the org.apache.jspwiki -namespace (with no subpackages), but both aesthetics and experience tell us that it just makes sense to add subpackages to make class management easier. In the same vein, isolating the API interfaces into a separate package makes it a lot easier to manage than having everything dispersed around - e.g. if we want to create a jspwiki-api.jar, we only need to jar up the contents of a particular subdirectory. If the developer wants to know whether he is using any possibly unstable components, all he needs to do is to glance at the set of import statements on the top - if he's using other classes than org.apache.jspwiki.api.*, he knows that the classes are potentially unstable. In addition, it makes the committer's life easier, since there is an agreement by convention that the APIs inside *.api are not to be touched without common agreement, whereas anything outside of this package can be treated with more liberties. We have so many people contributing in the code that API incompatibilities can creep in without anyone noticing. (There is no PageManager at all; there is a ContentManager, which will have a very similar API, but it will manage both pages and attachments. Check out the JCR branch. I am not very hot into writing bridging classes, simply because it is a lot of work which will need to be supported for a long time, and I am running out of time already. If you want to write a compatibility suite, go ahead.) Murray – my counter-proposal was that we eliminate the .api package and focus on refactoring concrete classes into interfaces instead. I.e., WikiEngine becomes an interface in the top-level package, and we provide a concrete class called DefaultWikiEngine (for example) that provides the implementation. This could be combined with a factory class that produces WikiEngines. This has already been done, partially, with WikiContext (even though the interface ended up in the .api package, which I'd prefer us to avoid). In other words: focus on refactoring by extracting interfaces, not shunting developers off to another package. If we provide good interfaces, developers will use them in preference to the concrete classes anyway. Hi Andrew, I do admit to some confusion in this discussion, with my apologies for any part I've played in that confusion. Yes, this was what I'd mentioned in my message of 28/Dec/08 01:52 AM. I have several instances of an interface, a default implementation, and a factory all in the same package. I don't have an aesthetic issue with that, and it does provide the advantage of being able to use protected methods and member variables across the classes. It's also easy for the developer to locate all of the components since they're in one place. I can likewise understand Janne's desire to have all of the APIs in one place too, as it's a clear encapsulation of the set of guarantees being promised. If there's only to be that set of APIs I think Janne's suggestion is a good one, but if there are other APIs littered throughout the package hierarchy (and I suspect there are based on the current code) and those APIs aren't guaranteed as stable as those in the api package, well, that's rather muddy. I do think that (based on my experience in the JDK, particularly with JAXP) that there are advantages to both. But I wouldn't say I'm a fence-sitter on this, since the only way I think a separate api package is a good idea is if all the APIs of the entire application live there since that's what's suggested by its presence. If there are interfaces elsewhere I guess a developer would assume those aren't guaranteed as stable, which is weird. Also, in the api package we lose any sense of the API hierarchy and we end up mixing all sorts of things that aren't really related except by being APIs. I do like the simplicity (for developers) for those classes we expect to have an interface, a factory and a default implementation, that they live in one place. I'm trying to think how this isn't simply a matter of taste or aesthetics. After New Years as Janne suggested I'll download the JCR branch and take a look since I'm only speculating. I'm kinda busy right now with a couple other things, playing catch-up. (What are we doing not enjoying the holidays, anyway?) Andrew, if you read my proposal carefully, you will notice that interface extraction is the one thing we agree on. So this is not a "interface extraction vs shunting developers off to a different package" -situation, and please don't try to represent it as such. To recap: my proposal is to put the interfaces in a separate package (because of all the several reasons I have provided above), and your proposal is to keep them with the regular classes (because it's a non-starter for you). I would like to hear a bit more fact-based argumentation for your point of view. I don't think typing four extra letters (".api") is not a problem for developers, especially since most IDEs are able to fill it in automatically. So all that remains is your aesthetical objection (which, as Murray points out, makes it all rather muddy). There are three additional objections at coding against interfaces: - Exceptions are not interfaces, so therefore you cannot code purely against an interface. - We probably wish to retain the option of having unstable interfaces before they are graduated into a "stable" interface, which in my proposal is marked by transferring them from the regular class hierarchy to the .api -class hierarchy. Similarly, we lose the ability to make unstable "subinterfaces" by inheriting on a stable interface. For example, certain SPI code could easily be like this for a while - first releasing it "internally" in 3.1 so that developers can take a whack at it, then stabilizing it in 3.2. - It is impossible to tell, just by looking at the source code itself, whether you are using an interface or a class. Therefore, the developer lacks any visual cues as to whether he is relying on unstable or stable APIs, and needs to rely on documentation. @Murray: everything is an API, both classes and interfaces, if it's public. However, the question is really about stable and unstable APIs. The .api package comes with a promise (and hopefully, a test suite) to make sure that if you code against them only, your plugins will work in the future. And, if you do code against anything unstable, you know that it's likely to break from major version to major version. Currently, this is not the case. We have even broken interfaces (like the PageFilter one) between minor versions, which means that it's difficult for developers to keep up. The JCR branch has nothing to do with this, really. The API package does not mean that we lose any sense of the API hierarchy. We can easily have org.apache.jspwiki.api.acl org.apache.jspwiki.api.plugin org.apache.jspwiki.api.filter If we choose to have so. Having the interface in the same package as the concrete implementation has no advantage with respect to protected members, since interfaces cannot have members at all... Janne – sorry, I did not mean to suggest you were somehow against interface extraction. We do agree on this. Obviously, this particular discussion has raged for a while, and occasionally precision eludes me. You have recap'ped my counter-proposal accurately. I'd like to keep the interfaces in the same packages (after we rename them to org.apache.jspwiki), and not put them in a separate org.apache.jspwiki.api package, or in a separate package tree such as org.apache.jspwiki.api, OR org.apache.jspwiki-api. Not to put too fine a point on it, the burden on me is NOT to show that your idea is a bad one. Because what you've suggested is a significant change, the burden is on you to show why a separate .api package (or package tree) is warranted. I understand your reasoning but disagree on the conclusion. Consequently I will send out a message to the dev list requesting three votes: - Immediate renaming of all packages from com.ecyrd.jspwiki to org.apache.jspwiki, so that we can move to release an early alpha Apache build. - Refactoring concrete classes (e.g., WikiEngine) into interfaces whenever possible, and supplementing them with factory interfaces if warranted (e.g., interface WikiEngineFactory). - Creation of the .api package/package tree, and creation of various types in this package (e.g., WikiEngine, WikiPage) I will vote +1 for the first two items, and -1 (veto) on the third with the recommendation that we use existing (after-rename, org.apache.jspwiki) packages. The third vote is essentially a retroactive vote. Let's count 'em the votes. Smoke 'em if you got 'em. The package name org.apache.jspwiki seems to block JSP compilation by Tomcat's Jasper JSP compiler. See Harry's comments at the bug entry JSPWIKI-464. Just for the record, a separate issue for this JSP compilation problem has been filed: JSPWIKI-465 So shall we change this to "rename packages to org.apache.wiki" considering JSPWIKI-464? +1 +1, assuming you meant a straight rename from com.ecyrd.jspwiki --> org.apache.wiki. My concerns about the .api package still apply, but that's something we can resolve later. +1 @ajaquith: yes, that is a separate issue (which unfortunately flooded this discussion). Done in 3.0.0-svn-60. Leaving open until we're convinced the move went smoothly. I think the move went rather smoothly, can we close this one ? +1 Will close in a few days...... Recommend this be made dependent on JSPWIKI-36(Transfer JSPWiki 2.6 source to Apache SVN). At least, that is, if you're going to be transferring revision histories over into SVN, which I would also recommend doing.
https://issues.apache.org/jira/browse/JSPWIKI-38
CC-MAIN-2014-10
refinedweb
8,362
64
15 React Interview Questions with Solutions Free JavaScript Book! Write powerful, clean and maintainable JavaScript. RRP $11.95( 'div', {className: 'sidebar'} ) Further reading: <ul> {todos.map((todo) => <li key={todo.id}> {todo.text} </li> )}; </ul> Not using a key, or using an index as a key, can result in strange behavior when adding and removing items from the collection. Further reading 7. How do you restrict the type of value passed as a prop, or make it required? Answer In order to type-check a component’s props, you can use the prop-types package (previously included as part of React, prior to 15.5) to declare the type of value that’s expected and whether the prop is required or not: import PropTypes from 'prop-types'; class Greeting extends React.Component { render() { return ( <h1>Hello, {this.props.name}</h1> ); } } Greeting.propTypes = { name: PropTypes.string }; Further reading 8. What’s prop drilling, and how can you avoid it? Answer Prop drilling is what happens when you need to pass data from a parent component down to a component lower in the hierarchy, “drilling” through other components that have no need for the props themselves other than to pass them on. Sometimes prop drilling can be avoided by refactoring your components, avoiding prematurely breaking out components into smaller ones, and keeping common state in the closest common parent. Where you need to share state between components that are deep/far apart in your component tree, you can use React’s Context API, or a dedicated state management library like Redux. Further reading 9. What’s React context? Answer The context API is provided by React to solve the issue of sharing state between multiple components within an app. Before context was introduced, the only option was to bring in a separate state management library, such as Redux. However, many developers feel that Redux introduces a lot of unnecessary complexity, especially for smaller applications. Further reading 10. What’s Redux? Answer Redux is a third-party state management library for React, created before the context API existed. It’s based around the concept of a state container, called the store, that components can receive data from as props. The only way to update the store is to dispatch an action to the store, which is passed into a reducer. The reducer receives the action and the current state, and returns a new state, triggering subscribed components to re-render. Further reading 11. What are the most common approaches for styling a React application? Answer There are various approaches to styling React components, each with pros and cons. The main ones to mention are: - Inline styling: great for prototyping, but has limitations (e.g. no styling of pseudo-classes) - Class-based CSS styles: more performant than inline styling, and familiar to developers new to React - CSS-in-JS styling: there are a variety of libraries that allow for styles to be declared as JavaScript within components, treating them more like code. Further reading 12. What’s the difference between a controlled and an uncontrolled component? Answer In an HTML document, many form elements (e.g. <textarea>, <input>) maintain their own state. An uncontrolled component treats the DOM as the source of truth for the state of these inputs. In a controlled component, the internal state is used to keep track of the element value. When the value of the input changes, React re-renders the input. Uncontrolled components can be useful when integrating with non-React code (e.g if you need to support some kind of jQuery form plugin). Further reading - Controlled vs Uncontrolled Inputs - Controlled Components (React Docs) - Uncontrolled Components (React Docs) 13. What are the lifecycle methods? Answer Class-based components can declare special methods that are called at certain points in its lifecycle, such as when it’s mounted (rendered into the DOM) and when it’s about to be unmounted. These are useful for setting up and tearing down things a component might need, setting up timers or binding to browser events, for example. The following lifecycle methods are available to implement in your components: - componentWillMount: called after the component is created, but before it’s rendered into the DOM - componentDidMount: called after the first render; the component’s DOM element is now available - componentWillReceiveProps: called when a prop updates - shouldComponentUpdate: when new props are received, this method can prevent a re-render to optimize performance - componentWillUpdate: called when new props are received and shouldComponentUpdatereturns true - componentDidUpdate: called after the component has updated - componentWillUnmount: called before the component is removed from the DOM, allowing you to clean up things like event listeners. When dealing with functional components, the useEffect hook can be used to replicate lifecycle behavior. Further reading 14. What are React hooks? Answer Hooks are React’s attempt to bring the advantages of class-based components (i.e. internal state and lifecycle methods) to functional components. Further reading 15. What are the advantages of React hooks? Answer There are several stated benefits of introducing hooks to React: - Removing the need for class-based components, lifecycle hooks, and thiskeyword shenanigans - Making it easier to reuse logic, by abstracting common functionality into custom hooks - More readable, testable code by being able to separate out logic from the components themselves Further reading - Benefits of React Hooks - React Hooks — advantages and comparison to older reusable logic approaches in short Wrapping Up Although by no means a comprehensive list (React is constantly evolving), these questions cover a lot of ground. Understanding these topics will give you a good working knowledge of the library, along with some of its most recent changes. Following up with the suggested further reading will help you cement your understanding, so you can demonstrate an in-depth knowledge. We’ll be following up with a guide to React interview code exercises, so keep an eye out for that in the near future. Good luck! Get practical advice to start your career in programming! Master complex transitions, transformations and animations in CSS!
https://www.sitepoint.com/react-interview-questions-solutions/?utm_source=rss
CC-MAIN-2020-50
refinedweb
1,004
54.73
User-Agent: Mozilla/5.0 (X11; U; SunOS i86pc; en-US; rv:1.9a3pre) Gecko/20070219 Minefield/3.0a3pre Build Identifier: Mozilla/5.0 (X11; U; SunOS i86pc; en-US; rv:1.9a3pre) Gecko/20070219 Minefield/3.0a3pre. We will submit the full proposal to dev-platform@lists.mozilla.org and attach it to the bug for completeness :) We've attached a patch which provides the basic infrastructure needed and as a proof of concept adds probes to the layout engine to track layout phase timings for paint, reflow and frame construction. The patch was generated against Mozilla CVS head using 'cvs diff -puN'. I had to attach the mozilla-trace.d separately as I don't have commit access to Mozilla CVS to add the file. When the patch is applied, you need to run autoconf to regenerate configure. You also need to put the attached mozilla-trace.d probe file under the top level mozilla directory. After that a standard gmake should do it. To check you have probes built, on an OS supporting dtrace run mozilla/dist/bin/firefox-bin and type: $ dtrace -P 'mozilla*' -l | c++filt This will list out the probes. The proposal contains a few sample scripts to run to get the layout phase counts and timings. We've also attached a more comprehensive script that will do this and also track nested phase calls, mimics what the debug build is doing but without requiring a debug build. We don't have a detailed knowledge of the mozilla build system and code base, so apologies in advance for any blunders we've made in the patch, but we have tested it and it all appears to work :) John Rice Padraig O'Briain Alfred Peng Reproducible: Always Steps to Reproduce: RFE Created attachment 255689 [details] [diff] [review] Mozilla Dynamic Tracing Framework + layout probes patch Created attachment 255690 [details] Mozilla probes file File specifying initial set of Mozilla probes for the layout engine. Needs to put under toplevel Mozilla dir before running autoconf and configure to get the required mozilla-trace.h file generated. Created attachment 255691 [details] Simple script to cacluate layout phase timings using dtrace probes To run it just have a firefox build runnign with the dtrace probes enabled and type: $ dtrace -qs layout_time.d -p `pgrep firefox-bin` -o out.txt $ cat out.txt | c++filt Created attachment 255694 [details] More comples script to measure layout phase stimings and nested phase calls Measure the new AutoLayout Phases (Paint, Reflow and Frame Construction) for Firefox when loading a page, using the Mozilla dtrace layout probes, layout-start and layout-end: Usage: dtrace -qs layout_phaseflow.d -p <firefox-pid> <time (sec) to run> | c++filt Example: dtrace -qs layout_phaseflow.d -p `pgrep firefox-bin` 20 | c++filt Created attachment 255695 [details] Mozilla Dynamic Tracing Framework Proposal Will post this on dev-platform@lists.mozilla.org, just wanted it with the bug so we had everything in the one place. Comment on attachment 255689 [details] [diff] [review] Mozilla Dynamic Tracing Framework + layout probes patch >Index: mozilla/config/Makefile.in no. this should work: +ifdef HAVE_DTRACE +HEADERS += \ + $(DEPTH)/mozilla-trace.h \ + $(NULL) +endif >+MOZILLA_DTRACE_SRC = $(DEPTH)/mozilla-trace.d hrm. >+ dtrace -G -C -32 -s $(MOZILLA_DTRACE_SRC) -o $(DTRACE_PROBE_OBJ) $(OBJS) Can you include a url reference explaining the args here? (just a comment, not for the patch, that way people reading cvs blame can get to this bug and the comment.) no >+ifdef DTRACE_PROBE_OBJ >+$(LIBRARY): $(OBJS) $(DTRACE_PROBE_OBJ) $(LOBJS) $(SHARED_LIBRARY_LIBS) $(EXTRA_DEPS) Makefile Makefile.in EXTRA_DEPS += $(DTRACE_PROBE_OBJ) I think. sucky :( >+ifdef DTRACE_PROBE_OBJ >+ $(AR) $(AR_FLAGS) $(OBJS) $(DTRACE_PROBE_OBJ) $(LOBJS) $(SUB_LOBJS) >+else > $(AR) $(AR_FLAGS) $(OBJS) $(LOBJS) $(SUB_LOBJS) >+endif I'd vote for EXTRA_DEPS being used here. >Index: mozilla/layout/Makefile.in >+DTRACE_PROBE_OBJ = $(LIBRARY_NAME)-dtrace.$(OBJ_SUFFIX) >Index: mozilla/layout/base/Makefile.in >+DTRACE_PROBE_OBJ = $(LIBRARY_NAME)-dtrace.$(OBJ_SUFFIX) kinda pondering that you have this more than once... rules? >Index: mozilla/layout/base/nsCSSFrameConstructor.cpp >+#ifdef HAVE_SYS_SDT_H >+#include "mozilla-trace.h" >+#endif > nsCSSFrameConstructor::ConstructRootFrame(nsIContent* aDocElement, > nsIFrame** aNewFrame) >+ MOZILLA_LAYOUT_START((int *)mPresShell->GetPresContext(), eLayoutPhase_FrameC); Should probably have dbaron or bz ponder the name. Is this reentrant friendly? >+ MOZILLA_LAYOUT_END((int *)mPresShell->GetPresContext(), eLayoutPhase_FrameC); > nsCSSFrameConstructor::ReconstructDocElementHierarchy() I'd rather two ifdefs with a single common code path in the middle. nsresult rv; #ifdef HAVE_SYS_SDT_H MOZILLA_LAYOUT_START((int *)mPresShell->GetPresContext(), eLayoutPhase_FrameC); #endif AUTO_LAYOUT_PHASE_ENTRY_POINT(mPresShell->GetPresContext(), FrameC); rv = ReconstructDocElementHierarchyInternal(); #ifdef HAVE_SYS_SDT_H MOZILLA_LAYOUT_END((int *)mPresShell->GetPresContext(), eLayoutPhase_FrameC); #endif return rv; note the cannonical "rv" (not res). >Index: mozilla/layout/base/nsPresContext.h >@@ -133,6 +133,17 @@ enum nsLayoutPhase { change: #ifdef DEBUG ... >+#else >+#ifdef HAVE_SYS_SDT_H + ... +#endif to: #if defined(DEBUG) || defined(HAVE_SYS_SDT_H) >Index: mozilla/layout/base/nsPresShell.cpp >@@ -22,7 +22,7 @@ > * > * Contributor(s): > * Steve Clark <buster@netscape.com> >- * H�kan Waara <hwaara@chello.se> >+ * H\345kan Waara <hwaara@chello.se> Your editor is javaish, you might need to hand edit this out for the future :(. I'd vote for losing the comment: >+// Mozilla Trace >+#ifdef HAVE_SYS_SDT_H And I wonder if we can change this to a single statement INCLUDE_MOZILLA_DTRACE or something (and have mozilla-config.h make that a useful statement or nop) just a thought. I'm not the final word (on this or anything else). As for new files. please use cvsdo add <> cvsdo add files .... cvs diff -pNU8 generate a new patch, attach it and obsolete all the old attachments :) Comment on attachment 255689 [details] [diff] [review] Mozilla Dynamic Tracing Framework + layout probes patch I'm not going to be a position to review anything like this until at least mid-April, possibly much longer. Please ask someone else. Comment on attachment 255689 [details] [diff] [review] Mozilla Dynamic Tracing Framework + layout probes patch I don't see why you're putting code exactly at the points exactly at the callers of existing code that you can hook into (but in a less reliable way, since you don't get the benefits of the guard object's destructor always running). review- based on that alone. I'd also warn you that there are a number of other things that happen lazily within these phases, such as style resolution, so any performance numbers you get out of just this instrumentation are highly questionable. I'm also not sure why you need instrumentation at all to determine general amounts of time spent in different areas of the program. We've been doing splitting with jprof without any intrusive edits to the code. (See mozilla/tools/jprof/split-profile.pl For an example of its use, see bug 144533 comment 40.) Also, some of these hooks are at points that run a *lot*. And it would be significantly worse if we were actually doing correct instrumentation to distinguish style resolution. How much runtime overhead would adding these to release builds cause? Also, what's the goal of providing this to end users? Is it to have those users file useful performance bugs? If so, we probably do need accurate distinctions of the phases, or the end users will just assume that the tool is right and constantly correct developers when they correct what the tool is saying. David - we just took the Auto Layout as example suggested by Robert O'Callahan, who wanted to get some data on the phase timings. If there are other endpoints we need to watch then we should do so, such as the style resolution. The probes have no overhead if they are not enabled by running dtrace. That's the beauty of them. If a user has an issue they can just run a script there and then to generate data off a non-debug build on a system with dtrace. No overhead? Do they compile to something that's in a different section of the shared library that doesn't even get loaded if you're not running dtrace? How do you manage that? David to get into the gory details you need some of the dtrace kernel hackers to explain, but I think it involves the kernel linker putting in no ops for dtrace function names with well defined _dtrace_probe prefix when they are not enabled. I can ping them if you want. In addition if there is any overhead in setting up a probes arguments the dtrace header file generated from the probes file contains IS_ENABLED macros that can be used to bracket the probes and reduce the over head to virtually nil [this is the technique used in Ruby] More details on probe effect at: 4.2 Statically-defined Tracing "In keeping with our philosophy of zero probe ef- fect when disabled, we have implemented a statically- defined tracing (SDT) provider by defining a C macro that expands to a call to a non-existent function with a well-defined prefix (“ dtrace probe ”). When the kernel linker sees a relocation against a function with this prefix, it replaces the call instruction with a no- operation and records the full name of the bogus func- tion along with the location of the call site. When the SDT provider loads, it queries this auxiliary structure and creates a probe with a name specified by the func- tion name. When an SDT probe is enabled, the no- operation at the call site is patched to be a call into an SDT-controlled trampoline that transfers control into DTrace." timeless - just responding to a few of your comments above: We'll rework the patch as you suggested and resubmit: 1/ We'll create a simplified dtrace -h script or c program that can run on a linux system and generate the mozilla-trace.h from the mozilla-trace.d file. This will allow us to have both files in cvs and the build autobot can use the script to regenerate the header if the probes file has changed as you suggested. Always having the header will allow us to remove a lot of the guard defines in the code, the macros will just become noops if dtrace is not on the system. 2/ You asked above if the probes are reentrant - each probe is atomic and so this issue does not arise, there is no dependency between each probe. 3/ On the dtrace -G: We'll put in the patch comment with the patch when we regenerate it. dtrace -G -C -32 -s $(MOZILLA_DTRACE_SRC) -o $(DTRACE_PROBE_OBJ) $(OBJS) For the terminally curious: USDT Probes: -G: tells dtrace to process the list of .o's looking for USDT probes, change their status to IGNORE and generate a special <probe>.o that you should link against so the kernel linker can correctly initialized things when the shared lib is loaded. -C: use the C preprocessor to process the specified .d file, allows you to have #includes -32: compile for 32 bit architecture, we need to change what I gave you so it will work on 64 bit arch as well :( -s: source of probes, mozilla-trace.d in this instance -o: output special <probe>.o file, call it anything you want, just need to link against it. timeless: I have a query about one of your comments in #6 sucky :( >+ifdef DTRACE_PROBE_OBJ >+ $(AR) $(AR_FLAGS) $(OBJS) $(DTRACE_PROBE_OBJ) $(LOBJS) $(SUB_LOBJS) >+else > $(AR) $(AR_FLAGS) $(OBJS) $(LOBJS) $(SUB_LOBJS) >+endif I'd vote for EXTRA_DEPS being used here. Is this suggestion safe as EXTRA_DEPS is currently defined in a number of Makefiles and using $(EXTRA_DEPS) instead of $(DTRACE_PROBE_OBJ) could result in extra files, which are not object files being added to the library archive? (In reply to comment #12) timeless - we where thinking about the simplified dtrace -h for the build autobot: > 1/ We'll create a simplified dtrace -h script or c program that can run on a ... It looks like it would be a lot of work to pull this out of the dtrace code [it compiles the probe during this -h processing]. What might make a lot more sense is just to check the mozilla-trace.h into cvs. Means its always there for the builds and the trace macros are compiled out if dtrace is not on the system. If someone wants to add new probes they do so on a system with dtrace support, modify the mozilla-trace.d, rerun the configure which will recreate the mozilla-trace.h and either create a patch or check it into to cvs. What do you think? Created attachment 256048 [details] Mozilla Dynamic Tracing Framework Proposal Updated to reflect changes in implementation after feedback from timeless Created attachment 256051 [details] [diff] [review] Mozilla Dynamic Tracing Framework + layout probes patch Updated to reflect changes in implementation after feedback from timeless Main differences from initial patch: configure.in: ------------- Disable dtrace by default. Add --enable-dtrace option to configure.in which needs to be specified for dtrace support to be included even on OS's that support dtrace. mozill-trace.h.in: ------------------- Assume mozilla-trace.h.in is under cvs and can be used to generate mozilla-trace.h with configure on all architectures. Simplifies probe includes, macros do not need to be guarded with #ifedf's as they will always compile out to noops on architectures without dtrace or if the --enable-dtrace option has not been passed to configure. In C++ files now just add probe header and macro: #include "mozilla-trace.h" : nsCSSFrameConstructor::ConstructRootFrame(...) { TRACE_MOZILLA_LAYOUT_START(...); mozilla-trace.h (generated) ---------------------------- Added INCLUDE_MOZILLA_DTRACE to mozilla_dtrace.h which is defined if dtrace support is enabled and can be used if certain setup code needs added for a given dtrace probe in a component header for instance. mozilla-trace.d: ---------------- Changed namespace of probes: mozilla -> trace_mozilla Probe macros now have the form of TRACE_MOZILLA_<SUBCOMPONENT_NAME>(...) config/rules.mk: ----------------- +$(DTRACE_PROBE_OBJ): $(OBJS) + dtrace -G -C -32 -s $(MOZILLA_DTRACE_SRC) -o $(DTRACE_PROBE_OBJ) $(OBJS) Generates required probe.o to link against: -G: Generate an ELF file containing an embedded DTrace program. The DTrace probes specified in the program are saved inside of a relocatable ELF object which can be linked into another program. -C: Run the C preprocessor cpp(1) over D programs before compiling them. -32/64: determine the ELF file format (ELF32 or ELF64) produced by the -G option. This is meant to be auto detected, may have to add a DTRACEOPTIONS_FLAG to configure to allow this to be specified at build time. -s: Compile the specified D program source file. -o: the ELF file generated with the -G option is saved using the pathname specified as the argument for this operand. For explanation of dtrace params, see dtrace man page: To get dtrace probe.o file picked up when building Shared Libraries need to add this in several places in rules.mk. EXTRA_DEPS, $(AR), $(MKSHLIB) layout/base/Makefile.in: ------------------------- Changes in rules.mk allow us to include DTRACE_PROBE_OBJ in base makefile: layout/base/Makefile.in No change in layout/Makefile.in required. layout/base/nsPresContext.h ---------------------------- Using: #if defined (DEBUG) || defined (INCLUDE_MOZILLA_DTRACE) to include layout phase enum if dtrace is enabled mozilla-trace.d ---------------- Includes mozilla-trace.d so need to have this as separate attachment [cvsdo is my friend] Created attachment 256052 [details] Simple script to cacluate layout phase timings using dtrace probes Change in probe namespace, trace_mozilla, script updated to work with new probes Created attachment 256055 [details] More comples script to measure layout phase stimings and nested phase calls Change in probe namespace, trace_mozilla, script updated to work with new probes Created attachment 257958 [details] Mozilla Dynamic Tracing Framework Proposal Updated the proposal to reflect the changes in the framework and layout patch and the addition of the javascript probes patch [see patch comments]. Created attachment 257960 [details] [diff] [review] Mozilla Dynamic Tracing Framework + layout probes patch Update to framework and layout patch: Given David Baron's comments on problems with having matching exit probes in a large function block with an initial entry probe. As a workaround we declare a struct with a constructor and destructor into which you put the probes and you just declare an instance of this struct where you would have put the entry probe. When this struct goes out of scope it's destructor is called and the exit probe fires. There is no need to add a separate exit probe explicitly in the code. This does mean you pay the cost of declaring an extra struct in the calling code, but this should be neglible as the stuct has no expensive initialisation code, it just stores the construction parameters. We made these changes for the layout probes. This change means that the Function reported for these probes would now be the constructor and destructor of the struct, so we wrapped them in more meaningful extern C trace function names [refer to nsPresContext.h, nsPresContext.cpp in the patch]. Created attachment 257961 [details] Simple script to cacluate layout phase timings using dtrace probes Updated the script to reflect changes in the layout probes. Created attachment 257962 [details] More complex script to measure layout phase stimings and nested phase calls Updated the script to reflect changes in the layout probes. Created attachment 257963 [details] [diff] [review] Javascript probes patch This is an incremental patch that can be applied on top of the framework and layout probes patch. It provides Brendan Gregg's mozilla javascript probes as described at: And demo'ed at: All of the probes listed in Brendan's Blog on JavaScript and DTrace will work, but we have changed the namespace to be in sync with the other mozilla probes, so you will need to change the probe names in the script s appriopriately, for example: Change: javascript*:::function-entry -> trace_mozilla*:::js_function-entry To list available probes once once the patch is applied and mozilla is rebuilt with --enable-dtrace, just type: $ dtrace -n 'trace_mozilla*:::js*' -l Created attachment 259787 [details] Mozilla Dynamic Tracing Framework + layout probes patch Updated to apply to CVS HEAD Created attachment 259788 [details] [diff] [review] Javascript probes patch Update to apply to CVS HEAD and fix some errors The following simple dtrace scripts will give a quick test of the javascript functionality: Trace all javascript functions and print out the file they have been called from, the class name and function name. We use the flow indent flag -F to allow the entry and return function calls to be matched up and appropriately indented. dtrace -F -n 'trace_mozilla22094:::js_function*{printf("%s %s->%s", basename(copyinstr(arg0)), copyinstr(arg1), copyinstr(arg2));}' Note: often these last two params are not set and just return null, particularly on the internal chrome js calls. This will need to be addressed in the future. Simple test for object lifecycle probes: dtrace -n 'trace_mozilla22094:::js_object*{trace(arg0);}' Created attachment 265808 [details] [diff] [review] Mozilla Dynamic Tracing Framework + layout probes patch v4 Update to apply to the CVS head. Following is a proposal raised by Brendan Gregg. Hope to get some feedback from you guys here. " The DTrace patches currently provide this, trace_mozilla:::layout* trace_mozilla:::js_* we think that this approach is better (of course, your opinion counts as well), mozilla:::layout* javascript:::* (we do get fussy over conventions, as it would be nice if different providers looked similar). Reasons for each (and you are welcome to point out problems with this): mozilla:::layout* - "mozilla" won't clash with anything else - use it! We want to tell customers "this is the mozilla provider", not "this is the mozilla provider called trace_mozilla". And if other people write their own mozilla providers for different versions? No problem - so long as they export the same layout* probes. As a user writing a DTrace script, I should be able to write stable mozilla::: scripts that run on different mozilla versions, because they are exporting the same interface. javascript:::* - because it is the javascript provider, period. What about opera and other browsers? Since the javascript provider will be stable, they can export to the exact same namespace with the exact same probes. So long as the javascript provider exports no mozilla implementation specifics, which I don't think it does. The mental connection I had to make, was that entirerly different engines (mozilla versions, firefox/opera), can export the same stable provider and be used in parallel on the same system. Sounds a bit frightening, but it works fine. Bryan Cantrill just ran the following script with ruby running, # dtrace -n 'function-entry { trace(copyinstr(arg0)); }' and was suprised to see results that looked like javascript. He was running the old javascript bits, and his probe was matching crazy, javascript*:::function-entry ruby*:::function-entry crazy! different implementations, same probename and args. but it emphasised that this is doable. " As the patch contains the code change to the build system, cc benjamin to get some suggestions. If you want me to review something, please set the appropriate flag. Created attachment 266840 [details] [diff] [review] JavaScript probes patch v2 I've attached a patch for the latest version of the JavaScript probes. The new additions are, * the ability to identify anonymous functions (js_function-info) * access to function entry arguments (js_function-args) * access to function return value (js_function-rval) I was finding that the JavaScript provider was only helping with about 4 problems out of 10 (still, better than nothing). The improvements that I added should take that to around 8 out of 10 - so it is now proving to be really useful. Anyhow, here are some before and after New, # . New, # . To take the provider further (and to solve 10 out of 10 problems), may require some moderate code changes to libmozjs for the following features, * stack traces (perhaps by integrating with jsd) * objects as debug strings (more jsd integration?) And so these may be best for version 3 of this provider - which would be after the existing JavaScript probes are integrated (assuming that they ever are). I would also welcome a Mozilla developer who is more familiar with js/src and js/jsd to take ownership of the JavaScript provider and to build on the work I've done so far. Lastly, I am really hoping that we can call these probes "javascript:::*" and not "trace_mozilla:::js_*" - as this is the "JavaScript" provider. It would require hand generation of the header files rather than "dtrace -h", so that the code macros can remain of the style "TRACE_MOZILLA_JS_FUNCTION_ENTRY" (which is perfectly fine). I think the initial change from "javascript:::*" to "trace_mozilla:::js_*" was a side effect of changing the macros, but a side effect that we can avoid by not auto generating mozilla-trace.h. Created attachment 267801 [details] [diff] [review] Mozilla Dynamic Tracing Framework Split the “Mozilla Dynamic Tracing Framework + layout probes patch” into two separated ones. This one is for the DTrace Framework and ask for review. Please notice that I'm still using the provider name "trace_mozilla" and no probes are defined in this patch. It will have no impact on the code if --enable-dtrace is not passed to configure. John Rice demoed the Mozilla DTrace to jst and schrep when they visited Sun Beijing office. cc jst into this bug. DTrace will be provided in Leopard (MacOS 10.5). We'll provide the VMWare/ parallel images for Solaris with some DTrace tools soon. For more information about Mozilla DTrace, please go to the Mozilla DTrace project in OpenSolaris community:. Created attachment 268059 [details] [diff] [review] Mozilla Dynamic Tracing Framework v2 Update the framework patch to reflect the provider name change, from "trace_mozilla:::*" to "mozilla:::*" as Brendan Gregg suggested. No real probes defined in this patch. The layout probes and the javascript probes can be added based on this framework. I have been trying for a while now, what do I have to do to get this to apply to head (the newer V2 javascript stuff) At the mo I am stuck with the older patches, since these are the only ones that I can get to tastefully apply Created attachment 271003 [details] [diff] [review] Mozilla Dynamic Tracing Framework + layout probes patch v5 Update to apply to trunk code Created attachment 271004 [details] [diff] [review] JavaScript probes patch v3 One line update to the patch to apply to trunk code. Greg, if you want to try out the javascript probes, please patch the framework + layout first and then this one. BTW, there are other two approaches to try the probes: 1. Solaris VMware images are available for download:. And the Firefox 3.0 with DTrace has been bundled with it. 2. If you have already installed Solaris on your box, the Firefox 3.0a3 with DTrace is available here:. My interest is to try it with xulrunner, however I will do a firefox trial first Created attachment 272772 [details] Mozilla Dynamic Tracing Framework + layout probes patch v6 Updated v5 to apply to CVS HEAD Created attachment 273413 [details] [diff] [review] Mozilla Dynamic Tracing Framework + layout porobes patch v7 Created attachment 274169 [details] [diff] [review] Mozilla Dynamic Tracing Framework + layout probes v8 The previous layout probes patch no longer works with CVS HEAD. The layout probes are in the component gklayout. This used to be in the shared library components/libgklayout.so. In CVS HEAD the components are in the shared library toolkit/library/libxul.so. A library archive is created for each component and these library archives are linked into libxul.so. To get the probes to work I need to call dtrace -G when creating the shared library and specify the the object files which contain dtrace calls. This will be necessary if we put probes into more than one of the library archives which make up libxul.so. I can do this by specifying the library archives containing dtrace probes in the variable MOZILLA_PROBE_LIBS in toolkit/library/Makefile.in. Then in config/rules.mk when building a shared library I extract the object files from the list of library archives in MOZILLA_PROBE_LIBS and run dtrace -G on them. However, I have a problem with the link line. I need to specify the objects which have been processed by dtrace -G on the link line. I can do this by specifying $(PROBE_LOBJS) in the link line. I do not want to specify the library archives that these objects came from on the link line. The list of the archives is specified in SHARED_LIBRARY_LIBS and the archives which I do not want to specify on the link line is in MOZILLA_PROBE_LIBS. My solution is to remove all the objects from the library archives in MOZILLA_PROBE_LIBS after running dtrace -G. When the library archive is specified on the link line it is empty. After the linking the shared library the library archives in MOZILLA_PROBE_LIBS are deleted. An alternative which I looked at was to extract the object files from all the library archives and specify these objects on the link line instead of the library archives. This did not work because a file called nsWildCard.o is in more than one archive and the files are different. FWIW, the library that the layout code lives in depends on whether libxul is enabled when building. If it's not enabled, the code will still be in gklayout, if not, it'll be in the xul library. Created attachment 276949 [details] [diff] [review] Mozilla Dynamic Tracing Framework + layout probes v9 This patch replaces v8 and is updated to apply to CVS HEAD as of this morning. I also made a change in config/rules.mk so that the dtrace object file is built for the JavaScript dtrace probes. Created attachment 277087 [details] [diff] [review] Mozilla Dynamic Tracing Framework + layout probes va The updated patch will cause layout probes to be generated whether or not libxul is enabled. I test the case where libxul is not enabled by configuring with --enable-debug. Created attachment 277370 [details] [diff] [review] Mozilla Dynamic Tracing Framework + layout probes vb Update patch to apply to CVS HEAD. Created attachment 277393 [details] [diff] [review] Mozilla Dynamic Tracing Framework v3 This attachment and the next one splits the patch into two parts. This part is the framework part and is a prerequisite for the JavaScript probes patch in issue 388564. Created attachment 277395 [details] [diff] [review] layout probes vc This patch and Mozilla Dynamic Tracing Framework v3 patch replaces Mozilla Dynamic Tracing Framework + layout probes vb patch. The patch Mozilla Dynamic Tracing Framework v3 patch is a prerequisite for this patch. Created attachment 278054 [details] [diff] [review] Mozilla Dynamic Tracing Framework v4 Updated patch so that it applies cleanly Created attachment 278055 [details] [diff] [review] layout probes vd updated patch to apply to CVS HEAD. Created attachment 278998 [details] [diff] [review] Mozilla Dynamic Tracing Framework v5 update patch to apply cleanly to HEAD. Created attachment 279165 [details] [diff] [review] Mozilla Dynamic Tracing Framework v6 Mozilla Dynamic Tracing Framework v6 Removing layout probes and adding some loadURI start and done probes Created attachment 279166 [details] [diff] [review] Mozilla Dynamic Tracing Load URI probes v1 Mozilla Dynamic Tracing Load URI probes v1 Adding load URI start and done probes. $ dtrace -ln "moz*:::" ID PROVIDER MODULE FUNCTION NAME 71986 mozilla18214 libxul.so mozdtrace_load_uri_start load-uri-start 71987 mozilla18214 libxul.so mozdtrace_load_uri_done load-uri-done $ dtrace -qn moz*:::'{printf("%s\n",copyinstr(arg0));}' Created attachment 279199 [details] [diff] [review] Mozilla Dynamic Tracing Framework v7 Changes to allow debug build and adding Channel param to give probes unique ID. Created attachment 279200 [details] [diff] [review] Mozilla Dynamic Tracing Load URI probes v2 Adding load Image start probe. Need to get a unique ID, passing in a mRequest which is giving me 0 back. If anyone can suggest something in the loadImage code to use as a unique ID that would be great. Also not sure where to add the load Image Done probes. So if someone can point out where to add the Image Done probes just let me know and I can add them. Johnny I was talking with Brendan about these probes and we think it makes more sense to have a general load-start and load-done set of probes with a suitable enum param to identify the type of load that is taking place. I'd then like to add a payload to the probe of the URI data as a struct with simple strings. This will only need to be constructed when the probes are enabled. Having it built will make the scripts a lot easier to use, rather than having to do the URI processing in the scripts. Current probes: loadURI-start(int UNIQUE_ID, char * URI) loadURI-done(int UNIQUE_ID, char * URI) loadImage-start(int UNIQUE_ID, char * URI) loadImage-done(int UNIQUE_ID, char * URI) Proposed probes: load-start ( int UNIQUE_ID, int LOAD_TYPE, int * LOAD_DATA) load-done ( int UNIQUE_ID, int LOAD_TYPE, int * LOAD_DATA) Purpose of the UNIQUE_ID is to allow you to match async load-start and load-done probes. LOAD_TYPE =========== enum nsTraceLoadType { eLoad_URI, eLoad_Image }; LOAD_DATA ============== struct nsTraceLoadData { // From nsIChannel char * contentType // Content Type // From nsIURI char * spec // Complete URI // <scheme>://<username>:<password>@<hostname>:<port>//<directory><basename>.<extension>;<param>?<query>#<ref> char * scheme // Protocol to which this URI refers char * username char * password char * host // Internet domain name char * port char * path // <filepath>;<param>?<query>#<ref> // From nsIURL char * filePath // <directory><basename>.<extension> char * fileName // <basename> char * fileExtension // <extension> char * param char * query char * ref }; Struct elements will be set to empty string if element is not applicable to this URI. Refer to: Once we get these ones in then we need to look at adding probes for the other stages of loading and displaying a web page: LoadURI LoadImage | | ------------ | Network --> DNS Request | Parser | Content DOM | Layout | Painting Created attachment 281294 [details] [diff] [review] Mozilla Dynamic Tracing Framework v8 Adding common load-start and load-done probes for URI and Image laoding Created attachment 281297 [details] [diff] [review] Mozilla Dynamic Tracing Load URI probes v3 Adding common load-start and load-done probes for URI and Image loading. For example output and scripts using the probes refer to: Sample Image Loading Stats: Script used to generate Stats: Will move onto DNS lookup probes next: dnslookup-start: nsHostResolver::ResolveHost(...) and dnslookup-done: nsHostResolver::OnLookupComplete(...) I posted up on moz.dev.performance to get some feedback on other probe points: "Performance probes - looking for good start/ done points" Boris Zbarsky has pointed me at some spots for painting and layout. I'll perhaps combine this with the earlier layout probes. Dave (dbaron) if you have any suggestions please holler, I definitely need the input :) Jonas Sicking has given me a few hints on the DOM creation as well. So we are making progress i think. Created attachment 281320 [details] [diff] [review] Mozilla Dynamic Tracing Framework v9 Update to apply cleanly to CVS head and fix some errors Created combined URI and Image load timing script, available at: Script: Sample output: Created attachment 282438 [details] [diff] [review] Mozilla Dynamic Tracing Framework v10 Updating uri and image load probes to have a separate load-inage-init probe. Adding dnslookup init, start and done probes: Refer to: Created attachment 282440 [details] [diff] [review] Mozilla Dynamic Tracing Load URI probes v4 Updating uri and image load probes to have a separate load-inage-init probe. Adding dnslookup init, start and done probes: Refer to: Sorry, this has been sitting in my review queue way too long. I'll take a look at it tomorrow. Comment on attachment 268059 [details] [diff] [review] Mozilla Dynamic Tracing Framework v2 >Index: configure.in >=================================================================== > >+AC_ARG_ENABLE(dtrace, >+ "build with dtrace support if available [default=no]", >+ [enable_dtrace="yes"],) >+if test "x$enable_dtrace" = "xyes"; then >+ AC_CHECK_HEADER(sys/sdt.h, HAVE_DTRACE=1; dtrace -h -s mozilla-trace.d -o mozilla-trace.h.in) >+ if test -n "$HAVE_DTRACE"; then >+ AC_DEFINE(INCLUDE_MOZILLA_DTRACE) >+ fi >+fi Enable arguments should error if the requirements aren't met, not silently disable themselves. You could also optionally enable this by default on Solaris. See for example: >Index: mozilla-trace.d >Index: mozilla-trace.h.in I don't want these files in the root directory. Can we find a better place for them to live? Even somewhere in build/ would be more agreeable to me. Additionally, if mozilla-trace.h.in is a generated file, why are we checking in a copy of it? Is this just so you can unconditionally include it later? >Index: config/Makefile.in >=================================================================== > HEADERS = \ > nsBuildID.h \ > $(DEPTH)/mozilla-config.h \ >+ $(DEPTH)/mozilla-trace.h \ > $(srcdir)/nsStaticComponents.h \ > $(NULL) If we don't checkin the generated header file, you could wrap this line in ifdef HAVE_DTRACE endif >Index: config/rules.mk >=================================================================== >+ifdef HAVE_DTRACE >+ifdef DTRACE_PROBE_OBJ >+ifndef MOZILLA_DTRACE_SRC >+MOZILLA_DTRACE_SRC = $(DEPTH)/mozilla-trace.d >+endif >+$(DTRACE_PROBE_OBJ): $(OBJS) >+ dtrace -G -C -32 -s $(MOZILLA_DTRACE_SRC) -o $(DTRACE_PROBE_OBJ) $(OBJS) >+endif >+endif Reading your other patch, is there any reason we need to specify DTRACE_PROBE_OBJ in the Makefiles, or could we just have a variable like USE_DTRACE=1, and then autogenerate a probe object filename from LIBRARY_NAME or MODULE or whatever in here? I notice there's a newer version of this patch, but maybe you can roll my review comments into that and post a refreshed version? I promise you a speedier review this time! On choice of directory, when we were talking about probes in general a while back, Vlad suggested making a dedicated "probes" directory. It could be toplevel, since probing machinery doesn't quite fit into other categories, or perhaps go under "modules". Also, I bumped into a compiler error on Leopard with the latest patch version; in nsHostResolver.cpp:mozdtrace_dnslookup_request_done, struct nsTraceDNSLookupInfo sInfo is being initialized with -1 instead of one of the valid nsTraceDNSLookupStatus enums. I suggest adding an additional enum value for -1 or some such. Location - mozilla-trace.d, mozilla-trace.h.in: Where should we put them, mozilla/probes directory or mozilla/modules/probes? "mozilla-trace.h.in is a generated file, why are we checking in a copy of it?" As you said it was so we could unconditionally include it later.If you want us not to not check it in, we can wrap the includes in the .cpp files. Just let us know. I'm traveling, but will roll a patch with the various changes early next week. Created attachment 283494 [details] [diff] [review] XCPWrappedJSClass dtrace probes The attachment assumes the following probes are defined: probe xpc__wjs__entry(char *, char *, char *); probe xpc__wjs__return(char *, char *, char *); I stuck the macro definitions in mozilla-load-trace.h.in, but that doesn't seem ideal. The location doesn't really matter to me, someone should just make a judgement call and go with it. Also, I didn't notice that you were including mozilla-trace.h directly into source files. I still feel like you shouldn't be checking in a generated file that's going to be overwritten every time you use it. I'd prefer you ifdef the includes. Comment on attachment 283494 [details] [diff] [review] XCPWrappedJSClass dtrace probes >+ fp = JS_FrameIterator(cx, &iterator); >+ fp = JS_FrameIterator(cx, &iterator); That's a typo? (I don't know this code well enough to use a '.' instead of a '?') You guys need to remember this rule, before any others: always ask me before making a new top-level directory. Oh, and rule #2: don't add more stuff under mozilla/modules. There, two rules and you're done. :-P What would go in mozilla/probes? How many files, of what kinds? Examples and likely population and dependencies on other subdirs would help. Sorry if I'm missing this info in existing comments -- just point me to them if present. /be No problems - didn't know about the rules :) mozilla/probes/ mozilla-trace.d mozilla-trace-load.h.in [Contains support func extern "C" declarations and macros for the custom probes, appended to the generated mozilla-trace.h - was being used for load probes but is being used by others now, so should change this to mozilla-trace-probes.h.in] mozilla-trace.h.in [Generated by build, from the mozilla-trace.d file and used with the mozilla-trace-probes.h.in to create the final mozilla-trace.h file.] mozilla-trace.h [Generated by build by appending mozilla-trace.h.in and mozilla-trace-load.h.in Included by any sub modules with probes, link to it placed in dist/include] If you check the following patch you can see examples of mozilla-trace.d, mozilla-trace-load.h.in, mozilla-trace.h.in:: Note we had checked in the generated mozilla-trace.h.in file to avoid wrapping mozilla-trace.h includes in the sub modules when on systems without dtrace, but as per Ted's request above we'll change this, wrapping the includes and no longer check this in. So what we want to end up with after a build: mozilla/probes/ mozilla-trace.d mozilla-trace-probes.h.in mozilla-trace.h [Generated by build, linked to in dist/include] Robert: "xpc__wjs__entry( .... I stuck the macro definitions in mozilla-load-trace.h.in, but that doesn't seem ideal" The idea behind putting all the support wrapper funcs, enums and structs for the probes into one top level header is that it will make it a lot easier for folks who want to use the probes to find the various support structs they need in their D Scripts. It's also handy for people who want to add extra probes to just put their bits into this one file and have plenty of examples in it to follow. So in the next update to the patch I'd planned to change mozilla-trace-load.h.in to a more generic name mozilla-trace-probes.h.in If you'd prefer to create a separate header.in for each probe set we can do this as well. Just let us know what you want to do. Created attachment 283749 [details] [diff] [review] Mozilla Dynamic Tracing Framework (only), v11 This is an update based on the recent discussion in this bug. This change includes the framework only, no mentioning of any probes at all in this patch, those can come in later once the framework is in place IMO. This creates a new top-level probes directory, with mozilla-trace.d in it. This patch includes zero generated files, all files are generated at *build time*, not a configure time. The generated file ends up in dist/include. Other than that this is the same change as the earlier patches (with maybe some minor tweaks here n' there, interdiff available if anyone cares). John, if you can glance at this it'd be great too. Once this is in we can work on getting your load probes etc into the codebase as well. Brendan, you cool with a new top-level probes directory here? For now it's got dtrace stuff only in it (i.e. Solaris only, and Leopard once available), and if dtrace is not enabled (default) the build system does noting in that directory. I can't really think of a better place for this, unless we want to stick this stuff in mozilla/build or something. I would be ok with build if build guys are, since the current population is a bit thin to justify a new top-level, and the files are build-y. OTOH probes is good if we believe we'll add more files as we go (and perhaps we do so believe -- please comment). /be Comment on attachment 283749 [details] [diff] [review] Mozilla Dynamic Tracing Framework (only), v11 >---. >--- a/configure.in >+++ b/configure.in >+ if test -n "$HAVE_DTRACE"; then >+ AC_DEFINE(INCLUDE_MOZILLA_DTRACE) >+ fi This should have an else with an AC_MSG_ERROR. r=me with those two issues addressed. Vlad, Stan: Do you guys have any thoughts on mozilla/probes vs mozilla/build? Vlad, are there files part of your non-dtrace probe stuff that could live with the dtrace probes etc? If so, would you like a separated dir over stuffing things into mozilla/build? (In reply to comment #75) > (From update of attachment 283749 [details] [diff] [review]) > >---. Padraig, John, any thoughts on this? It's not clear to me what's going on here. > >--- a/configure.in > >+++ b/configure.in > >+ if test -n "$HAVE_DTRACE"; then > >+ AC_DEFINE(INCLUDE_MOZILLA_DTRACE) > >+ fi > > This should have an else with an AC_MSG_ERROR. Added, new patch coming up (with this issue fixed, and another quotation problem addressed too that I didn't catch earlier). > r=me with those two issues addressed. Thanks! Created attachment 283763 [details] [diff] [review] Updated framework diff (v11.1) Johnny - talked with Padraig and it looks like the DTRACE_PROBE_OBJ is a left over that should have been removed. Will rip it out and test the latest patch, to make sure this is OK and post an update.. I know I could create a git repository and work around it, but if you could create a mozilla/probes dir that would be better. I'll rework the load probes patch now to get it to build against this new framework. (In reply to comment #80) >. You might try upgrading cvsutils. According to the changelog <>, cvsdo supports adding directories in version 0.2.1 and up, and I was able to do so in 0.2.3 (on Linux, but it should work the same on Solaris): myk@myk:~/cvsdotest$ cvs -d :pserver:anonymous@cvs-mirror.mozilla.org:/cvsroot co mozilla/client.mk U mozilla/client.mk myk@myk:~/cvsdotest$ cd mozilla myk@myk:~/cvsdotest/mozilla$ mkdir foo myk@myk:~/cvsdotest/mozilla$ touch foo/bar myk@myk:~/cvsdotest/mozilla$ cvsdo add foo myk@myk:~/cvsdotest/mozilla$ cvsdo add foo/bar Use of uninitialized value in concatenation (.) or string at /usr/bin/cvsdo line 191, <ENTRIES> line 1. myk@myk:~/cvsdotest/mozilla$ cvs diff -uN foo cvs diff: Diffing foo Index: foo/bar =================================================================== RCS file: foo/bar diff -N foo/bar Thanks Myk - upgrade to 0.2.3 and it works a treat :) Created attachment 284031 [details] [diff] [review] Mozilla Dynamic Tracing Framework (only), v12 New frame work patch incorporating Ted and Johnny's changes. Removed DTRACE_PROBE_OBJ as it was no longer required. This patch will break the load probes. A new patch will be generated for them using the new framework structure. Will need to add additional code to probes/Makefile.in to concatenate the required headers and to the sub Makefiles so the mozilla-trace.d can be found under the probes dir. Comment on attachment 284031 [details] [diff] [review] Mozilla Dynamic Tracing Framework (only), v12 Asking sayrer for second-review Comment on attachment 284031 [details] [diff] [review] Mozilla Dynamic Tracing Framework (only), v12 looks good to me Comment on attachment 284031 [details] [diff] [review] Mozilla Dynamic Tracing Framework (only), v12 a=me, let's get this landed! I landed the dtrace framework patch today (with some slight whitespace tweaks). That turned the world red, due to mozilla/probes not being part of the list of directories that's checked out by client.mk, so I fixed that as well. I don't know if we want to close this bug now, and move the layout (etc) probe discussion to a new bug, or keep this bug open for that work. I don't really care either way... This bug is more than long enough. Let's file additional bugs for more work... I'd in particular like cycle collection probes. Agree that another bug would be a good idea. Do we want separate ones for each probe set? The difficulty with this is that they are all included in the mozilla-trace.d file as they are all in the mozilla provider. So for consistency it probably makes sense to collect them into one patch and land groups of probes as we go. Each time after we land a set starting up a new bug to catch the next group. This way we should be able to ensure consistency across the probes with regard to layout, implementation and so on. I have a patch for the load and dnslookup probes building and running against the new framework, so just want to know which bug to put this patch against. I'd like to see Robert's XPC_WJS probes in as well and some layout probes, based on the early patch but modified as per Dave B recomendations. What do folks think? Created attachment 284484 [details] [diff] [review] Mozilla probes v1 Mozilla load and dns lookup probes patch, changed to build and run with the new framework. Assume this will be moved to another Probes bug, but wanted to get them up for others to take a look. We can fold in Robert's probes to this and add new ones for Benjamin's cycle collections probes. Or we can split them up, up to you guys. Supports: load__init (void * unique_id, nsTraceLoadType type, struct nsTraceLoadInfo *info) load__start (void * unique_id, nsTraceLoadType type, struct nsTraceLoadInfo *info) load__done (void * unique_id, nsTraceLoadType type, struct nsTraceLoadInfo *info) dnslookup__init (void * unique_id, struct nsTraceDNSLookupInfo *info) dnslookup__start (void * unique_id, struct nsTraceDNSLookupInfo *info) dnslookup__done (void * unique_id, struct nsTraceDNSLookupInfo *info) I just created Bug 410588 to track adding the load/lookup probes, John could you attach your latest patch there please?
https://bugzilla.mozilla.org/show_bug.cgi?id=370906
CC-MAIN-2016-36
refinedweb
7,817
62.17
More Exercises: Objects and Classes - 3. Speed Racing Здравейте колеги. Дава ми 40 точки на следната задача. Условието е следното: 3. Speed Racing Your task is to implement a program that keeps track of cars and their fuel and supports methods for moving the cars. Define a class Car that keeps a track of a car’s model, fuel amount, fuel consumption for 1 kilometer and traveled distance. A Car’s model is unique - there will never be 2 cars with the same model. On the first line of the input you will receive a number N – the number of cars you need to track, on each of the next N lines you will receive information about a car in the following format “<Model> <FuelAmount> <FuelConsumptionFor1km>”. All cars start at 0 kilometers traveled. After the N lines, until the command "End" is received, you will receive commands in the following format "Drive <CarModel> <amountOfKm>". Implement a method in the Car class to calculate whether or not a car can move that distance. If it can, the car’s fuel amount should be reduced by the amount of used fuel and its traveled distance should be increased by the number of the traveled kilometers. Otherwise, the car should not move (its fuel amount and the traveled distance should stay the same) and you should print on the console “Insufficient fuel for the drive”. After the "End" command is received, print each car and its current fuel amount and the traveled distance in the format "<Model> <fuelAmount> <distanceTraveled>". Print the fuel amount rounded to two digits after the decimal separator. Кодът ми е следния: using System; using System.Collections.Generic; using System.Linq; public class Car { public double FuelAmount {get; set;} public double FuelConsumptionFor1km {get; set;} public int TraveledKilometers {get; set;} public void TryTraveledThisDistance(int currentDistanse) { double amountOfFuelRequired = FuelConsumptionFor1km * currentDistanse; if(FuelAmount > amountOfFuelRequired) { FuelAmount -= amountOfFuelRequired; TraveledKilometers += currentDistanse; } else { Console.WriteLine("Insufficient fuel for the drive"); } } } // end class Car public class Program { public static void Main() { var orderCar = new Dictionary<string, Car>(); int n = int.Parse(Console.ReadLine()); for(int i = 0; i < n; i++) { string[] input = Console.ReadLine() .Split(); string model = input[0]; double fuelAmount = double.Parse(input[1]); double fuelConsumptionFor1km = double.Parse(input[2]); orderCar[model] = new Car{FuelAmount = fuelAmount, FuelConsumptionFor1km = fuelConsumptionFor1km}; } // end for string inputPrim = null; while ((inputPrim = Console.ReadLine()) != "End") { string[] input = inputPrim .Split(); string currentModel = input[1]; int currentDistanse = int.Parse(input[2]); foreach (var kvp in orderCar) { if (kvp.Key == currentModel) { kvp.Value.TryTraveledThisDistance(currentDistanse); } } // end foreach for TryTraveledThisDistance } // end while for Drive foreach (var kvp in orderCar) { // AudiA4 1.00 50 Console.WriteLine("{0} {1:0.00} {2}", kvp.Key,kvp.Value.FuelAmount, kvp.Value.TraveledKilometers); } } } // 2 // AudiA4 23 0.3 // BMW-M2 45 0.42 // Drive BMW-M2 56 // Drive AudiA4 5 // Drive AudiA4 13 // End // // AudiA4 17.60 18 // BMW-M2 21.48 56 // 3 // AudiA4 18 0.34 // BMW-M2 33 0.41 // Ferrari-488Spider 50 0.47 // Drive Ferrari-488Spider 97 // Drive Ferrari-488Spider 35 // Drive AudiA4 85 // Drive AudiA4 50 // End // // Insufficient fuel for the drive // Insufficient fuel for the drive // AudiA4 1.00 50 // BMW-M2 33.00 0 // Ferrari-488Spider 4.41 97 A reliable informative post that you have shared Wonder Woman Jacket and appreciate your work for sharing the information. I have just visited your website, and I am glad to read the article's blade runner 2049 coat. The quality of the blogs is amazing, very well written and the language is extremely easy. This is my first visit to your site. You are providing a good articles. Now I am feeling a lot because for a longtime, I have missed your site. Concept is too good. Money heist costume
https://softuni.bg/forum/28281/more-exercises-objects-and-classes-3-speed-racing
CC-MAIN-2020-45
refinedweb
624
51.24
Difference between revisions of "Prime numbers miscellaneous" Latest revision as of 23:18, 21 June 2019 For the context to this, please see Prime numbers. Contents. Prime Wheels The idea of only testing odd numbers can be extended further. For instance, it is a useful fact that every prime number other than 2 and 3 must be of the form or . Here, prs2 | k <- [0..(p-1)], r <- rs, let r2 = n*k+r, r2 `mod` p /= 0] Combining both, we can make wheels that prick out numbers that avoid a given list ds of divis). One-liners Produce an unbounded (for the most part) list of primes. Using import qualified Data.List.Ordered as O (\\) = O.minus -- _not_ Data.List.\\ _Y g = g (_Y g) fix g = x where x = g x -- from Data.Function f <$> (a,b) = (a, f b) -- from Data.Functor unfold f a | (xs,b) <- f a = xs ++ unfold f b -- unfold f = concat . Data.List.unfoldr (Just . f) Here goes: [n | n<-[2..], product [1..n-1] `rem` n == n-1] -- Wilson's theorem Data.List.nubBy (((>1).).gcd) [2..] -- the shortest one [n | n<-[2..], all ((> 0).rem n) [2..n-1]] -- trial division [n | n<-[2..], []==[i | i<-[2..n-1], rem n i==0]] -- implementing `all` [n | n<-[2..], []==[i | i<-[2..n-1], last [n, n-i..0]==0]] -- and `rem` -- is this still a trial division? [n | n<-[2..], []==[i | i<-[2..n-1], j<-[0, i..n], j==n]] -- or a "forgetful" ... sieve of Eratosthen) fix $ map head . scanl (\\) [2..] . map (\p -> [p, p+p..]) -- executable spec fix $ map head . scanl ((\\).tail) [2..] . map (\p -> [p*p, p*p+p..]) _Y $ (2:) . concat . snd . mapAccumL (\(x:t) ms -> (t \\ ms, [x])) [3..] . map (\p -> [p*p, p*p+p..]) _Y $ (2:) . concat . snd . mapAccumL (\xs ms -> (\(h,t) -> (t \\ ms, h)) $ span (< head ms) xs) [3..] . map (\p -> [p*p, p*p+p..]) [n | n<-[2..], all ((> 0).rem n) [2..floor.sqrt.fromIntegral$n]] 2 : [n | n<-[3,5..], all ((> 0).rem n) [3,5..floor.sqrt.fromIntegral$n]] -- optimal trial division $ \ps -> 2 : [ n | (p, px) <- zip ps (inits ps), n <- take 1 -- sideways [n | n <- [p+1..], and [mod n p > 0 | p <- px]]] fix $ concatMap (fst.snd) -- by spans between primes' squares . iterate (\(p:t,(h,xs)) -> (t,span (< head t^2) [y | y<-xs, rem y p>0])) . (, ([2,3],[4..])) -- (xs \\ [p*p, p*p+p..]) -- segment-wise _Y $ \ps-> 2:[n | (r:q:_, px) <- (zip . tails . (2:) . map (^2) <*> inits) ps, n <- [r+1..q-1], all ((> 0) . rem n) px] -- n <- [r+1..q-1] Data.List.\\ [m | p <- px, m <- [0,p..q-1]]] -- n <- foldl (\\) [r+1..q-1] [[0,p..q-1] | p <- px]] -- n <- [r+1..q-1] \\ O.foldt O.union [] -- [[s,s+p..q-1] | p <- px, s <- [div (r+p) p * p]]] -- a sieve finds primes by eliminating the (\\) -- APL-style . scanl1 (zipWith (+)) $ repeat [2..] tail . unfold (\(a:b:t) -> (:t) . (\\ . (,)-> a \\ [i*i, i*i+2*i..n]) (2:[3,5..n]) [3,5..(floor . sqrt . fromIntegral) n] primesTo n = 2 : foldr (\r z-> if (head r^2) <= n then head r : z else r) [] (fix $ \rs-> [3,5..n] : [t \\ [p*p, p*p+2*p..] | (p:t)<-rs]) primesTo = foldr (\n ps-> foldl (\a p-> -- with repeated square root a \\ [p*p, p*p+p..n]) [2..n] ps) [] . takeWhile (> 1) . iterate (floor . sqrt . fromIntegral) concatMap (\(a:b:_)-> drop (length a) b) -- its dual, with repeated . tails . scanl (\ps n-> foldl (\a p-> -- squaring, ((\\ [p*p, p*p+2*p..]) t, ps))) ([5,7..], : \\ map (*p) a)) [1..] -- Euler's sieve unfold (\(p:xs) -> ([p], xs \\ map (*p) (p:xs))) [2..] unfold (\(p:xs) -> ([p], xs \\ [p, p+p..])) [2..] -- the basic sieve unfold (\xs@(p:_) -> (\\ [p, p+p..]) <$> splitAt 1 xs) [2..] fix $ (2:) . unfold (\(p:ps,xs) -> (ps ,) . -- postponed sieve (\\ . Some definitions use functions from the data-ordlist package, and the 2-3-5-7 wheel wh11; they might be leaking space unless minus is fused with its input (into gaps/ gapsW from the main page).] -- = fix $ map head . scanl minus [2..] . map (\p -> [p, p+p..]) primes = 2 : 3 : [5,7..] `minus` unionAll [[p*p, p*p+2*p..] | p <- tail primes] --
https://wiki.haskell.org/index.php?title=Prime_numbers_miscellaneous&curid=6915&diff=0&oldid=62917&printable=yes
CC-MAIN-2019-30
refinedweb
728
88.53
02 October 2011 16:08 [Source: ICIS news] BERLIN (ICIS)--A recent weakening of the Brazilian real (R) is good news for the local chemical industry because it will help boost exports while slowing the influx of imported material, a producer said on Sunday. The real weakened by around 15% in recent weeks, pressured by jitters in the global economy and a small drop in interest rates in the country, market sources said. The Brazilian currency started October trading at R1.88/US dollar, compared with R1.59/US dollar on 31 August. The strong real has been a struggle for the Brazilian chemical industry because it makes it difficult to compete with cheaper imports, the specialty chemicals producer said on the sidelines of the 45th annual European Petrochemical Association (EPCA) meeting. Brazilian chemical imports have skyrocketed in 2011, rising by around 30% so far this year, the source said. Meanwhile, chemical consumption in ?xml:namespace> “We are all hoping the real will continue to trade in the R1.80/dollar range going
http://www.icis.com/Articles/2011/10/02/9496850/epca-11-brazil-chemicals-producer-cheers-weaker-exchange-rate.html
CC-MAIN-2015-06
refinedweb
173
54.83
gomi Primitive macros in go, micros. All gomi files should end with .gomi gomi introduces 2 features, simple text replacement micro #mi and the shout keyword, a micro for errors Declaration of ALL micros should take place at the top of the file, between package and import keywords package main #mi PI 3.14159 #mi obj_type Object.content.message.GetType() import ( "fmt" "os" ) shout shout is a micro for errors, example usage: shout err // or shout e := obj.Err() will get converted to if err != nil { panic(err) } // or if e := obj.Err(); e != nil { panic(e) } The default shout error handler is set to panic(V), but you can change it by using a micro #shout log.Fatal(V) How to use setup - install parser.go - build it by running go build -o gomi.exe parser.go - add the file path to the gomi executable into your PATH usage cd into .gomi file location and run gomi gen sample.gomi to generate .go file, or, you can use it just like the go compiler , just change go run .... to gomi run ....
https://golangexample.com/primitive-macros-in-go-micros/
CC-MAIN-2022-40
refinedweb
182
68.67
On Fri, Dec 19, 2008 at 1:42 AM, Michael Niedermayer <michaelni at gmx.at> wrote: > On Thu, Dec 18, 2008 at 10:25:53PM -0800, Jason Garrett-Glaser wrote: >>. > [...] >> @@ -2853,6 +2864,23 @@ >> } >> #endif >> >> +#if defined(CONFIG_GPL) && defined(HAVE_YASM) >> + if( mm_flags&FF_MM_MMXEXT ) >> + { > >> +#ifdef ARCH_X86 >> + c->h264_v_loop_filter_luma_intra = ff_x264_deblock_v_luma_intra_mmxext; >> + c->h264_h_loop_filter_luma_intra = ff_x264_deblock_h_luma_intra_mmxext; >> +#endif > > why is this under ARCH_X86 ? > ARCH_X86 is always defined in this context Under 64-bit, x264 doesn't compile functions that are never used on 64-bit (e.g. cases where all x86_64 machines would use the SSE2 code instead of MMXEXT, so why waste binary space). In x264, ARCH_X86 and ARCH_X86_64 are mutually exclusive. Should I use #ifndef ARCH_X86_64 then? Dark Shikari
http://ffmpeg.org/pipermail/ffmpeg-devel/2008-December/044606.html
CC-MAIN-2014-52
refinedweb
115
57.16
Apparel stock lot Surplus cancelled shipment Coats and Jacket t-shirts Stocklot Sourcing Agent US $0.5-2 / Piece 200 Pieces (Min. Order) 2015 apparel stocklot kids christmas hoodied vest jacket US $4.5-5.5 / Piece 1500 Pieces (Min. Order) stock lot of garments US $1-2 / Piece 2000 Pieces (Min. Order) company looking for sales agents shipping charge from china to india air india cargo rates--- Amy --- Skype : bonmedamy US $1-10 / Kilogram 0.5 Kilograms (Min. Order) garment stock lot sourcing service US $0.1-0.2 / Bag 1 Bag (Min. Order) Garments Worldwide importers buyers exporters manufacturers dealers buying agents stock lot buyers traders wholesalers US $0.5-2 / Piece 200 Pieces (Min. Order) apparel kids winter jacket product garments stocks wholesale market US $8.5-15 / Piece 500 Pieces (Min. Order) garment stock lot mumbai dhl courier service from china to aden yemen--- Amy --- Skype : bonmedamy US $1-10 / Kilogram 0.5 Kilograms (Min. Order) indian garments stock lot shandong dhl pom shanghai oil diffusser--- Amy --- Skype : bonmedamy US $1-10 / Kilogram 0.5 Kilograms (Min. Order) garment stock lot sourcing agents 800 Pieces (Min. Order) import agent for china products of wholesale used clothing apparel garment stock lot buyers US $3.22-6.26 / Piece 1000 Pieces (Min. Order) supply what is woven fabrics fabric agents US $1000-2200 / Ton 1 Ton (Min. Order) factory cheap wholesale polo US $1-5 / Piece 20 Pieces (Min. Order) apparel stock lot agents t shirt US $1-8.5 / Piece 200 Pieces (Min. Order) Stock lot 100% polyester printed fabric for upholstery/curtain/bedding set US $0.3-1.2 / Meter 5000 Meters (Min. Order) women star printed stole US $2.5-3 / Piece 12 Pieces (Min. Order) Best services apparel stock lot third-party inspection US $138-258 / Piece 1 Unit (Min. Order) Cheesy cotton printed stars cute clothes for boys wear split hem t-shirt US $1-5 / Piece 1000 Pieces (Min. Order) China wholesale market agent rayon spandex stock lot polyester rib fabric US $6.3-7.3 / Kilogram 25 Kilograms (Min. Order) Low prices 100 polyester waterproof polyester taffeta stock lot US $0.8-1.29 / Meter 1000 Meters (Min. Order) corporate readymade garments export agents stock lot US $12-22 / Piece 500 Pieces (Min. Order) 2015 latest fashion new style fashion men jeans garment stock lot import agent for china products US $3.52-6.26 / Piece 1000 Pieces (Min. Order) cheap polo shirt stock lot US $3-6 / Piece 50 Pieces (Min. Order) In Stock Lot Flower Floral Pattern Printed Polyester Satin Fabric US $0.3-0.7 / Meter 5000 Meters (Min. Order) stock a lot 260gsm 100%cotton FR +AST+anti-acid & alkali fabric for garment /coveralls US $3.21-3.26 / Meter 3000 Meters (Min. Order) Buying Request Hub Haven't found the right supplier yet ? Let matching verified suppliers find you. Get Quotation NowFREE Do you want to show apparel stock lot agents or other products of your own company? Display your Products FREE now! Related Category Supplier Features Supplier Types Recommendation for you related suppliers related Guide
http://www.alibaba.com/showroom/apparel-stock-lot-agents.html
CC-MAIN-2017-22
refinedweb
520
69.99
0 Hello! I need help to construct a regular expression in Python that will help me get the file name of a include directive in C++. I think that regular expressions are a good way to solve this problem, but I'm open to new ideas. Consider these following includes: #include "hello.h" #include "d/hello.h" #include "dir/hello.h" #include "dir\hello.h" #include <hello.h> #include "a\b\c.h" #include <ref\six\eight.h> #include "123\456/789.h" #include "bye.h" In all these cases, I want the name of the included file (for example, hello.h, c.h, eight.h, 789.h and bye.h). I have written a regular expression that's not working (I think). Here it is: fileNamePattern = re.compile(r""" [\\/"<] #The last part of the include file path begins with these characters." [a-zA-Z0-9_]+ [>"] #The end of include line must end with > or " (\s)* #Any whitespace character (including tabs, carriage returns) $ #The end of the string. """, re.VERBOSE) I'm calling the groups() method on every match and on every string passed, it returns None (means that none of the strings match this regular expression). Can you help me make a regular expression work with my case? Thanks in advance.
https://www.daniweb.com/programming/software-development/threads/344423/regular-expression-for-fetching-file-name-in-include-directive-in-c-file
CC-MAIN-2017-17
refinedweb
212
79.26
Join devRant Pipeless API From the creators of devRant, Pipeless lets you power real-time personalized recommendations and activity feeds using a simple APILearn More Search - "algorithm" - - Friend: "What is devRant?" Me: "A place where programmers tell jokes and complain." Friend: "Why dont you just do that irl?" Me: "Because we never test in production"15 - - - - - - Friend texted me some binary. Decided to impress him and decode it by hand. Spent 5 minutes decoding "I eat ass".6 - - - - Proud moment today when I actually made an hsv to rgb conversion algorithm by following a formula rather than copying code from stack overflow28 - - git blame git fired git depression git divorce git homeless git commit git job git house git wife --better exit11 - - - The cool thing about side projects is you can do whatever you want. The shitty thing is you never complete whatever you want.7 - - - - - - Algorithm: from the Greek "algos" (pain) and rhythmos (repetition), we derive the true meaning of this word: REPEATED PAIN. :D5 - - - - - - - Had two choices: shut down and update, or shut down. Clicked shut down. Working on updates, 30% complete. Cunts. Why did you even ask.10 - - I was googling what happens when an ambulance gets stuck in traffic, but was reminded that I am a dev...11 - dfox please put in a "support us" feature of some sort that lets us watch ads of our own free will. I hate using your resources while not working for it at all.44 - - Pi Project It's pinging Google and measuring the response time every three seconds, then graphs the result on the LED SenseHat. It's graphing wifi stability.13 - - SO GUESS WHAT IF YOUR SHITTY WIFI CRAPS OUT DURING A VISUAL STUDIO UPDATE, VISUAL STUDIO FUCKING COMMITS SUICIDE MICROSOFT CAN SUCK A BIG, VEINY COCK. IM SO DONE WITH THEIR SENSITIVE, CONVOLUTED, SLOW IDE.21 - - Progress on my UWP file explorer! Here's some screenshots. I really enjoy the Acrylic material, and my favorite page is the "There's nothing here" page because it came out EXACTLY as I wanted it. More images in the comments!34 - There are two things about arrays that sometimes confuse me: [0]: They start at zero [1]: They end at one less than the length15 - - - In a previous rant I said I was gonna flip some bits. I couldn't wait until the weekend. I flipped some fucking bits, right fuckin now.17 - A story of love, loss, and devRant. My favorite sunglasses were a victim of hurricane Irma. They were sitting on the park bench when a powerful gust of wind blew them onto the cement floor, where both lenses were fucked right where I look out of them. I bought these sunglasses at Disney with my family and have not stopped wearing them since. I was pretty upset. Enter devRant. Sad and without sunglasses, I hoped that virtual ones might suffice. Lo and fucking behold, in the profile editor, there they were: my exact sunglasses, even with the choice between silver and gold. Absolutely fucking perfect. Made my day.10 -.9 - "Fuck JavaScript, its such a shitty language" seems to be quite a common rant today. It seems as if JS is actually getting more hate than PHP, which is certainly odd, considering the stereotype. So, as someone who has spent a lot of time in JS and a lot of time elsewhere, here are my views. Please, discuss your opinions with me as well. I am genuinely interested in an intelligent conversation about this topic. So here's my background: learned HTML/CSS/JS in that order when I was 12 because I liked computers. I was pretty shitty at JS until U was at least 15, but you get the point, Ive had it sploshing about in my brain for a while. Now, JS certainly has its quirks, no doubt, but theres nothing about the language itself that I would say makes it shitty. Its a very easy leanguage to use, but isn't overdeveloped like VB.net (Or, as I like to call it, TheresAFunctionForThat) Most of the hate is centered around JS being used for a very broad range of systems. I doubt JS would be in the rant feed so often if it were to stay in its native ecosystem of web browsers. JS can be used in server backend, web frontent, desktop and mobile applications, and even in some system services (Although this isn't very popular as of yet). People seem to be terrified that one very easy to learn language can go so far. And, oh god, its interpreted... How can a system app run off an interpreted language? That's absurd. My opinion on JSEverything is that it's progress. Thats what we're all about, right? The technologies already in place are unthreatened by JS, it isn't a gamechanger. The only thing JS integration is doing is making tedius and simple tasks easier. Big companies with large systems aren't going to jump ship and migrate to JS. A startup, however, could save a fucking ton of development time by using a JS framework, however. I want to live in a world where startups can become the next Google, because technology will stagnate when youre trying to protect your fortune, (Look at Apple for fucks sake) but innovation is born of small people with big ideas. I have a feeling the hate for JS is coming from fear of abandoning what you're already doing. You don't have to do that. JS is only another option (And a very good one, which is why it's becoming so popular). As for my personal opinion from my experiences... I've left this part til the end on purpose. I love programming and learning and creating, so I've never hated a lamguage, really. It all depends on what I want to do. In the times i've played arpund with JS, I've loved it. Very very easy. The idea of having it on both ends of web development makes a lot of sense too, no conversion, just direct communication. I would imagine this really helps with speed, as well. I wouldn't use it in a complicated system, though. Small things, medium size projects: perfect. Running a bank? No. So what do you think about this JSUniverse?13 - - I've always wanted to experiment with encryption but never do. This weekend, I'm fucking doing it. Even if I'm just flipping a few bits around, I'm fucking gonna flip those bits like they've never been flipped before and they are gonna FUCKING LIKE IT.4 -!60 - Dabbling in 3D modeling, I noticed the top of one of these fireworks has polygons visible while the others are shaded smoothly. Good job, Clash of Clans...11 - - I spent three hours coding some algorithm and then looked it up on Stack Overflow out of curiosity. Big surprise. Someone did the exact same thing but more effeciently, and had all the code in his answer on SO. Ughhhhhhhh 😓4 - - If I keep ignoring issues they'll eventually overflow and I'll have a negative amount of bugs, right?4 - - - - Seeing someone prototype a 3D game with complex lighting using OpenGL in a 15 minute video (It was sped up about 4x but, still, fuck me) Using c. Not c++. He also did 3D graphics in BASIC from scratch to explain how they work, generally.15 - 2 - - I always use brackets for clarity even if there is only one statement inside them if (boolean){ function (); } Cus it's so much easier to read, and if I need to add statements after the if I don't need to remember to add brackets. Plus the else may need brackets and an if with no brackets but an else with brackets looks awful.14 - After a long time in .NET and JavaScript, I have returned to Java today. I was quite happy about it for a few minutes!3 - Me: *Has 3 difficult exams to study for and hours of work* Also me: I should try my hand at encryption in Python.7 - - I have a co-worker that always uses "I'm too old" as an excuse. You're 33. You're just a lazy piece of shit that doesn't take care of his body. Fuck you, do work.7 - - - Not sure if it was the missing spaces in the title, the text lingo and bad grammar in the description, or the clickbait-style graphic but I AM SOLD.5 - Boils my fucking blood. How many billions of dollars is Microsoft worth again? And they still make me update my fucking updater? Why is there even a separate program for updating??18 - - - I didnt make my root partition big enough fuuuuuuuuck Stupid fucking tutorial said "10GB should be enough!" Should have listened to myself. Fuck me.18 - - Like a bad relationship Be really excited for the first month or so then once the new car smell starts to fade, lose interest and dump it.3 - - - - - - All right Bois, it's my first 3D model in years: a lamp post's base. Jesus fuck this shit has a learning curve.15 - HURRICANE INSPIRED PI PROJECT! So in anticipation of hurricane Irma I built this little thing which measures barometric pressure using my raspberry pi's sense hat! It also adds the data to a graph. Very fun!9 - - - - - - Why do you wake up tired?? Isn't that what you go to bed to fix?! What the fuck kind of bullshit is this?!13 - - - Scripting languages, markup languages, database querying languages, etc. Are all types of programming languages. A program is a set of instructions for a computer to follow. HTML is a programming language, fight me.48 - - When you can code thousands of lines but is lost of words when trying to communicate to humans how you did it.4 - - Since I have been working a LOT with terminal graphics lately, I made a really shit bitmap machine in JavaScript so I can draw bitmaps and get the int value instantly! Very proud of self, took 10 minutes between Overwatch games.8 - My PlayStation 1 has never needed an update, but these days, everything comes with an updater. Like, "oh, boy, my TV needs to update again". There's something wrong.50 - - - Was writing a comment with an unsmiley emoji IDE auto-completed the open parenthesis. I was surprised. :()3 - Me: *unsubscribes from Twitter emails* Twitter: successfully unsubscribed! We're still gonna see you emails tho, lmaoo2 - Asked a question on stack overflow, immediately got an upvote. Maybe 2018 will be different after all.3 - For all you meme lovers namespace Improvise { static class Adapt { int Overcome; } } Improvise.Adapt.Overcome2 - Got arch Linux working on my laptop. Installed Budgie GNOME, Cairo dock, Termite, VS Code, Code::Blocks, Android Studio, IntelliJ.... It's so beautiful9 - Okay, sublime text is amazing. Super fast and easy on the eyes. If I had $70 I would purchase a license.20 - Juste design a new algorithm to evaluate the chance of a project (any kind of project) will exceed the budget (time of cost). This algorithm been proven to be right almost all the time.1 - This spring I was working on a library for an algorithm class at uni with some friends and one of the algorithm was extremely slow, we were using Python to study graphs of roads on a map and a medium example took about 6-7h of commission to finish (I never actually waited for so long, so maybe more). I got so pissed of for that code that I left the lab and went to eat. Once I got back I rewrote just the god-damned data structure we were using and the time got down to 300ms. Milliseconds! Lessons learned: - If you're pissed go take a walk and when you'll come back it will be much easier; - Don't generalize to much a library, the data structure I write before was optimized for a different kind of usage and complete garbage for that last one; - Never fucking use frozen sets in Python unless you really need them, they're so fricking slow!3 - Someone should write a really infectious virus for Linux to make all the fanboys shut the fuck up about security.11 - - Not a rant about anything in particular. Just a summary of some feelings stored in the hateful part of my heart. Developing for Android: Add this third-party library to your Gradle build. Use (this) built-in Android class to make the thing work. *Clicks link Deprecated since API version SUCKMYDICK-7. Use (this) instead *Clicks link Deprecated since API version LICKMYBALLS-32. Use... Developing for Windows: Please use (this) API call. It was literally already available before Bill Gates was born. Carbon dating has placed this item to older than the universe itself and it is likely the entry point for the big bang. It is also still the best way to accomplish (task). Developing for Linux: "Hmm, I wonder how to use this" > > > Some shitty mailing list in small blue monospace font tells you to reference a man page that is three versions behind but the only version available. What? Those three sentences didn't explain it enough? Well, maybe you aren't cut out for this type of thing. JavaScript: you know how it is. SQL: You expect a decent-quality answer from stack overflow but you always get an outdated and hacky response and it's using syntax from Microsoft SQL. You need MySQL. C#: A surprising number of Microsoft forum results ranking high on Google. You click on one in hopes that it will be of any sort of quality. You quickly close the tab and wonder why you ever even had hope. Literally any REST API: Is it "query" or "q"? "UserID" or "user_id"? Oh, fuck, where's the docs again? You thought you escaped JavaScript, but it was a trick!: Some bullshit library you downloaded to make your other library work redefined one of the global variables in the project you inherited. Now you get 347 "<x> is not a function" errors in your console. Good luck, asshole. FontAwesome/ Material fonts/ Any icon font pack: You search "Close" for a close button icon. No results. You search "Simplified railroad crossing sign without the railroad". You get a close icon. I think that's all of my pent up rage. Each of them were too small for an individual rant so I had to do this essay - Today, social media databases around the world will be forced to accept billions of shitty firework photos.1 - - - - - - Genuinely upset that I don't have wifi Fuck Irma Fuck this. I can deal without a/c, not wifi. Kill me - - - Recently, Comcast limited my bandwidth to 1TB. I'd be upset, but my service is so slow I don't think I could use it all in just a month!7 - !rant After reading the wiki page over and over, I was finally able to implement Dijkstra's algorithm 😃4 - - The worst part about using the computers at my school is that my $200 mechanical rgb programmable keyboard with macro keys is at home.6 - - uhh interesting, bing search engine algorithm open sourced. Will anybody have a look at it? - - Is there a bug in the way it's decided which posts are displayed? This is on algorithm mode. There are a lot more 250++ posts below here with a couole of 1++ posts in between. And almost all of them I've seen before and ++d them23 - - Power-out inspired pi project! Just a simple flashlight. Brightness and color can be controlled by the joystick. It runs off a phone battery pack6 - - A LOT of this article makes me fairly upset. (Second screenshot in comments). Sure, Java is difficult, especially as an introductory language, but fuck me, replace it with ANYTHING OTHER THAN JAVASCRIPT PLEASE. JavaScript is not a good language to learn from - it is cheaty and makes script kiddies, not programmers. Fuck, they went from a strong-typed, verbose language to a shit show where you can turn an integer into a function without so much as a peep from the interpreter. And fUCK ME WHY NOT PYTHON?? It's a weak typed but dynamic language that FORCES good indentation and actually has ACCESS TO THE FILE SYSTEM instead of just the web APIs that don't let you do SHIT compared to what you SHOULD learn. OH AND TO PUT THE ICING ON THE CAKE, the article was comparing hello worlds, and they did the whole Java thing right but used ALERT instead of CONSOLE.LOG for JavaScript??? Sure, you can communicate with the user that way too but if you're comparing the languages, write text to the console in both languages, don't write text to the console in Java and use the alert api in JavaScript. Fuck you Stanford, I expected better you shitty cockmunchers.31 - Computer science students and data scientists rejoice, "All algorithms" implemented in many common languages: - Can anyone recommend good books for coding algorithms? Any tips and tricks would also be helpful. Thanks.12 - my Input: want from A -> B The algorithm of my Public transport App result: Walk from A to B, then take the tram from B to A. From there you walk to B. Ooooookay.3 - FUCK capitalist greed! I have befallen to their tricks once again. The daily dosage on my gummy vitamins was three a day but the total gummies in the container wasn't divisible by three so I had to buy three containers and eat one from each per day!21 - - For God sake tinder, fix your fucking algorithm. Why are you showing my beautiful and out of my league girls. I don't want depression everytime I open tinder. Please show me avarage looking girls.12 - - - - - - Dear management, I require one vertical monitor for my JS, one vertical monitor for my HTML, one vertical monitor for my CSS, and two stacked ultrawide monitors for testing in browser. Thanks, Many loves, Algo.17 - The Luhn algorithm implemented in cobol to validate Swedish personal identification number. What do you think?9 - - Works as backend developer for 2 years now; Almost fails simple university algorithm course. I'm contemplating my whole life and career choice right about now.2 - I finally finished my implementation of a suffix array construction algorithm! 😄 It takes about 30 mins to process 1GB of text and uses all available RAM, but it works! 🎉 Now I can optimize it, the original algorithm is 3x faster.9 - Algo algo1 = new Algo(); Algo algo2 = new Algo(); algo1.setOpacity(.5); algo2.setOpacity(.5); // The following image is ENTIRELY unedited:11 - - The code for some of the backend of my 100% custom media server in front of the blurred frontend of my media server. It just looks cool7 - People who actually implement DRY: "Don't repeat yourself!" People who "implement" DRY but are morons: "Don't repeat yourself, never say the same thing twice, and try not to be redundant.. - My mind is crippled but I have always wanted to implement Dijkstra’s algorithm, in cobol. Solved by using adjacency matrix. What do you think?8 - Caught a nice pair with the algorithm today... Here's hoping your mornings were all something a little more in-between!1 - I FREAKED OUT I WAS A DIFFERENT PERSON FOR SOME REASON Dfox thanks for the heart attack, I thought I got hacked.7 - Historically, in operating rooms, surgeons would discard biomedical waste in buckets. When a patient died on the operating table, the lead surgeon would get so upset that he would "kick the bucket", which is now a term for when someone dies. That's a fake fact, just like "Java runs on 4 billion devices.8 - When you don't want to explain what you've done with application's code base so you play the "algorithm" card. Boss: tell me about the new release. Me: updated the search algorithms boss. Boss: cool. Release! Me: 😎 - - after hours of trying to get the wifi working... Manjaro KDE is functional... and it's beautiful Look at the blurred background behind the terminal! Magic! Everything's so snappy... and the dark theme... I'm in love.4 - - - I made a devRant bot!! It's an anti-devRant bot bot that spams the notifications of devRant bots. Just call @fuckbots <bot name> <message to have the attacked bot execute> @fuckbots doesn't have a blacklist, so once all other bots have been defeated, I will call @fuckbots fuckbots. It's a wonderful circle.12 - We were going over man in the middle attacks today and I honestly just could not stop thinking about that SpongeBob episode where Squidward keeps intercepting the bubble messages between SpongeBob and Patrick and it was so dumb that I could not stop smiling.3 - - - We all know you can't "learn x programming language in a day" without travelling to the Arctic and catching a day that last half a year. But what's the worst language to try and learn in a day? I vote c++. Manual memory management, multiple inheritance, static compilation, operator overloading, and generally non-human syntax ( Like std::cout << "This is how you print!" << std::endl; ) make it a difficult one to attempt in a day.27 - - - - - It's official. I'm making multiplayer Minesweeper. There's a Trello board, Discord server, and GitHub repo. Initial gamemodes will be singleplayer, public (like agar.io) and vs (2+ players, support for teams is planned). My big idea besides that is to have interesting powerups you can buy or find, adding a component of luck and some exciting new rules to the game. And I may even put ads on it and try to feed myself.15 - - What the fuck happened? Yesterday I spent about an hour downloading ~40 files and after a restart they disappeared???20 - - I FUCKING HATE THE INSTAGRAM ALGORITHM FOR SUGGESTED POSTS TO EXPLORE. You piece of shit; I have NEVER EVER looked at pictures of nature, but I get a shit load of suggested pictures of nature. Can't forget the time they spammed it with memes in a random language I can't read. OH. AND LETS NOT FORGET YOUR TIMELINE ALGO. IT'S JUST AS SHITTY. "Hmm. Let him see a picture posted a few minutes ago or one from a week ago? Fuck it, a week ago it is!"7 - Last night tried to use VLC to rip DVDs for my home media server. All three failed in some manner! What a waste of time! Trying HandBrake today.23 - According to Steam Stats, Ethiopia has downloaded 1.5 terabytes of games in the past 7 days. Madagascar has downloaded 5.2 China is in the lead with 79 PB, followed closely by America with 60.12 - Whenever I feel like too much of an idiot to be a developer, I go to settings and tap my device info 10 times.2 - - - - - - - Well not bad for my first try eh? I implemented a std::vector-like container and it's about 4 times as fast as std::vector10 - *Writes email* *Reads over email* *Reads over a second time* *Skims over one last time* *Sends* *Goes to outbox* *Reads what I just sent* "Hope I didn't miss anything"2 - It has taught me to accept that I am frequently wrong. Not just when faced with code but with people too. All the years of "It can't possibly be MY code that's wrong" which of course always turns into "Well, I guess it was my code..." Had helped me think critically in relationships, politics, and many other areas of my life. Programming had actually heavily influenced my behavior and I would say it is largely for the better. However, one negative effect it has had on me is that I am less of an optimist. Code is very "cause and effect". This means a lot of my life is "no surprises" and "you get what you give" So I often feel like the most likely outcome is probably just the one that's gonna happen. There are no surprises, no miracles. Life is cause and effect.1 - - Hey Machine Learners and AI pips, whats the algorithm for detecting if a person is an asshole? :p11 - -).7 - When you spend a couple of hours considering how to change your implementation of Dijkstra's algorithm to be directional only to realise it already was Fuck! Self.setIntelligence(-1)3 - There's this little, self-loathing sigh I do every time my program locks up because I forgot to iterate inside a while loop - - - - Has anybody noticed that "this" in JavaScript is sort of a pain in the ass to keep track of? It appears to have identity issues.22 - - - - Me trying to impress my girlfriend with the light painting. Motivated me to develop my own algorithm for the effect.4 - Just fixed an error that I seeked for a long time in my optimization algorithm. Results are much worse now. Feature it is - Y'all know those classes where you get 100s on the assignments and understand the material, and participate in class discussions, but get low 50s on the tests? yea. Fuck those classes - Written coding test, first question : Form the minimum spanning tree of the given graph using Kruskals algorithm. Plot twist : No weights given. Assume unweighted graph4 - - - - Anyone played Human Resource Machine yet? It's fun to solve and optimize these algorithm puzzles after a hard work day. Does anyone know other games like this?5 - SELECT * FROM PEOPLE WHERE GIRLFRIEND = 'Y'; > 0 rows returned *Sigh* SELECT VIDEO FROM WEBSITES WHERE URL LIKE '%hub%';20 - - We have phonetic alphabets to clearly describe spellings (d as in Delta, etc.) What's your best misleading phonetic alphabets? I'll start: P as in pneumonia H as in honest18 - Guess I asked for it by joining the Windows Insider fast lane but my desktop background has disappeared :((3 - - Do you guys find yourself ignoring things you should be using just because you're too stubborn to learn how they work? Because I just used std::shared_ptr for the first time today.1 - While I know I can save a few characters, I think that if(username.length > 0 && password.length > 0) looks a lot nicer than just if(username.length && password.length) However, I am so lazy that I will probably always use the uglier version just to save like 0.00001 seconds typing.17 - - - - - - - I’m really liking the Algorithm Design Manual so far, except for the fact that there’s not a glossary somewhere for “random mathematical symbols you have probably never had to deal with but we will use in psuedocode examples”1 - - Dear Windows, Why can't you FUCKING remember my choice to "Open folder to view files"? WHY BOTHER asking every single time I plug in a USB if you can't be fucked to remember what I say?? Why is this even an issue?3 - - Has anyone else worked in business environments and found... em.. "wannabe-tech decisions?" For example, naming stuff with shortened words and underscores instead of spaces.... for no real reason? Or maybe using the word "database" a little too often, just to use the word? (similar to the way you might call someone by name, only to confirm to them that you have learned their name?) It doesn't actually bother me, rather, I think it's a bit cute that these people are interested in our culture and want to be a part of it, even if it's in sort of silly ways like this.4 - - deep thoughts on algorithm design final exam ... He wanted to make sure we fully understood the idea. :) No idea about my score -. - Spent a good minute figuring it out, but ES modules are pretty great. Using Node.JS 13 (and the --no-warnings flag) I can use the exact same file on both client and server. Wonderful!2 - - - The worst realtime coding you'll ever see.."Require efficient.... correction normal shipping/delivery algorithm - Captain America: Winter Soldier Thug officer: "Deploy Algorithm" Me as random operator: "Like all of them? At once? There are tens of thousands running on this thing" - - - Teacher explained how genetic algorithm works by using us as a population. Best way to hear about GA for first - anybody ever work with ProjectQ or QISKit? I'm doing a project for my algorithms class on Shor's algorithm, and I'm trying to find a guide for an implementation. - - - There should be algorithm in DevRant to identify duplicate photos - same memes/photos posted in the last 48hours or less. To stop seeing the same meme posted by 5 developers on the same day. Thoughts?20 - - I need to compare the JSON results of an API before and after a code change. But it was also moved to another API. However some fields are auto-generated like timestamp or derived off the url (resource links). Also if a JSON list is returned it maybe in different order... Wondering is there a quick way to test text likeness? I've done it before but just used matching status code and maybe measuring the diff in response size7 - Trying to make my first genetic algorithm program "to be or not to be" in javascript.... (coming from java and experimenting a bit) Can't even get past instantiating a function/class Gene.js file into the main.js file. -_- I got a long way to go...2 - Whenever I had to pronounce Djikstra's Algorithm. My tongue be like - dijjuuksstrra... Fuck you I am commiting suicide between these teeth.8 - Update to Doomsday !rant which can be found here: Hey Everyone I'd like to let everyone know that I finally found the original file for my Doomsday Algorithm and thought it would be fun to show you the dirtiest code I've ever written (I was 12 when I wrote it). here is the github link,... , and I'll likely update it will more cleaner languages in the future.2 - How to deduce the time complexity of any algorithm faster? And is there any software that calculates it and suggest another optimal algorithm instead?1 - - - - - - I got one task left... One algorithm to solve... That's the second day I'm on it... And I need to sleep so much... Fix yourself please, let me write random lines and please work...1 - - with the while demonetize drama around youtube and google blaming the "algorithm", one would think that TESTING ON THE FUCKING PRODUCTION SERVER WAS A BAD IDEA 😡1 - What is the best string comparison algorithm ? Example Use case: group messages from your telco into groups of related messages. I have gone through the various algos and implanted them. I just want your opinion if you have experience on this.15 - -. - #include <stdio.h> /* * Windows Update Algorithm */ int main() { int percent = 1; while (percent <= 100) { printf("Working on updates\n"); printf("%i %% complete\n", percent); printf("Don't turn off your computer\n\n"); if (percent == 30) { printf("Restarting\n"); break; } percent++; } return 0; } -. - The Devrant Algorithm not only reads everything and then sort by sarcasm but sees everything and then sort by sarcasm. Even puts Google's cloud vision to shame. - - - Anybody here implemented Dynamic Time Warping (DTW) algorithm? I need to implement it for a school project. Its basically an android application and want to authenticate users using this algorithm. Will appreciate any help possible.2 - Algorithm suggestions for 2d and 3d volumetric combination of voxels? I built an image to voxel converter for exporting to the game Avorion over the weekend. I am using a naive approach and treating each pixel as a box shape. I need to learn how to write an algorithm for combining voxels with the same color into rectangular cuboids. This will reduce the imported shapes. As is the game has issues with 64x64x1 images on import. It would be good to reduce the object count by creating cuboids with the pixels that can be combined. I would like to learn to write the algorithm for both 2d and 3d.4 - 3D Engines, and A* algorithm (aswell as pathfinding in usual), this is something I still can't get my head around and it is bothering me to the bone. - If you wanted to conquer the world using ninja kittens, which positions of power would you occupy first? I need advice on programming the mental conditioning algorithm.1 Top Tags
https://devrant.com/search?term=algorithm
CC-MAIN-2021-39
refinedweb
5,474
74.08
The Internet Message Access Protocol (IMAP) was designed to allow for remote access, storage, and management of email messages. This ability to store messages on a central server is useful for a couple of reasons. First, it makes email available in more than one place. If your mail is on an IMAP server, you can switch between your desktop and your laptop and still access your mail. Second, it makes it easier to administer email for workgroups and corporations. Instead of having to track and back up email across hundreds of hard drives, it can be managed in a single, central place. The specification for the current version of IMAP (Version 4, Revision 1) is defined in RFC 3501 (). IMAP is a powerful but complicated protocol, and the RFC takes up more than 100 pages. It's the kind of protocol that would be a ton of work to implement yourself. Fortunately, the Twisted developers have written an complete IMAP implementation, which provides a nice API for working with IMAP servers. This lab demonstrates how to log in to an IMAP server and get a list of available mailboxes. 7.4.1. How Do I Do That? Use a subclass of imap4.IMAP4Client as your Protocol. In the serverGreeting function, call self.login with a username and password. Once successfully logged in, call self.list to get a list of the available mailboxes (see Example 7-5). Example 7-5. imapfolders.py from twisted.protocols import imap4 from twisted.internet import protocol, defer class IMAPFolderListProtocol(imap4.IMAP4Client): def serverGreeting(self, capabilities): login = self.login(self.factory.username, self.factory.password) login.addCallback(self._ _loggedIn) login.chainDeferred(self.factory.deferred) def _ _loggedIn(self, results): return self.list("", "*").addCallback(self._ _gotMailboxList) def _ _gotMailboxList(self, list): return [boxInfo[2] for boxInfo in list] def connectionLost(self, reason): if not self.factory.deferred.called: # connection was lost unexpectedly! self.factory.deferred.errback(reason) class IMAPFolderListFactory(protocol.ClientFactory): protocol = IMAPFolderListProtocol def _ _init_ _(self, username, password): self.username = username self.password = password self.deferred = defer.Deferred( ) def clientConnectionFailed(self, connection, reason): self.deferred.errback(reason) if __name__ == "_ _main_ _": from twisted.internet import reactor import sys, getpass def printMailboxList(list): list.sort( ) for box in list: print box reactor.stop( ) def handleError(error): print >> sys.stderr, "Error:", error.getErrorMessage( ) reactor.stop( ) if not len(sys.argv) == 3: print "Usage: %s server login" % sys.argv[0] sys.exit(1) server = sys.argv[1] user = sys.argv[2] password = getpass.getpass("Password: ") factory = IMAPFolderListFactory(user, password) factory.deferred.addCallback( printMailboxList).addErrback( handleError) reactor.connectTCP(server, 143, factory) reactor.run( ) Run imapfolders.py with two arguments: the IMAP server and your login username. It will prompt you for your password, log you in to the server, and then download and print a list of your mailboxes: $ python imapfolders.py imap.myisp.com mylogin Password: INBOX Archive Drafts Mailing Lists Mailing Lists/Twisted Mailing Lists/Twisted Web Sent Spam Trash 7.4.2. How Does That Work? imapfolders.py uses the familiar pattern of a ClientFactory and Protocol working together. The Protocol communicates with the server, and the ClientFactory provides a Deferred that the program uses to track the success or failure of the task at hand. The IMAPFolderListProtocol class in imapfolders.py inherits from IMAP4Client, which provides a nice Deferred-based interface. Every IMAP command returns a Deferred that will be called back with the reply received from the server. The use of Deferreds makes it easy to run a series of commands, one after the other: have the callback handler for each command run the next command and return its Deferred. There's one Deferred method used in imapfolders.py that hadn't been used in any of the previous examples: chainDeferred. The chainDeferred method is used when you want to take the results of one Deferred and pass them to another Deferred. The line deferredOne.chainDeferred(deferredTwo) is equivalent to: deferredOne.addCallback(deferredTwo.callback) deferredOne.addErrback(deferredTwo.errback) The IMAPFolderListProtocol in imapfolders.py has its serverGreeting method called when the connection to the server is established and the server has indicated that it's ready to accept commands. In response to serverGreeting, it calls self.login, and sets up a callback to self._loggedIn. Then it uses chainDeferred to send the eventual results of self.login back to self.factory.deferred. The order of these steps is important. Because the callback to self._loggedIn is added before the call to chainDeferred, self.factory.deferred won't receive the direct result of self.login, but the result returned by self. _loggedIn. As it turns out, self. _loggedIn returns another Deferred, the result of self.list, which will be run through self. _gotMailboxList to extract the mailbox names from the full response returned by the server. So self.factory.deferred is called back with the value that IMAPFolderListProtocol is supposed to provide: a list of mailbox names. One of the most useful things about using Deferreds in this way is error handling. Remember how in Example 7-2, the POP3DownloadProtocol had to check the results of every command, and call a function to let the factory know there was an error? There's none of that here. Since self.list is called as part of the event handler for self.login, and self.login is chained to self.factory.deferred, self.factory.deferred will receive any error that occurs in either of these steps. Getting Started Building Simple Clients and Servers Web Clients Web Servers Web Services and RPC Authentication Mail Clients Mail Servers NNTP Clients and Servers SSH Services, Processes, and Logging
https://flylib.com/books/en/2.407.1/listing_mailboxes_on_an_imap_server.html
CC-MAIN-2019-22
refinedweb
941
53.17
Hi, Advertising On 22/09/16 14:40, Jan Just Keijser wrote: > Hi, > > On 22/09/16 15:07, debbie10t wrote: >> Hi >> >> posting in devel because I am asking for clarification of >> what the source code really does. >> >> Re: >> >> Config: >> |--- >> server *normal stuff* >> log-append /tmp/openvpn.log >> --- >> >> I have just tried with Ubuntu1604 myself and observe that: >> (My basic config I added: --log /tmp/client1.log) >> >> 1. $ sudo systemctl start openvpn@client1 = log file *not* created >> 2. $ sudo openvpn client1.conf = log file created normally in /tmp >> >> Obviously, systemctl start openvpn@client1 appends more options when >> starting openvpn (in my hand written service the only addition is >> --daemon client1) So I presume that by daemonizing something changes >> with regard to writing the log file to /tmp ?? >> >> Also note, in the forum post --daemon is used within the config file. >> >> I did grep -E "/tmp" src/openvpn/* and found some code in init.c >> (line 664) but it's all C, foo, bar to me (Sea food bar ;-) ) >> >> Anyhoo, can anybody provide a brief and simple explanation ? >> >> Many thanks > most likely this , from 'man systemd.exec' > > PrivateTmp= > Takes a boolean argument. If true, sets up a new file > system namespace for the executed processes and > mounts private /tmp and /var/tmp directories inside it, > that are not shared by processes outside of the > namespace. This is useful to secure access to temporary > files of the process, but makes sharing between > processes via /tmp or /var/tmp impossible. All temporary > data created by service will be removed after > service is stopped. Defaults to false. > > > thus, the output *is* logged to /tmp/openvpn.log but the problem is > that it's not in the /tmp you'd expect. > There's nothing OpenVPN can do about this, it's one of those weird > idiosyncracies of systemd. > > HTH, > > JJK > > Thanks JJK, this was *exactly* the problem .. I removed PrivateTmp=True from the unit file, (which I had over looked) systemctl daemon-reload and systemctl start openvpn@client1 and the file appeared at /tmp/client1.log (also thanks to samuli for looking) Many thanks ------------------------------------------------------------------------------ _______________________________________________ Openvpn-devel mailing list Openvpn-devel@lists.sourceforge.net
https://www.mail-archive.com/openvpn-devel@lists.sourceforge.net/msg12543.html
CC-MAIN-2016-40
refinedweb
359
55.84
IMPACT WRESTLING ON THE ROAD - Every Thursday on SpikeTV 92,438 views 1 month ago Watch TNA's IMPACT WRESTLING every Thursday night at 8/7c on SpikeTV - featuring your favorite TNA stars and packed with nonstop action! Recent uploads - Updated Card for Slammiversary XI in Boston on June 2nd - IMPACT moves to 9/8c on SpikeTV starting May 30th - Bully Ray and Sting Negotiate the Stipulations for Slammiversary in Boston - May 16, 2013 - X Division Title: Kenny King vs. Chris Sabin vs. Petey Williams - The Tag Team Championship Match is Made for Slammiversary - May 16, 2013 - D-Lo Brown vs. Joseph Park - May 16, 2013 - Christian York And Jay Bradley Fight for A Chance to Enter the Bound For Glory Series - May 16, 2013 - Suicide Returns Next Week on IMPACT WRESTLING - View all1,000+ items Playlists Inside IMPACT - Bad Influence after the Contenders Match (InsideIMPACT) - Magnus on the war vs. The Aces and Eights (InsideIMPACT) - Gail Kim on why she attacked Tara (InsideIMPACT) - Zema Ion talks about Chris Sabin's return (Inside IMPACT) - Is It The End For Aries and Roode As A Team? (Inside IMPACT) - Taryn Terrell: I'm Here To Be Knockouts Champion! (Inside IMPACT) - Bad Influence wants to reform Fortune (Inside IMPACT) - Rob Terry: No One Can Stop Me! (Inside IMPACT) - Mickie James after losing to Velvet Sky (Inside IMPACT) - Hulk Hogan talks Sting's return, Aces and Eights (Inside IMPACT) - Kenny King survives The Canadian Destroyer (InsideIMPACT) - Rob Terry on his future in IMPACT WRESTING - (InsideIMPACT) The TNA Knockouts - Music Video: The TNA Wrestling Knockouts - The TNA Wrestling Knockouts Website | Dare To Be! - All The Pretty Girls | The TNA Knockouts Website - The NEW Knockouts Website: DARE TO BE | Featuring Your Favorite Knockouts - Velvet Sky on what's in store for the TNA Knockouts - Velvet Sky: Did You Know? - Velvet Sky explains the "Dare To Be" concept - SoCal Val: Dare To Be Glamorous - SoCal Val: Did You Know? - SoCal Val explains the "Dare To Be" concept - SoCal Val: Dare To Be Glamorous - ODB: What's Planned For The TNA Knockouts
http://www.youtube.com/user/TNAwrestling
CC-MAIN-2013-20
refinedweb
344
60.18
Implement a chat server From HaskellWiki 1 Introduction This page describes how to implement a simple chat server. The server should support multiple connected users. Messages sent to the server are broadcast to all currently connected users. For this tutorial we'll use Network.Socket, which provides low-level bindings to the C-socket API. 2 Simple socket serverWe start with a simple server. The structure of this server begins with a -- in ChatServer.hs ChatServer 3 Using System.IO for sockets import System.IO [...] runConn (sock, _) = do hdl <- socketToHandle sock ReadWriteMode hSetBuffering hdl NoBuffering hPutStrLn hdl "Hi!" hClose hdl 4 ConcurrencySo far the server can only handle one connection at a time. This is ok for just writing a message but won't work for a chat server. We can fix this quite easily though, using import Control.Concurrent [...] mainLoop sock = do conn <- accept sock forkIO (runConn conn) mainLoop sock 5 Adding communication between threadsThis seems to be a hard problem. Luckily, the type Msg = String import Control.Concurrent.Chan [...] main = do [...] chan <- newChan mainLoop sock chan mainLoop :: Socket -> Chan Msg -> IO () mainLoop sock chan = do conn <- accept sock forkIO (runConn conn chan) mainLoop sock ch 6!
https://wiki.haskell.org/index.php?title=Implement_a_chat_server&redirect=no
CC-MAIN-2016-07
refinedweb
199
59.5
Reducing Abandoned Shopping Carts In E-Commerce - By Keir Whitaker - October 23rd, 2014 - 42 Comments In March 2014, the Baymard Institute, a web research company based in the UK, reported that 67.91%1 of online shopping carts are abandoned. An abandonment means that a customer has visited a website, browsed around, added one or more products to their cart and then left without completing their purchase. A month later in April 2014, Econsultancy stated2 that global retailers are losing $3 trillion (USD) in sales every year from abandoned carts. Clearly, reducing the number of abandoned carts would lead to higher store revenue — the goal of every online retailer. The question then becomes how can we, as designers and developers, help convert these “warm leads” into paying customers for our clients? Further Reading on SmashingMag: - Fundamental Guidelines Of E-Commerce Checkout Design3 - Local Storage And How To Use It On Websites4 - Boost Your Mobile E-Commerce Sales With Mobile Design Patterns5 - A Little Journey Through (Small And Big) E-Commerce Websites6 Before Cart Abandonment Link Let’s begin by looking at recognized improvements we can make to an online store to reduce the number of “before cart” abandonments. These improvements focus on changes that aid the customer’s experience prior to reaching the cart and checkout process, and they include the following: - Show images of products. This reinforces what the customer is buying, especially on the cart page. - Display security logos and compliance information. This can allay fears related to credit-card and payment security. - Display contact details. Showing offline contact details (including a phone number and mailing address) in addition to an email address adds credibility to the website. - Make editing the cart easier. Make it as simple as possible for customers to change their order prior to checking out. - Offer alternative payment methods. Let people check out with their preferred method of payment (such as PayPal and American Express, in addition to Visa and MasterCard). - Offer support. Providing a telephone number and/or online chat functionality on the website and, in particular, on the checkout page will give shoppers confidence and ease any concerns they might have. - Don’t require registration. This one resonates with me personally. I often click away from websites that require lengthy registration forms to be filled out. By allowing customers to “just” check out, friction is reduced. - Offer free shipping. While merchants might include shipping costs in the price, “free shipping” is nevertheless an added enticement to buy. - Be transparent about shipping costs and time. Larger than expected shipping costs and unpublished lead times will add unexpected costs and frustration. - Show testimonials. Showcasing reviews from happy customers will alleviate concerns any people might have about your service. - Offer price guarantees and refunds. Offering a price guarantee gives shoppers the confidence that they have found the best deal. Additionally, a clear refund policy will add peace of mind. - Optimize for mobile. Econsultancy reports that sales from mobile devices increased by 63% in 2013. This represents a real business case to move to a “responsive” approach. - Display product information. Customers shouldn’t have to dig around a website to get the information they need. Complex navigation and/or a lack of product information make for a frustrating experience. Unfortunately, even if you follow all of these recommendations, the reality is that customers will still abandon their carts — whether through frustration, bad design or any other reason they see fit. After Cart Abandonment Link The second approach is to look at things we can do once a cart has been abandoned. One tactic is to email the customer with a personalized message and a link to a prepopulated cart containing the items they had selected. This is known as an “abandoned cart email.”. In September 2013, Econsultancy outlined7 how an online cookie retailer recaptured 29% of its abandoned shopping carts via email. This is a huge figure and one we might naturally be skeptical of. To get a more realistic perspective, I asked my colleagues at Shopify8 to share some of their data on this, and they kindly agreed. Shopify introduced “abandoned cart recovery” (ACR) in mid-September 2013 (just over a year ago at the time of writing). Here’s a summary of its effectiveness: - In the 12 months since launching automatic ACR, $12.9 million have been recovered through ACR emails in Shopify. - 4,085,592 emails were sent during this period, of which 147,021 carts were completed as a result. This represents a 3.6% recovery rate. - Shop owners may choose to send an email 6 or 24 hours after abandonment. Between the two, 6-hour emails convert much better: a 4.1% recovery rate for 6 hours versus 3% for 24 hours. It’s worth noting that the 3.6% recovery rate is from Shopify’s ACR emails. Many merchants use third-party apps9 instead of Shopify’s native feature. Given that Shopify is unable to collect data on these services, the number of emails sent and the percentage of recovered carts may well be higher.. Creating An HTML Abandoned Cart Email Link The implementation of abandoned cart emails varies from platform to platform. Some platforms require third-party plugins, whereas others have the functionality built in. For example, both plain-text and HTML versions are available on Shopify. While the boilerplates are very usable, you might want to create a custom HTML version to complement the branding of your store. We’ll look at options and some quick wins shortly. In recent years, HTML email newsletters have really flourished. You only have to look at the many galleries10 to see how far this form of marketing has progressed. Sending an HTML version, while not essential, certainly allows for more flexibility and visual design (although always sending a plain-text version, too, is recommended). However, it’s not without its pain points. If you’ve been developing and designing for the web since the 1990s, then you will remember, fondly or otherwise, the “fun” of beating browsers into shape. Designing HTML newsletters is in many ways a throwback to this era. Table-based layouts are the norm, and we also have to contend with email clients that render HTML inconsistently. Luckily for us, the teams at both Campaign Monitor11 and MailChimp12 have written extensively on this subject and provide many solutions to common problems. For example, Campaign Monitor maintains a matrix and provides a downloadable poster13 outlining the CSS support of each major desktop and mobile email client. MailChimp, for its part, provides numerous resources on CSS14 and email template design15. Familiarizing yourself with the basics before tackling your first HTML email is worthwhile — even if you ultimately use a template. Open-Source Responsive Email Templates Link While many of you might wish to “roll your own” template, I often find it easier to build on the great work of others. For example, a number of great open-source projects focus on HTML email templates, including Email Blueprints16 by MailChimp. Another example comes from Lee Munroe. His “transactional HTML email templates17” differ in that they are not intended for use as newsletters, but rather as “transactional” templates. To clarify the difference, Lee breaks down transactional email into three categories: - action emails “Activate your account,” “Reset your password” “You’ve reached a limit,” “A problem has occurred” - billing emails monthly receipts and invoices The templates are purposefully simple yet elegant. They also have the added benefit of having been throughly tested in all major email clients. Finally, because they are responsive, they cater to the 50+%18 of emails opened via mobile devices. The Challenge Link Lee’s templates are a good option for creating a simple HTML email for abandoned carts. Therefore, let’s move on from the theory and look at how to create an HTML template for the Shopify platform. Let’s begin by setting some constraints on the challenge: - make the fewest number of markup changes to Lee’s template; - make use of the boilerplate text that is set as the default in the abandoned cart HTML template in Shopify; - inline all CSS (a best practice for HTML email); - send a test email with dummy data, and review the results in Airmail, Gmail and Apple Mail (on iOS). 1. Create a Local Copy of the Action Email Template Link Having looked at the three templates, the “action” version appears to offer the best starting point. You can download the HTML for this template directly from GitHub19 if you wish to follow along. The first step is to take the contents of Lee’s template and save it locally as abandoned-cart.html. A quick sanity check in a browser shows that the style sheet isn’t being picked up. Inlining all CSS is recommended (we’ll look at this in a later step), so add the styles to the <head> section of abandoned-cart.html. You can copy the CSS in its entirety from GitHub22 and then paste it in a <style> element. Another check in the browser shows that the styles are being applied. 2. Add the Content Link Now that the template is working as a standalone document, it’s time to look at integrating Liquid23’s boilerplate code from Shopify’s default template. This can be found in the Shopify admin section under “Settings” → “Notifications” → “Abandoned cart.” If you wish to follow along with these code examples, you can set up a free fully featured development store24 by signing up to Shopify’s Partner Program25. Hey{% if billing_address.name %} {{ billing_address.name }}{% endif %}, Your shopping cart at {{ shop_name }} has been reserved and is waiting for your return! In your cart, you left: {% for line in line_items %}{{ line.quantity }}x {{ line.title }}{% endfor %} But it’s not too late! To complete your purchase, click this link: {{ url }} Thanks for shopping! {{ shop_name }} All notification emails in Shopify make use of Liquid, the templating language developed by Shopify and now available as an open-source project and found in tools such as Mixture26 and software such as Jekyll27 and SiteLeaf28. Liquid makes it possible to pull data from the store — in this case, all of the details related to the abandoned cart and the user it belonged to. Having studied the markup, I’ve decided to place the boilerplate content in a single table cell, starting on line 2729 of Lee’s original document. After pasting in the boilerplate code, let’s double-check that the template renders as expected in the browser. At this stage, Liquid’s code is appearing “as is.” Only once the template is applied to Shopify’s template will this be replaced with data from the store. 3. Modify the Boilerplate Code Link The next stage involves tidying up some of the boilerplate code, including wrapping the boilerplate text in <p> tags. Then, it’s time to work out how best to display the cart’s contents in markup. For speed, I’ve chosen an unordered list. Liquid’s refactored for loop30 is pretty straightforward: <ul> {% for line in line_items %} <li>{{ line.quantity }} x {{ line.title }}</li> {% endfor %} </ul> After another sanity check, things are looking much more promising. However, we need to make a few final tweaks to make it work: - remove unwanted table rows, - add the correct link to the blue call-to-action button, - change the contents of the footer. 4. Make Final Adjustments Link Lee’s template includes markup to create a big blue “Click me” button. You can see this on line 3831: <a href="" class="btn-primary">Upgrade my account</a> Let’s turn this into a relevant link by changing the markup to this: <p><a href="{{ url }}" class="btn-primary">Check out now</a></p> In this case, {{ url }} represents the link to the abandoned (and saved) cart. I’ve enclosed the anchor in a paragraph to ensure consistent spacing when the email is rendered, and I’ve moved it up into the main section. Finally, we’ve changed the unsubscribe link in the footer to a link to the shop: <a href="{{ shop.url }}">Visit {{ shop_name }}</a> After a few minutes of editing, the template looks more than respectable. However, we’ve neglected one section, the text in the yellow highlighted “alert” section. I’ve changed this, along with the title element in the HTML, to this: Your cart at {{ shop_name }} has been reserved and is waiting for your return! Email notifications in Shopify have access to a number of variables that can be accessed via Liquid. A full list is available in Shopify’s documentation32. 5. Inline the CSS Link To recap, we’ve changed the template’s markup very little, and the CSS is identical to Lee’s original (albeit in the template, rather than in an external file). Shopify’s boilerplate text is also intact, albeit with a very small change to Liquid’s for loop. The next step is to inline the CSS in the HTML file. Because some email clients remove <head> and <style> tags from email, moving the CSS inline means that our email should render as intended. Chris Coyier penned “Using CSS in HTML Emails: The Real Story33” back in November 2007 — the landscape hasn’t changed much since. Thankfully, taking your CSS inline isn’t a long or difficult process. In fact, it’s surprisingly easy. A number of free services34 enable you to paste markup and will effectively add your styles inline. I’ve chosen Premailer35 principally because it has a few extra features, including the ability to remove native CSS from the <head> section of the HTML document, which saves a few kilobytes from the file’s size. After pasting in the markup and pressing “Submit,” Premailer generates a new HTML version that you can copy and paste back into your document. It also creates a plain-text version of the email, should you need it. Another great feature of Premailer is that you can view the new markup in the browser. You’ll find a link above the text box containing the new markup, titled “Click to View the HTML Results.” Clicking the link opens a hosted version of the new markup, which you can use to check your sanity or share with colleagues and clients. If you are keen to automate the creation of e-commerce notification emails, then Premailer also offers an API38. A number of libraries that support it are also available on GitHub, including PHP-Premailer39. The final task is to copy the new HTML code and paste it in the “HTML” tab of our abandoned cart notification in Shopify’s admin area. Once it’s applied, you can preview the email in the browser, as well as send a dummy copy to an email address. Below are the results in various email clients (both mobile and desktop). Airmail Link Apple Mail Link Gmail (Browser) Link Apple Mail on iOS Link The process of turning Lee’s template into a usable email took around 30 minutes, and I am pretty pleased with the result from such little input. Of course, this process screams out for automation. For those who are interested, Lee has also posted about his workflow for creating HTML email templates50 and the toolkit he uses (Sketch, Sublime, Grunt, SCSS, Handlebars, GitHub, Mailgun, Litmus). Taking It Further Link The template produced above is admittedly quite basic and only scratches the surface of what is possible. We could do plenty more to customize our email for abandoned carts, such as: - consider tone of voice, - show product images to jog the customer’s memory, - add a discount code to encourage the user to return and buy, - add upsells, - list complementary products. Dodo Case Link Tone of voice is a key consideration and goes a long way to engaging the customer. Dodo Case5351 has a great example: As always, context is very important when it comes to tone of voice. What’s right for Dodo Case might not be right for a company specializing in healthcare equipment. Let’s review a few examples (taken from Shopify’s blog55) to get a taste of what other companies are doing. Fab Link While this email from Fab5957 is pretty standard, the subject line is very attention-grabbing and is a big call to action. Chubbies Link The language and tone used in Chubbies’ email really stands out and is in line with the brand: fun-loving people. There’s also no shortage of links back to the cart, including the title, the main image and the call to action towards the bottom of the email. Black Milk Clothing Link Black Milk Clothing66 includes a dog photo and employs playful language, such as “Your shopping cart at Black Milk Clothing has let us know it’s been waiting a while for you to come back.” Holstee Link Finally, Holstee7068 asks if there’s a problem they can help with. It even goes so far as to include a direct phone number to its “Community Love Director.” Having worked with Holstee, I can confirm that this is a real position within the company! Conclusion. Further Reading Link - “Nine Case Studies and Infographics on Cart Abandonment and Email Retargeting71,” David Moth, Econsultancy - “13 Best Practices for Email Cart Abandonment Programs72,” Kyle Lacy, Salesforce Marketing Cloud Blog - “Lost Sales Recovery, Part 2,: Crafting a Perfect Remarketing Message73,” Vitaly Gonkov, The MageWorx Blog - “Why Online Retailers Are Losing 67.45% of Sales and What to Do About It74,” Mark Macdonald, Shopify Ecommerce Marketing Blog top Tweet itShare on Facebook
https://www.smashingmagazine.com/2014/10/reducing-abandoned-shopping-carts/
CC-MAIN-2017-43
refinedweb
2,931
61.77
Re: Are JavaScript strings mutable? - From: RobG <rgqld@xxxxxxxxxxxx> - Date: Wed, 19 Apr 2006 01:58:50 GMT VK said on 19/04/2006 6:19 AM AEST: Water Cooler v2 wrote: Are JavaScript strings mutable? How're they implemented - As a sequence of Unicode-16 characters. That also means that unlike in low-level languages in JavaScript there is not direct relation byte <> character. Also JavaScript doesn't have Char datatype (nor Byte for this matter), it knows only strings containing single Unicode-16 character. This is the max one can read out of specs IMHO because the internal implementation was left totally up to engines' producers. So are they mutable, immutable or carried by little green gnomes :-) depends I guess on the particular browser. Say in JScript and JScript.Net (IE) many parts are borrowed from the system. In the particular JScript String object is layer on System.String and it is immutable - as System.String itself. So in the proposed case: txt += "foo bar"; the engine creates anonymous string object for "foo bar", creates new joined string, set the reference to this new string from txt and marks both former txt and "foo bar" as GC ready. if txt has not already been given a string value (say empty string "" or some other value), the result will be: "undfinedfoo bar" If you conclude from this that the string concatenations are relatively slow in JScript - you are hundred times right :-) You need to define relative to what. JavaScript is much slower than compiled languages, but it isn't built for speed. Consider: Method 1: txt += 'more text'; Method 2: txt = txt + 'more text'; Method 3: txt = [txt, 'more text'].join(''); In Firefox, method 1 is fastest but all 3 methods take about the same time for say less than 10,000 concatenations. In IE, method 3 is about as fast as Firefox method 3, but 1 takes 20 times longer than 3 and method 2 about 6 times longer than that. A test case is provided below (careful, IE takes over minute to run it, Firefox a couple of seconds). <script type="text/javascript"> var ipsum = ['Facilisis ', 'illum ', 'et ', 'qui ', 'wisi ', 'nonummy ', 'sit, ', 'dolore ', 'delenit ', 'in ', 'ad ', 'at, ', 'vel ', 'wisi. ', 'Ut ', 'dolor ', 'nisl ', 'laoreet ', 'odio, ', 'delenit. ', 'Facilisi ', 'esse ', 'elit ', 'eu ', 'vel ']; function getRand(r){ return (Math.random()*r)|0; } var iterations = 30000; var catString = ''; var catArray = []; var j = ipsum.length; var s = new Date(); var i = iterations; while (i--){ catString += ipsum[getRand(j)]; } var f = new Date(); var txt = 'Using += ' + (f-s); s = new Date(); i = iterations; while (i--){ catString = catString + ipsum[getRand(j)]; } f = new Date(); txt += '<br>Using = + ' + (f-s); s = new Date(); i = iterations; while (i--){ catArray.push(ipsum[getRand(j)]); } var x = catArray.join(''); f = new Date(); txt += '<br>Using push/join ' + (f-s); document.write(txt); </script> Incidentally, there is an impsum lorem generator here: <URL:> -- Rob Group FAQ: <URL:> . - Follow-Ups: - Re: Are JavaScript strings mutable? - From: Dr John Stockton - References: - Are JavaScript strings mutable? - From: Water Cooler v2 - Re: Are JavaScript strings mutable? - From: VK - Prev by Date: Re: hidden/visible divs - Next by Date: Re: Cross Browser Problem - IE can not find a dynamic form - Previous by thread: Re: Are JavaScript strings mutable? - Next by thread: Re: Are JavaScript strings mutable? - Index(es):
http://newsgroups.derkeiler.com/Archive/Comp/comp.lang.javascript/2006-04/msg01810.html
crawl-002
refinedweb
553
71.75
Embedded Linux Flexes Its Muscles @ ESC 2001 Hemos posted more than 12 years ago | from the showing-off dept. A reader writes "This is Rick Lehrbaum's "traditional" report on "all things Linux" at the Embedded Systems Conference which took place during the week of April 9, 2001 in San Francisco, California. Lehrbaum briefly describes many of the Embedded Linux oriented exhibits, takes us on a photo tour of some cool Embedded Linux based devices that were being shown off, and offers his assessment of the current state of the Embedded Linux industry. There's even a "best of show award" for the "geekiest demo" at ESC! Full report is on Linuxdevices.com" Re:misunderstatement (1) Dave Fiddes (832) | more than 12 years ago | (#274907) I guess he's never heard of/used QNX, ChorusOS Nucleus, or ThreadX.. Of course you could look at RTEMS. The longstanding (1988, way before Linux existed) Open Source real-time operating system. It is as good as (many say better than) vxWorks, ChorusOS, Nucleus and ThreadX. [rtems.com] The one thing you wont get with RTEMS is the hype and geeky admiration of Linux geeks (or vxWorks, QNX, Windows CE, PalmOS,etc luvvies). You just put it in things you *REALLY* want to work. Re:value based posting (1) tzanger (1575) | more than 12 years ago | (#274908) Agreed, but then there lies the question of actually knowing what your internals are like, and as with an OpenSource based system versus closed source binaries, you have that flexibility to fix, change things on your own. Exactly. If you don't have an knowledge to debug the Linux kernel then it really isn't all that advantageous to use it over anthing else, unless you consider the ability to whinge on a newsgroup about the problem and hope someone fixes it for you. :-) But, as you said, if you've got the know-how you can at least attempt to fix it on your own. And if you're designing embedded systems chances are you have enough knowledge, gumption and/or instinctive knowledge to do it anyway. And therein lies the Open Source magic bullet. Re:misunderstatement (1) tzanger (1575) | more than 12 years ago | (#274909) I guess he's never heard of/used QNX, ChorusOS Nucleus, or ThreadX. Aside from QNX, I've never heard of the others and I do embedded systems design for a living.. As another poster mentioned, what exactly do linux companies have to do with embedded linux? RedHat could die tomorrow. Big whoop. The beauty of the core system being totally open source is that if it's THAT important to you, you can continue where they left off. Many embedded Linux kernels are based off of the base Linux system or uClinux with or without the hard realtime scheduling patches. If your core system is working just fine and the company supporting it dies, did you really lose anything? If a user discovers a nasty-ass bug that is traced to a core function or otherwise is a problem with the Linux core... you do have the source. You can fix it on your own. Waiting for a vendor to fix a problem is as much a pain in the ass as the customer screaming about the bug in the first place. Re:underpost(ing)ed (1) tzanger (1575) | more than 12 years ago | (#274910) Indeed it would be a nice idea, however if I'm not mistaken, these embedded Linux systems won't be open sourced, so tweaking code is out of the question ;) The embedded sytems don't have to be open sourced. If I'm selling widgets and there's a design flaw, I don't expect the user to fix it for me. In the same vein, if there's a driver bug or even a core flaw in Linux it's not my customer's problem. It's mine. Having the source and being able to contribute a patch back so that others won't be bitten is a good thing. It's been a while but I don't recall being able to do that with QNX. Re:The problem with Linux... (1) tzanger (1575) | more than 12 years ago | (#274911) The majority of your post is complete nonsense. I tried to comment on it but it's like trying to talk electronics to my wife. However your second paragraph I can comment on:. I've read a lot about the OpenBIOS project and had subscribed to the mailing list for a while before I came to the conclusion that there just is zero need for such a thing. Having a BIOS which can access a network via BOOTP/TFTP is all you need. If you need absolutely instant-on then you're entire OS will be in flash; not just some 32-bit BIOS implementation. Re:Why should we be excited about embedded Linux? (1) Burnon (19653) | more than 12 years ago | (#274912) What do you get? You get more people using the GNU tools, pushing for more functionality, and getting that stuff pushed back into the kernel. Most embedded developers don't want to fork the kernel or a compiler - they don't want the support headache. So they'll give features back to the community, just like desktop developers do. Hell, look at Cygnus, and now RedHat. They're paying the bills by supporting embedded developers! Finally, Linus himself pays the bills by working for Transmeta. Transmeta wants to put their chips into embedded devices, among other things. I'm sure he's happy to see embedded Linuxes take off. Seems like it's a win-win situation for everyone. Re:The problem with Linux... (1) willki (20190) | more than 12 years ago | (#274913). Ericsson Cordless Web Screen (1) jaclu (66513) | more than 12 years ago | (#274914) More or less every other single pad on CeBit was equiped with both modem and PCMCIA slotts. This one had only a blue-tooth link to a phone-line base-station. The sales rep I first spoke to was utterly clueless, when I asked about througput and broadband. Almost comical, when I asked her if she couldn't grasp that I already had a highspeed link at home, and wondered if they had a basestation for ethernet access, she responded: "Most people doesn't want complicated technology, those customers perfer modems" (!) I managed to find a techy in the booth, and after some evasive talk he admited to that the main reason it only uses a phone-modem in the base-station is that the hardware in the webpad, (or as say call it Web Screen) more or less sucks. Not so much the CPU, but the layout of the motherboard had all kinds of rather embarassing shortcommings, so it can't even comunicate at rates over 56kBit. Hes reasoning was that this platform is a rather dead-end market gimmick. If it makes any sales, they will develop a new generation with "serious network throughput" The trouble I see in this reasoning is that I doubt if the thing will sell much, I think they expect to sell it for something like $1200 So unfortunately this as most other Ericsson consumer stuff, is propably just another stock-sinker and I'm a bit sceptical about if they will get any generation 2 version out. If they do manage, I'd love to have one, it was nice except for the tiny litle bandwith detail Re:misunderstatement (1) naasking (94116) | more than 12 years ago | (#274915) ----- "Goose... Geese... Moose... MOOSE!?!?!" Re:Why should we be excited about embedded Linux? (1) drinkypoo (153816) | more than 12 years ago | (#274916) But you're wrong; We can get in touch with the code just about anywhere. If we're lucky, people will use the uCsimm [uclinux.org], and the hacking possibilities will be endless. It really does seem like the quickest route to embedding linux in your product. Note: I am not affiliated with uClinux, or the people who make the uCsimm, but I this this is really neat. Anyway, we've already been hacking boxes which run linux which we were Not Intended To Hack(tm). TiVo, anyone? One day, every device will have a fairly complete operating system in it. Right now, you can get a uCsimm, a complete (somewhat slow) computer on a simm. It's a more powerful system than, say, a palmpilot, and in fact the PalmOS should be portable to it, not that I expect Palm/3Com to bother. And a complete devkit is only $300. No memory protection in uClinux, but hey, life is hard. But one day, a dramatically more powerful system will come in a single-chip solution and cost significantly less. At that point, it's cheaper to put a pre-developed hardware solution into your system than to do a complete design, and your engineering cost drops to nearly nothing; You add a simm socket (or whatever) to your board, hook power up to it, and run input and/or output lines to the appropriate components. Anyone could implement it with a copy of orcad and a few days to learn how that software works; Just place a SIMM socket in your diagram, and start connecting traces and buses. It'll output a HPGL file which you can then send off to the people who will make your PCBs. So, since every device will have an OS on it eventually, perhaps we should push for it to be something (easily) hackable, and something which we'd want to hack. Just a thought. Just remember, at some point in the future, even a basic "smart" toaster (right now there are toasters which are supposed to not overtoast your bread, and they're pretty cheap, but they suck) will want to detect bread temperature, turn banks of heating elements on and off in areas to ensure an even toast, and so on. It'll probably want to tie that in with some sort of fuzzy logic system so it knows what sort of heat patterns to toast your particular breads with. And eventually, it'll be cheap enough to drop in a predeveloped embedded system. Wouldn't you like it to be linux? -- ALL YOUR KARMA ARE BELONG TO US Embedded Linux (1) hardburlyboogerman (161244) | more than 12 years ago | (#274917) Re:Linux Sucks? Hmmm. what about HailStorm? (1) DanBari (199529) | more than 12 years ago | (#274918) Linux has its advantages, there are very few and I'm not going to name any right now, however I do agree that some people do need to leave their server rooms and get some fresh air every so often Re:Linux Sucks? Hmmm. what about HailStorm? (1) DanBari (199529) | more than 12 years ago | (#274919) Plus, the XML parser for MSIE 5.5 is pretty crappy overall. Re:Linux Sucks? Hmmm. what about HailStorm? (1) DanBari (199529) | more than 12 years ago | (#274920) Re:Ship those niggers back (1) DanBari (199529) | more than 12 years ago | (#274921) Re:misunderstatement (1) mcspock (252093) | more than 12 years ago | (#274922) I haven't heard of chorusOS or nucleus. I've heard of ThreadX, and had a demo of it from green hills, but why bother? It comes to something like $20k for the scheduler, synch mechanisms, and a file i/o layer. My embedded OS of choice: eCos [redhat.com]. Also, FYI, distributions dont matter in the embedded space. Most embedded devices have limitations on space (or, if they dont, they should for cost reasons), so using a distribution is pointless. Just a kernel and a ramfs with some basic utilities is all you really need. This is how it's done on devices like the empeg. Why should we be excited about embedded Linux? (1) adadun (267785) | more than 12 years ago | (#274923) But don't really see what we gain by putting Linux in every toaster everywhere. Why should we fight for free software in embedded systems, where we never even get in touch with the code? Of course, free software is philosophically more correct than propriatary software, even in embedded systems. But still, what does it give us in return? We can never change the software in our toasters or our stereos, so why should we go out and put Linux in there? What do we (the free software movement) gain from this? The makers of embedded systems get a lot of good software for free, but does it give us something in return? The software for embedded systems is very closely tied with the hardware, which is proprietary. So any source code they have to publish will be nothing but drivers for their specialized hardware. Could somebody please enlighten me! Re:misunderstatement (1) robert-porter (309405) | more than 12 years ago | (#274924) Re:misunderstatement (1) robert-porter (309405) | more than 12 years ago | (#274925) Re:misunderstatement (1) sllort (442574) | more than 12 years ago | (#274926) a) You're as ignorant as he is, or: b) He's not ignorant. My hope is B. That said, the failure of poorly performing embedded Linux companies is just selective market forces in action. Windows resellers are going out of business too, does that mean Windows is doomed? What about the solid embedded Linux vendors [mvista.com] who are making money in this space? COUtrollGH COUGH embedded linux (1) argtroll++troll (445552) | more than 12 years ago | (#274927) they mod me down because they want my throbbing manhood. Re:misunderstatement (2) Adnans (2862) | more than 12 years ago | (#274928) What do failing distributions and/or companies have to do with the viability of Linux as an embedded OS? -adnans Re:misunderstatement (2) Adnans (2862) | more than 12 years ago | (#274929) -adnans Re:Standardization... (2) Y2K is bogus (7647) | more than 12 years ago | (#274930) Last time I was at a con, I got a 'demo' copy of Bluecat. Their licensing agreement makes it clear that it's a proprietary development environment based on Linux. BTW, you have have a 'Closed Source' Linux based OS. All the GPL says is that you have to redistribute source with any changes you make. Re:misunderstatement (2) Burnon (19653) | more than 12 years ago | (#274931) Likewise, folks working in the high-volume, low cost arena scoff at the big names - the cost associated with a license for VxWorks or WinCE would be prohibitive, and the resource usage would be as well (people frequently ship Nucleus and ThreadX systems measured in KILOBYTES of RAM+ROM). A lot of folks wouldn't even recognize that an operating system is running on these devices - but it's there, even if it's just pared down to a kernel for inter-thread communication (becuase the overhead for PROCESSES is just too high!). I guess the trick to interpreting statements like "the big 3" is to understand that the "embedded" space is big, and that folks who work in a fixed area of the space tend to have a pretty myopic view of options in operating systems, since they're only focusing on the tools that are appropriate to the job at hand. what about i/o devices (2) soldack (48581) | more than 12 years ago | (#274932) Wouldn't that be impossible? (2) Galvatron (115029) | more than 12 years ago | (#274933) The only "intuitive" interface is the nipple. After that, it's all learned. Cool, But... (2) kruczkowski (160872) | more than 12 years ago | (#274934) uhhh (2) fjordboy (169716) | more than 12 years ago | (#274935) another point of view (2) RoosterT (196177) | more than 12 years ago | (#274936) stop smoking crack (2) deran9ed (300694) | more than 12 years ago | (#274937) You should put your money where your mouth is, and show some supportive proof of these big three. e.g., Yahoo, Apache, Sony's Japan website, formerly Hotmail use FreeBSD, IBM, NYSE, American Express use AIX. Your post is pointless since thread does not discuss what will be run on the server(s). e.g. If your core webdesigners (programmers included) are extremely comptent with Oracle, Story Server (for Yahoo like pages) your not gonna run your site on NT unless your a dumbass and like headaches. Aside from that there are many instances of Windows underperforming as a server which sometimes can't cut it, so the mere mention of them is painful If you look at units shipped, QNX isn't even on the map. They've just started some bizzare marketing blitzes lately (starting with the whole Amiga switcheroo), so wannabes like yourself who know nothing about the embedded market know about QNX. You should do some research before posting... QNX is used for stuff Windows is likely not competent/reliable/trustworthy(crashmasterWindows failed models (2) deran9ed (300694) | more than 12 years ago | (#274938) Snowball effects. Think about the following scenario, Linux altogether dropping as a whole (could happen, did happen under the NeXT project) and others. Well reading some of the threads on Embedded Linux, you would know its not going to be an Open Sourced OS as typical Linux is, which means, as a developer, you don't have the luxury of modifications of anything. Which may not be so bad... Pay for play Linux? Why would I want to pay for for an embedded OS when I could use others that are semi-standards in the industry of embedded OS' (QNX, which Motorola uses, NASA, etc.) What make this the safe bet when under standard Linux, cmopanies are going bonkers, whats to say an embedded Linux won't go under as well? underpost(ing)ed (2) deran9ed (300694) | more than 12 years ago | (#274939) Indeed it would be a nice idea, however if I'm not mistaken, these embedded Linux systems won't be open sourced, so tweaking code is out of the question Again, maybe I underposted before or something who knows (lack of caffeine), these embedded Linux OS', from my perspective, are not your typical download-for-free-to-play-with-geek-friendly Linux distributions, so to sort of post it here as if, the average value based posting (2) deran9ed (300694) | more than 12 years ago | (#274940) Agreed, but then there lies the question of actually knowing what your internals are like, and as with an OpenSource based system versus closed source binaries, you have that flexibility to fix, change things on your own. Thats a benefit for using an Open Source OS. Whats more is, you won't have to wait for fixes to be assessed, and patches released, with a good administrative handling of the servers in question, things could be done on your/their own. Also beneficial to using an Open Source OS as opposed to binary based, is you have the flexibility to audit the codes for maximum reliability, e.g. you can tweak it to your needs to make it faster, more secure, etc. big dreamer (2) deran9ed (300694) | more than 12 years ago | (#274941) Now your asking a Linux vendor to take away from giving options to use something other than Gnome. Why not QT? Thats an argument for the masses. But the thought of just using a de facto standard under Linux would be taking away the fun from it all. But I should also note as in my other post, these aren't the typical "freebie-hobbyist" variants of Linux. Which also makes me point out, why should someone choose to go with embedded Linux over typical Linux, when in harsh reality, not everyone needs an embedded system. Sure its tech-chick, but lets get realistic these are not you (father's oldsmobile left out the coolest: 'terapin mine' (3) TheGratefulNet (143330) | more than 12 years ago | (#274942) so here I submit it as a followup: Terapin 'mine' [mineterapin.com] it does mp3, usb, ethernet, pcmcia, audio in/out, 16x4 lcd display, video out and 10GB hard drive. I'll certainly be watching for this one! -- misunderstatement (3) deran9ed (300694) | more than 12 years ago | (#274943) #include <rants.h> #incldue <clues.h> Just to be absolutely clear about what I'm saying, in my opinion the "big three" embedded OSes are, at the moment: (1) VxWorks, (2) Embedded Linux, (3) Embedded Windows -- or (1) VxWorks, (2) Embedded Windows, (3) Embedded Linux -- depending on how you count. I guess he's never heard of/used QNX [qnx.com], ChorusOS [sun.com] Nucleus [accelerate...nology.com], or ThreadX [ghs.com]. I did however like the gadgets, but taking a look at the last week, with all the Linux related companies going to the dogs [fuckedcompany.com], and 4 distributions going "kaput" within less than 6 months time, I would be looking at other alternatives to Linux, especially if my business were going to depend on them. © Gbonics [antioffline.com] changing the futurismisms of vocabularities worldomwide Standardization... (3) sllort (442574) | more than 12 years ago | (#274944) "As stated, the ELC proposal will allow closed source alternatives to be certified. An OS with runtime royalties can be certified; an unreliable and unrobust alternative can be certified; an OS with poor networking can be certified; an OS with few drivers and tools can be certified; an OS with a small number of trained programmers can be certified." That's the first time I've seen anyone in the mainstream mention a certified, closed source version of Linux. There is certainly a very strong push between a few vendors to become the "industry standard for embedded linux"... but closed source? yuck. how could any linux company be that stupid? Hopefully he's just being alarmist.
http://beta.slashdot.org/story/17771
CC-MAIN-2014-15
refinedweb
3,606
68.3
Feature #5644 add Enumerable#exclude? antonym Added by sunaku (Suraj Kurapati) over 8 years ago. Updated over 2 years ago. Description Please add Enumerable#exclude? as antonym of Enumerable#include? This allows me to construct Boolean expressions more pleasantly: if File.exist? some_file and not some_list.include? some_file Can be written as: if File.exist? some_file and some_list.exclude? some_file Thanks for your consideration. Updated by trans (Thomas Sawyer) over 8 years ago Hey, a use for functors! module Kernel def not Functor.new do |op,*a,&b| !send(op,*a,&b) end end end some_list.not.include? some_file Nothing against Enumerable#exclude? though. Seems reasonable. Updated by mame (Yusuke Endoh) about 8 years ago - Status changed from Open to Assigned - Assignee set to matz (Yukihiro Matsumoto) Updated by matz (Yukihiro Matsumoto) about 8 years ago - Status changed from Assigned to Feedback from that logic, don't we need to add antonyms to every predicate methods? could you show us why include? is special? Matz. Updated by sunaku (Suraj Kurapati) about 8 years ago Hi Matz, I didn't ask for antonmys for all predicates; only for #exclude?. The reason for #exclude? is for more "natural" boolean expressions: if File.exist? some_file and some_list.exclude? some_file if File.exist? some_file and not some_list.include? some_file That "not SOMETHING.include? SOMETHING" pattern appears often in my code, so that's why I created this request. Of course, me saying "appears often" does not constitute as solid evidence in support of adding #exclude? so I have no choice but to accept your judgement on this request. It's your call. Thanks for your consideration. Updated by trans (Thomas Sawyer) about 8 years ago I'll throw my hat in with #exclude? too. There's been a number of times that I would have liked to have it. "not include?" is a rather common predication and it's nice when code can line up neatly. Updated by now (Nikolai Weibull) about 8 years ago On Thu, Mar 29, 2012 at 01:26, sunaku (Suraj Kurapati) sunaku@gmail.com wrote: Issue #5644 has been updated by sunaku (Suraj Kurapati). The reason for #exclude? is for more "natural" boolean expressions I don’t think #exclude? really conveys what’s being done very well. Yes, “exclude” is the antonym of “include”, but the meaning of %[a b c].include? x is very natural, whereas %[a b c].exclude? x isn’t. Does it mean that %[a b c] contains the elements that should be excluded and is x among them, or is x not included among the elements of %[a b c]? Updated by matz (Yukihiro Matsumoto) about 8 years ago OK, you think negative for include? is special. Understood. But as Nikolai pointed out, exclude? is not the best name for the function. Any alternative? Matz. Updated by rosenfeld (Rodrigo Rosenfeld Rosas) about 8 years ago At first I agreed with Nikolay, but then I changed my minded because the method is called "exclude?" with a question mark, not "exclude", so I don't think anyone would expect that it would actually remove some element. Updated by rosenfeld (Rodrigo Rosenfeld Rosas) about 8 years ago I think I've misunderstood the question posed by Nikolai. I've just read it again but I think that the other meaning presented by him doesn't make any sense. "does the array contain the elements that should be excluded?". Really? I read this like in English: Does [1, 3, 5] exclude 4? Ask someone that doesn't know anything about programming and see what will she answer to this question. Updated by rosenfeld (Rodrigo Rosenfeld Rosas) about 8 years ago The most common antonym is "exclude", but maybe we could use "omit" if you prefer: Updated by trans (Thomas Sawyer) about 8 years ago There really is no better term b/c all such terms are going to have the same connotations. As with "include" if you add an "s" to the word then it reads more like typical English, i.e. "a excludes b ?". To use the singular form you have to add a modal verb like "does a exclude b ?" Which makes it easy to see that this is the right meaning. Updated by now (Nikolai Weibull) about 8 years ago On Fri, Mar 30, 2012 at 16:02, matz (Yukihiro Matsumoto) matz@ruby-lang.org wrote: OK, you think negative for include? is special. Understood. But as Nikolai pointed out, exclude? is not the best name for the function. Any alternative? How about changing the definition of Enumerable#none?, #any?, and #all? to take an optional argument that is compared against each element using #==. Then we have if File.exist? some_file and some_list.none? some_file I would not write it like that – I’d still use not a.include? b, which I think is hard to beat – but then we at least don’t need to come up with a new name and these three methods gain semantics that I’ve felt they were lacking. Updated by mame (Yusuke Endoh) over 7 years ago - Target version set to 2.6 Updated by trans (Thomas Sawyer) over 7 years ago Too bad we can't use symbols like ∉. Updated by naruse (Yui NARUSE) over 2 years ago - Target version deleted ( 2.6) Also available in: Atom PDF
https://bugs.ruby-lang.org/issues/5644?tab=properties
CC-MAIN-2020-24
refinedweb
891
76.11
ผู้สอบถาม Chart Controls - FAQ - How do I hide labels for data points with zero value? - I'm using Silverlight... - I'm using WPF... - Chart displays incorrectly when all x values are zero - I'm using SSRS... - X axis does not display every label - ChartDataTableHelper does not work (This FAQ is an on-going project. More items will be added to the list as frequently asked questions are identified.) - แก้ไขโดย siplaModerator 11 ตุลาคม 2555 13:28 การสนทนาทั่วไป ตอบทั้งหมด How do I hide labels for data points with zero value? There are a couple of ways to do this. 1) Format the label using a custom numeric format string to hide zeroes: Chart1.Series(0).IsValueShownAsLabel = True Chart1.Series(0).LabelFormat = "{0;0;#}" Chart1.Series(0).Label = "#VALY{0;0;#}" 2) Only show labels on data points with values other than zero: (Note, for some reason the inverse of this does not work for every chart type.) For Each dp As DataPoint In Chart1.Series(0).Points If dp.YValues(0) <> 0 Then dp.IsValueShownAsLabel = True End If Next Note that setting Series or DataPoint.Label overrides IsValueShownAsLabel and LabelFormat. I'm using Silverlight... This forum is for the Chart Controls for .NET Framework which is intended to be used with ASP.NET or Windows Forms. For Silverlight Charts, try the Silverlight Toolkit. For questions concerning the use of the toolkit, visit the Silverlight Forums. - แก้ไขโดย siplaModerator 14 เมษายน 2555 9:33 This forum is for the Chart Controls for .NET Framework which is intended to be used with ASP.NET or Windows Forms. For WPF Charts, try the WPF Toolkit. For questions concerning the use of the toolkit, visit the WPF Toolkit Discussions site or the WPF forum. If you really want to use the Windows Forms Chart Control in WPF, apparently there is a way to do it with a WindowsFormsHost control. Any questions on the usage of the WindowsFormsHost should also be asked in the WPF forum. For more information, see: - Walkthrough: Hosting a Windows Forms Control in WPF by Using XAML - I want to use Dundas Chart in my WPF application, what should I do? (The Microsoft chart control is based on the Dundas Chart) - แก้ไขโดย siplaModerator 17 ตุลาคม 2555 12:52 Chart displays incorrectly when all x values are zero When all x values are zero, the chart control will automatically place the data points on the x axis based on their index instead of their x value. (Same as setting Series.IsXValueIndexed to true.) This is a feature in the chart controls that's intended to make charts with categorical (string) x values display properly, in which case the strings are assigned to the DataPoint.AxisLabel properties and the .XValue properties will be 0. Unfortunately it does not work well when all the x values are actually meant to be zero. Setting Series.IsXValueIndexed to false won't override this behaviour. Usually this is only a problem with scatter charts (Series.ChartType = SeriesChartType.Point.) One way to get around this is to add a transparent dummy data point to the series in question that has an x value other than zero. Dim series As Series = Chart1.Series(0) Dim area As ChartArea = Chart1.ChartAreas(0) Dim yValue as Double = series.Points(0).YValues(0) 'Make sure you already have some data in the series at this point. 'Determine whether every x value is zero Dim onlyZeroes As Boolean = True For Each dp As DataPoint In series.Points If dp.XValue <> 0 Then onlyZeroes = False Exit For End If Next If onlyZeroes Then 'Set x axis min & max to something suitable '(If not set, the dummy data point may affect the x axis limits) area.AxisX.Minimum = -1 area.AxisX.Maximum = 1 'Add a transparent point at an x position other than zero Dim dummyPoint As New DataPoint(1, yValue) dummyPoint.Color = Color.Transparent series.Points.Add(dummyPoint) End If If you've explicitly set the AxisX.Minimum & .Maximum somewhere (not relying on the chart control to automatically adjust the x axis limits), you can just add the dummy data point without checking for zeroes. Dim series As Series = Chart1.Series(0) Dim yValue as Double = series.Points(0).YValues(0) 'Add a transparent point at an x position other than zero Dim dummyPoint As New DataPoint(1, yValue) dummyPoint.Color = Color.Transparent series.Points.Add(dummyPoint) Other workarounds include adding a very small value (Single.Epsilon for example) to any x value that is zero. In this case you may need to adjust the x axis intervals, limits and labels to ignore the addition. In .NET 4.5 the correct way to handle this is by setting the series IsXAxisQuantitative custom property to "true". - แก้ไขโดย siplaModerator 13 ธันวาคม 2555 10:09 This forum is for the Chart Controls for .NET Framework which is intended to be used with ASP.NET or Windows Forms. For questions concerning SSRS, visit the SQL Server Reporting Services Forum. - แก้ไขโดย siplaModerator 23 เมษายน 2555 11:01 X axis does not display every label If using categorical x values (Strings), you'll probably want to show a label on the axis for every x value. However, by default the chart control will automatically adjust the x axis scale, label interval and font to make sure there are no overlapping labels. If there is not enough room for every label, some labels may be left out. Most of the time an acceptable solution is to force every label to show by setting the x axis interval to 1. (Doing this may change the label font size and/or orientation.) Chart1.ChartAreas(0).AxisX.Interval = 1 For different approaches and a more detailed explanation, see Alex Gorev's blog entry; Displaying Categorical Axis Labels - แก้ไขโดย siplaModerator 7 พฤศจิกายน 2555 13:49 ChartDataTableHelper does not work The ChartDataTableHelper is a utility class from the Samples Environment. It draws a table of values underneath the chart.You are free to use it in your project by copying the class from the samples project folder into your own prject. (Remove the namespace definition if that's causing problems.) If you are using C#, you should be fine. See the usage in the samples environment. The VB version of the class is buggy and won't even compile. It seems it has been converted to VB from C# with some tool. Here is a working version of the "Paint Event Handling" region: #Region "Paint Event Handling" ''' <summary> ''' Chart Paint event handler. ''' </summary> Private Sub Chart_PostPaint(ByVal sender As Object, ByVal e As System.Windows.Forms.DataVisualization.Charting.ChartPaintEventArgs) If TypeOf e.ChartElement Is ChartArea Then Dim area As ChartArea = CType(e.ChartElement, ChartArea) ' call the paint method. If ChartAreas.IndexOf(area.Name) >= 0 AndAlso enabled_Renamed Then PaintDataTable(sender, e) End If End If End Sub ''' <summary> ''' This method does all the work for the painting of the data table. ''' </summary> Private Sub PaintDataTable(ByVal sender As Object, ByVal e As System.Windows.Forms.DataVisualization.Charting.ChartPaintEventArgs) Dim area As ChartArea = CType(e.ChartElement, ChartArea) ' get the rect of the chart area Dim rect As RectangleF = e.ChartGraphics.GetAbsoluteRectangle(area.Position.ToRectangleF()) ' get the inner plot position Dim elemPos As ElementPosition = area.InnerPlotPosition ' find the coordinates of the inner plot position Dim x As Single = rect.X + (rect.Width / 100 * elemPos.X) Dim y As Single = rect.Y + (rect.Height / 100 * elemPos.Y) Dim ChartAreaBottomY As Single = rect.Y + rect.Height ' offset the bottom by the width+1 of the scrollbar if it is visible If area.AxisX.ScrollBar.IsVisible() AndAlso (Not area.AxisX.ScrollBar.IsPositionedInside) Then ChartAreaBottomY -= (CSng(area.AxisX.ScrollBar.Size) + 1) End If Dim width As Single = (rect.Width / 100 * elemPos.Width) Dim height As Single = (rect.Height / 100 * elemPos.Height) ' find the height of the font that will be used Dim axisFont As Font = area.AxisX.LabelStyle.Font Dim testString As String = "ForFontHeight" Dim axisFontSize As SizeF = e.ChartGraphics.Graphics.MeasureString(testString, axisFont) ' find the height of the font that will be used Dim titleFont As Font = area.AxisX.TitleFont testString = area.AxisX.Title Dim titleFontSize As SizeF = e.ChartGraphics.Graphics.MeasureString(testString, titleFont) Dim seriesCount As Integer = 0 ' for each series that is attached to the chart area, ' draw some boxes around the labels in the color provided Dim i As Integer For i = e.Chart.Series.Count - 1 To 0 Step -1 If area.Name = e.Chart.Series(i).ChartArea Then seriesCount += 1 End If Next i ' now, if a box was actually drawn, then draw ' the verticle lines to separate the columns of the table. If seriesCount > 0 Then Dim int As Integer = 0 Do While int < e.Chart.Series.Count If area.Name = e.Chart.Series(int).ChartArea Then Dim min As Double = area.AxisX.Minimum Dim max As Double = area.AxisX.Maximum ' modify the min value for the current axis view If area.AxisX.ScaleView.Position - 1 > min Then min = area.AxisX.ScaleView.Position - 1 End If ' modify the max value for the currect axis view If (area.AxisX.ScaleView.Position + area.AxisX.ScaleView.Size + 0.5) < max Then max = area.AxisX.ScaleView.Position + area.AxisX.ScaleView.Size + 0.5 End If ' find the starting point that will be display. ' this is dependent on the current axis view. ' this sample assumes the same number of points in each ' series so always take from the zeroth series Dim pointIndex As Integer = 0 Dim pt As DataPoint For Each pt In ChartObj.Series(0).Points If pt.XValue > min Then Exit For End If pointIndex += 1 Next pt Dim TableLegendDrawn As Boolean = False Dim AxisValue As Double = min Do While AxisValue < max Dim pixelX As Single = CSng(e.ChartGraphics.GetPositionFromAxis(area.Name, AxisName.X, AxisValue)) Dim nextPixelX As Single = CSng(e.ChartGraphics.GetPositionFromAxis(area.Name, AxisName.X, AxisValue + 1)) Dim pixelY As Single = ChartAreaBottomY - titleFontSize.Height - (seriesCount * axisFontSize.Height) Dim point1 As PointF = PointF.Empty Dim point2 As PointF = PointF.Empty ' Set Maximum and minimum points point1.X = pixelX point1.Y = 0 ' Convert relative coordinates to absolute coordinates. point1 = e.ChartGraphics.GetAbsolutePoint(point1) point2.X = point1.X point2.Y = ChartAreaBottomY - titleFontSize.Height point1.Y = pixelY ' Draw connection line e.ChartGraphics.Graphics.DrawLine(New Pen(borderColor_Renamed), point1, point2) point2.X = nextPixelX point2.Y = 0 point2 = e.ChartGraphics.GetAbsolutePoint(point2) Dim format As StringFormat = New StringFormat() format.Alignment = StringAlignment.Center format.LineAlignment = StringAlignment.Center ' for each series draw one value in the column Dim row As Integer = 0 Dim ser As Series For Each ser In ChartObj.Series If area.Name = ser.ChartArea Then If (Not TableLegendDrawn) Then ' draw the series color box e.ChartGraphics.Graphics.FillRectangle(New SolidBrush(ser.Color), x - 10, row * (axisFont.Height) + (point1.Y), 10, axisFontSize.Height) e.ChartGraphics.Graphics.DrawRectangle(New Pen(borderColor_Renamed), x - 10, row * (axisFont.Height) + (point1.Y), 10, axisFontSize.Height) e.ChartGraphics.Graphics.FillRectangle(New SolidBrush(tableColor_Renamed), x, row * (axisFont.Height) + (point1.Y), width, axisFontSize.Height) e.ChartGraphics.Graphics.DrawRectangle(New Pen(borderColor_Renamed), x, row * (axisFont.Height) + (point1.Y), width, axisFontSize.Height) End If If pointIndex < ser.Points.Count Then Dim label As String = ser.Points(pointIndex).YValues(0).ToString() Dim textRect As RectangleF = New RectangleF(point1.X, row * (axisFont.Height) + (point1.Y + 1), point2.X - point1.X, axisFont.Height) e.ChartGraphics.Graphics.DrawString(label, axisFont, New SolidBrush(area.AxisX.LabelStyle.ForeColor), textRect, format) End If row += 1 End If Next ser TableLegendDrawn = True pointIndex += 1 AxisValue += 1 Loop ' do this only once so break! Exit Do End If i += 1 Loop End If End Sub #End Region - แก้ไขโดย siplaModerator 24 กันยายน 2555 13:16 How do I hide labels for data points with zero value? There are a couple of ways to do this. .... Chart1.Series(0).IsValueShownAsLabel = True Chart1.Series(0).LabelFormat = "{0;0;#}" .... Chart1.Series(0).Label = "#VALY{0;0;#}".... I am having the same issue of trying to hide my 0 values in my charts, I have tried using temp.EmptyPointStyle.IsValueShownAsLabel = false; to no avail. When I try using the "#VALY{0;0;#}" I get a Parameter is not valid. error
https://social.msdn.microsoft.com/Forums/vstudio/th-TH/fb6da2c5-bb8e-40cc-bfe4-68bd76e1d826/chart-controls-faq?forum=MSWinWebChart
CC-MAIN-2015-14
refinedweb
2,025
52.05