Convert a wide-character string into a long integer

#include <wchar.h>
long wcstol( const wchar_t * ptr, wchar_t ** endptr, int base );
long long wcstoll( const wchar_t * ptr, wchar_t ** endptr, int base );

Library: libc. Use the -l c option to qcc to link against this library. This library is usually included automatically.

The wcstol() function converts the string pointed to by ptr into a long; wcstoll() converts the string into a long long. These functions recognize strings that contain the following: an optional sequence of whitespace characters, an optional sign (+ or -), an optional 0x or 0X prefix when base is 16 or 0, and a sequence of digits and letters valid for the given base. The conversion ends at the first unrecognized wide character. If endptr isn't NULL, a pointer to the unrecognized wide character is stored in the object endptr points to.

Returns: the converted value. If the correct value causes an overflow, the returned value is LONG_MAX, LLONG_MAX, LONG_MIN, or LLONG_MIN, depending on the function and the sign, and errno is set to ERANGE. If base is out of range, zero is returned and errno is set to EINVAL.
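A minimal sketch of the behavior described above. The function names wcstol and the error conventions are from this page; parse_leading_long and the sample strings used below are our own illustration:

```c
#include <wchar.h>
#include <errno.h>
#include <stddef.h>

/* Parse the leading integer of a wide string and report where parsing
 * stopped; a thin wrapper over wcstol() as documented above. */
long parse_leading_long(const wchar_t *s, int base, const wchar_t **rest)
{
    wchar_t *end;
    errno = 0;
    long value = wcstol(s, &end, base);
    if (rest)
        *rest = end;   /* points at the first unrecognized wide character */
    return value;      /* LONG_MAX/LONG_MIN with errno == ERANGE on overflow */
}
```

For example, parsing L"-42px" in base 10 yields -42, with the rest pointer left at the "px" suffix.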
http://www.qnx.com/developers/docs/7.0.0/com.qnx.doc.neutrino.lib_ref/topic/w/wcstol.html
Sympy solve() rounds to 0? I am passing solve() big and small numbers such as electron_charge = 1.6e-19. The function rounds those small numbers to 0. Has anyone seen this? An example is below.

from sympy import *
x, y, z = symbols('x y z')

# this works because it's only 1e-5
rslt = solve(Eq(0 + 1e-5, x), x)
rslt = rslt[0]
print rslt

# this outputs zero because the number is too small? :(
rslt = solve(Eq(0 + 1e-10, x), x)
rslt = rslt[0]
print rslt

You will want the rational=False flag for solve; otherwise it tries to find approximate rational expressions. Try adding the flag rational=False to the solve call: solve(Eq(1e-10, x), x, rational=False). By default, sympy converts floats to rationals and apparently this causes precision issues (even though when I do Float(Rational(1e-10)) by hand I see no precision loss, but whatever). @omz According to this discussion on Stack Overflow, something related to this has been fixed in sympy 0.7.5, and Pythonista comes with 0.7.4.1. Might be worth updating if you haven't done so already internally. Oh ok! Thanks guys! Very helpful.
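A self-contained version of the suggested fix, written in Python 3 syntax; note this reflects recent sympy releases rather than the 0.7.x versions discussed in the thread:

```python
# Passing rational=False keeps the small float as a Float instead of
# letting solve() convert it to an exact Rational first.
from sympy import Eq, solve, symbols

x = symbols('x')

result = solve(Eq(1e-10, x), x, rational=False)[0]
print(result)  # a tiny but nonzero Float, not 0
```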
https://forum.omz-software.com/topic/2959/sympy-solve-rounds-to-0
CDI Dependency Injection - An Introductory Tutorial Part 1 - Java EE

Code Listing: StandardAtmTransport changed to use one qualifier

package org.cdi.advocacy;

@StandardFrameRelaySwitchingFlubber
@Default
public class StandardAtmTransport implements ATMTransport {
    public void communicateWithBank(byte[] datapacket) {
        System.out.println("communicating with bank via Standard transport");
    }
}

Next, if I want my AutomatedTellerMachineImpl to have SuperFast transport with StandardFrameRelaySwitchingFlubber, I would use both in the injection target as follows:

Code Listing: AutomatedTellerMachineImpl changed to use two qualifiers

public class AutomatedTellerMachineImpl implements AutomatedTellerMachine {
    @Inject @SuperFast @StandardFrameRelaySwitchingFlubber
    private ATMTransport transport;
    ...

Output:

deposit called
communicating with bank via the Super Fast transport

Exercise: Create a transport that is @SuperFast, @StandardFrameRelaySwitchingFlubber and an @Alternative. Then use beans.xml to activate this SuperFast, StandardFrameRelaySwitchingFlubber, Alternative transport. Send me your solution on the CDI group mailing list. The first one to send gets put on the CDI wall of fame.

Exercise for the reader: Change the injection point qualifiers to make only the StandardAtmTransport get injected. Send me your solution on the CDI group mailing list. Don't get discouraged if you get a stack trace or two; that is part of the learning process. The first one to send gets put on the CDI wall of fame (everyone else gets an honorable mention).

Using @Qualifiers with members to discriminate injection and stop the explosion of annotation creation

There could be an explosion of qualifier annotations in your project. Imagine in our example if there were 20 types of transports. We would have 20 annotations defined. This is probably not what you want. It is okay if you have a few, but it could quickly become unmanageable.
CDI allows you to discriminate on members of a qualifier to reduce the explosion of qualifiers. Instead of having three qualifiers, you could have one qualifier and an enum. Then if you need more types of transports, you only have to add an enum value instead of another class. Let's demonstrate how this works by creating a new qualifier annotation called Transport. The Transport qualifier annotation will have a single member, an enum called type. The type member will be a new enum that we define called TransportType. Here is the new Transport qualifier:

Code Listing: Transport qualifier that has an enum member

package org.cdi.advocacy;

import java.lang.annotation.Retention;
import java.lang.annotation.Target;
import static java.lang.annotation.ElementType.*;
import static java.lang.annotation.RetentionPolicy.*;
import javax.inject.Qualifier;

@Qualifier
@Retention(RUNTIME)
@Target({TYPE, METHOD, FIELD, PARAMETER})
public @interface Transport {
    TransportType type() default TransportType.STANDARD;
}

Here is the new enum, TransportType:

Code Listing: TransportType enum that defines a type

package org.cdi.advocacy;

public enum TransportType {
    JSON, SOAP, STANDARD;
}

Next you need to qualify your transport instances like so:

Code Listing: SoapAtmTransport using @Transport(type=TransportType.SOAP)

package org.cdi.advocacy;

@Transport(type=TransportType.SOAP)
public class SoapAtmTransport implements ATMTransport {
    ...

Code Listing: StandardAtmTransport using @Transport(type=TransportType.STANDARD)

package org.cdi.advocacy;

@Transport(type=TransportType.STANDARD)
public class StandardAtmTransport implements ATMTransport {
    ...

Code Listing: JsonRestAtmTransport using @Transport(type=TransportType.JSON)

package org.cdi.advocacy;

@Transport(type=TransportType.JSON)
public class JsonRestAtmTransport implements ATMTransport {
    ...
Code Listing: AutomatedTellerMachineImpl using @Inject @Transport(type=TransportType.STANDARD)

@Named("atm")
public class AutomatedTellerMachineImpl implements AutomatedTellerMachine {
    @Inject @Transport(type=TransportType.STANDARD)
    private ATMTransport transport;

As always, you will want to run the example.

Output:

deposit called
communicating with bank via Standard transport

Continue reading... Click on the navigation links below the author bio to read the other pages of this article. Be sure to check out part II of this series as well: Part 2 plugins and annotation processing!

About the author: This article was written with CDI advocacy in mind by Rick Hightower with some collaboration from others. Although not a fan of EJB 3, Rick is a big fan of the potential of CDI and thinks that EJB 3.1 has come a lot closer to the mark.

Dave Macpherson replied on Mon, 2011/03/28 - 10:21am

"...Thus marking it so is redundant; and not only that its redundant" This made me laugh. Good one! (Good intro article too!) Dave

Josh Marotti replied on Mon, 2011/03/28 - 2:28pm

I have to admit, I hate that the qualifier is put into an annotation. Why can't we inject and qualify by the name we used in the @Named annotation instead (to kill the coupling that will occur with a new annotation)? Turn this: Into this: It may not be as readable, but it doesn't require the annotation and, thereby, code coupling. Perhaps it can already do this and you just haven't mentioned it?

Rick Hightower replied on Mon, 2011/03/28 - 3:23pm

Reza Rahman replied on Mon, 2011/03/28 - 4:25pm in response to: Rick Hightower

You indeed can use @Named as a qualifier. The trade-off is sacrificing Java-based type-safety and readability as mentioned.
The way I see it personally, in most real-life projects, you will only really need a handful of qualifiers, so I don't see more type-safe qualifiers as a big issue, but an overall improvement over traditional name-based injection point qualification (in fact, I suspect it is an evolutionary quirk arising from the heavy XML dependence of early DI frameworks).

Walter Bogaardt replied on Mon, 2011/03/28 - 6:36pm

Cheers Rick. Claiming a little ignorance, but will you also be illustrating testing an application via mock objects using CDI? In Spring, using the xml mappings it's as easy as swapping out the xml files, but I'd be interested in an annotation-based solution to this.

Rick Hightower replied on Mon, 2011/03/28 - 10:02pm

The short answer is Alternatives. I don't remember if I cover them here or in the next article. (I wrote the example code already). The longer answer will have to wait until I put the kids to bed.

Cloves Almeida replied on Mon, 2011/03/28 - 10:18pm

Rick Hightower replied on Mon, 2011/03/28 - 11:23pm

Also you can use alternatives. There is a section in this tutorial on alternatives: Using @Alternative to select an Alternative. Earlier, you may recall, we defined several alternative transports, namely JsonRestAtmTransport and SoapRestAtmTransport. Imagine that you are an installer of ATM machines and you need to configure certain transports at certain locations. Our previous injection points essentially inject the default, which is the StandardRestAtmTransport transport. You can scan for it. Here is a link to the section in the wiki version of this tutorial: Alternatives. As Cloves was saying, if you wanted to use the XML files, Seam has an XML CDI plugin that is similar to what you would expect in a Spring context xml file. Resin also supports an XML file version. In most cases, you should be able to use Alternatives.

Rick Hightower replied on Mon, 2011/03/28 - 11:24pm

One more thing. There are several unit testing frameworks for CDI.
We plan on writing tutorials about them in the future. There is a lot more to come. There is a lot out there. CDI is nascent but coming together quickly.

Josh Marotti replied on Tue, 2011/03/29 - 11:08am in response to: Rick Hightower

I saw that, but it is still user-created annotations or enums. When I load Spring, for example, with just a config.xml, the actual objects are 'pure' in that they don't have external annotations or objects. I liked the idea of using annotations in Spring, but it coupled the objects to Spring, which I felt was dirty. When CDI came out, it caused a dependency on Java EE 6, but at least that was something that would have been loaded if we reused the objects in another project. The necessity of bringing along the custom annotations or enums, though, makes me feel that we are back to where we were with Spring... bringing along other coupled code for an object that can almost stand alone. I'm not saying it's the end of the world or anything. It is cleaner than what we have now. I am just wishing for a blue-sky perfectionism that we just don't find in architecture.

Rick Hightower replied on Tue, 2011/03/29 - 12:36pm in response to: Josh Marotti

As mentioned earlier, the Seam Weld project has a CDI plugin that provides a "config.xml" that is similar to Spring's. Since it is a CDI plugin (called extensions), it should work on any standard CDI implementation. Expect a future article on using this plugin. Also, Resin CanDI (another CDI implementation) has XML support, and this CDI XML support is also how you configure the server itself, so the whole thing uses CDI throughout. Since the Seam CDI Weld project has a nice plugin, it might become the de facto standard for when you want XML instead of annotations. Personally, I prefer to use the annotations whenever possible and keep the XML as small as possible.
Andy Gibson replied on Tue, 2011/03/29 - 12:43pm

Josh, the benefit of qualifiers, or qualifiers with enums, is that you get type safety when defining injection points and items to be injected. You could just use @Named, but that's just attaching a non-type-safe name to the bean. I see what you are saying about having to drag external qualifiers in there, and I have two answers for that.

First, the qualifier could be regarded as part of your object semantics; the SoapTransport class really is a @Transport, and marking it with that qualifier is along the lines of implementing the Transport interface. Think about it: you don't need to implement the Transport interface; you do it to make things type safe and to take advantage of polymorphism. Likewise, you add a qualifier to make things type safe and to take advantage of looking up beans of similar types but with different qualifiers (i.e. semantic polymorphism or something!). We don't get rid of an interface to keep our objects pure when we only use one implementation of the interface (there are times we should, but we don't always). In the same way, we don't throw out qualifiers just because we aren't always accessing the beans using the qualifier.

The second option is to create your pure SoapTransport class with no CDI annotations and have a CDI-aware transport factory with a method to create a SoapTransport that is annotated with @Produces @Transport(TransportType.SOAP). You would have to build the instance (or get it from the CDI container) and return it. However, you run into problems, because if the SoapTransport class needs an xyz bean injected into it, that's not going to happen automatically, since you didn't define the injection point with @Inject; your bean knows nothing about CDI. At that point, you have to handle your own injection like you would if you had to define it in xml. In this case CDI makes you do your manual injections in type-safe code rather than in non-type-safe xml, usually with no code assist either.
Thanks for your thoughts. I think this is a good idea for a blog post.

Reza Rahman replied on Tue, 2011/03/29 - 2:59pm in response to: Rick Hightower

Just to be absolutely clear, you can put *all* meta-data in XML with CDI if you so wish (of course, you'd be reverting back to name-based qualification). XML config is definitely worth a look. As far as XML goes, it is far more compact, readable and type-safe because it is schema-driven, as opposed to most legacy XML configuration (CanDI XML is the same way). In fact, I wish we could adopt the model for Java EE overall. Now, I doubt it's anything that anyone would recommend, because other than a preference to remove all meta-data from Java code, there are few benefits to that approach and a lot of pitfalls, particularly because injection-related meta-data hardly changes all that much and is likely semantically linked to the code anyway. BTW, chances are CDI XML configuration might be standardized in Java EE 7.

Rick Hightower replied on Tue, 2011/04/05 - 4:30pm

It is a bit shorter than Part 1.

Rick Hightower replied on Wed, 2011/04/06 - 7:28pm

I changed all &lt; to < and all &gt; to >. I actually have a Python text processor that I wrote that converts from Google wiki markup into other markups, so it was as easy as adding line = line.replace("&lt;", "<").replace("&gt;", ">") to the part that does the code processing.

James Weitlauf replied on Thu, 2011/04/14 - 7:39am

Is it possible to mix the DI between xml and annotations, i.e. use @Inject for a bean that I configured using xml? The reason I ask is because I configured my Hibernate session factory in the xml, but I need that injected into my Dao, which gets injected into Foo. So I would like to @Inject the Dao into Foo and @Inject my session factory into my Dao. I am using Spring 3 if that matters. Nevermind, I found my answer... amazing what you can find if you read the documentation.
Carla Brian replied on Tue, 2012/05/29 - 9:10am

David Espinosa replied on Sat, 2012/11/17 - 2:54am

This is the best tutorial about CDI. Thanks.
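The enum-member qualifier pattern described earlier in the article can be imitated in plain Java, without a CDI container, to show how a single annotation member discriminates among implementations. The Transport, TransportType, and ATMTransport names mirror the article's listings, but the reflective lookup below is our own illustration of the idea, not CDI's actual resolution algorithm:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.util.List;

public class TransportDemo {
    enum TransportType { JSON, SOAP, STANDARD }

    // Stand-in for the CDI @Transport qualifier (no javax.inject needed here).
    @Retention(RetentionPolicy.RUNTIME)
    @interface Transport { TransportType type() default TransportType.STANDARD; }

    interface ATMTransport { String communicateWithBank(byte[] packet); }

    @Transport(type = TransportType.SOAP)
    static class SoapAtmTransport implements ATMTransport {
        public String communicateWithBank(byte[] p) { return "soap"; }
    }

    @Transport(type = TransportType.STANDARD)
    static class StandardAtmTransport implements ATMTransport {
        public String communicateWithBank(byte[] p) { return "standard"; }
    }

    // Pick the implementation whose @Transport member matches: the same
    // discrimination CDI performs when resolving an injection point.
    static ATMTransport pick(TransportType wanted) {
        List<Class<? extends ATMTransport>> beans =
            List.<Class<? extends ATMTransport>>of(
                SoapAtmTransport.class, StandardAtmTransport.class);
        for (Class<? extends ATMTransport> c : beans) {
            Transport t = c.getAnnotation(Transport.class);
            if (t != null && t.type() == wanted) {
                try {
                    return c.getDeclaredConstructor().newInstance();
                } catch (ReflectiveOperationException e) {
                    throw new IllegalStateException(e);
                }
            }
        }
        throw new IllegalArgumentException("no transport for " + wanted);
    }

    public static void main(String[] args) {
        System.out.println(
            pick(TransportType.SOAP).communicateWithBank(new byte[0]));
    }
}
```

Adding a new transport type means adding one enum value and one annotated class, which is the "stop the explosion of annotations" point the article makes.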
http://java.dzone.com/articles/cdi-di-p1?page=0,4&%24Version=1&%24Path=%2F
Installed Folsom services on a single node. nova-compute/

I have turned off use_namespaces and overlapping IPs to make the above configuration work like the traditional nova-network setup. I have configured metadata_host and metadata_port correctly in the l3-agent.ini file. In the console log, the VM fails to access the metadata service:

wget: can't connect to remote host (169.254.169.254): No route to host

However, I am able to launch a VM. I am able to ping and ssh into the VM using the private IP address. I am also able to ping/ssh from another VM using the private IP address. Usually, in a nova-network setup, if there is an error in accessing the metadata service, it would result in ping/ssh failure. With quantum, it seems to work. Any clue to fix this issue is appreciated.

Thanks, VJ
https://answers.launchpad.net/neutron/+question/218237
Invoking webservice from action
gar jar, May 8, 2008 11:07 AM

How can I configure a jBPM action to invoke a webservice? I want an action in my process to invoke a webservice that's on another machine. I only have a wsdl. I've worked with webservices with axis2 and I've created clients, but in jbpm I don't know how I can do this. Can somebody point me to a tutorial or example? Thanks

1. Re: Invoking webservice from action
Mauricio Salatino, May 8, 2008 11:20 AM (in response to gar jar)

I think this is a very easy task.. you only need to use/call your generated clients inside the action code... for example:

public class MyActionHandler implements ActionHandler {
    public void execute(ExecutionContext executionContext) throws Exception {
        WebServiceCliente.webServiceMethod();
    }
}

Sounds to me that I don't understand your question.. let me know..

2. Re: Invoking webservice from action
Ronald van Kuijk, May 8, 2008 4:07 PM (in response to gar jar)

Salaboy is correct. That would have been my answer too.

3. Re: Invoking webservice from action
gar jar, May 9, 2008 2:45 AM (in response to gar jar)

Ok. That is correct, but how can I call this webservice client if my webservice client is in another project outside the jbpm project? Should I import the entire webservice client project into the jbpm project? I'm using eclipse 3.0.3, jbpm 3.2.2, and the webservice client was generated with axis2 as a standalone project. Maybe there is an easier way to create the webservice client inside the jbpm project? I'm confused. Thanks

4. Re: Invoking webservice from action
Ronald van Kuijk, May 9, 2008 4:54 AM (in response to gar jar)

"but how can I call this webservice client if my webservice client is in another project outside the jbpm project?" How would you normally invoke remote services? Via ejb's? Webservices? Rest? Again, nothing different than normal Java.

5. Re: Invoking webservice from action
Mauricio Salatino, May 9, 2008 7:02 AM (in response to gar jar)

Yeah! Get your hands dirty! I suggest 3 steps. If you have 2 projects then do this... Generate the client and test it..
then copy the client class into the jbpm project (add all needed libraries and refactor the package names if needed). Then call the web service inside an ActionHandler exactly the same way that you do it in the test.. Hope it helps

6. Re: Invoking webservice from action
gar jar, May 13, 2008 3:46 AM (in response to gar jar)

Ok. My problem was solved. I generated the .class webservice client with jax-ws and I tested it in a simple java project. I've included the .class files into the classes folder in the jbpm project and I use the classes in the action. Works fine. Thanks salaboy and kukeltje

7. Re: Invoking webservice from action
Mauricio Salatino, May 13, 2008 8:58 AM (in response to gar jar)

That sounds good.. but you should include the source code in your second project.. or make a jar with all the .class files that you need and then include it in the bpm project...!

8. Re: Invoking webservice from action
gar jar, May 19, 2008 10:53 AM (in response to gar jar)

Why, sometimes when I execute the process and enter the node that invokes the webservice, is an exception thrown because it doesn't find the class?
java.lang.NoClassDefFoundError: com/aixtelecom/portbooker/UpdatePrice_Service
    at com.aixtelecom.portbooker.bpm.actions.Node1Action.execute(Node1Action.java:32)
    at org.jbpm.graph.def.Action.execute(Action.java:122)
    at org.jbpm.graph.def.GraphElement.executeAction(GraphElement.java:264)
    at org.jbpm.graph.def.GraphElement.executeActions(GraphElement.java:220)
    at org.jbpm.graph.def.GraphElement.fireAndPropagateEvent(GraphElement.java:190)
    at org.jbpm.graph.def.GraphElement.fireEvent(GraphElement.java:174)
    at org.jbpm.graph.def.Node.enter(Node.java:303)
    at org.jbpm.graph.def.Node$$FastClassByCGLIB$$d187eeda.invoke(<generated>)
    at net.sf.cglib.proxy.MethodProxy.invoke(MethodProxy.java:149)
    at org.hibernate.proxy.pojo.cglib.CGLIBLazyInitializer.intercept(CGLIBLazyInitializer.java:163)
    at org.jbpm.graph.def.Node$$EnhancerByCGLIB$$ef5f66fa.enter(<generated>)
    at org.jbpm.graph.def.Transition.take(Transition.java:151)
    at org.jbpm.graph.def.Node.leave(Node.java:394)
    at org.jbpm.graph.node.StartState.leave(StartState.java:70)

Jax-ws only generates .class files from the webservice. I add these .class files into a folder, and in eclipse I specify in the project's build path "Add Class Folder" and select the folder that contains the generated .class files. In the jbpm action I can use these generated classes and invoke the web service, but now, when I execute the process and it enters the node that invokes the ws, the exception java.lang.NoClassDefFoundError is thrown and I don't know why.

9. Re: Invoking webservice from action
Mauricio Salatino, May 19, 2008 11:54 AM (in response to gar jar)

Like I said before.. add the source classes and then compile it all together... make sure that you can import the classes in your action handler... Hope it helps

10. Re: Invoking webservice from action
gar jar, May 19, 2008 1:55 PM (in response to gar jar)

The problem is that jax-ws only generates .class files :/ but if I create a simple java project with a main class and run it as a java application it runs ok, but in jbpm the exception NoClassDefFoundError is thrown.
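For readers following along, here is a compilable sketch of the pattern Mauricio describes in reply #1. jBPM 3's real ActionHandler and ExecutionContext live in the org.jbpm packages; to keep this sketch self-contained we declare minimal stand-ins with the same shape, and WebServiceCliente stands in for a generated JAX-WS/axis2 client:

```java
public class ActionHandlerSketch {
    // Minimal stand-ins for jBPM 3 types, so this compiles without the jbpm jar.
    interface ExecutionContext { }
    interface ActionHandler { void execute(ExecutionContext ctx) throws Exception; }

    // Stand-in for a generated JAX-WS/axis2 client class.
    static class WebServiceCliente {
        static String webServiceMethod() { return "response-from-remote-service"; }
    }

    // The handler simply delegates to the generated client.
    static class MyActionHandler implements ActionHandler {
        String lastResponse;
        public void execute(ExecutionContext ctx) throws Exception {
            lastResponse = WebServiceCliente.webServiceMethod();
        }
    }

    public static void main(String[] args) throws Exception {
        MyActionHandler handler = new MyActionHandler();
        handler.execute(null);
        System.out.println(handler.lastResponse);
    }
}
```

In a real deployment, the handler class and the generated client classes must both end up on the process classpath (for example, packaged into one jar), which is exactly what the NoClassDefFoundError in reply #8 is about.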
https://developer.jboss.org/thread/117283
I am using matplotlib to make scatter plots. Each point on the scatter plot is associated with a named object. I would like to be able to see the name of an object when I hover my cursor over the point on the scatter plot associated with that object. In particular, it would be nice to be able to quickly see the names of the points that are outliers. The closest thing I have been able to find while searching here is the annotate command, but that appears to create a fixed label on the plot. Unfortunately, with the number of points that I have, the scatter plot would be unreadable if I labeled each point. Does anyone know of a way to create labels that only appear when the cursor hovers in the vicinity of that point?

From:

from matplotlib.pyplot import figure, show
import numpy as npy
from numpy.random import rand

show()
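The body of the referenced example did not survive extraction above. As a sketch of the same idea, the usual approach is a hidden annotation that a motion_notify_event callback repositions and toggles; the names, data, and styling below are our own:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt
import numpy as np

names = [f"point {i}" for i in range(20)]
rng = np.random.default_rng(0)
x, y = rng.random((2, 20))

fig, ax = plt.subplots()
sc = ax.scatter(x, y)

# One reusable annotation, hidden until the cursor is over a point.
annot = ax.annotate("", xy=(0, 0), xytext=(10, 10), textcoords="offset points",
                    bbox=dict(boxstyle="round", fc="w"))
annot.set_visible(False)

def update_annot(ind):
    # ind["ind"] lists the indices of the points under the cursor.
    i = ind["ind"][0]
    annot.xy = sc.get_offsets()[i]
    annot.set_text(names[i])

def hover(event):
    if event.inaxes == ax:
        contains, ind = sc.contains(event)
        if contains:
            update_annot(ind)
            annot.set_visible(True)
        else:
            annot.set_visible(False)
        fig.canvas.draw_idle()

fig.canvas.mpl_connect("motion_notify_event", hover)
# plt.show()  # uncomment when running with an interactive backend
```

PathCollection.contains(event) does the hit-testing, so only points near the cursor trigger a label; everything else stays unlabeled and readable.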
https://codedump.io/share/nSiVG61nLRG8/1/possible-to-make-labels-appear-when-hovering-over-a-point-in-matplotlib
In this article, you will learn basic interview tips in C#. I am going to explain the following C# basics concepts. I will be explaining all the topics mentioned above in depth, but the explanation will be more focused on interview preparation, i.e., I will explain all those things in the context of an interview. Actually, in the recent past, I have gone through dozens of interviews and I would like to share the experience with other developers. I am going to start with the very basic concepts, and the purpose of this article is that its readers can make themselves ready for C# interviews.

Value types & reference types

In C#, types have been divided into 2 parts: value types and reference types. Value types directly store their data in the stack portion of RAM, whereas a reference type stores a reference to the data on the stack and saves the actual data on the heap; but this is a half-truth statement. I will explain later why it's a half-truth.

Value types

A value type contains its data directly and it is stored on the stack of memory. Examples of some built-in value types are the numeric types (sbyte, byte, short, int, long, float, double, decimal), char, bool, IntPtr and DateTime; structs and enums are also value types. If I need to create a new value type, then I can use a struct. I can also use the enum keyword to create a value type of enum type.

Reference types

Reference types store the address of their data, i.e., a pointer, on the stack, and the actual data is stored in the heap area of memory. The heap memory is managed by the garbage collector. As I said, a reference type does not store its data directly, so assigning a reference type value to another reference type variable creates a copy of the reference address (pointer) and assigns it to the 2nd reference variable. Thus, both reference variables now have the same address, or pointer, to the actual data stored on the heap.
If I make any change through one variable, then that change will also be reflected in the other variable. In some cases it does not behave that way, like when reallocating memory; that may update only the one variable for which the new memory allocation has been done. Suppose you are passing an object of a customer class, let's say "objCustomer", into a method "Modify(…)" and then re-allocate the memory inside the "Modify(…)" method and set its property values. In that case, the customer object "objCustomer" inside "Modify(…)" will have different values, and those changes will not be available to the variable "objCustomer" accessed outside of "Modify(…)". You can also take the example of the String class. 'string' is the best example for this scenario, because in the case of a string, new memory is allocated each time. I will explain all those scenarios in depth in the "Usage of 'ref' & 'out' keyword in C#" and "Understanding the behavior of 'string' in C#" sections of this article.

Understanding Classes and other types

The table given below helps in understanding Classes and other types. If you are aware of all the behaviors of classes and the other types, then you can skip this section and move to the next one. I will explain all those things one by one for the developers who are not aware of them.

Note: You can see that a static class can inherit a class. I have written "no" but used the * notation with that. It means a static class cannot inherit any other class but must inherit System.Object. Thus, we can also write the System.Object class name in the inheritance list, but there is no need to write System.Object explicitly, because all classes inherit from System.Object implicitly.

What are the different types of classes we can have? We can have an abstract class, partial class, static class, sealed class, nested class and concrete class (a simple class).
Static Class

Abstract Class

abstract class Student

Sealed Class

So, you can see that we can have a partial interface, partial class and partial struct, but a partial method can only be written inside a partial struct or partial class.

Structure (struct) in C#

In C#, we can create a structure, or struct, using the keyword struct. As explained earlier, a struct is a value type and it is stored directly on the stack of memory.

Create a struct

Interface

An interface can be created using the keyword 'interface' in C#. An interface is a contract, and that's why all the methods, properties, indexers and events which are part of the interface must be implemented by the class or struct which implements the interface. An interface can contain only signatures of methods, properties, indexers and events. Following is an example of an interface.

Access Modifiers in C#

What are the access modifiers allowed with a class?

What is the use of the 'typeof' keyword in C#? It is used to get types; using it, we can do a lot of reflection work without complex logic. A few examples are shown below.

Condition 4

What is the difference between the following 2 code snippets?

Code Snippet 1

What is the use of the 'extern' keyword in C#? We use the 'extern' keyword with a method to indicate that the method has been implemented externally, i.e., outside of your current C# code. Below is a code sample from MSDN which describes the use of extern:

[DllImport("User32.dll")]
public static extern int MessageBox(int h, string m, string c, int type);

NOTE: While using [DllImport("User32.dll")], you need to include the namespace System.Runtime.InteropServices. For complete details about extern, visit here.

Sequence of Modifiers and other attributes with class and method

In the case of C#, sometimes the sequence of modifiers matters, and you should be aware of that if you are going for an interview, e.g.
public abstract partial class Student or public partial abstract class Student

You may be wondering why I am explaining such micro-level questions. I am explaining them because in some cases you will have to face such questions, and you will feel a bit irritated or upset if you are not aware of all those things. Sometimes the sequence matters and sometimes it doesn't, but we should keep some best practices in mind so that we write error-free code.

Class: [modifiers] [partial] className [:] [inheritance list]

Method: [modifiers] [partial] returnType methodName([parameters])

Method Modifiers List.

Passing a value type variable without the 'ref' keyword

In the case mentioned above, the variable x has an initial value of 20, and its value, i.e., 20, is passed as a parameter to the method Modify. Inside the method Modify it has been modified by adding 50 to its previous value, so it becomes 70, but the value of the variable 'x' outside the Modify() method is still 20, because the method modified its own copy of the value, not a reference to x.
https://www.c-sharpcorner.com/article/basic-interview-tips-in-c-sharp/
Protected

Access control can't always be static. Sometimes the mutability, nullability and access rights of variables depend on context. When dealing with these scenarios, we usually end up writing wrappers or duplicating the class for each different context. Well, no more!

Protected is a Swift package that allows you to specify the read and write rights for any type, depending on context, by using phantom types. Here's a taste of the syntax (we will explain everything in time):

struct MyRights: RightsManifest {
    typealias ProtectedType = Book

    let title = Write(\.title)
    let author = Read(\.author)
}

func work(book: Protected<Book, MyRights>) {
    book.title // ✅ works
    book.title = "Don Quixote" // ✅ works
    book.author // ✅ works
    book.author = "" // ❌ will not compile
    book.isbn // ❌ will not compile
}

This project is heavily inspired by @sellmair's post on Phantom Read Rights. For those curious: Protected relies on phantom types and dynamic member lookup to provide an easy API for specifying read and write rights for any type in Swift.

Installation

Swift Package Manager

You can install Protected via Swift Package Manager by adding the following line to your Package.swift:

import PackageDescription

let package = Package(
    [...]
    dependencies: [
        .package(url: "", from: "1.0.0")
    ]
)

Usage

So let's imagine that you run a book publishing company. Your codebase works with information about books at different stages of publishing. Most of the code revolves entirely around the following class:

public class Book {
    public var title: String?
    public var author: String?
    public var isbn: String?
}

So what's wrong with this code? Well, plenty of things:

- Everything is nullable, despite the fact that there are places in our code where we can be sure the values are no longer null.
- Everything can be read publicly.
- Everything is mutable, all of the time. And if anything is mutable, you can bet someone will mutate it, and probably in a part of the code where you are not expecting it.
One way to address this would be to create a different version of Book for every scenario: PlannedBook, PrePublishingBook, PostPublishingBook, PublishedBook, etc. But this leads to an unsustainable amount of code duplication and added complexity. These things might not look too bad when it comes to a simple class with three attributes, but as your classes get more complicated and you get more and more cases, keeping track of what can be read and mutated where becomes very difficult.

Enter our package, Protected. When working with Protected, you write your model once, and we change how you access it. We are mainly working with two things:

- RightsManifest: basically a type that specifies what you have access to and how much.
- Protected: a wrapper that will enforce at compile time that you only read and write what's allowed by the manifest.

So for our book example, we can consider that we want to safely handle the pre-publishing stage of a book. At this stage the author name is already set and should not be changed. The title is also set, but is open to change. The ISBN should not be read at all. For this case we can write a RightsManifest:

struct PrePublishRights: RightsManifest {
    typealias ProtectedType = Book

    // a) Declare that we can read and write the title
    // b) with the ! enforce that at this stage it's no longer optional
    let title = Write(\.title!)

    // c) Declare that we can only read the name of the author
    let author = Read(\.author!)

    // Do not include any declaration for the ISBN
}

A RightsManifest is a type that includes variables pointing to either:

- Write: can be read and written to
- Read: can only be read

Each attribute you declare in the manifest can then be read or written in that context.
So let’s try to use it:

```swift
let book = Protected(Book(), by: PrePublishRights())

book.title // ✅ works
book.title = "Don Quixote" // ✅ works
book.author // ✅ works
book.author = "" // ❌ will not compile
book.isbn // ❌ will not compile
```

## More Advanced Features

### Protecting nested types

If your object contains nested types, you can specify in your manifest the manifest that corresponds to that value, and Protected will in that case return a protected value.

For example, let’s say that your books point to an `Author` object where you quite insecurely store the password (I’ve seen worse security):

```swift
class Author {
    var name: String?
    var password: String?
}

class Book {
    var title: String?
    var author: Author?
}
```

And let’s say that you want to make sure that when someone grabs the author object from your book, they can’t see the password either. For that you can start by creating the manifests for both types. When it comes to specifying the read right for the author, you can declare that it should be protected by your other manifest:

```swift
struct AuthorBasicRights: RightsManifest {
    typealias ProtectedType = Author

    let name = Read(\.name)
}

struct BookBasicRights: RightsManifest {
    typealias ProtectedType = Book

    let title = Write(\.title)
    // specify that for the author you want the result to be protected by AuthorBasicRights
    let author = Read(\.author).protected(by: AuthorBasicRights())
}
```

With this, when you try to use it, you won’t be able to access the password:

```swift
let book = Protected(Book(), by: BookBasicRights())

book.title // ✅ works
let author = book.author // returns a Protected<Author, AuthorBasicRights>?
author?.name // ✅ works
author?.password // ❌ will not compile
```

### Manipulating Values and Changing Rights

All Protected values are designed to be changed. If you use the same object at different stages, you would like to change the rights associated with that object at any given time.
That’s why Protected comes with a couple of functions prefixed with `unsafeX`, to signal that you really should know what it is that you’re doing with the object here.

For example, let’s imagine that you’re writing a piece of code that will create an ISBN for a book and move it to the post-publishing stage. So you can imagine that your rights look as follows:

```swift
struct PrePublishRights: RightsManifest {
    typealias ProtectedType = Book

    let title = Write(\.title!)
    let author = Read(\.author!)
}

struct PostPublishRights: RightsManifest {
    typealias ProtectedType = Book

    let title = Read(\.title!)
    let author = Read(\.author!)
    let isbn = Read(\.isbn!)
}
```

When you publish the book, you will effectively transition your object from being governed by the pre-publish rights to the post-publish rights. You can do this with the method `unsafeMutateAndChangeRights`:

```swift
func publish(book: Protected<Book, PrePublishRights>) -> Protected<Book, PostPublishRights> {
    return book.unsafeMutateAndChangeRights(to: PostPublishRights()) { book in
        // here you have complete unsafe access to the underlying `book` object, absolutely no limitations
        book.isbn = generateISBN()
    }
}
```

Other `unsafeX` functions to deal with the underlying data when needed include:

- `unsafeMutate`: lets you mutate the underlying value however you like.
- `unsafeChangeRights`: lets you create a new version of the protected value, governed by a new manifest.
- `unsafeMapAndChangeRights`: lets you map the value onto a new one, and wrap it in a new protected value governed by a different manifest.
- `unsafeBypassRights`: just get the value, no matter what the manifest says.

### More elaborate Read rights

Read rights don’t necessarily need to be a keypath. For Read rights you have multiple options for dealing with them.
For example, you can provide more elaborate getter logic:

```swift
struct AuthorBasicRights: RightsManifest {
    typealias ProtectedType = Author

    let name = Read(\.name)
    let password = Read { obfuscate($0.password) }
}
```

You can also include a `.map` after any `Read` to manipulate the value:

```swift
struct AuthorBasicRights: RightsManifest {
    typealias ProtectedType = Author

    let name = Read(\.name)
    let password = Read(\.password).map { obfuscate($0) }
}
```

## Caveats

This is not a perfect guarantee that no one can access things they shouldn’t. Protected is not a security framework; it will not prevent people from accessing or mutating anything. It is intended as an easy way to make safe usage clear and simple depending on context.

- Code can always access everything using the `unsafeX` methods provided.
- You can (but really shouldn’t) include more rights within an extension of a manifest. This allows you to include more rights than intended while still appearing to be safe. Do not do this! Protected cannot protect you from doing this.

## Contributions

Contributions are welcome and encouraged!

## License

Protected is available under the MIT license. See the LICENSE file for more info.
# Optimizing TypeScript Memory Usage

# Update (2020-02-28)

The recording of my talk is online. Also, my PR was merged a few weeks ago! Motivated by that, I created two followups, and I plan to write another post on one of those, so watch closely.

For quite some time, I have been completely sold on TypeScript, though the typechecker itself can be very slow sometimes. At my previous job, we had a huge TS project, with more than 4,000 files, which took roughly 30 seconds to check. But the worst problem was that it was running very close to the node memory limit, and would frequently just go out of memory. That problem was even worse when running tsserver in combination with tslint, which would crash due to OOM every few minutes, as I already wrote about in a previous post. Well, since one of the more recent VSCode updates, it is possible to increase the memory limit of tsserver, which would have saved my life back then.

At some point, all this got too unbearable, and I started profiling and looking deeper into how things worked. In the end, I was able to save up to 6~8% of memory usage with a very trivial change. Let me take you on a journey of what I did to achieve these improvements.

# Creating a reduced testcase

Demonstrating this with a 4,000-file project is not really feasible, but luckily, we can reduce this to a very simple testcase.

```shell
> npm i typescript @types/node
```

Throughout this post, I will be using versions typescript@3.7.4 and @types/node@13.1.4, the most recent versions at the time. My `tsconfig.json` looks like this:

```json
{
  "compilerOptions": {
    "diagnostics": true,
    "noEmit": true,
    "strict": true,
    "target": "ES2020",
    "lib": ["ESNext"],
    "moduleResolution": "Node",
    "module": "ESNext"
  }
}
```

Very basic stuff. Using the latest lib version and target, with node modules, and without generating any emit output.
The `diagnostics` option is the same as if you would use it on the command line with `tsc --diagnostics`; it's just a convenient shortcut, because I always find the infos useful. And then just create an empty file:

```shell
> touch index.ts
```

Running tsc now gives us some (abbreviated) output:

```
> tsc
Files: 82
Lines: 22223
Memory used: 61029K
Total time: 1.28s
```

You can use the command line option `tsc --listFiles` to find out what those 82 files are. Hint: it is just all the ts internal lib files, plus all of @types/node.

Ok, so far this is not really interesting, let's extend our testcase a little bit:

```shell
> npm i aws-sdk
> echo 'export * from "aws-sdk";' > index.ts
```

(Note: This just installed aws-sdk@2.598.0, which btw is 48M on disk)

Let's run tsc again:

```
> tsc
Files: 345
Lines: 396419
Nodes: 1178724
Identifiers: 432925
Memory used: 465145K
Parse time: 2.38s
Bind time: 0.78s
Check time: 2.22s
Total time: 5.38s
```

Say whaaaaaat?¿?¿ Adding a single dependency adds a whopping 400M of memory usage and roughly 4 seconds of runtime.

I will let you in on a little secret: tsc is actually typechecking all of the aws-sdk, which can be slow. We can avoid that by using `--skipLibCheck`, which is recommended all over the internet to speed up tsc:

```
> tsc --skipLibCheck
Memory used: 375234K
Parse time: 2.28s
Bind time: 0.77s
Check time: 0.00s
Total time: 3.05s
```

Not that much of an improvement, but we got rid of the check time, and about ~100M of memory usage.

# Let's start profiling

In order to find out where all of this memory usage is coming from, we need to start profiling. Luckily, the node docs are quite good. Take a minute to read that page.

So, from now on, we will start tsc like this: `node --inspect-brk node_modules/.bin/tsc --skipLibCheck`. And I will be using Chromium: navigate to `chrome://inspect` and wait for the node process to appear. Once the debugger is attached, we can resume execution (the `--inspect-brk` switch actually suspends execution).
We watch our console in the background, and once we get the `--diagnostics` output, tsc is basically done, but it still holds on to its memory. Now we can switch to the Memory tab and take a heap snapshot. This will take a while.

In my opinion, the documentation for this tool could be a lot better, but it gives you the very basics. For someone who has never seen this before, it might be a bit overwhelming and confusing. And well, yes, it is. Memory profiling is actually a lot about intuition, and digging deeper into things.

I have expanded the (string) category. We see 9M for tsc itself, and then a number of files which look very much like the sources of the aws-sdk, for a total of 67M. tsc essentially reads all the source files of aws-sdk and keeps them in memory. According to our `--diagnostics` output, that is roughly ~250 files, and the complete aws-sdk is roughly 48M on disk, so the numbers start to add up.

Moving on, let's expand the Node category. Here we see that each of the nodes is 160 bytes, and, both according to the memory profiler and the tsc `--diagnostics` output, we have a bit more than 1 million Nodes, which adds up to almost 180M of memory. Expanding some Nodes, we also see that the Nodes have very different properties on them. One very relevant detail is also that not every property is shown; more on that later.

# Diving into some theory

To progress further, we need to know a little bit about how v8 manages its memory. Luckily, the v8 team talks quite a bit about this and other performance-relevant topics. Go and read one of the very good posts on the v8 blog, or watch one of the recordings from various conferences. Also note that this is specific to v8, and other JS engines are different, though surprisingly still quite similar. Also, I might get some of the details wrong, or they might get outdated, so take this with a grain of salt.

Alright! To move on, we have to understand how v8 saves JS objects in memory.
Very simplified, an object looks like this:

```
┌────────────┐
│ Map        │
├────────────┤
│ Properties │
├────────────┤
│ Elements   │
├────────────┤
│ …          │
└────────────┘
```

Each one of these entries (slots) is “pointer sized”, which on a 64-bit system means 8 bytes.

- The Map, also called Hidden Class or Shape, is an internal data structure which describes the object. V8 and other JS engines have a lot of internal optimizations that depend on this Shape. For example, optimized code is specialized for one or more Shapes. When you pass in an object of a different Shape, the engine will bail out to slower code.
- Properties is a pointer to an optional hashmap, which can hold additional properties that get added to an object later. You will sometimes hear or read about “dictionary mode” objects. This is it.
- Elements is a pointer to some optional indexed properties, like for an array.
- …: And then each object can have a number of inlined properties. This is what makes property access fast. The Map describes which properties are inlined at which index, and optimized code will just fetch the property from index X instead of looking it up through Properties.

Each object has at least the three special slots, so each object is at least 24 bytes. In our example, each Node is 160 bytes, so it has 20 slots; minus the special ones, that leaves us with up to 17 slots for arbitrary properties. That is quite a lot.

So, what is such a Node anyway? When TypeScript, or any other parser essentially, parses the source code, it creates an internal data structure called the Abstract Syntax Tree (AST). And as the name says, it is a tree, consisting of Nodes. Each syntax construct is represented by a different type of node.

- An Identifier (`ident`) for example only has to know its name.
- A MemberExpression (`object.property`) has references to the object and the property.
- An IfStatement (`if (condition) { consequent } else { alternate }`) also has references to its child blocks.
- … and so on …

While each one of these nodes shares some common properties, like their location in the source file for example, each syntax node has very different properties. Which makes it hard for JS engines to optimize this particular data structure, and the functions that work with it.

# Trying to improve things

There is one more very important detail I left out. V8 has a lot of heuristics, and one of them is that it groups all these objects based on the constructor function. And TypeScript unfortunately uses a single constructor function for all of these very different node types. It is quite unlikely that every AST node will need 17 properties.

With this in mind, we can try to improve things. For a live demo, we can just live-patch the `node_modules/typescript/lib/tsc.js` file, and search for `function Node(`. In the typescript source tree, we find the code here. Surprisingly, right next to it is this thing called the objectAllocator (I added prettier-ignore comments, otherwise my editor will auto-format this):

```js
function Node(kind, pos, end) {
    this.pos = pos;
    this.end = end;
    this.kind = kind;
    this.id = 0;
    this.flags = 0;
    this.modifierFlagsCache = 0;
    this.transformFlags = 0;
    this.parent = undefined;
    this.original = undefined;
}

// [… snip …]

// prettier-ignore
ts.objectAllocator = {
    getNodeConstructor: function () { return Node; },
    getTokenConstructor: function () { return Node; },
    getIdentifierConstructor: function () { return Node; },
    getSourceFileConstructor: function () { return Node; },
    getSymbolConstructor: function () { return Symbol; },
    getTypeConstructor: function () { return Type; },
    getSignatureConstructor: function () { return Signature; },
    getSourceMapSourceConstructor: function () { return SourceMapSource; },
};
```

So apparently, TypeScript already has all the necessary infrastructure in place to at least split the Nodes into four categories. Also note that it uses the same constructor function for SourceFiles, which are very different from AST Nodes.
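The change we are about to make boils down to giving each category its own constructor function, so that V8 groups their instances into separate hidden-class trees. The following is only a sketch of that idea; the constructor names are made up for illustration and are not the ones from the actual patch:

```javascript
// Sketch: one dedicated constructor per category, instead of a single
// shared `Node` constructor for everything. All four initialize the same
// fields here; in reality each category needs a different set of properties.
function Node(kind, pos, end) {
  this.pos = pos;
  this.end = end;
  this.kind = kind;
}
function TokenObj(kind, pos, end) {
  this.pos = pos;
  this.end = end;
  this.kind = kind;
}
function IdentifierObj(kind, pos, end) {
  this.pos = pos;
  this.end = end;
  this.kind = kind;
}
function SourceFileObj(kind, pos, end) {
  this.pos = pos;
  this.end = end;
  this.kind = kind;
}

// the allocator now hands out distinct constructors per category
const objectAllocator = {
  getNodeConstructor: () => Node,
  getTokenConstructor: () => TokenObj,
  getIdentifierConstructor: () => IdentifierObj,
  getSourceFileConstructor: () => SourceFileObj,
};

console.log(objectAllocator.getTokenConstructor().name); // → TokenObj
```

Because V8 keys its heuristics on the constructor, tokens, identifiers and source files now get shapes sized for their own property sets instead of one shape big enough for all of them.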
So just for fun, let's copy-paste this Node function, rename it, and use it for all of these different types…

With this trivial change done, let's try running tsc again:

```
Memory used: 353732K
```

Scrolling back up, and running these commands a few more times, the numbers are very reproducible. Our memory usage went from 375M to 353M. We just saved ourselves 22M of memory usage, which amounts to roughly ~6%.

Let's double-check using the memory profiler. In the end, we end up with smaller sizes for the individual node categories. What we see from this is that mixing SourceFile with all the rest of the Nodes is not a really good idea. Also, 104 bytes equals 10 non-special properties, which is a lot for things like Tokens, which are usually punctuation (though TS uses them for literals as well), or Identifiers, which just represent one word in the source text. Careful analysis could further shrink the memory usage, by removing unused properties, or by further splitting up and organizing the different token types.

# Bad news

While I am only writing about this in early January, I did all the analysis and patching in mid-September last year. You can check the pull request on the typescript repo; it is still open as I write this blog. :-(

When running TypeScript's own performance test suite, my patch demonstrated a 6~8% decrease in memory usage, so even more significant than the saving demonstrated with the testcase here. But there is apparently no interest from the maintainers to merge it. I asked again in early December, one month ago, to get some feedback, but got no reply whatsoever. Compared to my first PR, which was merged in less than 24 hours, this is super disappointing and frustrating for an external contributor. So if anyone has any connections to the maintainers, please kick some ass to get some progress here. :-)

The other thing is aws-sdk, which I used as the testcase here. One thing people could do is to better organize their library, for example by bundling both library code and their types.
And it just so happens that I maintain rollup-plugin-dts, which you should definitely check out :-) But introducing bundling after the fact might be a breaking change for library users, so I understand it's not always feasible.

BUT, after some digging around, I found out that the aws-sdk actually has more focused imports, so instead of `import { S3 } from "aws-sdk"`, one can do `import S3 from "aws-sdk/clients/s3"` (one reason why bundling would break things). You might want to use such focused imports to save both startup time and memory usage at runtime. I haven't checked what the runtime code actually does, but the type definitions end up including the whole world, even though you would like to use focused imports.

I created an issue, also in September, which got a single comment along the lines of “we don't really care, wait for the next major version”, which is also quite disappointing. I don't have such a deep insight, but I would guess that a fix for this would be quite simple; especially since aws-sdk has a ton of duplicated type aliases.

# Conclusion

Memory optimization is hard, especially in JS. Also, parsers and compilers are even harder to optimize in JS. It is amazing that something like an Identifier, which in minified code is only 1 character = 1 byte, is blown up to 160 bytes by parsing it into a data structure that a compiler can work with.

Profiling JS is a complex thing to do. Engines have a ton of optimizations and heuristics. They try to be very smart. They mostly succeed, but there are some code patterns that are very hard to optimize. Figuring out what is really happening requires a lot of experience, knowledge, guessing, and sometimes just luck. I hope I have opened the eyes of some by showing how I approach these kinds of problems.

One recommendation for other developers, which you can also read and hear about a lot, is to use constructor functions that initialize all the properties an object can have, with correct types.
Just putting random properties on objects at random times, like TypeScript apparently does, is really bad for performance. But in the end, the number one rule is: measure, measure, measure! And then measure some more!
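In that spirit, here is a small Node.js experiment you can use to estimate the per-object cost of a constructor like tsc's Node. This is my own sketch rather than anything from the patch above: FakeNode and its property set are invented for illustration, and the exact figures depend on the V8 version and GC timing, so treat them as ballpark numbers.

```javascript
// Rough experiment: estimate the per-object size of a Node-like constructor
// by allocating many instances and comparing heap usage before and after.
function FakeNode(kind, pos, end) {
  // initialize a handful of properties, similar in spirit to tsc's Node
  this.kind = kind;
  this.pos = pos;
  this.end = end;
  this.id = 0;
  this.flags = 0;
  this.parent = undefined;
}

function measure(count) {
  if (global.gc) global.gc(); // numbers are more stable with `node --expose-gc`
  const before = process.memoryUsage().heapUsed;
  const nodes = new Array(count);
  for (let i = 0; i < count; i++) {
    nodes[i] = new FakeNode(i % 300, i, i + 1);
  }
  const after = process.memoryUsage().heapUsed;
  // note: the estimate also includes ~8 bytes per array slot holding the reference
  return { nodes, bytesPerObject: (after - before) / count };
}

const { bytesPerObject } = measure(1000000);
console.log(`~${Math.round(bytesPerObject)} bytes per object`);
```

Playing with the number of initialized properties shows how quickly those 8-byte slots add up across a million objects.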
Last Updated on January 10, 2020

Probability for Machine Learning Crash Course. Get on top of the probability used in machine learning in 7 days.

Kick-start your project with my new book Probability for Machine Learning, including step-by-step tutorials and the Python source code files for all examples.

Let’s get started.

- Update Jan/2020: Updated for changes in scikit-learn v0.22 API.

## Who Is This Crash-Course For?

Before we get started, let’s make sure you are in the right place. This course is for developers that may know some applied machine learning. Maybe you know how to work through a predictive modeling problem end-to-end, or at least most of the main steps, with popular tools.

The lessons in this course do assume a few things about you, such as:

- You know your way around basic Python for programming.
- You may know some basic NumPy for array manipulation.
- You want to learn probability to deepen your understanding and application of machine learning.

You do NOT need to be:

- A math wiz!
- A machine learning expert!

This crash course will take you from a developer that knows a little machine learning to a developer who can navigate the basics of probabilistic methods.

Note: This crash course assumes you have a working Python 3 SciPy environment with at least NumPy installed.

## Crash-Course Overview

Below is a list of the seven lessons that will get you started and productive with probability for machine learning in Python:

- Lesson 01: Probability and Machine Learning
- Lesson 02: Three Types of Probability
- Lesson 03: Probability Distributions
- Lesson 04: Naive Bayes Classifier
- Lesson 05: Entropy and Cross-Entropy
- Lesson 06: Naive Classifiers
- Lesson 07: Probability Scores

Each lesson may require you to go off and find out how to do things: part of the point is to learn where to look for help on the statistical methods, the NumPy API, and the best-of-breed tools in Python. (Hint: I have all of the answers directly on this blog; use the search box.)

Post your results in the comments; I’ll cheer you on! Hang in there; don’t give up.

Note: This is just a crash course.
For a lot more detail and fleshed-out tutorials, see my book on the topic titled “Probability for Machine Learning.”

## Want to Learn Probability for Machine Learning

Take my free 7-day email crash course now (with sample code). Click to sign-up and also get a free PDF Ebook version of the course.

## Lesson 01: Probability and Machine Learning

In this lesson, you will discover why machine learning practitioners should study probability to improve their skills and capabilities.

Probability is a field of mathematics that quantifies uncertainty. Machine learning is about developing predictive models from uncertain data. There are three main sources of uncertainty in machine learning:

- Noise in observations, e.g. measurement errors and random noise.
- Incomplete coverage of the domain, e.g. you can never observe all data.
- Imperfect model of the problem, e.g. all models have errors; some are useful.

Uncertainty in applied machine learning is managed using probability.

- Probability and statistics help us to understand and quantify the expected value and variability of variables in our observations from the domain.
- Probability helps to understand and quantify the expected distribution and density of observations in the domain.
- Probability helps to understand and quantify the expected capability and variance in performance of our predictive models when applied to new data.

This is the bedrock of machine learning. On top of that, we may need models to predict a probability, we may use probability to develop predictive models (e.g. Naive Bayes), and we may use probabilistic frameworks to train predictive models (e.g. maximum likelihood estimation).

### Your Task

For this lesson, you must list three reasons why you want to learn probability in the context of machine learning. These may be related to some of the reasons above, or they may be your own personal motivations.

Post your answer in the comments below. I would love to see what you come up with.
In the next lesson, you will discover the three different types of probability and how to calculate them.

## Lesson 02: Three Types of Probability

In this lesson, you will discover a gentle introduction to joint, marginal, and conditional probability between random variables.

Probability quantifies the likelihood of an event. Specifically, it quantifies how likely a specific outcome is for a random variable, such as the flip of a coin, the roll of a die, or drawing a playing card from a deck.

We can discuss the probability of just two events: the probability of event A for variable X and event B for variable Y, which in shorthand is X=A and Y=B, where the two variables are related or dependent in some way. As such, there are three main types of probability we might want to consider.

### Joint Probability

We may be interested in the probability of two simultaneous events, like the outcomes of two different random variables. For example, the joint probability of event A and event B is written formally as:

- P(A and B)

The joint probability for events A and B is calculated as the probability of event A given event B multiplied by the probability of event B. This can be stated formally as follows:

- P(A and B) = P(A given B) * P(B)

### Marginal Probability

We may be interested in the probability of an event for one random variable, irrespective of the outcome of another random variable. There is no special notation for marginal probability; it is just the sum or union over all the probabilities of all events for the second variable for a given fixed event for the first variable.

- P(X=A) = sum P(X=A, Y=yi) for all y

### Conditional Probability

We may be interested in the probability of an event given the occurrence of another event.
For example, the conditional probability of event A given event B is written formally as:

- P(A given B)

The conditional probability for event A given event B can be calculated using the joint probability of the events as follows:

- P(A given B) = P(A and B) / P(B)

### Your Task

For this lesson, you must practice calculating joint, marginal, and conditional probabilities.

For example, if a family has two children and the oldest is a boy, what is the probability of this family having two sons? This is called the “Boy or Girl Problem” and is one of many common toy problems for practicing probability.

Post your answer in the comments below. I would love to see what you come up with.

In the next lesson, you will discover probability distributions for random variables.

## Lesson 03: Probability Distributions

In this lesson, you will discover a gentle introduction to probability distributions.

In probability, a random variable can take on one of many possible values, e.g. events from the state space. A specific value or set of values for a random variable can be assigned a probability.

There are two main classes of random variables.

- Discrete Random Variable: values are drawn from a finite set of states.
- Continuous Random Variable: values are drawn from a range of real-valued numerical values.

A discrete random variable has a finite set of states; for example, the colors of a car. A continuous random variable has a range of numerical values; for example, the height of humans.

A probability distribution is a summary of probabilities for the values of a random variable.

### Discrete Probability Distributions

A discrete probability distribution summarizes the probabilities for a discrete random variable. Some examples of well-known discrete probability distributions include:

- Poisson distribution.
- Bernoulli and binomial distributions.
- Multinoulli and multinomial distributions.
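As a small illustration of the discrete case, the binomial distribution from the list above can be evaluated with nothing but the standard library. This is just a sketch (note that `math.comb` requires Python 3.8+):

```python
# Sketch: the binomial PMF, P(exactly k successes in n independent trials),
# where each trial succeeds with probability p. Standard library only.
from math import comb

def binomial_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

# probabilities of 0..10 heads in 10 fair coin flips
dist = [binomial_pmf(k, 10, 0.5) for k in range(11)]
for k, pr in enumerate(dist):
    print(k, round(pr, 4))

# the probabilities over all possible outcomes sum to 1
print(round(sum(dist), 6))  # → 1.0
```

Libraries like SciPy offer the same via `scipy.stats.binom`, but the idea fits in a few lines.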
### Continuous Probability Distributions

A continuous probability distribution summarizes the probability for a continuous random variable. Some examples of well-known continuous probability distributions include:

- Normal or Gaussian distribution.
- Exponential distribution.
- Pareto distribution.

### Randomly Sample Gaussian Distribution

We can define a distribution with a mean of 50 and a standard deviation of 5 and sample random numbers from this distribution. We can achieve this using the normal() NumPy function. The example below samples and prints 10 numbers from this distribution.

Running the example prints 10 numbers randomly sampled from the defined normal distribution.

### Your Task

For this lesson, you must develop an example to sample from a different continuous or discrete probability distribution function.

For a bonus, you can plot the values on the x-axis and the probability on the y-axis for a given distribution to show the density of your chosen probability distribution function.

Post your answer in the comments below. I would love to see what you come up with.

In the next lesson, you will discover the Naive Bayes classifier.

## Lesson 04: Naive Bayes Classifier

In this lesson, you will discover the Naive Bayes algorithm for classification predictive modeling.

In machine learning, we are often interested in a predictive modeling problem where we want to predict a class label for a given observation. One approach to solving this problem is to develop a probabilistic model. From a probabilistic perspective, we are interested in estimating the conditional probability of the class label given the observation, or the probability of class y given input data X:

- P(y | X)

Bayes Theorem provides an alternate and principled way for calculating the conditional probability using the reverse of the desired conditional probability, which is often simpler to calculate:

- P(y | X) = P(X | y) * P(y) / P(X)
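To make the Bayes Theorem calculation concrete before moving on, here is a tiny worked example; the probabilities are purely illustrative, made up for this sketch:

```python
# Worked Bayes Theorem example with made-up numbers:
# P(y | X) = P(X | y) * P(y) / P(X)
p_y = 0.01              # prior: 1% of cases belong to the class
p_x_given_y = 0.9       # likelihood: the observation occurs for 90% of that class
p_x_given_not_y = 0.05  # ... and for 5% of the rest

# total probability of the observation X (law of total probability)
p_x = p_x_given_y * p_y + p_x_given_not_y * (1 - p_y)

# posterior via Bayes Theorem
p_y_given_x = p_x_given_y * p_y / p_x
print(round(p_y_given_x, 4))  # → 0.1538
```

Even with a strong likelihood, the small prior keeps the posterior modest, which is exactly the kind of reasoning the theorem formalizes.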
The direct application of Bayes Theorem for classification becomes intractable, especially as the number of variables or features (n) increases. Instead, we can simplify the calculation and assume that each input variable is independent. Although dramatic, this simpler calculation often gives very good performance, even when the input variables are highly dependent.

We can implement this from scratch by assuming a probability distribution for each separate input variable, calculating the probability of each specific input value belonging to each class, and multiplying the results together to give a score used to select the most likely class.

- P(yi | x1, x2, …, xn) = P(x1|yi) * P(x2|yi) * … * P(xn|yi) * P(yi)

The scikit-learn library provides an efficient implementation of the algorithm if we assume a Gaussian distribution for each input variable. The complete example of fitting a Gaussian Naive Bayes model (GaussianNB) on a test dataset is listed below.

Running the example fits the model on the training dataset, then makes predictions for the same first example that we used in the prior example.

### Your Task

For this lesson, you must run the example and report the result.

For a bonus, try the algorithm on a real classification dataset, such as the popular toy classification problem of classifying iris flower species based on flower measurements.

Post your answer in the comments below. I would love to see what you come up with.

In the next lesson, you will discover entropy and the cross-entropy scores.

## Lesson 05: Entropy and Cross-Entropy

In this lesson, you will discover cross-entropy for machine learning.

Information theory is a field of study concerned with quantifying information for communication. We can calculate the amount of information there is in an event using the probability of the event:

- Information(x) = -log( p(x) )

We can also quantify how much information there is in a random variable. This is called entropy and summarizes the amount of information required on average to represent events.
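The two quantities just described can be computed in a few lines of pure Python. This sketch uses log base 2, so the results are measured in bits:

```python
from math import log2

def information(p):
    # information content ("surprise") of an event with probability p, in bits
    return -log2(p)

def entropy(probs):
    # average information of a random variable with the given distribution
    return -sum(p * log2(p) for p in probs)

# a fair coin flip carries exactly 1 bit of information per event
print(information(0.5))  # → 1.0

# the entropy of a fair six-sided die is log2(6) bits
print(round(entropy([1 / 6] * 6), 3))  # → 2.585
```

Rarer events carry more information, and a uniform distribution maximizes entropy for a given number of states.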
Entropy can be calculated for a random variable X with K discrete states as follows:

- Entropy(X) = -sum(i=1 to K) p(ki) * log(p(ki))

Cross-entropy is a measure of the difference between two probability distributions for a given random variable or set of events. It is widely used as a loss function when optimizing classification models. It builds upon the idea of entropy and calculates the average number of bits required to represent or transmit an event from one distribution compared to the other distribution.

- CrossEntropy(P, Q) = -sum x in X P(x) * log(Q(x))

We can make the calculation of cross-entropy concrete with a small example. Consider a random variable with three events as different colors. We may have two different probability distributions for this variable. We can calculate the cross-entropy between these two distributions. The complete example is listed below.

Running the example first calculates the cross-entropy of Q from P, then P from Q.

### Your Task

For this lesson, you must run the example and describe the results and what they mean. For example, is the calculation of cross-entropy symmetrical?

Post your answer in the comments below. I would love to see what you come up with.

In the next lesson, you will discover how to develop and evaluate a naive classifier model.

## Lesson 06: Naive Classifiers

In this lesson, you will discover how to develop and evaluate naive classification strategies for machine learning.

Classification predictive modeling problems involve predicting a class label given an input to the model. Given a classification model, how do you know if the model has skill or not? This is a common question on every classification predictive modeling project. The answer is to compare the results of a given classifier model to a baseline or naive classifier model.

Consider a simple two-class classification problem where the number of observations is not equal for each class (e.g.
it is imbalanced) with 25 examples for class-0 and 75 examples for class-1. This problem can be used to consider different naive classifier models. For example, consider a model that randomly predicts class-0 or class-1 with equal probability. How would it perform? We can calculate the expected performance using a simple probability model.

- P(yhat = y) = P(yhat = 0) * P(y = 0) + P(yhat = 1) * P(y = 1)

We can plug in the occurrence of each class (0.25 and 0.75) and the predicted probability for each class (0.5 and 0.5) and estimate the performance of the model.

- P(yhat = y) = 0.5 * 0.25 + 0.5 * 0.75
- P(yhat = y) = 0.5

It turns out that this classifier is pretty poor. Now, what if we consider predicting the majority class (class-1) every time? Again, we can plug in the predicted probabilities (0.0 and 1.0) and estimate the performance of the model.

- P(yhat = y) = 0.0 * 0.25 + 1.0 * 0.75
- P(yhat = y) = 0.75

It turns out that this simple change results in a better naive classification model, and is perhaps the best naive classifier to use when classes are imbalanced.

The scikit-learn machine learning library provides an implementation of the majority class naive classification algorithm called the DummyClassifier that you can use on your next classification predictive modeling project. The complete example is listed below. Running the example prepares the dataset, then defines and fits the DummyClassifier on the dataset using the majority class strategy.

Your Task

For this lesson, you must run the example and report the result, confirming whether the model performs as we expected from our calculation. As a bonus, calculate the expected probability of a naive classifier model that randomly chooses a class label from the training dataset each time a prediction is made. Post your answer in the comments below. I would love to see what you come up with. In the next lesson, you will discover metrics for scoring models that predict probabilities.
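The expected-accuracy arithmetic in this lesson can be verified from scratch, without scikit-learn. A minimal sketch using the same 25/75 class split; it complements, rather than replaces, the scikit-learn DummyClassifier example:

```python
# Expected accuracy of a naive classifier: P(yhat = y) summed over classes.
def expected_accuracy(class_priors, predicted_probs):
    return sum(p * q for p, q in zip(predicted_probs, class_priors))

priors = [0.25, 0.75]            # 25 examples of class-0, 75 of class-1

# Random guessing with equal probability per class.
print(expected_accuracy(priors, [0.5, 0.5]))    # 0.5

# Always predicting the majority class (class-1).
print(expected_accuracy(priors, [0.0, 1.0]))    # 0.75

# Bonus from the task: randomly drawing a label with the training frequencies.
print(expected_accuracy(priors, [0.25, 0.75]))  # 0.625
```

The last line answers the bonus question: a classifier that draws labels at random with the training-set frequencies is expected to be right 62.5% of the time.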
Lesson 07: Probability Scores

In this lesson, you will discover two scoring methods that you can use to evaluate the predicted probabilities on your classification predictive modeling problem. Predicting probabilities instead of class labels for a classification problem can provide additional nuance and uncertainty for the predictions. The added nuance allows more sophisticated metrics to be used to interpret and evaluate the predicted probabilities. Let's take a closer look at the two popular scoring methods for evaluating predicted probabilities.

Log Loss Score

Logistic loss, or log loss for short, calculates the negative log likelihood between the predicted probabilities and the observed probabilities. Although developed for training binary classification models like logistic regression, it can be used to evaluate multi-class problems and is functionally equivalent to calculating the cross-entropy derived from information theory. A model with perfect skill has a log loss score of 0.0. The log loss can be implemented in Python using the log_loss() function in scikit-learn. For example:

Brier Score

The Brier score, named for Glenn Brier, calculates the mean squared error between predicted probabilities and the expected values. The score summarizes the magnitude of the error in the probability forecasts. The error score is always between 0.0 and 1.0, where a model with perfect skill has a score of 0.0. The Brier score can be calculated in Python using the brier_score_loss() function in scikit-learn. For example:

Your Task

For this lesson, you must run each example and report the results. As a bonus, change the mock predictions to make them better or worse and compare the resulting scores. Post your answer in the comments below. I would love to see what you come up with. This was the final lesson.

The End! (Look How Far You Have Come)

You made it. Well done! Take a moment and look back at how far you have come.
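Before the wrap-up, the two scores from Lesson 07 can be sketched from scratch. These are simplified stand-ins for the scikit-learn log_loss() and brier_score_loss() functions named above, with mock labels and probabilities assumed for illustration:

```python
from math import log

def log_loss(y_true, y_prob):
    """Average negative log likelihood for binary labels."""
    return -sum(y * log(p) + (1 - y) * log(1 - p)
                for y, p in zip(y_true, y_prob)) / len(y_true)

def brier_score(y_true, y_prob):
    """Mean squared error between predicted probabilities and outcomes."""
    return sum((p - y) ** 2 for y, p in zip(y_true, y_prob)) / len(y_true)

y_true = [1, 0, 1, 1, 0]            # observed outcomes (assumed)
y_prob = [0.9, 0.1, 0.8, 0.7, 0.2]  # predicted probabilities (assumed)

print(log_loss(y_true, y_prob))     # closer to 0.0 is better
print(brier_score(y_true, y_prob))  # 0.0 is a perfect score
```

Making the mock predictions sharper (closer to 0 or 1 on the correct side) drives both scores toward 0.0, which is the bonus exercise in this lesson.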
You discovered:

- The importance of probability in applied machine learning.
- The three main types of probability and how to calculate them.
- Probability distributions for random variables and how to draw random samples from them.
- How Bayes theorem can be used to calculate conditional probability and how it can be used in a classification model.
- How to calculate information, entropy, and cross-entropy scores and what they mean.
- How to develop and evaluate the expected performance for naive classification models.
- How to evaluate the skill of a model that predicts probability values for a classification problem.

Take the next step and check out my book on Probability for Machine Learning.

Summary

How did you do with the mini-course? Did you enjoy this crash course? Do you have any questions? Were there any sticking points? Let me know. Leave a comment below.

Lesson 2: "For this lesson, you must practice calculating joint, marginal, and conditional probabilities." This might be a stupid question but "how"? I even googled for calculating joint… etc, watched some khan videos and found some classroom pdfs obviously prepared by universities but I couldn't embody the concept in my mind so I can comprehend it. Maybe I am not suitable for this passion. Sorry if my question is stupid again.

Good question, this will help:

Wow, thank you I will read the post. And I will use the search function before asking questions:) Thanks.

Hi please tell me how do you plot the sample to show if it is normally distributed? (the third day of course)

See this tutorial:

I have gone through Entropy and Cross entropy. There are many divergence measures also. That is, how to find the distance between two probability distributions? But your tutorials are nice and your work is amazing. I want to learn more and more.
See this on kl-divergence:

import pandas as pd
from sklearn.naive_bayes import GaussianNB

iris = pd.read_csv('', header=None)  # importing the dataset
modelIris = GaussianNB()
modelIris.fit(iris.iloc[:, 0:4], iris.iloc[:, 4])  # fitting the model
modelIris.predict([[7.2, 4, 5, .5]])  # model predicts this data point to be a versicolor

Nice work!

Three reasons to learn probability.

1. To make my foundations strong.
2. To understand beauty of mathematics.
3. To predict the future.

Nice work!

My Top-3 Reasons:

1. Explain the feasibility (uncertainty) of ML models and their explanation in simplest of terms to Business users, utilizing probability as base
2. Generating effective/actionable business insights using applied probability
3. Representing any real world scenarios using "Conditional Probability" (somehow feel this is how LIFE works)
If the criminal's appearance is so unique that the probability of a random person matching it is 1 out of 12 billion, that does not mean a man with no supporting evidence connecting him to the crime, but who does match the description, is going to be innocent only 1 out of 12 billion times.

2. ML practitioners need to know when differences in measures/values (mean, median, differences in variance, standard deviation or properly scaled units of measure) are "significant" or different enough to be evidence.

3. Certain lessons in probability could help find patterns in data or results, such as "seasonality".

4. Bonus: Knowledge in probability can help optimize code or algorithms (code patterns) in niche cases.

Nice work!

So, if I understand the cross-entropy, it's not symmetric because the same outcome doesn't have the same significance for the two sets. For instance, if I have a weighted die which has a 95% chance of rolling a 6, and a 1% chance of each other outcome, and a fair die with a 17% chance of rolling each number, then if I roll a 6 on one of the dice, I only favour it being the weighted one about 6:1, but if I roll anything else I favour it being the fair one about 17:1. (This assumes my prior is that I'm equally likely to have picked either; let's say I just own the two dice.) Then if I pick the weighted die, I'll have to roll it a few times to convince myself it's the weighted one, but if I pick the unweighted one, I'll convince myself it's that one in many fewer rolls (if I only need to be 2-sigma confident, probably in 1 roll).

Good question, yes kl-divergence and cross-entropy are not symmetrical. Also, this may help:

In "Lesson 03: Probability Distributions": "A discrete random variable has a finite set of states". This description is not exact. It instead should be "A discrete random variable has a countable set of states". For example, a discrete random variable may take values from N* = {1, 2, 3, 4, 5, …}. This set is countable, but not finite.
Thanks for your precision, but in practice, if it's not finite, we must model it a different way.

Three reasons:

1. Career development
2. Personal interest
3. Better understanding for ML algorithms

Thanks!

The code for plotting binomial distribution of flipping biased coin (p=0.7) 100 times.

from numpy import random
import matplotlib.pyplot as plt
import seaborn as sns

sns.distplot(random.binomial(n=1, p=0.7, size=100), hist=True, kde=False)

Well done!

Lesson 1:

– Working in a subsurface discipline (geophysics), the data I receive have undergone a long chain of processing and interpretation. Data rarely come with uncertainty, normally just the "best estimate". I would like to engage colleagues in other disciplines to propagate uncertainty as well, and then I need to include that in my own analysis
– Even after having statistics at university and repetition in ML courses, I find that you need to be exposed to probability estimation regularly to have it in your fingertips. It can't be repeated too often
– There are so many useful tools available now in Python, and crash-courses like this is a good way to get an overview of the most useful ones

Nice work!

Lesson two: A parallel classic case is the selection of one of three options, where only one gives an award. You select one without revealing its content. It is then shown that one of the remaining options does not give a reward, and you get the option to switch from your original choice to the last one. Should you do it? The answer is yes. When you make the initial selection P(right) = 1/3. When it is revealed that another option was wrong, the last option has P(right) = 2/3, but your first selection is still locked into the P(right) = 1/3. This shows the difference between marginal probability (the first selection) and the conditional probability (the second selection).

Well done!

the reasons to learn probability:

1- Mathematical form of prediction starts with probability.
2- Arranging data in graphical form provides insight into the data, like mean, SD.
3- Probability helps to define sample data and population.

Nice work!

Three reasons why we want to learn probability in the context of machine learning.

1. To evaluate our ML model performance mathematically (i.e., training the ML model with larger datasets is quite a complex process; as ML engineers we don't know how our model learns patterns from larger datasets, so we can use probability parameters to evaluate the model's learning performance).
2. Pre-processing data before giving the data to ML models (i.e., ML models need clean and accurate data to learn; if data is bad then ML results are also bad, so we need to assess the dataset before teaching ML models; we can use probabilistic techniques to conduct assessments of data).
3. Interpret ML model learning parameters (i.e., we can tune the ML model parameters to make a good learnable model; that tuning process needs mathematical and probability knowledge).

Nice work!

very nice tutorial to follow, I have a question: if I register will I receive the free eBook about probability-for-machine-learning?

Thanks! Yes, if you sign up for the email course you will get a PDF version of the course sent to you via email.

1) Reinforcing and mastering my basic knowledge of probability will let me improve my capability on problem resolution
2) To later master fuzzy logic, which I understand envelops classic probability
3) To master ML algorithms understanding and coding

Well done.

Lesson 3:

# Sample an exponential distribution
from numpy.random import exponential
import matplotlib.pyplot as plt

n = 10
# Generate the sample
sample_e = []
for i in range(n):
    sample_e.append(exponential(scale=1.0))
print(sample_e)
plt.hist(sample_e)
plt.xlim(0, 10)

Well done!

H(P, Q): 3.323 bits
H(Q, P): 5.232 bits

The cross-entropy is not symmetrical.

Nice work!

Day 6: Naive Classifiers.
The model performs as expected, as it predicted the class with 75% accuracy, the highest possible. This is the result of setting "most_frequent" as the strategy in DummyClassifier. The expected probability of a naive classifier model that randomly chooses a class label from the training dataset each time a prediction is made is 62.5%. This calculation is possible by setting the "stratified" strategy in DummyClassifier.

Well done.
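One of the comments above works through the classic three-door switching problem. A quick simulation of the assumed setup (a prize behind one of three doors; the host always opens a non-prize, non-chosen door) confirms that switching wins about two thirds of the time:

```python
import random

def play(switch, rng):
    doors = [0, 1, 2]
    prize = rng.choice(doors)
    pick = rng.choice(doors)
    # Host opens a door that is neither the pick nor the prize.
    opened = rng.choice([d for d in doors if d != pick and d != prize])
    if switch:
        # Switch to the one remaining closed door.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == prize

rng = random.Random(42)
trials = 10000
wins = sum(play(True, rng) for _ in range(trials))
print(wins / trials)  # roughly 0.66, i.e. about 2/3
```

Running it with switch=False instead gives a win rate of roughly 1/3, matching the locked-in probability of the first selection.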
https://machinelearningmastery.com/probability-for-machine-learning-7-day-mini-course/
Monday, September 25, 2017

Coverage and Atelier

I continued to work in Atelier for #2074. The default values for coverage_command and prep_command have changed and are now defined in atelier.invlib.ns together with the others. I completely removed special handling of projects having a pytest.ini file. One visible result is that the test coverage of atelier increased from 20% to 40% (under Python 3).

Under Python 3, in atelier.sphinxconf.insert_input, I had a TypeError: a bytes-like object is required, not 'str'. It had to do with the from io import BytesIO as StringIO in that module. It was about decoding the output of a subprocess…

In atelier.sphinxconf.blog I had:

WARNING: Inline emphasis start-string without end-string.

but funnily only under Python 3. It took me some time to figure out that it was caused by:

from builtins import map

It seems that the map() function has the following docstring under Python 3:

map(func, *iterables) --> map object

I fixed the problem by just removing the import statement, together with similar lines for range and object. I continue to not understand the purpose of these imports.

Optimizations in Lino Avanti

I checked in the code changes I did on Saturday. Users can now filter clients by coaching type. See the lino_xl.lib.coachings.Coachable mixin.

The presence_sheet.weasy.html template didn't show the names. Fixed. And it now shows the state (and remark) of the lino_xl.lib.cal.Guest object if such an object exists. If no Guest object exists, it continues to print either X or blank based on the end_date and start_date of the enrolment.

TODO: ATM this is still a single template used by both Lino Avanti and Lino Voga. Their usage is quite different though, and I guess that sooner or later we will need to split this template into two. The interesting question will then be how to keep as much as possible within a common base template.

The quick_search_fields of lino_avanti.lib.avanti.Client now includes the ref field.
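The bytes-vs-str TypeError mentioned above is the usual Python 3 pitfall with subprocess output. A minimal illustration (not Atelier's actual code):

```python
import subprocess
import sys

# Under Python 3, check_output() returns bytes, not str.
out = subprocess.check_output([sys.executable, "-c", "print('hello')"])
print(type(out))  # <class 'bytes'>

# Feeding bytes to str-based APIs raises the TypeError seen above,
# so decode explicitly before treating the output as text.
text = out.decode("utf-8").strip()
print(text)  # hello
```

Under Python 2 the same call returned a str, which is why code like this could work on one interpreter and crash on the other.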
The detail window of lino_avanti.lib.courses.Enrolment now also shows the data problems (i.e. lino.modlib.checkdata.ProblemsByController, which now uses a summary panel and whose detail_layout I optimized).

Documentation

I moved the docstrings from the lino_xl.lib.coachings module to the coachings : Managing coachings page. Lino surprised me once more: the help text of the coaching_type field is indeed given in the coachings : Managing coachings page.

release@avanti

I upgraded their production site because all items of #2075 are done.
https://luc.lino-framework.org/blog/2017/0925.html
This is the mail archive of the libstdc++@gcc.gnu.org mailing list for the libstdc++ project.

Hi Gaby, Phil, Benjamin, all,

first let me point out this: I stumbled on this deque-issue *by chance*, only when checking, to be sure, if perhaps something else was using those HP/SGI extensions present in stl_uninitialized.h. The real reason why today I became interested in the latter is that some of the extensions already inside include/ext (which I'm moving to __gnu_cxx) use for the implementation 'uninitialized_copy_n' (to wit, ropeimpl.h and stl_rope.h). This is, by itself, not troublesome, I think: something properly belonging to __gnu_cxx using something else properly belonging to __gnu_cxx.

As a matter of fact, the presence of extensions in stl_uninitialized.h used by stl_deque.h is blocking *all* the cleanups for the extensions already present in include/ext. Honestly, at this point, I don't think it is an option trying to make quick surgery to stl_deque.h, eliminating with a sleight of hand the use of __uninitialized_copy_fill, __uninitialized_fill_copy, __uninitialized_copy_copy. We would have to alter the code non-trivially and either replace those calls with something else, duplicate them inside namespace std, or something else I cannot foresee right now...

The patch for stl_uninitialized.h I have just proposed uses as a sort of *temporary* hack some bits of ext/memory in stl_deque.h, but only in the form of 4 *calls* (no namespace pollution!). This would allow me to clean up momentarily all the remaining stuff already inside include/ext, as I said above. I would rather prefer proceeding in this way, and then work with Phil in cleaning up stl_deque.h, -possibly- (only possibly) removing completely the reference to those ext/memory bits.

Alternatively I could propose leaving for the moment stl_uninitialized.h and stl_deque.h as they are (i.e., forget about the present patch) and call std::uninitialized_copy_n from ropeimpl.h and stl_rope.h.
Then, in a second pass, work on the former two, clean them, and consequently adjust those calls to __gnu_cxx::uninitialized_copy_n. I see only the preceding two possibilities if we don't want to block the cleanup of include/ext (that is, rope, slist, hash_map, hash_set) in the hope that we can definitely eliminate references to __uninitialized_copy_fill, __uninitialized_fill_copy, __uninitialized_copy_copy from stl_deque.h.

What do you think?

As regards the necessity of some testcases for the extensions, I will of course work on them ASAP... By the way, as you probably know better than me, many library features sadly are still untested, not only the extensions...

Cheers,
Paolo.
http://gcc.gnu.org/ml/libstdc++/2001-12/msg00427.html
In a clustered environment, all nodes should be configured to run against the same LDAP directory. In that case, for your initial deployment, it may be easier to copy the portal-ext.properties file to all of the nodes so the first time they start up, the settings are correct. Regardless of which method you use, the available settings are the same. You configure the global values from the LDAP tab of the Authentication page.

Enabled: Check this box to enable LDAP Authentication.

Required: Check this box if LDAP authentication is required. Liferay will then not allow a user to log in unless he or she can successfully bind to the LDAP directory first. Uncheck this box if you want to allow users with Liferay accounts but no LDAP accounts to log in to the portal.

LDAP Servers: Liferay supports connections to multiple LDAP servers. You can click on the Add button beneath this heading to add LDAP servers. We explain how to configure new LDAP servers below.

Import/Export: You can import and export user data from LDAP directories using the following options:

Import Enabled: Check this box to cause Liferay to do a mass import from your LDAP directories. If you want Liferay to only synchronize users when they log in, leave this box unchecked. Definitely leave this unchecked if you are working in a clustered environment. Otherwise, all of your nodes would try to do a mass import when each of them starts up.

Import on Startup Enabled: Check this box to have Liferay run the import when it starts up. Note: This box only appears if you check the Import Enabled box above.

Export Enabled: Check this box to enable Liferay to export user accounts from the database to LDAP. Liferay uses a listener to track any changes made to the User object and will push these changes out to the LDAP server whenever the User object is updated. Note that by default on every login, fields such as LastLoginDate are updated. When export is enabled, this has the effect of causing a user export every time the user logs in.
You can disable this by setting the following property in your portal-ext.properties file:

users.update.last.login=false

Use LDAP Password Policy: Liferay uses its own password policy by default. This can be configured on the Password Policies page of the control panel. Check the Use LDAP Password Policy box if you want to use the password policies defined by your LDAP directory. Once this is enabled, the Password Policies tab will display a message stating you are not using a local password policy. You will now have to use your LDAP directory's mechanism for setting password policies. Liferay does this by parsing the messages in the LDAP controls returned by your LDAP server. By default, the messages in the LDAP controls that Liferay is looking for are the messages returned by the Fedora Directory Server. If you are using a different LDAP server, you will need to customize the messages in Liferay's portal-ext.properties file, as there is not yet a GUI for setting this. See below for instructions describing how to do this. Once you've finished configuring LDAP, click the Save button. Next, let's look at how to add LDAP servers.

Adding LDAP Servers

The Add button beneath the LDAP servers heading allows you to add LDAP servers. If you have more than one, you can arrange the servers by order of preference using the up/down arrows. When you add an LDAP server, you will need to provide several pieces of data so Liferay can bind to that LDAP server and search it for user records. Regardless of how many LDAP servers you add, each server has the same configuration options.

Server Name: Enter a name for your LDAP server.

Default Values: Several leading directory servers are listed here. If you are using one of these, select it and click the Reset Values button. The rest of the form will be populated with the proper default values for that directory.

Connection: These settings cover the basic connection to LDAP.
Base Provider URL: This tells the portal where the LDAP server is located. Make sure the machine on which Liferay is installed can communicate with the LDAP server. If there is a firewall between the two systems, check to make sure the appropriate ports are opened.

Base DN: This is the Base Distinguished Name for your LDAP directory. It is usually modeled after your organization. For a commercial organization, it may look similar to this: dc=companynamehere,dc=com.

Principal: By default, the administrator ID is populated here. If you have removed the default LDAP administrator, you will need to use the fully qualified name of the administrative credential you use instead. You need an administrative credential because Liferay will be using this ID to synchronize user accounts to and from LDAP.

Credentials: This is the password for the administrative user.

This is all you need to make a regular connection to an LDAP directory. The rest of the configuration is optional. Generally, the default attribute mappings provide enough data to synchronize back to the Liferay database when a user attempts to log in. To test the connection to your LDAP server, click the Test LDAP Connection button.

If you are running your LDAP directory in SSL mode to prevent credential information from passing through the network unencrypted, you will have to perform extra steps to share the encryption key and certificate between the two systems. For example, assuming your LDAP directory happens to be Microsoft Active Directory on Windows Server 2003, you would take the following steps to share the certificate:

Click Start → Administrative Tools → Certificate Authority. Highlight the machine that is the certificate authority, right-click on it, and click Properties. From the General menu, click View Certificate. Select the Details view, and click Copy To File. Use the resulting wizard to save the certificate as a file.
As with the CAS install (see the below section entitled Single Sign-On), you will need to import the certificate into the cacerts keystore. The import is handled by a command like the following:

keytool -import -trustcacerts -keystore /some/path/jdk1.5.0_11/jre/lib/security/cacerts -storepass changeit -noprompt -alias MyRootCA -file /some/path/MyRootCA.cer

The keytool utility ships as part of the Java SDK. Once this is done, go back to the LDAP page in the control panel. Modify the LDAP URL in the Base DN field to the secure version by changing the protocol to ldaps and the port to 636 like this:

ldaps://myLdapServerHostname:636

Save the changes. Your Liferay Portal will now use LDAP in secure mode for authentication.

Users: This section contains settings for finding users in your LDAP directory.

Authentication Search Filter: The search filter box can be used to determine the search criteria for user logins. By default, Liferay uses users' email addresses for their login names. If you have changed this setting, you will need to modify the search filter here, which has been configured to use the email address attribute from LDAP as a search criterion. For example, if you changed Liferay's authentication method to use screen names instead of the email addresses, you would modify the search filter so it can match the entered login name:

(cn=@screen_name@)

Import Search Filter: Depending on the LDAP server, there are different ways to identify the user. Generally, the default setting (objectClass=inetOrgPerson) is fine but if you want to search for only a subset of users or users that have different object classes, you can change this.

User Mapping: The next series of fields allows you to define mappings from LDAP attributes to Liferay fields. Though your LDAP user attributes may be different from LDAP server to LDAP server, there are five fields Liferay requires to be mapped for the user to be recognized.
You must define a mapping to the corresponding attributes in LDAP for the following Liferay fields:

- Screen Name
- Full Name
- Middle Name
- Job Title
- Group

The control panel provides default mappings for commonly used LDAP attributes. You can also add your own mappings if you wish.

Test LDAP Users: Once you have your attribute mappings set up (see above), click the Test LDAP Users button and Liferay will attempt to pull LDAP users and match them with their mappings as a preview.

Test LDAP Groups: Click the Test LDAP Groups button to display a list of the groups returned by your search filter.

Export: This section contains settings for exporting user data from LDAP.

Users DN: Enter the location in your LDAP tree where the users will be stored. When Liferay does an export, it will export the users to this location.

User Default Object Classes: When a user is exported, the user is created with the listed default object classes. To find out what your default object classes are, use an LDAP browser tool such as JXplorer to locate a user and view the Object Class attributes stored in LDAP for that user.

Groups DN: Enter the location in your LDAP tree where the groups will be stored. When Liferay does an export, it will export the groups to this location.

Group Default Object Classes: When a group is exported, the group is created with the listed default object classes. To find out what your default object classes are, use an LDAP browser tool such as JXplorer to locate a group and view the Object Class attributes stored in LDAP for that group.

Figure 15.16: Mapping LDAP Groups

Once you've set all your options and tested your connection, click Save. From here, you can add another LDAP server or set just a few more options that apply to all of your LDAP server connections.

LDAP Options Not Available in the GUI

Although most of the LDAP configuration can be done from the control panel, there are several configuration parameters that are only available by editing portal-ext.properties.
These options will be available in the GUI in future versions of Liferay Portal but for now they can only be configured by editing the properties file. If you need to change any of these options, copy the LDAP section from the portal.properties file into your portal-ext.properties file. Note that since you have already configured LDAP from the GUI, any settings from the properties file that match settings already configured in the GUI will be ignored. The GUI, which stores the settings in the database, always takes precedence over the properties file.

ldap.auth.method=bind
#ldap.auth.method=password-compare

Set either bind or password-compare for the LDAP authentication method. Bind is preferred by most vendors so you don't have to worry about encryption strategies. Password compare does exactly what it sounds like: it reads the user's password out of LDAP, decrypts it and compares it with the user's password in Liferay, syncing the two.

ldap.auth.password.encryption.algorithm=
ldap.auth.password.encryption.algorithm.types=MD5,SHA

Set the password encryption to use to compare passwords if the property ldap.auth.method is set to password-compare.

ldap.import.method=[user,group]

If you set this to user, Liferay will import all users from the specified portion of the LDAP tree. If you set this to group, Liferay will search all the groups and import the users in each group. If you have users who do not belong to any groups, they will not be imported.

ldap.error.password.age=age
ldap.error.password.expired=expired
ldap.error.password.history=history
ldap.error.password.not.changeable=not allowed to change
ldap.error.password.syntax=syntax
ldap.error.password.trivial=trivial
ldap.error.user.lockout=retry limit

These properties are a list of phrases from error messages which can possibly be returned by the LDAP server. When a user binds to LDAP, the server can return controls with its response of success or failure.
These controls contain a message describing the error or the information that is coming back with the response. Though the controls are the same across LDAP servers, the messages can be different. The properties described here contain snippets of words from those messages and will work with Red Hat's Fedora Directory Server. If you are not using that server, the word snippets may not work with your LDAP server. If they don't, you can replace the values of these properties with phrases from your server's error messages. This will enable Liferay to recognize them. Next, let's look at the Single Sign-On solutions Liferay supports.

SSO

Single Sign-On solutions allow you to provide a single login credential for multiple systems. This allows you to have people authenticate to the Single Sign-On product and they will be automatically logged in to Liferay and to other products as well. Liferay supports several single sign-on solutions. Of course, if your product is not yet supported, you may choose to implement support for it yourself by use of the extension environment. Alternatively, your organization can choose to sponsor support for it. Please contact sales@liferay.com for more information about this.

Authentication: Central Authentication Service (CAS)

CAS is an authentication system originally created at Yale University. It is a widely-used open source single sign-on solution and was the first SSO product to be supported by Liferay. Please follow the documentation for CAS to install it on your application server of choice. Your first step will be to copy the CAS client .jar file to Liferay's library folder. On Tomcat, this is in [Tomcat Home]/webapps/ROOT/WEB-INF/lib. Once you've done this, the CAS client will be available to Liferay the next time you start it. The CAS Server application requires a properly configured Secure Socket Layer certificate on your server to work.
If you wish to generate one yourself, you will need to use the keytool utility that comes with the JDK. Your first step is to generate the key. Next, you export the key into a file. Finally, you import the key into your local Java key store. For public, Internet-based production environments, you will need to either purchase a signed key from a recognized certificate authority (such as Thawte or Verisign) or have your key signed by a recognized certificate authority. The generate, export and import steps follow this keytool pattern:

keytool -genkey -alias tomcat -keypass changeit -keyalg RSA
keytool -export -alias tomcat -keypass changeit -file %FILE_NAME%
keytool -import -alias tomcat -file %FILE_NAME% -keypass changeit -keystore $JAVA_HOME/jre/lib/security/cacerts

If you are on a Windows system, replace $JAVA_HOME above with %JAVA_HOME%. Of course, all of this needs to be done on the system on which CAS will be running.

Once your CAS server is up and running, you can configure Liferay to use it. This is a simple matter of navigating to the Settings → Authentication → CAS tab in the control panel. Enable CAS authentication and then modify the URL properties to point to your CAS server.

Enabled: Check this box to enable CAS single sign-on.

Import from LDAP: A user may be authenticated from CAS and not yet exist in the portal. Select this to automatically import users from LDAP if they do not exist in the portal.

Authentication: Facebook

Liferay Portal also enables users to log in using their Facebook accounts. To enable this feature, you simply need to select the Enable box and enter the Application ID and Application Secret which should have been provided to you by Facebook. Facebook SSO works by taking the primary Facebook email address and searching for the same email address in Liferay's User_ table. If a match is found, the user is automatically signed on (provided the user clicked allow from the Facebook dialog). If there isn't a match, the user is prompted in Liferay to add a user from Facebook. Once selected, a new user is created by retrieving four fields from Facebook (first name, last name, email address and gender).

Authentication: NTLM

NTLM is a Microsoft protocol that can be used for authentication through Microsoft Internet Explorer.
Though Microsoft has adopted Kerberos in modern versions of Windows server, NTLM is still used when authenticating to a workgroup. Liferay Portal now supports NTLM v2 authentication. NTLM v2 is more secure and has a stronger authentication process than NTLM v1.

Enabled: Check this box to enable NTLM authentication.

Domain Controller: Enter the IP address of your domain controller. This is the server that contains the user accounts you want to use with Liferay.

Domain: Enter the domain / workgroup name.

Service Account: You need to create a service account for NTLM. This account will be a computer account, not a user account.

Service Password: Enter the password for the service account.

Authentication: OpenID

OpenID is a new single sign-on standard which is implemented by multiple vendors. The idea is multiple vendors can implement the standard and then users can register for an ID with the vendor they trust. The credential issued by that vendor can be used by all the web sites that support OpenID. Some high profile OpenID vendors are AOL, LiveDoor and LiveJournal. Please see the OpenID site for a more complete list. A main benefit of OpenID for the user is he or she no longer has to register for a new account on every site in which he or she wants to participate. Users can register on one site (the OpenID provider's site) and then use those credentials to authenticate to many web sites which support OpenID. Many web site owners often struggle to build communities because end users are reluctant to register for so many different accounts. Supporting OpenID makes it easier for site owners to build their communities because the barriers to participating (i.e., the effort it takes to register for and keep track of many accounts) are removed. All of the account information is kept with the OpenID provider, making it much easier to manage this information and keep it up to date.
Liferay Portal can act as an OpenID consumer, allowing users to automatically register and sign in with their OpenID accounts. Internally, the product uses OpenID4Java to implement the feature. OpenID is enabled by default in Liferay but can be disabled here.

Atlassian Crowd

Atlassian Crowd is a web-based Single Sign-On product similar to CAS. Crowd can be used to manage authentication to many different web applications and directory servers. Because Atlassian Crowd implements an OpenID producer, Liferay works and has been tested with it. Simply use the OpenID authentication feature in Liferay to log in using Crowd.

Authentication: OpenSSO

OpenSSO is an open source single sign-on solution that comes from the code base of Sun's System Access Manager product. Liferay integrates with OpenSSO, allowing you to use OpenSSO to integrate Liferay into an infrastructure that contains a multitude of different authentication schemes against different repositories of identities. You can set up OpenSSO on the same server as Liferay or a different box. Follow the instructions at the OpenSSO site to install OpenSSO. Once you have it installed, create the Liferay administrative user in it. Users are mapped back and forth by screen names. By default, the Liferay administrative user has a screen name of test, so in OpenSSO, you would register the user with the ID of test and an email address of test@liferay.com. Once you have the user set up, log in to OpenSSO using this user. In the same browser window, go to the URL for your server running Liferay and log in as the same user, using the email address test@liferay.com. Go to the control panel and click Settings → Authentication → OpenSSO. Modify the three URL fields (Login URL, Logout URL and Service URL) so they point to your OpenSSO server (i.e., only modify the host name portion of the URLs), click the Enabled check box and then click Save. Liferay will then redirect users to OpenSSO when they click the Sign In link.
Authentication: SiteMinder

SiteMinder is a single sign-on implementation from Computer Associates. Liferay 5.2 introduced built-in integration with SiteMinder. SiteMinder uses a custom HTTP header to implement its single sign-on solution. To enable SiteMinder authentication in Liferay, check the Enabled box on the SiteMinder tab. If you are also using LDAP with Liferay, you can check the Import from LDAP box. If this box is checked, users authenticated from SiteMinder who do not exist in the portal will be imported from LDAP. The last field defines the header SiteMinder is using to keep track of the user. The default value is already populated. If you have customized the field for your installation, enter the custom value here. When you are finished, click Save. Next, let's learn about the SAML 2.0 Provider EE plugin.

SAML

SAML is an XML-based open standard data format for exchanging authentication and authorization data between parties known as an identity provider and a service provider. An identity provider is a trusted provider that enables users to use single sign-on to access other websites. A service provider is a website that hosts applications and grants access only to identified users with proper credentials. SAML is maintained by the OASIS Security Services Technical Committee. See for more information.

Liferay 6.1 EE and later versions support SAML 2.0 integration via the SAML 2.0 Provider EE plugin. This plugin is provided as an app from Liferay Marketplace that allows Liferay to act as a SAML 2.0 identity provider or as a service provider. First, we'll look at how to set Liferay up as an Identity Provider and then we'll look at how to set it up as a Service Provider.

Setting up Liferay as a SAML Identity Provider

Once the plugin is installed, additional options appear in the SAML Admin Control Panel portlet. There are three tabs:

- General
- Identity Provider
- Service Provider Connections
Finally, after you've saved your certificate and private key information, check the Enabled box at the top of the General tab and click Save. Great! You've successfully set Liferay up as a SAML Identity Provider!

To configure Liferay's SAML Identity Provider Settings, navigate to the Identity Provider tab of the SAML Admin Control Panel. Note that a single Liferay instance can be configured as a SAML Identity Provider or as a SAML Service Provider, but not both! If you've already set up one Liferay instance as a SAML Identity Provider, use a different Liferay instance as a SAML Service Provider.

Suppose that you have two Liferay instances running on ports 8080 and 9080 of your host. Suppose further that you've configured the Liferay running on port 8080 as a SAML Identity Provider and the Liferay running on port 9080 as a SAML Service Provider, following the instructions above. If your Identity Provider and Service Provider have been correctly configured, navigating to initiates the SAML Identity Provider based login process. To initiate the SAML Service Provider based login process, just navigate to the Liferay running on port 9080 and click Sign In, navigate to, or try to access a protected resource URL such as a Control Panel URL. Next, let's examine how to configure portal-wide user settings.

Users

The Users page of Portal Settings has.
https://help.liferay.com/hc/es/articles/360018153991-Integrating-Liferay-Users-into-Your-Enterprise-
Urs, Very interesting this post (actually all your blog), congratulations… I understand the point, but I don't see a thing clear… You say ).” How can you minimize navigation in the solution explorer (VS) in a VS solution with many VS projects? E.g., MyProduct_PL.exe, MyProduct_AL.dll, MyProduct_DL.dll and MyProduct_IL.dll (PL = PresentationLayer, AL = ApplicationLayer, DL = DomainLayer, IL = InfrastructureLayer). Creating a folder on each project? (each project would have the namespaces as you recommend). A folder like "Theme1.Editing" or "Theme2.Search" in MyProduct_PL.exe with views, another folder on MyProduct_AL.dll with application services, another on MyProduct_DL.dll with entities and value objects, and another on MyProduct_IL.dll where the repositories, NH mappings, … live. Or perhaps I should not have "as many" projects? Yes to the logical layers, but not as many physical tiers. E.g., MyProduct_Client.exe and MyProduct_Server.dll, also performing a work of layering. Thanks very much,

[…] Structure your code by feature – Urs Enzler discusses an alternative approach to structuring your code, looking at structuring it based on feature / requirement, building more structure on top of the standard layered approach and easing maintainability in the long term. […]

Why did you split your code into these dedicated layer oriented assemblies? If these assemblies are not installed on different tiers I suggest to consolidate as many as possible into a single assembly. Then navigating gets a lot easier as explained. Cheers Urs

Why? :mmmm Damn books and frameworks!!!! :-p After "talking" to you (wait! Was it a secret? X-D) I restructured my current project into only one project (also because the project must end now before the agreed date): it will be deployed with ClickOnce on each client PC. Only the database will be on the server.
Anyway, despite being a single project, I have views and presenters (PL), entities and value objects (DL), repositories and NH mappings (IL), … One physical tier, several logical layers. You were wondering about this, right? What I like about having many projects is the "power" of class scoping: I can create a public class in a layer and hide several classes from the world – with internal scope – because they are an implementation detail, ensuring that nobody can ever instantiate/reference them accidentally. With a single assembly, internals = public.

Very good your link about themes vs. epics vs. user stories ()!!!!

This fits right in with a lot of what Bob Martin has been saying lately about clean architecture and the rediscovery of Ivar Jacobson's Boundary Control Entity pattern.

This is an interesting idea. Are there any open source projects out there with this structure? I'd like to visually see a bit better what you are referring to. I can understand and agree with parts of it, but things like shared base classes / common services / etc don't seem to fit. Unless you have some core/shared libraries and then jump into Feature X / Feature Y.

@Nathan Palmer Unfortunately, I don't know any open source project following this approach that is big enough to get a picture. My open source project bbv.Common follows these principles, but it's a library and there, everything is easy anyway. And infrastructure code can often be extracted into a library style package of its own, thus defining its own technical "feature" and therefore namespace. That's how bbv.Common started 8 years ago.

@JC Yes, you can't hide classes anymore. But regarding extensibility it was never a good idea to hide them anyway. I deal with this situation by putting these kinds of classes into 'Internal' namespaces. This has also to do with the way you build up your application. We always use some kind of IoC container.
Therefore, all implementation classes have to be accessible by the binding definition, which permits internal anyway because bindings should be specified at a place as high up in the dependency graph as possible.

[…] and blog post of the short talk "Structure your code by feature" by Urs […]

[…] classes and moving classes between namespaces to structure the code in a better understandable way (here is explained how we structure our code). Sometimes, this results in unit tests not renamed or moved […]

Hear Hear. I've seen a few projects with a load of assemblies like Product.DTO, Product.DAL, Product.BusinessLayer, etc, and they are awful to work on – a change such as adding a field needs you to touch files in 4 or 5 different places and recompile all those projects. Structure by feature is much nicer.
http://www.planetgeek.ch/2012/01/25/3077/
SYNOPSIS

#include <nng/protocol/bus0/bus.h>

DESCRIPTION

The bus protocol provides for building mesh networks where every peer is connected to every other peer. In this protocol, each message sent by a node is sent to every one of its directly connected peers.

All message delivery in this pattern is best-effort, which means that peers may not receive messages. Furthermore, delivery may occur to some, all, or none of the directly connected peers. (Messages are not delivered when peer nodes are unable to receive.) Hence, send operations will never block; instead, if the message cannot be delivered for any reason it is discarded.

Socket Operations

The nng_bus0_open() functions create a bus socket. This socket may be used to send and receive messages. Sending messages will attempt to deliver to each directly connected peer.

Protocol Versions

Only version 0 of this protocol is supported. (At the time of writing, no other versions of this protocol have been defined.)

Protocol Options

The bus protocol has no protocol-specific options.

Protocol Headers

When using a "raw" bus socket, received messages will contain the incoming pipe ID as the sole element in the header. If a message containing such a header is sent using a raw bus socket, then the message will be delivered to all connected pipes except the one identified in the header. This behavior is intended for use with device configurations consisting of just a single socket. Such configurations are useful in the creation of rebroadcasters, and this capability prevents a message from being routed back to its source. If no header is present, then a message is sent to all connected pipes.

When using "cooked" bus sockets, no message headers are present.
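The routing rule described under Protocol Headers can be modeled in a few lines. This is only a conceptual sketch of the delivery semantics, not the nng C API; Python is used purely for illustration:

```python
def bus_delivery_targets(connected_pipes, header_pipe_id=None):
    """Which pipes a bus socket delivers a message to.

    With no header (cooked mode, or raw mode without one), the message
    goes to every connected pipe. With a raw-mode header naming the
    incoming pipe, that pipe is skipped, so a single-socket rebroadcaster
    does not route a message back to its source.
    """
    if header_pipe_id is None:
        return list(connected_pipes)
    return [p for p in connected_pipes if p != header_pipe_id]

print(bus_delivery_targets([11, 12, 13]))                     # [11, 12, 13]
print(bus_delivery_targets([11, 12, 13], header_pipe_id=12))  # [11, 13]
```

Note that this models only the fan-out choice; the best-effort aspect (messages silently discarded when a peer cannot receive) is a separate property of the real transport.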
https://nng.nanomsg.org/man/v1.2.2/nng_bus.7.html
Welcome back for part 5 of building a wireless temperature sensor network. In part 3 and 4 we integrated a portable power source for each sensor design. In this post we will add internet monitoring and data logging capabilities to our temperature sensor network. To do this we will replace the Arduino Uno in our sensor network controller with an Arduino Yun. Out of all the Arduinos I have used, the Arduino Yun is my favorite, because it can do so much! If you are not familiar with the Arduino Yun I encourage you to read up on it and its capabilities by clicking here.

For this project we will be using the WiFi capability of the Yun to monitor our sensors over the internet and the microSD card drive to log our temperature data. If you are using a brand new Yun, be sure to configure its WiFi before using it for this project; for instructions on configuring the Yun's WiFi click here. The Yun does not come with a microSD card so you will need to purchase one and plug it into the drive on the Yun for this project.

Unlike the Uno, the Yun does not have an onboard voltage regulator so we will need to add one to convert the output voltage of our DC power supply to 5 V for powering the Yun. For the voltage regulator I decided to use an LM7805C since it is low cost, outputs 5 V, and they sell it at the local Radio Shack. You could also use the LM317 regulator we used for the sensor 1 design; you would just need to tweak the output control resistors to get 5 V instead of 3.3 V at the output. Below you will find the updated schematic of the network controller with the Yun and LM7805C added in place of the Uno. Please note that you will need to use a heat sink with the voltage regulator to dissipate heat: since the Yun has a microcontroller and a processor, it can easily consume current levels > 300 mA.
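The heat-sink advice comes straight from the linear-regulator power equation: everything above the 5 V output is dropped across the LM7805 and turned into heat, P = (Vin − Vout) × I. A quick worked example (the 9 V input is just an assumed supply voltage for illustration):

```python
def regulator_dissipation_w(v_in, v_out, i_load_a):
    """Heat dissipated by a linear regulator, in watts.

    v_in / v_out are in volts, i_load_a is the load current in amps.
    """
    return (v_in - v_out) * i_load_a

# Example: a 9 V supply feeding the LM7805 while the Yun draws 300 mA
# dissipates roughly (9 - 5) * 0.3 = 1.2 W -- enough to warrant a heat sink.
print(regulator_dissipation_w(9.0, 5.0, 0.3))
```

A higher input voltage only makes this worse, which is one reason switching regulators are often preferred when the input-output difference is large.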
The Yun does not have dedicated serial communication capabilities like the Uno so we will have to add serial communication capability in our sketch code to be able to communicate with the XBee module. The benefit of not having dedicated serial communication pins is we can now use any digital pins on the Arduino as our communication pins, instead of always having to use D0 and D1. Notice from the figure that we are using D10 as the receive pin and D11 as the transmit pin.

To add internet monitoring and data logging to our project as well as re-add serial communication to our project we will take advantage of the awesome libraries that are available for the Yun. Below is the sketch code for our project. Notice at the top of the sketch we are using five different libraries! The first is the Bridge library; this library is needed for the microcontroller to communicate with the processor on the Yun. The Bridge is needed to access a lot of the Yun's capabilities including communication over the internet. The YunServer and YunClient libraries are what we will use to communicate temperature data over the internet. These libraries use the TCP / IP protocol which uses a client / server model to communicate data. In this project the controller is the server and whatever device we use to connect and get the temperature data is the client. The SoftwareSerial library is for serial communication between the Yun and the XBee module (this capability was built-in to the Uno). The FileIO library is what we will use to log time stamped temperature data to our microSD card.

/*This sketch was written for the Arduino Yun. The Yun has an XBee Series 2 RF Module connected to it as a coordinator. The Yun uses the XBee coordinator to communicate with two XBee routers. Each XBee router has an analog pin set to measure a temperature sensor and a second analog pin set to measure the voltage level of the battery powering the XBee.
This program receives the temperature readings from the two router XBees and allows the readings to be read over the internet and logs the temperature data to a microSD card. This sketch is part of a tutorial on building a wireless sensor network, the tutorial can be found at*/

#include <Bridge.h> //needed for comm between microcontroller and processor on Yun
#include <YunServer.h> //needed for LAN or WiFi device to connect (Yun acts as server)
#include <YunClient.h> //use to manage connected clients
#include <SoftwareSerial.h> //Need to create serial comm with XBee (Yun does not have dedicated serial port like Uno)
#include <FileIO.h> //Used for logging data to a file stored on the microSD card

#define PORT 6666 //port number used to communicate temperature over the internet using TCP/IP

YunServer server(PORT); //create server object and set comm port
SoftwareSerial mySerial(10, 11); // Declare serial object and set serial comm pins (RX, TX)

int addr1; //variables to hold end point XBee address
int addr2; //Each address variable is two bytes
int addr3; //XBee address is 64 bits long but first 32 bits are common to all so just need last 32
int addr4;
String sen1Temp; //stores temperature value for XBee with sensor 1
String sen2Temp; //stores temperature value for XBee with sensor 2
String sen3Temp; //stores temperature value from Yun with sensor 3
int sen3Counter = 0; //This counter variable is used to print sensor 3 every 5 seconds
float batDead = 6.2; //battery pack voltage level where it needs to be replaced

void setup() {
  // put your setup code here, to run once:
  mySerial.begin(9600); //Set the baud rate for serial communication
  Bridge.begin(); //initiate the SPI based communication between the microcontroller and processor on the Yun
  FileSystem.begin(); //Initializes the SD card and FileIO class
  server.noListenOnLocalhost(); //Tells the server to begin listening for incoming connections
  server.begin(); //Start the server so it is ready to take connections from
clients
  pinMode(13, OUTPUT); //set LED pin to output
  digitalWrite(13, HIGH); //turn LED on so we know all setup is complete and YUN is connected to WiFi
}

void loop() {
  if (mySerial.available() >= 23) { // Wait for a full XBee frame to be ready
    if (mySerial.read() == 0x7E) { // Look for 7E because it is the start byte
      for (int i = 1; i<19; i++) { // Skip through the frame to get to the unique 32 bit address
        //get each byte of the XBee address
        if(i == 8) { addr1 = mySerial.read(); }
        else if (i==9) { addr2 = mySerial.read(); }
        else if (i==10) { addr3 = mySerial.read(); }
        else if (i==11) { addr4 = mySerial.read(); }
        else { byte discardByte = mySerial.read(); } //else throw out byte we don't need it
      }
      int aMSBBat = mySerial.read(); // Read the first analog byte of battery voltage level data
      int aLSBBat = mySerial.read(); // Read the second byte
      int aMSBTemp = mySerial.read(); // Read the first analog byte of temperature data
      int aLSBTemp = mySerial.read(); // Read the second byte
      float voltTemp = calculateXBeeVolt(aMSBTemp, aLSBTemp); //Get XBee analog values and convert to voltage values
      float voltBat = calculateBatVolt(aMSBBat, aLSBBat); //Get Xbee analog value and convert it to battery voltage level
      int id = indentifySensor(addr1,addr2,addr3,addr4); //save identity of sensor
      //This if else statement checks the battery voltage, if it is too low alert the user
      if(voltBat > batDead) {
        setAndLogSensorValue(id,calculateTempF(voltTemp),1); //set sensor string and log temperature only if battery is still good
      }
      else {
        setAndLogSensorValue(id,voltBat,0); //set sensor string for low battery, temperature reading will not be logged
      }
    }
  }
  delay(10); //delay to allow operations to complete
  //This if else statement is used to print the reading from sensor 3 once every ~5 seconds to match the XBee routers
  //It uses the delay() function above to calculate 5 seconds.
  //May need to tweak count in if statement to get 5 seconds
  if (sen3Counter < 300) { sen3Counter++; }
  else {
    setAndLogSensorValue(3,calculateTempF(calculateArduinoVolt(analogRead(A0))),1);
    sen3Counter = 0; //reset counter back to zero
  }

  YunClient client = server.accept(); //accept any client trying to connect
  if(client.connected()){ //If we are connected to a client send identity and temperature data
    if (sen1Temp.length()==0) { sen1Temp = "Empty value\n"; } //if string is empty, let client know
    client.write((uint8_t*)&sen1Temp[0], sen1Temp.length()); //send sensor 1 temp or low battery warning
    if (sen2Temp.length() == 0) { sen2Temp = "Empty value\n"; } //if string is empty, let client know
    client.write((uint8_t*)&sen2Temp[0], sen2Temp.length()); //Send sensor 2 temp or low battery warning
    if (sen3Temp.length() == 0) { sen3Temp = "Empty value\n"; } //if string is empty, let client know
    client.write((uint8_t*)&sen3Temp[0], sen3Temp.length()); //Send sensor 3 temp
    client.stop(); //disconnect from client
  }
}

//This function takes in the XBee address and returns the identity of the Xbee that sent the temperature data
int indentifySensor(int a1, int a2, int a3, int a4) {
  //These arrays are the unique 32 bit address of the two XBees in the network
  int rout1[] = {0x40, 0xB0, 0xA3, 0xA6};
  int rout2[] = {0x40, 0xB0, 0x87, 0x85};
  if(a1==rout1[0] && a2==rout1[1] && a3==rout1[2] && a4==rout1[3]) { return 1; } //temp data is from XBee or sensor one
  else if(a1==rout2[0] && a2==rout2[1] && a3==rout2[2] && a4==rout2[3]) { return 2; } //temp data is from XBee or sensor two
  else { return -1; } //Data is from an unknown XBee
}

float calculateTempF(float v1) { //calculate temp in F from temp sensor voltage (TMP36: 10 mV per degree C, 500 mV offset)
  float tempC = (v1 - 0.5) * 100.0;
  return (tempC * 9.0 / 5.0) + 32.0;
}

//This function converts the two analog bytes from the XBee into a voltage value (10 bit ADC, 1.2 V reference)
float calculateXBeeVolt(int aMSB, int aLSB) {
  return (((aMSB * 256) + aLSB) * (1.2 / 1023));
}

//This function converts an Arduino analog reading into a voltage value (10 bit ADC, 5 V reference)
float calculateArduinoVolt(int reading) {
  return (reading * (5.0 / 1023));
}

//This function calculates the measured voltage of the battery powering the sensor
float calculateBatVolt(int aMSB, int aLSB) {
  float mult = 10.0; //multiplier for calculating battery voltage
  return (calculateXBeeVolt(aMSB, aLSB)*mult); //xbee voltage x voltage divider
multiplier equals battery voltage
}

//This function builds the temperature strings that are communicated over the internet and logs time stamped temperature data to file on
//microSD card
void setAndLogSensorValue(int sen, float val, int temp) {
  String dataString = getTimeStamp() + " "; //get time info and append space to the end
  if (sen == 1) {
    if (temp == 1) {
      sen1Temp = "Sensor 1 temperature: " + String(val) + "\n";
      dataString += sen1Temp;
      writeToFile(dataString); //write temp value to file
    }
    else { sen1Temp = "Sensor 1 low bat volt: " + String(val) + "\n"; }
  }
  else if (sen == 2) {
    if (temp == 1) {
      sen2Temp = "Sensor 2 temperature: " + String(val) + "\n";
      dataString += sen2Temp;
      writeToFile(dataString); //write temp value to file
    }
    else { sen2Temp = "Sensor 2 low bat volt: " + String(val) + "\n"; }
  }
  else {
    sen3Temp = "Sensor 3 temperature: " + String(val) + "\n";
    dataString += sen3Temp;
    writeToFile(dataString); //write temp value to file
  }
}

// This function returns a string with the current date and time from the Yun's Linux side
String getTimeStamp() {
  String result;
  Process time; //the Process class (part of the Bridge library) runs the Linux date command
  time.begin("date");
  time.addParameter("+%D-%T"); //MM/DD/YY-HH:MM:SS format
  time.run();
  while (time.available() > 0) {
    char c = time.read();
    if (c != '\n') { result += c; } //drop the trailing newline
  }
  return result;
}

//This function writes data to a file called TempData on the microSD card
void writeToFile(String data) {
  // open the file. note that only one file can be open at a time,
  // so you have to close this one before opening another.
  // The FileSystem card is mounted at the following "/mnt/FileSystema1"
  File dataFile = FileSystem.open("/mnt/sd/TempData.txt", FILE_APPEND);
  // if the file is available, write to it:
  if (dataFile) {
    dataFile.println(data);
    dataFile.close();
  }
  // if the file isn't open then you could signal an error here
  else { }
}

The code for the Arduino Yun sketch is well commented, but let's highlight certain areas and add some explanation:

- A server object is created in the setup function and in each iteration of the main loop it checks if there is a client device that wants to connect. If there is a client trying to connect the server accepts it, sends the latest temperature data from each sensor, and closes the connection.
In the TCP / IP protocol you have to choose a port number to communicate over. We will use Port 6666 for our temperature network communication.

- Each temperature sensor reading is time stamped and stored in the TempData.txt file. If the TempData.txt file does not exist on the microSD card, it will be created. If the file does already exist it will append the new data to any existing data in the file. If sensor 1 or 2's battery gets too low, that sensor's temperature reading is no longer stored in the TempData.txt file.

- Since the Yun is doing complex operations like writing to a file, reading data from the XBee, and sending readings over the internet, it is not as easy as it was earlier in the project to sync sensor 3 to the same timing intervals as sensors 1 and 2. You may need to adjust the sensor 3 delay loop a bit to get the timing just right.

- When you "Verify" this sketch you will notice that the code takes up approximately 84% of program memory space in the microcontroller, so be sure to carefully optimize if you need to add extra capabilities, more sensors, or more extensive error checking to this code.

The easiest way to test the internet monitoring capability of our project without having to write a program is to use a Terminal and the "netcat" command. If you have a Linux based computer or a Mac you have a Terminal. For Macs, the Terminal is located in the Mac HD --> Applications --> Utility directory. The netcat command allows you to communicate with another internet connected device, such as the Yun, using the TCP / IP protocol. To connect to the Yun using the nc command (short for netcat) we need to know the Yun's IP address and the port we want to communicate over. We know the port from our sketch (6666). If the Yun is powered on and connected to your router, its IP address can be obtained from the Arduino IDE: just go to Tools --> Port.
You can also obtain the IP address of the Yun by going to the Device Table of the internet router you are using (see the router's instruction manual to access the Device Table). To connect to the Yun with your PC Terminal use the following command sequence:

nc <Yun IP Address> 6666

Please note for this to work your computer has to be connected to the same local network or router that your Yun is connected to. For instance if you are in your home, the Yun and your computer both should be connected to your home internet router. If you are using a Windows computer and following along with this project, there is an open source netcat.exe program out there on the web. You can download it at . I have not had the chance to try it out.

Refer to the image below of a Terminal showing our updated project in action. You can see a connection is made, temperature data is sent to the Terminal, and then the connection is closed. From the Terminal image you can see a connection was made and temperature data was fetched three times. For this example a variable power supply was used to simulate sensor 2's battery getting low; this is detected and the user is notified through the Terminal.

Next let's look at an example data log file. This file came from the same example as the above Terminal image, so we would expect sensor 2 temperature readings to stop being logged when its battery gets too low. From the below TempData.txt file you can see the date and time of the sensor reading is captured. If you scroll to the bottom you will notice we no longer see readings from sensor 2. This is due to its battery voltage becoming too low, so the Yun no longer logs temperature data from it.

That is all for part 5 of building a wireless temperature sensor network. In part 6, the final post of this project, we will look at how to access our temperature sensor network data outside of the router or local network it is connected to.
We will also look at how to access our sensor network from an iOS device, such as an iPhone or iPad. If you have any questions or comments on what was covered in this post use the Comments section below or feel free to email me at forcetronics@gmail.com.
http://forcetronic.blogspot.com/2014/02/building-wireless-temperature-sensor_16.html
need help! I need to prevent the withdraw function from occurring when the savings balance is below 25. The program will run, although when the balance gets below $25 it just states "account inactive", which is fine, but then the menu does not display again. This is what I have so far:

int main()
{
    Generic check; //object of Generic class created
    Savings save; //object of Savings class created
    CharRange input ('A', 'J'); //object of CharRange class created, will check for chars A-J
    char choice;

    save.status(true);

void withdraw(Savings &savings)
{
    int balance;
    double dollars;

    savings.status(false);
    cin >> dollars;
    cin.ignore();

    if(!savings.withdraw(dollars))
        cout << "ERROR: Non-sufficient funds.\n\n";
}

in "Savings.cpp" I have:

#include <iostream>
#include "Savings.h"
using namespace std;

bool Savings::withdraw(double amount)
{
    if (balance < amount)
        return false;
    else if (balance < 25)
        return false; //not enough in the account
    else
    {
        balance -= amount;
        withdrawals++;
        transactions++;
        return true;
    }
}

void Savings::status(bool active)
{
    if (balance > 25)
    {
        bool active = true;
        cout << "Account is active\n";
        cout << "Enter the amount of the withdrawl: ";
    }
    else
    {
        bool active = false;
        cout << "Account is inactive\n";
    }
}
https://www.daniweb.com/programming/software-development/threads/243826/classes-and-if-stmts
The IEnumerable<T> interface is a key part of LINQ to Objects and binds many of its different features together into a whole. This series of posts explains IEnumerable<T> and the role it plays in LINQ to Objects. If you hear people talking about IEnumerable<T>, and sometimes wish you understood its significance better, then you should find this text helpful.

Collections and IEnumerable<T>

Though LINQ to Objects can be used to query several C# types, it cannot be used against all your in-process data sources. Those that can be queried all support the IEnumerable<T> interface. These include the generic collections found in the System.Collections.Generic namespace. The commonly used types found in this namespace include List<T>, Stack<T>, LinkedList<T>, Queue<T>, Dictionary<TKey, TValue> and HashSet<T>. All of the collections in the System.Collections.Generic namespace support the IEnumerable<T> interface. Here, for instance, is the declaration for List<T>:

public class List<T> : IList<T>, ICollection<T>, IEnumerable<T>, IList, ICollection, IEnumerable

You will find IEnumerable<T> listed for all the other generic collections. It is no coincidence that these collections support IEnumerable<T>. Their implementation of this interface makes it possible to query them using LINQ to Objects.

LINQ to Objects and IEnumerable<T>

Consider the following simple LINQ query:

List<int> list = new List<int> { 1, 3, 2 };

// The LINQ query expression
var query = from num in list
            where num < 3
            select num;

foreach (var item in query)
{
    Console.WriteLine(item);
}

The type IEnumerable<T> plays two key roles in this code.

- The query expression has a data source called list which implements IEnumerable<T>.
- The query expression returns an instance of IEnumerable<T>.

Every LINQ to Objects query expression, including the one shown above, will begin with a line of this type:

from x in y

In each case, the data source represented by the variable y must support the IEnumerable<T> interface.
As you have already seen, the list of integers shown in this example supports that interface. The same query shown here could also be written as follows:

IEnumerable<int> query = from num in list
                         where num < 3
                         select num;

This code makes explicit the type of the variable returned by this query. As you can see, it is of type IEnumerable<int>. In practice, you will find that most LINQ to Objects queries return IEnumerable<T>, for some type T. The only exceptions are those that call a LINQ query operator that returns a simple type, such as Count():

int number = (from num in list
              where num < 3
              select num).Count();

In this case the query returns an integer specifying the number of items in the list created by this query. LINQ queries that return a simple type like this are an exception to the rule that LINQ to Objects queries operate on a class that implements IEnumerable<T> and return an instance that supports IEnumerable<T>.

Composable

The fact that LINQ to Objects queries both take and return IEnumerable<T> enables a key feature of LINQ called composability. Because LINQ queries are composable, you can usually pass the result of one LINQ query to another LINQ query. This allows you to compose a series of queries that work together to achieve a single end:

List<int> list = new List<int> { 1, 3, 2 };

var query1 = from num in list
             where num < 3
             select num;

var query2 = from num in query1
             where num > 1
             select num;

var query3 = from num1 in query1
             from num2 in query2
             select num1 + num2;

Here the results of the first query are used as the data source for the second query, and the results of the first two queries are both used as data sources for the third query. If you print out the results of query3 with a foreach loop you get the numbers 3 and 4. Though it is not important to the current subject matter, you might have fun playing with the code to understand why these values are returned.
Summary

By now it should be clear to you that IEnumerable<T> plays a central role in LINQ to Objects. A typical LINQ to Objects query expression not only takes a class that implements IEnumerable<T> as its data source, but it also returns an instance of this same type. The fact that it takes and returns the same type enables a feature called composability.

The next logical question would be to ask why this type plays such a key role in LINQ to Objects. One simple answer would be that the creators of LINQ decided that it should be so, and hence it is so. But one can still ask why they picked this particular type. What is it about IEnumerable<T> that makes it a useful data source and return type for LINQ to Objects queries? The answer to that question will be found in the second part of this series of articles.
https://blogs.msdn.microsoft.com/charlie/2008/04/28/linqfarm-understanding-ienumerablet-part-i/?replytocom=28213
Hi, I'm using Xerces DOM Parser as below:

import org.apache.xerces.parsers.DOMParser;

DOMParser parser = new DOMParser();
parser.setFeature("", true);
parser.setFeature("", true);
parser.setErrorHandler(new errHandler());
parser.parse(xmlFile);

The xml file being parsed looks like this:

<CCphysical xmlns:
<root name="Name">
    <directory name="doc" />
    <directory name="doc2" />
    <directory name="XML" />
</root>
<CCphysical/>

Extract from the schema:

<xsd:element
    <xsd:complexType>
        <xsd:sequence>
            <xsd:element
        </xsd:sequence>
        <xsd:attribute
    </xsd:complexType>
</xsd:element>

When parsing the file I don't get 3 child nodes for root but 6, because the parser also counts the whitespace. Even when setting the feature parser.setFeature(" ... ace", false); it does not work. What's wrong, and what do I have to do so that whitespace is ignored?

Thanks for your help,
Fabian

Problem with whitespaces when parsing with Xerces

Workaround

Yes, indeed the root node has as children, besides the directory nodes, the line break text nodes. The safe thing to do (and one that would work for any xml file no matter how many white spaces are between tags) would be, when iterating through the root's children, to add this condition for each node:

n.getNodeType() == Node.ELEMENT_NODE

and only take into consideration the element nodes.
https://www.oxygenxml.com/forum/post2057.html
The function

void setbuf(FILE *stream, char *buffer);

sets the buffer to be used by a stream for input/output operations. If the buffer argument is NULL, buffering is disabled for that stream. The setbuf function must be called after the file associated with the stream has been opened, and before doing any input or output operation on the stream.

Function prototype of setbuf

void setbuf(FILE *stream, char *buffer);

- stream : A pointer to a FILE object which identifies a stream.
- buffer : A pointer to a memory block to be used as the buffer for the given stream. The size of this buffer must be at least BUFSIZ bytes.

Return value of setbuf

None.

C program to show the use of setbuf function

The following program shows the use of the setbuf function to set a buffer for the stdout stream.

#include <stdio.h>

int main(void)
{
    char buffer[BUFSIZ];
    /* Setting buffer for stdout stream */
    setbuf(stdout, buffer);
    puts("setbuf C Standard library function");
    /* Flushing the buffer; now the string gets printed on screen */
    fflush(stdout);
    getchar();
    return 0;
}

Program Output

setbuf C Standard library function
http://www.techcrashcourse.com/2015/08/setbuf-stdio-c-library-function.html
To internationalize your application, you have to make sure of the following steps:

1. That you mark your text strings using the _() function.
2. That you create message catalogs in the right directory structure for all your text strings and supported languages.
3. That any date or number strings are formatted using the turbogears.i18n formatting functions.

1. Mark your text strings with _()

All text strings you want translated should be wrapped in the _() function. This function is built in to TurboGears, so you don't need to import it specifically into your module if you have already called "import turbogears". For example:

import turbogears
from turbogears import controllers

class Controller(controllers.Root):

    @turbogears.expose(html="myapp.templates.welcome.kid")
    def index(self):
        return dict(message=_("Welcome"))

If you want to explicitly pass in the locale in the _() call, you can do this:

print _("Welcome", "de")

Handling text strings in Kid templates is somewhat easier if you set the CherryPy configuration setting "i18n.runTemplateFilter".

2. Create your message catalogs

Use the Python scripts in tools/i18n, pygettext.py and msgfmt.py, to create your .po and .mo files. For details on using these tools and creating your message catalogs, see. Basically you should have a directory structure like this:

myapp/
    locales/
        messages.pot
        en/
            LC_MESSAGES/
                messages.po
                messages.mo
        de/
            LC_MESSAGES/
                messages.po
                messages.mo

Eventually we should have tools in tg-admin to handle text extraction and compilation. By default, _() looks for the subdirectory "locales" and the domain "messages" in your project directory. You can override these settings in your config file by setting "i18n.localeDir" and "i18n.domain" respectively. If a language file (.mo) is not found, the _() call will return the plain text string.

3. Localize your dates and numbers

The i18n package has a number of useful functions for handling date, location and number formats.
Data for these formats are located in LDML (Locale Data Markup Language) files under turbogears/i18n/data; if you wish to use your own files, set the config setting i18n.dataDir. When searching for a format, the full locale name (e.g. en_CA for Canadian English) is used; if no matching format is found then the fallback locale (e.g. en for English) is used.

These functions are found in i18n/formats.py. They include:

format_date: returns a localized date string
get_countries: returns a list of tuples, with the international country code and localized name (e.g. ("AU", "Australia"))
format_currency: returns a formatted currency value (e.g. in German 56.89 > 56,89)

Finding the user locale

The default locale function, _get_locale, can be overridden by your own locale function using the config setting "i18n.getLocale". The default function finds the locale setting in the following order:

1. By looking for a session value. By default this is "locale", but it can be changed in the config setting "i18n.sessionKey".
2. By looking in the HTTP Accept-Language header for the most preferred language.
3. By using the default locale (config setting "i18n.defaultLocale", by default "en").

Encoding

The _() and all formatting functions return unicode strings.
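The locale lookup described above (full locale first, then the language part, then the configured default) can be sketched in plain Python. This is an illustrative reimplementation, not TurboGears code, and the table of formats is invented sample data standing in for the LDML files.

```python
# Illustrative sketch of the locale fallback described above:
# try the full locale (en_CA), then its language part (en),
# then the configured default. Not actual TurboGears code.
FORMATS = {  # invented sample data standing in for the LDML files
    "en":    {"decimal": ".", "group": ","},
    "de":    {"decimal": ",", "group": "."},
    "en_CA": {"decimal": ".", "group": ","},
}

def lookup_format(locale, default="en"):
    """Return (resolved_locale, format_table) using the fallback chain."""
    for candidate in (locale, locale.split("_")[0], default):
        if candidate in FORMATS:
            return candidate, FORMATS[candidate]
    raise KeyError(locale)

print(lookup_format("en_CA")[0])  # -> en_CA  (exact match)
print(lookup_format("de_AT")[0])  # -> de     (language-part fallback)
print(lookup_format("fr_FR")[0])  # -> en     (default fallback)
```

The same three-step chain is what lets a request for de_AT pick up the German number format even though no de_AT file exists.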
http://trac.turbogears.org/wiki/Internationalization?version=7
FileStore and file uploads

When enabled, CKAN's FileStore allows users to upload data files to CKAN resources, and to upload logo images for groups and organizations. Users will see an upload button when creating or updating a resource, group or organization.

New in version 2.2: Uploading logo images for groups and organizations was added in CKAN 2.2.

Changed in version 2.2: Previous versions of CKAN used to allow uploads to remote cloud hosting, but we have simplified this to only allow local file uploads (see Migration from 2.1 to 2.2 for details on how to migrate). This is to give CKAN more control over the files and make access control possible.

See also: Resource files linked to from CKAN or uploaded to CKAN's FileStore can also be pushed into CKAN's DataStore, which then enables data previews and a data API for the resources.

Setup file uploads

To set up CKAN's FileStore with local file storage:

1. Create the directory where CKAN will store uploaded files:

   sudo mkdir -p /var/lib/ckan/default

2. Add the following line to your CKAN config file, after the [app:main] line:

   ckan.storage_path = /var/lib/ckan/default

3. Set the permissions of your ckan.storage_path directory. For example, if you're running CKAN with Apache, then Apache's user (www-data on Ubuntu) must have read, write and execute permissions for the ckan.storage_path:

   sudo chown www-data /var/lib/ckan/default
   sudo chmod u+rwx /var/lib/ckan/default

4. Restart your web server, for example to restart Apache:

   sudo service apache2 reload

FileStore API

Changed in version 2.2: The FileStore API was redesigned for CKAN 2.2. The previous API has been deprecated.

Files can be uploaded to the FileStore using the resource_create() and resource_update() action API functions. You can post multipart/form-data to the API, and the key, value pairs will be treated as if they are a JSON object. The extra key upload is used to actually post the binary data.
For example, to create a new CKAN resource and upload a file to it using curl:

curl -H'Authorization: your-api-key' '' --form upload=@file_to_upload.csv --form package_id=my_dataset

(Curl automatically sends a multipart/form-data header when you use the --form option.)

To create a new resource and upload a file to it using the Python library requests:

import requests

requests.post('',
              data={"package_id": "my_dataset"},
              headers={"X-CKAN-API-Key": "21a47217-6d7b-49c5-88f9-72ebd5a4d4bb"},
              files=[('upload', file('/path/to/file/to/upload.csv'))])

(Requests automatically sends a multipart/form-data header when you use the files= parameter.)

To overwrite an uploaded file with a new version of the file, post to the resource_update() action and use the upload field:

curl -H'Authorization: your-api-key' '' --form upload=@file_to_upload.csv --form id=resourceid

To replace an uploaded file with a link to a file at a remote URL, use the clear_upload field:

curl -H'Authorization: your-api-key' '' --form url= --form clear_upload=true --form id=resourceid
Custom Internet media types (MIME types)¶ New in version 2.2. CKAN uses the default Python library mimetypes to detect the media type of an uploaded file. If some particular format is not included in the ones guessed by the mimetypes library, a default application/octet-stream value will be returned. Users can still register a more appropiate media type by using the mimetypes library. A good way to do so is to use the IConfigurer interface so the custom types get registered on startup: import mimetypes import ckan.plugins as p class MyPlugin(p.SingletonPlugin): p.implements(p.IConfigurer) def update_config(self, config): mimetypes.add_type('application/json', '.geojson') # ...
https://docs.ckan.org/en/ckan-2.4.9/maintaining/filestore.html
What!? .NET? PHP? In the same sentence? Yes! My requirement has been to build a WebDAV server for .NET, and the most successful option has been to combine .NET and PHP. No, not interpreted, slow PHP, but PHP running at compiled code speed and able to use .NET assemblies.

Click on this link to access both the "how-to" presentations referenced in the article as well as the binary and source code files. If you download the binaries, let me know if you have any difficulties installing a server instance. If you create an interface for another repository, it will be great to hear about your experiences.

This article is about implementing a WebDAV server. If PHP and/or WebDAV is not on your radar, then this article will not be for you. Equally, if all you want to do is share files using WebDAV, then use IIS because this facility is built in. The purpose of this article is to show how a flexible WebDAV server can be built using .NET, one that can be extended to return documents from any type of repository, or can be extended to introduce additional document properties or support more granular access control.

The binary files available from the link associated with this article are the ones you need to install, set up, and run the WebDAV server discussed in this article. To install the server, unzip the binary files and then create a virtual directory in IIS using a wildcard map. For detailed information, see the installation note below. The possible omission is that if the files are loaded onto a PC running vanilla Windows 2003 or Windows 2000, then one of the VS2005 C++ run-time files may be needed also. This issue is reviewed in this Phalanger forum note. The C++ run-time files installer can be found here. These files will almost certainly be installed if you have installed the .NET 2.0 SDK.

The source code files include only those changes and additions you need to apply to the Phalanger source code, and these are reviewed in the source code notes below.
Go to the Phalanger project on CodePlex for the full Phalanger source. The PHP source code is included with the binaries as these files really form part of the WebDAV server.

Note: Do not use the binaries associated with this article with Phalanger. You may have already installed, or may be planning to install, the Phalanger binaries and Visual Studio extensions. In case you do, and to prevent namespace clashes with the Phalanger assemblies, the assemblies in the binary files have been renamed from their original Phalanger counterparts. The extensions have also been modified to work with the renamed files, so they will not work with Phalanger binaries. The potential to clash occurs because Phalanger installs its assemblies into the GAC, assemblies which .NET may use preferentially. This is a potential problem because to implement the WebDAV server as a handler (see the Architecture notes), modifications to one of the source files of the Phalanger core must be made to make two methods public.

If you are not interested in understanding how the server works and just want to get on with it, download the binaries and follow the instructions in the installation section below. Let me know how you get on.
Just as I was preparing to raid the piggy-bank and buy a licence to the commercial option, I came across two projects. One is a PHP implementation of a WebDAV server. Its part of the PEAR project. It's a reasonably complete implementation of a DAV 1 and 2 server, and comes with a sample implementation that shows how to allow controlled access to files on a server. However, it's PHP, and using PHP doesn't easily meet the requirement to have access to .NET assemblies, and this is where the second project comes in. Phalanger is a PHP compiler for .NET implemented in .NET 2.0. That is, it will take PHP scripts and compile them so they can be used from ASP.NET or as a standalone application. I'm not going to go into details about Phalanger except where it becomes necessary to explain the DAV server, because there's lots of information on the Phalanger website, on CodePlex, and here on CodeProject. Microsoft has hired two of the founders of the Phalanger project and, given the news that Microsoft has created the Dynamic Language Run-time (open sourced) to facilitate dynamic languages, I'm hopeful that in due course we'll see PherrousPHP alongside IronPython, IronRuby, JScript, and VBScript. If you are reading this, you probably have a good idea what WebDAV is, so there's no point going into exhaustive detail. You may already be using it in the form of Web Folder in Windows. DAV is short for Distributed Authoring and Versioning. It is a specification of the Internet Engineering Task Force (IETF) that extends HTTP so that a client and a server can communicate about the documents to be managed as well as retrieve and store those documents. The specification is RFC 2518, and can be found here:. Using the protocol, clients can also request properties about documents and ask for or release locks on a document. The server may be responsible for returning files managed by the server's file system but, equally, it may be returning documents recorded in a database. 
HTTP defines verbs like GET and PUT. In the DAV specification, these verbs are retained so, for example, a PUT is a valid verb in DAV as it is in HTTP. The semantics are pretty much the same though there are important and fairly obvious differences. In DAV, a resource being uploaded (a document is referred to as a resource) may be locked by another client, so the server may need to prevent the upload taking place and return a specified HTTP status code that represents this condition. DAV then specifies new status codes and verbs, such as OPTIONS (allows a client to discover which WebDAV features are supported), PROPFIND (find out about available documents and their properties), LOCK, UNLOCK, MKCOL (directories are referred to as collections), DELETE, COPY, MOVE, and so on. Of course, it's not as simple as just creating a request and specifying a verb. Most verbs will expect additional information as headers or as XML in the request body. Servers respond with lots of information, maybe as headers, but probably as XML in the response body. All of the potential interactions are described in the 94 page specification. If all you want to be able to do is allow users access to a set of files, then HTTP will meet your needs. If you configure your Web Server to allow directory browsing, then users are able to list, download, and upload documents. However, if you want to: then DAV is one option to consider. A question you might ask is: What sort of repositories might you present via WebDAV? Microsoft Exchange is a WebDAV server, and a typical repository is a user's Inbox. Microsoft SharePoint is also a WebDAV server. Someone has created a WebDAV server (in Java) to interface with the Amazon S3 facility. Sub-version, the popular source code control service, supports WebDAV. There's a list of WebDAV service applications on, but any repository of things that can represented as a set of documents and folders is capable of being presented by a WebDAV service. 
There are already many WebDAV clients, including Windows Explorer and all of the Microsoft Office programs. That is, Windows Explorer, Word, and the other Office programs will treat a WebDAV service just like any other networked repository of files. Windows Explorer allows you to assign a WebDAV service to a drive letter or keep it as a network place. This means you can implement a WebDAV server knowing that users will be able to access the service with software that is in use every day and is already supported by IT departments.

A quick aside here about WebDAV support in Windows, because if I don't point it out, someone will: Windows clients (Explorer or Office) are not 100% compliant - no surprise there. All the Microsoft clients, including Windows Explorer and the Office suite, also (and preferentially) support FrontPage extensions. Because these Microsoft clients will default to FrontPage, they have to be told that the server is really a WebDAV server, and this is where the spec difference occurs. WebDAV services that are to be consumed by Windows clients have to return an additional Windows-specific header in response to any request using the OPTIONS verb. This detail is covered in the server implemented in the downloadable files associated with this article.

The practical reason is that the WebDAV server I found is written in PHP. Beyond that, it's a language I know well. PHP is a well regarded scripting language used to create pages by millions of websites including mine. Now that Ruby and Python are being supported for .NET, I could look for a server written in one of these languages, but I don't know either sufficiently well to tackle a project to generate a WebDAV service. The WebDAV RFC (2518) runs to 94 pages of exacting specification, so anyone has to be motivated by the prospect of using an existing project that provides a head start. By the way, if you are a Python coder, there is a WebDAV server written in Python which can be found on SourceForge.
With the ability to combine PHP and .NET, I think I get the best of two worlds. I get the option to work with a compiled application that is going to have the performance characteristics of a native .NET application. Plus, if issues arise on a client site, I can revert to the scripts and so be able to debug the problem and even change the code without the need to have Visual Studio installed, and then recompile the scripts to restore performance.

A WebDAV server has to respond to the possible combinations of verbs, request headers, and XML defined to be legal in the specification with response codes, headers, and XML also defined by the specification. What the PHP code does is implement a base class that hides all the management of verbs, headers, response codes, and XML, and allows derived classes to implement functions that return arrays of information and a success/fail response. The implementation of the derived class does not need to know how the specification demands that the request or response XML be formatted, or what response codes and headers the specification requires or allows.

The image below shows the base class as it looks from the derived class. The methods in capitals are abstract and can be overridden. They correspond to the verbs specified in the WebDAV specification, and are (optionally) implemented in the derived class to provide support for that verb. Using the PHP base class, the implementor can focus on implementing the logic required to support the chosen repository.

The most important and complex verb to support is PROPFIND. PROPFIND is the verb a client will use to query the server for information about the resources and collections available. The request is a mixture of headers and, optionally, XML, that together act somewhat like a SQL Where clause. It looks straightforward but, as ever, the devil is in the details.
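To make the PROPFIND exchange concrete, here is a small standalone illustration in Python (chosen only because it runs on its own; the server in this article is C#/PHP). It shows a minimal PROPFIND request body and parses a hand-written 207 Multi-Status response with the standard xml.etree module. The sample XML is mine, simplified from the shapes defined in RFC 2518; it is not output from the server described here.

```python
import xml.etree.ElementTree as ET

DAV = "DAV:"  # the WebDAV XML namespace

# A minimal PROPFIND request body asking for two properties.
propfind_body = (
    '<?xml version="1.0" encoding="utf-8"?>'
    '<D:propfind xmlns:D="DAV:">'
    '<D:prop><D:displayname/><D:getcontentlength/></D:prop>'
    '</D:propfind>'
)

# A simplified 207 Multi-Status response a server might return for a
# Depth: 1 PROPFIND on /docs/ (invented sample, after RFC 2518).
multistatus = """<?xml version="1.0"?>
<D:multistatus xmlns:D="DAV:">
  <D:response>
    <D:href>/docs/</D:href>
    <D:propstat>
      <D:prop><D:displayname>docs</D:displayname></D:prop>
      <D:status>HTTP/1.1 200 OK</D:status>
    </D:propstat>
  </D:response>
  <D:response>
    <D:href>/docs/readme.txt</D:href>
    <D:propstat>
      <D:prop><D:displayname>readme.txt</D:displayname>
        <D:getcontentlength>1024</D:getcontentlength></D:prop>
      <D:status>HTTP/1.1 200 OK</D:status>
    </D:propstat>
  </D:response>
</D:multistatus>
"""

# Parse the response: one <D:response> per resource, each with an href.
root = ET.fromstring(multistatus)
hrefs = [r.findtext(f"{{{DAV}}}href") for r in root.findall(f"{{{DAV}}}response")]
print(hrefs)  # -> ['/docs/', '/docs/readme.txt']
```

The per-resource response/propstat nesting is exactly the structure the PHP base class assembles on the server side from the arrays the derived class returns.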
The PHP base class takes care of the details, and requires only that the derived class return a structured array containing details when requested.

WebDAV servers have to handle all request verbs and all query paths. This is important because the path requested by a user might be a directory or a source code file. This is unfamiliar territory when working with IIS. IIS selects an application (ASP.NET, PHP, Perl, etc.) based on the file extension mappings associated with a virtual directory. Even when IIS works out that a request should be handled by ASP.NET, for example, this is then further refined by the .NET run-time based on the handlers associated with a file extension map defined in machine.config and web.config.

To ensure that the WebDAV server will receive all requests, it is implemented as a handler; that is, the "main" class implements IHttpHandler. This handler is specified in web.config, and it is defined to respond to all verbs and all paths. It's also necessary to make sure that the web site or virtual directory hosting the WebDAV server uses a wildcard mapping (see the installation section for more information).

The WebDAV handler is responsible for setting up details of the AppDomain and for invoking the Phalanger RequestContext class, which is ultimately responsible for parsing the PHP scripts, pre-compiling, and then executing the generated code to produce a response for the client.
The ASP.NET IHttpHandler interface specifies one property, IsReusable(), and one method, ProcessRequest(). This method is called by the ASP.NET run-time, and is passed a HttpContext instance for the current request. Here's the implementation of this method used in the binaries:

public void ProcessRequest(HttpContext context)
{
    if (context == null)
        throw new ArgumentNullException("context");

    // disables ASP.NET timeout if possible:
    try
    {
        context.Server.ScriptTimeout = Int32.MaxValue;
    }
    catch (HttpException)
    {
    }

    // ensure that Session ID is created
    RequestContext.EnsureSessionId();

    // default culture:
    Thread.CurrentThread.CurrentCulture = CultureInfo.InvariantCulture;

    RequestContext request_context =
        RequestContext.Initialize(ApplicationContext.Default, context);

    PHP.Core.Debug.WriteLine("REQUEST", "Processing request");

    if (IsPHP(context.Request.Path))
    {
        Process(request_context, context);
    }
    else if (IsWebDAV(context.Request.Path))
    {
        Process(request_context, context, "Scripts/webdav.php");
    }
    else
    {
        context.Response.WriteFile(context.Request.PhysicalPath);
    }

    context.Response.End();

    if (request_context != null)
        request_context.Dispose();
}

The method as implemented is pretty straightforward: it disables the ASP.NET timeout where possible, ensures a session ID exists via RequestContext.EnsureSessionId(), sets a default culture, initializes a Phalanger RequestContext for the request, and then dispatches based on the requested path. My implementation assumes that any PHP file (.php) is to be executed as a PHP script rather than regarded as a resource to be accessed or updated, because this helps me with debugging. But there's no inherent reason why these cannot be regarded as resources to be transferred to the client. If the extension of the requested resource is not .php, then the handler is hard-coded to execute the script "Scripts/webdav.php", which implements the WebDAV logic. Phalanger is responsible for working out if the requested script has been pre-compiled and for using the correct assembly if it is, or for compiling the script dynamically.
void Process(RequestContext request_context, HttpContext context, string scriptFilename)
{
    PhpSourceFile requestFile = new PhpSourceFile(
        new FullPath(HttpRuntime.AppDomainAppPath),
        new FullPath(HttpRuntime.AppDomainAppPath + scriptFilename)
    );

    if (request_context.ScriptContext.Config.Session.AutoStart)
        request_context.StartSession();

    Type script = null;
    try
    {
        script = request_context.GetCompiledScript(requestFile);
        if (script != null)
        {
            request_context.IncludeScript(context.Request.PhysicalPath, script);
        }
    }
    catch (PHP.Core.ScriptDiedException)
    {
        Console.WriteLine("Died");
    }
    catch (Exception ex)
    {
        ReportStatus(ex.Message, 0);
        System.Console.WriteLine(ex.Message);
        // A user code or compiler have reported a fatal error.
        // We don't want to propagate the exception to web server.
    }
}

This private method is called from ProcessRequest() to run the WebDAV code. The requested file is wrapped in a PhpSourceFile instance, then GetCompiledScript() is called to retrieve a pre-compiled version of the script or to compile the requested (and dependent) script. Finally, the script is executed using the call to IncludeScript(). Woo hoo, that's it.

You may have noticed in the preceding script that the call to request_context.StartSession() is conditional upon the value of request_context.ScriptContext.Config.Session.AutoStart. This value is one of many values that Phalanger retrieves from web.config. Ordinarily, PHP uses a file called PHP.ini to store its properties. In Phalanger, this information is stored in a custom section of web.config. There is a comprehensive example of the properties that can be set up on the Phalanger website. In summary, Phalanger uses a custom configuration section called "phpNet".
The WebDAV binaries associated with this article include a web.config file that serves the needs of the WebDAV server, and is a sample to compare with the more comprehensive example on the CodePlex website. Among the important features you can enable or disable is run-time and compile-time error capturing; this is information that can be captured in addition to information that is written to the ASP.NET trace collection. The web.config included with the server enables the reporting of all run-time and compile-time errors, and directs them to a file called output.html.

To create a working server, it has been necessary to make modifications to both the Phalanger project and the PHP WebDAV scripts. The most important change to the Phalanger code is to the file RequestContext.cs of the PhpNetCore project. The change to this file is to make the methods EnsureSessionId(), GetCompiledScript(), and MultiScriptAssembly() public so that they can be called from an external assembly. Phalanger implements its own handler in the PhpNetCore project, but this assumes it will handle only requests for files with a .php extension, which, for a WebDAV server, is not sufficient. It is for this reason that a WebDAV handler has been created, and it is this handler that needs access to the modified methods of the PhpNetCore project.

HttpHeaders.cs of the PhpNetCore project has also been changed because I believe a bug has been introduced that prevents the header being written to the response stream correctly. These changed files are included in the source code file associated with this article, and you should be able to apply them to the Phalanger source code. The changes included have been made to build 22713, so you may need to be careful applying them to later Phalanger builds.
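The appSettings keys consumed by the PHP scripts appear later in this article (Realm, SITEPATH, Users, and so on), so a sketch of the shape of such a web.config might look like the following. This is an assumption-laden outline, not the shipped file; every value shown is a placeholder:

```xml
<!-- Hypothetical outline only; the web.config shipped with the binaries
     is the authoritative version. -->
<configuration>
  <!-- Phalanger's custom configuration section. See the Phalanger
       documentation for the configSections registration it requires. -->
  <phpNet>
    <!-- compiler, error-reporting, and extension settings go here -->
  </phpNet>
  <appSettings>
    <!-- Keys read by WebDAV.php; all values here are placeholders. -->
    <add key="Realm" value="WebDAV" />
    <add key="SITEPATH" value="C:\webdav-files\" />
    <add key="UseAuthentication" value="true" />
    <add key="Users" value="alice:secret1;bob:secret2" />
    <add key="DBHOST" value="localhost" />
    <add key="DB_WEBDAV" value="webdav" />
    <add key="DBUSER" value="webdav" />
    <add key="DBPWD" value="secret" />
    <add key="ConnectionTimeout" value="30" />
  </appSettings>
</configuration>
```

The "Users" format of semicolon-separated name:password pairs matches the explode(";") / explode(":") parsing shown in the authentication code later in the article.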
The PHP portion of the WebDAV server is also included in the associated file, and comprises several script files: the entry script creates an instance of the derived HTTP_WebDAV_Server_Filesystem class (the FileSystem implementation) and calls its ServeRequest() method, while the remaining files provide miscellaneous support functions. There are two additional files called Server.php and FileSystem.php. The files FileSystemAccess.php and FileSystemMySQL.php contain the contents of both Server.php and FileSystem.php. It turns out that the line number information generated by Phalanger for debugging within Visual Studio is not correct for the case of a derived class that is implemented in a different file to that of the base class. This makes it impossible to use Visual Studio to step through the PHP code in the file implementing the derived class. This problem does not arise when both the base and derived classes are in the same file. So the main implementations are in FileSystemAccess.php and FileSystemMySQL.php because they combine both the base and derived classes, while Server.php implements the base class alone, and FileSystem.php the derived class (MySQL) alone.

The PHP files have been modified quite a bit to use calls to ASP.NET classes when these make more sense, and they've been updated to add statements to write trace information which can be reviewed by looking at trace.axd. Also, the PEAR PHP implementation assumes WebDAV will be running from the root of a website, while I want the WebDAV server to be attached to a website root or to a specific virtual directory.
There's a significant amount of PHP code, so it's not practical to cover it all in this article, so I'll just review WebDAV.php in parts because it illustrates the integration of PHP and .NET:

```php
import namespace System;
import namespace System:::Configuration;

$appSettings = ConfigurationManager::$AppSettings;

$realm = $appSettings->Get("Realm");
$DBHOST = $appSettings->Get("DBHOST");
$DB_WEBDAV = $appSettings->Get("DB_WEBDAV");
$DBUSER = $appSettings->Get("DBUSER");
$DBPWD = $appSettings->Get("DBPWD");
$SITEPATH = $appSettings->Get("SITEPATH");
$USERPASSWORDS = $appSettings->Get("Users");
$UseAuthentication = $appSettings->Get("UseAuthentication");
$ConnectionTimeout = $appSettings->Get("ConnectionTimeout");

System:::Web:::HttpContext::$Current->Trace->Write("REALM", $realm);
```

The WebDAV.php file begins by importing .NET namespaces in a similar way to C# or VB. In C# and VB, the dot (.) is used to signify "member access", whether of a class or a namespace. In PHP, the dot is reserved as the concatenation operator. Instead, the Phalanger designers chose to use three colons (:::) to signify namespace access. The designers have chosen to re-use other operators, including $ (signifies a field such as a variable or property), two colons (::) (static class or member access), and -> (dynamically allocated member access). In principle, this is straightforward, but it does mean that you have to know how a class, method, or member is declared so that you can use the correct syntax. In C#, you use the dot whether you are accessing a dynamic or static class member or property. Take the line:

```php
$appSettings = ConfigurationManager::$AppSettings;
```

This returns access to the appSettings collection, but because AppSettings is defined as a property, it must be accessed using the $ prefix. Also, because the property is declared statically, it must be accessed using the :: operator rather than the -> operator.
The returned $appSettings is a regular Hashtable, so its members can be accessed using the -> operator. You can see this pattern repeated in the line:

```php
System:::Web:::HttpContext::$Current->Trace->Write("REALM", $realm);
```

This line accesses the Trace class of the current context to write information to the trace log.

So far, the only code used has been .NET. Below, .NET and PHP are mixed to authenticate the current user based on user credentials specified in web.config:

```php
if (isset($UseAuthentication) && strtolower($UseAuthentication) != "false")
{
    $users = array();
    if (strlen($USERPASSWORDS) > 0)
    {
        $userarray = explode(";", $USERPASSWORDS);
        foreach ($userarray as $user)
        {
            if (strlen($user) == 0)
                continue;
            list($username, $password) = explode(":", $user);
            $md5 = md5($username.":".$realm.":".$password);
            $users[$username] = $md5;
            // System:::Web:::HttpContext::$Current->Trace->Warn("USER",
            //     "$username:$realm:$password:$md5:" . $users[$username]);
        }
        if (count($users) > 0)
        {
            include_once("auth.php");
            $HTTPDigest =& new HTTPDigest($realm);
            if (!$authed = $HTTPDigest->authenticate($users))
            {
                System:::Web:::HttpContext::$Current->Trace->Warn(
                    "AUTHENTICATION", "Not logged in");
                $HTTPDigest->send();
            }
        }
    }
    if (count($users) == 0)
    {
        System:::Web:::HttpContext::$Current->Trace->Write("AUTHENTICATION",
            "No users have been defined");
        header('HTTP/1.0 401 No users');
        die('401 No users');
    }
    System:::Web:::HttpContext::$Current->Trace->Write("AUTHENTICATION",
        "Authenticated");
}
```

Finally, the WebDAV FileSystem class is instantiated, and the ServeRequest() method is called passing in the location of the files the WebDAV server will manage.
```php
require_once "FilesystemNew.php";

$server = new HTTP_WebDAV_Server_Filesystem();
$server->db_host = $DBHOST;
$server->db_name = $DB_WEBDAV;
$server->db_user = $DBUSER;
$server->db_passwd = $DBPWD;
$server->ServeRequest($SITEPATH);

System:::Web:::HttpContext::$Current->Trace->Write("WEBDAV", "Completed");
```

Finding and fixing issues in a client-server application is always a challenge. To help with this process, Phalanger generates a log file. The name of the file created and the specific information written to the file is controlled by settings in web.config. I've also taken advantage of the ability to extend PHP with calls to .NET classes to add information into the ASP.NET trace log as the PHP code is executed. You can access this additional information by accessing the ASP.NET tracing page trace.axd. This assumes you have enabled tracing in web.config. Of course, you can also edit the PHP scripts and add your own additional trace information.

A complete WebDAV query can involve several requests and responses, and sometimes it's good to be able to have the detail of each request and response. In my experience, Fiddler2 by Eric Lawrence of Microsoft is a great tool for this purpose. It will record the details of all requests and their responses. It also allows you to issue a specific request, and control the verb, headers, and body sent. Using this feature, it is relatively easy to focus on and debug responses to a specific verb. Fiddler is simple to install. When it is started, it injects itself as a local proxy so that it can record and relay all HTTP transactions. There are a couple of things to be aware of. Fiddler does not proxy the localhost (127.0.0.1). If you are working with a test server instance on your own PC, you will need to refer to your web server using your machine name, not localhost. Secondly, when Fiddler is started, it works all the time, and this is normally OK.
However, if you download a file while it's running, as a proxy, it will be caching the file in the background, and you will only see the Run/Open/Save/Cancel button when it has finished downloading. Because you will normally expect the prompt to appear before the file is downloaded, you may think that the file transfer has gone wrong, especially if the background download takes a while to complete.

Finally, don't forget that changes you make may manifest themselves as .NET run-time errors. If you are working on the server, you will be able to see these errors in the browser, but if you are working on a different PC, then you will need to ensure that the element `<customErrors mode="Off" />` is included in the web.config. Also, you should temporarily set the trace element's localOnly attribute to false so that you can see trace output on another PC.

Unzip the files in the webdavbinary.zip file to a folder. In the unzipped folder structure, Dynamic, Extensions, TypeDefs, and Wrappers are folders required by Phalanger, and which are referenced by the phpNet custom section of web.config. For more information about the purpose of these folders, see the Phalanger documentation.

A WebDAV server that supports the specification must support lots of features, including the ability to hold details about the "lock" status of any given resource. In this sample server, the lock information is held in a JET (Access/.mdb) database. Make sure that the account used to run IIS has rights to update the .mdb file. If IIS is unable to update this file, you will see an ADO.NET error message indicating an updatable query must be used. Also, if you plan to use the binaries associated with this article on Windows 2000 Pro or Server, you will need to make sure version 2.6 or later of the Microsoft Data Access Components (MDAC) is installed.
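For reference, the two settings just mentioned are standard ASP.NET configuration and would sit in the system.web section roughly like this (a sketch, not the article's actual web.config):

```xml
<system.web>
  <!-- Show full error details to remote machines (development only). -->
  <customErrors mode="Off" />
  <!-- Enable tracing and allow trace.axd to be viewed from other PCs. -->
  <trace enabled="true" localOnly="false" pageOutput="false" />
</system.web>
```

Remember to restore `customErrors` and set `localOnly` back to true before exposing the server to anyone else, since both settings leak diagnostic detail.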
There are a couple of reasons for using JET, the primary one being that it's easy to install because the .mdb file is copied with the binaries and there's no database or script to run or database permissions to set up. The second reason is that the use of JET serves to illustrate how the .NET Framework, in this case classes from the System.Data assembly, can be used from a PHP script. Here's an example of establishing a connection:

```php
try
{
    $con = new System:::Data:::OleDb:::OleDbConnection();
    $con->ConnectionString =
        "Provider=Microsoft.Jet.OLEDB.4.0;User ID=Admin;Data Source=.\\WebDAV.mdb;";
    $con->Open();
}
catch (System:::Exception $e)
{
    parent::TraceCategory("FILESYSTEM",
        "Locks database connection error: " . $e->Message);
    return;
}
```

Most of this is what you'd expect. A point to note is that to trap errors, the .NET Exception type must be caught. You can't rely on the PHP native Exception class to catch .NET exceptions because they have a different ancestry.

The code below shows the established connection being used to populate a reader and then iterate over the returned rows. The main difference is that Phalanger doesn't yet support indexers. The consequence of this omission is that the values of the reader must be accessed using the more primitive IDataReader methods GetOrdinal() and GetString().

```php
try
{
    parent::TraceCategory("FILEINFO", $query);
    $cmd = new System:::Data:::OleDb:::OleDbCommand($query, $con);
    $reader = $cmd->ExecuteReader();
    $hasRows = $reader->HasRows;
}
catch (System:::Exception $e)
{
    parent::TraceCategory("FILEINFO", "QUERY ERROR: " . $e->Message);
}

try
{
    if ($hasRows)
    {
        while ($reader->Read())
        {
            $col_ns = $reader->GetOrdinal("ns");
            $col_name = $reader->GetOrdinal("name");
            $col_value = $reader->GetOrdinal("value");
            $info["props"][] = $this->mkprop($reader->GetString($col_ns),
                $reader->GetString($col_name),
                $reader->GetString($col_value));
        }
    }
}
catch (System:::Exception $e)
{
    parent::TraceCategory("FILEINFO", "QUERY ERROR: " . $e->Message);
}
catch (Exception $e)
{
    parent::TraceCategory("FILEINFO", "FETCH ERROR: " . $e->message);
}

if ($reader != null)
    $reader->Close();
```

If you really prefer to use a "database", then the file called FileSystemMySQL.php implements the same WebDAV, but uses MySQL as a database. The MySQL implementation uses the native PHP extension to interact with MySQL, so it serves to illustrate how Phalanger has retained support for existing PHP extensions even though they are written in unmanaged code. The binary files associated with this article also contain a .sql script to create the required MySQL database and tables. When you have created the database, you will need to edit the web.config file to edit the appSetting keys that describe the database location, database name, username, and password.

To create a WebDAV server in IIS, follow these instructions. If you like to see, rather than just read, how to create a WebDAV application, there are Flash presentations here showing how to set up a server using IIS 5.x and IIS 6.0. Edit the web.config to set the SITEPATH appSetting to the folder whose files will be served, and set the UseAuthentication appSetting as required.

To test the WebDAV server, start by pointing your browser at the website or virtual directory you have created. If the application is successful, you will see a listing of any files that you have stored in the SITEPATH location. The presentation of the output is controlled by the GetDir() method of the derived FileSystem class, so you can adjust it to suit your needs. If you don't see a listing, there are pointers in the Debugging section to help find out what's going wrong.
In summary, this is to review the log file which will list any compilation errors, to look at the trace information, and, if all else fails, to use Fiddler2 to look at the transactions between the client and server.

When you see a file/directory listing shown in your browser, you are in a position to use a WebDAV client. The easiest thing to do is to open an Office product like Word, then use the File Open menu and enter the website or website/virtual directory in the box labeled File name. The Office product will attempt to use the WebDAV service just like any other file service and present a list of files and folders for you to use. Again, if you don't see the files and folders you expect, or if the Office product reports an error, then you can run through the steps identified in the Debugging section. Although access to the service is from Excel, because it's still going through your IIS website or virtual directory, you can use a browser to display the trace pages. The final thing to do is to create a WebDAV Network Place. This gives you access to the WebDAV service from Windows Explorer.

One of the potential benefits of Phalanger is that it permits the pre-compilation of PHP scripts. The PHP compiler (phpc.exe) is included in the binary files associated with this article (.\WebDAV\Server\phpc.exe), along with a command file to run the compiler (.\WebDAV\Scripts\build.cmd). The compiler, config file, and script are tailored to compile the scripts in .\WebDAV\Scripts and copy the generated assembly into the .\WebDAV\Server\bin folder. The WebDAV server works without pre-compilation because it will use the copy of the scripts in .\WebDAV\Server\Scripts by default. Phalanger will use the compiled script assembly preferentially, so there's no need to remove the .\WebDAV\Server\Scripts sub-folder, though it will make it more convincing that the generated assembly is being used if this folder is at least renamed.
When I've looked closely at most WebDAV implementations, whether Java, C#, or Python, there are aspects of the WebDAV specification which have not been implemented. The PEAR WebDAV server used as the basis of this article is no exception. The authors of the original PHP script included comments in the code to highlight potential deficiencies. Their deficiencies are not material to me, but you can review the code to see if these are important to you. These omissions include handling of the is_readable, is_writable, ishidden, isreadonly, isstructureddocument, iscollection, and checklock properties.

WebDAV is a specification that describes a protocol that is intended to facilitate standardized communication between a client and a server. However, one of the issues is whether both implement the protocol in the same way. In this server, the potential for misunderstandings between the client and server is enhanced because the server also implements Digest Authentication. It's known that IIS, Apache, Firefox, and IE sometimes have different ways of implementing the detail of the Digest specification. The following table shows the combinations of client and server that I've tested successfully. It's not exhaustive, and does not include non-Microsoft clients, but at least you can see the combinations I expect will work.

The license for this software is controlled by the licenses of the two constituent projects. Phalanger is released under the Microsoft Shared Source Permissive License (SS-PL), which permits redistribution in binary form. The PEAR PHP HTTP_WebDAV_Server is released under version 2.02 of the PHP license, which permits redistribution providing copyright notices are retained.

The system.web section of web.config can also be configured to use Windows authentication and to deny anonymous users with: `<authentication mode="Windows" />` and: `<deny users="?" />`.

Fixes already - thanks to those who sent emails. First version.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL).
It sounds to me like you are corrupting your model and introducing ambiguity. Using Strings for Dates and Integers is a bad idea. We've successfully dealt with those issues without corrupting our model. That is the purpose of the ActionForm. It is a point in which you need to convert your user input to your model. I think you are shortcutting here and will regret it later on. This is not a good practice. Brandon On 5/5/05, Darek Dober <dooverone@op.pl> wrote: > Registration my own implementation of converter is not a good choice, > because I want to treat only particular columns in that way, and converter > would treat > all integer types in the same way (I could be wrong:) > > Nest my real bean is the best solution to make that "clean", however I > decided to solve this problem on database side. > > If you notice, you would have the same problem with dates. If you want to > validate them, and keep invalid data to correct them, you have to keep them > as String ( I also could be wrong here:) > > The easiest and most painless method for me, was to use database functions: > For example: > In postgresql you can use function TO_DATE(), or just use expression like > this: #deptId:INTEGER#::integer, which shows the clue to convert type on > database side: > > SELECT '123'::integer, convert number 123 to integer type. > > In Oracle there are also functions like this: TO_DATE, TO_NUMBER, and so on. > > That's the way I did it > > Darek Dober > > > ----- Original Message ----- > From: "Lieven De Keyzer" <lieven_dekeyzer@hotmail.com> > To: <dooverone@op.pl> > Sent: Thursday, May 05, 2005 3:14 PM > Subject: Re: re: struts vs ibatis - Integer type > > > Where you able to use this advice? I'm kind of in same situation and > wanted > > to know if you could solve it this way. > > > > >The only way to make that "clean" is to push the translation effort > > >into your ActionForm and nest your real beans in the ActionForm. 
you > > >would then have a String value that you convert and set into your > > >nested bean. The other option is to write a different BeanUtils > > >Converter implementation for numerics and register it > > > >( > ls/ConvertUtils.html#register(org.apache.commons.beanutils.Converter,%20java > .lang.Class). > > > > > >I would personally avoid cluttering up your domain bean with faux > > >setters and getters. > > > > > >Brandon > > > > > >>On 5/2/05, Darek Dober <[EMAIL PROTECTED]> wrote: > > >> Hi, I hava a table 'users' with column dept_id (id of department in > > >>departments > > >table) > > >> This column is optional. That means the operator doesn't have to > assign > > >>inserted user to any department. If I have bean: public class > UserBean > > >>{ Integer departmentId; .... } struts will make automatic > > >>conversion of type. So departmentId will be set > > >>to 0, if I don't set any of department. That's a cheat, because, I > don't > > >>want to to have a department with id equals to 0, it should be NULL. > On > > >>the other hand, when I implement departmentId as String, struts act > > >>correctly. But while inserting record to the database, I get an error > sth > > >>like this: database column dept_id is type of bigint, inserted value is > > >>type of > > >>varchar. I have the solution: departmentId is type of String, but for > > >>ibatis I have > > >>the other metod getDepartmentIdAsInteger which return Integer or null if > > >>value is empty. It works, but i don't like this. Is there any cleaner > > >>solution for this. I looked into jpetstore, but there > > >>were columns of type varchar. Rest of them was mendatory. I cannot use > > >>columns of type varchar as foreign keys. Usage: .... VALUES( > > >> #departmentId:INTEGER#, > > >>.... doesn't help if departmentId is String Any ideas? Darek > > > > > >
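A standalone sketch may make the conversion pattern under discussion concrete: keep the raw input as a String so invalid entries can be validated and redisplayed, and convert to an Integer (or null) only at the boundary to the model. The class and method names here are illustrative, not from the thread, and in a real Struts app this logic would live in (or be called from) a class extending ActionForm:

```java
// Illustrative only. A real Struts form would extend
// org.apache.struts.action.ActionForm; this plain bean just shows
// the String-to-Integer-or-null boundary conversion.
class UserFormSketch {
    private String departmentId; // raw user input, kept as String

    public void setDepartmentId(String value) {
        this.departmentId = value;
    }

    public String getDepartmentId() {
        return departmentId;
    }

    // Convert at the form/model boundary: empty or missing input becomes
    // null, so the dept_id column can stay NULL instead of a bogus 0.
    public Integer toDepartmentId() {
        if (departmentId == null || departmentId.trim().isEmpty()) {
            return null;
        }
        return Integer.valueOf(departmentId.trim());
    }
}
```

Whether you expose this as a getDepartmentIdAsInteger() accessor for iBATIS or copy the converted value into a nested domain bean, the effect is the same: the String never leaks into the model.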
How to Create a Todo List in Angular 7?

The ToDo app is used to help us remember important tasks. We just add a task and, when it is accomplished, delete it. This to-do list uses various Bootstrap classes that make our web application not only attractive but also responsive.

Approach:

- Create a new Angular app using the following command:

  ng new my-todo-list

- Move inside the app using cd and run it. After that, open localhost and check that the app is working:

  cd my-todo-list
  ng serve

- Install Bootstrap using the following command, and edit the style.css file in the project:

  npm install bootstrap

- Open the src/app folder and start editing app.component.html.
- Open the app.component.ts file and write functions for adding and deleting tasks.
- For working with forms (that is, taking input), we have to import the FormsModule in the app.module.ts file:

  import { FormsModule } from '@angular/forms'
Hi, It is instructed that in order to run Partial Least Squares on SPSS from the "Regression" menu, one has to install IBM SPSS Statistics 20 Essentials for Python. I installed both the Essentials and Python 2.7.1, but still could not run the Partial Least Squares analysis. Can someone help?

Answer by SystemAdmin (532) | Jun 19, 2012 at 06:32 PM
There are additional requirements. You need to install the appropriate version of the numpy and scipy scientific libraries. See the installation instructions in the download package. You should also get a message to that effect when you run the procedure. HTH, Jon Peck

Answer by csting (0) | Jun 21, 2012 at 09:06 PM
Hi Jon, Thank you for the good tips. Not quite sure though what content you referred to by "installation instructions in the download package". But I installed the two following versions of Numpy and Scipy, and it still doesn't work. Did I use the right versions? What could be the problem?

scipy-0.11.0b1-py2.7-python.org-macosx10.6.dmg
numpy-1.6.2-py2.7-python.org-macosx10.3.dmg

Thanks!

Answer by SystemAdmin (532) | Jun 21, 2012 at 11:09 PM
The zip file contains a file called PLS_Extension_Module_Install_Instructions.pdf. When you say PLS doesn't work, that doesn't give us anything to go on. Please describe specifically what happens when you run a PLS command. There is also a topic called "I want to run PLS, but I can't find the required numpy and scipy libraries. What can I do?" in the FAQ, which you can get to from the link on the front page of the SPSS Community site.

Answer by csting (0) | Jun 28, 2012 at 10:19 AM
Hi Jon, I have already installed the numpy and scipy libraries instructed by you, as well as the required Python 2.7.1 and the Essentials for Python plug-in, as instructed by the document you mentioned below. With my limited IT knowledge, I have only followed the instructions before the "silent installation" section.
The problem is when I run "PLS" command from the Analyze menu, a pop-up window opens and says: " PLS dialog box is an extension procedure that requires the PLS Extension Model to be installed. You can download the extension model from SPSS community: ()" Does this help you to understand the situation and help me out? BRs, Ting Answer by SystemAdmin (532) | Jun 29, 2012 at 04:49 PM I need to dig into the exact criteria that trigger this message, but I do not see in the list of things that you have installed that you installed the PLS command itself. You need to download the PLS package from the SPSS Community site and follow the installation instructions there. The syntax definition and the main part of the implementation are not part of the Python Essentials or the third party libraries. The dialog box is included with the system. So be sure that you have actually installed the PLS command itself. One way to confirm this is to open a syntax window and just type PLS. If that command is not recognized, then that's where the problem is. HTH, Jon Peck Answer by csting (0) | Jun 30, 2012 at 02:07 PM Hi Jon, I think you might find the problem. I tried to run pls in syntax window and an error message was shown in the output window. Thus I think I don't have the PLS extension model yet. My last step is to install the PLS extension model following the instruction below. I assumed this should be done in Terminal. But I don't know (I found some instructions from the internet but I am afraid to mess the system up), can you please help to provide the commands in terminal in order to implement the following actions: "Copy plscommand.xml and PLS.py to the extensions subdirectory under the IBM SPSS Statistics installation directory. For Mac, the installation directory refers to the Contents directory in the SPSS Statistics application bundle." Thanks a lot! BRs, Ting Answer by SystemAdmin (532) | Jun 30, 2012 at 02:39 PM The download for PLS is a zip file. 
So just open that file and drag the files in it to the necessary directory. You will know that it is the right directory if you see other extension command parts (mainly xml and py files) in that location. I am not a Mac user, so I can't give you Terminal commands for this, but it would be the same as copying any other files. You won't mess the system up by just copying these files. If you are still unsure about where to put these files, run this program from a syntax window:

begin program.
import SPSSINC_CREATE_DUMMIES
print SPSSINC_CREATE_DUMMIES
end program.

That will show you where another extension command was installed on your system. CREATE DUMMIES is one of the standard set of extensions that is installed with the Essentials. HTH, Jon Peck

Answer by csting (0) | Jul 23, 2012 at 12:58 PM
Hi Jon, I almost gave up trying, but today I found the right location to copy the files of the PLS extension module, and it works! The directory is somewhat hidden, and one needs to find it by right-clicking the mouse and opening the "enclosing folder". Anyway, thank you so much for the guidance! BRs, Ting

Answer by SystemAdmin (532) | Jul 23, 2012 at 01:11 PM
Glad this works. Congratulations.
I just needed to read the data for my later calculations. So I decided to build a solution using the Python Standard Library. The first trick is that a KMZ file is nothing else but a zip-compressed KML file. Inside you'll find a file called doc.kml. So let's open and extract:

```python
from zipfile import ZipFile

kmz = ZipFile(filename, 'r')
kml = kmz.open('doc.kml', 'r').read()
```

The KML data's juicy part looks something like this:

```xml
<Folder>
  <name>11112222-XXYYZ-TESTTRACK</name>
  <Document>
    <name>11112222XXYYZTESTTRACK-track-20161214T105653+0100.kmz</name>
    <Placemark>
      <name>1111XXYYZ-track-20161214T105653+0100</name>
      <gx:Track>
        <when>2016-12-13T13:16:01.709+02:00</when>
        <when>2016-12-13T13:18:02.709+02:00</when>
        <when>2016-12-13T13:23:21.709+02:00</when>
        <when>2016-12-13T13:24:23.709+02:00</when>
        <!-- more timestamps -->
        <gx:coord>13.7111482XXXXXXX 51.0335960XXXXXXX 0</gx:coord>
        <gx:coord>13.7111577XXXXXXX 51.0337028XXXXXXX 0</gx:coord>
        <gx:coord>13.7113847XXXXXXX 51.0339241XXXXXXX 0</gx:coord>
        <gx:coord>13.7115764XXXXXXX 51.0341949XXXXXXX 0</gx:coord>
        <!-- more coordinates -->
        <ExtendedData>
        </ExtendedData>
      </gx:Track>
    </Placemark>
    <Placemark>
      <name>Reference Point #1</name>
      <Point>
        <coordinates>13.72467XXXXXXXXX,51.07873XXXXXXXXX,0</coordinates>
      </Point>
    </Placemark>
    <!-- more Placemarks -->
  </Document>
</Folder>
```

Now we can parse the resulting string using lxml:

```python
from lxml import html

doc = html.fromstring(kml)

for pm in doc.cssselect('Folder Document Placemark'):
    tmp = pm.cssselect('track')
    name = pm.cssselect('name')[0].text_content()
    if len(tmp):
        # Track Placemark
        tmp = tmp[0]  # always one element by definition
        for desc in tmp.iterdescendants():
            content = desc.text_content()
            if desc.tag == 'when':
                do_timestamp_stuff(content)
            elif desc.tag == 'coord':
                do_coordinate_stuff(content)
            else:
                print("Skipping empty tag %s" % desc.tag)
    else:
        # Reference point Placemark
        coord = pm.cssselect('Point coordinates')[0].text_content()
        do_reference_stuff(coord)
```

Alright. Let's see what's going on here: First we regard the document as HTML and parse it using lxml.html. Then we iterate over all Placemarks in Folder > Document > Placemark. If a Placemark has a child track, it's holding our timestamps and coordinate data. Otherwise it's considered a reference point just holding some location data. With cssselect we can get the respective data and do stuff with it. Just keep in mind it returns a list, so you always have to access the first element. Then we call text_content() to convert the tag content to a string for further manipulation and logging.

It's also worth mentioning that lxml, and by extension cssselect, does not support the necessary pseudo-elements for KML. So you won't be able to address anything like gx:Track. It's not a big deal here if you know that you can still address the element with cssselect('track'). For more info, look it up in the docs. I'm lazy, so I use cssselect. You might have to install this as a dependency with pip3 install cssselect. You can also use the selecting mechanism lxml provides, but previous experience has shown that it's very tedious and hard to debug for such a quick and dirty hack.
https://dmuhs.blog/2018/09/14/parsing-kmz-track-data-in-python/
CC-MAIN-2022-21
refinedweb
567
52.56
Summary
Elliotte Rusty Harold talks with Bill Venners about the API design principles that guided the design of the XOM (XML Object Model).

Bill Venners: What motivated you to create XOM? What were your personal and business motivations?

Elliotte Rusty Harold: I have no business model for XOM. I don't expect to make any money out of this, certainly not directly, probably not indirectly. More than anything else, at this time I had just finished writing a thousand-plus page book called Processing XML with Java, in which I exhaustively went through SAX, DOM, JDOM, TrAX, Jaxen, and a little bit into some other APIs. Along the way as I was doing this, I just noticed lots and lots of things. I'd say, "This could be better. That doesn't make any sense. That's really silly." And I thought, maybe the time was right for something better.

Most immediately, the specific motivation was that Walter Perry of the New York XML Users Group wrote to me and said, "Hey, do you want to come present to us this September?" I said, "Sure." And I sent him about three possible topics. One topic was, "What's Wrong with XML APIs and How to Fix Them." Walter said, "That sounds like the most interesting one. Why don't we do that?" So I said, "OK, now I've got to actually figure out how to fix them." So I spent a couple of months, part time, developing XOM in preparation for that presentation.

Bill Venners: Did you start from scratch, or did you refactor the JDOM codebase?

Elliotte Rusty Harold: I originally planned to fork JDOM. When I actually looked at it, however, it rapidly became apparent that it would be easier to start from a blank page. Currently, there's only one non-public class in XOM that is taken from JDOM, the Verifier class. Not coincidentally, that is the class in JDOM with which I've had the most personal involvement. I wrote the original version of Verifier, and various other people have contributed to it since then. Other than that, there is no JDOM code in XOM.
http://www.artima.com/intv/xomdesignP.html
Why this post?

It's very easy to create a react app from the terminal by following these commands:

```shell
npx create-react-app my-app
cd my-app
npm start
```

Easy, right? However, there is a possibility that we don't know what is running behind the scenes, and which components are required in order to get our react app working. So that's what we'll see in this post: how you can make a react app from scratch.

In order to get this going, we need npm installed in your system.

Steps

Create a directory in your system

```shell
mkdir my-app
```

go to the directory

```shell
cd my-app
```

create a package.json file

```shell
npm init -y
```

npm init will create a new package.json file. The -y flag tells npm to use all default config options while creating package.json.

install react and react-dom

```shell
npm install react react-dom
```

Use: it will add react and react-dom under dependencies in the package.json file, and it creates a node_modules directory with all the other dependent libraries in it.

create a .gitignore to avoid pushing unnecessary code to github

```shell
vim .gitignore
```

possible items you need in your .gitignore file are:

```
node_modules
.DS_Store
dist
```

.DS_Store applies if you're on a Mac machine. dist (the distribution folder) is a build directory generated by webpack and babel when we build our code, so we don't want this folder to go along, as it is generated by the compilers for the production app.

create the app folder

```shell
mkdir app
```

create two files within it.

```shell
touch index.js index.css
```

Start writing your hello world react application

```jsx
import React from 'react';
import ReactDOM from 'react-dom';
import './index.css';

class App extends React.Component {
  render() {
    return (
      <div>Hello World</div>
    );
  }
}

ReactDOM.render(<App />, document.getElementById('app'));
```

When you try to run this code in the browser it will give an error, as our code is written in JSX and the browser does not understand it. This is the point where Babel and Webpack come into play. To install all the required dependencies for these two, run the following command from your terminal.
```shell
npm install --save-dev @babel/core @babel/preset-env @babel/preset-react webpack webpack-cli webpack-dev-server babel-loader css-loader style-loader html-webpack-plugin
```

Flag --save-dev: we use this flag to separate our build dependencies from our app dependencies. You can check your package.json file to see how this flag makes that distinction.

Webpack configurations

Webpack is a module bundler. Currently we only have one module; however, as our app expands we will have more modules. Webpack intelligently binds all those modules together and creates one single file which serves all of them.

```shell
touch webpack.config.js
```

How to configure webpack: check out my previous post. For babel-loader to work, we have to add the babel preset config to our package.json file:

```json
"main": "index.js",
"babel": {
  "presets": [
    "@babel/preset-env",
    "@babel/preset-react"
  ]
}
```

To run the build we have to add webpack to the scripts section of our package.json:

```json
"main": "index.js",
"babel": {
  "presets": [
    "@babel/preset-env",
    "@babel/preset-react"
  ]
},
"scripts": {
  "create": "webpack"
}
```

So when I run npm run create from the terminal, it will run webpack, which will create the dist folder with our bundle file and an index.html file.

It's a hassle to run webpack every time, so you can start a webpack dev server, which will build your code as soon as you run it. Modify the script in your package.json with the following:

```json
"scripts": {
  "start": "webpack-dev-server --open"
}
```

Now when you run npm run start it will start the dev server and open your app in the browser.

Final directory structure will look like this

Gist

Conclusion

We create our simple app component and render that app component in the element which has the id 'app'. Webpack takes all the modules in our application, starting from index.html, and runs them through all the transformations, like babel-loader, css-loader, style-loader.
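A minimal webpack.config.js consistent with the dependencies installed above might look like the following sketch. The entry and output paths, the bundle filename, and the loader rules are my assumptions based on the app/ folder and dist output described in this post, not the author's exact config:

```javascript
// Illustrative minimal webpack config; paths and filenames are assumptions.
const path = require('path');
const HtmlWebpackPlugin = require('html-webpack-plugin');

module.exports = {
  mode: 'development',
  entry: './app/index.js',
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: 'index_bundle.js',
  },
  module: {
    rules: [
      // Transpile JS/JSX through babel (presets come from package.json).
      { test: /\.(js|jsx)$/, exclude: /node_modules/, use: 'babel-loader' },
      // Inject CSS into the page via style-loader after css-loader resolves it.
      { test: /\.css$/, use: ['style-loader', 'css-loader'] },
    ],
  },
  plugins: [
    // Generates dist/index.html and injects the bundle script tag.
    new HtmlWebpackPlugin(),
  ],
};
```

With this in place, npm run create and npm run start behave as described above.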
Babel-loader uses the env and react presets to support older browsers and spits the entire code out into the dist folder, which we can use to deploy the app to the production environment.

Discussion (24)

Thank you, thank you, thank you. I was using the create react app npm module and it's too much for what I currently need in my react learning path. I also like to have a certain degree of control over my app structure.

Glad you find it useful

I'm still new to managing npm modules/packages. I've been thinking about the package.json. One thing I wonder is: should edits to the package.json be mainly automated through installations and other configuration options invoked, or is a lot of time spent manually editing package.json for config in some cases?

In my opinion, I would not do that. I install the libraries as I need them and add the --save or --save-dev flags accordingly, so they get added automatically to the package.json file. I am sure you're aware of it. For sure you can manage it manually if needed; I am not doing it so far as my workflow is concerned.

Yeah, just by scrolling through it I got the vibe that it's not common to manually edit. Thanks for replying. :D

Good job! Really useful! I have a question though: how can I access the app from another machine? I know that while using create-react-app it is possible to access it from an address, but it does not seem to work the same way when installing webpack manually...

Awesome step-by-step guide! I bookmarked it a few months ago and came back to it many times since. There's only one thing missing (no big deal): you don't mention the creation of index.html at any point in the guide (it's listed with all the other files though).

thanks .. it's awesome

I think you forgot to add html-loader to webpack.config.js. It didn't load the template without html-loader.

Thanks for this post! I've used create-react-app as a crutch for far too long. I finally built a react app using your article as a guide. Keep at it!
Glad it helps

Hi, very good. May I ask how to make it work in IE11? I tried to use a polyfill but had no success. Got this error in IE11: "Object doesn't support property or method 'entries'". Any idea? Thanks

Yes, the creation of index.html is not mentioned. Also, if you're using webpack-cli 4 or webpack 5, change webpack-dev-server to webpack serve. Otherwise, it was a great intro to react for me. thanks

thanks. this article helped me recap all this react.js stuff!

Wonderful post, you rock dude!!

Excellent

Thanks, I always have to remove eslint whenever I use npx create

Well explained! Gonna go try this way right now.

Awesome... thanks for sharing... but I have a question: why did you put the "dist" folder in your .gitignore file? I think every time you create a build it will generate this dist folder automatically.

Thx for this post!

Thanks for this.

Has anyone checked the console? I see an error "Entrypoint undefined = index.html"

Thanks a lot. ❤❤❤
https://dev.to/vish448/create-react-project-without-create-react-app-3goh
We'll build on what we learned and look at calling backend APIs using a class.

Ionic Start

We'll start by creating an app with the blank template, using ionic start.

```shell
ionic start apiApp blank --v2 --ts
```

As described in the first post of this series, we now have some basic plumbing in place.

Creating a New Provider

Let's look at adding a new provider (also known as a service), which will be used to make an HTTP request to an API. Similar to the last post, where we created a page, the CLI makes this significantly easier by providing automatic provider generation, using ionic g.

After changing into the project directory (cd apiApp), let's create a new provider called PeopleService, using the CLI.

```shell
ionic g provider PeopleService
```

The CLI will generate an @Injectable class called PeopleService in app/providers/people-service/people-service.ts. This class will have basic boilerplate code inside of a load method to make an HTTP request.

```typescript
load() {
  if (this.data) {
    // already loaded data
    return Promise.resolve(this.data);
  }

  // don't have the data yet
  return new Promise(resolve => {
    this.http.get('path/to/data.json')
      .map(res => res.json())
      .subscribe(data => {
        // we've got back the raw data, now generate the core schedule data
        // and save the data for later reference
        this.data = data;
        resolve(this.data);
      });
  });
}
```

Random User API

The Random User Generator is a free API that generates data for fake users, including names, emails, pictures, etc. It is a very helpful API for doing mockups and learning.
Making an HTTP request to the API will return a JSON response similar to the following:

```json
{
  "results": [
    {
      "gender": "male",
      "name": { "title": "mr", "first": "eugene", "last": "jordan" },
      "location": {
        "street": "3277 green rd",
        "city": "australian capital territory",
        "state": "queensland",
        "postcode": 275
      },
      "email": "[email protected]",
      "login": {
        "username": "beautifulbutterfly703",
        "password": "redhot",
        "salt": "tva1i6Oo",
        "md5": "a4231f30aa1fcfe46e4c7c4537a4bf11",
        "sha1": "d6051a921eba285bbeccd95388332f92a50047ce",
        "sha256": "093b0e1b429a105902f91e4be28c9dc12629701924312d63d55cdfd556d54c38"
      },
      "registered": 1000882268,
      "dob": 537587321,
      "phone": "02-4894-6208",
      "cell": "0477-498-405",
      "id": { "name": "TFN", "value": "571061435" },
      "picture": {
        "large": "",
        "medium": "",
        "thumbnail": ""
      },
      "nat": "AU"
    }
  ],
  "info": {
    "seed": "8eb0b2c2e327a185",
    "results": 1,
    "page": 1,
    "version": "1.0"
  }
}
```

If we modify our request to ask for ten results, the results array in the response will contain ten users.

Let's put this in our PeopleService. We will need to make a couple of changes to the code the provider gave us. First, let's put in the URL:

```typescript
this.http.get('')
```

Currently, our code stores and returns the whole response. However, we only want to return the results array from the response, so we'll modify the .subscribe method:

```typescript
.subscribe(data => {
  this.data = data.results;
  resolve(this.data);
});
```

Now our method, when called, will return a promise which will resolve with an array of users when we get a response back from the API.

Calling PeopleService

Let's take a look at calling our PeopleService and outputting the results as a list on our home page. First, inside app/pages/home/home.ts, we need to import our service:

```typescript
import {PeopleService} from '../../providers/people-service/people-service';
```

Next, in our @Page decorator, we will need to add our service as a provider.
```typescript
@Page({
  templateUrl: 'build/pages/home/home.html',
  providers: [PeopleService]
})
export class HomePage {
```

Now we can add a constructor to our page, create a people property, import the PeopleService, and assign the PeopleService to a property of the class.

```typescript
export class HomePage {
  public people: any;

  constructor(public peopleService: PeopleService) {
  }
}
```

Let's add a method to our HomePage class called loadPeople that will call our peopleService.load method and assign the result of the promise to the people property of the class.

```typescript
loadPeople() {
  this.peopleService.load()
    .then(data => {
      this.people = data;
    });
}
```

Finally, we will call this method from our constructor.

```typescript
export class HomePage {
  public people: any;

  constructor(public peopleService: PeopleService) {
    this.loadPeople();
  }

  loadPeople() {
    this.peopleService.load()
      .then(data => {
        this.people = data;
      });
  }
}
```

Now that our class is getting the data, let's modify our template for this page (app/pages/home.html) to list out the users with their picture, first name, last name, and email. We'll accomplish this by looping through our people array property using ngFor.

```html
<ion-navbar *navbar>
  <ion-title>
    Home
  </ion-title>
</ion-navbar>

<ion-content>
  <ion-list>
    <ion-item *ngFor="let person of people">
      <ion-avatar item-left>
        <img src="{{person.picture.thumbnail}}">
      </ion-avatar>
      <h2>{{person.name.first}} {{person.name.last}}</h2>
      <p>{{person.email}}</p>
    </ion-item>
  </ion-list>
</ion-content>
```

Serve

Finally, in the CLI, we'll run ionic serve to view our app in the browser:

```shell
ionic serve
```

You should end up with something similar to the following in your browser:

Conclusion

In under ten minutes, using the power of the Ionic CLI and some simple code, we have created an app that interacts with a backend API. This functionality is needed to create most apps, and the ability to get it working quickly will help you greatly on your journey toward creating the next #1 app in the app stores!
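As a closing aside: the cache-or-resolve pattern inside PeopleService.load is worth seeing framework-free. This sketch is mine, not the article's; the fetcher callback stands in for the this.http.get(...).map(...) call, and no Ionic or Angular imports are needed:

```typescript
type Person = { name: string };

class PeopleCache {
  private data?: Person[];

  // Resolve from the cache when possible; otherwise run the fetcher once
  // and remember its result for every later call.
  load(fetcher: () => Promise<Person[]>): Promise<Person[]> {
    if (this.data) {
      return Promise.resolve(this.data);
    }
    return fetcher().then(results => {
      this.data = results;
      return this.data;
    });
  }
}

// Usage: the second call never invokes its fetcher, because data is cached.
const cache = new PeopleCache();
cache.load(() => Promise.resolve([{ name: 'eugene' }]))
  .then(() => cache.load(() => Promise.reject(new Error('no refetch'))))
  .then(people => console.log(people.length));
```

The same shape is why the home page can call loadPeople freely without issuing duplicate HTTP requests.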
https://blog.ionicframework.com/10-minutes-with-ionic-2-calling-an-api/
Introduction

In the first tutorial, we looked at how to develop the Profile containing the Web Service modeling elements for use by the TigerTeam Trimm Model Generator and how to save it as a Profile that can be imported into EA. In this second tutorial we will look into how we can make the Profile and its content more usable by combining it with a Toolbox and a Diagram into an MDG.

Why do we want to combine those and introduce the MDG when the Profile already seems to do the trick? The simple answer is ease of use compared to using just the Profile. By adding a Toolbox and a Diagram, we can make our Profile act like any other built-in feature and make it a lot easier to use.

This tutorial

In the first part, we created the Profile and tested it, so now we need to make its elements available in a Toolbox and a Diagram. In this part two of the tutorial we will still use the Web Service example and add the remaining elements to it. We still need to have the following 5 elements go together:

- The elements for modeling Web Services, aka the Profile. We did that in Part 1.
- A Toolbox that organizes the elements from the Profile.
- A Diagram for modeling Web Services that defaults to our Toolbox and Profile elements.
- The MDG control file, which keeps all the elements together when generating the MDG.
- The generated TrimmWS Web Service MDG file itself.

This picture shows how things are connected:

Everything is, apart from the MDG control file and the generated Web Service MDG file, modeled inside EA. Now that we have recapped, let's start defining the Toolbox.

Creating an EA Project for the MDG and organizing it

We will use the same EA project as we started in part one and it will still be organized as shown below:

Creating the Toolbox

The purpose of the Toolbox is to organize the elements in our Profile in neat sections and with nice names, so it is easy for us to locate them and add them to our Diagram. The first step is to create the Toolbox Diagram, and it is done like this.
Again we are using the Tigerteam Web Service MDG as an example:

- Create the “nice-name” package that will contain the Profile itself. I have called it “Tigerteam TrimmWS Toolbox”.
- Say yes to creating a Diagram. I have chosen a “Package Diagram” with the same name as the Package, i.e. “Tigerteam TrimmWS Toolbox”.

Now we have a Diagram where we can create the “real” Toolbox package:

- First we have to select the “Profile” toolbox. It is done by clicking on “More tools..” in the Toolbox window and selecting “Profile”.
- Drag the “Profile” package from the Profile toolbox onto the diagram.
- Give the Toolbox whatever name you like. I have chosen to give it the same name as my MDG, i.e. “TrimmWS”.
- Make sure that the “Automatically add new diagram” check mark is set and select a Class diagram. More or less the same procedure as when we created the Profile Diagram.

Now you are ready to define your Toolbox. You should now have a diagram that looks something like this:

Designing our Toolbox

If you haven't done so already, double-click on the “<<profile>> TrimmWS” Package to get to the Toolbox diagram. It is here we add the Toolbox Page and one or more Toolboxes. In our example we only need one Toolbox, but feel free to add more if you think the elements in the Profile need splitting up. I usually separate the Elements from the Connectors in two Toolboxes to make it easier to locate the different elements.

- The first step is to create a Toolbox Page. This is done by adding a “Class” to the diagram, giving it the name “ToolboxPage” and the stereotype “metaclass”. We name it “ToolboxPage” because that's the naming convention Sparx Systems has decided on for Toolbox pages. If we call it something else, it will not work.

That takes care of the Toolbox Page.
- Now we have to attach our “TrimmWS” Toolbox to our ToolboxPage by dragging an “Extends” arrow from the Toolbox to the ToolboxPage as shown bellow: Before we can start adding our Profile elements to the Toolbox, we have to give the Toolbox an Alias and Notes. This is added to the “Notes:” field on the Toolbox Diagram Properties which you access by double-clicking on the diagram background. You should see something like this: Now we can move on to attaching Profile elements to the “TrimmWS” Toolbox. These are added as Attributes and follow a very specific syntax that is easier to explain by an example. Lets say we want to add the “WSPackage” element from our Profile to the Toolbox we will add an Attribute with the line below in the “Name:” field: TrimmWS::WSPackage(UML::Package) TrimmWS is the name of our Profile Package and acts as a namespace for our WSPackage Element. This makes it possible to mix elements in a toolbox from different Profiles. (UML::Package) tells us that the TrimmWS::WSPackage is based on a UML Package Element from the UML Namespace. In more general terms, the syntax is: OwnProfilePackage::OwnProfileElement(BuiltInProfile::BuiltInElement) In the “Initial Value:” field on the Attribute, we write the more readable name we would like to have shown in the Toolbox next to the element Icon. In this case it is WS Package. Note: The elements will be shown in the same order as the Attributes have in the Toolbox. Once we have added the elements from our Profile that we want to be visible in the Toolbox, we have a toolbox that looks like this: Tip: Here we only have one toolbox, but you can have as many as you like and you can have the same profile element in more than one. To add more toolboxes, just add them to the diagram, give them a name, and have them extend the ToolboxPage metaclass. Saving the Toolbox Saving the Toolbox is almost the same as saving the Profile. 
EA generates an XML file that in this case is to be included in the MDG file we generate later. The biggest difference is that the Profile HAD to be saved from the Package, but the Toolbox must be saved from the Diagram. That's the only way I can give the Toolbox XML file a name that contains the text “Toolbox”, so it is easier to locate later when I am using it in my MDG.

The check marks in the “Include” section do not really change anything when saving a Toolbox. Don't change the text in the “Notes:” field. Press Save and EA will generate an XML file containing the Toolbox.

Creating the Diagram

Now we have a Profile and a Toolbox, so what we need now to complete the picture is a Diagram. A Diagram that has our newly created Toolbox added to it, so we can create Web Services using a Diagram made for just that purpose. Creating a Diagram is a variation on the theme of creating both Profiles and Toolboxes.

In order to make the following chapters easier to understand, I have chosen to call the Diagram we are creating the MDG Diagram, so there is less confusion between that and the diagrams we use for creating and designing the MDG Diagram. The first step is to create the Diagram for the MDG Diagram, and it is done like this.
In our case it is “TrimmWS” - Make sure that the “Automatically add new diagram” check mark is set and select a Class diagram. More or less same procedure as when we created the Profile Diagram. Now you are ready to define your MDG Diagram. You should now have a diagram that looks something like this: Designing the MDG Diagram If you haven’t done so already, double-click on the “<<profile>> TrimmWS” package to get to the MDG Diagram diagram. It is here we add the elements that is needed to define our MDG Diagram. - First step is to create a Stereotype class that will be our MDG Diagram and give it a name that makes sense for our MDG. Drag the Stereotype class onto the diagram and name it “TrimmWS”. - Next step is to tell which built-in diagram we want our MDG DIagram to be based upon. To do so, we add a “Class” to the diagram, giving it the stereotype “metaclass”. Naming it is a bit more tricky. The must always be prefixed with “Diagram_” and then the name of the built-in diagram type you want to base your own MDG Diagram on. I have chosen to use a Component diagram, so the metaclass will be named “Diagram_Component”, as you can see o the figure below. The full list of diagram types can be found in the EA User Guide. Just search for “Create Custom Diagram Profiles”. - Now we have to attach our “TrimmWS” MDG Diagram to the Diagram type we want it to extend, by dragging an “extends” arrow from the “TrimmWS” class to the “Diagram_Component” metaclass. So far so good. Now we have the basics in place, but we still need to tell the MDG Diagram which Toolbox it has to use, what to call the MDG Diagram and so on. This is done by adding attributes to the “Diagram_xxx” class and the absolute minimum attributes are these: We are not fully done with the stereotype class, the “TrimmWS”, yet. We can give it a more human readable name by using the “Alias:” field, and I have chosen to call this “Trimm WS”. 
We can also add a description in the “Notes:” that tells what the diagram is used for. This description will be show in the bottom right-hand corner of the “New Diagram” dialog when we use the MDG in the future, as shown below: You should now have an MDG Diagram definition that looks like this: There are lots more options to put on the “Diagram_Component” class, but they are not necessary for this example. Hint: You can have as many MDG Diagrams in the same diagram as you like. Lets say you design and MDG that has elements that is to be used on a class diagram and elements used on an activity diagram. Then you create a Toolbox page per MDG diagram type and define 2 MDG diagrams and let each of them point to their own Toolbox. Controlling the name shown in the “New Diagram” dialog When you add a digram in EA you are presented with a dialog, like the one shown below, where you first select the MDG Technology, the “Select from:” list and then the actual Diagram contained on the MDG, from the “Diagram Types:” list: We have selected the “Trim WebService” MDG, but where does that nice name come from? As with the Toolbox profile, it comes from Alias information added to the “Notes:” field of the Diagram’s diagram properties. Double-click on the diagram background and you will get this dialog: Saving the MDG Diagram Saving the MDG Diagram is the same as saving the Toolbox. EA generates an XML file that in this case is the be included in the MDG file we generate later. As with the Toolbox, the MDG Diagram is saved from the diagram to get it MDG Diagram a name that contains the text “Diagram” so it is easier to locate later when I am using it in my MDG. The check-marks in the “Include” section does not really change anything when saving a Diagram. Press Save and EA will generate an XML file containing the MDG Diagram. Putting it all together in an MDG File So far we have designed our Profile, our Toolbox and our MDG Diagram and saved them as 3 separate XML files. 
Now we have to make them all come together in one common XML file that contains the full MDG. In order to do so, EA has a built-in MDG generator that will generate 2 more files for you. One with “.MDG” suffix that contains information about what to include in the generated MDG and one with “.xml” suffix, that is the actual generated MDG file. The MDG generator is a wizard where the steps are defined by what elements you want to include in the resulting MDG XML file. To make it all more clear I will bring you through all the steps necessary to generate the Tigerteam Web Service MDG. - First you start the MDG generator by selecting “Tools” -> “Generate MDG Technology file..”. You will be presented with this dialog where you just press “Next>”. - Now you have to decide if you want to continue without saving the MDG control file, create a new one or use an existing. I always create one so I don’t have to type in all my selections everytime I am generating a new version. Just to be nice to you, I choose to create a new MDG file, that I, of course, will reuse the next time I generate. I select “Create a new MTS file” and press “Next>”. The dialog looks like this: - We give our MDG control file a name and I have chosen to call this one “TigerTeam TrimmWS MDG.mts”. Press “Next>”: - Now it starts to get interesting. On this next dialog, there are a number of interesting fields: The rest of the fields are optional and fairly self explanatory. Fill in the information you want to and press “Next>” - This next dialog is where we select what elements we want to include in our MDG. The remaining number of steps in the wizard is based on the selections we make here: Our MDG consists of a Profile, a Toolbox and an MDG Diagram, so we check “Profiles”, “Diagram Types”, and “Toolboxes. We also have 2 Tagged Value types, “WSNamespace” and “Fault Messages”, so we also check “Tagged Value Types”. Now we can press “Next>”. - In this step we select the Profile we want to include. 
It is here our naming convention comes in handy as you can see below: Select the file containing our Profile and press “Next>”. - We do the same for both the Diagram and for the Toolbox as shown here for the MDG Diagram: - And for the Toolbox: - Next we select the Tagged Value types we want to include in the MDG. I always remove the 3 default ones, that for some reason always are present, from the model so I don’t risk selecting them by mistake. I have selected all and then I press “Next>”. - Now EA will present you with an “are you sure you want to continue?” dialog, and if you are, like I am now, press “Finish”. SUCCESS!!!!: We have now generated a complete MDG technology and it is saved in the file “TigerTeam TrimmWS.xml” ready for distribution! Deploying the MDG Technology Now we have a fine new MDG Technology, but how do we make it available to the rest of the world? Note: If you have developed an MDG that contains either Diagram Wizards or have used EMF images to change the appearance of your elements, they have to be installed/copied manually you have installed the MDG file. As a general rule, Wizards and EMA files must be copied to the same folder as the MDG. EA does not have any import facilities for that, so if you would like to have an easily installable MDG, you have to build install scripts for them or package them as an EA Add-in. Seems to me to be a prime candidate for an enhancement request. Always remember to restart all instances of EA after having installed the MDG files. Simple file copy The simplest way is to copy the XML file containing the MDG, in our case “TigerTeam TrimmWS.xml”, to the folder “C:\Program Files (x86)\Sparx Systems\EA\MDGTechnologies”, but that has its limitations. It will work fine if you are the only one using the MDG but if you are to distribute it to a larger number of users, it can become a hassle to make sure that everybody has the same version. 
Also you will mix your custom made MDG’s with the ones provided with EA and risk loosing it when you upgrade to a newer version. Remember to restart EA after you have copied the file. If your MDG contains EMF files, they must be copied to the same folder. Wizards can also be copied to the MDG folder, but it is more correct to place them in the “C:\Program Files (x86)\Sparx Systems\EA\ModelPatterns” folder. To uninstall the MDG files, they have to be manually deleted from the respective folders. Import locally via EA Another way to import an MDG is by using the “MDG Technology Import” functionality of EA. Using that, EA will place the imported MDG under your Roaming User account making it accessible to you if you log-in on another machine on the same network under Active Directory control. The MDG will still be local to you, but available to you from other machines. To use the MDG importer you have to: - Select “Tools” -> “MDG Technology Import”. You will be presented with this dialog: I have cheated a bit and already selected the “TigerTeam TrimmWS.xml” file containing the MDG. - Press “OK” and EA will ask you to restart . This is important because the MDG will not be loaded until all instances of EA has been shut down before you start Ea again. The dialog looks like this: The “TrimmWS” MDG is now imported and ready for use. Note: EA will place the MDG’s imported using this method on “C:\Users\<your user>\AppData\Roaming\Sparx Systems\EA\MDGTechnologies”. If your MDG contains EMF files or Diagram Wizards, they must be manually copied to the same folder. To uninstall the MDG you will have to manually delete its files. Using a shared directory A more elegant solution is to use EA’s built-in MDG Technology organizer to include the MDG. By doing so you can place the MDG file anywhere you like, even on a shared folder, which makes distribution a lot easier. To do so you need to: - Select “Settings” -> “MDG Technologies..”. 
You will be presented with this dialog: As you can see, all the available MDG’s are listed here, but ours is missing. To add our MDG: - Press “Advanced..” and on the next dialog press “Add” and “Add Path..”. Select the directory where your MDG file(s) are located and press “OK”. Mine are located on the “Z:” drive as you can see here: Note: You will only see the paths and not the files, so don’t worry. Just press “OK”. EA will tell you to restart and you have to in order to make the MDG appear on the list, as shown below: If your MDG contains EMF files or Diagram Wizards, they must be copied to the same folder as well. To uninstall the MDG you will have to manually delete its files. The MDG is now ready to use. Closing remarks I hope you have found the tutorials on how to build an MDG for EA useful and helpful enough to start developing your own MDG’s. I am sure that there are other ways to do it easier or simpler, but what I have described here is what I have managed to get to work. Next step In Part 3 I will explain how to add Datatypes to our TrimmWS MDG and how make sure the TrimmWS elements make us of them in the most user friendly way possible Download the MDG files You can download the complete “TigerTeam TrimmWS” MDG model and files from our Downloads page. Really valuable tutorial. But a “Page not found” error appears when trying to download the MDG file. When clicking the link on the Downloads section. Can you correct it? Thanks Thank you for informing us about the download error. Fixed, so please try to download again. This tutorial is perfect. It helped me a lot. You could write about how to assign an icon to a toolbox item. It’s not hard, but will save us some time. Thanks Thank you for your feedback. Yes, I will include a description on how to add icons to a toolbox item too. Henrik Nice tutorial! But I’m wondering… How can I open an existing MDG file (eg. BPMN 1.1 Technology.xml) to add some tagged values? 
–> Now I have changed the XML itself, but it seems easier if I could open the MDG in Enterprise Architect… And a second question: how can I update existing elements after changing the XML? Thanks!

Thanks. It is always nice to know that what you have made is useful. The short answer is that you can’t. The MDG files provided by EA and Sparx are to be regarded as compiled MDGs, and since you do not have the source, i.e. the EA model for the MDG, you cannot open them. The only way to change them is to update the generated XML files directly.

I am not quite sure what you mean by your second question. Do you mean how you update the models where you have already used the elements that you have changed? Henrik
Understanding Server-Side Blazor

With its latest update, the Blazor framework now offers developers the ability to create server-side applications. Read on to learn how.

Introduction

We all know that the Blazor framework is a client-side web framework. But is it possible to run a Blazor application separate from the UI thread? The latest version of Blazor (0.5.0) gives us the flexibility to run Blazor in a separate process from the rendering process. We are going to explore server-side Blazor in this article.

What Is Server-Side Blazor?

Since Blazor is a client-side web framework, the component logic and DOM interaction both happen in the same process. However, the design of the Blazor framework is smart and flexible enough to run the application separate from the rendering process. For example, we can run Blazor in a web worker thread separately from the UI thread. In this scenario, the UI thread will push events to the Blazor worker thread, and Blazor will push UI updates to the UI thread as needed. Although Blazor does not support this functionality yet, the Blazor framework is designed to handle such scenarios and is expected to support it in future releases.

Starting with Blazor 0.5.0, we can run a Blazor application on the server. This means that we can run Blazor components server-side on .NET Core while other functionalities, such as UI updates, event handling, and JavaScript interop calls, are handled by a SignalR connection over the network. The .NET part runs under CoreCLR instead of WebAssembly, which provides us access to the complete .NET ecosystem, debugging, JIT compilation, etc.
This adds extensibility to the Blazor framework, as server-side Blazor uses the same component model as a client-side Blazor app. Let us create our first server-side Blazor app and explore it to get a better understanding of this new feature.

Prerequisites

- Install the .NET Core 2.1 or above SDK from here.
- Install Visual Studio 2017 v15.7 or above from here.
- Install the ASP.NET Core Blazor Language Services extension from here.

Visual Studio 2017 versions below v15.7 do not support the Blazor framework.

Creating a Server-Side Blazor Application

Open Visual Studio and select File >> New >> Project. After selecting the project, a "New Project" dialog will open. Select .NET Core inside the Visual C# menu from the left panel. Then, select "ASP.NET Core Web Application" from the available project types. Set the name of the project as ServerSideBlazor and press OK. After clicking OK, a new dialog will open asking you to select the project template. You'll see two drop-down menus at the top left of the template window. Select ".NET Core" and "ASP.NET Core 2.1" from these dropdowns. Then, select the "Blazor (Server-side in ASP.NET Core)" template and press OK. This will create our server-side Blazor solution. You can observe the folder structure in Solution Explorer, as shown in the below image:

The solution has two project files:

- ServerSideBlazor.App: this is our ASP.NET Core hosted project.
- ServerSideBlazor.Server: this contains our server-side Blazor app.

All of our component logic lies in the server-side Blazor app. However, this logic does not run on the client-side in the browser; instead, it runs on the server-side in the ASP.NET Core host application. This application uses blazor.server.js for bootstrapping instead of the blazor.webassembly.js used by client-side Blazor apps. If we reference blazor.webassembly.js instead of blazor.server.js inside the index.html file, then this application will behave as a client-side Blazor app.

The Blazor app is hosted by an ASP.NET Core app, which also sets up the SignalR endpoint. Since the Blazor app is running on the server, the event handling logic can directly access the server resources and services. For example, if we want to fetch any data, we no longer need to issue an HTTP request; instead, we can configure a service on the server and use it to retrieve the data. In the sample application that we have created, the WeatherForecastService class is defined inside of the "ServerSideBlazor.App/Services" folder.

using System;
using System.Linq;
using System.Threading.Tasks;

namespace ServerSideBlazor.App.Services
{
    public class WeatherForecastService
    {
        private static string[] Summaries = new[]
        {
            "Freezing", "Bracing", "Chilly", "Cool", "Mild",
            "Warm", "Balmy", "Hot", "Sweltering", "Scorching"
        };

        public Task<WeatherForecast[]> GetForecastAsync(DateTime startDate)
        {
            var rng = new Random();
            return Task.FromResult(Enumerable.Range(1, 5).Select(index => new WeatherForecast
            {
                Date = startDate.AddDays(index),
                TemperatureC = rng.Next(-20, 55),
                Summary = Summaries[rng.Next(Summaries.Length)]
            }).ToArray());
        }
    }
}

Furthermore, we need to configure the service inside the ConfigureServices method in the "ServerSideBlazor.App/startup.cs" file.

public void ConfigureServices(IServiceCollection services)
{
    services.AddSingleton<WeatherForecastService>();
}

We will then inject the service into the FetchData.cshtml view page, where the method GetForecastAsync is invoked to fetch the data.

@using ServerSideBlazor.App.Services
@page "/fetchdata"
@inject WeatherForecastService ForecastService

// HTML DOM here.

@functions {
    WeatherForecast[] forecasts;

    protected override async Task OnInitAsync()
    {
        forecasts = await ForecastService.GetForecastAsync(DateTime.Now);
    }
}

Go ahead and launch the application in Google Chrome. It will open a browser window and the app will look like a normal Blazor app. Open the Chrome Dev Tools.
Navigate to the "Network" tab and you'll see that the application has not downloaded any .NET runtimes or the app assembly. This is because the app is running on the server-side on .NET Core. Since the dependencies are not downloaded when the application is booted up, the size of the app is smaller and it also loads faster compared to a normal Blazor app.

Advantages of Server-Side Blazor

A server-side Blazor application provides us many benefits.

- Since the UI update is handled over a SignalR connection, we can avoid unnecessary page refreshes.
- The app download size is smaller and the initial app load is faster.
- The Blazor component can take full advantage of server capabilities such as using .NET Core compatible APIs.
- It will also support existing .NET tooling like debugging the application and JIT compilation.
- Since server-side Blazor runs under a native .NET Core process, and not under Mono WebAssembly, it is also supported on browsers that have no WebAssembly support.

There are also a few drawbacks to server-side Blazor apps.

- Since UI interaction involves SignalR communication, it adds one extra step in making network calls, which results in latency.
- The scalability of apps to handle multiple client connections is also a challenge.

Conclusion

We have learned about the latest server-side Blazor application, introduced in the Blazor 0.5.0 release, and looked at how it is different from normal client-side Blazor apps. We also discussed the pros and cons of using a server-side Blazor app over a client-side Blazor app. You can check my other articles on Blazor here.
Rodney Mach is HPC Technical Director for Absoft. He can be contacted at rwm@absoft.com.

Mac OS X Tiger is the first version of the Macintosh operating system that supports 64-bit computing, thereby letting you fully exploit the 64-bit PowerPC G5 processor. However, this does not necessarily mean that you should migrate every application to the 64-bit platform. Most OS X apps don't need to be ported to 64-bit, and in fact will execute faster as 32-bit applications. The main reason you might want to make an application 64-bit is if it needs to access more than 4 GB of memory. Applications in this category include scientific and engineering programs, rendering applications, and database apps. So before looking at what's necessary to port your applications to 64-bit, it is a good idea to examine the circumstances that don't require applications to be ported to 64-bit:

- 64-bit math. You don't need to port to 64-bit to do 64-bit arithmetic with OS X on 64-bit PowerPC G5 hardware. The PowerPC supports 64-bit arithmetic instructions in 32-bit mode. You can use the GCC options -mcpu=G5 to enable G5-specific optimizations, as well as -mpowerpc64 to allow 64-bit instructions. Using these two options enables performance gains in 32-bit applications.
- Apple has announced that the Mac platform will be transitioning to Intel. Intel processors, such as the 64-bit Intel Xeon, require applications to be 64-bit to take advantage of the additional 64-bit general-purpose registers (unlike the PowerPC). Therefore, you may need to reevaluate decisions to port to 64-bit once more details about the Intel on Mac architecture become available, especially if your code is integer intensive.
- 64-bit data types. You don't need to port to 64-bit to gain access to 64-bit data types. For example, long long and int64_t are 64 bit and can be used by 32-bit applications.
- Faster code. You should not port to 64-bit if your code is performance sensitive and highly tuned for 32-bit.
The increased size of 64-bit pointers and long can cause increased cache pressure, as well as increased disk, memory, and network usage, which can lead to application performance degradation.

64-Bit Clean

Once you determine that an application does need to be 64 bit, then you should make your code "64-bit clean." The 64-bit C data model used by Mac OS X (and all modern UNIX derivatives) is commonly referred to as "LP64." In the LP64 data model, ints are 32 bit, while longs and pointers are 64 bit. The 32-bit data model is referred to as "ILP32," and ints, longs, and pointers are all 32 bit. This difference in the size of long and pointer between ILP32 and LP64 can cause truncation issues in code that assumes the same width as int. Many of these 64-bit porting bugs can be detected by using the -Wall -Wformat -Wmissing-prototypes -Wconversion -Wsign-compare -Wpointer options with GCC. (For more information on general 64-bit porting issues, refer to my article "Moving to 64-Bits," C/C++ Users Journal, June 2005.)

However, there is a 64-bit caveat: Support for 64-bit programming is not available throughout the entire OS X API for 64-bit computing on OS X Tiger. For example, application frameworks such as Cocoa and Carbon are not yet available for 64-bit development. This means you cannot simply recompile 32-bit GUI apps as 64 bit on OS X; only command-line apps can be recompiled as 64 bit. However, this doesn't mean GUI applications cannot take advantage of 64-bit computing. In the rest of this article, I examine how you work around this issue by porting an example 32-bit OS X GUI application to 64-bit.

The Demo Application

The 32-bit demo application that I 64-bit enable here is a simple "array lookup" application. Users enter an index of the array, and the application returns the array value at that index; see Figure 1. I want to migrate this application to 64 bit to take advantage of arrays greater than 4 GB.
The GUI in this example is written in Qt 4, an open-source C++ application framework that makes it straightforward to write cross-platform native GUIs (Carbon on OS X). At Absoft (where I work), all of our cross-platform developer tools are written in Qt for easy maintenance, and native speed on all of our supported platforms (Windows, Linux, and OS X). If your application is not Qt based and uses native OS X APIs, the strategy I present here still applies.

The Methodology

To convert the 32-bit demo application to 64 bit, I split the 32-bit application into two parts to work around the limitation that only command-line apps can be 64 bit on OS X:

- A 64-bit command-line server that does the necessary 64-bit operations such as array allocation and management.
- A 32-bit GUI that displays results and interfaces with users. The existing GUI is refactored to launch and communicate with the server.

This is the same strategy we used at Absoft with our 64-bit Fx2 debugger on OS X Tiger. The debugger is a 32-bit UI that communicates with a 64-bit back end. Refactoring the application into a 64-bit executable and 32-bit GUI is the most difficult task for most GUI applications. Once you have identified a strategy for 64-bit enabling of the application, you must decide on the communication method between the 64-bit server and 32-bit GUI client. There are several mechanisms you can use for communication:

- Communicate using message passing between STDIN and STDOUT of the 64-bit application.
- Use UNIX Domain sockets for same-host communication.
- Use TCP/IP client/server mechanisms.
- Use shared memory or other IPC mechanisms.

The method you select depends on the application. The implementation I present here is based on UNIX Domain sockets. UNIX Domain sockets are lightweight, high-performance sockets that enable communication between processes on the same host. If you are familiar with standard TCP sockets, you will find UNIX domain sockets easy to master.
UNIX Domain sockets also assist in future proofing your code by enabling an easy upgrade path to more heavyweight TCP sockets. For example, a future version of your application could have the server run on a PowerPC-based Mac, and the GUI client on the Intel-based Mac.

Creating the Server

The server handles allocating the array so you can access more than 4 GB of memory. It also provides an interface that a client can use to look up values from the array. This server can be tested independently of the GUI, letting you hammer out the client-server interaction before refactoring the GUI.

Use fixed-width datatypes for sharing between ILP32 and LP64. Listing One (server.c) is the server source code. In lines 16-18 of Listing One, the code uses fixed-width datatypes such as uint64_t instead of unsigned long long. It is a good practice to use fixed-width datatypes when sharing data over a socket, or sharing data on disk between ILP32 and LP64. This guarantees that the size of the data does not change while communicating between the two different data models. It also future proofs your code against changes in the width of fundamental datatypes and saves you headaches in the future. These fixed-width datatypes were introduced by C99, and are located in the header file <stdint.h>. While this C99 feature is not technically part of the C++ Standard, it is a feature supported by most C++ compilers (such as Absoft 10.0 a++ and GNU g++).

Use the __LP64__ macro to conditionally compile 64-bit-specific code. When maintaining a single code base for 32- and 64-bit code, you may want to conditionally compile the code depending on whether it is 64 bit or 32 bit. In this case, I want the defined ARRAY_SIZE on line 18 to be larger when compiled as 64-bit to take advantage of larger memory. Listing Two shows the __LP64__ macro to use on OS X.

In UNIX Domain sockets, a pathname in the filesystem ("/tmp/foo," for instance) is used as the address for the client and server to communicate.
This filename is not a regular filename that you can read from or write to; your program must associate this filename with a socket in order to perform communication. You can identify this special socket using the UNIX command ls -laF on the file; you will see a "=" appended to the filename indicating it is a socket:

% ls -laF /tmp/sock
srwxr-xr-x 1 rwm wheel 0 Oct 29 21:51 /tmp/sock=

Returning to the server code in Listing One, the server must be prepared to accept connections, which is done via the socket, bind, and listen calls. On line 26 of Listing One, the socket call creates an endpoint for communication, returning an unnamed socket. The socket call takes three arguments:

- The first argument is the family type. In this case, I use AF_LOCAL to specify the UNIX Domain family.
- The second argument of SOCK_STREAM type provides sequenced, reliable, two-way connection-based bytestreams for this socket.
- The final argument selects the protocol for the family. In this case, zero is the default.

In lines 30-33 of Listing One, I set up the sockaddr_un structure with the filename to use. Note that the SOCK_ADDR filename is defined in the absoft.h header file (Listing Two) as a UNIX pathname "/tmp/sock." The filename is arbitrary, but must be defined the same in both the client and server, and must be an absolute pathname. Be sure to delete this file on line 35, as it may have been left over from a previous instance, to ensure that the bind call succeeds. Next, on line 37, I bind the unnamed socket previously created with the name I just configured. Finally, on line 42, I use the listen call to begin accepting connections on this connection. On line 46, I sit in a loop and wait to accept connections from the client. Once you have received a connection, you read in the array index the user selected on line 54, and return the array value on line 64. Note the use of the readn and writen functions.
Regular read/write calls do not guarantee that all the bytes requested will be read/written in one call. Wrapper functions are used to ensure all bytes are read/written as expected (see util.c, available electronically, "Resource Center," page 6).

Creating the Client

To test the server, create a C client that connects to the server, requests an array index, and fetches the result. You can use this client to test the server interaction before having to refactor the GUI. The client uses the socket and connect calls to talk to the server; see Listing Three for the implementation of the client lookupValue function. The client code should be easy to follow because it is similar to the server but uses the connect system call to connect to the already existing server socket. You may wonder why the server and client were not written in C++. The main reason is portability. C socket implementations are portable to a variety of platforms without the need for third-party libraries or a roll-your-own implementation. If you do need to code the client/server in C++, Qt provides a QSocket class that you can extend to support UNIX Domain sockets.

Refactoring the GUI

At this point, you have a server that allocates the array, and a client that can call the server and fetch values from the server. It is now time to tackle the messy part: refactoring the GUI. You must identify everywhere the GUI currently manipulates or queries the array directly, and direct it to use the client function call instead. Luckily, only one method, Viewer::lookupArray() in line 52 of Viewer.cpp (available electronically), is used to look up values in the array. This method is modified on line 54 to call the client lookupValue function in a thread. To leave the original behavior intact, wrap the new functionality in a DIVORCE_UI define statement so you can conditionally compile in changes. To simplify the code, I made all network calls blocking.
You can't issue a blocking call from the UI thread in Qt (and most GUI frameworks) without making the UI unresponsive to users. Therefore, I issue the blocking call to the server inside a thread, and have the thread alert the UI when the blocking network communication has completed. See the FetchDataThread.cpp class (Listing Four) for the implementation of my thread wrapper to the fetchData function. The run() method in Listing Four calls the blocking lookupValue function defined in Listing Three. The method locks a mutex around critical data to ensure thread safety. In line 27 of Viewer.cpp, I use the Qt emit keyword to emit a signal containing the result received from the server. The GUI receives this by connecting a "slot" in Qt parlance to the "signal" from the FetchDataThread thread (see lines 40-43 in Viewer.cpp). The end result is the showResult method in Viewer.cpp. It is called to display the results from the server and enable the Lookup button in the application.

Starting and Stopping the Server

The final piece of the puzzle is to have the GUI automatically start the 64-bit server to make the split appear transparent. The main() function in Viewer.cpp uses the Qt class QProcess to launch the server executable on lines 83-88, and shuts the server down on lines 93-97 before the application exits.

Creating a Universal Binary

You may want to ship 32-bit and 64-bit servers so your application can run on a wide variety of Macintosh hardware. Instead of shipping multiple versions of the application, you can create a Universal Binary (also called a "Fat Binary") that lets you ship one server binary that is both 32 bit and 64 bit. A Universal Binary automatically selects the correct code, depending on the user's system, without additional coding or user intervention. It is straightforward to create a Universal Binary using Xcode, or using the lipo tool shipped with OS X. Lipo "glues" your 32-bit and 64-bit applications into one binary.
Listing Five is an example makefile that creates a Universal Binary for the server presented here. Use the UNIX file command to examine the resulting binary:

% file server
server: Mach-O fat file with 2 architectures
server (for architecture ppc): Mach-O executable ppc
server (for architecture ppc64): Mach-O 64-bit executable ppc64

Building and Running the Application

To build the application after you have installed Qt, enter:

% qmake ; make ; make -f Makefile.server

at the command line. The qmake utility (included with Qt) creates a Makefile for building the GUI from the Viewer.pro file in Listing Six. The Makefile.server builds the server as a Universal Binary. Once the build has completed, you can execute the 64-bit enabled Viewer application by running it from the command line:

% ./Viewer.app/Contents/MacOS/Viewer

Conclusion

With its UNIX heritage and innovative features such as Universal Binaries, OS X is a great 64-bit platform to develop 64-bit applications on. Migrating command-line applications to 64-bit is straightforward, and the strategy I've outlined here will help you in 64-bit enabling your GUI applications to harness the full power of Mac OS X Tiger.
DDJ

Listing One

#include <stdio.h>
#include <stdlib.h>
#include <errno.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <inttypes.h>
#include <unistd.h>
#include "absoft.h"

int main(int argc, char *argv[])
{
    int listenfd,      /* listen socket descriptor */
        clientfd,      /* socket descriptor from connect */
        i;
    int32_t x;         /* array index from the client */
    uint64_t result;   /* result sent to client */
    static uint64_t bigarray_[ARRAY_SIZE];
    socklen_t clientlen;
    struct sockaddr_un server, client;

    /* Initialize array with random values */
    for ( i = 0 ; i < ARRAY_SIZE ; i++ ) {
        bigarray_[i] = 10000000000000000000ULL + i;
    }
    /* AF_LOCAL is Unix Domain Socket */
    if ((listenfd = socket(AF_LOCAL, SOCK_STREAM, 0)) < 0) {
        perror("socket");
        exit(1);
    }
    /* Setup socket info */
    bzero((char *) &server, sizeof(server));
    server.sun_family = AF_LOCAL;
    strncpy(server.sun_path, SOCK_ADDR, sizeof(server.sun_path));
    /* Unlink file to make sure bind succeeds. Ignore error */
    unlink(SOCK_ADDR);
    /* Bind to socket */
    if (bind(listenfd, (struct sockaddr *)&server, sizeof(server)) < 0 ) {
        perror("bind");
        exit(2);
    }
    /* Listen on socket */
    if (listen(listenfd, LISTENQ) < 0 ) {
        perror("listen");
        exit(3);
    }
    for(;;) {
        printf("Waiting for a connection...\n");
        clientlen = sizeof(client);
        if ((clientfd = accept(listenfd, (struct sockaddr *)&client, &clientlen)) < 0) {
            perror("accept");
            exit(4);
        }
        /* Read the array index UI has requested */
        readn(clientfd, &x, sizeof(x));
        printf("Read in request for array element %d\n", x);
        if ( x > ARRAY_SIZE || x < 0 ) {
            /* Error */
            result = 0;
        } else {
            result = bigarray_[x];
        }
        /* Print specifier for unsigned 64-bit integer */
        printf("Server sending back to client: %llu\n", result);
        if (writen(clientfd, &result, sizeof(result)) < 0 ) {
            exit(5);
        }
        close(clientfd);
    }
    exit(0);
}

Listing Two

 1 #ifndef ABSOFT_H
 2 #define ABSOFT_H
 3 #include <stdint.h>
 4 #include <stdlib.h>
 5 #define SOCK_ADDR "/tmp/sock"
 6 #define LISTENQ 5
 7 /* When compiled as 64-bit, use larger array
 8  * (for demo the size is just 1 larger then 32-bit)
 9  */
10 #ifdef __LP64__
11 #define ARRAY_SIZE 1001
12 #else
13 #define ARRAY_SIZE 1000
14 #endif /* __LP64__ */
15 /* Protos */
16 ssize_t readn(int fd, void *vptr, size_t n);
17 ssize_t writen(int fd, const void *vptr, size_t n);
18 uint64_t lookupValue(int32_t x);
19 #endif

Listing Three

 1 #include <stdio.h>
 2 #include <stdlib.h>
 3 #include <errno.h>
 4 #include <string.h>
 5 #include <sys/types.h>
 6 #include <sys/socket.h>
 7 #include <sys/un.h>
 8 #include <sys/uio.h>
 9 #include <sys/fcntl.h>
10 #include <inttypes.h>
11 #include <stdint.h>
12 #include <unistd.h>
13 #include "absoft.h"
14 /* Lookup array value at index x
15  * by connecting to unix domain socket
16  */
17 uint64_t lookupValue(int32_t x)
18 {
19     int s;
20     struct sockaddr_un remote;
21     uint64_t result;
22     if ((s = socket(AF_LOCAL, SOCK_STREAM, 0)) < 0 ) {
23         perror("socket");
24         return(0);
25     }
26     bzero(&remote, sizeof(remote));
27     printf("Trying to connect...\n");
28     remote.sun_family = AF_LOCAL;
29     strcpy(remote.sun_path, SOCK_ADDR);
30     if (connect(s, (struct sockaddr *)&remote, sizeof(remote)) < 0) {
31         perror("connect");
32         return(0);
33     }
34     printf("Connected and sending %d\n", x);
35     if (writen(s, &x, sizeof(x)) < 0 ) {
36         perror("send");
37         return(0);
38     }
39     readn(s, &result, sizeof(result));
40     printf("Client received result from server = %llu\n", result);
41     close(s);
42     return result;
43 }

Listing Four

 1 #include "FetchDataThread.h"
 2 FetchDataThread::FetchDataThread(QObject *parent)
 3     : QThread(parent)
 4 {
 5 }
 6 FetchDataThread::~FetchDataThread()
 7 {
 8     cond.wakeOne();
 9     wait();
10 }
11 void FetchDataThread::fetchData(const int32_t x)
12 {
13     // Hold mutex until function exits
14     QMutexLocker locker(&mutex);
15     this->x = x;
16     if (!isRunning())
17         start();
18     else
19         cond.wakeOne();
20 }
21 void FetchDataThread::run()
22 {
23     QMutexLocker locker(&mutex);
24     int32_t xv = x;
25     // This is the call that blocks
26     uint64_t result = lookupValue(xv);
27     /* Minimal error checking. Returns 0 if error */
28     if ( result == 0 ) {
29         emit errorOccured("Error looking up value");
30         return;
31     } else {
32         QString str;
33         emit fetchedData( str.setNum(result) );
34     }
35 }

Listing Five

CFLAGS= -Wall -Wformat -Wmissing-prototypes -Wconversion -Wsign-compare -Wpointer-arith

all: server

server32: util.c server.c
	gcc $(CFLAGS) -m32 util.c server.c -o server32

server64: util.c server.c
	gcc $(CFLAGS) -m64 util.c server.c -o server64

server: server32 server64
	lipo -create server32 server64 -output server

clean:
	rm -rf server32 server64 server

Listing Six

# Use The Qt utility "qmake" to build
# a Makefile from this file
TEMPLATE = app
CONFIG += qt release
TARGET +=
DEPENDPATH += .
INCLUDEPATH += .
DEFINES += DIVORCE_UI
HEADERS += Viewer.h
HEADERS += absoft.h
HEADERS += FetchDataThread.h
SOURCES += client.c
SOURCES += util.c
SOURCES += Viewer.cpp
SOURCES += FetchDataThread.cpp
A more expressive way to return error or success values via the type system. Instead of throwing exceptions, return an error state of any type, even if it conflicts with the type of the success return value. The higher kinded type of the Either will be used to determine if the value contains an error or a success value.

Either is an interface implemented by the two concrete types of Left and Right. By convention, Left is used to encapsulate an error value while Right is used to encapsulate a success value. Left and Right are both projectable to Option types so you can chain and compose operations together based on success or failure values in the typical monadic flow style via Option#map, Option#flatMap, etc.

import 'package:either/either.dart';

main() {
  var left = new Left<String, String>("left");
  var right = new Right<String, String>("right");

  var leftIsLeft = left.isLeft();
  var leftIsRight = left.isRight();
  var leftLeftProjection = left.left();
  var leftRightProjection = left.right();
  var leftSwapped = left.swap();
  var leftFold = left.fold(
    (v) => "folded ${v}",
    (v) => "never executed"
  );

  var rightIsLeft = right.isLeft();
  var rightIsRight = right.isRight();
  var rightLeftProjection = right.left();
  var rightRightProjection = right.right();
  var rightSwapped = right.swap();
  var rightFold = right.fold(
    (v) => "never executed",
    (v) => "folded ${v}"
  );
}

This is the Either interface implemented by both Left and Right:

part of option;

import 'package:option/option.dart';

abstract class Either<L, R> {
  /**
   * Returns true if this `Either` type is `Left`, false otherwise
   *
   * @return {Boolean}
   */
  bool isLeft();

  /**
   * Returns true if this `Either` type is `Right`, false otherwise
   *
   * @return {Boolean}
   */
  bool isRight();

  /**
   * If this `Either` type is `Left` then `leftCase` will be called with the
   * inner value of `Left` and the result of `leftCase` will be returned. The
   * same applies for `rightCase` in the event that this `Either` type
   * is `Right`
   *
   * @param {dynamic(L)} leftCase - The computation to run on `Left` type
   * @param {dynamic(R)} rightCase - The computation to run on `Right` type
   * @return {dynamic} - The result of the computation that was run
   */
  dynamic fold(dynamic leftCase(L left), dynamic rightCase(R right));

  /**
   * Returns an `Option` projection of the `Left` value of this `Either` type.
   * So if this is type `Left` it returns an instance of `Some` but if this is
   * a `Right` type this returns an instance of `None`
   *
   * @return {Option<L>} - The optional left projection
   */
  Option<L> left();

  /**
   * Returns an `Option` projection of the `Right` value of this `Either` type.
   * So if this is type `Right` it returns an instance of `Some` but if this is
   * a `Left` type then this returns an instance of `None`
   *
   * @return {Option<R>} - The optional right projection
   */
  Option<R> right();

  /**
   * When run on a `Left` type this returns a `Right` with the same inner value.
   * When run on a `Right` type this returns a `Left` with the same inner value.
   *
   * @return {Either<R, L>} - The swapped `Either` type
   */
  Either<R, L> swap();
}

Add this to your package's pubspec.yaml file:

dependencies:
  either: ^0.1.8

You can install packages from the command line with pub:

$ pub get

Alternatively, your editor might support pub get. Check the docs for your editor to learn more. Now in your Dart code, you can use:
https://pub.dartlang.org/packages/either
We've just started I/O streams and I need some help with my project. Here is the lab:

"Write a program that gives and takes advice on program writing. The program starts by writing a piece of advice to the screen and asking the user to type his or her advice, pressing the Return key two times when finished. Your program can then test to see that it has reached the end of the input by checking to see when it reads two consecutive occurrences of the character '\n'."

My code so far:

#include <iostream>
#include <fstream>
using namespace std;

int main() {
    char symbol;
    ifstream input;
    ofstream output;

    input.open("input2.txt");
    while (!(input.eof())) {
        input.get(symbol);
        cout << symbol;
    }
    cout << endl;
    cout << endl;
    cout << "Enter your advice.\n"
         << "Press enter twice when finished.\n";
    do {
        cin.get(symbol);
    } while (symbol != '\n');
    input.close();
    system("pause");
}

My problem is how to get the input sent to the file, and how to check if Return is hit twice.
https://www.daniweb.com/programming/software-development/threads/275028/i-o-stream-help-please
Device and Network Interfaces - UFS file system

#include <sys/param.h>
#include <sys/types.h>
#include <sys/fs/ufs_fs.h>
#include <sys/fs/ufs_inode.h>

UFS is an optional disk-based file system for the Oracle Solaris environment. The UFS file system is hierarchical, starting with its root directory (/) and continuing downward through a number of directories. The root of a UFS file system is inode 2. A UFS file system's root contents replace the contents of the directory upon which it is mounted. Subsequent sections of this manpage provide details of the UFS file system.

UFS uses state flags to identify the state of the file system. fs_state is FSOKAY - fs_time. fs_time is the timestamp that indicates when the last system write occurred. fs_state is updated whenever fs_clean changes. Some fs_clean values are:

FSCLEAN
  Indicates an undamaged, cleanly unmounted file system.

FSACTIVE
  Indicates a mounted file system that has modified data in memory. A mounted file system with this state flag indicates that user data or metadata would be lost if power to the system is interrupted.

FSSTABLE
  Indicates an idle mounted file system. A mounted file system with this state flag indicates that neither user data nor metadata would be lost if power to the system is interrupted.

FSBAD
  Indicates that this file system contains inconsistent file system data.

FSLOG
  Indicates that the file system has logging enabled. A file system with this flag set is either mounted or unmounted.

If a file system has logging enabled, the only flags that it can have are FSLOG or FSBAD. A non-logging file system can have FSACTIVE, FSSTABLE, or FSCLEAN. It is not necessary to run the fsck command on unmounted file systems with a state of FSCLEAN, FSSTABLE, or FSLOG. mount(2) returns ENOSPC if an attempt is made to mount a UFS file system with a state of FSACTIVE for read/write access.
As an additional safeguard, fs_clean should be trusted only if fs_state contains a value equal to FSOKAY - fs_time, where FSOKAY is a constant integer defined in the /usr/include/sys/fs/ufs_fs.h file. Otherwise, fs_clean is treated as though it contains the value of FSACTIVE. Extended Fundamental Types (EFT) provide 32-bit user ID (UID), group ID (GID), and device numbers. If a UID or GID contains an extended value, the short variable (ic_suid, ic_sgid) contains the value 65535 and the corresponding UID or GID is in ic_uid or ic_gid. Because numbers for block and character devices are stored in the first direct block pointer of the inode (ic_db[0]) and the disk block addresses are already 32 bit values, no special encoding exists for device numbers (unlike UID or GID fields). A multiterabyte file system enables creation of a UFS file system up to approximately 16 terabytes of usable space, minus approximately one percent overhead. A sparse file can have a logical size of one terabyte. However, the actual amount of data that can be stored in a file is approximately one percent less than one terabyte because of file system overhead. On-disk format changes for a multiterabyte UFS file system include: The magic number in the superblock changes from FS_MAGIC to MTB_UFS_MAGIC. For more information, see the /usr/include/sys/fs/ufs_fs file. The fs_logbno unit is a sector for UFS that is less than 1 terabyte in size and fragments for a multiterabyte UFS file system. UFS logging bundles the multiple metadata changes that comprise a complete UFS operation into a transaction. Sets of transactions are recorded in an on-disk log and are applied to the actual UFS file system's metadata. UFS logging provides two advantages: A file system that is consistent with the transaction log eliminates the need to run fsck after a system crash or an unclean shutdown. UFS logging often provides a significant performance improvement. 
This is because a file system with logging enabled converts multiple updates to the same data into single updates, thereby reducing the number of overhead disk operations.

The UFS log is allocated from free blocks on the file system and is sized at approximately 1 Mbyte per 1 Gbyte of file system, up to 256 Mbytes. The log size may be larger (up to a maximum of 512 Mbytes), depending upon the number of cylinder groups present in the file system. The log is continually flushed as it fills up. The log is also flushed when the file system is unmounted or as a result of a lockfs(1M) command.

You can mount a UFS file system in various ways using syntax similar to the following:

Use mount from the command line:

# mount -F ufs /dev/dsk/c0t0d0s7 /export/home

Include an entry in the /etc/vfstab file to mount the file system at boot time:

/dev/dsk/c0t0d0s7 /dev/rdsk/c0t0d0s7 /export/home ufs 2 yes -

For more information on mounting UFS file systems, see mount_ufs(1M). See attributes(5) for a description of the available attributes.

SEE ALSO: df(1M), fsck(1M), fsck_ufs(1M), fstyp(1M), lockfs(1M), mkfs_ufs(1M), newfs(1M), ufsdump(1M), ufsrestore(1M), tunefs(1M), mount(2), attributes(5)

For information about internal UFS structures, see newfs(1M) and mkfs_ufs(1M). For information about the ufsdump and ufsrestore commands, see ufsdump(1M), ufsrestore(1M), and /usr/include/protocols/dumprestore.h.

If you experience difficulty in allocating space on the ufs filesystem, it may be due to fragmentation. Fragmentation can occur when you do not have sufficient free blocks to satisfy an allocation request even though df(1M) indicates that enough free space is available. To correct a fragmentation problem, run ufsdump(1M) and ufsrestore(1M) on the ufs filesystem.
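The superblock safeguard described earlier (trust fs_clean only when fs_state equals FSOKAY - fs_time, otherwise treat the file system as FSACTIVE) can be sketched as a small function. This is an illustration only; FSOKAY and the state constants below are placeholders, not the real values from /usr/include/sys/fs/ufs_fs.h:

```python
# Illustrative sketch of the fs_clean trust rule described above.
# The constants are placeholders; real code reads them from ufs_fs.h
# and the on-disk superblock.
FSOKAY = 0x7C269D38  # placeholder magic value
FSACTIVE, FSCLEAN, FSSTABLE, FSBAD, FSLOG = range(5)

def effective_clean_state(fs_state, fs_time, fs_clean):
    """Trust fs_clean only if fs_state == FSOKAY - fs_time;
    otherwise fall back to treating the file system as FSACTIVE."""
    if fs_state == FSOKAY - fs_time:
        return fs_clean
    return FSACTIVE

# A consistent superblock: fs_clean is trusted.
print(effective_clean_state(FSOKAY - 1000, 1000, FSCLEAN) == FSCLEAN)
# A stale fs_state: fs_clean is ignored.
print(effective_clean_state(FSOKAY - 999, 1000, FSCLEAN) == FSACTIVE)
```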
http://docs.oracle.com/cd/E26502_01/html/E29044/ufs-7fs.html
Hi,

I'm trying to fix Scintilla, with the aim of creating a new release of this module. The problem is that Scintilla defines and uses its own PERLWIN32GUI_USERDATA structure. As this structure has changed over time in Win32-GUI, it creates all kinds of problems with this module (Scintilla subclasses Win32-GUI, and uses the PERLWIN32GUI_USERDATA structure passed to it by Win32-GUI). The most extreme behaviour is with perl 5.8.x where Scintilla does not process events - there are also cases where Scintilla crashes.

I've managed to get a build of Scintilla working, but I've got several issues. Instead of defining its own version of PERLWIN32GUI_USERDATA it now picks it up from GUI.h - which I think is the best solution (?).

1) The original Scintilla:

#include "EXTERN.h"
#include "perl.h"
#include "XSUB.h"
#include <windows.h>
#include "./include/Scintilla.h"

changed to:

#include "../Win32-GUI/GUI.h"
#include "./include/Scintilla.h"

However, when compiling with VC I get two errors:

*** Using Preserved Perl context.
../Win32-GUI/GUI.h(439) : error C2143: syntax error : missing ')' before '='
../Win32-GUI/GUI.h(439) : error C2072: 'DoHook' : initialization of a function
../Win32-GUI/GUI.h(439) : error C2059: syntax error : ')'
../Win32-GUI/GUI.h(809) : error C2059: syntax error : 'string'
NMAKE : fatal error U1077: 'C:\WINDOWS\system32\cmd.exe' : return code '0x2' Stop.

This is with the line:

void DoHook(NOTXSPROC LPPERLWIN32GUI_USERDATA perlud, UINT uMsg, WPARAM wParam, LPARAM lParam, int* PerlResult, int notify = 0);

Removing " = 0" fixes that issue. The second error:

*** Using Preserved Perl context.
../Win32-GUI/GUI.h(809) : error C2059: syntax error : 'string'
NMAKE : fatal error U1077: 'C:\WINDOWS\system32\cmd.exe' : return code '0x2' Stop.

This is with:

extern "C" BOOL WINAPI GetWindowInfo( HWND hwnd, PWINDOWINFO pwi );

Commenting out this code removes the error.
In both cases I don't understand why they cause errors?

2) I'm confused about the Perl context and how it's used within Win32-GUI - and how Scintilla should handle things. I've got it working by doing:

dTHX; /* fetch context */

in the Scintilla event handlers, but had to remove NOTXSCALL/NOTXSPROC in some functions where the context isn't used/needed.

If it would help, I can check in what I've got, with the idea of fixing things once I've got my head around these issues?

Cheers, jez.

Last night I committed a couple of bug fixes:

- GUI.h: change order of instructions in PERL_FREE macro to prevent crashes (Trackers 1243378 and 1248578)

I hope this fix will eliminate most of the crashes/warnings that happen on program termination. If you have any example code that still crashes or warns when your script ends, then I'd be interested in seeing it. Details of the exact problem below.

- GUI.xs: change logic in all message loops (Dialog, DoEvents, DoModal) to prevent memory leak (Tracker: 1201190)

All 3 message loops were leaking memory whenever one of the windows functions TranslateMDISysAccel, TranslateAccelerator or IsDialogMessage processed and dispatched a message.

- Listbox.xs: add documentation to differentiate between SetCurSel and SetSel (Tracker 1177898)

Crashes on Exit - explanation of the problem and the (simple) fix
------------------------------------------------------------------

Tracker 1243378 gave me some code that reliably crashed on exit in my environment (AS Perl 5.8.7, Win98, MSVC 6). The code was approximately:

[1] my $mw = Win32::GUI::Window->new( ... );
[2] my $re = $mw->AddRichEdit( ... );
[3] $re->Change( -onMouseMove => sub {my $text = $re->Text() } );

Some notation:
- The Win32::GUI::Window object will be referred to as WO
- The Win32::GUI::RichEdit object will be referred to as RO
- The anonymous sub in line [3] will be referred to as AS

Here's what was happening:

Line [1] creates WO, with ref count 1 (from $mw)
Line [2] creates RO, with ref count 2 (from $re and WO)
Line [3] creates AS, with ref count 1 (from storing in hvEvents hash in perlud for RO)
Line [3] increases ref count of RO to 3 (from closure on $re)

Now, when the script finishes the following happens:

$mw and $re go out of scope. WO ref count goes to 0, RO ref count goes to 2.

DESTROY gets called on WO, as its ref count is 0:
- all child object refs are removed from WO, leading to RO ref count going to 1 (from the closure referenced in its own hvEvents hash)
- DestroyWindow() is called on WO's window handle. DestroyWindow() sends WM_DESTROY messages to each of the top-level window's children, before sending WM_DESTROY to the top-level window itself.
- WM_DESTROY received by RO calls PERLUD_FREE, which among other things clears the hvEvents hash. This reduces the ref count of RO to 0, as AS ref count goes to 0, resulting in RO's DESTROY method being called before PERLUD_FREE finishes.
- RO's DESTROY method removes any child window references from RO (none in this case), then calls DestroyWindow() on RO's window handle, resulting in RO receiving WM_DESTROY again, and calling PERLUD_FREE a second time. PERLUD_FREE gets a pointer to perlud, and tries to free stuff that may already have been freed by the first call, and frees stuff that the first call has not yet freed, but will try to once control returns there.

The fix was simply to make PERLUD_FREE set the pointer to perlud to NULL before freeing the memory used, so that the second PERLUD_FREE sees that perlud has already been tidied up and does nothing.

Regards, Rob.

Hi, I've just committed a change to the typemap.
It now uses one hash lookup, rather than two, and as a result is faster. As this typemap is used for almost all objects and method calls, there might be some gains for some calls (depending on what the method actually does). The benchmark below is for a dummy method that doesn't do anything: Benchmark: timing 1000000 iterations of NewTypeMap, OldTypeMap... NewTypeMap: 29 wallclock secs (29.64 usr + 0.06 sys = 29.70 CPU) @ 33666.63/s ( n=1000000) OldTypeMap: 38 wallclock secs (38.22 usr + 0.02 sys = 38.23 CPU) @ 26154.05/s (n=1000000) I've tested this change under mingw and vc, and it seems to work fine. If there are any issues, drop me a mail. Cheers, jez.
http://sourceforge.net/p/perl-win32-gui/mailman/perl-win32-gui-hackers/?viewmonth=200512&viewday=3
Testing the untestable

To test your code in isolation, mocking frameworks like RhinoMocks or Moq are an important tool in your toolbox. By using an isolation (mocking) tool, you can replace an existing object with a 'fake' implementation. Unfortunately these tools cannot help you solve everything. If your system is not designed with testing in mind, it can be very hard or even impossible to replace an existing implementation with a mock object. Take DateTime.Now, for example. This is a static property on a static class; how are you going to replace this? Let's have a look at Microsoft Fakes, the isolation framework that Microsoft introduced in Visual Studio 2012.

Microsoft Fakes helps you isolate the code you are testing by replacing other parts of the application with stubs or shims. A stub replaces another class with a small substitute that implements the same interface. To use stubs, you have to design your application so that each component depends only on interfaces, and not on other components. So this is similar to the functionality that most other mocking frameworks offer. A shim, by contrast, modifies the compiled code at run time to intercept a specified method call and run code that your test provides. (It is using the Moles isolation framework behind the scenes.)

So what does this mean? By using shims we can isolate any kind of functionality inside our .NET application, including non-virtual and static methods in sealed types. No longer are we limited to interfaces or virtual methods to replace dependencies.

Remark: Using shims has a performance impact on your tests because it will rewrite your code at run time.

How to use Microsoft Fakes?

Let's walk through an example. In this sample I want to test the SmtpClient, preferably without sending real emails. Below you'll find the MailService class I want to test. This class has a direct dependency on the SmtpClient, making it hard to test. The SmtpClient class lives in the System assembly.
To create a shim for this class, go to the Test Project, right-click on the System assembly in the references for the project, and choose "Add Fakes Assembly". This creates a System.fakes file within the project. It also creates a Fakes folder containing a .fakes file for every fakes assembly we created. Inside the System.fakes file, we can specify for which types a shim should be created (for some types, this is done by default).

Let's now write our test. Start by creating the test method and specifying the 'arrange' part. Now we continue by adding the 'act' and 'assert' part of our tests. Because we want to use shims, we'll have to wrap our calls into a special ShimsContext. The Microsoft Fakes framework created a ShimSmtpClient in the System.Net.Mail.Fakes namespace. On this ShimSmtpClient, we can overwrite the SendMailMessage method of all SmtpClient implementations through the Action<SmtpClient,MailMessage> property available on the AllInstances property.
http://bartwullems.blogspot.com/2013/05/visual-studio-2012-testing-untestable.html
CC-MAIN-2017-17
refinedweb
493
63.39
Check whether given number is even or odd

Upasana | May 24, 2019

We can easily find if a number is even or odd in Java using the below method. Here we are checking if the remainder operation returns a non-zero result when divided by 2.

public static boolean isOdd(int i) {
    return i % 2 != 0;
}

Generally, the better (and faster) approach is to use the AND operator, as shown in the below code snippet:

public static boolean isOdd(int i) {
    return (i & 1) != 0;
}

Here we are checking if the least significant bit (LSB) is zero or not. If it is zero then the number is even, else the number is odd.

This may not work

As suggested in Puzzle 1: Oddity in the Java Puzzlers book, the below code will not work for all negative numbers.

public static boolean isOdd(int i) {
    return i % 2 == 1;
}

Reference

Java Puzzlers: Traps, Pitfalls, and Corner Cases by Joshua Bloch, Neal Gafter. Puzzle 1: Oddity
CC-MAIN-2022-21
refinedweb
214
56.29
In looking for ways to make couplets of items I've discovered four brilliant methods in the itertools module. In making couplets its important to understand how the pairing should be defined. Once an item is chosen, can it be chosen again as the next element in the group? If a set is defined as {0, 1, 2, 3}, can we define a group as (0, 0)? This is referred to as replacement. In making these groups, does the order of the elements matter? is (0, 1) unique to (1, 0)? If it doesn't matter than my set of groups should only include one of the pair, but if it does the set should include both. A good analogy to make is if you were choosing a leader for each pair, the leader being the first in the pair, then the order would matter— 0 is the leader in (0, 1) and vice versa—versus if you were just choosing teams where each member held the same status; it wouldn't matter in what order you defined the group they're all the same group. The itertools methods are the following from itertools import ( product, combinations, combinations_with_replacement, permutations) l = list(range(3)) npairs = 2 print(l) [0, 1, 2] list(permutations(l, npairs)) [(0, 1), (0, 2), (1, 0), (1, 2), (2, 0), (2, 1)] This is essentially the different ways to order a pair measured by the Binomial Coefficient list(combinations(l, npairs)) [(0, 1), (0, 2), (1, 2)] list(combinations_with_replacement(l, npairs)) [(0, 0), (0, 1), (0, 2), (1, 1), (1, 2), (2, 2)] product operates on multiple lists, returning pairings between items of different lists. 
To use on the same list (you probably shouldn't) you would specify the size of the group with the parameter keyword repeat IF used in this way one think of it as WITH replacement, where order DOES matter, so (0, 0) is acceptable, and (1,0) is unique to (0, 1) list(product(l, repeat=npairs)) [(0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (1, 2), (2, 0), (2, 1), (2, 2)] A better way to use product is with different lists shirts = ['hawaii', 'collared'] shoes = ['sneakers', 'slippers'] pants = ['jeans', 'shorts'] outfits = list(product(shirts, shoes, pants)) outfits [('hawaii', 'sneakers', 'jeans'), ('hawaii', 'sneakers', 'shorts'), ('hawaii', 'slippers', 'jeans'), ('hawaii', 'slippers', 'shorts'), ('collared', 'sneakers', 'jeans'), ('collared', 'sneakers', 'shorts'), ('collared', 'slippers', 'jeans'), ('collared', 'slippers', 'shorts')]
https://crosscompute.com/n/hS1rBUtOEOPuYYBwjFxDydAxWGMRXUZn/-/different-combinations
CC-MAIN-2019-13
refinedweb
404
53.07
#include <hallo.h>
* Guus Sliepen [Sun, Jan 04 2009, 10:45:23AM]:

> On Sat, Jan 03, 2009 at 09:57:33PM +0100, Eduard Bloch wrote:
>
> > PS: I plan to hack it a little bit and use the sysconf function on Debian
> > systems to determine the real number of CPU cores (#x) since pigz's
> > default value is 8 which is much more than home systems have nowadays,
> > and the performance isn't getting (much) better with a constant number
> > of idle threads, they just consume more memory.
>
> Although sysconf() can tell you the total number of cores and the number of
> cores that are online in your system, it does not tell you how many cores are
> available for your program. It's better to use sched_getaffinity() to get the
> set of CPU (cores) available to your program. And if you know better than the
> OS, use sched_setaffinity() to bind each thread to its own core.

Sounds like a plan, but I don't feel very comfortable doing that in the Debian package. Let me explain why:

- the sched_setaffinity method seems to be Linux specific
- though it might be beneficial to pinpoint each thread to the same CPU, I don't think (*assumption*) that the data resides long enough in the CPU cache anyway, so we wouldn't win much
- I have not used it before, but I used the sysconf method in portable apps (i.e. in cloop-utils); the sysconf method is also used in pbzip2
- it's hard to imagine environments with a big difference between count(cores) and count(available cores)
- the code change is minimal, see below

@Marc: please check the -s option, it's called -S in gzip (upper case). What's the reason for the different case?

Regards, Eduard.
--- pigz-2.1.4.orig/pigz.c
+++ pigz-2.1.4/pigz.c
@@ -2795,8 +2795,13 @@
 #ifdef NOTHREAD
     procs = 1;
 #else
+
+#ifdef _SC_NPROCESSORS_ONLN
+    procs = sysconf(_SC_NPROCESSORS_ONLN);
+#else
     procs = 8;
-#endif
+#endif /* _SC_NPROCESSORS_ONLN */
+#endif /* NOTHREAD */
     size = 131072UL;
     rsync = 0;          /* don't do rsync blocking */
     dict = 1;           /* initialize dictionary each thread */

--
"Well, a garbage collector indeed. It even collects the garbage from the sky."
(Heise troll forum, on Java in flight control systems)
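The distinction Guus draws between sysconf() and sched_getaffinity() can be demonstrated from Python's standard library as well. This is an illustration only; sched_getaffinity is Linux-specific, hence the fallback:

```python
import os

def online_cpus():
    # sysconf(_SC_NPROCESSORS_ONLN), the value the patch above uses; falls
    # back to os.cpu_count() on platforms without that sysconf name.
    if hasattr(os, "sysconf") and "SC_NPROCESSORS_ONLN" in os.sysconf_names:
        return os.sysconf("SC_NPROCESSORS_ONLN")
    return os.cpu_count() or 1

def available_cpus():
    # sched_getaffinity(): the CPUs this process may actually run on, which
    # can be fewer than the online count if the process was pinned
    # (e.g. with taskset). Linux-only; elsewhere fall back to online count.
    if hasattr(os, "sched_getaffinity"):
        return len(os.sched_getaffinity(0))
    return online_cpus()

print(online_cpus(), available_cpus())
```

On an unpinned process the two numbers match; under taskset or a restricted cpuset, available_cpus() is the one that reflects how many worker threads are worth starting.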
https://lists.debian.org/debian-devel/2009/01/msg00076.html
map::emplace_hint

Visual Studio 2013

Inserts an element constructed in place (no copy or move operations are performed), with a placement hint. No iterators or references are invalidated by this function. During emplacement, if an exception is thrown, the container's state is not modified.

The value_type of an element is a pair, so that the value of an element will be an ordered pair with the first component equal to the key value and the second component equal to the data value of the element.

// map_emplace.cpp
// compile with: /EHsc
#include <map>
#include <string>
#include <iostream>
using namespace std;

template <typename M>
void print(const M& m) {
    cout << m.size() << " elements: " << endl;
    for (const auto& p : m) {
        cout << "(" << p.first << "," << p.second << ") ";
    }
    cout << endl;
}

int main() {
    map<string, string> m1;

    // Emplace some test data
    m1.emplace("Anna", "Accounting");
    m1.emplace("Bob", "Accounting");
    m1.emplace("Carmine", "Engineering");
    cout << "map starting data: ";
    print(m1);
    cout << endl;

    // Emplace with hint
    // m1.end() should be the "next" element after this emplacement
    m1.emplace_hint(m1.end(), "Doug", "Engineering");
    cout << "map modified, now contains ";
    print(m1);
    cout << endl;
}
http://msdn.microsoft.com/en-us/library/windows/apps/dd998534
date_diff

When you use this function in a query, it determines the difference in dates between the two specified columns in the specified dataset. For more information on using query functions and operators in a REST API request, see Queries. For an end-to-end description of how to create a query, see Creating a Query.

The code block example below calculates the differences in dates between the entries in the Datetime and Datetime2 columns of the earthquake dataset, whose {DATASET_ID} is 90af668484394fa782cc103409cafe39. Note that this dataset does not actually have a second Datetime column.

{
  "version": 0.3,
  "dataset": "90af668484394fa782cc103409cafe39",
  "namespace": {
    "date_difference": {
      "source": ["datetime", "datetime2"],
      "apply": [{
        "fn": "date_diff",
        "type": "transform",
        "params": ["month"]
      }]
    }
  },
  "metrics": ["date_difference"]
}

When you submit the above request, the response includes an HTTP status code and a JSON response body. For more information on the HTTP status codes, see HTTP Status Codes. For more information on the elements in the JSON structure in the response body, see Query.
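If you assemble such request bodies programmatically, a small helper keeps the structure consistent. This is an illustrative sketch; the helper name is mine, and the field layout is copied from the example above (check it against the API reference before relying on it):

```python
import json

def date_diff_query(dataset_id, source_cols, unit="month"):
    """Build the request body shown above for a date_diff transform.
    Illustrative only; field names mirror the documented example."""
    return {
        "version": 0.3,
        "dataset": dataset_id,
        "namespace": {
            "date_difference": {
                "source": list(source_cols),
                "apply": [{"fn": "date_diff", "type": "transform",
                           "params": [unit]}],
            }
        },
        "metrics": ["date_difference"],
    }

body = date_diff_query("90af668484394fa782cc103409cafe39",
                       ["datetime", "datetime2"])
print(json.dumps(body, indent=2)[:60])
```

The resulting dict serializes to the same JSON as the documented example, with the unit ("month" here) supplied through params.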
https://developer.here.com/documentation/geovisualization/topics/query-rule-date-diff.html
Working with AJAX in MVC application

In this article we are going to focus on working with AJAX in an MVC application. We are using AJAX because it is asynchronous and can partially load the page, which lets us avoid reloading the complete page whenever the form is submitted to the server. AJAX requests data from the server in the background without reloading the page. The MVC Framework unobtrusive Ajax feature is based on jQuery.

Step 1

Create a new MVC Application with the Basic template and name it "MVCUnobtrusiveAjaxDemo". This will create a basic skeleton for our MVC application with the basic Controllers/Views/Models folders.

Step 2

Right-click the Models folder and add a class Emp with the following code.

public class Emp
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal Salary { get; set; }
    public Dept Department { get; set; }
}

public enum Dept
{
    IT, Sales, HR, Marketing, Admin
}

We have an Emp class and an enum Dept which is used in the Emp class to represent the department of each employee.

Step 3

Right-click the Controllers folder and add a new controller with the name "EmpController".
Write the following code in the EmpController class public class EmpController : Controller { private Emp[] EmpData = { new Emp {Id=1,Name = "Riya", Department = Dept.IT,Salary=9000}, new Emp {Id=2,Name = "Sam", Department = Dept.Sales,Salary=1000}, new Emp {Id=3,Name = "Mary", Department = Dept.HR,Salary=34000}, new Emp {Id=4, Name = "John", Department = Dept.IT,Salary=20000}, }; public PartialViewResult GetEmpInfo(string selectedDept="All") { if(selectedDept == null || selectedDept== "All") { return PartialView(EmpData); } else { Dept dept = (Dept)Enum.Parse(typeof(Dept), selectedDept); var result = EmpData.Where(d => d.Department == dept); return PartialView(result); } } public ActionResult GetEmpData(string selectedDept = "All") { return View((object)selectedDept); } Step 4 Right-Click the GetEmpData action method in the EmpController class and choose Add View. Keep the view name as it is which would be "GetEmpData". This will create "GetEmpData.cshtml" file in the /Views/Emp/ folder. Step 5 Similarly Right-Click the GetEmpInfo action method in the EmpController class and choose Add View. Keep the view name as it is which would be "GetEmpInfo". This will create "GetEmpInfo.cshtml" file in the /Views/Emp/ folder. 
Step 6 Copy and paste the below code in the GetEmpData.cshtml file @using MVCUnobtrusiveAjaxDemo.Models @model string @{ ViewBag.Title = "GetEmpInfo"; AjaxOptions ajaxOpts = new AjaxOptions(); ajaxOpts.UpdateTargetId = "tbody"; ajaxOpts.LoadingElementDuration = 1000; ajaxOpts.LoadingElementId = "loading"; } <div id="loading" class="load" style="display:none"> <p>Loading Data...</p> </div> <h2>Get Emp Data</h2> <table> <thead> <tr> <th>Name</th> <th>Salary</th> <th>Department</th> </tr> </thead> <tbody id="tbody"> @Html.Action("GetEmpInfo", new { selectedDept = Model }) </tbody> </table> @* When we set up our Ajax-enabled form as shown below, we passed in the name of the action method that we wanted to be called asynchronously i.e., the GetEmpInfo action, which generates a partial view containing a fragment of HTML. *@ @using (Ajax.BeginForm("GetEmpInfo",ajaxOpts)) { <div>@Html.DropDownList("selectedDept",new SelectList(new[]{"All"}.Concat(Enum.GetNames(typeof(Dept))))) <button type="submit">Submit</button></div> } MVC Framework supports AJAX forms using the Ajax.BeginForm helper method. The first parameter to this method is the name of the action method that will handle the request and the second parameter is AjaxOptions object which we have created at the start of the view in the Razor code block. AjaxOptions class defines properties that can be used to configure how asynchrounous request to the server is made and how the data we get from the server is processed. In the AjaxOptions object we have set the UpdateTargetId property value to "tbody", which is the id we have assigned to the tbody HTML element in the view. LoadingElementId: Id of the HTML element that will be displayed while the Ajax request is being processed at the server side. LoadingElementDuration: The duration of the loading message that is used to reveal the loading element to the user. 
When the user hits the Submit button, an asynchronous request will be made to the GetEmpInfo action method and the HTML markup that is returned is used to replace the existing elements in the tbody. Step 7 Copy and paste the following code in GetEmpInfo.cshtml file @using MVCUnobtrusiveAjaxDemo.Models @model IEnumerable<MVCUnobtrusiveAjaxDemo.Models.Emp> @foreach(Emp e in Model) { <tr> <td>@e.Name</td> <td>@e.Salary</td> <td>@e.Department</td> </tr> } Please note the following points about the above view: 1. Here GetEmpInfo is a strongly-typed view whose model type is IEnumerable<Emp>. 2. We are enumerating the Emp objects in the model to create rows in an HTML table. Step 8 Open the _Layout.cshtml file in the /Views/Shared folder and make sure that you have reference to the files : "jquery-1.8.2.min.js", "jquery.unobtrusive-ajax.js" in the head section as shown below: <script src="~/Scripts/jquery-1.8.2.min.js" type="text/javascript"></script> <script src="~/Scripts/jquery.unobtrusive-ajax.js" type="text/javascript"> </script> The jquery-1.8.2.min.js file contains the core jQuery library. The jquery.unobtrusive-ajax.min.js file contains the Ajax functionality. Step 9 In the Web.config file of the root folder, make sure that configuration/appSettings element contains an entry UnobtrusiveJavaScriptEnabled which is set to true. Step 10 Build and Run the application. Navigate to Here port number may vary based on your project settings. This will result in the output as shown below: Step 11 Let us filter the Emp data by selecting "IT" department and then click on Submit. While the request is submitted to the server and it is processing the UI displays the Loading... message until we get the output from the server. Once the data is filtered. The output is as shown below: Please note that the above code will work only if the Javascript is enabled in the browser. 
If JavaScript is disabled, the output with Emp data filtered by the HR department will be displayed in another page as shown below. To see this in action, you can disable JavaScript in IE 10 by following the below steps: Open IE browser window -> Tools -> Internet Options -> Security -> Internet -> Custom Level -> Security Settings - Internet Zone -> Scripting -> Active Scripting -> Disable. Now run the application and see the output as shown above.
Introduction: In this tutorial, we will learn how to find all numbers in a string using Python. The program will take one string as input from the user, find all numbers in that string, and print them to the console. We will learn three different ways to solve this problem: the first method iterates through the string and checks each character to see whether it is a number, the second method uses one lambda, and the third method uses a regular expression (regex) to find all numbers in one go.

Approach 1: Iterating through all characters:
In this approach, we will iterate through each character of the string one by one. The complete program looks like this:

str = input("Enter a string : ")
for c in str:
    if c.isdigit():
        print(c)

It uses one for loop to iterate through the characters of the string. isdigit() checks whether a character is a digit or not.

Approach 2: Using a lambda:

str = input("Enter a string : ")
digits = list(filter(lambda ch: ch.isdigit(), str))
print(digits)

The filter method will filter out all digits from the string str and generate one list from these values. digits is the final list.

Sample output:

Enter a string : Hello2 w3r1d5 !0
['2', '3', '1', '5', '0']

Approach 3: Using a regex:

import re
str = input("Enter a string : ")
digits = re.findall(r"\d", str)
print(digits)

It will produce output like below:

Enter a string : he33llo wo4
['3', '3', '4']

\d is the same as [0-9], i.e. it is used to match all digits. This example considers all numbers as single digits, i.e. it treats 33 as two 3s. Replace \d with \d+ if you want all numbers with single or multiple digits.

import re
str = input("Enter a string : ")
digits = re.findall(r"\d+", str)
print(digits)

For the same example above:

Enter a string : he33llo wo4
['33', '4']

Conclusion: I have listed here a couple of different ways to find all numbers in a Python string. Drop a comment below if you know any other ways to solve it.
All of these programs are for Python 3. Try to go through them and drop a comment below if you have any questions.

Similar tutorials:
- Python program to capitalize first letter of each words of a string
- Python program to pad zeros to a string
- Python program to replace single or multiple character, substring in a string
- Python program to find a substring in a string
- Python raw strings: Explanation with examples
- Convert string to float in python
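Building on the \d+ approach above, here is a short sketch (the function name is my own) that also converts each matched digit run to an integer with int():

```python
import re

def extract_numbers(text):
    """Return every run of digits in text as an int."""
    return [int(match) for match in re.findall(r"\d+", text)]

print(extract_numbers("he33llo wo4"))  # [33, 4]
```

This keeps multi-digit numbers together and gives you values you can do arithmetic on, instead of a list of strings.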
C#, ASP.NET, and other stuff

You've created a class like this, and all your unit tests pass just fine when you check for null strings or strings greater than 50 characters:

public class Person
{
    [NotNullValidator(MessageTemplate = "First Name is required.")]
    [StringLengthValidator(50, MessageTemplate = "First Name must be 1-50 characters.")]
    public string FirstName { get; set; }
}

However, with only an upper bound specified, an empty string still passes validation, because NotNullValidator catches only null. To catch the empty-string case, specify a lower bound as well:

[StringLengthValidator(1, 50, MessageTemplate = "First Name must be 1-50 characters.")]

Additionally, the VAB will allow you to specify whether you want the number specified for the lower/upper bound to be considered inclusive/exclusive for the range.
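As a quick sketch of exercising these attributes (not from the original post, and assuming the Enterprise Library Validation Application Block is referenced), you can validate an instance and inspect the results:

```csharp
using System;
using Microsoft.Practices.EnterpriseLibrary.Validation;

class Program
{
    static void Main()
    {
        // An empty FirstName: NotNullValidator passes, so only the
        // StringLengthValidator's lower bound can flag it as invalid.
        var person = new Person { FirstName = "" };

        ValidationResults results = Validation.Validate(person);
        Console.WriteLine(results.IsValid);

        foreach (ValidationResult result in results)
        {
            Console.WriteLine(result.Message);
        }
    }
}
```

With StringLengthValidator(50) alone, results.IsValid comes back true for the empty string; with StringLengthValidator(1, 50) it comes back false and the message template is reported.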
This is the comment thread for the React article in the Meteor Guide. Read the article: The comment thread for each article appears at the very bottom of the page. Use this thread to post: Or anything else related to this topic that people could find useful!

I'm finding that if I don't import React in the same place where I import render from react-dom, calls to render fail with the complaint that React isn't defined. My imports look like this:

import { Meteor } from 'meteor/meteor';
import { render } from 'react-dom';
import React from 'react';
import App from '../../ui/components/App.jsx';

Removing the React import results in the error.

Does that file use any JSX?

Yes, in the exact same way the example code in the guide uses it.

I think the Using Blaze with React section should be renamed to Using React Components in Blaze and that we should have a corresponding Using Blaze Templates in React. Or something along those lines. My preferred (and biased) way of doing this is with my gadicc:blaze-react-component, which looks like this:

import React from 'react';
import Blaze from 'meteor/gadicc:blaze-react-component';

const App = () => (
  <div>
    <Blaze template="itemsList" items={items} />
  </div>
);

Alternatives include: Both are for Meteor 1.2 and have hard dependencies on meteor-react. The former involves first converting the template in JS, and then using the newly created Component directly, whereas I preferred to have a single step and be clear that I'm using a Blaze component. I did however offer the same ability in case package authors wanted to use it. For the component syntax, I tried to keep it short and similar to react-template-helper.

I'm happy to do a PR for this if there's a consensus around the approach, and/or open a separate topic for further discussion.
I'm also happy to 'donate' the code to the Meteor namespace; I'm not too interested in maintaining another package, just felt it important to have something that works well straight away with 1.3.

As a newcomer to React it is really hard to choose between Flow Router and React Router. Maybe a little more opinion would help.

I think JSX compiles to stuff that uses React, so you might need to import React even if the file doesn't contain the words React anywhere.

I think it could be worth opening a new discussion to decide which package for this is the best to recommend!

I agree that having this section would be awesome! Here we go: Best way to import Blaze templates into React

Yes, it is required that React be defined if you use JSX. There's an eslint rule that picks this up (see the code style guide article for how to use eslint if you are interested).

Fair enough! We probably squibbed on this one @sashko. Perhaps we should do a poll of what people use?

Thanks for this @gadicc.

I would love to see a howto on Redux and Meteor. Specifically:
- How to import reactive data into Redux state and keep it up-to-date
- How to differentiate between local and non-local changes
- How to subscribe/unsubscribe based on the application state
Essentially, I'd like to see how we can use Redux as the single source of truth for our apps, while still benefitting from Meteor's Mongo proxy and reactivity.

I'm new to Meteor and created a first "test" application in version 1.2.1 with React. I had a lot of fun with it because Meteor supplied almost everything I needed to jump in very quickly, along with great tutorials. Now with Meteor 1.3, I get the feeling that everything is overloaded and complicated. What type of router should I use? What is the best way to receive my Mongo data collections? In addition (just my issue), I have to learn how ECMAScript 2015 modules work.
For so many breaking changes, it would be nice to have either a well-structured tutorial or, even better, a video tutorial. Don't get me wrong: you probably did a great job, but for beginners who want to work with Meteor there are too many changes and too many packages out there. Maybe this is a general problem in JavaScript too.... Best regards

Have you tried doing the React tutorial again? It has been updated with all-new recommendations for some of this stuff.

Abhi Aiyer has this:

The guide calls containers from the router. Instead, is it OK to nest containers inside other React components? Can containers have defaultProps?

Yes! definitely.

Not that I know of. Perhaps you could just use _.default or something inside the data function?
https://forums.meteor.com/t/react-guide-article/20192
CC-MAIN-2017-26
refinedweb
905
64
Ruby Array Exercises: Check whether every element is a 3 or a 5 in a given array of integers Ruby Array: Exercise-35 with Solution Write a Ruby program to check whether every element is a 3 or a 5 in a given array of integers. Ruby Code: def check_array(nums) i = 0 while i < nums.length if(nums[i] != 3 && nums[i] != 5) return false end i = i + 1 end return true end print check_array([3, 5, 3, 3]),"\n" print check_array([2, 3, 2, 5]),"\n" print check_array([3, 5, 5, 5]),"\n" Output: true false true Flowchart: Ruby Code Editor: Contribute your code and comments through Disqus. Previous: Write a Ruby program to check whether the number of 2's is greater than the number of 5's of a given array of integers. Next: Write a Ruby program to check whether it contains no 3 or it contains no
https://www.w3resource.com/ruby-exercises/array/ruby-array-exercise-35.php
CC-MAIN-2021-21
refinedweb
153
51.72
hexString.append(Integer.toHexString(0xFF & digest[i])); public class Test { public static void main(String[] args){ byte b1 = (byte)0x04; byte b2 = (byte)0xa4; System.out.println("b1 = " + Integer.toHexString(0xFF & b1)); System.out.println("b2 = " + Integer.toHexString(0xFF & b2)); } } Ok, so if there is a bug in MY program, the bug is actually part of the Integer API. Am I wrong? So why does java security API allow you to Digest a string but doesn't give you the appropriate methods to return that back to you correctly? Originally posted by Ab Beland: Because the process is "one-way" the behaviour (I wouldn't call it a bug) is irrelevant in this case. With that said, THANKS, I was looking exactly for this!
http://www.coderanch.com/t/133191/Security/MD-Class
CC-MAIN-2014-15
refinedweb
124
56.55
-1 I'm a computer science major and I'm taking an intro to C++ programming course. I have to write a program that reads in the data from a .txt file and echoprint it, and also print the data again but with every fifth word with 10 underscores. I was able to achieve the first half (echoprinting it) but how would I be able to reprint it replacing evey 5th word with 10 uderscores. It's supposed to be a cloze test... Note: I'll be using a const string UNDERLINE = "__________" please e-mail me back if you can at <snipped email> I'll paste in the code that I currently have, but I'm not sure how it'll show up here. #include <iostream> // Include Standard I/O Libraries #include <iomanip> #include <fstream> using namespace std; int main() { char cloze[1100]; char ch; const string UNDERLINE = "__________" ifstream inFile; cout << "Please enter a filename: " << endl; cin >> cloze; cout << "\n\n"; inFile >> ch; inFile.open ( "cloze.txt" ); inFile.clear(); inFile.seekg(0L, ios::beg); if ( !inFile ) { cout << "Error opening file." << endl; } else { while ( inFile ) { inFile.get(ch); cout << ch; } } // end of else else inFile.close(); return (0); } // end of main
https://www.daniweb.com/programming/software-development/threads/108397/cloze-test
CC-MAIN-2016-50
refinedweb
202
81.43
Issues Fixed in CDH 5.7.x The following topics describe issues fixed in CDH 5.7.x, from newest to oldest release. You can also review What's New In CDH 5.7.x or Known Issues in CDH. - HADOOP-13433 - Race in UGI.reloginFromKeytab. - HADOOP-13590 - Retry until TGT expires even if the UGI renewal thread encountered exception. - HADOOP-13627 - Have an explicit KerberosAuthException for UGI to throw, text from public constants. - HADOOP-13641 - Update UGI#spawnAutoRenewalThreadForUserCreds to reduce indentation. - HADOOP. -336 -. - HDFS-11160 - VolumeScanner reports write-in-progress replicas as corrupt incorrectly. - HDFS-11229 - HDFS-11056 failed to close meta file. - HDFS-11275 - Check groupEntryIndex and throw a helpful exception on failures when removing ACL. - HDFS-11292 - log lastWrittenTxId etc info in logSyncAll. - HDFS-11306 - Print remaining edit logs from buffer if edit log can't be rolled. -. - HIVE-12976 - MetaStoreDirectSql doesn't batch IN lists in all cases. - HIVE-13129 - CliService leaks HMS connection. - HIVE-13149 - Remove some unnecessary HMS connections from HS2. - HIVE-13240 - GroupByOperator: Drop the hash aggregates when closing operator. - HIVE-13539 - HiveHFileOutputFormat searching the wrong directory for HFiles. - HIVE-4662 - [security] fixing Hue - Wildcard Certificates not supported. - HUE-4466 - [security] deliver csrftoken cookie with secure bit set if possible. - HUE-5163 - [security] Speed up initial page rendering. - HUE-4916 - [core] Truncate last name to 30 chars on ldap import. - HUE-5050 - [core] Logout fails for local login when multiple backends are used. - HUE-5042 - [core.backend] Unable to kill jobs after Resource Manager failover. - HUE-4201 - [editor] Add warning about max limit of cells before truncation. - HUE-4968 - [oozie] Remove access to /oozie/import_wokflow when v2 is enabled. -. - PIG-5025 - Fix flaky test failures in TestLoad.java. 
- SENTRY-1260 - Improve error handling - ArrayIndexOutOfBoundsException in PathsUpdate.parsePath can cause MetastoreCacheInitializer intialization to fail. - SENTRY-1270 - Improve error handling - Database with malformed URI causes NPE in HMS plugin during DDL. - SENTRY. - SQOOP-3053 - Create a cmd line argument for sqoop.throwOnError and use it through SqoopOptions. - SQOOP-3055 - Fixing MySQL tests failing due to ignored test inputs/configuration. - SQOOP-3057 - Fixing 3rd party Oracle tests failing due to invalid case of column names. - SQOOP-3071 - Fix OracleManager to apply localTimeZone correctly in case of Date objects too. - SQOOP-3124 - Fix ordering in column list query of PostgreSQL connector to reflect the logical order instead of adhoc ordering. Issues Fixed in CDH 5.7.5 --4723 - NodesListManager$UnknownNodeId ClassCastException - YARN-4940 - YARN node -list -all fails if RM starts with decommissioned node - YARN-5704 - Provide configuration knobs to control enabling/disabling new/work in progress features in container-executor - HBASE-16294 - hbck reporting "No HDFS region dir found" for replicas - HBASE-16699 - Overflows in AverageIntervalRateLimiter's refill() and getWaitInterval() --12077 - MSCK Repair table should fix partitions in batches - HIVE-12475 - Parquet schema evolution within array<struct<>> does not work - HIVE-12785 - View with union type and UDF to the struct is broken - HIVE-13058 - Add session and operation_log directory deletion messages - HIVE-13198 - Authorization issues with cascading views - HIVE-13237 - Select parquet struct field with upper case throws NPE - HIVE-allocating YARN container when driver wants to stop all Executors - SPARK-12392 - Optimize a location order of broadcast blocks by considering preferred local hosts - SPARK-12941 - Spark-SQL JDBC Oracle dialect fails to map string datatypes to Oracle VARCHAR datatype mapping - SPARK - SQOOP-3021 - ClassWriter fails if a column name contains a backslash character 
Issues Fixed in CDH 5.7.4 credentials sometimes could skip directory with update in CloudFS (Azure FileSystem, S3, and so on) - shut down back end Issues Fixed in CDH 5.7.3 Upstream Issues Fixed The following upstream issues are fixed in CDH 5.7.3: - FLUME-2821 - KafkaSourceUtil Can Log Passwords at Info remove logging of security related data in older releases. - FLUME-2913 - Don't strip SLF4J from imported classpaths. - FLUME-2922 - Sync SequenceFile.Writer before calling hflush - HADOOP-8751 - NPE in Token.toString() when Token is constructed using null identifier. - HADOOP-11361 - Fix a race condition in MetricsSourceAdapter.updateJmxCache. -. --9276 - Failed to Update HDFS Delegation Token for long running application in HA mode. - HDFS-9466 - TestShortCircuitCache#testDataXceiverCleansUpSlotsOnFailure is flaky. - HDFS-9939 - Increase DecompressorStream skip buffer size. - HDFS-10512 - VolumeScanner may terminate due to NPE in DataNode.reportBadBlocks. - MAPREDUCE-6442 - Stack trace is missing when error occurs in client protocol provider's constructor. - MAPREDUCE-6473 - Job submission can take a long time during Cluster initialization. - MAPREDUCE-6675 - TestJobImpl.testUnusableNode failed - YARN-4459 - container-executor should only kill process groups. - YARN-4784 - Fairscheduler: defaultQueueSchedulingPolicy should not accept FIFO. - YARN-4866 - FairScheduler: AMs can consume all vcores leading to a livelock when using FAIR policy. - YARN-4878 - Expose scheduling policy and max running apps over JMX for Yarn queues. - YARN-5077 - Fix FSLeafQueue#getFairShare() for queues with zero fairshare. - YARN-5272 - Handle queue names consistently in FairScheduler. - HBASE-14963 - Remove use of Guava Stopwatch from HBase client code. - HBASE-15621 - Suppress Hbase SnapshotHFile cleaner error messages when a snaphot is going. - HIVE-11432 - Hive macro gives same result for different arguments. - HIVE-11487 - Add getNumPartitionsByFilter api in metastore api. 
--13043 - Reload function has no impact to function registry. - HIVE-13090 - Hive metastore crashes on NPE with ZooKeeperTokenStore. - HIVE-13372 - Hive Macro overwritten when multiple macros are used in one column. - HIVE-13704 - Do not call DistCp.execute() instead of DistCp.run(). - HIVE-13749 - Memory leak in Hive Metastore. - HIVE-13884 - Disallow queries in HMS fetching more than a configured number of partitions - HIVE-14055 - directSql - getting the number of partitions is broken. --3481 - [assist] Do not sort the columns by name, instead use the creation order. - HUE-3842 - [core] HTTP 500 while emptying Hue 3.9 trash directory. - HUE-3845 - [sentry] Sometimes see group as editable on role section. - HUE-3880 - [core] Add importlib directly for Python 2.6. - HUE-3988 - [search] Support schemaless collections. - HUE-3999 - [oozie] list_oozie_workflow page should not break in case of bad json from oozie. -. - IMPALA-3711 - Remove unnecessary privilege checks in getDbsMetadata(). - IMPALA-3915 - Register privilege and audit requests when analyzing resolved table refs. - OOZIE-2391 - spark-opts value in workflow.xml is not parsed properly. - OOZIE-2537 - SqoopMain does not set up log4j properly. - SOLR-7280 - BackportLoad cores in sorted order and tweak coreLoadThread counts to improve cluster stability on restarts. - SOLR-9236 - AutoAddReplicas will append an extra /tlog to the update log location on replica failover. - SPARK-14963 - [YARN] Using recoveryPath if NM recovery is enabled. - SPARK-16505 - [YARN] Optionally propagate error during shuffle service startup. - SQOOP-2561 - Special Character removal from Column name as avro data results in duplicate column and fails the import. - SQOOP-2906 - Optimization of AvroUtil.toAvroIdentifier. - SQOOP-2971 - OraOop does not close connections properly. - SQOOP-2995 - Backward incompatibility introduced by Custom Tool options. Issues Fixed in CDH 5.7.2 CDH 5.7.2 fixes the following issues. 
Kerberized HS2 with LDAP authentication fails in a multi-domain LDAP case In CDH 5.7, Hive introduced a feature to support HS2 with Kerberos plus LDAP authentication; but it broke compatibility with multi-domain LDAP cases on CDH 5.7.x and C5.8.x versions. Affected Versions: CDH 5.7.1, CDH 5.8.0, and CDH 5.8.1 Fixed in Versions: CDH 5.7.2 and higher, CDH 5.8.2 and higher Bug: HIVE-13590. Workaround: None.-4916 - Revert "TestNMProxy.tesNMProxyRPCRetry fails - Issues Fixed in CDH 5.7.1 CDH 5.7.1 fixes the following issues..7.1: - AVRO-1781 - Schema.parse is not thread safe - FLUME-2781 - Kafka Channel with parseAsFlumeEvent=true should write data as is, not as flume events - -591 - Issues Fixed in CDH 5.7.0 CDH 5.7.0 fixes the following issues. Apache Flume TailDirSource throws FileNotFound Exception if ~/.flume directory is not created already Bug: FLUME-2773 This fix ensures that any missing parent directories in the positionFile path (either default or user given input) are always created. flume_env script should handle JVM parameters like -javaagent -agentpath -agentlib Bug: FLUME-2763 This fix enables the flume_env script to handle JVM parameters such as -javaagent -agentpath and -agentlib. Kafka channel timeout property is overridden by default value Bug: FLUME-2734 When the Kafka channel timeout property is passed to the Kafka consumer internally, it does not work as expected. It is overridden by the default value or the value specified by the .timeout property, which is undocumented. Now the kafka.consumer.timeout.ms value specified in the configuration takes effect like other Kafka consumer properties. 
Apache Hadoop ReplicationMonitor can infinitely loop in BlockPlacementPolicyDefault#chooseRandom() Clean up temporary files after fsimage transfer failures Lease recovery should return true if the lease can be released and the file can be closed fsck does not list correct file path when bad replicas or blocks are in a snapshot Cloudera Bug: CDH-32221 When blocks are corrupt in a snapshot, the fsck command lists the original directory and not the snapshot directory. This happens even when the original file is deleted. The specific commands are fsck -list-corruptfileblocks and fsck -list-corruptfileblocks -includeSnapshots. Make DataStreamer#block thread safe and verify generationStamp in commitBlock Delayed heartbeat processing causes storm of subsequent heartbeats Cloudera Bug: CDH-33589 The NameNode usually handles DataNode heartbeats quickly, but can be delayed for various reasons, such as a long garbage collection or lock contention. After the NameNode recovers, the DataNode sends a storm of heartbeat messages in a tight loop which, in a big cluster, can overload the NameNode and make cluster recovery difficult. FSImage may get corrupted after deleting snapshot Apache HBase See also Known Issues In CDH 5.7.0. Potential data loss after a RegionServerAbortedException Bug: HBASE-13895 If the master attempts to assign a region while handling a RegionServer abort, the returned RegionServerAbortedException is handled as though the region had been cleanly taken offline, so the new assignment is allowed to proceed. If the region is opened in its new location before WAL replay has completed, the replayed edits are ignored, or are later played back on top of new edits that happened after the region was opened. In either case, data can be lost. Workaround: None. 
Data loss can occur if a table has more than 2,147,483,647 columns Bug: HBASE-15133 Data loss can occur if a table has more than 2,147,483,647 (Integer.MAX_INT) columns, because some key variable types are INT rather than LONG. Workaround: Adjust your schema to use fewer than Integer.MAX_INT columns. Delete operations that occur during a region merge may be eclipsed by new Put operations Bug: HBASE-13938 The master's timestamp is not used when sending hbase:meta edits on region merges, so correct ordering of new region additions and old region deletes is not assured and data loss can occur if edits are applied in the wrong order. Workaround: None. RPC handler / task monitoring seems to be broken after 0.98 Bug: HBASE-14674 After pluggable RPC scheduler, the way the tasks work for the handlers got changed. We no longer list idle RPC handlers in the tasks, but we register them dynamically to TaskMonitor through CallRunner. However, the IPC readers are still registered the old way (meaning that idle readers are listed as tasks, but not idle handlers). From the javadoc of MonitoredRPCHandlerImpl, it seems that we are NOT optimizing the allocation for the MonitoredTask anymore, but instead allocate one for every RPC call breaking the pattern Conflicts between HBase Balancer and hbase:meta reassignment Bug: HBASE-14536 If hbase:meta is assigned to a RegionServer that becomes unavailable, and the HBase balancer has scheduled but not completed a plan to move hbase:meta to a different RegionServer, the hbase:meta becomes unassigned. Workaround: None. Regions can fail to transition in a write-heavy cluster with a small number of read handlers Bug: HBASE-13635 On a write-heavy cluster configured with a small number of read handlers, all requests that are not mutations are sent to the read handlers, including ReportRegionInTransition requests. If these requests time out, the RegionServer is assumed to be unavailable, and the regions cannot transition correctly. 
Workaround: None. In a secured environment, when a RegionServer is stopped, znodes may not be cleaned up correctly Bug: HBASE-14581 When a RegionServer process is stopped, the zkcli command is invoked to delete its znodes. In a secure cluster, the zkcli command does not authenticate to ZooKeeper and the deletion fails. This problem occurs because the REGIONSERVER_OPTS environment variable is not correctly passed when invoking the zkcli command. Workaround: None. Delays in RegionServer responses can cause a region to be closed indefinitely Bug: HBASE-14407 Handling of region assignment by the master has a flaw when RegionServer responses are delayed due to network delays, system load, or other reasons. This flaw can cause the master to close a region indefinitely. Workaround: Restart the RegionServer to force the region to be reassigned. When a RegionServer crashes, replication peers can crash due to inode exhaustion from old WALs Bug: HBASE-14621 The fix for HBASE-12865 ensures that loadWALsFromQueues attempts a retry when the replication source version is changed while loading the replication queue. However, the fix introduced a bug in ReplicationLogCleaner that causes an infinite loop when a RegionServer crashes. As a result, old WALs are not cleaned up. In a cluster under high load, the inode limit on the replication peer RegionServer can be exhausted, causing the RegionServer to crash. Workaround: None. When a RegionServer crashes, cell-level visibility tags may be lost during WAL replay Bug: HBASE-15218 When reading cells after a RegionServer crash, the KeyValueCodec and the WallCellCodec both use NoTagsKeyValue, which does not preserve visibility tags. Workaround: None. Column is not deleted if you do not pass the visibility label Bug: HBASE-14761 If a column was created or modified with a visibility label, and you attempt to delete it without passing the visibility label, the column is not deleted. 
It is not visible using a Scan operation, but is visible using a raw Scan, and is marked with deleteColumn. Workaround: None. If multiple users are configured with the role hbase.superuser, an attempt to connect to a secure ZooKeeper instance fails Bug: HBASE-14425 The hbase.superuser configuration option is a comma-separated list of users. A bug in the code to connect to a secure ZooKeeper causes the list to be evaluated as a single value, so a list of multiple users fails because no username matches the comma-separated list. Workaround: Only specify a single user in the hbase.superuser configuration option. Region split request audits are performed against the hbase user instead of the requesting user Bug: HBASE-14475 When checking whether the requesting user has permission to perform a region split, the hbase user's permissions are checked instead of those of the requesting user.Due to this bug, CREATE is sufficient for the split, rather than CREATE and ADMIN. Because CREATE permissions are also sufficient for flushes and compactions, this issue is not severe in most environments. Workaround: None. Incorrect timestamp checking causes unpredictable deletes with VisibilityScanDeleteChecker. Bug: HBASE-13635 Incorrect timestamp checking when VisibilityScanDeleteChecker is used causes unpredictable results when deleting cells. In some cases, the timestamp is deleted but the cell contents are not deleted. In other cases, a request to delete an entire row or to delete a version results in only a single cell being deleted. Workaround: None. A BulkLoad of an HFile with tags that requires splits does not preserve the tags Bug: HBASE-15035 When an HFile is created with cell tags and is imported into HBase using a bulk load, the tags are present as expected when the HFile is loaded into a single region. 
However, if the bulk load spans multiple regions, the original HFile is automatically split into a set of HFiles corresponding to each of the regions the original HFile covers. Tags, including ACLs, TTLs, and MOB pointers, are not copied to the split files. Workaround: None. Restoring a snapshot from a table in a user-defined namespace causes a URISyntaxException Bug: HBASE-14578 A table in a user-defined namespace uses a colon between the namespace and the table name (for instance, example_ns:users). This colon is interpreted incorrectly when restoring from a snapshot. Workaround: None. The list_snapshots HBase shell command shows all snapshots, regardless of the user's permission to view them Bug: HBASE-12552 A user with no privileges to interact with a snapshot can list the snapshot using the list_snapshots HBase shell command. Workaround: None. ExportSnapshot or DistCp operations may fail on the Amazon s3a:// protocol Bug: None. ExportSnapshot or DistCP operations may fail on AWS when using certain JDK 8 versions, due to an incompatibility between AWS Java SDK 1.9.x and the joda-time date-parsing module. Workaround: Use joda-time 2.8.1 or higher, which is included in AWS Java SDK 1.10.1 or higher. If HDFS is restarted while HBase is running, WALs being replicated may not close correctly Bug: HBASE-15019 The RegionServer receiving the replicated WALs has no mechanism to be notified to perform a recovery if HDFS is restarted on the source cluster. Workaround: Restart the RegionServer to force the master to trigger the lease recovery during WAL splitting. The permissions of the .top/ directory are not explicitly set during LoadIncrementalHFiles operations Bug: HBASE-14005 Permissions are not explicitly set on the .top/ directory created during LoadIncrementalHFiles. The permissions should be set the same as the .bottom/ and .tmp/ directories. Workaround: None. 
Nonfatal errors in the FSHLog subsystem are incorrectly logged as fatal errors Bug: HBASE-14042 If an IOException causes a log roll to be requested, it is logged as a fatal event, although it should be logged as a warning. Workaround: None. FuzzyRowFilter may omit some rows if multiple fuzzy keys are present Bug: HBASE-14269 If you use the FuzzyRowFilter for Scan operations, and you have multiple fuzzy keys, some rows may be omitted from the RowTracker. Workaround: None.. Values of some metrics may appear to be negative Bug: HBASE-12961 Some metric value are stored in integers, and cannot accommodate real-world values. This causes metric values to appear to be negative. Workaround: None. The HBase Shell cannot handle Scan filters which contain non-UTF8 characters Bug: HBASE-15032 The HBase Shell incorrectly handles filter strings which contain non-UTF8 characters. Workaround: None. Reverse scans do not work when Bloom blocks or leaf-level inode blocks are present Bug: HBASE-14283 Because the seekBefore() method calculates the size of the previous data block by assuming that data blocks are contiguous, and HFile v2 and higher store Bloom blocks and leaf-level inode blocks with the data, reverse scans do not work when Bloom blocks or leaf-level inode blocks are present when HFile v2 or higher is used. Workaround: None. Apache Hive Fix regression in bind and search logic for Hive external LDAP authentication Bug: HIVE-12885 Cloudera Bug: CDH-35075 Fixes a regression in LDAP bind and search authentication from CDH 5.5.0. Some queries using LEFT SEMI JOIN fail with IndexOutOfBoundsException Bug: HIVE-13082 Cloudera Bug: CDH-37515 Some queries using LEFT SEMI JOIN fail with IndexOutOfBoundsException. Constant propagation optimization for these queries is now enabled. 
BETWEEN predicate is not functioning correctly with predicate pushdown on Parquet tables
Bug: HIVE-13039
Cloudera Bug: CDH-37322
BETWEEN becomes exclusive in Parquet tables when predicate pushdown (PPD) is enabled, leading to potentially incorrect results.

Performance degradation when running Hive queries against wide tables with Sentry enabled
Bug: SENTRY-1007
Cloudera Bug: CDH-35908
Fixes a performance regression due to inefficient authorization checks in the Sentry Hive binding for Hive tables that are wide (more than 100 columns).

Optionally cancel queries after configurable timeout waiting on compilation lock
Bug: HIVE-12431
Cloudera Bug: CDH-34693
Adds a new configuration option, hive.server2.compile.lock.timeout, that cancels queries if they are waiting for the compile lock for more than this amount of time. This applies only to queries waiting on compilation and does not cancel queries that are being compiled. By default, the timeout is unlimited.

Hue

The Hive Sample Table, customer, Cannot be Queried

Apache Impala

For the list of Impala fixed issues, see Issues Fixed in Impala for CDH 5.7.0. See also Apache Impala Known Issues for issues that are known but not resolved yet.

MapReduce

MapReduce Rolling Upgrades To and From CDH 5.6.0 Fail
Cloudera Bug: CDH-38587
Users can now safely use rolling upgrade from releases CDH 5.6.0 and lower to CDH 5.7.0.

Cloudera Search

Reordered updates cause leaders and replicas to become out of sync
Bug: SOLR-8586, SOLR-8691
Solr relied on checking leader/replica document synchronization by comparing the last 100 updates on the leader and replica for significant overlap, and then applying any missing updates from the leader. In certain cases, document updates could be significantly reordered, resulting in mismatches in the index, even when the last 100 documents matched.
Solr now implements hashing over the versions of all the documents to check for synchronization, eliminating a class of errors in which replicas and leaders could become out of sync.

Apache Sentry

Fixed Sentry Oracle upgrade script
Bug: SENTRY-1066
This fixes previous Sentry upgrade issues with Oracle (ORA-0955).

Tables with non-HDFS locations break Hive Metastore startup
Bug: SENTRY-1044
Tables with non-HDFS locations cause the HDFS/Sentry plugin to enter an invalid state and fail with the exception, Could not create Initial AuthzPaths or HMSHandler !!.

URI check is now case-sensitive
Bug: SENTRY-968
Sentry no longer ignores case when validating privileges for URIs.

TRUNCATE on empty partitioned table in Hive fails
Bug: SENTRY-826

PathsUpdate.parsePath(path) will throw an NPE when parsing relative paths
Bug: SENTRY-1002
Sentry now skips relative paths (that is, paths without a fully qualified scheme) rather than failing with an NPE.

The Sentry Server should not be updated if the CREATE/DROP operations fail
Bug: SENTRY-1008
Previously, even if a CREATE TABLE operation failed, the Sentry Server would still be updated with a path to the table. This has been fixed.

SimpleDBProviderBackend should retry the authorization process
Bug: SENTRY-902
This fix includes corrections to the retry logic to remove recursive calls and include a wait time between retries when authorization fails.

Support Hive RELOAD by updating the classpath for Sentry
Bug: SENTRY-1003
When Hive issues the RELOAD command, Sentry now gets the updated auxiliary JAR path from the hive.reloadable.aux.jars.path property.

RealTimeGet with explicit ids can bypass document-level authorization
Bug: SENTRY-989
Users can no longer bypass security by guessing the document IDs for the RealTimeGet command.
Updated Apache Shiro dependency
Bug: SENTRY-1054

External partitions referenced by more than one table can cause some unexpected behavior with Sentry HDFS sync
Bug: SENTRY-953

INSERT INTO command no longer requires URI privilege on partition location under table
Bug: SENTRY-1095
The checks on the Hive INSERT INTO command have been relaxed. The INSERT INTO command adds location information to the partition description. Although this requires a check on URI privileges, in this case location information can be generated even if the partition is under the table directory.

Improvement to the SentryAuthFilter error message when authentication fails
Bug: SENTRY-1060

Avoid logging all DataNucleus queries when debug logging is enabled
Bug: SENTRY-945
Logging DataNucleus queries when debugging can fill up 2 GB of log file space in less than five minutes.

getGroup and getUser should always return original HDFS values for paths that are not managed by Sentry
Bug: SENTRY-936
Paths that do not correspond to Hive metastore objects should not be affected by HDFS/Sentry sync.

Exceptions in MetastoreCacheInitializer should not prevent HMS from starting up
Bug: SENTRY-957
Instead of only throwing a runtime exception, this fix ensures failed tasks are first retried.

Set maximum message size for Thrift messages
Bug: SENTRY-904
This ensures that security scans and unformatted messages do not bring down the Sentry server by going out of bounds.

Allow SentryAuthorization setter path always fall through and update HDFS
Bug: SENTRY-988

Setting HDFS rules on Sentry-managed HDFS paths should not affect original HDFS rules
Bug: SENTRY-944
Removing and modifying ACLs on Sentry-managed paths that correspond to Hive metastore objects should not affect the original HDFS access rules.

Fix inconsistency in column-level privileges
Bug: SENTRY-847
If you have column-level privileges for a table, the SHOW columns operation should not require extra table-level privileges.
Performance Improvements

Improved performance for filtering Hive SHOW commands
Bug: SENTRY-565
HiveAuthzBinding has been improved to reduce the number of RPC calls when filtering SHOW commands.

Improved Sentry column-level performance for wide tables
Bug: SENTRY-1007

Apache Spark

Spark SQL does not support the char type
Spark SQL does not support the char type (fixed-length strings). Like unions, tables with such fields cannot be created from or read by Spark.

Spark SQL statements that can result in table partition metadata changes may fail
Because Spark does not have access to Sentry data, it may not know that a user has permissions to execute an operation and instead fail it. SQL statements that can result in table partition metadata changes, for example, "ALTER TABLE" or "INSERT", may fail.
Cloudera Bug: CDH-33446

Certain Spark MLlib features are not supported
- spark.ml - ML pipeline APIs

Spark SQL cannot retrieve data from a partitioned Hive table
When reading from a partitioned Hive table, Spark SQL cannot identify the column delimiter used and reads the full record as the first column entry.
Cloudera Bug: CDH-37189
Workaround: Contact Cloudera Support for information on how to deploy a patch to resolve the issue.

Tasks that fail due to YARN preemption can cause job failure
Bug: SPARK-8167
Tasks that are running on preempted executors will count as FAILED with an ExecutorLostFailure.

Apache Sqoop

Oracle connector not working with lowercase columns
Bug: SQOOP-2723
The Oracle connector now works with lowercase columns.

Run only one map task attempt during export
Bug: SQOOP-2712
In most scenarios, running multiple map task attempts by default when performing an export is not required. The default is now one map task attempt during export operations.

Do not dump data on error in TextExportMapper by default
Bug: SQOOP-2651
Dumping data in the TextExportMapper might unintentionally leak sensitive information to logs.
The enableDataDumpOnError key is set to false by default. A user can set the value to true to intentionally write the data to the log.

Support of glob paths during export
Bug: SQOOP-1281
Glob paths are now supported for export.

Sqoop should support importing from tables with column names containing some special characters
Bug: SQOOP-2387
Sqoop supports some special characters in column names. The specific list of characters depends on those supported for a particular database.

Avro export ignores --columns option
Bug: SQOOP-1369
AvroExportMapper now supports the --columns option to restrict the columns to export.

JDK

Java 8 (updates 60 and higher) has problems with joda-time and S3 requests
Bug: SPARK-11413
Cloudera Bug: CDH-31245
Versions of Java 1.8, from update 60 and higher (jdk1.8.0_60++), cause S3 requests to fail because joda-time cannot format time zones.
https://docs.cloudera.com/documentation/enterprise/release-notes/topics/cdh_rn_fixed_in_57.html
Quick reference for the ESP8266¶

The Adafruit Feather HUZZAH board (image attribution: Adafruit).

Installing MicroPython¶

See the corresponding section of the tutorial: Getting started with MicroPython on the ESP8266.

General board control¶

import machine

machine.freq(160000000) # set the CPU frequency to 160 MHz

import esp
esp.osdebug(None)       # turn off vendor O/S debugging messages
esp.osdebug(0)          # redirect vendor O/S debugging messages to UART(0)

Networking¶

import network

wlan = network.WLAN(network.STA_IF) # create station interface
wlan.config('mac')      # get the interface's MAC address
wlan.ifconfig()         # get the interface's IP/netmask/gw/DNS addresses

ap = network.WLAN(network.AP_IF) # create access-point interface
ap.active(True)         # activate the interface
ap.config(essid='ESP-AP') # set the ESSID of the access point

SPI bus¶

The software SPI driver works on all pins and is accessed via the machine.SPI class:

from machine import Pin, SPI

# construct an SPI bus on the given pins
# polarity is the idle state of SCK
# phase=0 means sample on the first edge of SCK, phase=1 means the second
spi = SPI(-1, ...)

I2C bus¶

The I2C driver is accessed via the machine.I2C class:

NeoPixel driver¶

Use the neopixel module:
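A common pattern is to wrap the station-interface calls shown above in a small connect helper that blocks until the board has joined the network. The sketch below is our own, not part of the MicroPython API: connect_sta and its timeout_s parameter are invented names, and the ESSID/password in the usage note are placeholders.

```python
import time

def connect_sta(wlan, essid, password, timeout_s=15):
    # wlan is expected to be a network.WLAN(network.STA_IF) instance
    wlan.active(True)                  # bring the station interface up
    if wlan.isconnected():
        return True
    wlan.connect(essid, password)      # start the association
    deadline = time.time() + timeout_s
    while not wlan.isconnected():      # poll until associated or timed out
        if time.time() > deadline:
            return False
        time.sleep(0.1)
    return True
```

On the device you would call connect_sta(network.WLAN(network.STA_IF), 'my-ap', 'secret') and then read wlan.ifconfig() as shown above.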
http://docs.micropython.org/en/v1.9/esp8266/esp8266/quickref.html
Updated for Xcode 13.0
Updated for iOS 14

SwiftUI offers 17 property wrappers for our applications, each of which provides different functionality. Knowing which one to use and when is critical to getting things right, so in this article I'm going to introduce you to each of them, and give you clear guidance on which to use.

I'm going to explain more in a moment, but here's the "too long; didn't read" summary that describes roughly what each wrapper does, whether it owns its data or not (i.e. whether the data belongs to it and is managed by it), along with links to more:

- @AppStorage reads and writes values from UserDefaults. This owns its data. More info.
- @Binding refers to value type data owned by a different view. Changing the binding locally changes the remote data too. This does not own its data. More info.
- @Environment lets us read data from the system, such as color scheme, accessibility options, and trait collections, but you can add your own keys here if you want. This does not own its data. More info.
- @EnvironmentObject reads a shared object that we placed into the environment. This does not own its data. More info.
- @FetchRequest starts a Core Data fetch request for a particular entity. This owns its data. More info.
- @FocusedBinding is designed to watch for values in the key window, such as a text field that is currently selected. This does not own its data.
- @FocusedValue is a simpler version of @FocusedBinding that doesn't unwrap the bound value for you. This does not own its data.
- @GestureState stores values associated with a gesture that is currently in progress, such as how far you have swiped, except it will be reset to its default value when the gesture stops. This owns its data. More info.
- @Namespace creates an animation namespace to allow matched geometry effects, which can be shared by other views. This owns its data.
- @NSApplicationDelegateAdaptor is used to create and register a class as the app delegate for a macOS app. This owns its data.
- @ObservedObject refers to an instance of an external class that conforms to the ObservableObject protocol. This does not own its data. More info.
- @Published is attached to properties inside an ObservableObject, and tells SwiftUI that it should refresh any views that use this property when it is changed. This owns its data. More info.
- @ScaledMetric reads the user's Dynamic Type setting and scales numbers up or down based on an original value you provide. This owns its data. More info.
- @SceneStorage lets us save and restore small amounts of data for state restoration. This owns its data. More info.
- @State lets us manipulate small amounts of value type data locally to a view. This owns its data. More info.
- @StateObject is used to store new instances of reference type data that conforms to the ObservableObject protocol. This owns its data. More info.
- @UIApplicationDelegateAdaptor is used to create and register a class as the app delegate for an iOS app. This owns its data. More info.

When it comes to storing data in your app, the simplest property wrapper is @State. This is designed to store value types that are used locally by your view, so it's great for storing integers, Booleans, and even local instances of structs.

In comparison, @Binding is used for simple data that you want to change, but is not owned by your view. As an example, think of how the built-in Toggle switch works: it needs to move between on and off states, but it doesn't want to store that value itself, so instead it has a binding to some external value that we own. So, our view has an @State property, and the Toggle has an @Binding property.

There is a variation of @State called @GestureState, specifically for tracking active gestures. This isn't used so often, but it does have the benefit that it sets your property back to its initial value when the gesture ends.

For more advanced purposes – i.e., dealing with classes, or sharing data in many places – you should not use @State and @Binding.
Instead, you should create your object somewhere using @StateObject, then use it in other views with @ObservedObject.

A simple rule is this: if you see "state" in the name of a property wrapper, it means the view definitely owns the data. So, @State means simple value type data created and managed locally but perhaps shared elsewhere using @Binding, and @StateObject means reference type data created and managed locally, but perhaps shared elsewhere using something like @ObservedObject.

This is important: if you ever see @ObservedObject var something = SomeType() it should almost certainly be @StateObject instead so that SwiftUI knows the view should own the data rather than just refer to it elsewhere. Using @ObservedObject here can sometimes cause your app to crash because the object is destroyed prematurely.

If you find yourself handing the same data from view to view to view, you'll find the @EnvironmentObject property wrapper useful. This lets you read a reference type object from a shared environment, rather than passing it around explicitly.

Just like @ObservedObject, @EnvironmentObject should not be used to create your object initially. Instead, create it in a different view and use the environmentObject() modifier to inject it into the environment. Although the environment will automatically keep ownership of your object, you can also use @StateObject to store it wherever it was originally created. This is not required, though: putting an object into the environment is enough to keep it alive without further ownership.

The final state-based property wrapper is @Published, which is used inside your reference types to annotate the properties. Any property marked with @Published will cause its parent class to announce that a change has occurred, which in turn will cause any view observing that object to make any changes it needs.

SwiftUI has three property wrappers designed to store and/or retrieve data.
The first is @AppStorage, which is a wrapper around UserDefaults. Every time you read or write a value from app storage, you're actually reading or writing from UserDefaults.

The second is @SceneStorage, which is a wrapper around Apple's state restoration APIs. State restoration is what allows an app to be closed and reloaded, and come back to the same state the user left off at – it makes it look like our apps were always running, even though they were silently terminated.

@AppStorage and @SceneStorage are not secure and should not be used to store sensitive data.

Although @AppStorage and @SceneStorage sound the same, they are not: @AppStorage stores one value for your entire application, whereas @SceneStorage will automatically save multiple values for the same data for times when the user has your app window open multiple times – think iPadOS and macOS. So, you might use @AppStorage to store global values such as "what is the user's high score?", and you might use @SceneStorage to store "what page is the user reading right now?"

The third data property wrapper is @FetchRequest, which is used to retrieve information from Core Data. This will automatically use whichever managed object context is in the environment, and update itself when the underlying data has changed.

SwiftUI has two property wrappers for reading the user's environment: @Environment and @ScaledMetric.

@Environment is used to read a wide variety of data such as what trait collection is currently active, whether they are using a 2x or 3x screen, what timezone they are on, and more. It also has a couple of special application actions, such as exporting files and opening a URL in the system-registered web browser.

@ScaledMetric is much simpler, and lets us adapt the size of our user interface based on a user's Dynamic Type settings.
For example, a box that is 100x100 points might look great using the system default size, but with @ScaledMetric it will automatically become 200x200 when a larger Dynamic Type setting is enabled.

SwiftUI provides the @Namespace property wrapper, which creates a new namespace for animations. Animation namespaces let us say "animate views with an ID of 5", and all views in that namespace with the ID 5 will be animated. You can share namespaces between views by using the property type Namespace.ID and injecting the @Namespace value from whichever view created it. This allows you to create matched geometry effect animations across views, rather than storing all the data in the current view.

If you ever need access to the old UIApplicationDelegate and NSApplicationDelegate methods and notifications, you should use the @UIApplicationDelegateAdaptor and @NSApplicationDelegateAdaptor property wrappers respectively. You provide these with the class of your app delegate, and they will make sure an instance is created and sent all appropriate notifications.

Earlier I described which property wrappers own their data, and really this comes down to sources of truth in your application: wrappers that own their data are sources of truth because they create and manage the value, and wrappers that do not own their data are not sources of truth because they get the value from somewhere else.

Property wrappers that are sources of truth

These create and manage values directly:

- @AppStorage
- @FetchRequest
- @GestureState
- @Namespace
- @NSApplicationDelegateAdaptor
- @Published
- @ScaledMetric
- @SceneStorage
- @State
- @StateObject
- @UIApplicationDelegateAdaptor

Property wrappers that are not sources of truth

These get their values from somewhere else:

- @Binding
- @Environment
- @EnvironmentObject
- @FocusedBinding
- @FocusedValue
- @ObservedObject
I want to refer to a value created elsewhere. You should use @Binding for value types, and either @ObservedObject or @EnvironmentObject for reference.
https://www.hackingwithswift.com/quick-start/swiftui/all-swiftui-property-wrappers-explained-and-compared
In Machine Learning, a "model" could be anything. The following pasta machine could be described as a model. It takes "input", and a set of "hyperparameters". Well, at least a couple of hyperparameters: the "cutter" determines whether the "output" is fettuccine or spaghetti, and the speed that you spin the crank determines the thickness of the noodles. Change the hyperparameters and you get different output.

Of course, the internals of a pasta machine don't change over time, and it doesn't produce better "output" every time you cook (given appropriate "loss" ratings from your dinner guests). That would be cool! So, the metaphor is not complete, but I do like the idea of Pasta Machine Learning.

The pasta machine shows that the idea of a model can vary between chefs, er, I mean data scientists. Thanks to scikit-learn, models in Python often have a similar interface to one another (for better or worse). For example, models often have methods like:

model.fit()
model.train()
model.test()

Regardless of what kind of model you are using in your Machine Learning code, Comet ML can handle it. Comet has a sophisticated system for logging, registering, versioning, and deploying machine learning models. The steps of the Comet model pipeline are:

- Log an Experiment Model, via Comet's Python SDK Experiment
- Register an Experiment Model
- Track Model Versions of the Registered Models
- Deploy a Registered Model

We'll go through each of these steps.

1. Log an Experiment Model

To Comet, a model can be composed of any files or folders. That is, a model can be any collection of files. The first step in the model pipeline is to log all of the associated files with a model name via an Experiment. If all of the files connected to a model are in a single folder (and all of the files there are related to the model), you can simply log the model with a name and the path to the folder. Here, we use the name "MNIST CNN" and all of the model files are in the "../models/run-026/" folder:

from comet_ml import Experiment

experiment = Experiment()
experiment.log_model("MNIST CNN", "../models/run-026/")

After you log a model via an Experiment, it will be listed in the Experiment's Assets tab. Clicking the + Register link will take you to the Registry view:

2. Register a Model

After you log a model through an Experiment through the SDK, you can then register it. Registered Models belong to a workspace, and are shared with your team there. You can register an Experiment's model in two ways:

1. Through the Experiment's Asset Tab in the Comet.ml User Interface
2. Programmatically through the Comet Python SDK

To register an experiment's model via the Comet User Interface, simply go to the Experiment's Asset tab, locate the model in the asset list, and click on the Register link. If you have previously registered a model, you can save this model to an existing version:

However, if you would like to save this as a new model, simply click the "Register New Model" link. That will bring up the Register Model window:

Here, you can give the registered model a name, a version, and set its visibility to Public or Private.

You can also register an experiment's model via the Python API. That would look similar to:

from comet_ml import API

api = API()
experiment = api.get("workspace/project/experiment")
experiment.register_model("MNIST CNN")

3. Track Model Versions

Registered models belong to the workspace. In this manner, you can share your models with the workspace team. At the workspace view, you can select between viewing the Projects or the Registered Models:

Selecting a registered model's card in the Model Registry will take you to the detailed view of the registered model:

Here you can see the latest version of the registered model, and all previous versions as well. Registered models contain the following properties, which can all be edited here:

- Registered name
- Description
- Visibility setting (public vs. private)
- Notes

4. Deploy Models

That's it! I hope you enjoyed our quick snack overview of Comet's Model Registry! For more information, please see: comet.ml/docs/user-interface/models/

Bon appétit!
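One practical footnote to step 1: since a Comet model is just a collection of files, it can help to sanity-check the model folder before calling log_model. The helper below is our own sketch, not part of the Comet SDK — collect_model_files is an invented name and the folder path is hypothetical.

```python
from pathlib import Path

def collect_model_files(model_dir):
    """Return a sorted list of all files under model_dir, raising if none exist."""
    root = Path(model_dir)
    files = sorted(p for p in root.rglob("*") if p.is_file())
    if not files:
        raise ValueError(f"no model files found under {root}")
    return files

# e.g. check the folder first, then log it as shown earlier:
# collect_model_files("../models/run-026/")
# experiment.log_model("MNIST CNN", "../models/run-026/")
```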
https://www.comet.ml/site/using-comet-model-registry/
Could not connect to remote port
Guru Prashanth Thanakodi, Apr 6, 2016 7:45 AM

Hi,

I am trying to change the log level from info to debug in the standalone.xml configuration file during run time using the ModelControllerClient. But I am unable to connect to the native interface. It fails with the error "Could not connect to remote port". I am using WildFly 9.0.0.Final. If I use the configuration mentioned in this page, WildFly fails to start saying invalid attribute interface. Can someone tell me what's wrong in the configuration?

Here is the configuration in the standalone.xml file:

<native-interface
<socket-binding
</native-interface>

Section referring to

Here is the code to do that:

client = ModelControllerClient.Factory.create(
        InetAddress.getByName("127.0.0.1"), 9999); // NOSONAR
op.get("name").set("level");
op.get("value").set(stringLevel);
client.execute(op);

Thanks
Guru
I may have to add a susbsystem from wildfly standalone.xml. Thanks Guru 5. Re: Could not connect to remote portErhard Siegl Apr 13, 2016 11:42 AM (in response to Guru Prashanth Thanakodi) I guess that the confusion comes from the fact that the documentation page The native management API - WildFly 9 - Project Documentation Editor was copied from The native management API - JBoss AS 7.1 - Project Documentation Editor and someone forgot to change port 9999 to 9990, since everything else is the same in WildFly compared to AS7. You don't have to configure anything special to use the native management API (on port 9990). The CLI tool that comes with the application server uses this interface, and user can develop custom clients that use it as well. Side note: I would strongly recommend not to use the standalone.xml from AS7 in WildFly9 but to start with the default configuration and make the changes your application needs. Personally I never edit the standalone.xml directly, but use scripts with "jboss-cli.sh --file=xxx" to do the changes. The commands for the CLI are much more stable than the XML-format and most configurations can be migrated to a newer version with no or only minor changes. 6. Re: Could not connect to remote portehugonnet Apr 14, 2016 3:57 AM (in response to Erhard Siegl) Hum I disagree. The native management use port 9999 if you configure a native interface. Now native is no longer the default way to connect, you should use the The HTTP management API - WildFly 10 - Project Documentation Editor . It is specified on the native management API that you have to create the interface (it's the first line actually). Maybe this is not enough visible but it is there 7. Re: Could not connect to remote portErhard Siegl Apr 16, 2016 12:21 PM (in response to ehugonnet) Well as I understand it the native management API is the thing that jboss-cli.sh uses and that can be used with java code. 
Maybe you can open another interface on port 9999, but I think for WildFly it is simpler to use the existing management port on 9990. But maybe it's better to look at an example. I have the following code with EAP6.2 (AS7): package jbnative; import java.io.IOException; import java.net.InetAddress; import org.jboss.as.controller.client.ModelControllerClient; import org.jboss.dmr.ModelNode; public class ReadVersion { public static void main(String[] args) throws IOException { ModelControllerClient client = ModelControllerClient.Factory .create(InetAddress.getByName("localhost"), 9999); ModelNode op = new ModelNode(); op.get("operation").set("read-attribute"); op.get("name").set("product-version"); ModelNode returnVal = client.execute(op); System.out.println(returnVal.get("result").toString()); client.close(); } } and the following pom.xml: <project xmlns="" xmlns:xsi="" xsi: <modelVersion>4.0.0</modelVersion> <groupId>at.gepardec.demo</groupId> <artifactId>jbnative</artifactId> <version>0.0.1-SNAPSHOT</version> <dependencies> <dependency> <groupId>org.jboss.as</groupId> <artifactId>jboss-as-controller-client</artifactId> <version>7.2.0.Final</version> </dependency> </dependencies> </project> When I start EAP6 with default configuration and run this code I get: "6.2.0.GA" Now I want to migrate this to WildFly (I use EAP-7.0.0.Beta). I change the connect to: ModelControllerClient client = ModelControllerClient.Factory .create(InetAddress.getByName("localhost"), 9990); and the dependency in pom.xml to: <dependency> <groupId>org.wildfly.core</groupId> <artifactId>wildfly-controller-client</artifactId> <version>2.0.3.Final-redhat-1</version> </dependency> I start WildFly with the default configuration, run the code and get: "7.0.0.Beta1" So I just have to change the port and use the current libraries. No fiddling with configuring additional interfaces. And to answer Gurus question, I would suggest he does just that. 8. 
Re: Could not connect to remote portehugonnet Apr 16, 2016 2:59 PM (in response to Erhard Siegl) Widlfy use http-remoting as its default remoting protocol where EAP 6/ AS7 used remote (from Jboss Remoting project). You can use the remote protocol on 9999 but that will require some configuration change on WildFly. Gurus has 2 options : change his code or change the configuration. Not knowing his constraints I left it to him to choose 9. Re: Could not connect to remote portGuru Prashanth Thanakodi Apr 29, 2016 1:20 AM (in response to Erhard Siegl) Thanks . The issue is that I used the standalone.xml from 7.1 directly. Finally I had to edit the standalone.xml manually and it worked . I will use the CLI instead of manually editing it.. Thanks Guru
https://developer.jboss.org/thread/268988
CC-MAIN-2018-39
refinedweb
1,097
50.53
Expand MetaWear--SDK to include Droid and BluetoothLE Bajet $30-250 USD I'm looking to add a new library to the mbielat/MetaWear-SDK-CSharp project so that I can use it in a Xamarin existing Android project. The SDK is on GitHub ([url removed, login to view]) and makes this comment: "Developers using this build, such as in a Xamarin Forms project, will need to plugin their own Bluetooth LE library to implement the interfaces defined in the [url removed, login to view] namespace." That's what I need help with, plugging in a BLE library to implement the interfaces from MbientLab. I would also then like a test Android application that proves that the library works and can find a MetaWear device. You'll need experience w/ MetaWear devices (and have one) (maybe) to work on this project. Project must build in Visual Studio 2017 and use the code from GitHub as the base.
https://www.my.freelancer.com/projects/mobile-phone/expand-metawear-sdk-include-droid/
CC-MAIN-2018-30
refinedweb
157
63.22
Introduction and Quickstart¶

Welcome to Pygments! This document explains the basic concepts and terms and gives a few examples of how to use the library.

Architecture¶

There are four types of components that work together highlighting a piece of code:

- A lexer splits the source into tokens, fragments of the source that have a token type that determines what the text represents semantically (e.g., keyword, string, or comment). There is a lexer for every language or markup format that Pygments supports.
- The token stream can be piped through filters, which usually modify the token types or text fragments, e.g. uppercasing all keywords.
- A formatter then takes the token stream and writes it to an output file, in a format such as HTML, LaTeX or RTF.
- While writing the output, a style determines how to highlight all the different token types. It maps them to attributes like "red and bold".

Example¶

Here is a small example for highlighting Python code:

from pygments import highlight
from pygments.lexers import PythonLexer
from pygments.formatters import HtmlFormatter

code = 'print "Hello World"'
print highlight(code, PythonLexer(), HtmlFormatter())

which prints something like this:

<div class="highlight">
<pre><span class="k">print</span> <span class="s">"Hello World"</span></pre>
</div>

As you can see, Pygments uses CSS classes (by default, but you can change that) instead of inline styles in order to avoid outputting redundant style information over and over. A CSS stylesheet that contains all CSS classes possibly used in the output can be produced by:

print HtmlFormatter().get_style_defs('.highlight')

The argument to get_style_defs() is used as an additional CSS selector: the output may look like this:

.highlight .k { color: #AA22FF; font-weight: bold }
.highlight .s { color: #BB4444 }
...

Options¶

The highlight() function supports a fourth argument called outfile; it must be a file object if given.
The formatted output will then be written to this file instead of being returned as a string.

Lexers and formatters both support options. They are given to them as keyword arguments, either to the class or to the lookup method:

    from pygments import highlight
    from pygments.lexers import get_lexer_by_name
    from pygments.formatters import HtmlFormatter

    lexer = get_lexer_by_name("python", stripall=True)
    formatter = HtmlFormatter(linenos=True, cssclass="source")
    result = highlight(code, lexer, formatter)

This makes the lexer strip all leading and trailing whitespace from the input (stripall option), lets the formatter output line numbers (linenos option), and sets the wrapping <div>'s class to source (instead of highlight).

Important options include:

- encoding (for lexers and formatters): since Pygments uses Unicode strings internally, this determines which encoding will be used to convert to or from byte strings.
- style (for formatters): the name of the style to use when writing the output.

For an overview of builtin lexers and formatters and their options, visit the lexer and formatter lists. For documentation on filters, see this page.

Lexer and formatter lookup

If you want to look up a built-in lexer by its alias or a filename, you can use one of the following methods:

    >>> from pygments.lexers import (get_lexer_by_name,
    ...     get_lexer_for_filename, get_lexer_for_mimetype)

    >>> get_lexer_by_name('python')
    <pygments.lexers.PythonLexer>

    >>> get_lexer_for_filename('spam.rb')
    <pygments.lexers.RubyLexer>

    >>> get_lexer_for_mimetype('text/x-perl')
    <pygments.lexers.PerlLexer>

All these functions accept keyword arguments; they will be passed to the lexer as options.

A similar API is available for formatters: use get_formatter_by_name() and get_formatter_for_filename() from the pygments.formatters module for this purpose.
Guessing lexers

If you don't know the content of the file, or you want to highlight a file whose extension is ambiguous, such as .html (which could contain plain HTML or some template tags), use these functions:

    >>> from pygments.lexers import guess_lexer, guess_lexer_for_filename

    >>> guess_lexer('#!/usr/bin/python\nprint "Hello World!"')
    <pygments.lexers.PythonLexer>

    >>> guess_lexer_for_filename('test.py', 'print "Hello World!"')
    <pygments.lexers.PythonLexer>

guess_lexer() passes the given content to the lexer classes' analyse_text() method and returns the one for which it returns the highest number.

All lexers have two different filename pattern lists: the primary and the secondary one. The get_lexer_for_filename() function only uses the primary list, whose entries are supposed to be unique among all lexers. guess_lexer_for_filename(), however, will first loop through all lexers and look at the primary and secondary filename patterns if the filename matches. If only one lexer matches, it is returned; otherwise the guessing mechanism of guess_lexer() is used with the matching lexers.

As usual, keyword arguments to these functions are given to the created lexer as options.

Command line usage

You can use Pygments from the command line, using the pygmentize script:

    $ pygmentize test.py

will highlight the Python file test.py using ANSI escape sequences (a.k.a. terminal colors) and print the result to standard output.

To output HTML, use the -f option:

    $ pygmentize -f html -o test.html test.py

to write an HTML-highlighted version of test.py to the file test.html. Note that it will only be a snippet of HTML; if you want a full HTML document, use the "full" option:

    $ pygmentize -f html -O full -o test.html test.py

This will produce a full HTML document with an included stylesheet. A style can be selected with -O style=<name>.
If you need a stylesheet for an existing HTML file using Pygments CSS classes, it can be created with:

    $ pygmentize -S default -f html > style.css

where default is the style name. More options and tricks can be found in the command line reference.
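To make the lexer → filter → formatter architecture described above concrete, here is a toy version of that pipeline in plain Python. It is an illustration only: the function names, token types, and CSS classes here are invented for this sketch and do not correspond to the real Pygments API.

```python
import html
import re

# A toy illustration of the Pygments pipeline -- NOT the real API.
KEYWORDS = {"print", "def", "return"}

def lex(code):
    """Lexer: split source into (token_type, text) pairs."""
    for match in re.finditer(r"\w+|\s+|.", code):
        text = match.group()
        if text in KEYWORDS:
            yield ("k", text)   # keyword
        elif text.isspace():
            yield ("w", text)   # whitespace
        else:
            yield ("n", text)   # name / anything else

def uppercase_keywords(tokens):
    """Filter: modify the token stream (here, uppercase all keywords)."""
    for ttype, text in tokens:
        yield (ttype, text.upper() if ttype == "k" else text)

def format_html(tokens):
    """Formatter: write the token stream out as HTML spans with CSS classes."""
    spans = "".join('<span class="%s">%s</span>' % (t, html.escape(s))
                    for t, s in tokens)
    return '<div class="highlight"><pre>%s</pre></div>' % spans

print(format_html(uppercase_keywords(lex('print x'))))
```

The real library works the same way at a high level: a lexer produces a token stream, zero or more filters transform it, and a formatter renders it using a style.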
http://pygments.org/docs/quickstart/
Keen IO Official Ruby Client Library

keen-gem is the official Ruby client for the Keen IO API. The Keen IO API lets developers build analytics features directly into their apps.

Installation

Add to your Gemfile:

    gem 'keen'

or install from Rubygems:

    gem install keen

keen is tested with Ruby 1.9.3+ on:

- MRI
- Rubinius
- jRuby (except for asynchronous methods - no TLS support for EM on jRuby)

Usage

Before making any API calls, you must supply keen-gem with a Project ID and one or more authentication keys. (If you need a Keen IO account, sign up here - it's free.)

Setting a write key is required for publishing events. Setting a read key is required for running queries. Setting a master key is required for performing deletes. You can find keys for all of your projects on keen.io.

The recommended way to set keys is via the environment. The keys you can set are KEEN_PROJECT_ID, KEEN_WRITE_KEY, KEEN_READ_KEY and KEEN_MASTER_KEY. You only need to specify the keys that correspond to the API calls you'll be performing. If you're using foreman, add this to your .env file:

    KEEN_PROJECT_ID=aaaaaaaaaaaaaaa
    KEEN_MASTER_KEY=xxxxxxxxxxxxxxx
    KEEN_WRITE_KEY=yyyyyyyyyyyyyyy
    KEEN_READ_KEY=zzzzzzzzzzzzzzz

If not, make a script to export the variables into your shell or put it before the command you use to start your server. When you deploy, make sure your production environment variables are set. For example, set config vars on Heroku. (We recommend this environment-based approach because it keeps sensitive information out of the codebase. If you can't do this, see the alternatives below.)

Once your environment is properly configured, the Keen object is ready to go immediately.

Synchronous Publishing

Publishing events requires that KEEN_WRITE_KEY is set. Publish an event like this:

    Keen.publish(:sign_ups, {
      :username => "lloyd",
      :referred_by => "harry"
    })

This will publish an event to the sign_ups collection with the username and referred_by properties set. The event properties can be any valid Ruby hash.
Nested properties are allowed. Lists of objects are also allowed, but not recommended because they can be difficult to query over. See alternatives to lists of objects here. You can learn more about data modeling with Keen IO with the Data Modeling Guide.

Protip: Marshalling gems like Blockhead make converting structs or objects to hashes easier.

The event collection need not exist in advance. If it doesn't exist, Keen IO will create it on the first request.

Asynchronous publishing

Publishing events shouldn't slow your application down or make users wait longer for page loads & server requests. The Keen IO API is fast, but any synchronous network call you make will negatively impact response times. For this reason, we recommend you use the publish_async method to send events when latency is a concern. Alternatively, you can drop events into a background queue (e.g. Delayed Job) and publish synchronously from there.

To publish asynchronously, first add em-http-request to your Gemfile. Make sure it's version 1.0 or above.

    gem "em-http-request", "~> 1.0"

Next, run an instance of EventMachine. If you're using an EventMachine-based web server like thin or goliath you're already doing this. Otherwise, you'll need to start an EventMachine loop manually as follows:

    require 'em-http-request'
    Thread.new { EventMachine.run }

The best place for this is in an initializer, or anywhere that runs when your app boots up. Here's a useful blog article that explains more about this approach - EventMachine and Passenger. And here's a gist that shows an example of EventMachine with Unicorn, specifically the Unicorn config for starting and stopping EventMachine after forking.

Now, in your code, replace publish with publish_async. Bind callbacks if you require them.
    http = Keen.publish_async("sign_ups", {
      :username => "lloyd",
      :referred_by => "harry"
    })
    http.callback { |response| puts "Success: #{response}" }
    http.errback { puts "was a failurrr :,(" }

This will schedule the network call into the event loop and allow your request thread to resume processing immediately.

Running queries

The Keen IO API provides rich querying capabilities against your event data set. For more information, see the Data Analysis API Guide. Running queries requires that KEEN_READ_KEY is set.

Here are some examples of querying with keen-gem. Let's assume you've added some events to the "purchases" collection.

    # Various analysis types
    Keen.count("purchases") # => 100
    Keen.sum("purchases", :target_property => "price", :timeframe => "today") # => 10000
    Keen.minimum("purchases", :target_property => "price", :timeframe => "today") # => 20
    Keen.maximum("purchases", :target_property => "price", :timeframe => "today") # => 100
    Keen.average("purchases", :target_property => "price", :timeframe => "today") # => 60
    Keen.median("purchases", :target_property => "price", :timeframe => "today") # => 60
    Keen.percentile("purchases", :target_property => "price", :percentile => 90, :timeframe => "today") # => 100
    Keen.count_unique("purchases", :target_property => "username", :timeframe => "today") # => 3
    Keen.select_unique("purchases", :target_property => "username", :timeframe => "today") # => ["Bob", "Linda", "Travis"]

    # Group by's and filters
    Keen.sum("purchases", :target_property => "price", :group_by => "item.id", :timeframe => "this_14_days") # => [{ "item.id" => 123, "result" => 240 }]
    Keen.count("purchases", :timeframe => "today", :filters => [{
      "property_name" => "referred_by",
      "operator" => "eq",
      "property_value" => "harry"
    }]) # => 2

    # Relative timeframes
    Keen.count("purchases", :timeframe => "today") # => 10

    # Absolute timeframes
    Keen.count("purchases", :timeframe => {
      :start => "2015-01-01T00:00:00Z",
      :end => "2015-01-31T00:00:00Z"
    }) # => 5

    # Extractions
    Keen.extraction("purchases", :timeframe => "today") # => [{ "keen" => { "timestamp" => "2014-01-01T00:00:00Z" }, "price" => 20 }]

    # Funnels
    Keen.funnel(:steps => [
      { :actor_property => "username", :event_collection => "purchases", :timeframe => "yesterday" },
      { :actor_property => "username", :event_collection => "referrals", :timeframe => "yesterday" }
    ]) # => [20, 15]

    # Multi-analysis
    Keen.multi_analysis("purchases", analyses: {
      :gross => { :analysis_type => "sum", :target_property => "price" },
      :customers => { :analysis_type => "count_unique", :target_property => "username" }
    }, :timeframe => 'today', :group_by => "item.id")
    # => [{ "item.id" => 2, "gross" => 314.49, "customers" => 8 }]

Many of these queries can be performed with group by, filters, series and intervals. The response is returned as a Ruby Hash or Array. Detailed information on available parameters for each API resource can be found on the API Technical Reference.

The Query Method

You can also specify the analysis type as a parameter to a method called query:

    Keen.query("median", "purchases", :target_property => "price") # => 60

This simplifies querying code where the analysis type is dynamic.

Query Options

Each query method or alias takes an optional hash of options as an additional parameter. Possible keys are:

- :response - Set to :all_keys to return the full API response (usually only the value of the "result" key is returned).
- :method - Set to :post to run the query via a POST request body instead of query-string parameters.

Query Logging

You can log all GET and POST queries automatically by setting the log_queries option.

    Keen.log_queries = true
    Keen.count('purchases')
    # I, [2016-10-30T11:45:24.678745 #9978]  INFO -- : [KEEN] Send GET query to <YOUR_PROJECT_ID>/queries/count?event_collection=purchases with options {}

Saved Queries

You can manage your saved queries from the Keen ruby client.
    # Create a saved query
    Keen.saved_queries.create("name", saved_query_attributes)

    # Get all saved queries
    Keen.saved_queries.all

    # Get one saved query
    Keen.saved_queries.get("saved-query-slug")

    # Get saved query with results
    Keen.saved_queries.get("saved-query-slug", results: true)

    # Update a saved query
    saved_query_attributes = { refresh_rate: 14400 }
    Keen.saved_queries.update("saved-query-slug", saved_query_attributes)

    # Delete a saved query
    Keen.saved_queries.delete("saved-query-slug")

Getting Query URLs

Sometimes you just want the URL for a query, but don't actually need to run it. Maybe to paste into a dashboard, or open in your browser. In that case, use the query_url method:

    Keen.query_url("median", "purchases", { :target_property => "price" }, { :timeframe => "today" })
    # => "<project-id>/queries/median?target_property=price&event_collection=purchases&api_key=<api-key>"

If you don't want the API key included, pass the :exclude_api_key option:

    Keen.query_url("median", "purchases", { :target_property => "price", :timeframe => "today" }, :exclude_api_key => true)
    # => "<project-id>/queries/median?target_property=price&event_collection=purchases"

Listing collections

The Keen IO API lets you get the event collections for the current project, including each collection's properties and their types. It also returns links to the collection resource.

    Keen.event_collections # => [{ "name": "purchases", "properties": { "item.id": "num", ... }, ... }]

Getting the list of event collections requires that the KEEN_MASTER_KEY is set.

Deleting events

The Keen IO API allows you to delete events from event collections, optionally supplying a filter to narrow the scope of what you would like to delete. Deleting events requires that the KEEN_MASTER_KEY is set.
    # Assume some events in the 'signups' collection

    # We can delete them all
    Keen.delete(:signups) # => true

    # Or just delete an event corresponding to a particular user
    Keen.delete(:signups, filters: [{
      :property_name => 'username',
      :operator => 'eq',
      :property_value => "Bob"
    }]) # => true

Other code examples

Overriding event timestamps: Keen IO records a timestamp when it receives an event, but you can set it yourself by supplying a keen.timestamp property:

    Keen.publish(:sign_ups, {
      :keen => { :timestamp => "2012-12-14T20:24:01.123000+00:00" },
      :username => "lloyd",
      :referred_by => "harry"
    })

Batch publishing

The keen-gem supports publishing events in batches via the publish_batch method. Here's an example usage:

    Keen.publish_batch(
      :signups => [
        { :name => "Bob" },
        { :name => "Mary" }
      ],
      :purchases => [
        { :price => 10 },
        { :price => 20 }
      ]
    )

This call would publish four events (2 signups and 2 purchases) in just one API call. Batch publishing is ideal for loading historical events into Keen IO.

Asynchronous batch publishing

Ensuring the above guidance for asynchronous publishing is followed, batch publishing logic can be used asynchronously with publish_batch_async:

    Keen.publish_batch_async(
      :signups => [
        { :name => "Bob" },
        { :name => "Mary" }
      ],
      :purchases => [
        { :price => 10 },
        { :price => 20 }
      ]
    )

Configurable and per-client authentication

To configure keen-gem in code, do as follows:

    Keen.project_id = 'xxxxxxxxxxxxxxx'
    Keen.write_key = 'yyyyyyyyyyyyyyy'
    Keen.read_key = 'zzzzzzzzzzzzzzz'
    Keen.master_key = 'aaaaaaaaaaaaaaa'

You can also configure unique client instances as follows:

    keen = Keen::Client.new(:project_id => 'xxxxxxxxxxxxxxx',
                            :write_key => 'yyyyyyyyyyyyyyy',
                            :read_key => 'zzzzzzzzzzzzzzz',
                            :master_key => 'aaaaaaaaaaaaaaa')

em-synchrony

keen-gem can be used with em-synchrony. If you call publish_async and EM::Synchrony is defined, the method will return the response directly. (It does not return the deferrable on which to register callbacks.) Likewise, it will raise exceptions 'synchronously' should they happen.

Beacon URLs

It's possible to publish events to your Keen IO project using the HTTP GET method.
This is useful for situations like tracking email opens using image beacons. In this situation, the JSON event data is passed by encoding it base-64 and adding it as a request parameter called data. The beacon_url method found on the Keen::Client does this for you. Here's an example:

    Keen.project_id = 'xxxxxx'
    Keen.write_key = 'yyyyyy'
    Keen.beacon_url("sign_ups", :recipient => "[email protected]")
    # => ""

To track email opens, simply add an image to your email template that points to this URL. For further information on how to do this, see the image beacon documentation.

Redirect URLs

Redirect URLs are just like image beacon URLs with the addition of a redirect query parameter. This parameter is used to issue a redirect to a certain URL after an event is recorded.

    Keen.redirect_url("sign_ups", { :recipient => "[email protected]" }, "")
    # => ""

This is helpful for tracking email clickthroughs. See the redirect documentation for further information.

Generating scoped keys

Note: scoped keys are now deprecated in favor of access keys.

A scoped key is a string, generated with your API key, that represents some encrypted authentication and query options. Use them to control what data queries have access to.

    # "my-api-key" should be your MASTER API key
    scoped_key = Keen::ScopedKey.new("my-api-key", {
      "filters" => [{
        "property_name" => "accountId",
        "operator" => "eq",
        "property_value" => "123456"
      }]
    }).encrypt! # "4d1982fe601b359a5cab7ac7845d3bf27026936cdbf8ce0ab4ebcb6930d6cf7f139e..."

You can use the scoped key created in Ruby for API requests from any client. Scoped keys are commonly used in JavaScript, where credentials are visible and need to be protected.

Access Keys

You can use access keys to restrict the functionality of a key you use with the Keen API. Access keys can also enrich events that you send. Read up on the full key body options.
Create a key that automatically adds information to each event published with that key:

    key_body = {
      "name" => "autofill foo",
      "is_active" => true,
      "permitted" => ["writes"],
      "options" => {
        "writes" => {
          "autofill" => { "foo" => "bar" }
        }
      }
    }
    new_key = client.access_keys.create(key_body)
    autofill_write_key = new_key["key"]

You can revoke and unrevoke keys to disable or enable access. all will return all current keys for the project, while get("key-value-here") will return info for a single key. You can also update and delete keys.

Additional options

HTTP Read Timeout

The default Net::HTTP timeout is 60 seconds. That's usually enough, but if you're querying over a large collection you may need to increase it. The timeout on the API side is 300 seconds, so that's as far as you'd want to go. You can configure a read timeout (in seconds) by setting a KEEN_READ_TIMEOUT environment variable, or by passing a read_timeout option to the client constructor as follows:

    keen = Keen::Client.new(:read_timeout => 300)

You can also configure the Net::HTTP open timeout, which defaults to 60 seconds. To configure it (in seconds), either set the KEEN_OPEN_TIMEOUT environment variable or pass an open_timeout option to the client constructor as follows:

    keen = Keen::Client.new(:open_timeout => 30)

HTTP Proxy

You can set the KEEN_PROXY_TYPE and KEEN_PROXY_URL environment variables to enable HTTP proxying. KEEN_PROXY_TYPE should be set to socks5. You can also configure this on client instances by passing in proxy_type and proxy_url keys.

    keen = Keen::Client.new(:proxy_type => 'socks5', :proxy_url => '')

Troubleshooting

EventMachine

If you run into Keen::Error: Keen IO Exception: An EventMachine loop must be running to use publish_async calls or Uncaught RuntimeError: eventmachine not initialized: evma_set_pending_connect_timeout, this means that the EventMachine loop has died. This can happen for a variety of reasons, and every app is different.
Issue #22 shows how to add some extra protection to avoid this situation.

publish_async in a script or worker

If you write a script that uses publish_async, you need to keep the script alive long enough for the call(s) to complete. EventMachine itself won't do this because it runs in a different thread. Here's an example gist that shows how to exit the process after the event has been recorded.

Additional Considerations

Bots

It's not just us humans that browse the web. Spiders, crawlers, and bots share the pipes too. When it comes to analytics, this can cause a mild headache. Events generated by bots can inflate your metrics and eat up your event quota. If you want some bot protection, check out the Voight-Kampff gem. Use the gem's request.bot? method to detect bots and avoid logging events.

Changelog

1.1.1
- Added an option to log queries
- Added a cli option that includes the Keen code

1.1.0
- Add support for access keys
- Move saved queries into the Keen namespace
- Deprecate scoped keys in favor of access keys

1.0.0
- Remove support for ruby 1.9.3
- Update a few dependencies

0.9.10
- Add ability to set the open_timeout setting for the http client.

0.9.9
- Added the ability to send additional optional headers.

0.9.7
- Added a new header, Keen-Sdk, that sends the SDK version information on all requests.

0.9.6
- Updated behavior of saved queries to allow fetching results using the READ KEY as opposed to requiring the MASTER KEY, making the gem consistent with

0.9.5
- Fix bug with scoped key generation not working with newer Keen projects.

0.9.4
- Add SDK support for Saved Queries
- Removed support for Ruby MRI 1.8.7

0.9.2
- Added support for max_age as an integer.

0.9.1
- Added support for setting an IV for scoped keys. Thanks @anatolydwnld

0.8.10
- Added support for posting queries. Thanks @soloman1124.

0.8.9
- Fix proxy support for sync client. Thanks @nvieirafelipe!
0.8.8
- Add support for a configurable read timeout

0.8.7
- Add support for returning all keys back from query API responses

0.8.6
- Add support for getting query URLs
- Make the query method public so code supporting dynamic analysis types is easier to write

0.8.4
- Add support for getting project details

0.8.3
- Add support for getting a list of a project's collections

0.8.2
- Add support for median and percentile analysis
- Support arrays for the extraction property_names option

0.8.1
- Add support for asynchronous batch publishing

0.8.0
- UPGRADE WARNING: Do you use spaces in collection names? Or other special characters? Read this post from the mailing list to make sure your collection names don't change.
- Add support for generating scoped keys.
- Make collection name encoding more robust. Make sure collection names are encoded identically for publishing events, running queries, and performing deletes.
- Add support for grouping by multiple properties.

0.7.8
- Add support for redirect URL creation.

0.7.7
- Add support for HTTP and SOCKS proxies. Set KEEN_PROXY_URL to the proxy URL and KEEN_PROXY_TYPE to 'socks5' if you need to. These properties can also be set on client instances as proxy_url and proxy_type.
- Delegate the master_key field from the Keen object.

0.7.6
- Explicitly require CGI.

0.7.5
- Use CGI.escape instead of URI.escape to get accurate URL encoding for certain characters

0.7.4
- Add support for deletes (thanks again cbartlett!)
- Allow event collection names for publishing/deleting methods to be symbols

0.7.3
- Add batch publishing support
- Allow event collection names for querying methods to be symbols. Thanks to cbartlett.

0.7.2
- Fix support for non-https API URL testing

0.7.1
- Allow configuration of the base API URL via the KEEN_API_URL environment variable. Useful for local testing and proxies.

0.7.0
- BREAKING CHANGE! Added support for read and write scoped keys to reflect the new Keen IO security architecture.
The advantage of scoped keys is finer grained permission control. Public clients that publish events (like a web browser) require a key that can write but not read. On the other hand, private dashboards and server-side querying processes require a read key that should not be made public.

0.6.1
- Improved logging and exception handling.

0.6.0
- Added querying capabilities. A big thanks to ifeelgoods for contributing!

0.5.0
- Removed API Key as a required field on Keen::Client. Only the Project ID is required to publish events.
- You can continue to provide the API Key. Future features planned for this gem will require it. But for now, there is no keen-gem functionality that uses it.

0.4.4
- Event collections are URI escaped to account for spaces.
- User agent of API calls made more granular to aid in support cases.
- Throw arguments error for nil event_collection and properties arguments.

0.4.3
- Added beacon_url support
- Add support for using em-synchrony with asynchronous calls

Questions & Support

For questions, bugs, or suggestions about this gem: file a GitHub issue.
For other Keen IO related technical questions: 'keen-io' on Stack Overflow.
For general Keen IO discussion & feedback: the 'keen-io-devs' Google Group.

Contributing

keen-gem is an open source project and we welcome your contributions. Fire away with issues and pull requests!

Running Tests

- bundle exec rake spec - Run unit specs. HTTP is mocked.
- bundle exec rake integration - Run integration specs with the real API. Requires env variables. See .travis.yml.
- bundle exec rake synchrony - Run async publishing specs with EM::Synchrony.

Similarly, you can use guard to listen for changes to files and run specs:

    bundle exec guard -g unit
    bundle exec guard -g integration
    bundle exec guard -g synchrony

Running a Local Console

You can spawn an irb session with the local files already loaded for debugging or experimentation.
    $ bundle exec rake console
    2.2.6 :001 > Keen
     => Keen

Community Contributors

Thanks everyone, you rock!
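The base-64 encoding scheme described in the Beacon URLs section above can be sketched with nothing but Ruby's standard library. Note this is an illustration of the scheme, not the gem's implementation: the host, path layout, and parameter names here are assumptions, and real code should call Keen.beacon_url instead.

```ruby
require 'json'
require 'base64'
require 'uri'

# Sketch of the beacon URL idea: serialize the event to JSON, base-64
# encode it, and pass it as a `data` query parameter. The URL layout is
# an assumption for illustration only.
def beacon_url_sketch(project_id, write_key, collection, event)
  data = Base64.strict_encode64(JSON.generate(event))
  "https://api.keen.io/3.0/projects/#{project_id}/events/" \
    "#{URI.encode_www_form_component(collection)}" \
    "?api_key=#{write_key}&data=#{URI.encode_www_form_component(data)}"
end

beacon_url_sketch('xxxxxx', 'yyyyyy', 'sign_ups', 'recipient' => '[email protected]')
```

Decoding the data parameter on the server side is the reverse: URL-decode, base-64 decode, then parse the JSON.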
https://www.rubydoc.info/gems/keen/1.1.1
Surface Point

A SurfacePoint is a generic location on a surface, which might be at a vertex, along an edge, or inside a face. Surface points are used throughout geometry-central for methods that input or output arbitrary locations on surfaces.

    #include "geometrycentral/surface/surface_point.h"

The field SurfacePoint::type is an enum:

    enum class SurfacePointType { Vertex = 0, Edge, Face };

which indicates what kind of point this is.

- If the surface point is a vertex, the field SurfacePoint::vertex indicates which vertex. Otherwise it is the null default vertex.
- If the surface point is along an edge, the field SurfacePoint::edge indicates which edge. Otherwise it is the null default edge. The field SurfacePoint::tEdge indicates the location along that edge, in the range [0,1], with 0 at edge.halfedge().vertex().
- If the surface point is inside a face, the field SurfacePoint::face indicates which face. Otherwise it is the null default face. The field SurfacePoint::faceCoords indicates the location inside that face, as barycentric coordinates (numbered according to the iteration order of vertices about the face, as usual).

Surface points have a few useful utility methods:

T SurfacePoint::interpolate(const VertexData<T>& data)

Given data of template type T defined at vertices, linearly interpolates to a value at this location.

SurfacePoint SurfacePoint::inSomeFace()

All surface points (vertex, edge, face) have an equivalent point in one or more adjacent faces. For instance, a vertex could equivalently be a point in any of the incident faces, with a single barycentric coordinate of 1, or a point on an edge could be a point in either of the two adjacent faces. This function returns one of the equivalent surface points in a face (chosen arbitrarily). If this point is a face point, the output is a copy of this point.

Vertex SurfacePoint::nearestVertex()

Returns the nearest vertex which is adjacent to this point. For surface points which are vertices, it will return the same vertex.
For surface points which are along edges, it will return one of the two incident vertices. For surface points which are inside faces, it will return one of the three incident vertices.
https://geometry-central.net/surface/utilities/surface_point/
Introduction I’ve been writing software since 1978. Which is to say that I’ve seen many paradigm changes. I witnessed the inception of object oriented programming. I first became aware of objects when I read a Byte magazine article on a language called SmallTalk (issued August 1981). I read and re-read that article many times to try and understand what the purpose of object oriented programming was. Object oriented programming took ten more years before programmers began to recognize it. In the early 90’s the University of Michigan only taught a few classes using object oriented C++. It was still new and shiny. Now all languages are object oriented, or they are legacy languages. The web was another major paradigm that I witnessed. Before the browser was invented (while I was in college), all programs were written to be executed on the machine it was run on. I was immersed in the technology of the Internet while I was a student at UofM and we used tools such as Telnet, FTP, Archie, DNS, and Gopher (to name a few), to navigate and find information. The Internet was primarily composed of data about programming. When Mosaic came along as well as HTML, the programming world went crazy. The technology didn’t mature until the early 2000’s. Many programming languages were thrown together to accommodate the new infrastructure (I’m looking at you “Classic ASP”). Extreme programming came of age in the late 90’s. I did not get involved in XP until the mid 2000’s. Waterfall was the way things were done. The industry was struggling with automated testing suites. Unit testing came onto the scene, but breaking dependencies was an unknown quantity. It took some years before somebody came up with the idea of inversion of control. The idea was so abstract that most programmers ignored it and moved on. The latest paradigm change, and it’s much bigger than most will give it credit for is the IOC container. Even Microsoft has incorporated this technology into their latest tools. 
IOC is part of .Net Core. If you're a programmer and you haven't used IOC containers yet, or you don't understand the underlying reason for it, you better get on the bandwagon. I predict that within five years, IOC will be recognized as the industry standard, even for companies that build software for their internal use only. It will be difficult to get a job as a programmer without understanding this technology. Don't believe me? Pretend you're a software engineer with no object oriented knowledge. Now search for a job and see what results come up. Grim, isn't it?

Where am I going with this? I currently work for a company that builds medical software. We have a huge collection of legacy code. I'm too embarrassed to admit how large this beast is. It just makes me cry. Our company uses the latest tools and we have advanced developers who know how to build IOC containers, unit tests, properly scoped objects, etc. We also practice XP, to a limited degree. We do the SCRUMs, stand-ups, code-reviews (sometimes), demos, and sprint planning meetings.

What we don't do is unit testing. Oh we have unit tests, but the company mandate is that they are optional. When there is extra time to build software, unit tests are incorporated. Only a small handful of developers incorporate unit tests into their software development process. Even I have built some new software without unit tests (and I've paid the price). The bottom line is that unit tests are not part of the software construction process. The company is staffed with programmers that are unfamiliar with TDD (Test Driven Development) and in fact, most are unfamiliar with unit testing altogether. Every developer has heard of unit tests, but I suspect that many are not sold on the concept of the unit test. Many developers look at unit testing as just more work. There are the usual arguments against unit testing: they need to be maintained, they become obsolete, they break when I refactor code, etc.
These are old arguments that were disproved years ago, but, like myths, they get perpetuated forever.

I'm going to divert from the subject a bit here, just to show how crazy this is. Our senior developers have gathered in many meetings to discuss the agreed upon architecture that we are aiming for. That architecture is not much different from any other company's: break our monolithic application into smaller APIs, use IOC containers, separate database concerns from business and business from the front-end. We have a couple dozen APIs and they were written with this architecture in mind. They are all written with IOC containers. We use Autofac for our .Net applications and .Net Core has its own IOC container technology. Some of these APIs have unit tests. These tests were primarily added after the code was written, which is OK. Some of our APIs have no unit tests. This is not OK.

So the big question is: why go through the effort of using an IOC container in the first place, if there is no plan for unit tests? The answer is usually "to break dependencies." Which is correct, except, why? Why did anybody care about breaking dependencies? Just breaking dependencies gains nothing. The IOC container itself does not help with the maintainability of the code. Is it safer to refactor code with an IOC container? No. Is it easier to troubleshoot and fix bugs in code that has dependencies broken? Not unless you're using unit tests. My only conclusion to this crazy behavior is that developers don't understand the actual purpose of unit testing.

Unit Tests are Part of the Development Process

The most difficult part of creating unit tests is breaking dependencies. IOC containers make that a snap. Every object (with some exceptions) should be put into the container. If an object instance must be created by another object, then it must be created inside the IOC container. This will break the dependency for you. Now unit testing is easy.
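As a hypothetical illustration (the class and interface names here are invented, not from the original article), constructor injection is what makes this work: the object never constructs its own dependency, so the container can supply the real implementation at runtime and a unit test can supply a mock.

```csharp
// Hypothetical example of constructor injection. The container (e.g.
// Autofac) resolves IOrderRepository at runtime; a unit test passes a
// mock implementation instead, so OrderService can be tested alone.
public interface IOrderRepository
{
    int CountOrders(int customerId);
}

public class OrderService
{
    private readonly IOrderRepository _repository;

    // Injected by the IOC container, or by a test.
    public OrderService(IOrderRepository repository)
    {
        _repository = repository;
    }

    public bool HasOrders(int customerId) => _repository.CountOrders(customerId) > 0;
}
```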
Just focus on one object at a time and write tests for that object. If the object needs other objects to be injected, then use a mocking framework to mock those objects.

As a programmer, you'll need to go farther than this. If you want to build code that can be maintained, you'll need to build your unit tests first, or at least concurrently. You cannot run through like the Tasmanian devil, building your code, and then follow up with a handful of unit tests. You might think you're clever by using a coverage tool to make sure you have full code coverage, but I'm going to show an example where code coverage is not the only reason for unit testing.

Your workflow must change. At first, it will slow you down, like learning a new language. Keep working at it and eventually, you don't have to think about the process. You just know. I can tell you from experience that I don't even think about how I'm going to build a unit test. I just look at what I'll need to test and I know what I need to do. It took me years to get to this point, but I can say, hands down, that unit testing makes my workflow faster. Why? Because I don't have to run the program in order to test for all the edge cases. I write one unit test at a time and I run that test against my object. I use unit testing as a harness for my objects. That is the whole point of using an IOC container. First, you take care of the dependencies, then you focus on one object at a time.

Example

I'm sure you're riveted by my rambling prose, but I'm going to prove what I'm talking about. At least on a small scale. Maybe this will change your mind, maybe it won't.

Let's say, for example, I was writing some sort of API that needed to return a set of patient records from the database. One of the requirements is that the calling program can feed filter parameters to select a date range for the records desired. There is a start date and an end date filter parameter. Furthermore, each date parameter can be null.
If both are null, then give me all records. If the start parameter is null, then give me up to the end date. If the end date is null, then give me from the start date to the latest record. The data in the database will return a date when the patient saw the doctor. This is hypothetical, but based on a real program that my company uses. I'm sure this scenario is used by any company that queries a database for web use, so I'm going to use it.

Let's say the program is progressing like this:

```csharp
public class PatientData
{
    private DataContext _context;

    public List<PatientVisit> GetData(int patientId, DateTime? startDate, DateTime? endDate)
    {
        var filterResults = _context.PatientVisits.Where(x => x.BetweenDates(startDate, endDate));
        return filterResults.ToList();
    }
}
```

You don't want to include the date range logic in your LINQ query, so you create an extension method to handle that part. Your next task is to write the ugly code called "BetweenDates()". This will be a static extension class that will be used with any of your PatientVisit POCOs. If you're unfamiliar with a POCO (Plain Old CLR Object), then here's a simple example:

```csharp
public class PatientVisit
{
    public int PatientId { get; set; }
    public DateTime VisitDate { get; set; }
}
```

This is used by Entity Framework in a context. If you're still confused, please search through my blog for Entity Framework subjects and get acquainted with the technique.

Back to the "BetweenDates()" method. Here's the empty shell of what needs to be written:

```csharp
public static class PatientVisitHelpers
{
    public static bool BetweenDates(this PatientVisit patientVisit, DateTime? startDate, DateTime? endDate)
    {
    }
}
```

Before you start to put logic into this method, start thinking about all the edge cases that you will be required to test. If you run in like a tribe of Comanche Indians and throw the logic into this method, you'll be left with a manual testing job that will probably take you half a day (assuming you're thorough).
Later, down the road, if someone discovers a bug, you will need to fix this method and then redo all the manual tests. Here's where unit tests are going to make your job easy. The unit tests are going to be part of the discovery process.

What Discovery?

One aspect of writing software that is different from any other engineering subject is that every project is new. We don't know what has to be built until we start to build it. Then we "discover" aspects of the problem that we never anticipated before. In this sample, I'll show how that occurs.

Let's list the rules:

- If the dates are both null, give me all records.
- If the first date is null, give me all records up to that date (including the date).
- If the last date is null, give me all records from the starting date (including the start date).
- If both dates exist, then give me all records, including the start and end dates.

According to this list, there should be at least four unit tests. If you discover any edge cases, you'll need a unit test for each edge case. If a bug is discovered, you'll need to add a unit test to simulate the bug and then fix the bug. Which tells you that you'll keep adding unit tests to a project every time you fix a bug or add a feature (with the exception that one or more unit tests were incorrect in the first place). An incorrect unit test usually occurs when you misinterpret the requirements. In such an instance, you'll fix the unit test and then fix your code.

Now that we have determined that we need four unit tests, create four empty unit test methods:

```csharp
public class PatientVisitBetweenDates
{
    [Fact]
    public void BothDatesAreNull()
    {
    }

    [Fact]
    public void StartDateIsNull()
    {
    }

    [Fact]
    public void EndDateIsNull()
    {
    }

    [Fact]
    public void BothDatesPresent()
    {
    }
}
```

I have left out the IOC container code from my examples. I am testing a static object that has no dependencies; therefore, it does not need to go into a container.
Once you have established an IOC container and you have broken dependencies on all objects, you can focus on your code just like the samples I am showing here.

Now for the next step: write the unit tests. You already have the method stubbed out, so you can complete your unit tests first and then write the code to make the tests pass. You can do one unit test, followed by writing code, then the next test, etc. Another method is to write all the unit tests and then write the code to pass all tests. I'm going to write all the unit tests first.

By now, you might have analyzed my empty unit tests and realized what I meant earlier by "discovery". If you haven't, then this will be a good lesson.

For the first test, we'll need the setup data. We don't have to concern ourselves with any of the Entity Framework code other than the POCO itself. In fact, the "BetweenDates()" method only looks at one instance, or rather, one record. If the date of the record will be returned with the set, then the method will return true. Otherwise, it should return false. The tiny scope of this method makes our unit testing easy. So put one record of data in:

```csharp
[Fact]
public void BothDatesAreNull()
{
    var testSample = new PatientVisit
    {
        PatientId = 1,
        VisitDate = DateTime.Parse("1/7/2015")
    };
}
```

Next, set up the object and perform an assert. This unit test should return a true for the data given, because both the start date and the end date passed into our method will be null and we return all records.

```csharp
[Fact]
public void BothDatesAreNull()
{
    var testSample = new PatientVisit
    {
        PatientId = 1,
        VisitDate = DateTime.Parse("1/7/2015")
    };

    var result = testSample.BetweenDates(null, null);

    Assert.True(result);
}
```

This test doesn't reveal anything yet. Technically, you can put code into your method that just returns true, and this test will pass. At this point, it would be valid to do so. Then you can write your next test and then refactor to return the correct value.
This would be the method used for pure Test Driven Development: only use the simplest code to make the test pass. The code will be completed when all unit tests are completed and they all pass. I'm going to go on to the next unit test, since I know that the first unit test is a trivial case.

Let's use the same data we used on the last unit test:

```csharp
[Fact]
public void StartDateIsNull()
{
    var testSample = new PatientVisit
    {
        PatientId = 1,
        VisitDate = DateTime.Parse("1/7/2015")
    };
}
```

Did you "discover" anything yet? If not, then go ahead and put the method setup in:

```csharp
[Fact]
public void StartDateIsNull()
{
    var testSample = new PatientVisit
    {
        PatientId = 1,
        VisitDate = DateTime.Parse("1/7/2015")
    };

    var result = testSample.BetweenDates(null, DateTime.Parse("1/8/2015"));
}
```

Now you're probably scratching your head, because we need at least two test cases and probably three. Here are the test cases we need when the start date is null but the end date is filled in:

- Return true if the visit date is before the end date.
- Return false if the visit date is after the end date.

What if the date is equal to the end date? Maybe we should test for that edge case as well.
Break the "StartDateIsNull()" unit test into three unit tests:

```csharp
[Fact]
public void StartDateIsNullVisitDateIsBefore()
{
    var testSample = new PatientVisit
    {
        PatientId = 1,
        VisitDate = DateTime.Parse("1/7/2015")
    };

    var result = testSample.BetweenDates(null, DateTime.Parse("1/8/2015"));

    Assert.True(result);
}

[Fact]
public void StartDateIsNullVisitDateIsAfter()
{
    var testSample = new PatientVisit
    {
        PatientId = 1,
        VisitDate = DateTime.Parse("1/7/2015")
    };

    var result = testSample.BetweenDates(null, DateTime.Parse("1/3/2015"));

    Assert.False(result);
}

[Fact]
public void StartDateIsNullVisitDateIsEqual()
{
    var testSample = new PatientVisit
    {
        PatientId = 1,
        VisitDate = DateTime.Parse("1/7/2015")
    };

    var result = testSample.BetweenDates(null, DateTime.Parse("1/7/2015"));

    Assert.True(result);
}
```

Now you can begin to see the power of unit testing. Would you have manually tested all three cases? Maybe. That also reveals that we will be required to expand the other two tests that contain dates. The test case where we have a null end date will have a similar set of three unit tests, and the in-between dates test will have more tests. For the in-between, we now need:

- Visit date is less than start date.
- Visit date is greater than start date but less than end date.
- Visit date is greater than end date.
- Visit date is equal to start date.
- Visit date is equal to end date.
- Visit date is equal to both start and end date (start and end are equal).

That makes six unit tests for the in-between case, bringing our total to 13 tests. Fill in the code for the remaining tests. When that is completed, verify each test to make sure they are all valid cases. Once this is complete, you can write your code for the helper method. You now have a complete, detailed specification for your method written in unit tests.

Was that difficult? Not really. Most unit tests fall into this category. Sometimes you'll need to mock an object that your object under test depends on.
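With the specification pinned down, the helper itself is short. The post stops before showing it, so here is one sketch that satisfies the thirteen cases above (treating both bounds as inclusive, per the rules):

```csharp
public static class PatientVisitHelpers
{
    public static bool BetweenDates(this PatientVisit patientVisit,
                                    DateTime? startDate, DateTime? endDate)
    {
        // Rule 1: no bounds at all, so every record matches.
        if (startDate == null && endDate == null)
        {
            return true;
        }

        // Rules 2-4: each bound is inclusive when present.
        if (startDate != null && patientVisit.VisitDate < startDate.Value)
        {
            return false;
        }

        if (endDate != null && patientVisit.VisitDate > endDate.Value)
        {
            return false;
        }

        return true;
    }
}
```

With a static, dependency-free helper like this, there is nothing to mock; mocking only comes into play when the object under test takes injected dependencies.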
That is made easy by the IOC container. Also, you can execute your code directly from the unit test. Instead of writing a test program to send inputs to your API, or using your API in a full system where you are typing data in manually, you just execute the unit test you are working with. You type in your code, then run all the unit tests for this method. As you create code to account for each test case, you'll see your unit tests start to turn green. When all unit tests are green, your work is done.

Now, if QA finds a bug that leads to this method, you can reverify your unit tests for the case that QA found. You might discover a bug in code that is outside your method, or it could have been a case missed by your unit tests. Once you have fixed the bug, you can re-run the unit tests instead of manually testing each case. In the long run, this will save you time.

Code Coverage

You should strive for 100% code coverage. You'll never get it, but the more code you can cover, the safer it will be to refactor code in the future. Any code not covered by unit tests is at risk for failure when code is refactored. As I mentioned earlier, code coverage doesn't solve all your problems. In fact, if I wrote the helper code for the previous example and then created unit tests afterwards, I bet I could create two or three unit tests that cover 100% of the code in the helper method. What I might not cover are edge cases, like the visit date equal to the start date. It's best to use code coverage tools after the code and unit tests are written. The code coverage will be your verification that you didn't miss something.

Another problem with code coverage tools is that they can make you lazy. You can easily look at the code and then come up with a unit test that executes the code inside an "if" statement and then create a unit test to execute code inside the "else" part. The unit tests might not be valid.
You need to understand the purpose of the "if" and "else" and the purpose of the code itself. Keep that in mind. If you are writing new code, create the unit tests first or concurrently. Only use the code coverage tool after all your tests pass, to verify you covered all of your code.

Back to the 20,000-Foot View

Let's take a step back and talk about what the purpose of the exercise was. If you're a hold-out for a world of programming without unit tests, then you're still skeptical of what was gained by performing all that extra work. There is extra code. It took time to write that code. Now there are thirteen extra methods that must be maintained going forward.

Let's pretend this code was written five years ago and it's been humming along for quite some time without any bugs being detected. Now some entry-level developer comes on the scene and he/she decides to modify this code. Maybe the developer in question thinks that tweaking this method is an easy short-cut to creating some enhancement that was demanded by the customer. If the code is changed and none of the unit tests break, then we're OK. If the code is changed and one or more unit tests break, then the programmer modifying the code must look at those unit tests and determine if the individual behaviors should be changed, or maybe those tests were broken because the change is not correct.

If the unit tests don't exist, the programmer modifying the code has no idea what thought process and/or testing went into the original design. The programmer probably doesn't know the full specification of the code when it was written. The suite of unit tests makes the purpose unambiguous. Any programmer can look at the unit tests and see exactly what the specification is for the method under test.

What if a bug is found and all unit tests pass? What you have discovered is an edge case that was not known at the time the method was written.
Before fixing the bug, the programmer must create a unit test with the edge case that causes the bug. That unit test must fail with the current code, and it should fail in the same manner as the real bug. Once the failing unit test is created, then the bug should be fixed to make the unit test pass. Once that has been accomplished, run all unit tests and make sure you didn't break previous features when fixing the bug. This method ends the whack-a-mole technique of trying to fix bugs in software.

Next, try to visualize a future where all your business code is covered by unit tests. If each class and method had a level of unit testing to the specification that this method has, it would be safe and easy to refactor code. Any refactor that breaks code down the line will show up in the unit tests (as broken tests). Adding enhancements would be easy and quick. You would be virtually guaranteed to produce a quality product after adding an enhancement. That's because you are designing the software to be maintainable.

Not all code can be covered by unit tests. In my view, this is a shame. Unfortunately, there are sections of code that cannot be put into a unit test for some reason or another. With an IOC container, your projects should be divided into projects that are unit testable and projects that are not. Projects, such as the project containing your Entity Framework repository, are not unit testable. That's OK, and you should limit how much actual code exists in such a project. All the code should be POCOs and some connecting code. Your web interface should be limited to code that connects the outside world to your business classes. Any code that is outside the realm of unit testing is going to be difficult to test. Try to limit the complexity of this code.

Finally…

I have looked over the shoulder of students building software for school projects at the University of Maryland, and I noticed that they incorporated unit testing into a Java project. That made me smile.
While the project did not contain an IOC container, it's a step in the right direction. Hopefully, within the next few years, universities will begin to produce students who understand that unit tests are necessary. There is still a large gap between those students and those in the industry who have never used unit tests. That gap must be filled in a self-taught manner. If you are one of the many who don't incorporate unit testing into your software development process, then you'd better start doing it. Now is the time to learn and get good at it. If you wait too long, you'll be one of those COBOL developers who wondered who moved their cheese.
http://blog.frankdecaire.com/2018/01/01/unit-tests-are-not-an-option/
An extremely fast random generator. More...

#include <nanobench.h>

An extremely fast random generator. Currently, this implements RomuDuoJr, developed by Mark Overton.

RomuDuoJr is extremely fast and provides reasonably good randomness. Not enough for large jobs, but definitely good enough for a benchmarking framework.

This random generator is a drop-in replacement for the generators supplied by <random>. It is not cryptographically secure. Its intended purpose is to be very fast, so that benchmarks that make use of randomness are not distorted too much by the random generator.

Rng also provides a few non-standard helpers, optimized for speed.

Definition at line 453 of file nanobench.h.

This RNG provides 64-bit randomness. Definition at line 458 of file nanobench.h.

As a safety precaution, we don't allow copying. Copying a PRNG would mean you would have two random generators that produce the same sequence, which is generally not what one wants. Instead create a new rng with the default constructor Rng(), which is automatically seeded from std::random_device. If you really need a copy, use copy().

Creates a new Rng with a random seed. Instead of a default seed (as the random generators from the STD), this properly seeds the random generator from std::random_device. It guarantees correct seeding. Note that seeding can be relatively slow, depending on the source of randomness used. So it is best to create an Rng once and use it for all your randomness purposes.

Creates a new Rng that is seeded with a specific seed. Each Rng created from the same seed will produce the same randomness sequence. This can be useful for deterministic behavior.

Note: the random algorithm might change between nanobench releases. Whenever a faster and/or better random generator becomes available, I will switch the implementation.
As per the Romu paper, this seeds the Rng with the splitMix64 algorithm and performs 10 initial rounds for further mixing up of the internal state.

Generates a random number between 0 and range (excluding range). The algorithm only produces 32-bit numbers and is slightly biased. The effect is quite small unless your range is close to the maximum value of an integer. It is possible to correct the bias with rejection sampling, but this is most likely irrelevant in practice for the purposes of this Rng. See Daniel Lemire's blog post "A fast alternative to the modulo reduction". Definition at line 1081 of file nanobench.h.

Produces a 64-bit random value. This should be very fast, thus it is marked as inline. In my benchmark, this is ~46 times faster than std::default_random_engine for producing 64-bit random values. It seems that the fastest std contender is std::mt19937_64. Still, this RNG is 2-3 times as fast. Definition at line 1071 of file nanobench.h.

Same as Rng(Rng const&): we don't allow assignment. If you need a new Rng, create one with the default constructor Rng(). Definition at line 1106 of file nanobench.h.

Shuffles all entries in the given container. Although this has a slight bias due to the implementation of bounded(), this is preferable to std::shuffle because it is over 5 times faster. See Daniel Lemire's blog post "Fast random shuffling". Definition at line 1097 of file nanobench.h.

Provides a random uniform double value between 0 and 1. This uses the method described in "Generating uniform doubles in the unit interval", and is extremely fast. Definition at line 1087 of file nanobench.h.
https://doxygen.bitcoincore.org/classankerl_1_1nanobench_1_1_rng.html
On Mon, 16 Mar 2009, Mathieu Desnoyers wrote:
> * Thomas Gleixner (tglx@linutronix.de) wrote:
> > Is this a contribution to the "most useless patch of the week"
> > contest ?
> >
> > You have my vote.
>
> Count mine too ! :)
>
> Actually, this was not what it should look like. Here is my version.
> I am not totally convinced that the struct irq_desc is absolutely
> required, but in some include orders, it might matter.

I have not seen a compile failure report yet.

> However, the #endif around the EXPORT_SYMBOL is definitely needed.

Right.

>  struct irq_desc *desc = irq_to_desc(irq);
>  return desc ? desc->kstat_irqs[cpu] : 0;
>  }
> -#endif
>  EXPORT_SYMBOL(kstat_irqs_cpu);
> +#endif

Thanks,

	tglx
http://lkml.org/lkml/2009/3/17/66
In previous versions of MSCRM, users have been able to integrate their CRM data with e-mail hosted on Microsoft Exchange servers. This was done through the use of Forward Mailboxes. The method used to determine if incoming e-mail messages should be tracked or not was by matching a tracking token in the subject line of the e-mail. In addition, administrators had to install and configure the e-mail router service on the Exchange server hosting the Forward Mailbox.

MSCRM v4 allows you to continue to integrate your e-mail systems in the same way, but many new advancements in design enable users to create richer and customized solutions for their unique situations and configurations. Here is a look at many of the new features and functionality of the MSCRM v4 e-mail integration story.

No more tracking tokens! The e-mail tracking token is used to help the router decide if an incoming e-mail should be tracked in CRM or not, and if it is to be tracked, it will ensure that the correct regarding object is set for the received e-mail activity. This option still exists in v4; however, it is now optional. MSCRM v4 utilizes smart matching technology, which renders the tracking token obsolete. Smart matching enables the e-mail router to process incoming e-mail to determine CRM relevance by comparing subject lines and party lists (user e-mail addresses in the to:, from:, cc: and bcc: lists) with e-mails already existing in CRM. The tracking token can be configured or enabled/disabled by a CRM system administrator via the System Settings dialog.

Individual mailbox monitoring. MSCRM v4 offers administrators the flexibility to configure users and queues in multiple methods in the same deployment. It is possible to configure some user/queue mailboxes to be monitored directly on one e-mail server while others are configured so the E-mail Router processes mail from a Forward Mailbox on a different e-mail server.
Individual mailbox monitoring also provides the administrator the option to allow CRM users to enter their e-mail credentials securely in their personal options.

New e-mail tracking options. CRM users also have the option to customize what types of e-mail messages the E-mail Router will track. These options include all incoming e-mail, mail in response to CRM e-mail, or incoming e-mail from CRM accounts, leads, or contacts. This functionality is available for queue records as well, directly within the queue edit form.

Improved e-mail configuration management. MSCRM v4 now provides the ability to customize and configure the e-mail router service to meet all incoming and outgoing e-mail connectivity requirements via a Configuration Manager application. This UI-based tool allows an administrator to configure incoming and outgoing profiles, including connection types, direction, access credentials, connection ports, time-outs and other connection details. Connections to CRM deployments can also be similarly configured. Based on the deployment selected, both incoming and outgoing e-mail profiles can be specified for selected users, queues, and forward mailboxes.

Support for POP3. Until now, all e-mail monitoring and processing was only possible with mailboxes on Exchange servers. MSCRM v4 now offers administrators the flexibility to utilize POP3 mail for their user base. POP3 monitoring is configured through out-of-the-box settings in the E-mail Router Configuration Manager tool.

De-coupled Router Service. In v3 the E-mail Router had to be installed on the same Exchange server hosting the Forward Mailbox. In v4 we have removed this restriction, allowing the administrator to install and run the service from any machine which can communicate with both the MSCRM server and the e-mail server. The router is effectively an intermediary connecting the two separate systems (CRM and e-mail), and this new design enables full functionality without the previous restrictions.
All that is needed are the administrator credentials to connect to MSCRM and the credentials to communicate with the remote e-mail server.

Configurable SMTP connection for outgoing e-mail. In the previous CRM versions, outgoing e-mail was delivered via the MSCRM platform using a remote or local SMTP server. In v4 this is easily configured via the E-mail Router Configuration Manager tool. Given the flexibility of the e-mail connector, it is now possible to define multiple outgoing SMTP server profiles. This is extremely useful for enterprise deployments where you may want users in one region to use one SMTP server and users in another region to use another SMTP server. As with the incoming profile configurations, all that is needed to properly connect to the SMTP server(s) are the proper login credentials.

Extensibility. The E-mail Router in MSCRM v4 provides the extensibility for external vendors to create their own custom e-mail providers which can be plugged into the MSCRM E-mail Router service. The custom providers can be written to perform deployment-specific tasks such as downloading RSS feeds and tracking them as e-mails, blocking e-mails with certain types of attachments, analyzing e-mail responses and uploading statistics, etc. The public class used for creating custom e-mail providers is Microsoft.Crm.Tools.Crm.EmailProvider.

Improved troubleshooting. In MSCRM v3, 7 performance counters were provided which tracked message processing, such as the number of e-mail messages delivered, discarded, processed, etc. In MSCRM v4 the performance tracking capabilities have been improved by increasing the number of counters to 21. This will enable administrators to better track e-mail processing in order to manage overall e-mail router performance. Within the E-mail Router Configuration Manager, administrators have the ability to verify connectivity between the CRM system and the e-mail servers/mailboxes being configured via the Test Access feature.
This process actually shows the administrator whether a connection can be expected to work as configured. In the case where a connection fails, useful error messages are surfaced to describe the problems in the connection to aid in troubleshooting.

Additional "What's New" Changes. Along with the items already discussed, the following are noteworthy changes made to the MSCRM v4 E-mail integration story. These items highlight the advancements made in the area of E-mail support in the latest release of MSCRM.

Look for future blogs highlighting topics such as how to set up and configure e-mail for your MSCRM v4 deployment in an enterprise environment utilizing the Forward Mailbox monitoring functionality effectively for scale, using individual mailbox and MSCRM Outlook client monitoring with POP3 and/or Exchange providers and proper login credential management, and troubleshooting common problems along with suggested solutions.

For further information about Microsoft Dynamics CRM v4 e-mail configuration and usage, please visit the following resources:

David West

Saying that the tracking token is 'obsolete' is a bit strong unless there is more to this intelligent tracking technology than stated. If there are numerous emails with the same subject and to/from people but from completely unrelated records, I don't see how the emails will track properly without a token. How many emails do we all get that say just CRM and/or re: CRM? Without a token, those will not get tracked to the proper regarding value, will they?

It may not be totally obsolete yet. But it's certainly getting close to it.

Carmelo Lisciotto
As to the smart matching logic used to process incoming e-mail messages, an algorithm computes the 'likeness' of incoming messages compared to existing e-mail activities, and based on this will decide if it is a match and set the regarding object accordingly. If no match can be determined, the e-mail activity will be created with no regarding object set. Since the regarding object can be set on closed e-mail activities, the user can manually set this to its proper value as needed. In the case of multiple matches, the most recent CRM e-mail activity will be selected for matching and its regarding object will be copied to the new activity. This should happen rarely in actual practice. Consider an example of a CRM user sends multiple e-mails to external contacts with the subject "your order". Each outgoing e-mail activity will have a different regarding object (contact A and contact B). If contact A replies directly, the subject will be "re: your order" and the party list matching will match it to the outgoing activity and set its regarding object appropriately. Contact B forwards the e-mail and drops the CRM user to the cc: line, so the incoming e-mail will be "fw: your order". Since the party list matches at least 2 parties and the subject still matches ("fw: " is ignored), the e-mail router will match this to the correct outgoing e-mail and set the proper regarding object. These work as expected Now suppose the CRM user sends another e-mail to contact A about a different order but has the same "your order" subject content. If the contact replies to the first "your order" e-mail now, the router will use the most recent e-mail activity to process in the incoming e-mail and match its regarding object. If the regarding objects are the same for both outgoing e-mail activities then there is no conflict since the same regarding object will be set on the new incoming message. In those rare instances where the e-mail is matched to the wrong regarding object (e.g. 
the order instead of the contact), the CRM user can modify the regarding object directly to match the correct thread.

What's the performance hit on the smart matching technology?

David, I am actually seeing that incoming status failure on one of our implementations; it looks very similar or exactly like that. Do you have any ideas as to what's causing this? -James

Sorry, but I don't quite know how to configure the e-mail access type settings for incoming and outgoing mail to monitor users' mailboxes per the section headed above, which I have copied below. How do these access types allow an administrator to monitor a mailbox? Any more help would be most welcomed. Thanks

James - in the screenshot I included, the incoming status failure is due to the mailbox not being initialized in Exchange. Check to see that your mailbox is initialized properly.

Robin - the e-mail router does the actual monitoring and processing of incoming mail, but the administrator needs to set this up correctly. To enable the router to process a user's incoming mail directly (e.g. from the user's inbox), the option for "E-mail Router" must be selected in the user's CRM form. Within the E-mail Router Configuration Manager tool, an incoming profile must exist which includes the necessary access information for the router to be able to access the user's inbox, and this profile must be applied to that user. If the profile has Access Credentials set to "User Specified", then the user must enter their credentials in their personal options dialog within CRM. This would not be necessary if the router's incoming profile specifies "Other Specified" or "Local System Account" access credentials. I hope this helps. Look for future posts describing the configuration and deployment options in greater detail.
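The matching behavior described in the replies above (strip "re:"/"fw:" prefixes, then require the subject and enough of the party list to line up) can be sketched roughly as follows. This is an illustration of the idea only, not Microsoft's actual algorithm:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class SmartMatchSketch
{
    // Strip any number of reply/forward prefixes before comparing.
    public static string NormalizeSubject(string subject)
    {
        var s = subject.Trim();
        string[] prefixes = { "re:", "fw:", "fwd:" };
        bool stripped = true;
        while (stripped)
        {
            stripped = false;
            foreach (var p in prefixes)
            {
                if (s.StartsWith(p, StringComparison.OrdinalIgnoreCase))
                {
                    s = s.Substring(p.Length).TrimStart();
                    stripped = true;
                }
            }
        }
        return s;
    }

    // An incoming message "smart matches" a stored e-mail activity when
    // the normalized subjects agree and the address lists overlap enough.
    public static bool IsLikelyMatch(string incomingSubject,
                                     IEnumerable<string> incomingParties,
                                     string storedSubject,
                                     IEnumerable<string> storedParties,
                                     int minCommonParties = 2)
    {
        if (!string.Equals(NormalizeSubject(incomingSubject),
                           NormalizeSubject(storedSubject),
                           StringComparison.OrdinalIgnoreCase))
        {
            return false;
        }

        var common = incomingParties.Select(a => a.ToLowerInvariant())
                                    .Intersect(storedParties.Select(a => a.ToLowerInvariant()));
        return common.Count() >= minCommonParties;
    }
}
```

In the "fw: your order" example from the reply, the subject normalizes back to "your order" and the CRM user plus contact B still appear on both party lists, so the incoming message would match the stored outgoing activity.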
For many of you, this list

Well, the world around Microsoft Dynamics CRM 4.0 is already starting to get hectic and it is difficult to keep

I'm not clear on how to install the e-mail router on a server that isn't an Exchange server. In the "E-mail Router Configuration Profile" dialog box, what am I entering in the "server" field? Is this the OWA URL for the target Exchange server or something else? Also, what credentials should I be using? Should this be a domain user with full access to all forwarding mailboxes used by this profile?

We are running CRM 3.0 and can not send any mail directly from within a CRM shell (contact to SEND). Outlook is fine and tokens a CRM #. Tracking is fine. We are using SBS2003, Exchange 2003, separate CRM App Server

Adam, if you're having a problem sending e-mail from CRM 3.0, you should contact our Product Support team and work with them to solve this issue. Thanks.

Hi. We are having a problem with the email router. I configured my incoming method against my gmail/pop3 account for testing purposes. For the outgoing method I use one of our internal smtp servers. Everything works great when I start the service and try the mail flow (in/out), but after some hours of email inactivity the router does not process anything. Even if I wait 1-2h nothing is being processed. When I then restart the service, all of my messages get processed. What is happening? Is there something I can do to prevent this? Note that I am using the default time settings under the advanced tab in the config manager. I am the only user on this environment and I am not sending large attachments or anything. Regards Kristoffer

"...the MSCRM v4 e-mail router is fully compatible with Exchange 2007." What precisely does this mean? Does this mean that the MSCRM v4 e-mail router totally removes the requirement for the CRM E-Mail Router to be connected to a server that is running Exchange 2003 and that is part of an Exchange 2007 organization?
That is, _NO_ Exchange 2003 server is required. You can have just Exchange Server 2007 installations only in the organization!?! This doesn't seem very well thought through, but perhaps I am missing something.

We have "use tracking token" selected, but it is very common that emails which originate from outside our organization use the same subject. When the email is sent to CRM by the user, the regarding is resolving automatically based on the last email with this subject. This would seem to be an unintended consequence of the changes to tracking. Anybody running into this? Any way around it?

What is the possibility of using the new email router to support the CRM 3 implementation? I wanted to get rid of the Exchange 2003 completely and it is currently only there because of the CRM 3 Email Router. Would that be possible if I install the CRM 4 Email Router on my Exchange 2007 box but supporting the CRM 3 implementation?

I have a question. Can I turn off tracking email in CRM 4.0 completely? I don't want to assign email to a case automatically. What can I do about it?

We have installed the Email Router on the CRM 4.0 server and not the Exchange Server (which is of course a supported configuration). Based on this, can we still use Local System Account for the definition of incoming profiles? The system allows us to do so, but testing a user who has been assigned this incoming profile always gives a failure with an authentication error. How can Local System Account log into any user's mailbox and retrieve/check incoming email? Is this possible? How? If this is totally impossible in our scenario, will it be possible if the email router is installed on the Exchange Server 2007 machine instead of the CRM machine? We have tested User Specified and Other Specified and both of them work fine.
However, we can not use either of them, since User Specified forces the users to maintain password changes in their CRM email profile (users in this specific scenario are forced to change passwords every 30 days) and Other Specified would create other serious complexities. Any help will be appreciated.

We have the user incoming and outgoing set to MS Dynamics CRM for Outlook. We have the user's CRM, Options, Email set to check incoming email and the drop-down is set to track email messages in response to CRM emails. I'm assuming that by having the check incoming email box checked, we are turning on the Smart Matching. If we uncheck it, we're turning it off. Can you confirm?

What is correlation and why is it required? One of the important scenarios in email management within

Have 2 queues set up in recently upgraded CRM 4.0. Router is installed on CRM server. Connectivity testing with queues and Exchange server is successful. The issue is that one of the queues will only process e-mails into the queue one time after a server reboot, and then it won't process any more until the server is rebooted again. The other queue processes fine. Not sure why the queue will only process once, as everything looks fine. It did work fine originally after the upgrade for a few days, but then kicked into this once-per-reboot behavior. Any thoughts on what to look at?

Does anyone know of any performance issues integrating Exchange 2007 with CRM 4.0?

We are seeing CRM 4.0 track emails simply by the text in the subject line without the CRM tracking token. How can we turn off that type of tracking? Our sales people send out quotes to people. They all seem to name the email subject the same, something like "Here's your quote" (for example). Some track, some don't. The system seems to be automatically tracking and regarding these emails even if they didn't originally track the outgoing email. So emails are being associated with totally unrelated CRM records.
We also have emails from our website come into an email box in marketing. At some point someone tracked one of these emails. They all have the same subject line. Now all emails from the website are being tracked against that one CRM record, which is wrong. We also have a network scanner that sends the scanned documents to the person scanning them via email. These emails are all called the same thing and now they are all being tracked. We have found some very sensitive documents inadvertently tracked against a totally unrelated CRM record. Any comments from Microsoft or ideas on how to fix this? I would implement the token again if this would help. How do I stop the automatic tracking of these emails that all have the same names? Thanks.
http://blogs.msdn.com/crm/archive/2008/01/29/what-s-new-in-microsoft-dynamics-crm-4-0-e-mail-integration.aspx
This is a practical course that will provide you with a skill that you can use in many other courses. The purpose of the course is to have you be able to write programs in the C++ language.

C++ is a large and complex language. It took many people many years to develop the language and define it as an international standard, ISO/IEC 14882:1998. This course will teach C++ according to the standard because eventually the various C++ compilers will accept the standard language rather than the various experimental versions of C++.

See the course WEB page.
See the course syllabus.
See the course homework assignments.
See chaotic state of C++ compilers.
See Compact Summary of C++

Technically there is a C++ language and there are C++ libraries. The ISO/IEC 14882:1998 C++ standard includes the formal definition of both the language and the libraries. A list of the library names is in the C++ summary.

The simplest Input/Output is from the keyboard/display. To use this Input/Output a program must include the two lines:

#include <iostream>
using namespace std;

The "#" line is a preprocessor directive to read the standard library file named iostream. The "using namespace std;" is necessary to make the operators >> and << visible.

The statement

cout << "Hello." << endl;

would write Hello. to the display. The statements

int amount;
cin >> amount;

would read an integer from the keyboard and place the value in 'amount'.

A skeleton program to read and print data is shown below. change.cpp

// change.cpp
#include <iostream> // basic console input/output
#include <cctype>   // handy C functions e.g. isdigit
using namespace std; // bring standard include files in scope

int main()
{
  int amount;       // value read by cin
  while(!cin.eof()) // a loop that ends when an end of file is reached
  {
    try // be ready to catch any exceptions and stay in the loop
    {
      if(!isdigit(cin.peek())) throw "not a number";
      cin >> amount;               // read something, hopefully an integer
      cout << "Input: " << amount; // start the output line
      // more code here that computes and outputs using cout
      cout << endl;                // this ends the output line
    }
    catch(...){ cout << " no change possible " << endl; }
    cin.ignore(80, '\n'); // get rid of any junk after the number
  } // end of while loop
  cout << "end of change run" << endl;
  return 0;
} // end main

For disk file I/O see test_file_io.cpp

The structure of exceptions is:

try
{
  ... statements
  throw "ooopse";
  ... statements
}
catch( )
{
  ... statements
}
catch(...)
{
  ... statements
}

Several example programs and their output are: try_try2.cc

// try_try2.cc demonstrate how the type of 'catch' gets various 'throw'
#include <iostream>
using namespace std;

int main()
{
  typedef int my_int;
  my_int k;
  for(int i=0; i<5; i++)
  {
    cout << i << '\n';
    try
    {
      cout << "in try block, i= " << i << '\n';
      if(i==0) throw "first exception";
      cout << "in try block after first if \n";
      if(i==1) throw i;
      cout << "in try block after second if \n";
      if(i==2) throw (float)i;
      cout << "in try block after third if \n";
      if(i==3) { k=i; throw k;}
      cout << "in try block after third if \n";
      throw 12.5; // a double
      cout << "should not get here \n"; // compiler warning also
    }
    catch(const char * my_string){ cout << "my_string " << my_string << '\n'; }
    catch(const int j){ cout << "caught an integer = " << j << '\n'; }
    catch(const double j){ cout << "caught a double = " << j << '\n'; }
    catch(...){ cout << "caught something \n" << '\n' << '\n'; }
    cout << "outside the try block \n";
  }
  cout << "outside the loop, normal return \n";
  return 0;
} // end main

0
in try block, i= 0
my_string first exception
outside the try block
1
in try block, i= 1
in try block after first if
caught an integer = 1
outside the try block
2
in try block, i= 2
in try block after first if
in try block after second if
caught something
outside the try block
3
in try block, i= 3
in try block after first if
in try block after second if
in try block after third if
caught an integer = 3
outside the try block
4
in try block, i= 4
in try block after first if
in try block after second if
in try block after third if
in try block after third if
caught a double = 12.5
outside the try block
outside the loop, normal return

And, now you can see how an exception thrown from a nested (possibly deeply nested) function comes up the call stack to the first 'catch' that can handle the type of the thrown exception.

// try_nest.cpp 'throw' from nested function call
#include <iostream>
using namespace std;

void f1(void);
void f2(void);

int main(void)
{
  cout << "Starting try_nest.cpp \n";
  try
  {
    f1();
    cout << "should not get here in main \n";
  }
  catch(const int j){ cout << "caught an integer = " << j << '\n'; }
  cout << "outside the try block \n";
  return 0;
} // end main

void f1(void)
{
  cout << "in f1, calling f2 \n";
  f2();
  cout << "in f1, returned from f2 \n";
}

void f2(void)
{
  cout << "in f2, about to throw an exception \n";
  throw 7;
}

Starting try_nest.cpp
in f1, calling f2
in f2, about to throw an exception
caught an integer = 7
outside the try block

A class in C++ is a data structure (often called an abstract data type) together with all the functions needed to operate on that data structure (called member functions). This is a very simple example of a circle class. A class is a definition and the class defines a new type. A typical setup is shown below:

The class Circle is defined in a header file circle.h
The class is implemented in a .cpp .cc .C file circle.cpp because users of the class just need the header.
The test program test_circle.cpp must do a #include "circle.h" but definitely does NOT include circle.cpp.
The test program instantiates the class Circle to create the object 'c1'.
The test program uses the constructor Circle to initialize the object.

// circle.h
#ifndef CIRCLE_H // be sure file only included once per compilation
#define CIRCLE_H
//
// Define a class that can be used to get objects of type Circle.
// A class defines a data structure and the member functions
// that operate on the data structure.
// The name of the class becomes a type name.
class Circle
// the 'public' part should be first, the user interface
// the 'private' part should be last, the safe data
{
public:
  Circle(double X, double Y, double R); // a constructor
  void Show();                          // a member function
  void Set(double R);                   // change the radius
  void Move(double X, double Y);        // move the circle
private:
  double Xcenter;
  double Ycenter;
  double radius;
};
#endif // CIRCLE_H  nothing should be added after this line

// circle.cpp
// implement the member functions of the class Circle
#include <iostream>
#include "circle.h"
using namespace std;

Circle::Circle(double X, double Y, double R)
{
  Xcenter = X;
  Ycenter = Y;
  radius = R;
}

void Circle::Show()
{
  cout << "X, Y, R " << Xcenter << " " << Ycenter << " " << radius << endl;
}

void Circle::Set(double R)
{
  radius = R;
}

void Circle::Move(double X, double Y)
{
  Xcenter = X;
  Ycenter = Y;
}

// test_circle.cpp
#include <iostream>
#include "circle.h"
using namespace std;

int main()
{
  Circle c1(1.0, 2.0, 0.5);      // construct an object named 'c1' of type 'Circle'
  Circle circle2(2.5, 3.0, 0.1); // another object named 'circle2'
  c1.Show();         // tell the object c1 to execute the member function Show
  circle2.Show();    // circle2 runs its member function Show
  c1.Move(1.1, 2.1); // move center
  c1.Show();
  circle2.Set(0.2);  // set a new radius
  circle2.Show();
  return 0;
}

The files above can be saved to your directory and compiled then executed with:

on gl SGI   CC -n32 -Iinclude -o test_circle test_circle.C circle.C
            test_circle
on Unix     g++ -o test_circle test_circle.cc circle.cc
            test_circle
on PC VC++  cl /GX /ML test_circle.cpp circle.cpp
            test_circle

The result of the execution is:

X, Y, R 1 2 0.5
X, Y, R 2.5 3 0.1
X, Y, R 1.1 2.1 0.5
X, Y, R 2.5 3 0.2

A very similar program can be created using class inheritance. A useful memory aid to determine when inheritance should be used is to say "inheriting class" is a "base class". The "is a" applies here: a Circle is a Shape.

The example below defines a base class (a class that does not inherit) Shape2 and defines a class Circle2 that inherits the base class. The word "inherits" is to be taken literally. The class Circle2 really has member functions Move, Xc and Yc. The class Circle2 really has three double precision numbers in its data structure (in the private part): Xcenter, Ycenter and radius.

The following code shows five files: shape2.h shape2.cpp circle2.h circle2.cpp test_shape.cpp

// shape2.h
#ifndef SHAPE2_H
#define SHAPE2_H
//
// Demonstrate simple class inheritance and its test program
// the Shape2 class provides the center for specific shapes that inherit it
class Shape2 // demonstrate a base class
{
public:
  void Move(double X, double Y); // move center of shape to new X,Y
  double Xc();                   // return Xcenter
  double Yc();                   // return Ycenter
private:
  double Xcenter;
  double Ycenter;
}; // end Shape2
#endif // SHAPE2_H

// shape2.cpp
// implement the member functions of the class Shape2
#include "shape2.h"

void Shape2::Move(double X, double Y)
{
  Xcenter = X;
  Ycenter = Y;
}

double Shape2::Xc()
{
  return Xcenter;
}

double Shape2::Yc()
{
  return Ycenter;
} // end shape2.cpp

// circle2.h
#ifndef CIRCLE2_H
#define CIRCLE2_H
//
// Define a class that can be used to get objects of type Circle.
// A circle is a shape, so it makes sense for Circle2 to inherit Shape2.
// A class defines a data structure and the member functions
// that operate on the data structure.
// The name of the class becomes a type name.
#include "shape2.h"
class Circle2 : public Shape2 // <-- the colon ":" means "inherit"
// the 'public' part should be first, the user interface
// the 'private' part should be last, the safe data
{
public:
  Circle2(double X, double Y, double R); // a constructor
  void Show();        // a member function
  void Set(double R); // set a new radius
  // by inheritance Move is here
private:
  double radius;
  // by inheritance Xcenter and Ycenter are here
}; // end Circle2
#endif // CIRCLE2_H

// circle2.cpp
// implement the member functions of the class Circle2
#include <iostream>
#include "circle2.h"
using namespace std;

Circle2::Circle2(double X, double Y, double R)
{
  Move(X, Y); // puts values into Xcenter, Ycenter in Shape2
  radius = R;
}

void Circle2::Set(double R)
{
  radius = R;
}

void Circle2::Show()
{
  cout << "X, Y, R " << Xc() << " " << Yc() << " " << radius << endl;
} // end circle2.cpp

// test_shape.cpp
#include <iostream>
#include "circle2.h"
using namespace std;

int main()
{
  Circle2 c1(1.0, 2.0, 0.5);      // construct an object named 'c1' of type 'Circle2'
  Circle2 circle1(2.5, 3.0, 0.1); // another object named 'circle1'
  c1.Show();       // tell the object c1 to execute the member function Show
  circle1.Show();  // circle1 runs its member function Show
  c1.Move(1.1, 2.1);
  c1.Show();
  circle1.Set(0.2);
  circle1.Show();
  return 0;
} // end test_shape

and the output from executing:

X, Y, R 1 2 0.5
X, Y, R 2.5 3 0.1
X, Y, R 1.1 2.1 0.5
X, Y, R 2.5 3 0.2

Before moving on to multiple inheritance, we need to understand the use of the three possible sections of a class with the following visibility:

public: member functions and objects defined in this section of a class are visible to any class that inherits this class as 'public' and visible in any object of this class.

protected: member functions and objects defined in this section of a class are visible to any class that inherits this class as 'public' or 'protected' and not visible anywhere else.

private: member functions and objects defined in this section of a class are not visible outside the class itself.

Then, the way a class is inherited may restrict the visibility further:

inheriting class A using : public A;    keeps all visibility as above.
inheriting class B using : protected B; makes the public section of B become restricted to protected visibility.
inheriting class C using : private C;   makes all of C become restricted to private visibility.

The following example, inherit.cpp, demonstrates the three restrictions of inheritance with the three sections. (All defaults are private.)

// inherit.cpp public: protected: private:
//
// define three classes A, B, C with variables 1, 2, 3 in
// public: protected: and private: respectively
// Then class D inherits public A, protected B and private C
class A
{
public:
  int a1;
protected:
  int a2;
private:
  int a3;
};

class B
{
public:
  int b1;
protected:
  int b2;
private:
  int b3;
};

class C
{
public:
  int c1;
protected:
  int c2;
private:
  int c3;
};

class D: public A, protected B, private C // various restricted inheritance
{
public:
  int d1; // also here a1
  int test();
protected:
  int d2; // also here a2 b1 b2
private:
  int d3; // also here a3 b3 c1 c2 c3
};

int D::test()
{
  return d1 + a1 + d2 + a2 + b1 + b2 + d3; // all visible inside D
  // not visible inside D: a3 b3 c1 c2 c3
}

int main()
{
  D object_d; // object_d has 12 values of type int in memory
  return object_d.d1 + object_d.a1; // only these are visible outside D
  // not visible: object_d. a2 a3 b1 b2 b3 c1 c2 c3 d2 d3
}

Now, Multiple Inheritance

Pictorially, multiple inheritance can be shown by representing classes by boxes and inheritance by lines. The class at the top, class A, is a base class.
              +---------+
              |         |
              | class A |   a base class defining two int's
              |         |
              | int p,q |
              |         |
              +---------+
               /       \
              /         \
             /           \
   +---------+           +---------+
   |         |           |         |
   | class B |           | class C |   C now has two int's inherited
   |         |           |         |   from A in addition to the
   | int q,r |           | int q,s |   two int's C defines
   |         |           |         |
   +---------+           +---------+
             \           /
              \         /
               \       /
              +---------+   The programmer has a design decision with
              |         |   multiple inheritance: Does the programmer
              | class D |   want (or need) two copies of A in class D?
              |         |   Without 'virtual' you get two copies.
              | int q,t |   Using 'virtual' you get one copy.
              |         |
              +---------+

The following program multinh.cc shows the design getting two copies of class A and thus two sets of A's int p,q; (Further down is virtinh.cc that gets one copy of class A)

// multinh.cc demonstrate multiple inheritance
// design requirement is that B and C have their own A
// compare this to virtinh.cc
class A // base class for B and C
{       // indirect class for D
public:
  int p, q;
};

class B : public A // single inheritance
{
public:
  int q, r; // now have A::p A::q B::q B::r
  void f(int a){ A::q = a;} // because A::q won't be visible
  int g(){ return A::q;}    // define functions to set and get
};

class C : public A // another single inheritance
{
public:
  int q, s; // now have A::p A::q C::q C::s
  void f(int a){ A::q = a;} // four integers
  int g(){ return A::q;}
};

class D : public B, public C // multiple inheritance
{
public:
  int q, t; // now have AB::p AB::q B::q B::r
};          //          AC::p AC::q C::q C::s
            //          D::q  D::t      ten integers

#include <iostream>
using namespace std;

int main()
{
  D stuff;        // ten (10) integers
  stuff.B::p = 1; // really in A, ambiguous
  stuff.C::p = 2; // really in A, picked p from A in C
  // stuff.B::A::q = 3; // can not get to this one as object
  stuff.B::f(3);  // set via a function in B
  // stuff.C::A::q = 4; // can not get to this one as object
  stuff.C::f(4);  // set via a function in C
  stuff.B::q = 5; // the local q in B
  stuff.C::q = 6; // the local q in C
  stuff.D::q = 7; // the local q in D   stuff.q also works
  stuff.r = 8;    // from B unambiguous
  stuff.s = 9;    // from C unambiguous
  stuff.t = 10;   // from D unambiguous
  cout << stuff.B::p << stuff.C::p
       << stuff.B::g() << stuff.C::g()
       << stuff.B::q << stuff.C::q << stuff.D::q
       << stuff.r << stuff.s << stuff.t << endl;
  return 0; // prints 12345678910 to show ten variables stored
}
//
// functions are needed to get/set some variables
// this is due to lack of syntax in present C++   stuff.B::A::q should work

Now, with a different design, the programmer may want only one copy of class A to be inherited into D. This design technique is implemented by inheriting using the key word 'virtual'. The rule is that any duplicate inheritances of a class will not occur if that class is inherited as virtual. If all inheritances are virtual, exactly one copy of the class will be inherited. This is demonstrated by the file virtinh.cc and should be compared to the file above to see the differences.

// virtinh.cc demonstrate multiple inheritance with 'virtual'
// design requirement needs only one copy of A in D
// compare this to multinh.cc
class A // base class for B and C
{       // indirect class for D
public:
  int p, q;
};

class B : virtual public A // single inheritance, virtual
{
public:
  int q, r; // now have A::p A::q B::q B::r
  // no special set/get functions needed
};

class C : virtual public A // another single inheritance, virtual
{         // now have four integers
public:
  int q, s; // now have A::p A::q C::q C::s
};

class D : public B, public C // multiple inheritance, only one A
{         // class inherited because B, C virtuals
public:
  int q, t; // now have A::p A::q B::q B::r
};          //          C::q C::s D::q D::t
            //          now have eight integers

#include <iostream>
using namespace std;

int main()
{
  D stuff;        // eight (8) integers
  stuff.p = 1;    // the local p in A
  stuff.A::q = 2; // the local q in A   note four (4) q's
  stuff.B::q = 3; // the local q in B
  stuff.C::q = 4; // the local q in C
  stuff.q = 5;    // the local q in D
  stuff.r = 6;    // from B unambiguous
  stuff.s = 7;    // from C unambiguous
  stuff.t = 8;    // from D unambiguous
  cout << stuff.p << stuff.A::q
       << stuff.B::q << stuff.C::q << stuff.D::q
       << stuff.r << stuff.s << stuff.t << '\n';
  return 0; // prints 12345678 to show eight variables stored
}

The programmer of a class has a design technique to cause a base class member function to be hidden when a class that inherits it defines a member function with the same function prototype. This technique is known as defining a virtual function. A class inheriting a virtual function gets an actual function as long as it is not overridden by a local definition. An example of a virtual member function is:

virtual double Compute(double X);

Given a class hierarchy, a set of classes using inheritance, the user of the classes can achieve what is called 'polymorphism'. Polymorphism is the result of having pointers to classes in a hierarchy and having a given pointer point to different classes at different times during the execution of a program. This can result in run time polymorphism where the decision of what function to call is made at execution time rather than at compilation time. The cases where polymorphism occurs are indicated in the comments in the examples below. The first example below demonstrates many cases for virtual member functions. The second example below demonstrates some cases of pure virtual member functions and some abstract classes.
// test_vir.cc
// show how a virtual function in a base class gets called
// demonstrate polymorphism
// compare this with testpvir.cc using a pure function to make an abstract class

class Base // terminology: a base class is any class that
{          // does not inherit another class
public:
  void f(void);
  virtual void v(void);
};

class D1 : public Base // note both f and v inherited
{                      // then defined again
public:
  void f(void);
  virtual void v(void);
};

class D2 : public Base // note both f and v inherited
{                      // only v defined again
public:
  virtual void v(void);
};

class D3 : public Base // note both f and v inherited
{                      // only f defined again
public:
  void f(void);
};

class DD1 : public D1 // a class derived from a derived class
{                     // now have potentially three f's and three v's
public:
  void f(void);
  virtual void v(void);
};

int main(void)
{
  Base b, *bp; // declare an object and a pointer for each class
  D1 d1, *d1p;
  D2 d2, *d2p;
  D3 d3, *d3p;
  DD1 dd1, *dd1p;

  b.f();      // calls Base::f()
  b.v();      // Base::v()
  bp=&b;
  bp->f();    // Base::f()
  bp->v();    // Base::v() no difference with virtual yet
  d1.f();     // D1::f()
  d1.v();     // D1::v()
  d1p=&d1;
  d1p->f();   // D1::f()
  d1p->v();   // D1::v()
  bp=&d1;
  bp->f();    // Base::f() choose function belonging to pointer type
  bp->v();    // D1::v()  virtual, choose from object type
  d2.f();     // Base::f() D2 has no f(), get from base type
  d2.v();     // D2::v()
  d2p=&d2;
  d2p->f();   // Base::f()
  d2p->v();   // D2::v()
  bp=&d2;
  bp->f();    // Base::f()
  bp->v();    // D2::v()
  d3.f();     // D3::f()
  d3.v();     // Base::v() D3 has no v(), get from base type
  d3p=&d3;
  d3p->f();   // D3::f()
  d3p->v();   // Base::v() D3 has no v(), get from base type
  bp=&d3;     // now, make a pointer to the base type, point to an
              // object of a derived type of the base type
  bp->f();    // Base::f() choose function belonging to pointer type
  bp->v();    // Base::v() no local function, choose from pointer type!
  dd1.f();    // DD1::f()
  dd1.v();    // DD1::v()
  dd1p=&dd1;
  dd1p->f();  // DD1::f()
  dd1p->v();  // DD1::v()
  bp=&dd1;
  bp->f();    // Base::f()
  bp->v();    // DD1::v()
              // this is run time polymorphism
  return 0;
}

#include <iostream>
using namespace std;

// implementation of each function: each function just outputs its own name
// (the bodies were elided in the source; they are reconstructed here to
// match the output listing below)
void Base::f(void) { cout << "Base::f()" << endl; }
void Base::v(void) { cout << "Base::v()" << endl; }
void D1::f(void)   { cout << "D1::f()"   << endl; }
void D1::v(void)   { cout << "D1::v()"   << endl; }
void D2::v(void)   { cout << "D2::v()"   << endl; }
void D3::f(void)   { cout << "D3::f()"   << endl; }
void DD1::f(void)  { cout << "DD1::f()"  << endl; }
void DD1::v(void)  { cout << "DD1::v()"  << endl; }

/* result of running above file:
Base::f() Base::v()
Base::f() Base::v()
D1::f() D1::v()
D1::f() D1::v()
Base::f() D1::v()
Base::f() D2::v()
Base::f() D2::v()
Base::f() D2::v()
D3::f() Base::v()
D3::f() Base::v()
Base::f() Base::v()
DD1::f() DD1::v()
DD1::f() DD1::v()
Base::f() DD1::v()
*/

The programmer of a class has a design technique to force any class inheriting the class to define a member function with a specific function prototype. An example of such a function is:

virtual void Draw(void) = 0;

The combination of 'virtual' and the syntax = 0; causes this class to be an abstract class and causes any class that inherits this class to define a function void Draw(void) if that class is to be other than an abstract class. An abstract class may define a pointer object but may not define an object.

Now we use virtual ... = 0; to make a class an abstract class. No objects can be created from an abstract class.
// testpvir.cc
// show how a virtual function in a base class gets called
// demonstrate polymorphism
// compare this file to test_vir.cc that does not use pure virtual functions

class Base // a base class
{
public:
  void f(void);
  virtual void v(void) = 0; // makes v a pure virtual function
};                          // makes Base an abstract class

class D1 : public Base // a derived class
{                      // inherits both f and v
public:                // then defines both
  void f(void);
  virtual void v(void);
};

class D2 : public Base // inherits f and v
{                      // defines only v
public:
  virtual void v(void);
};

class D3 : public Base
{
public:
  void f(void); // no v defined, thus D3 abstract also
};

class DD1 : public D1 // a class derived from a derived class
{
public:
  void f(void);
  virtual void v(void);
};

class D4 : public D3 // now define v() so the class is not abstract
{                    // and we can get an instance (object)
public:
  void v(void);
};

int main(void)
{
  Base *bp;    // no object but a pointer for each pure virtual class
  D1 d1, *d1p;
  D2 d2, *d2p;
  D3 *d3p;     // no object because v() not defined
  DD1 dd1, *dd1p;
  D4 d4;       // now v() defined and can get object

  // b.f();   // no longer can get an object for the pure virtual Base class
  // b.v();
  // bp=&b;   // can get pointer, but can not use until pointed at an instance
  // bp->f();
  // bp->v(); // not with pure virtual
  d1.f();     // D1::f()            (the d1, d2 and dd1 sections were elided
  d1.v();     // D1::v()             in the source; they are reconstructed
  d1p=&d1;    //                     here to match the output listing below)
  d1p->f();   // D1::f()
  d1p->v();   // D1::v()
  bp=&d1;
  bp->f();    // Base::f()
  bp->v();    // D1::v()
  d2.f();     // Base::f() D2 has no f()
  d2.v();     // D2::v()
  d2p=&d2;
  d2p->f();   // Base::f()
  d2p->v();   // D2::v()
  bp=&d2;
  bp->f();    // Base::f()
  bp->v();    // D2::v()
  // d3.v();  // not possible, no D3 object (D3 is abstract)
  d3p=&d4;
  d3p->f();   // D3::f()
  d3p->v();   // D4::v() only choice
  bp=&d4;     // now, make a pointer to the base type, point to an
              // object of a derived type of the base type
  bp->f();    // Base::f() choose function belonging to pointer type
  bp->v();    // D4::v() only choice!
  dd1.f();    // DD1::f()
  dd1.v();    // DD1::v()
  dd1p=&dd1;
  dd1p->f();  // DD1::f()
  dd1p->v();  // DD1::v()
  bp=&dd1;
  bp->f();    // Base::f()
  bp->v();    // DD1::v()
              // run time polymorphism
  return 0;
}

#include <iostream>
using namespace std;

// code for all the function prototypes above: each function just prints its
// full name (reconstructed to match the output listing below)
void Base::f(void) { cout << "Base::f()" << endl; }
void D1::f(void)   { cout << "D1::f()"   << endl; }
void D1::v(void)   { cout << "D1::v()"   << endl; }
void D2::v(void)   { cout << "D2::v()"   << endl; }
void D3::f(void)   { cout << "D3::f()"   << endl; }
void DD1::f(void)  { cout << "DD1::f()"  << endl; }
void DD1::v(void)  { cout << "DD1::v()"  << endl; }
void D4::v(void)   { cout << "D4::v()"   << endl; }

/* results of running above program:
D1::f() D1::v()
D1::f() D1::v()
Base::f() D1::v()
Base::f() D2::v()
Base::f() D2::v()
Base::f() D2::v()
D3::f() D4::v()
Base::f() D4::v()
DD1::f() DD1::v()
DD1::f() DD1::v()
Base::f() DD1::v()
*/

see homework assignment for details
see homework assignment for details

The key word 'friend' is used to allow visibility to the private part of a class. In the following example, class A1 declares that class B1 is a friend, thus allowing class B1 to have access to the private part of class A1. The 'friend' declaration is used most often when a pair of classes are being defined that have close connection and much interaction.

// friends.cpp
#include <iostream>
using namespace std;

class A1; // incomplete type definition needed because B1 refers to A1
          // and A1 refers to B1

class B1
{
public:
  B1(int pub, int pri){b1pub=pub; b1pri=pri;}
  void B1_out(A1 aa);
  int b1pub;
private:
  int b1pri;
};

class A1
{
  friend class B1; // says B1 can access this class's private part
public:
  A1(int pub, int pri){a1pub=pub; a1pri=pri;}
  void A1_out(B1 bb);
  int a1pub;
private:
  int a1pri;
};

void A1::A1_out(B1 bb)
{
  cout << "A1_out " << a1pub << a1pri << bb.b1pub /* << bb.b1pri */ << '\n';
  // B1 public OK, but no access to B1 private
}

void B1::B1_out(A1 aa)
{
  cout << "B1_out " << b1pub << b1pri << aa.a1pub << aa.a1pri << '\n';
  // both public and private OK because of 'friend'
}

int main(void)
{
  A1 a(1,2);
  B1 b(3,4);
  cout << "friends.cpp \n";
  a.A1_out(b);
  b.B1_out(a);
  return 0;
}
// results: friends.cpp
//          A1_out 123
//          B1_out 3412

There is a special case when an operator needs to be defined for another class and variables in the private part of a class need to be used by that operator.
The most common case is when the cout operator << needs to be defined for a class. The operator is not a member function of the class being defined but need to access private variables. Thus a special use of 'friend' allows only the operator to have access to the private part. friend ostream &operator << (ostream &stream, A_class x); says that the operator << defined for reference to class ostream is allowed visibility to A_class private section. In general, any C++ predefined operator can be given a definition for a class. This example also shows the comparison operator < being defined. Note the function return type 'bool' is typically used for comparison operators. The first object to compare is this object, denoted by the key work 'this'. The second object used in the comparison is a reference to another object, 'y'. A simple comparison was shown here yet the class may define whatever makes sense for 'greater than' for the operator '<'. The precedence of operators can not be changed. A new form of constructor initialization is also demonstrated. The syntax : variable(constant or argument) initializes the variable in the object to the constant or argument. Actually any valid expression based on arguments and constants may be used for the initialization expression in the parenthesis. 
// cout_friend.cpp define the cout operator << for a class // define operator < for a class // demonstrate constructor initialization :var(arg) // a_class.h should be in a file by itself #include <iostream> // basic console input/output using namespace std; // bring standard include files in scope class A_class { public: A_class(int Akey, double Aval): key(Akey), val(Aval) {} friend ostream &operator << (ostream &stream, A_class x); bool operator < (A_class &y) {return this->key < y.key;} private: // bug in VC++ means this has to be public in VC++ only int key; double val; }; // #include "a_class.h" // would be needed if lines above were in a_class.h int main() { A_class a(5,7.5); A_class b(6,2.4); cout << "a= " << a << " b= " << b << endl; // using A_class << if( a < b) // using A_class < { cout << "OK a < b because 5 < 6" << endl; } else { cout << "BAD compare on a < b fix operator < definition" << endl; } return 0; } // end main // The lines below should be in a file a_class.cpp .cc .C // #include "a_class.h" // The 'friend' statement in A_class makes key and val visible here ostream &operator<< (ostream &stream, A_class x) { stream << x.key << " "; stream << " $" << x.val << " "; // just demonstrating added output return stream; } // output of execution is: // a= 5 $7.5 b= 6 $2.4 // OK a < b because 5 < 6 The next topic is the four uses of the key word 'static'. The following file static.cc is well commented to show the uses. Note the execution results that show the single location B1 is shared by all objects of class Base. // static.cc test various cases note //! is about 'static' FOUR USES !! // 1 a static member function // 2 a static member variable (shared by all objects of this class // 3 local to this file // 4 static (not on stack) variable in function - holds value // between calls to the function #include <iostream> using namespace std; static int funct(int stuff); //! 
local to this file, not seen by linker 3 class Base { public: Base(void) { B2=2; B3=3;} static void BF1(void); //! definition can not go here 1 void BF2(void) { cout << B1 << " " << B2 << " BF2 in Base \n"; B2=5; } private: static int B1; //! no =1; here (shared by all objects of this class) 2 int B2; int B3; }; int Base::B1 = 1; //! has to be at file scope. Not in 'main' or in class 2 //! means local to this file, not visible to linker 3 static Base c; int main(void) { funct(-2); //! call to function, local to this file } static int funct(int stuff) //! local to this file, not seen by linker 3 { static int i=4; // hold value between calls to funct 4 int j=i; i++; return stuff+3; } // results, note B1 is changed in object 'a' and B1 gets changed also in 'b' // while B2 is changed in object 'a' The English language terminology "is a" A is a B would usually mean class A would inherit class B. Shown below: a vehicle is a transporter a car is a vehicle The English language terminology "has a" A has a B would usually mean class A is composed of one or more objects of type B. This is called assembly, construction or composition in various books on object oriented programming. Shown below: a car has an engine a car has a hood a car has wheels a car has doors Templates allow the developer to leave one or more types to be defined by the user of the template. Functions take values as parameters, but templates take types as parameters. The first example shows the syntax and provides many comments about the definition and use of a template of a class: // test1_template.cc basic test of templates #include <iostream> #include <string> // this is definitely NOT !!
using namespace std; template <class T> class foo; // a declaration that foo is a template class template <class T> // declaration or specification of foo class foo // the user gets to choose a type for 'T' { public: foo(); void put(T value); T get(int &j); private: int mine(T &a_value); T a; int i; }; // remember the ending semicolon template <class T> // implementation or body of foo foo<T>::foo() { i = 1; // can not initialize 'a' here because we do not know what type 'T' is } template <class T> void foo<T>:: put(T value) { a = value; // since 'a' and 'value' are both type 'T' this is OK } template <class T> T foo<T>::get(int &j) { j = ++i; // 'j' was passed by reference and gets incremented 'i' return a; // return 'a' whatever type it is } int main(int argc, char *argv[]) { foo<double> x; // in object 'x', the type 'T' is double double y; int j; x.put(1.5); // put the value 1.5 into 'a' in 'x' y = x.get(j); // increment 'i' in 'x' and return 'a' cout << "j= " << j << " y= " << y << endl; foo<string> glob; // in object 'glob' , the type 'T' is string::string string y_string; glob.put("abc"); // put the string "abc" int 'a' in 'glob' y_string = glob.get(j); // increment 'i' in 'glob' and return 'a' cout << "j= " << j << " y_string= " << y_string << endl; return 0; } Now, we define a template function where the type will be chosen by the user. The test case uses numeric types because + - * / have to be defined for the type 'typ' Notice the difference in using a template function, there is no < > needed because the types of the function parameters select (or determine) the type the template will be instantiated with. 
Note the differences in the results for the different user types: // test_template.cpp define and use a function template // define a template template <class typ > // user chooses a type for 'typ' typ funct( typ x, typ y, typ z) // the function takes 3 parameters { typ a, b; a = x + y; // our template limits the users b = y - z; // choice for 'typ' to a type return a * a / b; // that has +, -, *, and / defined } #include <iostream> // 'typ' must also have operator << defined using namespace std; int main() { int i = funct(3, 5, 7); // integer for typ long int j = funct(3L, 5L, 7L); // long int for typ unsigned int k = funct(3U, 5U, 7U); // unsigned int for typ float x = funct(3.1F, 5.2F, 7.3F); // float for typ double y = funct(3.1, 5.2, 7.3); // double for typ cout << i << " int, " << j << " long, " << k << " unsigned, " << x << " float, " << y << " double.\n"; return 0; } // end main // output is // -32 int, -32 long, 0 unsigned, -32.8047 float, -32.8048 double. // note the very different answer for the unsigned type // remember a small negative number becomes a very large positive number Now a more complicated template that takes two types as parameters. This example has a no parameter constructor for the template class and a two parameter constructor. The comments indicate which is being used. // template_class.cc a template that needs two classes to instantiate // this should be in a file my_class.h template <class C1, class C2> // user needs two class types for 'C1' and 'C2' class My_class { public: My_class(void){} My_class(C1 Adata1, C2 Adata2) { data1 = Adata1; /*data2 = Adata2;*/} // other functions using types C1 and C2 // using statements such as below put requirements on C1 and C2 int Get_mode(void) { return data1.i; } int Get_mode(C2 Adata3) { return Adata3.mode; } // ... protected: C1 data1; // C2 data2(7); // initialization not allowed here // ...
}; // in users main program file // #include <my_class.h> class A_class // users class that will be OK for template type 'C1' { public: A_class(void) { i = 3; } int i; }; class B_class // users class that will be OK for template type 'C2' { public: B_class(int Amode) { mode = Amode; stuff = 0.0; } int mode; private: double stuff; }; #include <iostream> using namespace std; int main(void) { A_class AA; // we need an object of type A_class B_class B1(7); // we need an object of type B_class B_class B2(15); // another object My_class<A_class, B_class> stuff; // 'stuff' is the instantiated template cout << "stuff.Get_mode(B1) " << stuff.Get_mode(B1) << endl; // now instantiate the template again to get the object 'things' My_class<A_class, B_class> things(AA, B2); // using the constructor to // pass objects 'AA' and 'B2" cout << "things.Get_mode() " << things.Get_mode() << endl; return 0; } // output from execution is: // stuff.Get_mode(B1) 7 // things.Get_mode() 3 test_vector.cpp test_vector.out A demonstration of using some functions from STL algorithm (Note: These are template functions and not template classes that define data structures. Thus, the user must create data objects for the algorithms to work on.) test_algorithm1.cc Special case that works in VC++ (user template function) test_algorithms.cpp Modified case that works in g++ test_algorithms.cc Unfortunately three compilers differ on handling the type 'string' This is one of the most complex STL header files. The typedef for string had to instantiate basic_string<...> which in turn has to instantiate char_traits<...>. Then there are template functions that must be instantiated. Visual C++ works most like the standard but may still have bugs: test_string.cpp g++ may be doing 'swap' correctly but does not have all the 'compare' test_string.cc On UMBC node 'retriever' SGI CC outputs a bunch of ^E^E^E ... and a bunch of line feeds on some commands. 
.capacity() did not work, possibly a problem with header files. Seems much better on node 'gl'. test_string.C Be sure you test whatever compiler you use. It is better to not use a feature that is non standard. Work around problems with as simple code as possible. Now, consider that the type 'string' is a STL container. Thus, many algorithms from 'algorithm' will work, for example: test_string2.cc This works in both g++ and CC on gl machines. Sample programs: test_list.cpp test_set.cpp test_priority_queue.cpp file io example see project description Go over HW5 - HW8 (fix it if you did not get it right) For the STL library <vector> <algorithm> <string> <list> <set> Know how to instantiate the template to get an object. Know how to get initial values into the object. Know how to add more values to the object. Know how to delete or remove or erase values from the object. Know how to print the object. Know how to look up member functions and algorithm template functions to be able to call the functions. test_print_template.cpp shows how to instantiate, initialize and print various STL templates. 
// test_print_template.cpp a general template function to print a STL container #include <iostream> #include <algorithm> #include <functional> #include <string> #include <vector> #include <list> #include <set> #include <queue> #include <stack> using namespace std; // a template for a general STL container print function template <class forward_iterator> void print(forward_iterator first, forward_iterator last, const char* title) { cout << title << endl; while (first != last) cout << *first++ << " "; cout << endl << endl; } int main() { int i; int n; // size of container int data[4] = {3, 7, 5, 4}; print(data, data+4, "normal integer array"); vector<int> v1(4); v1[0] = 3; v1[1] = 7; v1[2] = 5; v1[3] = 4; print(v1.begin(), v1.end(), "vector<int> data"); list<string> l1; l1.push_back("3"); l1.push_back("7"); l1.push_back("5"); l1.push_back("4"); print(l1.begin(), l1.end(), "list<string> data"); set<int> s1; s1.insert(3); s1.insert(7); s1.insert(5); s1.insert(4); print(s1.begin(), s1.end(), "set<int> data - note sorted"); multiset<int> ms; ms.insert(3); ms.insert(7); ms.insert(5); ms.insert(4); print(ms.begin(), ms.end(), "multiset<int> data - note sorted"); queue<int> q1; q1.push(3); q1.push(7); q1.push(5); q1.push(4); // print(q1.begin(), q1.end(), "queue<int> data"); // won't work cout << "special print loop for queue<int>, destroys queue" << endl; n = q1.size(); // can not be in 'for' statement for(i=0; i<n; i++) {cout << q1.front() << " "; q1.pop();} cout << endl << endl; priority_queue<int> pq; pq.push(4); pq.push(5); pq.push(7); pq.push(3); // print(pq.begin(), pq.end(), "priority_queue<int> data"); // won't work cout << "special print loop for priority_queue<int> sorted, destroys priority_queue" << endl; n = pq.size(); // can not be in 'for' statement for(i=0; i<n; i++) {cout << pq.top() << " "; pq.pop();} cout << endl << endl; deque<int> dq; dq.push_front(4); dq.push_front(5); dq.push_front(7); dq.push_front(3); // print(dq.begin(), dq.end(), "deque<int> 
data"); // won't work cout << "special print loop for deque<int>, destroys deque" << endl; n = dq.size(); // can not be in 'for' statement for(i=0; i<n; i++) {cout << dq.front() << " "; dq.pop_front();} cout << endl << endl; stack<int> st; st.push(4); st.push(5); st.push(7); st.push(3); // print(st.begin(), st.end(), "stack<int> data"); won't work cout << "special print loop for stack<int>, destroys stack" << endl; n = st.size(); // can not be in 'for' statement for(i=0; i<n; i++) {cout << st.top() << " "; st.pop();} cout << endl << endl; return 0; } // output from execution is: // normal integer array // 3 7 5 4 // // vector<int> data // 3 7 5 4 // // list<string> data // 3 7 5 4 // // set<int> data - note sorted // 3 4 5 7 // // multiset<int> data - note sorted // 3 4 5 7 // // special print loop for queue<int>, destroys queue // 3 7 5 4 // // special print loop for priority_queue<int> sorted, destroys priority_queue // 7 5 4 3 // // special print loop for deque<int>, destroys deque // 3 7 5 4 // // special print loop for stack<int>, destroys stack // 3 7 5 4 // see homework assignment page for details All about const in declaring types of pointers: Note what works, what is required and what is not allowed: // test_const.cpp pointer and value constant and non constant int main() { int i1 = 1; int i2 = 2; int i3 = 3; int i5 = 5; const int ic = 9; // variable 'ic' unchangeable // naming is pointer_object // con for constant // var for variable (not const) // four cases are possible: int * var_var; int * const con_var = &i1; // requires initialization const int * var_con; const int * const con_con = &i5; // requires initialization // ^ ^-- refers to the pointer // L__ refers to the value (dereferenced pointer) var_var = &i2; // con_var = &i2; // not allowed var_con = &i3; // con_con = &i2; // not allowed *var_var = 7; *con_var = 8; // *var_con = 7; // not allowed // *con_con = 7; // not allowed // var_var = &ic; // not allowed // con_var = &ic; // not allowed var_con = &ic;
// con_con = &ic; // not allowed return 0; } /* test_swap.c older style of test_swap.cpp */ void swap(int *v1, int *v2) /* pass by pointer */ { int tmp = *v2; *v2 = *v1; *v1 = tmp; } /* end swap */ #include <stdio.h> int main() /* C version of test swap */ { int i=10; int j=20; printf("before swap i= %d j= %d \n", i, j); swap(&i,&j); /* note users call requires & */ printf("after swap i= %d j= %d \n", i, j); return 0; } /* end main */ // test_swap.cpp demonstrate pass parameter by reference void swap(int &v1, int &v2) // pass parameter by reference { int tmp = v2; v2 = v1; v1 = tmp; } // end swap #include <iostream> using namespace std; int main() // C++ test swap { int i=10; int j=20; cout << "before swap i= " << i << " j= " << j << endl; swap(i,j); // note: just use object names, no '&' cout << "after swap i= " << i << " j= " << j << endl; return 0; } // end main /* test_swap_str.c older style of test_swap_str.cpp */ void swap(char**v1, char**v2) /* pass by pointer */ { /* note has to be pointer to pointer */ char* tmp = *v2; /* in order to swap pointers */ *v2 = *v1; *v1 = tmp; } /* end swap */ #include <stdio.h> int main() /* C version of test swap str */ { char* i="abcdef"; char* j="ghi"; printf("before swap i= %s j= %s \n", i, j); swap(&i, &j); /* note users call needs & */ printf("after swap i= %s j= %s \n", i, j); return 0; } /* end main */ // test_swap_str.cpp demonstrate pass parameter by reference void swap(char* &v1, char* &v2) { char* tmp = v2; v2 = v1; v1 = tmp; } // end swap #include <iostream> using namespace std; int main() // C++ test swap { char* i="abcdef"; char* j="ghi"; cout << "before swap i= " << i << " j= " << j << endl; swap(i,j); cout << "after swap i= " << i << " j= " << j << endl; return 0; } // end main // test_swap_template.cpp demonstrate pass parameter by reference template <class some_type> void swap_any(some_type &v1, some_type &v2) { some_type tmp = v2; v2 = v1; v1 = tmp; } // end swap_any #include <iostream> #include <string> using
namespace std; int main() // C++ test swap { int i=10; int j=20; cout << "before swap i= " << i << " j= " << j << endl; swap_any(i, j); cout << "after swap i= " << i << " j= " << j << endl; string s10("string 10"); string s20("string 20, longer OK"); cout << "before swap s10= " << s10 << " s20= " << s20 << endl; swap_any(s10, s20); cout << "after swap s10= " << s10 << " s20= " << s20 << endl; return 0; } // end main Then, for using the various cases of 'const' in parameter passing, see test_parameters.cpp As a side issue, here is a C++ program that uses a dynamic matrix built on the vector template class, that needs no pointers, no malloc or new, and can use conventional matrix subscripting. // test_matrix.cpp dynamic matrix, just vector of vectors #include <vector> #include <iostream> using namespace std; int main() { typedef vector<double> d_vec; typedef vector<d_vec> d_mat; d_mat mat; // dynamic size matrix, initially 0 by 0 d_vec row; cout << "test_matrix.cpp" << endl; row.push_back(1.0); row.push_back(2.0); row.push_back(3.0); // build the first row mat.push_back(row); row[0]=4.0; mat.push_back(row); row[1]=5.0; mat.push_back(row); // now have 3 by 3 1.0 2.0 3.0 // 4.0 2.0 3.0 // 4.0 5.0 3.0 cout << "mat[2][2]=" << mat[2][2] << endl; mat[1][1]=9.0; mat[0].push_back(4.0); mat[1].push_back(6.0); mat[2].push_back(7.0); row.push_back(8.0); mat.push_back(row); // now have 4 by 4 1.0 2.0 3.0 4.0 // 4.0 9.0 3.0 6.0 // 4.0 5.0 3.0 7.0 // 4.0 5.0 3.0 8.0 cout << "mat[2][3]=" << mat[2][3] << endl; mat[2][3]=7.0001; // obviously do not use mat[4][4], it does not exist for(int i=0; i<mat.size(); i++) for(int j=0; j<mat[i].size(); j++) cout << "mat[" << i << "][" << j << "]= " << mat[i][j] << endl; return 0; } Sometimes it is convenient to have a member function call itself. This is called a recursive member function. There must be a way for the function to return without calling itself or else there is an infinite loop. First, look at the simplest recursive function, n!
called n factorial. 0! is defined as 1 1! is defined as 1 * 0! = 1 2! is defined as 2 * 1! = 2 3! is defined as 3 * 2! = 6 Note that the definition is 'recursive' because n! is defined as n * (n-1)! Each bigger factorial is defined using the next smaller factorial. Because n! grows fast with increasing n, integer overflow will occur. Where the overflow (a number bigger than a C++ type can hold) occurs depends on what computer and possibly what compiler you are using. Beware! Overflow is not easily detected as can be seen in the output below. Note how the function 'factorial' is coded from the definition of factorial. // test_factorial.cpp the simplest example of a recursive function // a recursive function is a function that calls itself static int factorial(int n) // n! is n factorial = 1*2*3*...*(n-1)*n { if( n <= 1 ) return 1; // must have a way to stop recursion return n * factorial(n-1); // factorial calls factorial with n-1 } // n * (n-1) * (n-2) * ... * (1) #include <iostream> using namespace std; int main() { cout << " 0!=" << factorial(0) << endl; // Yes, 0! is one cout << " 1!=" << factorial(1) << endl; cout << " 2!=" << factorial(2) << endl; cout << " 3!=" << factorial(3) << endl; cout << " 4!=" << factorial(4) << endl; cout << " 5!=" << factorial(5) << endl; cout << " 6!=" << factorial(6) << endl; cout << " 7!=" << factorial(7) << endl; cout << " 8!=" << factorial(8) << endl; cout << " 9!=" << factorial(9) << endl; cout << "10!=" << factorial(10) << endl; cout << "11!=" << factorial(11) << endl; cout << "12!=" << factorial(12) << endl; cout << "13!=" << factorial(13) << endl; cout << "14!=" << factorial(14) << endl; cout << "15!=" << factorial(15) << endl; // expect a problem with cout << "16!=" << factorial(16) << endl; // integer overflow cout << "17!=" << factorial(17) << endl; // uncaught in C++ (Bad!) 
cout << "18!=" << factorial(18) << endl; return 0; } // output of execution is: // 0!=1 // 1!=1 // 2!=2 // 3!=6 // 4!=24 // 5!=120 // 6!=720 // 7!=5040 // 8!=40320 // 9!=362880 // 10!=3628800 // 11!=39916800 // 12!=479001600 // 13!=1932053504 // wrong! 13! = 13 * 12!, must end in two zeros // 14!=1278945280 // wrong! and no indication! // 15!=2004310016 // wrong! // 16!=2004189184 // wrong! // 17!=-288522240 // wrong and obvious if you check your results // 18!=-898433024 // Only sometimes does integer overflow go negative Now, look at a program to solve the eight queens problem. The problem is defined as having an 8 by 8 chess board and you must find a way to place eight queens on the chess board such that no queen can capture any other queen. Queens can capture horizontally, vertically and diagonally. Rather obviously, no two queens can be in the same column or same row. The data structure is based on an object, a queen, and a simple linked list of such objects. The execution model is to recursively traverse the linked list. The author of this program went a little overboard with recursion but it still serves as an example. The student must be very careful in analyzing the program to keep track of which object a function is working on during each recursive call. // eight_queens.cpp demonstrate recursive use of objects // place eight queens on a chess board so they do not attack each other #include <iostream> using namespace std; class Queen // in main() LastQueen is head of the list { // of queen objects.
public: Queen(int C, Queen *Ngh); int First(void); void Print(void); private: int CanAttack(int R, int C); // private member functions int TestOrAdvance(void); // just used internally int Next(void); // try next Row int Row; // Row for this Queen int Column; // Column for this Queen Queen *Neighbor; // linked list of Queens }; Queen::Queen(int C, Queen *Ngh) // normal constructor { Column = C; // Column will not change Row = 0; // Row will be computed Neighbor = Ngh; // linked list of Queen objects } int Queen::First(void) { Row = 1; if (Neighbor && Neighbor -> First()) // order is important { // note recursion return TestOrAdvance(); } return 1; } void Queen::Print(void) { if(Neighbor) { Neighbor -> Print(); // recursively print data inside all Queens } cout << "column " << Column << " row " << Row << endl; } int Queen::TestOrAdvance(void) { if(Neighbor && Neighbor -> CanAttack(Row, Column)) { return Next(); // the member function 'next' does the 'advance' } return 1; } int Queen::CanAttack(int R, int C) // determine if this queen attacked { int Cd; if (Row == R) return 1; // a set of rules that detect in same Cd = C - Column; // row or column or diagonal if((Row+Cd == R) || (Row-Cd == R)) return 1; if(Neighbor) { return Neighbor -> CanAttack(R, C); // recursive } return 0; } int Queen::Next(void) // advance Row but protect against bugs { if (Row == 8) { if (!(Neighbor && Neighbor->Next())) return 0; Row = 0; } Row = Row + 1; return TestOrAdvance(); } int main() { Queen *LastQueen = 0; for ( int i=1; i <= 8; i++) // initialize { LastQueen = new Queen(i, LastQueen); } if (LastQueen -> First()) // solve problem { LastQueen -> Print(); // print solution } return 0; // finished } Output of execution of eight_queens column 1 row 1 column 2 row 5 column 3 row 8 column 4 row 6 column 5 row 3 column 6 row 7 column 7 row 2 column 8 row 4 When a class has only objects in it, that is no pointers that are set by the 'new' operator, a class is reasonably safe. 
But, when using the 'new' operator in a class: Use all three of these C++ constructs to build a safe class: Use the keyword 'explicit' on all constructors to prevent use of "=" Use a 'copy constructor' that really makes new space and copies an existing object into the new space. Use 'operator=', in other words, code your own '=' operator for a = b; This uses the same type of code as the copy constructor with the addition of a statement 'return *this' The links show test_safe1.cpp a bad example. Then test_safe2.cpp safe but ugly. Finally test_safev.cpp neat using STL. Note: The STL version provides a dynamic vector, yet it is an object, thus the 'explicit', copy constructor, and operator= are not needed. The above set works with Microsoft Visual C++, below with g++ The links show test_safe1.cc a bad example. Then test_safe2.cc safe but ugly. Finally test_safev.cc neat using STL. Note: The STL version provides a dynamic vector, yet it is an object, thus the 'explicit', copy constructor, and operator= are not needed. Suppose you want to make some C code so that it works in both C and C++. Well, the technique is to use a specific form of a header file for your C code. The commented example below shows the header file: /* c_header.h header file that works for both C and C++ */ /* note that this works in both C and C++ because: 1) only C style comments are used 2) name mangling is turned off using extern "C" { ... 
} in #ifdef __cplusplus <-- a special name 3) good practice is used to be sure included just once */ #ifndef C_HEADER_H_ /* to only include once */ #define C_HEADER_H_ /* to only include once */ #ifdef __cplusplus /* make it C if in C++ compiler */ extern "C" { /* never get here in a C compiler */ #endif #include <time.h> /* other C header files */ void some_f(float seconds); /* types and function prototypes */ /* keep strictly C, not C++ */ #ifdef __cplusplus /* just closing } for C in C++ compiler */ } #endif #endif /* C_HEADER_H_ to only include once */ An example C program to be sure above header file works /* test_c_header.c */ #include "c_header.h" int main() { return 0; } And, an example C++ program to be sure above header file works // test_c_header.cpp #include "c_header.h" int main() { return 0; } Now, demonstrate calling a C compiled function from a C++ compiled function and calling a C++ compiled function from a C compiled function. The C compiled function is compiled gcc -c call_c.c to produce an object file call_c.o /* call_c.c goes with call_cc.cpp */ /* no main() in this file */ char call_cc(int x, float y); // function prototype // function defined in C++ #include <stdio.h> char call_c(int x, float y) // simple C function { char c; /* x and y passed through */ printf("in c: about to call a C++ function\n"); c = call_cc(x, y); printf("in c: returned from call_cc with %c \n", c); printf("in c: x = %d, y = %g \n", x, y); return 'a'; // return something to C++ } In order to call the above C function (or any C library) from C++ the essential statement is to place the C related statements in a block extern "C" { } Any code or files in the block is treated as C code. The function names are not mangled as they are in C++. The calling and return code is for C rather than for C++. It turns out, you can use C++ statements inside the extern "C" { } but it is not recommended. 
Now, compile the main() and link in the call_c.o with the command g++ -o call_cc call_cc.cpp call_c.o // call_cc.cpp calls a C program which calls back to this C++ program // must be compiled with call_cc.c or at least linked with // the object file from the C program extern "C" { // function prototype for a C function char call_c(int x, float y); } #include <iostream> using namespace std; int main() { char c; int x=5; float y = 1.5; cout << "in C++ about to call a c: function, call_c" << endl; c = call_c(x, y); cout << "in C++: returned from call_c with " << c << endl; cout << "in C++: x = " << x << " y = " << y << endl; return 0; } extern "C" { // a C++ function callable by a C function #include <stdio.h> // probably should use C++ rather than C I/O char call_cc(int x, float y) { try // can use C++ in C callable function { // but must load C++ and C libraries cout << "in C++ extern \"C\" test that C++ allowed" << endl; } catch(...) {printf("some exception thrown in C++\n");} printf("in C++ called from c: x = %d, y = %g \n", x, y); return 'q'; } } // output of execution is: // in C++ about to call a c: function, call_c // in c: about to call a C++ function // in C++ extern "C" test that C++ allowed // in C++ called from c: x = 5, y = 1.5 // in c: returned from call_cc with q // in c: x = 5, y = 1.5 // in C++: returned from call_c with a // in C++: x = 5 y = 1.5 The above program demonstrated a main C++ function calling a C function 'call_c' then the C function 'call_c' called a function 'call_cc' compiled in C++ yet having extern "C" { } The C++ compiled function returned to the C function which finally returned to main(). One additional point: A class member function can only be called from a C function when the class member function is declared "static" See static.cc The X Windows library, for example, can be used with a C++ program, keeping in mind that the X Windows call-backs must be to C++ code enclosed in extern "C" { } blocks. 
(And be static member functions if the call-back is to a class member function.) The STL <complex> defines a template class complex that provides complex arithmetic and a few complex math functions. A sample use is test_complex.cpp A simple way to extend a standard C++ library class is to just define functions that work on that classes type. An extension is defined in complexf.h and has the corresponding code in complexf.cpp A test of the fuller definition of the complex<double> class is shown in test_complexf.cpp which uses the "complexf.h" header file. Compile with g++ -o test_complexf test_complexf.cpp complexf.cpp Execute test_complexf A more general extension would provide template functions rather than functions for a specific type. An example of "operator" functions that can not be member functions using a non template, very small, implementation of a complex class: // my_complex.cc a partial, non template, complex class // demonstrate some operator+ must be non-member functions // operator<< and operator >> must be non-member functions #include <iostream> using namespace std; class my_complex { public: my_complex() { re=0.0; im=0.0; } // three typical constructors my_complex(double x) { re=x; im=0.0; } my_complex(double x, double y):re(x),im(y) {} // { re=x; im=y; } my_complex operator+(my_complex & z) // for c = a + b { return my_complex(re+z.real(), im+z.imag()); } my_complex operator+(double x) // for c = a + 3.0 { return my_complex(re+x, im); } // many other operators would typically be defined double real(void) { return re; } double imag(void) { return im; } friend ostream &operator<< (ostream &stream, my_complex &z); private: double re; // the data values for this class double im; }; // non member functions my_complex operator+(double x, my_complex z) // for c = 3.0 + a { return z+x; } ostream &operator<< (ostream &stream, my_complex &z) { stream << "(" << z.real() << "," << z.imag() << ")"; return stream; } int main() { my_complex a(2.0, 3.0); cout 
<< a << " =a\n"; my_complex b(4.0); cout << b << " =b\n"; my_complex c = my_complex(5.0, 6.0); cout << c << " =c\n"; my_complex d; cout << d << " =d\n"; c = a + c; cout << c << " c=a+c\n"; c = 3.0 + a; // not legal without non member function operator+ cout << c << " c=3.0+a\n"; c = a + 3.0; // not legal without second operator+ on double cout << c << " c=a+3.0\n"; c = a + 3; // not legal without second operator+ on double // uses standard "C" C++ conversion int -> float -> double cout << c << " c=a+3\n"; d = c.imag(); cout << d << " d=c.imag()\n"; return 0; } // Output from running my_complex.cc // (2,3) =a // (4,0) =b // (5,6) =c // (0,0) =d // (7,9) c=a+c // (5,3) c=3.0+a // (5,3) c=a+3.0 // (5,3) c=a+3 // (3,0) d=c.imag() We will use a very crude screen plot output to show physically how objects can be created and manipulated. The very crude drawing is output by a very old C program, header file vt100_plot.h and code file vt100_plot.c. We added the #ifdef __cplusplus to the C header file per the previous lecture in order to use the old C code with C and with object oriented C++. For demonstration purposes, all the C++ header files and code files are shown in one physical file. Each file name is shown as comments and should be a separate file. (Uncommenting the //#include lines.) Note the public, protected and private parts of class Shape. Note how classes Circle and Rectangle inherit class Shape. A sample main() shows just a few usage examples. 
The program may be compiled and executed using:

  gcc -c vt100_plot.c
  g++ -o test_shape3 test_shape3.cpp vt100_plot.o
  test_shape3

  // test_shape3.cpp  all in one file, should really be split up
  #include "vt100_plot.h"

  // shape3.h
  struct Point{float X; float Y;};

  class Shape   // modified for shape made up of lists of shapes
  {
    public:
      Shape();                         // constructor
      ~Shape();                        // destructor, mainly deletes linked list
      void SetCenter(Point ACenter);
      Point Center();
      void Move(float dx, float dy);   // may also want MoveTo(Point XY)
      void AddComponent(Shape *Ashape);
      void AddSibling(Shape *Ashape);
      virtual void Draw();
    protected:
      Point TheCenter;
    private:
      Shape *list_of_component_shapes;
      Shape *list_of_sibling_shapes;
  };

  // shape3.cpp
  //#include "shape3.h"
  Shape::Shape()   // body for constructor, default initialization
  {
    TheCenter.X = 0;   // make it legal the first time, keep it legal
    TheCenter.Y = 0;
    list_of_component_shapes = 0;
    list_of_sibling_shapes = 0;
  }
  Shape::~Shape()  // body for destructor
  {
    // could use 'delete' in here on list_of_component_shapes
    // and list_of_sibling_shapes
  }
  void Shape::SetCenter(Point ACenter)   // body for function SetCenter
  {
    TheCenter = ACenter;
  }
  Point Shape::Center()
  {
    return TheCenter;
  }
  void Shape::Move(float dx, float dy)
  {
    TheCenter.X += dx;
    TheCenter.Y += dy;
  }
  void Shape::AddComponent(Shape *Ashape)
  {
    Ashape->list_of_component_shapes=list_of_component_shapes;
    list_of_component_shapes=Ashape;
  }
  void Shape::AddSibling(Shape *Ashape)
  {
    Ashape->list_of_sibling_shapes=list_of_sibling_shapes;
    list_of_sibling_shapes=Ashape;
  }
  void Shape::Draw()   // draws linked list of sub-shapes
  {
    Shape *local_list = list_of_component_shapes;
    while(local_list){
      local_list->Draw();
      local_list=local_list->list_of_component_shapes;
    }
  }

  // circle3.h
  //#include "shape3.h"
  class Circle : public Shape
  {
    public:
      Circle(float X, float Y, float R);
      void SetRadius(float Aradius);
      void Draw();
      float Radius();
    private:
      float TheRadius;
  };

  // circle3.cpp
  //#include "circle3.h"
  Circle::Circle(float X, float Y, float R)
  {
    Point p={X,Y};
    SetCenter(p);
    TheRadius = R;
  }
  void Circle::SetRadius(float Aradius)
  {
    TheRadius = Aradius;
  }
  void Circle::Draw()
  {
    DRAW_CIRCLE(TheCenter.X, TheCenter.Y, TheRadius);
  }
  float Circle::Radius()
  {
    return TheRadius;
  }

  // rectangle3.h
  //#include "shape3.h"
  class Rectangle : public Shape
  {
    public:
      Rectangle(float X, float Y, float W, float H);
      void SetSize(float Awidth, float Aheight);
      void Draw();
      float Width();
      float Height();
    private:
      float TheWidth;
      float TheHeight;
  };

  // rectangle3.cpp
  //#include "rectangle3.h"
  Rectangle::Rectangle(float X, float Y, float W, float H)
  {
    TheCenter.X = X;
    TheCenter.Y = Y;
    TheWidth = W;
    TheHeight = H;
  }
  void Rectangle::SetSize(float Awidth, float Aheight)
  {
    TheWidth = Awidth;
    TheHeight = Aheight;
  }
  void Rectangle::Draw()
  {
    DRAW_RECTANGLE(TheCenter.X, TheCenter.Y, TheWidth, TheHeight);
  }
  float Rectangle::Height()
  {
    return TheHeight;
  }
  float Rectangle::Width()
  {
    return TheWidth;
  }

  // test_shape3.cpp
  //#include "circle3.h"
  //#include "rectangle3.h"
  int main()   // just a few tests
  {
    Circle a_circle(1.0F, 2.0F, 3.0F);
    Circle *circle_ptr = new Circle(7.0F, 0.0F, 3.0F);
    Point a_point = {3,4};
    Rectangle a_rect(-8.0, 2.0, 4.0, 6.0);
    Shape a_shape;

    INITIALIZE();
    a_circle.SetCenter(a_point);
    a_circle.SetRadius(2);
    a_circle.Draw();
    a_circle.Move(12.5, 5.0);
    a_circle.Draw();
    circle_ptr->Draw();
    a_rect.Draw();
    a_shape.SetCenter(a_point);
    a_shape.AddComponent(new Circle(-12,5,4));
    a_shape.AddComponent(new Rectangle(-18,-6,4,4));
    a_shape.Draw();
    PRINT();
    return 0;
  }

The very crude output would look something like:

  [crude character-cell plot of the circles and rectangles]

Object Oriented Design, analysis problem.

Proposed problem: Develop a simulation of traffic flow that can be used to determine travel times under varying traffic conditions.
Some requirements and desires:

1) Have a graphical view of the traffic situation in order to help identify problems and possibly identify solutions. (X Windows on Unix, MS Windows in Visual C++)
2) Have more than one type of vehicle, e.g. at least passenger car and truck.
3) Have intersections and traffic signals. The traffic signals need to have setable timers and sensors.
4) Have the ability to insert vehicles into the area of study based on specific requirements or based on mean and standard deviation of time of entry. (For each type of vehicle.)
5) Have the ability to define the road pattern and intersection type. (Intersections can be no traffic control, stop signs or traffic signals.)
6) The vehicles must be able to have a goal or driving instructions (a route). The default goal is to drive at the speed limit whenever possible while obeying all traffic rules.

Analysis of objects:

1) It seems a class 'Vehicle' is needed. Initialization with characteristics like weight, length, maximum acceleration, maximum deceleration and route instructions would provide the ability to get all of the required objects. Route instructions could be a list of travel and turn commands like 'go N blocks, turn W' where N counts intersections and W is left or right.
2) It seems a class 'Road' may be needed. A road object may have any number of vehicles on it and may have any number of intersections. A road object should be able to be initialized with a speed limit. A road object may have to accept vehicles from other road objects and hand off vehicles to other road objects. But further thought indicates...
3) A better building block may be a 'Lane'. A lane handles any number of vehicles going in one direction. A lane starts at an intersection (the entrance) and stops at an intersection (the exit). A lane has a length and a speed limit (and possibly a road condition that can affect acceleration). A lane has an X,Y grid coordinate for its entrance and exit.
4) It seems a class 'Intersection' is needed to join lanes or roads. It may be that an intersection needs to be composed of smaller classes.
5) A 'Simulation' class or object is needed for general material outside any other classes.
6) A display class or object or function or package is needed to provide for the graphical display.
7) A setup program is needed to initialize the road system and connectivity. This program would have overall control of timing and sequencing.

Miscellaneous notes:

Vehicles will follow a policy of staying at least 16 feet apart for each 10 feet per second of speed. Vehicles can accelerate at from 1/32 G to 1/3 G depending on type. (1/3 G is about 11 feet per second per second, or 0 to 88 feet per second (60 MPH) in 8 seconds.) Loaded trucks accelerate much more slowly.

Because the simulation uses discrete movement, the modeling is not as easy as it might be. The lead vehicle in a lane must move forward before the vehicle behind it. The ripple effect then moves the later vehicles.

In C++ the best data structure for an unknown number of objects that must be controlled is a linked list. An agent method will traverse the list and cause the objects to perform the "move" method in the right order and at the right time.

Implementation:

It was a design decision to allow many vehicles and to allow long running times. This decision led to having the vehicle class store needed data locally rather than "reach" outside the class for data. For example, when a vehicle is placed on a lane, it acquires the speed limit and other lane data into its local private variables. In this way each vehicle object is autonomous and can move and draw itself.

Each step of the simulation, the main program operates each intersection in a linked list of intersections. Each intersection, in turn, operates each outgoing lane. Each lane, in turn, moves each vehicle in the linked list of vehicles.
Timing is controlled by the main program, and there is a delay if all moving is finished before the time of the next discrete time step.

  // vehicle.h  for traffic simulation
  // includes class route for vehicle
  #ifndef VEHICLE_H_
  #define VEHICLE_H_
  #include "simulation.h"

  class route
  {
    public:
      route():intersection_count(0), moving(reset), next(0){}  // default last move
      route(int Aroute_count, direction Amoving, route * route_list);
      void add_route( int Aroute_count, direction Amoving);
      direction get_moving(void);
      int get_count(void);
      route * get_next(void);
    private:
      int intersection_count;
      direction moving;
      route *next;
  };

  // the vehicle class for an autonomous vehicle
  class vehicle
  {
    public:
      vehicle( float Alength, float Aweight, float Aaccel, float Adecell,
               float Aposition, float Aspeed, float Aspeed_limit,
               float Ax_base, float Ay_base, direction Amoving,
               float Asignal_pos, signal_state Asignal, vehicle *Anext_ptr);
      void move(void);
      vehicle *next(void);
      void new_next( vehicle *a_next );
      bool exiting( void );
      void set_new_lane_data(float Aspeed_limit, float Ax_base, float Ay_base,
                             direction Amoving, float Asignal_pos,
                             signal_state Asignal);
      void set_signal(signal_state Asignal);
      void draw(bool erase_only=false);
    private:
      int id;
      float length;
      float weight;
      float accel;
      float decell;
      float position;
      float position_last;
      float speed;
      float speed_limit;
      float time_last_move;
      float x_base;
      float y_base;
      direction moving;
      signal_state signal;
      float signal_pos;
      route * route_list;
      route * route_now;
      int route_count;
      vehicle *next_ptr;
  };  // end vehicle.h
  #endif // VEHICLE_H_

  // lane.h  for traffic simulation
  #ifndef LANE_H_
  #define LANE_H_
  #include "simulation.h"
  #include "vehicle.h"

  class lane
  {
    public:
      lane(void): id(0), position_x(0.0), position_y(0.0), speed_limit(0.0),
                  moving(north), length(0.0), signal(none), vehicle_list(0) {};
      lane(float Aposition_x, float Aposition_y, float Aspeed_limit,
           direction Amoving, float Alength);
      void operate(void);
      void vehicle_start(float Alength, float Aweight, float Aaccel,
                         float Adecell);
      bool vehicle_exiting( void );
      vehicle * vehicle_remove( void );
      void vehicle_add( vehicle *a_vehicle );
      void lane_end( float & x_lane_end, float & y_lane_end);
      void set_signal(signal_state Asignal);
      void draw(void);
    private:
      int id;
      float position_x;
      float position_y;
      float speed_limit;
      direction moving;
      float length;
      signal_state signal;
      vehicle *vehicle_list;
  };  // end lane.h
  #endif // LANE_H_

  // intersection.h  for traffic simulation
  #ifndef INTERSECTION_H_
  #define INTERSECTION_H_
  #include "simulation.h"
  #include "vehicle.h"
  #include "lane.h"

  class intersection
  {
    public:
      intersection(float Ax_location, float Ay_location,
                   intersection * Aintersection_list);
      void set_signal( intersection * Aintersection_list,
                       signal_state Anorth_south, signal_state Aeast_west);
      void operate(intersection * Aintersection_list);
      void creator( float Alength, float Aweight, float Aaccel, float Adecell,
                    direction Amoving);
      void connect_from( intersection * Aintersection_list);
      intersection * next(void);
      void draw(intersection * Aintersection_list);
    private:
      float x_location;
      float y_location;
      intersection * next_intersection;
      lane * lane_north;        // fed by
      lane * lane_from_south;
      lane * lane_east;         // fed by
      lane * lane_from_west;
      lane * lane_south;        // fed by
      lane * lane_from_north;
      lane * lane_west;         // fed by
      lane * lane_from_east;
      signal_state north_south;
      signal_state east_west;
  };  // end intersection.h
  #endif // INTERSECTION_H_

A somewhat working version of the complete program on Unix using X Windows is provided by these additional files:

  Makefile_traffic_sim
  traffic_sim.cc
  simulation.h
  simulation.cc
  vehicle.cc
  lane.cc
  intersection.cc
  x_plot_d.h
  x_plot_d.cc
  delay.h
  delay.c

Some strange code may be due to this being designed for multiple platforms where the plot routine and delay are different. Other strange code is just due to the author.
See the syllabus and the homework assignment page for exam details. Last updated 5/1/01
http://www.csee.umbc.edu/~squire/cs291_lect.shtml
I know this has been asked before, but I'm trying to learn Python and I don't want the answer handed to me in the form of completed code. I'm hoping someone can just point me in the right direction. I know how to do loops and functions and stuff, but I don't know how I should set up a check to determine if a number is prime or not. This might actually be a simple math question really. What math process do I use to find out if a number is a prime?

Here's how it could be broken down:

- Get the number num_to_test from the user.
- For each d between 2 and num_to_test-1, try to figure out whether d divides num_to_test.
- If some d divides num_to_test evenly, then num_to_test is not a prime.

Now, how to do this pythonically?

User input

With Python 3, all you have to do to request an integer input is a line like this:

    num_to_test = int(input('What\'s your number?'))

Of course, you want to be sure that the user enters a number; that's why you'll surround this with a try/except block and a while True loop, asking again and again until the answer is valid:

    while True:
        try:
            # request input
            if satisfies_your_conditions(num):
                break
            else:
                pass  # explain why not good
        except ValueError:
            pass  # print error message

Looping over the integers

First thing to know: you don't need to go further than the square root of num_to_test.

    sr = int(math.sqrt(num_to_test)) + 1

And you'll do the test for d in range(2, sr). Since we don't have a list of previously verified primes, we'll have to do it for all integers. To check whether d divides num_to_test, use the modulo operation:

    if num_to_test % d == 0:
        pass  # d does divide num_to_test
    else:
        pass  # d does not divide num_to_test

Ending

One cool feature is the for...else:

    for ...:
        # do your tests, with the possibility of breaking the loop
    else:
        # this part of the code will only be reached if the loop did not break

With this, you'll be able to set a boolean to True or False, and then print the result. You could use the ternary operator for this:

    print('some string' if is_prime else 'another string')

This is only a possibility of implementation among others.
    import math

    def main():
        while True:
            try:
                num_to_test = int(input('What\'s your number? '))
                if num_to_test > 1:
                    break
                else:
                    print('Must be an integer larger than 1. Try again.')
            except (NameError, ValueError):
                print('Not a number. Try again.')
        sr = int(math.sqrt(num_to_test)) + 1
        for d in range(2, sr):
            if num_to_test % d == 0:
                is_prime = False
                break
        else:
            is_prime = True
        print('%d is a prime number' % num_to_test if is_prime
              else '%d is not prime' % num_to_test)

    if __name__ == '__main__':
        main()
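Putting the pieces of the accepted answer together as a reusable function makes the trial-division idea easy to test. This is just one sketch; the function name is_prime and the explicit handling of numbers below 2 are my additions, not part of the original answer.

```python
import math

def is_prime(n):
    """Trial division up to sqrt(n); n is assumed to be an integer."""
    if n < 2:                # 0, 1 and negatives are not prime
        return False
    for d in range(2, int(math.sqrt(n)) + 1):
        if n % d == 0:       # d divides n evenly, so n is composite
            return False
    return True              # no divisor found up to sqrt(n)

print('7 is a prime number' if is_prime(7) else '7 is not prime')
# prints: 7 is a prime number
```

Note that for n = 2 and n = 3 the range is empty, so the loop body never runs and the function correctly returns True.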
https://codedump.io/share/Qbsbc7QUpNhz/1/what-is-the-process-to-find-if-a-number-is-prime
Gambit, June 20, 2011
New Orleans Guide to News and Entertainment
Gambit > bestofneworleans.com > June 21 > 2011

[advertisement] Lobster Night, $25, every Thursday night in June: fresh Maine 1-1/4 lb. lobster w/salad & side (lobsters are limited, reservations recommended). Fri, June 24: Lisa Lynn Trio, 9:30pm. Sat, June 25: The Legendary Luther Kent, 9:30pm. Pianist Sun, Mon & Tues, 7-10pm. Stop in for weekly summer specials & cool summer drinks! 830 Conti St. (in the Prince Conti Hotel), 504.586.0972, 800.699.7711. Dinner & music nightly; validated parking (at Iberville & Dauphine).

[advertisement] Traditional, contemporary and vintage hotel curtains. Any furnishings for your home, office, restaurant, hotel. C/F Canal Furniture: $99 table; 6 chairs & china cabinet, $1,500 (1 set only!). 3534 Toulouse St. (at Bayou St. John), Mid City, 504-482-6851. Monday 10am-6pm; Tues-Sat 10am-5pm. 504-442-5383.

Columns: Jeremy Alford, Clancy DuBos. Knowledge Is Power (pages 14-15).

Scuttlebutt

Quotes of the week: "These." ... "What the governor has done with this veto is repudiate his life's work." — Rep. Harold Ritchie, D-Bogalusa, commenting on Gov. Bobby Jindal's veto of the renewal of a 4-cents-a-pack cigarette tax.

Tara, New One: A Claiborne Parish teacher is trying to do what most pundits say is impossible: beat Bobby Jindal in the October gubernatorial election, and do it as a Democrat. By Walter Pierce. [photo caption] Haynesville Democrat Tara Hollis says she will run for governor if that's what it takes to get Bobby Jindal out of the Governor's Mansion.

Stage Fight
The bitter feud between Le Petit Theatre du Vieux Carre's board of governors and its support guild took on a nasty new twist June 15, when the guild announced it had obtained a temporary restraining order halting the sale of 60 percent of the financially challenged theater to restaurateur Dickie Brennan, who plans to open his fourth French Quarter restaurant in the building. (Continued on page 11.)

C'est What? Do you support Sen. Rob Marionneaux's bill to phase out Louisiana's individual and corporate income taxes? Yes: 35%. No: 47%. Depends on what's cut: 18%. Vote on "c'est what?" on bestofneworleans.com. This week's question: What's your preferred station for local TV news? (Page 10)

Bouquets: The New Orleans Rugby Football Club. This week's heroes and zeroes. (Page 9)

Walter Pierce is managing editor of The Independent in Lafayette.

[advertisement] 811 Conti Street, Mon-Sun 10am-6am.

The two sides will meet in court June 24, and the guild-backed group Save Le Petit will hold a rally June 21 at the Columns Hotel. "We urge all (Le Petit) subscribers from last season to attend, as well as all those who cherish the iconic landmark on the corner of Jackson Square," said guild president Jim Walpole. City Councilwoman Kristin Gisleson Palmer, in whose district the building stands, is scheduled to attend.

According to the theater's board, Le Petit currently has a $700,000 mortgage and "desperately" needs $1 million in repairs, both of which would be addressed immediately with the Brennan infusion of cash.

Meanwhile, Gambit has obtained copies of emails about another offer — this one made June 13, four days after the Brennan deal was announced. The proposal was from former Le Petit artistic director Gary Solomon Jr. and his father, banker Gary Solomon Sr. In the email, the Solomons offered to buy the building's mortgage, "holding it for five years without payments. At the end of the fifth year, the debt would be forgiven one-fifth of the outstanding amount for each year up to five more years, provided the entire Le Petit Theatre facility ... continues to operate as a theater for the community." The Solomons' offer also included "100 percent of the building in the theater's control" and a five-year, $1.25 million endowment for "operating costs."

In a letter the next day, board member Michael S. Mitchell said the board had "decided to reject [the offer] for a variety of reasons," including the fact that management of the theater was not addressed and the "angel" investors were not identified by name. He also noted the Solomon offer wouldn't pay off the mortgage for 10 years, while the Brennan deal would retire the debt immediately. Mitchell's letter also said the offer was made "at the last possible moment," concluding, "Frankly, we are disinclined to enter into business dealings with someone who has recently threatened to sue us." — Kevin Allman

Le Petit NORD?

At the June 9 press conference where the board of governors of Le Petit Theatre announced the impending sale of 60 percent of the theater ...

Asked about the particulars of the potential city deal, Ryan Berni, spokesman for Mayor Mitch Landrieu, wrote in an email, "The administration was approached by members of the Board at some point. We were interested in the possibility of using Le Petit for NORDC [New Orleans Recreation Development Commission] programming and to create a theater center." Berni clarified, "It was discussions with board members and not a formal presentation to the full board." (Currently NORD partners with the Crescent City Lights Youth Theater, which stages musicals at the NORD Ty Tracy Center at Gallier Hall.) Le Petit was one of the few French Quarter buildings that suffered major damage during Hurricane Katrina, leaving open the possibility that it might be eligible to use FEMA funds for some of the repairs.

Berni declined comment on questions regarding how much the city had proposed to pay for the building, and said it hadn't been formally decided where the money would have come from to buy and renovate it. On June 14, Landrieu acknowledged he may have to cut another $3.5 million from the city's operating budget this year. — Allman

Tax Repeal Death Rattle

A proposal to phase out the Louisiana income tax over a 10-year period starting in 2014 appears to be in its death rattle in the final days of this year's legislative session. Senate Bill 259 by Sen. Rob Marionneaux, D-Livonia, started out as a measure to cut income taxes and replace them with a combination of cuts and other revenue measures, but it has been amended repeatedly and now barely resembles its original text. The measure came out of the Senate with an amendment that turned the phase-out into a "study resolution," which rendered it meaningless. The bill's tax-reduction provisions were later restored in a House committee, which stripped off the Senate floor amendment and returned the bill to its original form. Last week, however, the full House amended the bill again — this time putting back the "study commission" provision — before deferring final action on it. It is probably dead for the session. Proponents and opponents of the bill chided each other for playing politics with the measure, which is a sure sign that neither side really wants to deal with the issue. — Clancy DuBos

Clarification

In last week's cover story ("Under Pressure," June 14, 2011), New Orleans Police Chief Ronal Serpas estimated that NOPD had lost 150 officers over the last year, and had his staff follow up with an email with the exact number. After the story ran, the NOPD contacted Gambit to say their initial email was incorrect. The correct figures, according to the NOPD: The department had 1,539 officers in May 2010, and 1,386 officers in May 2011 — a total loss of 153 officers.
[advertisement] Bucyrus 200, Saturday, June 25, 5:30pm. Toyota/SaveMart 350, Sunday, June 26, 3pm. $1.50 High Lifes, Mondays 12am-2am. Late night food. Bringing you quality, consistency and value since 1971. Featured on Diners, Drive-Ins & Dives. Chargrilled oysters daily. 504-523-8619.

[advertisement] Your business is the business that matters to us. George A. Mueller III, Attorney at Law (gam@chehardy.com): Taxation and Business Law. Chehardy Sherman's team of skilled attorneys is ready to manage a vast array of legal matters. One Galleria Boulevard, Suite 1100, Metairie, Louisiana 70001. Phone (504) 833-5600; fax (504) 833-8080; toll free 1 (855) 833-5600.

[advertisement] Yoga, also featuring a complete cardio & strength training center & personal training. 2917 Magazine Street, Suite 202, 896-2200. Special summer membership: call today.

News Views: What's in Your Bookbag?

Public schools can choose their own textbooks under a bill headed to the Senate. By Kandace Power Graves

The Louisiana Senate is expected to pass a ... It ... list. "It passed overwhelmingly in the full House; I'm sure it will pass in the full Senate," says Ian Binns, an LCFS member and assistant professor in Louisiana State University's Department of Educational Theory, Policy and Practice.

"We were critical about it being open season on the materials they can buy with state money and how they will oversee those resources," Binns says. "Currently our textbook adoption process ... makes sure [textbooks] not only address GLEs (Grade-Level Expectations), but that the information is also appropriate."

Erin Bendily, chief of departmental support for DOE, says the bill gives local school districts greater flexibility in selecting resources. "The schools could use any textbooks they want as long as they meet the minimum requirements," she says. The state would maintain its list of approved textbooks, but the books would be "recommended" instead of required, she says. The bill keeps in place the people who screen textbooks. "We may do random monitoring of school districts and review textbooks and inform the district they can't use the texts if they don't meet the standards," Bendily says.

... opening the door for creationism to be taught and evolution — a major tenet of biology — to be questioned. Textbook selection and approval, however, remains the purview of BESE and the DOE. ... lawmakers voting against it. Bendily says she doesn't believe opponents' claims that Hoffmann designed HB580 to sneak creationism into public classrooms.

"All the bill does is add some flexibility for the school board," says committee chair Sen. Ben Nevers, D-Bogalusa. "Some of our textbooks are a number of years old ... and there could be better or more modern textbooks school boards want to use. I didn't look at [the bill] as something that would hurt education in Louisiana; I think it could improve it."

The LCFS also says HB508 could allow "inappropriate understanding of other areas, not just science," Binns says. "How will BESE know that a parish or a teacher has chosen to use inappropriate materials?"

"Those teachers and school boards have too much to lose by teaching stuff that's off the wall," including lawsuits and low standardized test scores, LaFleur says.

"I hope it doesn't come to a lawsuit," Binns says. "I wish they would wake up and realize what they are playing with. It's not something small, it's a child's future."

[advertisement] Makes a great gift! Gift cards available at the Women's Center, 4241 Veterans Memorial Boulevard, Suite 100, Metairie, LA 70006. The massage is clear: Thursdays at the Women's Center. Therapeutic and Swedish massage services from Kathleen Corchiani, L.M.T.
[advertisement, continued] Therapeutic Massage: neck, back and shoulder pain; chronic headaches; insomnia; depression; plantar fasciitis. To prevent and alleviate pain, discomfort, muscle spasm and stress; to improve the functioning of the circulatory, lymphatic, muscular, skeletal and nervous systems. 30 minutes $75 / 60 minutes $100. Swedish Massage: focused on relaxing muscles by applying pressure to them against deeper muscles and bones, rubbing in the same direction as the flow of the blood, releasing toxins from the muscles. 30 minutes $50 / 60 minutes $75. Patient scheduling: 504-883-5999. Visit to schedule an appointment or to request more information.

[advertisement] Roses always make her smile. Order online. Roses cash & carry, $6.50/dz. 815 Focis Street (off Veterans), 837-6400. 2035 Metairie Road.

Jeremy Alford, The State of the State: Inside the Rails
How Gov. Bobby Jindal dampened the Legislature's recent bout of independence.

When your postman ... Gov. Bobby Jindal saw to that June 14, when he vetoed a bill to extend a 4-cents-a-pack cigarette tax ... April. (Examples: merging UNO and SUNO; selling off three state prisons; deep ... recently. ... House members who are term-limited are saying goodbye, even though some ...

[advertisement] Hurricane season. Insurance. New Orleans, 504-241-7510.

Clancy DuBos, Politics: Post-K Reforms Continue

Katrina was a terrible tragedy, but it also spurred citizens to get engaged and demand more of their elected officials.

[advertisement] 10-50% off entire store blow-out sale. We're having a sale! Jewelry, shoes, purses, pottery, dress tops, jeans, skirts, camis, coats, belts and hats. Apparel, shoes, jewelry, pottery. 714 Adams Street, Uptown (behind Starbucks at Maple), (504) 872-9230. Pass by after coffee, take a quick lunch break, leave work early. Open Monday-Saturday 10-6, Sunday 10-3.

[advertisement] Enjoy scrumptious food and refreshing cocktails in our cool, misted courtyard. Live music every night! Spirited happy hour 4-7pm Tues-Fri; jazz brunch 11am-3pm Sunday. Hours: 4pm-close Tues-Fri, 11am-close Sat-Sun. 437 Esplanade at Frenchmen, 504.252.4800. Mojitos has become an even 'cooler' place to go for dinner, Sunday jazz brunch, and happy hour.

[advertisement] Mexican & Cuban food, the best kept secret in New Orleans. Best fajitas in town! Puerco frito $10.50; ropa vieja $8.15. Come have lunch with me! Country Flame, 620 Iberville Street, 522.1138. Open every day 'til 8:30pm.

[advertisement] Hot attic? Spray-foam & blow-in insulation. Free estimates. $4,000 incentives. 504.914.0591, greenbeaninsulation.com. A New Orleans, LA company.

[advertisement] Plant sales & rentals. 1135 Press St. @ New Orleans; 2900 St. Claude; (504) 947-7554. Credit cards accepted.

Chatelain does not report directly to one branch of government over the other. In addition, her duties will not be the same as those of an inspector general. An IG is more of an independent auditor, whereas Chatelain will function more as an in-house ...

Maple Street Book Shop: Special Summer Events

Friday, June 24: Jeff Kinney. On Friday, June 24, 2011, Jeff Kinney will be at the Nims Fine Art Center on The Academy of the Sacred Heart's campus at 4301 St. Charles Ave. Doors open at 3:00 p.m., and you must have a bracelet, which can be picked up at Maple Street Book Shop, to attend. You can purchase the Wimpy Kid books, including the latest, The Wimpy Kid Do-It-Yourself Book (revised and expanded edition), at Maple Street Book Shop and onsite at the event, and you need a sales receipt from Maple Street Book Shop to get any books signed. People may bring one book from home, and Mr. Kinney will sign this additional book along with those purchased through Maple Street. Hope to see you there.

Saturday, June 25: Tomie dePaola, 2:15 to 3:15 p.m., author and illustrator of such children's favorites as Strega Nona and the newest, Let the Whole Earth Sing Praise.

Sunday, June 26: Richard Peck, 1:00 to 2:00 p.m., author of Three Quarters Dead and many other middle reader and young adult classics, will sign his books at Maple Street Book Shop.

Kevin Henkes & His Whole Host of Sunshine! Kevin Henkes is coming to visit Maple Street and to sign his two latest books, Little White Rabbit and Junonia. While I know we'll have food, Mr. Henkes may lead y'all through an art project and read a little of his work. 11:30 a.m.

Monday, June 27: David Unger. Join David Unger at 1:00 p.m. at the New Orleans Public Library, 219 Loyola, for a discussion and signing of his book Price of Escape.

Saturday, June 25, 3:00-4:30 p.m.: Michael Brown. Former FEMA director Michael D. Brown will be with us. His new book (co-authored with Ted Schwartz), Deadly Indifference: The Perfect (Political) Storm — Hurricane Katrina, the Bush White House, and Beyond, presents Brown's side of things without pulling any punches for himself or others. Please join us for this enlightening conversation.

Monday, June 27, 1:00 p.m.: N.H. Senzai and Frances O'Roark Dowell. N.H. Senzai, author of Shooting Kabul, and Frances O'Roark Dowell, author of Ten Miles Past Normal and The Secret Language of Girls, will be with us.

New, used, & rare books. 7523-7529 Maple St. 504.866.4916 (new), 504.866.7059 (used). Fight the Stupids. www.maplestreetbookshop.com

[advertisement] Voted best BBQ by Times-Picayune readers. The pit master is in. Low & slow, real pit BBQ. Catering menu for your next special event; carry out & delivery available. BBQ brisket, ribs, smoked chicken, smoked sausage, pulled pork. Sides: coleslaw, potato salad, mac n cheese, chili, veg of the day, baked beans, cornbread, ShaneMade biscuits. BBQ sauce available by the gallon. Call to place your order today! 1821 Hickory Ave, Harahan, LA. Open 7 days a week. (504) 287-4581, www.fathengrill.com
M A P L E S T R E E T B O O K S H O P.C O M SIDES Coleslaw • Potato Salad Mac N Cheese • Chili Veg of the Day Baked Beans • Cornbread ShaneMade Biscuits BBQ Sauce available by the gallon CALL TO PLACE YOUR ORDER TODAY! 1821 HICKORY AVE, HARAHAN, LA O P E N 7 D AY S A W E E K ( 5 0 4 ) 2 8 7 - 4 5 8 1 • w w w. f a t h e n g r i l l . c o m CoVeR StorY What the media say about GIVERS AmericAn indiefolk style moving from its bright, charismatic rhythms to its slow, tranquil moments.” “… In Light, their official debut, fortunately transcends their transparent inspirations the oldfashioned).” page 18 : ‘Whateverwinning. page 23 Gambit > bestofneworleans.com > JUne 21 > 2011 pASte photo By Zack Smith 21 reen matters ++++++++++++++++++++++++++++ ++++++++++++++++++++++++++++ ++++++++++++++++++++++++++++ + + +bringing +++++++++++++ ++++++++++++ home + + + + + + + +sustainability ++++++++ ++++++++++++ ++++++++++++++++++++++++++++ ++++++++++++++++++++++++++++ 26 27 greeniverse 29 more micro greens greenlight micro reens +++++++++++ +++++++++++ +++++++++++ +++++++++++ +++++++++++ Carpoolers Unite loCaVore rising RestauRants, bloggeRs and goveRnment officials get in on the inauguRal eat local challenge. Eat Local Challenge participants receive a discount on Hollygrove Market & Farm’s produce boxes, featuring locally grown fruits and vegetables and other goods. b y a l e x W o o d Wa R d. To inspire fellow challengers, participants in the blogosphere are sharing recipes, or in the case of Tumblr blogger Caroline Heylman (nolavore.tumblr.com), uploading a photo of every meal for every day of the challenge. Aryanna Gamble (. com)mile, page 26 VoiCes needed in Wetlands restoration The U.S. Geological Survey issued a report this month showing Louisiana’s coastline is losing its wetlands at a rate of one football field every hour. The Urban Conservancy and the Delta Discussion Group are addressing wetlands loss — and giving residents a voice to help restore it. 
The organizations host "Getting it Done Together: The Public's Role in Shaping Our Coast's Future" from 5 p.m. to 7:30 p.m. Thursday, June 23. The forum addresses public opportunities for shaping the future of Louisiana's coastline and wetlands restoration. The conference outlines the federal and state processes allowing public input, and how to navigate the often byzantine, long-term planning process for coastal development.
Michele Deshotels and Leslie Suazo from the Louisiana Office of Coastal Protection and Restoration will present Louisiana's 2012 Master Plan, and Mark Davis with Tulane's Institute on Water Resources Law & Policy will discuss the Natural Resource Damage Assessment, the critical study, performed following the Gulf oil disaster, that could determine the coast's future protection (and current health). Amanda Moore with the National Wildlife Federation will discuss legislation for coastal restoration projects, and the Gulf Restoration Network's Cynthia Sarthou will address the citizen advisory council representing the Gulf of Mexico Ecosystem Restoration Task Force. Scott P. Milroy will detail his research with the University of Southern Mississippi on polycyclic aromatic hydrocarbons (PAH), the carcinogenic contaminants found in oil and gas waste. Milroy's research analyzes PAH presence in the Mississippi Sound and its impact on seafood safety.
The free program is held at Longue Vue House & Gardens (7 Bamboo Road, 488-5488). Registration is required; visit for details.

GREENIVERSE
GREEN HOMES: Broadmoor breaks ground on sustainable home designs
MARRIAGE OF WINE AND FOOD
CHEF DE CUISINE BRETT DUFFEE AND CHEF SUSAN SPICER PREPARE A FOUR COURSE MENU, $88

… availability issues. "They'll ask questions, 'Why aren't you carrying these products that are Louisiana made?' And (retailers) respond, 'We'll see what we can do,'" Stafford says. "There are 300 marketers out there looking for Louisiana products, and then they have the blog to discuss with the retailer, 'I …

Broadmoor Development Corporation broke ground last week on the first of its four planned LEED Platinum-certified homes. The U.S. Green Building Council's 2010 Natural Talent Design Competition selected the home's sustainable designs, created by two student-led groups and two emerging professional teams. The four groups — representing Connecticut, Hawaii, New York City and Pittsburgh — maintained the neighborhood's aesthetic and added innovative highlights like an "outdoor living room," storm-water collection systems and breakthrough drainage and ventilation features. Wuijoon Ha, the only single-person design team, conceived the "E.A.S.Y. House," featuring operable skylights, a green roof and a wheelchair lift.
The Salvation Army's EnviRenew initiative, which aims to rebuild communities using green building practices, is helping fund the project along with the New Orleans Redevelopment Authority and New Orleans Metropolitan Area Realtors. First NBC Bank is financing the homes; each is estimated to cost $135,000 (and eligible for up to $75,000 in grants). Landis Construction, Eskew+Dumez+Ripple and Green Coast Enterprises also are helping build the homes.
The homes join Broadmoor's Andrew H. Wilson Charter School in the neighborhood's commitment to getting greener buildings in as rebuilding continues. The school, completed in January 2010, received $29 million in renovations and construction.
The school features a 12,000-gallon rainwater-collecting cistern, a solar panel array that heats 90 percent of the kitchen's water, and other energy efficiency measures.

BRAXTON'S RESTAURANT
Is the summer too hot for you? Cool off with our Vietnamese fresh SPRING ROLLS & VERMICELLI SALAD to fill you up. Also, our CHINESE & VEGETARIAN dishes will cure that summertime hunger.
Introducing: all-you-can-eat lunch & dinner buffet, $7.95, Monday-Friday | VIETNAMESE FRESH SPRING ROLLS $6.95

WHAT TO KNOW BEFORE YOU GO
MUSIC 34 | FILM 38 | ART 41 | STAGE 45 | EVENTS 46 | CUISINE 51

JUN 22: MY LIFE WITH THE THRILL KILL KULT
The many (and mostly former) members of My Life With The Thrill Kill Kult must wonder what they have to do to offend someone and get some press. Since the early hit "A Daisy Chain 4 Satan," the group's lyrics, videos, CD and album covers and release titles have gratuitously embraced everything racy, vice-y and degenerate. But its sleazy-funky post-industrial lounge-core grooves have remained cheap thrills for a mostly cult following. Twitch the Ripper and 16 Volt open. Tickets $15. 9 p.m. Wednesday. Howlin' Wolf, 907 S. Peters St., 529-5844.

THE DEFINITION OF BOUNCE — CD release and book signing
NOON-2 P.M. SATURDAY: PEACHES RECORDS, 408 N. PETERS ST., 282-3322; WWW.PEACHESRECORDSNEWORLEANS.COM
3 P.M. SATURDAY: NUTHIN BUT FIRE RECORDS, 1840 N. CLAIBORNE AVE., 940-5680
9 P.M. SATURDAY: BEHIND THE 8 BALL CLUB, 3715 TCHOUPITOULAS ST., 897-3415
[Photo: Rapper 10th Ward Buck (center) at a block party with Sissy Nobby (on ladder) and Big Freedia. PHOTO BY JORDAN BLANTON]
10TH WARD BUCK TALKS ABOUT BOUNCE — BY WILL COVIELLO
With "What …

JUN 25: GENERATIONALS WITH GIANT CLOUD, EMPRESS HOTEL AND AU RAS AU RAS
Harmonizing pop quintet Giant Cloud seemed primed to pick up the defunct Peekers' dropped baton — until it, too, abruptly broke up late last year, leaving a nearly finished Park the Van debut in the cutting room. This long-scheduled show will either be a road to reconciliation or a one-off reunion. Empress Hotel and Au Ras Au Ras open; Generationals headlines. Tickets $11 in advance, $13 at the door. 8:45 p.m. Saturday. Tipitina's, 501 Napoleon Ave., 895-8477;

JUN 23: GILBERTO SANTA ROSA … JUN: MAN OF LA MANCHA …

MUSIC

Showcasing Local Music
MON 6/20 Papa Grows Funk
TUE 6/21 Rebirth Brass Band
WED 6/22 Roy Jay Band
THU 6/23 The Trio featuring Johnny V & Special Guests
FRI 6/24 Mia Borders
SAT 6/25 Feed the Kitty
SUN 6/26 Joe Krown Trio w/ Walter "Wolfman" Washington feat. Russell Batiste
New Orleans' Best Every Night! | 8316 Oak Street · New Orleans 70118 | (504) 866-9359

LISTINGS

OLD COFFEE POT RESTAURANT — Gypsy Elise & Ryan Way, 7
OLD OPERA HOUSE — Bonoffs, 4; Vibe, 8:30
PALM COURT JAZZ CAFE — Duke Heitger & Tim Laughlin feat. Crescent City Joymakers, 7
PAVILION OF THE TWO SISTERS — Thursdays at Twilight feat. Pfister Sisters, 6
PRESERVATION HALL — Tornado Brass Band feat.
Darryl Adams, 8 PRIME EXAMPLE — Philip Manuel, 8 & 10 RIVERSHACK TAVERN — Brent George, 7 ROCK ’N’ BOWL — Chris Ardoin, 8:30 SNUG HARBOR JAZZ BISTRO — Treme Brass Band CD release, 8 & 10 SPOTTED CAT — Brett Richardson, 4; Miss Sophie Lee, 6; New Orleans Moonshiners, 10 THREE MUSES — Washboard Rodeo, 7:30 TIPITINA’S — Righteous Buddha, Easy Company, Yo Jimbo, 9 VAUGHAN’S — Kermit Ruffins & the Barbecue Swingers, 8:30 WINDSOR COURT HOTEL (POLO CLUB LOUNGE) — Zaza, 6 Friday 24 BABYLON LOUNGE — Cape Of The Matador, Snake Oiler, 10 Gambit > bestofneworleans.com > JUne 21 > 2011 BANKS STREET BAR — Granade Man, Slack Assister, 9 36 haPPy hour $1.50 PBR PInts $2 gAme RentAls $3 ImPortS $2 monDAYs gAme RentAls • PBR PInts jameSon ShotS FRIDAY • 6/24 • 9 pm ZynC & PrytanIa $5 sAtuRDAY • 6/25 • 10pm dj KodIaK free EVERY SUNDAY • 8pm-2Am KaraoKe 1/2lB. tuRBo BuRgeR wIth FRIes $6 4133 S. Carrollton ave ( @ T u l a n e ) 301-0938 S H a M r O C K Pa r T Y. C O M preview OLD POINT BAR — Blues Frenzy, 7 BALCONY MUSIC CLUB (BMC) — Moonshine & Caroline, 7 • 3-6pm DAILY • Flavored Kisses, 10 BAYOU BAR AT THE PONTCHARTRAIN HOTEL — Armand St. Martin, 7; Philip Melancon, 8 BLUE NILE — Mykia Jovan & Jason Butler, 8; Kermit Ruffins & the Barbecue Swingers, 11 BMC — Moonshine & Caroline, 7; Soul Project, 10; One Mind Brass Band, 1 a.m. BOMBAY CLUB — Monty Banks, 6; Lisa Lynn & Trio, 9:30 BOOMTOWN CASINO — Brandon Foret, 9 CARROLLTON STATION — The Tanglers, Darla & the Hip Drops, 9:30 CHECK POINT CHARLIE — El Camino Royales, Rotten Cores, Rok Boms, 10 CHICKIE WAH WAH — Pfister Sisters, 5:30; Paul Sanchez, 8; Twangobangorama, 10 COLUMNS HOTEL — Alex Bachari Trio, 5 DAVENPORT LOUNGE — Jeremy Davenport, 9 D.B.A. 
— Meschiya Lake & the Little Big Horns, 6; The Iguanas, 10 DOS JEFES UPTOWN CIGAR BAR — Eric Traub Trio, 10 THE EMBERS “ORIGINAL” BOURBON HOUSE — Curtis Binder, 6 FELIPE’S TAQUERIA — Fredy Omar con su Banda, 10 FRENCH MARKET — Sweet Jones, 4; Flow Tribe, 5 BAYOU BAR AT THE PONTCHARTRAIN HOTEL — Armand St. Martin, 7; Philip Melancon, 8 Melting Potpourri After performing and directing for 63 years, 86-year-old trombonist Milton Bush leads his New Orleans Trombone Choir in a set of patriotic music at Trinity Episcopal Church’s annual Patriotic Music Festival. His ensemble is fronted by anywhere from seven to 10 trombones, and the group will play “New York, New York,” “Stardust” and a medley of music by George M. Cohan at the festival. The band joins the typically eclectic lineup organized by music ministry director Albinas Prizgintas for its Independence Day celebration. Other performers include the New Orleans Navy Band (pictured), the National World War II Museum’s Victory Belles, Prizgintas, who will perform music by John Philip Sousa and other American composers, and others. The Trombone Choir also will offer a crowd pleaser, Bush says. “We usually finish with (‘When the Saints go Marching In’), which is New Orleans patriotic.” Free admission. — Marta Jewson JUN 26 Patriotic Music Festival 3 p.m.-6 p.m. Sunday Trinity Episcopal Church, 1329 Jackson Ave., 670-2520; HERMES BAR — Sasha Masakowski & Sidewalk Strutters, 9:30 & 11 HEY! CAFE — Spraynard, The Lollies, 8 HI-HO LOUNGE — The Great In Between, Thomas Jefferson, 10 THE HOOKAH — Gravity A, Earphunk, 9 HOUSE OF BLUES — Zoso, 9 HOWLIN’ WOLF (THE DEN) — All I Am, Divebomb, 10 IRVIN MAYFIELD’S JAZZ PLAYHOUSE — Joe Krown, 8; Little Freddie King, 8; Burlesque Ballroom feat. Linnzi Zaorski, 12 a.m. 
JIMMY BUFFETT’S MARGARITAVILLE CAFE — Colin Lake, 3; Irving Bannister’s All-Stars, 6 & 9 JUJU BAG CAFE AND BARBER SALON — Michaela Harrison, Todd Duke, 7:30 LE BON TEMPS ROULE — Dave Jordan & the Neighborhood Improvement Association, 11 Johnny Burke, 9 PALM COURT JAZZ CAFE — Lucien Barbarin & Palm Court Jazz Band, 7 PELICAN CLUB — Sanford Hinderlie, 7 THE PERFECT FIT BAR & GRILL — Rechelle, Regeneration, 5:30 PRESERVATION HALL — Preservation Hall Jazz Masters feat. Leroy Jones, 8 RIVERSHACK TAVERN — Refried Confuzion, 9:30 ROCK ’N’ BOWL — Beth McKee feat. Barbara Menendez & the Help, 9:30 SHAMROCK BAR — Prytania, Zinc, 9 SIBERIA — Crime Wave, Pallbearers, Die Rotzz, Zero Progress, She’s Still Dead, 10 SNUG HARBOR JAZZ BISTRO — Ellis Marsalis Quartet, 8 & 10 THE MAISON — Nasimiyu, 10 SPOTTED CAT — Brett Richardson, 4; Washboard Chaz Blues Trio, 6:30; New Orleans Cotton Mouth Kings, 10 MOJITOS RUM BAR & GRILL — Jerry Jumonville, 4; Alex Bosworth, 7; Fredy Omar con su Banda, 10:30 TIPITINA’S — Flow Tribe, To Be Continued Brass Band, 10 MAPLE LEAF BAR — Mia Borders, 10 NEW ORLEANS JAZZ NATIONAL HISTORICAL PARK — Richard Scott & Stephen Dale, 2 NEW ORLEANS MUSEUM OF ART — Friendly Travelers, 5:30 OAK — Reed Alleman, 6; Christina Perez, 10 OLD COFFEE POT RESTAURANT — Gypsy Elise & Ryan Way, 7 OLD OPERA HOUSE — Bonoffs, 1; Vibe, 8:30 OLD POINT BAR — The Larry Hall Band, 9:30 ONE EYED JACKS — James McMurtry, THREE MUSES — Glen David Andrews, 10 TOMMY’S WINE BAR — Tommy’s Latin Jazz Band feat. Matthew Shilling, 9 VOILÀ — Mario Abney Quartet, 5 WINDSOR COURT HOTEL (POLO CLUB LOUNGE) — Zaza, 6; Anais St. John, 9 Saturday 25 BLUE NILE — Washboard Chaz Blues Trio, 7; Johnny Sketch & the Dirty Notes, 10 BMC — New Orleans Jazz Series, 3; Jayna Morgan & the Sazerac Sunrise Jazz Band, 6:30; Gypsy Elise & Ryan Way, 8; Eudora Evans & Deep Soul, 9:30; Ashton & the Big Easy Brawlers Brass Band, 12:30 a.m. 
BOMBAY CLUB — Monty Banks, 6; Legendary Luther Kent & His Quartet, 9:30 BOOMTOWN CASINO — Burgundy, 9 BUFFA’S LOUNGE — Royal Rounders, 8 CAFE NEGRIL — Jamey St. Pierre & the Honeycreepers, 7 CARROLLTON STATION — Loaded Dice, Joshua Richoux, 9 CHECK POINT CHARLIE — Blues Frenzy, 7; Peoples Blues Richmond, 11 CHICKIE WAH WAH — Kelcy Mae Band, 9 COLUMNS HOTEL — Andy Rogers & guest, 9 THE CYPRESS — Monks Of Makumba, Clement Brothers, Liquid Peace Revolution, 7 DAVENPORT LOUNGE — Jeremy Davenport, 9 D.B.A. — Linnzi Zaorski, 7; Rotary Downs, 11 DECKBAR & GRILLE — Miche & MixMavens, 8 DOS JEFES UPTOWN CIGAR BAR — The Roebucks, 10 HERMES BAR — Paul Sanchez, 9:30 & 11 HI-HO LOUNGE — Joey Allcorn, The Unnaturals, 10 HOUSE OF BLUES — The Psychedelic Furs, 8 HOWLIN’ WOLF NORTHSHORE — For Karma, 10 HOWLIN’ WOLF (THE DEN) — Diamond D, Lyrikill and others, 10 IRVIN MAYFIELD’S JAZZ PLAYHOUSE — Joe Krown Swing Band, 8; Irvin Mayfield’s NOJO Jam, 12 a.m. JIMMY BUFFETT’S MARGARITAVILLE CAFE — Joe Bennett, 3; Irving Bannister’s All-Stars, 6 & 9 LE BON TEMPS ROULE — Dave Reis, 7; Ernie Vincent & the Top Notes, 11 LOUISIANA MUSIC FACTORY — Joyful, USA, 2; Sasha Masakowski, 3 THE MAISON — Josh Reppel, 5; Smoking Time Jazz Club, 10; Yojimbo, 12 a.m. MAPLE LEAF BAR — Feed the Kitty, 10 MOJITOS RUM BAR & GRILL — Kristina Morales, 4; Charley & the SoulaBillySwampBoogie Band, 7; Dana Abbot Band, 10:30; the Mumbles, 12:30 a.m. MULATE’S CAJUN RESTAURANT — Bayou DeVille, 7 APPLE BARREL — Peter Orr, 7 NEW ORLEANS JAZZ NATIONAL HISTORICAL PARK — Dr. 
Jazz & the New Orleans Sounds, 2
BANKS STREET BAR — Gravy
OLD OPERA HOUSE — Bonoffs, 1; Vibe, 8:30
BABYLON LOUNGE — 4 Mag Nitrous, No Room For Saints, Large Marge, 10
OAK — Glen David Andrews, 9

A ROOM WITH A VIEW
Listings editor: Lauren LaBorde | listingsedit@gambitweekly.com | FAX: 483-3116 | Deadline: noon Monday | Submissions edited for space

NOW SHOWING
THE ART OF GETTING BY (PG-13) — A lonely teen makes it to his senior year of high school without doing a day of work, and he meets a popular girl who finds in him a kindred spirit. AMC Palace 16, AMC Palace 20
DEEP SEA (NR) — Audiences experience the depths of the ocean. Entergy IMAX
FAST FIVE (PG-13) — Vin Diesel and Dwayne Johnson star in the latest installment of the Fast and the Furious franchise. AMC Palace 20, Hollywood 14
GRAND CANYON: RIVER AT RISK (NR) — Robert Redford narrates a 15-day river-rafting trip that highlights the beauty of the Colorado River. Entergy IMAX
JUMPING THE BROOM (NR) — Worlds collide when two African-American families from disparate socioeconomic backgrounds get together for a wedding in Martha's Vineyard. AMC Palace 16, AMC Palace 20
KUNG FU PANDA 2 (PG) — The … AMC Palace 10, AMC Palace 12, AMC Palace 16, AMC Palace 20, Grand, Hollywood 9, Hollywood 14

[Bad Teacher movie ad: Columbia Pictures presents a Mosaic production; music by Michael Andrew; written by Gene Stupnitsky & Lee Eisenberg; directed by Jake Kasdan. STARTS FRIDAY, JUNE 24 — CHECK THEATERS AND SHOWTIMES FOR LOCAL LISTINGS]

GREEN LANTERN (PG-13) — In the DC Comics adaptation that was filmed in New Orleans, a hot-shot test pilot must maintain peace in the universe using a mystical green ring.
AMC Palace 10, AMC Palace 12, AMC Palace 16, AMC Palace 20, Chalmette Movies, Grand, Hollywood 9, Hollywood 14, Prytania, Prytania JUDY MOODY AND THE NOT BUMMER SUMMER (PG) — The book series by Megan McDonald gets a big-screen adaptation. AMC Palace 10, AMC Palace 12, AMC Palace 16, AMC Palace 20, Grand, Hollywood 9, Hollywood 14 16, AMC Palace 20, Canal Place, Grand MR. POPPER’S PENGUINS (PG) — Jim Carrey plays Mr. Popper, a business man whose world is turned upside down when six penguins turn his swanky New York apartment into a snowy winter wonderland. AMC Palace 10, AMC Palace 12, AMC Palace 16, AMC Palace 20, Chalmette Movies, Grand, Hollywood 9, Hollywood, Grand, Hollywood 9, X-MEN: FIRST CLASS (PG-13) — The prequel tells the origin story of the Marvel Comics supergroup. AMC Palace 10, AMC Palace 12, AMC Palace 16, AMC Palace 20, Chalmette Movies, Grand, Hollywood 9, Hollywood 14 OPENING FRIDAY BAD TEACHER (R) — Cameron Diaz plays a foul-mouthed, FILM LISTINGS gold-digging seventh-grade teacher. CARS 2 (PG-13) — The Pixar sequel finds its characters competing in an international race. SPECIAL SCREENINGS BLANK CITY (NR) — Celine Danhier’s documentary tells the story of renegade filmmakers who emerged from the economically bankrupt and dangerous New York of the 1970s and ’80s. Tickets $7 general admission, $6 students and seniors, $5 members. 9:15 p.m. TuesdayThursday, Zeitgeist MultiDisciplinary Arts Center, 1618 Oretha Castle Haley Blvd., 827-5858;. net BRIT WIT — The Big Top screens British comedies every week. 7 p.m. Tuesday, 3 Ring Circus’ The Big Top Gallery, 1638 Clio St., 5692700; CASABLANCA (NR) — The 1942 drama follows an American expatriate who meets a former lover in Africa during the early days of World War II. Tickets $5.50. Noon Saturday-Sunday and June 29, Prytania Theatre, 5339 Prytania St., 891-2787; THE FILMS OF HELEN HILL — The American Library Association and friends of the late filmmaker present a selection of her works. 
Tickets $15 in advance, $17 at the door. Proceeds will benefit the 2012 Helen Hill Award and the Francis Pop Education Fund. 7 p.m. hors d'oeuvres and cash bar, 8 p.m. to 9:30 p.m. screening. Sunday, Zeitgeist Multi-Disciplinary Arts Center, 1618 Oretha Castle Haley Blvd., 827-5858; www.zeitgeistinc.net
FROM BRITAIN WITH LOVE — The Film Society of Lincoln Center, UK Film Council and Emerging Pictures present the touring showcase of British films. Films include Toast, Africa United, A Boy Called Dad, In Our Name and Third Star. Visit for the full schedule. Tickets $7 general admission, $6 students and seniors, $5 members, $20 series pass (includes all five films). Tuesday-Thursday, Zeitgeist Multi-Disciplinary Arts Center, 1618 Oretha Castle Haley Blvd., 827-5858
THE LORD OF THE RINGS (PG-13) — Local theaters (AMC Palace 20, AMC Palace 16, Hollywood 14) screen the Academy Award-winning trilogy in three parts. 7 p.m. Tuesday and June 28. PAGE 40

THURSDAYS AT TWILIGHT — Garden Concert Series
THIS WEEK'S PERFORMANCE, THURSDAY, JUNE 23: Pfister Sisters — New Orleans swing featuring vocal jazz harmony.
Adults: $8 / Children 5-12: $3 / Children 4 & Under: FREE
Mint juleps and other refreshments available for purchase. For more information call (504) 483-9488.

GAMBIT AND THE BRIDGE LOUNGE present THE PERFECT Happy Hour
THURSDAY, JUNE 23RD, 4:00 til 7:00pm
$6 PERFECT VODKA Cocktails | $7 PERFECT VODKA Martinis | PLUS PERFECT VODKA SAMPLES + DOOR PRIZES

"Since 1969" | CELEBRATE SUMMER COUPON
50% OFF cut flowers, stock colors | EXPIRES 7/1/11 | CASH & CARRY ONLY | NOT VALID W/ ANY OTHER COUPONS. COUPON MUST BE PRESENT AT TIME OF PURCHASE.
METAIRIE 750 MARTIN BEHRMAN AVE (504) 833-3716 | COVINGTON 1027 VILLAGE WALK (985) 809-9101 | VISIT US ON

FILM (PAGE 38)
ON TIME — The center premieres a series of locally produced short films. Tickets $7 general admission, $6 students and seniors, $5 members. 7:30 p.m.
Saturday, Zeitgeist MultiDisciplinary Arts Center, 1618 Oretha Castle Haley Blvd., 827-5858; www. zeitgeistinc.net PSYCHO (NR) — Alfred Hitchcock’s 1960 thriller follows an embezzler who is hiding at the eerie Bates Motel. Free admission. 8 p.m. Monday, La Divina Gelateria, 621 St. Peter St., 302-2692; SINGIN’ IN THE RAIN (NR) — Gene Kelly’s 1952 movie-musical follows a silent film production company as they make a difficult transition to talkies. Tickets $5.50. Noon Wednesday, Prytania Theatre, 5339 Prytania St., 891-2787; THE TERMINATOR (R) — Arnold Schwarzenegger stars in the 1984 action flick about an unstoppable, human looking cyborg sent on a kill mission. Tickets $8. Midnight Saturday-Sunday, Prytania Theatre, 5339 Prytania St., 891-2787; !WOMEN ART REVOLUTION (NR) — Lynn Hershman Leeson’s film explores the “secret history” of feminist art through conversations, observations, archival footage and works of artists, historians, curators and critics. Tickets $7 general admission, $6 students and seniors, $5 members. 7:30 p.m. TuesdayThursday, Zeitgeist Multi-Disciplinary Arts Center, 1618 Oretha Castle Haley Blvd., 827-5858;. net687231; Prytania, 891-2787; Solomon Victory Theater, National World War II Museum, 527-6012 Compiled by Lauren LaBorde 10% OFF with this ad ART LISTINGS PAGE 41 Smith, Jack Beech, Harriet Blum, Kevin Roberts and others, ongoing. BERTA’S AND MINA’S ANTIQUITIES GALLERY. 4138 Magazine St., 895-6201 — “Louisiana! United We Stand to Save Our Wetlands,” works by Nilo and Mina Lanzas; works by Clementine Hunter, Noel Rockmore and others; all ongoing. “comfort food incarnate” Happy Hour Food and Drink Specials from 5-6:30pm 200 Julia St • 504-304-6318 BONJOUR GALLERY & MARKETPLACE. 421 N. Columbia St., Covington, (985) 635-7572 — “Classic Cars,” paintings by Nancy Lowentritt, through June. tropical isle® HOME OF THE Hand Grenade® -Sold Only At- 738 Toulouse St. • 523-5530 VISIT OUR WEBSITE Mitchell, ongoing. BYRDIE’S GALLERY. 2422-A St. 
Claude Ave., — Artfully Aware exhibition, through July 5. 435, 600, 610, 721, 727 CALICHE & PAO GALLERY. 312 Royal St., 588-2846 — Oil New Orleans’ Most Powerful Drink! CALLAN FINE ART. 240 Chartres St., 524-0025; www. callanfineart.com — Works Bourbon St. 3 full bars • 10:30-til BRYANT GALLERIES. 316 Royal St., 525-5584; — Paintings by Dean Live Entertainment Nightly paintings by Caliche and Pao, ongoing. by Eugene de Blass, Louis Valtat and other artists of the Barbizon, Impressionist and Post-Impressionist schools, ongoing. CARDINAL GALLERY. 541 Bourbon St., 522-3227 — Exhibition serving new orleans' favorites Gambit > bestofneworleans.com > JUne 21 > 2011 Po-Boys, Pizzas & Plates 42 including Seafood Muffeletas, Italian Meatballs, Veal Marsala, Mirliton Casserole, Fettucine Alfredo, Grilled Chicken or Grilled Shrimp Salad, Gumbo and more. 3939 Veterans • 885-3416 (between Cleary Ave & Clearview) Mon-Tues 11-3 • Wed-Thurs 11-7:30 Fri 11-8:30 • Sat 11-8:00 REVOLUTION at the Green Goddess! There’s a revolutionary air sweeping the world. Whether from farm to table, or by shaking shoes at corrupt dictators, people are dedicated to making peaceful changes. The Green Goddess salutes all these revolutionaries! Together, we share the dreams, blood, sweat, and fierce determination to make it happen, NOW! LONG LIVE THE REVOLUTION The Green Goddess 307 Exchange Alley in the French Quarter- twined,” paintings by Karen Stastny, through Saturday. COLLECTIVE WORLD ART COMMUNITY. Poydras Center, 650 Poydras St., 339-5237; www. collectiveworldartcommunity. com — Paintings from the Blue Series by Joseph Pearson, ongoing. D.O.C.S. 709 Camp St., 524-3936 — “So Much Art, So Little Time, Again,” exhibition of work by gallery artists from the past year, through Aug. 4. DU MOIS GALLERY. 4921 Freret St., 818-6032 — “Cold Drink” printmaking invitational, through Aug. 6. DUTCH ALLEY ARTIST’S CO-OP GALLERY. 912 N. Peters St., 4129220;. 
com — Works by New Orleans-a-Night,” installation by Hannah Chalew, ongoing. FIELDING GALLERY. 525 E. Boston St., Covington, (985) 377-22; www. fredrickguessstudio.com — — Photography by Christopher Porche West, ongoing. GALERIE ROYALE. 3648 Magazine St., 894-1588; www. galerieroyale.com — “Expres- featuring works by nine gallery artists, through July 9. ISABELLA’S GALLERY. 3331 Severn Ave., Suite 105, Metairie, 7793202; — Hand-blown glass;. com — “Rhythm on the River,” paintings by Derenda Keating, through June. — sions of Me,” mixed media on canvas by Kim Albrecht, through July 4. “Wrong Sounding Stories,” paintings by Adam Mysock; “Eternal Moment,” drawings by Rieko Fujinami, through June. GALLERIA BELLA. 319 Royal St., 581-5881 — Works by gallery artists, ongoing. JULIE NEILL DESIGNS. 3908 Magazine St., 899-4201; www. julieneill.com — “Facade,” GALLERY BIENVENU. 518 Julia St., 525-0518; — “My Pinocchio Syndrome for Abigail ... Ten Years Later. This Aint’t Disney Jeff,” mixed media by Blake Boyd, through July 23. THE GARDEN DISTRICT GALLERY. 1332 Washington Ave., 891-3032; — “Seeing Music,” a group ex- hibition photographs by Lesley Wells, ongoing. KAKO GALLERY. 536 Royal St., 565-5445; — Paintings by Don Picou and Stan Fontaine; “Raku” by Joy Gauss; 3-D wood sculpture by Joe Derr; all — “Faces Shannon Landis Hansen; textile constructions by Christine Sauer, through July 30. LIVE ART STUDIO. 4207 Dumaine St., 484-7245 — “New Orleans is Alive,” acrylics by Marlena Stevenson, through July. Expanded listings at bestofneworleans.com LOUISIANA CRAFTS GUILD. 608 Julia St., 558-6198; — Group show featuring works from guild members, ongoing. MALLORY PAGE STUDIO. 614 Julia St.; — Paintings by Mallory Page, Mondays-Fridays. MARTIN LAWRENCE GALLERY NEW ORLEANS. 433 Royal St., 299-9055;. com — Works by René Lalonde, through June. MARTINE CHAISSON GALLERY. 727 Camp St., 304-7942; www. 
martinechaissongallery.com — “Embers of a Floating World,” works by Caroline Wright, through July 9. — Illumi- nated glass sculpture by Curt Brock; enameled copper jewelry by Cathy DeYoung; hand-pulled prints by Dominique Begnaud, through July 30. NEWCOMB ART GALLERY. Woldenberg Art Center, Tulane University, 865-5328; www. newcombartgallery.tulane.edu — “The History of the Future,” photographs by Michael Berman and Julian Cardona, through June 29. OCTAVIA ART GALLERY. 4532 Magazine St., 309-4249; www. octaviaartgallery.com — Acrylic on canvas by Cleland Powell, through June 28. PARSE GALLERY. 134 Carondelet St. — “Chicken Lovers,” works by Barbie L’Hoste and Megan Hillerud, through Friday. — Photography by Louis Sahuc, ongoing. REINA GALLERY. 4132 Magazine St., 895-0022;. com — “Vintage New Orleans. media works by Ricardo Lozano, Michael Flohr, Henry Ascencio, Jaline Pol and others, ongoing. “Luminous Sculpture,” works by Eric Ehlenberger, ongoing. VINCENT MANN GALLERY. 305 Royal St., 523-2342; — “Françoise Gilot and the Figure: 19402010,” paintings and drawings by the artist, through June. WMSJR. 1061 Camp St., 299-9455; — Paintings by “B Movie Double Feature,” photographs and ceramic collectors’ plates by Heather Weathers, Wednesdays-Sundays. Through July. METAIRIE PARK COUNTRY DAY SCHOOL. 300 Park Road, Metairie, 837-5204;. com — “The Unconventional RODRIGUE STUDIO. 721 Royal St., 581-4244; — Works by George Rodrigue, ongoing. Will Smith, ongoing. ROSETREE GLASS STUDIO & GALLERY. 446 Vallette St., Algiers Point, 366-3602; — Hand-blown glass by Juli Juneau; photographs from the New Orleans Photo Alliance; both ongoing. MOJO COFFEE HOUSE. 1500 Magazine St., 525-2244; www. myspace.com/mojoco — Photographs by Marc Pagani, ongoing. CALL FOR ARTISTS NEUTRAL GROUND COFFEEHOUSE. 5110 Danneel St., 8913381; —. SOREN CHRISTENSEN GALLERY. 400 Julia St., 569-9501; www. sorengallery.com — “Horsing Around,” oil paintings by Campbell Hutchinson, through June. 
STELLA JONES GALLERY. Place St. Charles, 201 St. Charles Ave., Suite 132, 568-9050 — “Street Children,” a group exhibition of works by Zambian youth, through Aug. 1. Wednesday. THOMAS MANN GALLERY I/O. 1812 Magazine St., 581-2113; www. thomasmann.com — “Where’s the Money?” group exhibit interpreting the economy, ongoing. TRIPOLO GALLERY. 401 N. Columbia St., (985) 893-1441 — Works by Bill Binnings, Robert Cook, Donna Duffy, Scott Ewen, Juli Juneau, Kevin LeBlanc, Ingrid Moses, Gale Ruggiero, Robert Seago and Scott Upton, ongoing. UNO-ST. CLAUDE GALLERY. 2429 St. Claude Ave. — “Mara/Thal- assa/Kai: The Sea,” works by Anastasia Pelias, Rian Kerrane and Melissa Borman, through July. VENUSIAN GARDENS ART GALLERY. 2601 Chartres St., 943-7446; — A WORK OF ART GALLERY. 8212 Oak St., 862-5244 — Glass works DRAWING THE LINE. Octavia Art Gallery, 4532 Magazine St., 309-4249; — The gallery seeks works in all media that focus on the use of line for a upcoming juried exhibition (Aug. 6-27). Email art@octaviaartgallery. com for details. Submissions deadline is July 1. NOLA NOW! The Contemporary Arts Center seeks submissions for an exhibit featuring works produced in the last two years by artists currently living and working in the greater New Orleans area. The exhibition opens Oct. 1. Call 528-3805 or visit for details. Submissions deadline is July 8. SPARE SPACES BUD’S BROILER. 500 City Park Ave., 486-2559 — Works by Andrew Bascle, Evelyn Menge and others, ongoing. CAFE ROSE NICAUD. 632 Frenchmen St., 949-3300 — Paintings by Clarke Peters, through June. CAMPBELL’S COFFEE & TEA. 516 S. Tyler St., Covington, (985) 2466992;. com — Multimedia works by Margaux Hymel, ongoing. Portrait,” works by Mark Bercier, David Halliday, Gina Phillips and Alexander Stolin, ongoing. Work by local artists, ongoing. NEW ORLEANS CAKE CAFE & BAKERY. 2440 Chartres St., 9430010 —. DOS JEFES UPTOWN CIGAR BAR. 5535 Tchoupitoulas St., 891-8500; — exhibits of jazz artists, a St. 
Joseph’s altar replica, the Louisiana Italian-American Sports Hall of Fame and a research library with genealogy records. HI-HO LOUNGE. 2239 St. Claude Ave., 945-4446; — Works by Robin AMISTAD RESEARCH CENTER. 6823 St. Charles Ave., 862-3222 — “Richmond Barthe: Builder Works by Mario Ortiz, ongoing. Durand, Brad Edelman, Tara Eden, Eden Gass and others, ongoing.. LIBERTY’S KITCHEN. 422 1/2 S. Broad St., 822-4011 — Paintings on canvas by YA/YA artists, ongoing. MARIGNY PHO. 2483 Burgundy St., 267-5869 — Selections from. GERMAN-AMERICAN CULTURAL CENTER. 519 Huey P. Long Ave., Gretna, 363-4202; — Museum exhibits depict the colonial experience, work, culture and religion of German immigrants. GREAT AMERICAN ALLIGATOR MUSEUM. 2051 Magazine St., 5235588-5488; — “Magic Spell of Memory: The Photography of Clarence John Laughlin,” through fall 2011. City Park, 1 Collins Diboll Circle, 658-4100; — “Ancestors of Congo Square: African Art in the New Orleans Museum of Art,” through July 17. “Read My Pins: The Madeleine Albright Collection,” more than 200 pins from Albright’s personal collection, through Aug. 14. “Thalassa,” a 20-foottall suspended sculpture by Swoon, through Sept. 25. “Peter Carl Faberge and Other Russian Masters,” permanent collection of Faberge objects; “Six Shooters,” photographs from the New Orleans Photo Alliance; both ongoing. NEW ORLEANS PHARMACY MUSEUM. 514 Chartres St., 565-8027; —. LOUISIANA FILM MUSEUM. Montrel’s Bistro, 1000 N. Peters St., 524-4747; — The museum OLD URSULINE CONVENT. 1100 Chartres St., 529-3040 — “France LOUISIANA STATE MUSEUM PRESBYTERE. 751 Chartres St., 568-6968;. la.us — “Before During After,” OLD U.S. MINT. 400 Esplanade Ave., 568-6990; lsm.crt.state. la.us/site/mintex.htm — “Race: Are We So Different?” an exhibit exploring the history, science and everyday experience of race, through March 31. features props, costumes, video clips, still photographs, posters and other exhibits from major films produced in Louisiana.. 
through Sept. 25. “It’s Carnival Time in Louisiana,” Carnival artifacts, costumes, jewelry and others items, ongoing. LOUISIANA SUPREME COURT MUSEUM. Louisiana Supreme Court, 400 Royal St., 310-2;. org — “Absinthe Visions,” photographs by Damian Hevia, ongoing. NEW ORLEANS MUSEUM OF ART. in America,” photographs by Arielle de la Tour d’Auvergne, through June. SOUTHERN FOOD & BEVERAGE MUSEUM. Riverwalk Marketplace, 1 Poydras St., Suite 169, 569-0405; — “Aca- d 21 > 2011 ONE SUN GALLERY. 616 Royal St., (800) 501-1151 — Works by local and national artists, ongoing. RIVERSTONE GALLERIES. 719 Royal St., 412-9882; 729 Royal St., 581-3688; Riverwalk Marketplace, 1 Poydras St., Suite 36, 566-0588; 733 Royal St., 525-9988; www. riverstonegalleries.net — Multi- ART 43 LISTINGS GET IN ON THE ACT Listings editor: Lauren LaBorde listingsedit@gambitweekly.com FAX:483-3116 Deadline: noon Monday Submissions edited for space THEATER HAIRSPRAY. St. Lukes Method- ist Church, 5875 Canal Blvd., 486-3982 ext. 104 — A cast of 80 performs the musical about a plump teen who gets her dream of dancing on a popular 1962 TV show and tries to use her newfound stardom to racially integrate the program. Tickets are free, but donations are accepted. 7:30 p.m. Wednesday-Saturday. JULIUS CAESAR. Lupin Theatre, Tulane University, 865-5106; — The production sets the William Shakespeare tragedy in 1930s America amid a poverty-stricken population lead by scheming politicians. The play is part of the New Orleans Shakespeare Festival at Tulane. Call the box office or email box@tulane.edu for reservations. Tickets $30. 7:30 p.m. Thursday-Saturday. LOUISIANA LADIES. Louisiana State Museum Cabildo, 701 Chartres St., 568-6968; www. lsm.crt.state.la.us — In celebration of Louisiana’s 200th anniversary of statehood, the show features monologues and songs by Troi Bechet, Margarita Bergen, Leslie Castay, Tari Hohn Lagasse and others. Tickets $25 general admission, $45 reserved seating. 
Call 522-6545 or visit for reservations. 7 p.m. Monday. 0760; — A self-diagnosed agoraphobic’s fears begin to permeate his dysfunctional household in Bud Faust’s comedy. Tickets $18.50. 8 p.m. Friday-Saturday. THE ROCKY HORROR SHOW. AllWays for details. 8 p.m. and midnight Friday-Saturday, 8 p.m. Sunday through July 2. THE TRIP TO BOUNTIFUL. NOCCA Riverfront, Nims Blackbox Theatre, 2800 Chartres St., 940-2875; —. University, Dixon Hall, 865-5105 ext. 2; — Summer Lyric Theatre at Tulane University presents the Tony Award-winning musical based on Cervantes’ Don Quixote. Tickets start at $28. Call 8655269 for reservations. 8 p.m. Thursday-Saturday, 2 p.m. Sunday. CHIPPENDALES. Harrah’s Casino MILDRED, DEAREST. Le Chat SOUTHERN VOICES: DANCE OUT LOUD 4. Contemporary Arts Noir, 715 St. Charles Ave., 5815812;. com — Running With Scissors regulars star in the send-up to Hollywood’s greatest legends. Tickets $26 (includes $5 drink credit). 8 p.m. Friday-Saturday, 6 p.m. Sunday June 10-26. ON THE AIR. Stage Door Canteen at The National World War II Museum, 945 Magazine St., 528-1944 — Bob Edes Jr., Gary Rucker and others star in the musical that pays tribute to the heyday of radio broadcasts. Call 528-1943 or visit for details. No show June 25. 6 p.m. Friday, 11 a.m. Sunday. OUTSIDE IN. Cutting Edge Theater at Attractions Salon, 747 Robert Blvd., Slidell, (985) 290- (Harrah’s Theatre), 1 Canal St., 533-6600; — The theater hosts the iconic all-male revue. Tickets $33.60 (includes fees). 7 p.m. and 10 p.m. Friday. DANCE Center, 900 Camp St., 528-3800; — D’Project’s festival highlights New Orleans dancers, choreographers and dance companies through two showcases. Tickets $20 general admission, $16 CAC members and students. Showcase 1 is at 8 p.m. Friday and 2 p.m. Saturday; showcase 2 is at 8 p.m. Saturday and 2 p.m. Sunday. STAGE EVENTS STORER BOONE AWARDS. Le Chat Noir, 715 St. 
Charles Ave., 581-5812; — Members of the New Orleans theater community vote for the best performers and productions CALL FOR THEATER review Hand Jive in 29 categories. Tickets $10. 8 p.m. Monday.. THOROUGHLY MODERN MILLIE. Slidell High School, 1 Tiger Drive, Slidell, (985) 643-2992; www. slidellhigh.stpsb.org — Slidell Little Theatre seeks actors for the upcoming production of the musical. Actors should prepare 16-32 bars of an uptempo contemporary song for the audition. 7 p.m. Monday and June 28. NEW ORLEANS FRINGE FESTIVAL. The annual theater festival, held Nov. 16-20, seeks applications for 30-60 minute alternative theater performances. Visit for details. There is a $25 application fee. Submission deadline is July 1. COMEDY A.S.S.TRONOTS. La Nuit Comedy Theater, 5039 Freret St., 64443. BILLY D. WASHINGTON. Boom- town Casino, Boomers Saloon, 4132 Peters Road, Harvey, 366-7711; — The standup comedian performs. Free admission. 8 p.m. Wednesday. BROWN HQ. Pip’s Bar, 5252 Veter- ans Blvd., 456-9234 — Audience members can participate in the show performed by select cast members of the improv comedy troupe. Visit www. brownimprovcomedy.com/ BrownHQ for details. Tickets are free for performers, $5 general admission. 8 p.m. Tuesday. COMEDY CATASTROPHE. Lost Love Lounge, 2529 Dauphine St., 949-2009;.;. competition. Tickets $10. 7 p.m. Friday-Saturday. FEAR & LOATHING IN NEW ORLEANS. La Nuit Comedy Theater, 5039 Freret St., 644-4300; www. nolacomedy.com — The sketch comedy show boasts vampires, zombies, relationship advice and other horrors. 8:30 p.m. Friday. FRIDAY NIGHT LAUGHS. La Nuit Comedy Theater, 5039 Freret St., 644-4300;. com — Jackie Jenkins Jr. hosts the open-mic comedy show. Free admission. 11 p.m. Friday. GOD’S BEEN DRINKING. La Nuit Comedy Theater, 5039 Freret St., 644-4300;.. IVAN’S OPEN MIC NIGHT. Rusty Nail, 1100 Constance St., 525-5515; — The Rusty Nail hosts a weekly openmic comedy and music night. 9 p.m. Tuesday. JODI BORRELLO. 
Lakeview Har- bor, 911 Harrison Ave., 486-4887 — Comedians JD Sledge and James Cusimano also perform. Tickets $15. 8:30 p.m. Sat., June 25. LA NUIT STAND-UP OPEN MIC. La Nuit Comedy Theater, 5039 Freret St., 644-4300; www. nolacomedy.com — The theater hosts an open mic following the God’s Been Drinking show. 11 p.m. Friday. LAUGH OUT LOUD. Bootleggers Bar and Grille, 209 Decatur St., 525-1087 — Simple Play presents a weekly comedy show. 10 p.m. Thursday-UP COMEDY. Bullets Sports Bar, 2441 A.P. Tureaud Ave., 948-400. SIDNEY’S STAND-UP OPEN MIC. Sidney’s, 1674 Barataria;. com — The improv comedy troupe performs. Tickets $5. 8:30 p.m. Tuesday. THINK YOU’RE FUNNY? Carrollton Station, 8140 Willow St., 865-9190; — The weekly open-mic comedy showcase is open to all comics. Sign-up is 8:30 p.m. Show starts at 9 p.m. Wednesday. Gambit > bestofneworleans.com > JUne 21 > 2011 MAN OF LA MANCHA. Tulane STAGE 45 EVENTS LISTINGS Listings editor: Lauren LaBorde listingsedit@gambitweekly.com FAX:483-3116 Deadline: noon Monday Submissions edited for space FAMILY Tuesday 21 ALADDIN . Rogers Memorial Chapel, Tulane University, 8623214 — The Patchwork Players present their outside-the-box, audience participation-heavy version of the tale. Reservations are recommended. Call 3142579, email patchworkplayersnola@gmail.com or visit www. patchworkplayersnola.com for details. Tickets $8. 10 a.m. and 11:30 a.m. Tuesday-Friday, 11 a.m. Saturday. TODDLER TIME . Louisiana Children’s Museum, 420 Julia St., 523-1357; — The museum hosts special Tuesday and Thursday activities for children ages 3-under and their parents or caregivers. Admission $8, free for members. 10:30 a.m. Thursday 23 ART ACTIVITIES DURING AFTER HOURS. Ogden Museum of Southern Art, 925 Camp St., 539-9600; — The Ogden offers art activities for kids during the weekly After Hours concerts. 6 p.m. to 8 p.m. Friday 24 Gambit > bestofneworleans.com > JUne 21 > 2011 BUTTERFLY TEA . 
Windsor Court 46 Hotel, 300 Gravier St., 522-1922; — The hotel and the Audubon Insectarium host the tea with live butterflies on every table and butterfly-themed treats. Reservations are recommended. Call 596-4773 or visit www. grillroomneworleans.com/ le-salon for details. Admission $19-$38. Seatings at 11 a.m., 2 p.m. and 4:30 p.m. FridaySaturday. Saturday 25 BAYOU BRIDGE BREAKFAST. Bayou St. John, Magnolia Bridge, Moss Street and Harding Drive — Re-Bridge hosts the children’s event that includes breakfast, a walking tour of Bayou St. John, a demonstration on how to interpret specimen samples from the bayou, and the painting of a mural to span Magnolia Bridge. Call 309-2116 or visit www. rebridge.org for details. Free admission. 9 a.m. to 11 a.m. EVENTS Tuesday 21 BOULIGNY LECTURE SERIES ON SPANISH LOUISIANA . Historic BE THERE DO THAT New Orleans Collection, 533 Royal St., 523-4662;. org — Alfred E. Lemmon, director of the Williams Research Center at The Historic New Orleans Collection, discusses “Following the Paper Trail: The Daily Life of a Spanish Colonial Document.” Free admission. 6:30 p.m. CRESCENT CITY FARMERS MARKET. Tulane University Square, 200 Broadway St. —. DEPRESSION AND BIPOLAR SUPPORT ALLIANCE. Tulane- Lakeside Hospital, 4700 South I-10 Service Road West, Metairie — The peer support group meets the first and third Tuesdays of every month. Visit for details. 7:30 p.m. JOHN R. BEYRLE EVENTS. National World War II Museum, 945 Magazine St., 527-6012; — The museum, along with the World Trade Center New Orleans, World Affairs Council of New Orleans, The University of New Orleans Centre Austria and the Consular Corps of New Orleans, hosts a luncheon honoring the U.S. ambassador to the Russian Federation at 6 p.m. at the Stage Door Canteen. At 6 p.m. in the museum, Beyrle and his siblings present a discussion in conjunction with the museum’s Joe Beyrle: A Hero for Two Nations exhibit. MASTER GARDENERS DEMO. 
Sydney and Walda Besthoff Sculpture Garden, New Orleans Museum of Art, 1 Collins Diboll Circle, City Park, 6584100; — LSU Agricultural Extension Service and Master Gardeners of Greater New Orleans present a bed design and planting demonstration. Call 658-4153 or 866-2381 for details. Free admission. 9 a.m. to 11 a.m. Wednesday 22; — The weekly market offers seasonal produce, seafood, prepared foods, smoothies and more. 3 p.m. to 7 p.m. preview Follow the Money New Orleanians may remember the turmoil when the Archdiocese of New Orleans announced it would close St. Augustine’s Church following Hurricane Katrina and the levee failures. Not just St. Augustine’s congregation was outraged, but before the church assented to keep the parish open, the widely popular Father Jerome LeDoux moved to Texas. The archdiocese had suffered economic losses due to flooding, and there was an outcry as other parishes were shuttered. Archbishop Alfred Hughes’ handling of the situation showed ways in which the church could be at best out of touch with its flock and at worst uninclined to be accountable to it. That saga is briefly recounted in Jason Berry’s Render Unto Rome: The Secret Life of Money in the Catholic Church (Random House). The focus of the book is how the church handles its money. Even without scandal, how a global organization manages its finances is an interesting question. The church has an estimated 1.2 billion members worldwide and massive wealth and assets, yet it is largely free to disclose only what it chooses about its income and spending. As the church has had to pay hundreds of millions of dollars to settle clergy sexual abuse lawsuits — and at the same time close parishes — Berry examines how such a large financial entity accounts for itself. 
The book is another installment in Berry’s reporting on the Catholic church, and it picks up on issues and church figures he’s covered previously, including clergy abuse (Lead Us Not Into Temptation) and Father Marcial Maciel, a major fundraiser for the church who was involved in both abuse scandals and money issues (Vows of Silence). Berry introduces many Catholic parishioners, some who became disillusioned with the church and others who became more involved in addressing its monetary management. With the costs of the clergy abuse scandals, the church’s actual wealth is an intriguing subject. Given the way the church chronically failed to handle its problem with pedophile priests, frequently “recycling” them to different parishes, one isn’t inspired with confidence that money is better managed. The book investigates the church’s fundraising and various monetary arms, including the secretive Vatican Bank. Other areas of seemingly failed oversight include a Philadelphia cardinal who spent $5 million renovating a church-owned vacation home. Berry also analyzes the way scandals and parish closings depress contributions. He tenaciously follows various money trails, including court cases, reports made public and stories told by church insiders. Ultimately, the question is what does the church owe its followers, especially when it asks for their support every Sunday? — Will Coviello JUN 22 Jason Berry signs Render Unto Rome 5:30 p.m. Wednesday Garden District Book Shop, 2727 Prytania St., 895-2266;. Ave., Sala Avenue at Fourth Street, Westwego — The market offers organic produce, baked goods, jewelry, art and more, with live music and pony rides. 8 a.m. to 2 p.m. Wednesday and Saturday. Thursday 23 ALVAR CHESS. Alvar Library, 913 Alvar St., 596-2667 — Library guests can play chess with expert player Bernard Parun Jr. 5 p.m. to 7 p.m. AMERICAN LIBRARY ASSOCIATION ANNUAL CONFERENCE AND EXHIBIT. Ernest N. Morial Convention Center, 900 Convention Center Blvd. 
— Columnist Dan Savage and comedian and actress Molly Shannon are two of the speakers at the annual conference, which brings together librarians, educators, authors, publishers, literacy experts, illustrators and suppliers. Visit for the full schedule and other details. Thursday-Monday and June 28.. THE GATHERING. On the first Thursday of the month, a local restaurant will host the dinner event featuring special menus, featured artists and discussion topics to raise money for a charity. The location will be revealed when the reservation is made. Email gatheringnola2011@gmail.com or visit. com for details. Admission $50. 6:30 p.m. to 9:30 p.m. HAITIAN DRUM & DANCE CLASS. Ashe Cultural Arts Center, 1712 Oretha Castle Haley Blvd., 569-9070; — Haitian master drummer Damas “Fan Fan” Louis, drummer David Braswell and Haitian dancer Michelle Martin lead the class. Free admission. 6 p.m. RENOVATORS’ HAPPY HOUR . The Preservation Resource Center event features a renovation-in-progress in the Carrollton neighborhood (8826 Willow St.). The event also has wine and light refreshments. Call 636-3399 or email sblaum@prcno.org for details. Admission $5 for PRC members, $7 non-members. 5:30 p.m. to 7:30 p. Friday 24 ADULT CHILDREN OF ALCOHOLIC/DYSFUNCTIONAL FAMILIES. Fair Grinds Coffeehouse, 3133 Ponce de Leon Ave., 913-9073; www. fairgrinds.com — The weekly support group meets at 6:15 p.m. Fridays. Visit for details. JESUIT HIGH SCHOOL FISHING RODEO. Jesuit High School, 4133 Banks St., 483-3816 — Participants can fish anywhere, then bring their fish to the high school for the weigh-in. The weigh-in also features food and door prizes. Rodeo registration is required by noon Thursday. Call 486-6631 or visit for details. Admission $15-$35. Fishing begins at 6 a.m., weigh-in 2 p.m. to 5 p.m. MARKETPLACE AT ARMSTRONG PAGE 48 GAMBIT > BESTOFNEWORLEANS.COM > JUNE 21 > 2011 47 EVENTS LISTINGS BE THERE DO THAT PAGE 46 PARK. Armstrong Park, North Rampart and St. 
Ann streets — The weekly market features fresh produce, baked goods, Louisiana seafood, natural products, art, crafts and entertainment. 2 p.m. to 5 p.m. Fridays. NEW ORLEANS MEDICAL MISSION SERVICES GALA. Generations Hall, 310 Andrew Higgins Drive, 581-4367; www. generationshall.net — The nonprofit that provides health care to the poor hosts a gala. Visit for details. Tickets start at $60. 7 p.m. to 11 p.m. NOLA PRIDE FESTIVAL. Various locations, visit website for details — Events include a pub crawl, family activities, a block party, book signings, the Grand Marshal reception and a parade and street festival featuring performances by Amanda Shaw, Jason Dottley, the 80s pop star Tiffany and others. Visit. biz for the full schedule and other details. Friday-Sunday. SQUIRE CREEK LOUISIANA PEACH FESTIVAL. Railroad Park, 107 Park Ave., (800) 392-9032 — The annual festival features more than 200 artisans and vendors, a parade, rodeo, tennis tournament, 5K run, antique car displays and more. Visit for details. Admission $5-$10. 5 p.m. to 10 p.m. Friday, 8 a.m. to 10 p.m. Saturday. WHERE Y’ART. New Orleans GAMBIT > BESTOFNEWORLEANS.COM > JUNE 21 > 2011 Museum of Art, City Park, 1 Collins Diboll Circle, 658-4100; — The museum’s weekly event features music, performances, film screenings, family-friendly activities and more. 5:30 p.m. to 8 p.m. Fridays. 48 Saturday 25 ALLIGATOR LIFE . Fontainebleau State Park, 67825 Hwy. 190, Mandeville, (888) 677-3668 — The program focuses on one of Louisiana’s most well-known residents: the American alligator. 11 a.m. BARKING BOOT CAMP. LA/ SPCA, 1700 Mardi Gras Blvd., 368-5191; — A fitness trainer teaches the dog-and-owner class that mixes cardio, resistance training, obstacle courses and “doga” (dog yoga). Proceeds benefit the LA/SPCA. Preregistration is required. Call 810-1835 or visit for details. Admission $40 for four sessions. 7 a.m. to 7:45 a.m. BUSH MAN COMPETITION 2011 SIGN-UP & TRYOUTS. 
Tekrema Center for Art and Culture, 5640 Burgundy St. — EcoLifestyles seeks black males ages 21-35 who are creative, mentally and physically fit and socially responsible for the competition that promotes eco-friendly role models. Men should be prepared to discuss “going green” and to showcase a talent. Email info@ bman2011.com or visit www. bman2011.com for details. 4 p.m. to 7 p.m. COOKING DEMONSTRATION WITH MARTHA HALL FOOSE. Southern Food & Beverage Museum, Riverwalk Marketplace, 1 Poydras St., Suite 169, 569-0405; www. southernfood.org — The author demonstrates her recipe for fudge and signs copies of her newest book, A Southerly Course. 2 p.m. CRESCENT CITY FARMERS MARKET. Magazine Street Market, Magazine and Girod streets, 861-5898; www. marketumbrella.org —1163 for details. 10 a.m. to 11:30 a.m. FOCUS ON WOMEN LUNCHEON . La Maison Creole, 1605 8th St., Harvey, 3623908 — The Epsilon Sigma Chapter of Sigma Gamma Rho Sorority hosts the luncheon spotlighting women who have promoted leadership and service within their communities. Call 458-8383 or email dominiquempayne@gmail. com for details. Tickets $40. 11:30 a.m. FRENCH AMERICAN CHAMBER OF COMMERCE SUMMER WINE FESTIVAL . Shops at Canal Place, 333 Canal St., 522-9200;. com — Besides more than 20 wines, the French-themed festival features French music and cuisine and an auction. Call 458-3528 or email info@ facc-la.com for details. Tickets $45 FACC/LA members in advance, $55 non-members in advance, $65 at the door. 6 p.m. to 9 p.m. FRIENDS OF HARBOR CENTER GALA . Northshore Harbor Center, 100 Harbor Center Blvd., Slidell, (985) 781-3650 — The Friends of the Northshore Harbor Center present the gala featuring live music, food, an open bar with a speciality cocktail and a chance to win a cruise for two. Call (985) 7813650 for details. Admission $75. 7 p.m. to midnight.. INDOOR TRI-CHALLENGE . 
East Jefferson General Hospital Wellness Center, 3601 Houma Blvd., Suite 204, Metairie; — The indoor race includes a 15-minute swim, a 15-minute bike race and a 15-minute run for individuals or relay teams. Pre-registration is required by 4 p.m. Friday. Call 456-5000 for details. Admission $25 Wellness Center members, $35 non-members. 9 a.m. LACOMBE CRAB FEST. John Davis Park, Bayou Lacombe, Lacombe, (985) 882-3010 — The crab-themed festival features food, rides, an interactive cultural village, children’s activities, and live music by Rockin’ Dopsie, the Boogie Men, the Mighty Supreme and others. Admission $5, free for children under 10. 11 a.m. to 11 p.m. Saturday, noon to 9 p.m. Sunday. NATURE: A CLOSER LOOK . Fontainebleau State Park, 67825 Hwy. 190, Mandeville, (888) 677-3668 — Park rangers lead a weekly nature hike. 9 a.m. to 10:30 a.m. RENAISSANCE MARKETPLACE OF EASTERN NEW ORLEANS. Renaissance Marketplace, 5700 Read Blvd. — The market offers cuisine from area restaurants, shopping, arts and crafts, children’s activities and more. 1 p.m. to 6 p.m. SANKOFA FARMERS MARKET. Sankofa Farmers Market, 5500 St. Claude Ave., 975-5168;. org — The weekly market offers fresh produce and seafood from local farmers and fishermen. 10 a.m. to 2 p.m. Saturdays. ST. BERNARD PARISH CHAMBER OF COMMERCE BUSINESS EXPO. Frederick J. Sigur Civic Center, 8201 W. Judge Perez Drive, Chalmette, 278-4242 — The expo gives guests the opportunity to learn more about area businesses and the services they provide. Call 250-6121 or email director@ stbernardchamber.org for details. 11 a.m. to 4 p.m. Sunday 26 DIMENSIONS OF LIFE DIALOGUE . New Orleans Lyceum, 618 City Park Ave., 460-9049; — The nonreligious, holistic discussion group Expanded listings at bestofneworleans.com EVENTS focuses on human behavior with the goal of finding fulfillment and enlightenment. Call 368-9770 for details. Free. 9 a.m. to 10:30 a.m.. SUNDAY SALON. Longue Vue House and Gardens, 7 Bamboo Road, 488-5488; — Dr. 
Richard Vinroot Jr. presents “Healing in the Hot Zone: One Doctor’s Passion for Medicine in Critical Parts of the World.” Refreshments will be served. Call 293-4726 or email hschackal@longuevue.com for details. Free admission. 3 p.m. to 5 p.m. Monday 27. SPORTS NEW ORLEANS ZEPHYRS. Zephyr Field, 6000 Airline Drive, Metairie, 734-5155; — The Zephyrs play the Round Rock Express. 7 p.m. TuesdayFriday. NEW ORLEANS JESTERS. Pan American Stadium, City Park, 1 Zachary Taylor Drive — The Jesters play the Baton Rouge Capitals. faith-based nonprofit seeks homes to rebuild that suffered damage of 50 percent or more. BAYOU REBIRTH WETLANDS EDUCATION . Bayou Rebirth seeks volunteers for wetlands planting projects, nursery maintenance and other duties. Visit for details. JEFFERSON COMMUNITY SCHOOL . The charter school that educates at-risk middle school students who have been expelled from Jefferson Parish’s public schools seeks adult mentors for its students. Call 836-0808. OPERATION REACH..on-one with public school students on reading and language skills. Call 899-0820, email elizabeth@scapc.org or visit for details. TEEN SUICIDE PREVENTION . The Teen Suicide Prevention Program seeks volunteers to help teach middle- and upperschool New Orleans students. Call 831-8475 for details. Do You Want A New Smile? IT’S POSSIBLE WITH ESSIX.® ESSIX IS: INVISIBLE • AFFORDABLE • REMOVABLE • COMFORTABLE • QUICK Essix is similar to Invisalign but much less expensive. Actual results from a patient treated by Dr. Schmidt after wearing the Essix aligners for 9 months.* * Actual treatment times may vary. 8978107 for information. BEFORE WORDS BARNES & NOBLE JR . Barnes & Noble Booksellers, 3721 Veterans Memorial Blvd., Metairie, 455-5135 — The bookstore regularly hosts free reading events for kids. Call for schedule information. BOOK SIGNINGS AT NOLA PRIDE FESTIVAL. French Quarter, corner of St. 
Anne and Bourbon Streets — Faubourg Marigny Art & Books hosts book signings with David Lummis, Michael Patrick Welch, Tom Carson and others during the NOLA Pride Festival. 1 p.m. to 5 p.m. Sunday. CASSANDRA CLARE & ELLEN HOPKINS. Octavia Books, 513 Octavia St., 899-7323 — Clare, author of City of Fallen Angels, and Hopkins, author of Crank, Burned, Impulse and others, sign and discuss their books. 5:30 p.m. Saturday. COOKBOOK CLUB. Garden District Book Shop, The Rink, 2727 Prytania St., 895-2266 — Sheri Castle discusses and signs New Southern Garden Cookbook. Bringing food is encouraged but not required. 6 p.m. Monday. COOKBOOKS & COCKTAILS SERIES. Kitchen Witch Cookbooks Shop, 631 Toulouse St., 528-8382 — The group meets weekly to discuss classic New Orleans cookbooks. 4:30 p.m. to 6:30 p.m. Friday. DAVID UNGER . New Orleans Public Library, Main Library, 219 Loyola Ave., 596-2602 — The author signs and discusses Price of Escape. 1 p.m. Saturday. DINKY TAO POETRY. Molly’s at the Market, 1107 Decatur St., 525-5169; — The bar hosts a free weekly poetry? • Did you previously wear braces and your teeth have begun to shift? • Are your upper and lower teeth crowded? • Is there a gap between your two front teeth? • Are your teeth slightly crooked? If you answered " YES" to any of these, call today for a Consultation. Get the NEW SMILE you've been waiting for! For a free report, request one from contactriverbend@aol.com. 49 $ * CONSULTATION SPECIAL TO 1ST 5 CALLERS ONLY *EXPIRES 07/03/2011 GREAT SMILES - WITHOUT BRACES GLENN SCHMIDT, D.D.S., M.S. GENERAL DENTISTRY UPTOWN 8025 Maple Street @ Carrollton · 504.861.9044 Gambit > bestofneworleans.com > JUne 21 > 2011 Goodwill Training Center, 3400 Tulane Ave. — Nonprofit Central hosts a weekly meeting for all leaders of nonprofit groups. Email susan_unp@ yahoo.com for details. 9:30 a.m. to 11 a.m. from Hurricane Katrina. Call 942-0444, ext. 244 for details. 49 S:2.281” EVENTS LISTINGS reading with open mic. 9 p.m. Tuesday. 
ELANA JOHNSON, JENNY HAN, JESSI KIRBY & JOHN COREY WHALEY. Octavia Books, 513 Octavia St., 899-7323 — The young adult book authors sign their works. 2 p.m. Monday. FAIR GRINDS POETRY EVENT. Fair Grinds Coffeehouse, 3133 Ponce de Leon Ave., 913-9073; — Jenna Mae hosts poets and spoken-word readers on the second, fourth and fifth Sunday of each month. 8 p.m. FRIENDS OF THE NEW ORLEANS PUBLIC LIBRARY BOOK SALE. Latter Library Carriage House, 5120 St. Charles Ave., 596-2625; — The group hosts twice-weekly sales of books, DVDs, books on tape, LPs and more. 10 a.m. to 2 p.m. Wednesday and Saturday. INGRID LAW, JAY ASHER & MAUREEN JOHNSON. Octavia Books, 513 Octavia St., 899-7323 — The authors sign their books. 3 p.m. Sunday. JASON BERRY. Garden District Book Shop, The Rink, 2727 Prytania St., 895-2266 — The author discusses and signs Render Unto Rome: The Secret Life of Money in the Catholic Church. 5:30 p.m. Wednesday. JEFF KINNEY. Academy of the Sacred Heart, Nims Fine Arts Center, 4301 St. Charles Ave., 899-7323 — The author and cartoonist signs The Wimpy Kids Do-It-Yourself Book. 4 p.m. Friday. JESUS ANGEL GARCIA. Antenna Gallery, 3161 Burgundy St., 957-4255; — The author presents a “transmedia” reading of badbadbad. 7 p.m. Saturday. JOSH KILMER-PURCELL & BRENT RIDGE. Anthropologie, The Shops at Canal Place, 333 Canal St., 592-9972; www.anthropologie.com — The stars of the Planet Green network show The Fabulous Beekman Boys sign The Bucolic Plague: How Two Manhattanites Became Gentlemen Farmers: An Unconventional Memoir. 10 a.m. Saturday. JULIE KANE .
Louisiana Humanities Center, 938 Lafayette St., Suite 300, 523-4352; — Louisiana’s poet laureate reads from her poetry collections. 6 p.m. Thursday. KATE DICAMILLO. Octavia Books, 513 Octavia St., 8997323 — The author of Because of Winn-Dixie signs her books. 2 p.m. Sunday. Metairie • New Orleans • Biloxi ruthschris.com KEVIN HENKES. Maple Street Book Shop, 7523 Maple St., 866-4916; — The children’s author signs Little White Rabbit and Junonia. 11:30 a.m. Saturday. LAUREN MYRACLE . Octavia 50 BE THERE DO THAT Books, 513 Octavia St., 8997323 — The young adult book author signs and reads from Shine. 2 p.m. Saturday. 866-4916; — The young adult book author signs Three Quarters Dead. 1 p.m. Sunday. LOCAL WRITERS’ GROUP. SARAH DESSEN . Octavia. MICHAEL BROWN . Garden District Book Shop, The Rink, 2727 Prytania St., 895-2266 — The author discusses and signs Deadly Indifference: The Perfect (Political) Storm Hurricane Katrina, The Bush White House, and Beyond. 6 p.m. Friday. The author also appears at Maple Street Book Shop (7523 Maple St., 8664916;) at 3 p.m. Saturday. MO WILLEMS. Garden District Book Shop, The Rink, 2727 Prytania St., 895-2266 — The children’s author discusses and signs Should I Share My Ice Cream and Hooray For Amanda and Her Alligator. 3 p.m. Saturday. N.H. SENZAI & FRANCES O’ROARK DOWELL. Maple Street Book Shop, 7523 Maple St., 866-4916; www. maplestreetbookshop.com — Senzai, author of Shooting Kabul, and Dowell, author of Ten Miles Past Normal and The Secret Language of Girls, sign their books. 1 p.m. Monday. ORDERLY DISORDER: LIBRARIAN ZINESTERS IN CIRCULATION TOUR . Newcomb College Center for Research on Women, Caroline Richardson Hall, 62 Newcomb Place, 865-5238 — Librarians hosts a discussion and reading of zines in conjunction with the American Library Association’s 2011 conference. Email zinelibtour@gmail.com or visit zinemobile.wordpress.com for details. 6:30 p.m. to 8 p.m. Sunday. PASS IT ON . 
George & Leah McKenna Museum of African American Art, 2003 Carondelet St., 586-7432; — Poet Gian “G-Persepect” Smith and Alphonse “Bobby” Smith host a weekly spoken-word and music event. Admission $6. 9 p.m. Saturdays. POETRY MEETING . New Orleans Poetry Forum, 257 Bonnabel Blvd., Metairie, 835-8472 — The forum holds workshops every Wednesday. 8 p.m. to 10:30 p.m. RICHARD PECK . Maple Street Book Shop, 7523 Maple St., Books, 513 Octavia St., 8997323 — The young adult book author signs What Happened to Goodbye. 1 p.m. Saturday. SPOKEN WORD. Ebony Square, 4215 Magazine St. — The center hosts a weekly spokenword, music and open-mic event. Tickets $7 general admission, $5 students. 11 p.m. Friday. STIEG LARSSON BOOK CLUB. East Bank Regional Library, 4747 W. Napoleon Ave., Metairie, 838-1190 — The group discusses the movie version of The Girl with the Dragon Tattoo. 6:30 p.m. Sunday. TAO POETRY. Neutral Ground Coffeehouse, 5110 Danneel St., 891-3381; — The coffeehouse hosts a weekly poetry reading. 9 p.m. Wednesday. TOM FRANKLIN & LAURA LIPPMAN . Octavia Books, 513 Octavia St., 899-7323 — Franklin, author of Crooked Letter, Crooked Letter, and Lippman, author of I’d Know You Anywhere, sign their books. 6 p.m. Monday. TOMIE DEPAOLA . Maple Street Book Shop, 7523 Maple St., 866-4916; — The children’s author signs Strega Nona and Let the Whole Earth Sing Praise. 2:15 p.m. Saturday. TOMIE DEPAOLA & RICHARD PECK . Octavia Books, 513 Octavia St., 899-7323 — Children’s book author dePaola, author of the forthcoming Streganona’s Gift, and Peck, author of A Year Down Yonder, sign their books. Noon. Sunday. UNIVERSES. Craige Cultural Center, 1800 Newton St., Algiers — The center hosts a weekly spoken-word, music and open-mic event. Tickets $5. 8 p.m. Sunday.. WOMEN’S POETRY CIRCLE . St. Anna’s Episcopal Church, 1313 Esplanade Ave., 947-2121; — The group meets at 2 p.m. Mondays. Call 289-9142 or email poetryprocess@gmail. com for details. YOHANNES GEBREGEORGIS. 
New Orleans Public Library, Martin Luther King Branch, 1611 Caffin Ave., 529-READ; — The author reads from Silly Mammo and Tirhas Celebrates Ashenda. 10 a.m. Friday.

Email Ian McNulty at imcnulty@cox.net.

DINING DUO
Two related restaurants opened simultaneously beside each other on Freret Street: Ancora Pizzeria & Salumeria … restaurants, along with partners Jeff Talbot at Ancora and Chip Apperson at High Hat.

PUTTING EVERYTHING ON THE TABLE
WHAT Cafe Gambino
WHERE 4821 Veterans Memorial Blvd., Metairie, 885-7500
WHEN Lunch Mon.-Fri.
RESERVATIONS Accepted
HOW MUCH Inexpensive
WHAT WORKS Dishes made from scratch, plenty of local seafood
WHAT DOESN’T The dessert list omits the bakery’s specialties

ZIMET BENEFIT, PART I
This week marks the first of two major benefits for Nathanial Zimet, the chef/owner of Boucherie (8115 Jeannette St., 862-5514;), who is recovering from gunshot wounds suffered during a robbery last month. Local breweries and liquor providers come together June 24 for “Beers Not Bullets,” a tasting event at NOLA Brewing (3001 Tchoupitoulas St., 613-7727;) with food and live music.
Advance tickets are $30. Zimet’s supporters also are planning a July 10 benefit called “Beasts and Brass.”

5 IN FIVE: FIVE PLACES FOR CHEESE PLATES
A MANO — 870 Tchoupitoulas St., 208-9280. Savor an Italian cheese tour with traditional accompaniments.
BACCHANAL — 600 Poland Ave., 948-9111. The wine shop serves generous cheese plates.
DOMENICA — 123 Baronne St., 648-6020. Cheese boards come with unique fried bread and house-made garnishes.
GREEN GODDESS — 307 Exchange Place, 301-3347. The cheese list is as eclectic and adventurous as the menu.
ST. JAMES CHEESE COMPANY — 5004 Prytania St., 899-4737. Plates are assembled from a global inventory at this cheese emporium.

CHECK, PLEASE — Down-home Creole-Italian lunch in an unexpected setting

Any Way You Slice It
Chef Wanda McKinney prepares hearty Creole and Italian dishes at Cafe Gambino.
HAVE YOUR CAKE AND EAT IT, TOO, AT CAFE GAMBINO. PHOTO BY CHERYL GERBER
BY IAN MCNULTY
Mention Gambino’s Bakery and some New Orleanians automatically will crave dessert, especially a tall, multi-layered, pudding-filled slice of the bakery’s …

2007 Joseph Drouhin Chorey-les-Beaune
BOURGOGNE, FRANCE / $20 RETAIL
The very small appellation of Chorey-les-Beaune lies between the prestigious neighbors Savigny-les-Beaune and Aloxe-Corton. This bottling is aged 12 to 15 months in 10 percent new French oak. The elegant wine exudes intense aromas of cherry, red berries and forest floor, followed on the palate by red fruit with notes of dried strawberries, spice and earth. It’s a good red wine for summer and goes well with cold cuts, roast fowl or game, grilled meats, pizza, Tex-Mex fare and soft cheeses.
Buy it at: Hopper’s Wines & Spirits and Hopper’s Cartes des Vins.
Drink it at: Cafe Degas, Le Foret, Stella!, Cochon, Martinique Bistro, Crescent City Steak House and Meauxbar Bistro. — Brenda Maitland
Questions? Email winediva1@earthlink.net.

Gambit > bestofneworleans.com > June 21 > 2011
YOU ARE WHAT YOU EAT

Out 2 Eat is an index of Gambit contract advertisers. Unless noted, addresses are for New Orleans.

AMERICAN
FAT HEN GRILL — 1821 Hickory Ave., … $$

BAR & GRILL
THE RIVERSHACK TAVERN — 3449 River Road, 834-4938; — This bar and music spot offers a menu of burgers, sandwiches overflowing with deli meats and changing lunch specials. No reservations. Lunch and dinner daily. Credit cards. $
SHAMROCK BAR & GRILL — 4133 S. Carrollton Ave., 301-0938 — Shamrock serves burgers, shrimp or roast beef po-boys, Reuben sandwiches, cheese sticks and fries with cheese or gravy. Other options include corned beef and cabbage, and fish and chips. No reservations. Dinner and late night daily. Credit cards. $

BARBECUE
ABITA BAR-B-Q — 69399 Hwy. 59, Abita Springs, (985) 892-0205 — Slow-cooked brisket and pork are specialties at this Northshore smokehouse. The half-slab rib plate contains six ribs served with a choice of two sides. No reservations. Lunch Mon.-Sat., dinner Tue.-Sat. Credit cards. $
BOO KOO BBQ — 3701 Banks St., 202-4741; — The Boo Koo burger is a ground brisket patty topped with pepper Jack cheese, boudin and sweet chile aioli. The Cajun banh mi fills a Vietnamese roll with hogshead cheese, smoked pulled pork, boudin, fresh jalapeno, cilantro, cucumber, carrot, pickled radish and sriracha sweet chile aioli. No reservations. Lunch and dinner Mon.-Sat., late-night Fri.-Sat. Cash only. $
WALKER’S BAR-B-QUE — 10828 Hayne … $

…VIEW CAFE AT CITY PARK — …

BREWPUB
… — … Reservations recommended. Lunch and dinner daily. Credit cards. $$

BURGERS
BEACHCORNER BAR & GRILL — 4905 Canal St., 488-7357; — Top a 10-oz. Beach burger with cheddar, blue, Swiss or pepper Jack cheese, sauteed mushrooms or house-made hickory sauce. Other options include a grilled chicken sandwich. No reservations. Lunch and dinner daily. Credit cards. $
… — The cafe serves breakfast items like the Freret Egg Sandwich with scrambled … $$
ECO CAFE & BISTRO — 3903 Canal St., 561-6585; — Eco Cafe serves sandwiches like the veggie club, layered with Swiss cheese, tomatoes, onions, cucumbers, spinach and baby pickles. There are fresh-squeezed juices, and Friday and Saturday evenings feature tapas dining. No reservations. Breakfast and lunch daily, dinner Fri.-Sat. Credit cards. $$
PRAVDA — … — Pravda is known for its Soviet kitsch and selection of absinthes, and the kitchen offers pierogies, beef empanadas, curry shrimp salad and a petit steak served with truffle aioli. No reservations. Dinner Tue.-Sat. Credit cards. $
RICCOBONO’S PANOLA STREET CAFE — … $

CHINESE
CHINA ORCHID — 704 S. Carrollton Ave., 865-1428; — … $
FIVE HAPPINESS — 3511 S. Carrollton Ave., 482-3935 — The large menu at Five Happiness offers a range of dishes from wonton soup to sizzling seafood combinations served on a hot plate to sizzling Go-Ba to lo mein dishes. Delivery and banquet facilities available. Reservations accepted. Lunch and dinner daily. Credit cards. $$
JUNG’S GOLDEN DRAGON — 3009 Magazine St., 891-8280; — Jung’s offers a mix of Chinese, Thai and Korean cuisine. Chinese specialties include Mandarin, Szechuan and Hunan dishes. Grand Marnier shrimp are lightly battered and served with Grand Marnier sauce, broccoli and pecans. Reservations accepted. Lunch and dinner daily. Credit cards. $
THREE HAPPINESS — 1900 Lafayette St., Suite 4, Gretna, 368-1355; — … $$
TREY YUEN CUISINE OF CHINA — 600 N. Causeway Approach, Mandeville, (985) 626-4476; 2100 N. Morrison Blvd., Hammond, (985) 345-6789; — … $

… favorites including the Fat Elvis, made with banana cake and topped with peanut butter frosting. …
… ( — …
… ( — … brunch Sun., late-night Tue.-Sun. Credit cards. $$

DELI
CG’S CAFE AT THE RUSTY NAIL — 1100 Constance St., 722-3168; — … late-night Tue.-Sat. Cash only. $
KOSHER CAJUN NEW YORK DELI & GROCERY — 3519 Severn Ave., Metairie, 888-2010; — …
… ( — Sandwiches piled high with cold cuts, salads, hot sandwiches, soups and lunch specials are available at the deli counter. The Cedric features chicken breast, spinach, Swiss, tomatoes and red onions on seven-grain bread. No reservations. Lunch daily.
… — 525 … St., 566-9051; — Located in a historic building, the quaint bistro serves starters like chicken and andouille gumbo and fried frog legs. Entrees include choices like fried chicken, Gulf fish and burgers. Reservations accepted. Dinner Wed.-Sat. …

DINER
DAISY DUKES — 121 Chartres St., 561-5171; — Daisy Dukes is known for its seafood omelet and serves a wide variety of Cajun-spiced Louisiana favorites, burgers, po-boys and seafood, including boiled crawfish and oysters on the half-shell. Breakfast is served all day. No reservations. Open 24 hours daily. Credit cards. $$

FRENCH
FLAMING TORCH — 737 Octavia St., 895-0900; — …

Expanded listings at bestofneworleans.com

GOURMET TO GO
…

INDIAN
…

KYOTO — …, 891-3644 — Kyoto’s sushi chefs prepare rolls, sashimi and salads. “Box” sushi is a favorite, with more than 25 rolls.
Reservations …

Lori Beth DeGrusha and her father Johnny DeGrusha greet guests at Johnny’s Po-Boys (511 St. Louis St., 524-8129). PHOTO BY CHERYL GERBER

COUPONS — For your savings, go to bestofneworleans.com:
• Airline Auto Care & Detailing — 10% off
• Chinese Health Spa — $10 off 1-hr. massage
• Eco Urban — Save $40 first hour maintenance
• Freret Garden Center — 10% off
• Gattuso’s — Buy 1 burger, get 1 burger 50% off
• Glenn Michael Salons — $100 off Keratin treatment
• Hi-Ho Bar-B-Que — Buy any sandwich, get the fries free
• Nola Snow — 50 cents off any item
• Southern Refinishing, LLC — $25 off any regular reglazing
• Superior Aire, Inc. — Special price
• Suzette’s — See store for savings

OUT2EAT

LOUISIANA CONTEMPORARY
BOMBAY CLUB — 830 Conti St., 586-0972; — Mull the menu at this French Quarter hideaway while sipping a well-made martini. The duck duet pairs confit leg with pepper-seared breast with black currant reduction. Reservations recommended. Dinner daily, late-night Fri.-Sat. Credit cards. $$$
MILA — 817 Common St., 412-2580; — MiLA takes a fresh approach to Southern and New Orleans cooking, focusing on local produce and refined techniques. Try New Orleans barbecue lobster with lemon confit and fresh thyme. Reservations recommended. Lunch Mon.-Fri., dinner Mon.-Sat. Credit cards. $$$
RALPH’S ON THE PARK — 900 City Park Ave., 488-1000; — Popular dishes include baked oysters Ralph, turtle soup and the Niman Ranch New York strip. There also are brunch specials. Reservations recommended. Lunch Fri., dinner daily, brunch Sun. Credit cards. $$$
REDEMPTION — 3835 Iberville St., 309-35… — …

MEXICAN & SOUTHWESTERN
COUNTRY FLAME — 620 Iberville St., … — … Late-night Fri.-Sat., brunch Sun. Credit cards. $$
JUAN’S FLYING BURRITO — 2018 … — …
MIA’S — 1622 St. Charles Ave., 301-95… — … $$

… — Attiki features a range of Mediterranean cuisine including entrees of beef kebabs and chicken shawarma. Reservations recommended. Lunch, dinner and late-night daily. Credit cards. $$
PYRAMIDS CAFE — 3151 Calhoun St., 861-9602 — Diners will find authentic, healthy and fresh Mediterranean cuisine featuring such favorites as shawarma prepared on a rotisserie. No reservations. Lunch and dinner daily. Credit cards. $$

… — …69-0000; 4724 S. Carrollton Ave., 486-99…; 1000 S. Clearview Pkwy., Harahan, 736-1188; — po-boys … No reservations. Breakfast, lunch and dinner daily. Credit cards. $$
SIBERIA — 2227 St. Claude Ave., 265-8855 — This music club serves dishes like fish and chips, spicy hot wings, tacos and more. There are weekly specials and vegetarian and vegan options. No reservations. Dinner and late-night Mon.-Sat. Credit cards. $
… Boudreaux pizza … — Seafood platters, muffulettas and more than 15 types of po-boys are available. …
… — These cafes serve soups, salads, sandwiches, wraps and entrees. Shrimp Carnival features smoked sausage, shrimp, onion and peppers in roasted garlic cream sauce over pasta. No reservations. Lunch and dinner Mon.-Sat. Credit cards. $$
RAJUN CAJUN CAFE — 5209 W. Napoleon Ave., Metairie, 883-55… — …
ITALIAN PIE — Citywide; — …
… — …8032; — Disembark at Mark Twain’s for salads, po-boys and pies like the Italian pizza with salami, tomato, artichoke, sausage and basil. No reservations. Lunch Tue.-Sat., dinner Tue.-Sun. $
WIT’S INN — 141 N. Carrollton Ave., 486-1600 — This Mid-City bar and restaurant features pizzas, calzones, toasted subs, salads and appetizers for snacking. No reservations. $

SEAFOOD
COTE BRASSERIE — 700 … — … $$$
VILLAGE INN — 9201 Jefferson Hwy., 737-4610 — Check into Village Inn for seasonal boiled seafood or raw oysters. Other options include fried seafood platters, po-boys, pasta and pizza. Reservations accepted. Lunch and dinner Tue.-Sat. Credit cards. $$

SANDWICHES & PO-BOYS
PARKWAY BAKERY AND TAVERN — 538 N. Hagen Ave., 482-3047 — Parkway serves juicy roast beef po-boys, hot sausage po-boys, fried seafood and more. No reservations. Kitchen open from 11 a.m. to 10 p.m. Wed.-…

SOUL FOOD
BIG MOMMA’S CHICKEN AND WAFFLES — 5741 Crowder Blvd., 241-2548; — Big Momma’s serves hearty combinations like the six-piece, which includes a waffle and six fried wings served crispy or dipped in sauce. Breakfast is served all day. All items are cooked to order. No reservations. Breakfast Sat.-Sun., lunch daily, dinner Sun. Credit cards. $

STEAKHOUSE
CRESCENT CITY STEAKS — 1001 N. Broad St., 821-3271; — … $$$
RUTH’S CHRIS STEAK HOUSE — Harrah’s Hotel, 525 Fulton St., 587-70… — … Credit cards. $$$
… — …arie Road, 836-2007; — Vega’s mix of hot … $$$

VIETNAMESE
AUGUST MOON — 3635 Prytania … — …
… — … Carrollton Ave., 309-7283 — Noodles abound at this Mid-City eatery, which excels at vinegary chicken salad over shredded cabbage and bowls of steaming pho. No reservations. Lunch and dinner Mon.-Sat. Credit cards and checks. $$
PHO HOA RESTAURANT — 1308 Manhattan Blvd., 302-2094 — Pho Hoa Restaurant serves Vietnamese dishes including beef broth soups, vermicelli bowls, rice dishes and banh mi sandwiches. Appetizers include fried egg rolls, crab rangoons and rice paper spring rolls. No reservations. Breakfast, lunch and early dinner daily. Credit cards. $
PHO NOLA — 3320 Transcontinental Drive, Metairie, 941-7690; — … $

CLASSIFIEDS
483-3100 • Fax: 483-3153 • 3923 Bienville St., New Orleans, LA 70119 • classadv@gambitweekly.com • Cash, check or major credit card. Deadlines: …

PET ADOPTIONS

AUTOMOTIVE
DOMESTIC AUTOS
’10 FORD FOCUS SES — $10,995. 504-368-5640
’09 SCION XD — $13,995. 504-368-5640
’09 SUBARU IMPREZA i — $13,995. 504-368-5640
’10 HONDA FIT — Certified. $15,995. 504-368-5640
’10 HYUNDAI ACCENT — $10,995. 504-368-5640

SPORT UTILITY VEHICLES
’04 TOYOTA HIGHLANDER — $9,995. 504-368-5640
’07 FORD EXPLORER — 40K mi. $15,995. Call 504-368-5640
’09 SUBARU FORESTER — AWD. $18,995. Call 504-368-5640

A TOUCH OF ALOHA — Gift certificates for Father’s Day. La. Lic. #2983 • Member of BBB. Providing therapeutic massage/non-sexual.

NOTICE: Massage therapists are required to be licensed with the State of Louisiana and must include the license number in their ads.

A BODY BLISS MASSAGE — Jeannie, LMT #3783-01. Flexible appointments. Uptown studio or hotel out-calls. 504.894.8856 (Uptown)

BYWATER BODYWORKS — Swedish, deep tissue, therapeutic. Flex appts, in/out calls, OHP/student discounts, gift cert. $65/hr, $75/1.5 hr. LA Lic #1763. Mark, 259-7278

MERCHANDISE
APPLIANCES
18 CUBIC FT FRIDGE — Almond color. $50. Call 943-7699.
ELECTRIC RANGE — Hotpoint, almond color, 30 in., good working condition. $50. Call 943-7699.

LAB MIX — Toto deserves a loving home; best in a home w/no kids; housebroken & obeys commands; good watch dog. Contact Traci: tbkestler@cox.net, 504-975-5971
LAB MIX — 3 yr/M, neutered, housebroken, up to date on vaccines, playful & sweet. Brenda, 504-838-0736, bmigaud@cox.net
SHEPHERD MIX PUP — Merlin (approx. 15 wks), very friendly w/people & other animals. Housebroken. Contact Tracy: tscannatella@gmail.com, 504-874-0598
TIGGER — Very sweet male 2-yr-old golden brown tabby. Shots, tested, neutered. 504-462-1968

To Advertise in EMPLOYMENT Call (504) 483-3100

SUMMER SPECIAL — Introductory price: 1 hr $40. 5 min from Elmwood. Hours: 10 a.m.-7:30 p.m. Mon.-Sat. Alicia, LA Lic. #520, 16 yrs exp.
Non-Sexual call 504-317-4142 To Advertise in REAL ESTATE Call (504) 483-3100 LEGAL NOTICES ForeSite Services, Inc., located at 5809 Feldspar Way, Birmingham, Alabama 35244, and AT&T Mobility, in accordance with requirements of Section V.B. of the March 2005 Nationwide Programmatic Agreement (NPA) for Review of Effects on Historic Properties for Certain Undertakings Approved by the Federal Communications Commission (FCC), are requesting comment regarding potential impacts to historical or archaeological properties listed on, or eligible for listing on the National Register of Historic Places (NRHP), by installation of wireless telecommunications antennae on an office building rooftop, located at 8200 Hampson Street, New Orleans, Orleans Parish, Louisiana, at latitude 29° 56’ 40.71” north and longitude 90° 08’ 5.41” west. All comments should be submitted within 30 days of the publication of this notice referencing project FOR01P1123 and sent to the attention of Mr. Henry Fisher, Environmental Engineers, Inc., 11578 U.S. Highway 411, Odenville, AL 35120. Mr. Fisher may also be reached via email at towerinfo@envciv.com, via telephone at (205) 629-3868, or via facsimile at (877) 847-3060. BABY ITEMS Weekly Tails Double Stroller Mac Laren. Like New. $100. 832-1689 PETEY Kennel #A12480209 90 min. avail • Swedish & Deep Tissue MARKETPLACE Gambit’s weekly guide to Services, Events, Merchandise, Announcements, and more for as little as $60 Gorgeous 7 yr old male Siamese extremely sweet and loving ,neutered shots ,rescue 504 462-1968 Leather, sunroof $14,995 504-368-5640 LICENSED MASSAGE Real Estate Kirin ‘08 VW JETTA SE MIND, BODY, SPIRIT ASK ABOUT OUR SPECIAL RATES FOR Very cute sweet petite kitty, 3yrs old , only 6 lbs, white/black spayed,shots 504 462-1968 IMPORTED AUTOS Free Ads: Private party ads for merchandise for sale valued under $100 (price must be in ad) or ads for pets found/lost. No phone calls. Please fax or email. 
Itty Bitty Inky $12,995 504-368-5640 Online: When you place an ad in Gambit’s Classifieds it also appears on our website, 5 yr old gorgeous solid white Angora male cat super smart and sweet.Shots ,neuter ,rescue 504 462-1968 Free kittens to good home. We live in New Orleans please call Pricilla @ (601) 569-3661 ‘09 CHRYSLER PT CRUISER Mon.-Fri. 8:30 a.m.- 5:30 p.m. ANNOUNCEMENTS Elijah PETS LOST/FOUND PETS 20 YEAR OLD YORKIE Female. Deaf, losing eyesight. Blonde coat. Long skinny tail. Carondelet at Washington. Call Kena, 504-615-4943 REWARD- LOST (Mid City but could be anywhere by now),Ozzie, male, brown/black stripe (brindle), pit mix, sweet, call him & he will come, hold him &call me asap, Traci 504-975-5971. MAXINE Kennel #A12985280 Petey is a 7-month-old, neutered, Pit Bull mix. He loves nothing better than to eat, sleep and PLAY! Petey is housetrained, crate trained, knows basic commands and gets along well with other dogs and cats. To meet Petey or any of the other wonderful pets at the LA/SPCA, come to 1700 Mardi Gras Blvd. (Algiers), 10-4, Mon.Sat. & 12-4 Sun. or call 368-5191. Maxine is a 3-year-old, spayed, brown tabby DSH. She’s a playful gal who also enjoys soakingup the sun in the windowsill and a thoroughly enjoys being brushed. To meet Max. CLASSIFIEDS ANNOUNCEMENTS SERVICES LEGAL NOTICES HOME SERVICES Don’t Replace Your Tub REGLAZE IT. Chip/Spot Repair - Colors Available Clawfoot tubs for sale Southern Refinishing LLC Certified Fiberglass Technician Family Owned & Operated 504-348-1770 southernrefinishing.com AIR COND/HEATING GULF STATES AIR Service & Sales 3 TON A/C Condenser & Installed $1499 5 Year Warranty Service Calls only $49.50 Gulf States Air (504) 464-1267 ELECTRICAL/INSTALLATION DIX COMPANIES ELECTRICAL, Construction, Concrete, Fencing, Bobcat. Licensed & Insured. Mike Dix, 504307-7195 or Elmo Dix, 504-329-2726. mdixsr@gmail.com IN THE CHANCERY COURT OF ADAMS COUNTY, MISSISSIPPI DELTA SOD Certified Grade “A” Turf St. 
Augustine, Tifway Bermuda Centipede, Zoysia. WE BEAT ALL COMPETITORS! 504-733-0471 TREE MEDICS $50 OFF Trimming & Removal To Gambit Readers - Thru July Free estimates 504-488-9115 nolatrees.com PEST CONTROL TERMINIX DRIVERS/DELIVERY EMPLOYMENT VOLUNTEER DRIVERS: EMPLOYMENT Paid In Advance! Make $1,000 a Week mailing brochures from home! Guaranteed Income! FREE Supplies! No experience required. Start Immediately! Long term Local & out/back loads! Free medical, dental w/more benefits avail. CDL-A w/Hazmat, Tanker and TWIC. 1 yr. TT Exp. Req. 1-888-380-5516 RESTAURANT Offers Volunteer Opportunities. Make a difference in the lives of the terminally ill & their families. Services include: friendly visits to patients & their families, provide rest time to caretaker, bereavement & office assistance. School service hours avail. Call Volunteer Coordinator @ 504-818-2723 #3016 ADVERTISING/MARKETING. BEAUTY SALONS/SPAS OLD METAIRIE DAY SPA seeks exp Nail Tech for FT position. Under new ownership. High volume salon. Fax resume to 504-837-4792. Classifieds w w w.martinwine.com OperatiOns Manager/Chef Metairie BistrO-Deli FT, salaried with benefits. Day, evening & weekend availability req’d. Seeking highly motivated, self-directed, culinary trained individual. Assist Bistro-Deli Mgr with deli, kitchen, catering & dining room. Work with Chefs to ensure quality control. Work with Inventory Mgr to manage cost & pricing. Prior experience req’d. Submit resume to HR: hr@martinwine.com; fax (504) 894-6559; PO Box 19091 NOLA 70119 MISCELLANEOUS $$:// Employers... Gambit readers are who you need! Reach the most qualified applicants by placing your open positions in Gambit Classifieds. 
Jackson, Mississippi 39225-2546. (601) 355-2022; (601) 355-0012 fax.

IN RE: THE ESTATE OF CATHERINE ELLIS, DECEASED

To advertise or for more information, call your Classified Account Executive at (504) 483-3100 or email classadv@gambitweekly.com.

REAL ESTATE

COVINGTON — REAL ESTATE FOR SALE
109 BELLE TERRE BL — Charming cottage on huge lot. 3 br, 2 ba. Great schools. 9’ ceil. Den, sunrm, garage converted to huge game rm. Huge bkyd & storage galore. $250,000. Call Joan Soboloff, Avalar Realty, 985-264-1125. soboloff@aol.com.
16062 LAKE RAMSEY RD — Better than new! 3 br, 2 ba. High ceil & crown mouldings. Beaut wd flrs. Huge master ste. Close to town, on a lrg 100 x 300 lot! $179,000. Call Joan Soboloff, Avalar Realty, 985-264-1125. soboloff@aol.com.
19084 S. FITZMORRIS — Custom design, 5 br, 3.5 ba, pristine cond. Open flr plan, hdwd flrs. On 1 acre in River Heights. Lg fen yd, x-large gar, work area. More! $350,000. Joan Soboloff, Avalar Realty, 985-264-1125. soboloff@aol.com.
MAKE ME BEAUTIFUL AGAIN! — …
To Advertise in REAL ESTATE Call (504) 483-3100
ORGANIC MODERN! — Open, flex flrplan, 5 br, 4.5 ba. Master bath is a spa! Top of line dream kit. Media game rm. On golf course, end of cul-de-sac. $690,000. Joan Soboloff, Owner/Agent, 985-264-1125. soboloff@aol.com.
208 CHATEAU DE BRIE — Classy & custom built! From the architectural style roof, to hrdwd flrs, & everything in between! Granite counter tops, cherry-wood cabinets, dual vanities. Agent: Tontinette Puissegur, Latter & Blum, 985-630-8465.

MANDEVILLE
12 CHANDON CT — Waterfront home nr Causeway. 4 br, 3.5 ba. 2 story. Huge back deck, 2 fabulous firepl, kit has custom ss countertops, new ac & heat. $337,500. Call Joan Soboloff, Avalar Realty, 985-264-1125. soboloff@aol.com.

STUNNING SANCTUARY ELEGANCE — 1 br, 1 ba, nwly remod, furn. Qn bed, WiFi, cbl. Pkg. Util incl. Lndry fac. Sec cameras. $1200/mth. 1 mth min.
2325 Pasadena, Met. 504-491-1591. 90 Cardinal Lane. Upgrades Galore. 5305 / 7106 sq ft. Approx 1 acre lot . Reduced to $999,000. Call Marlene Zahn 504-236-8262 or Cindy Saia 504-577-5713. Latter & Blum Realtors, 985-246-3505. mzahn@ latterblum.com SLIDELL 120 PARADISE POINT New Orleans Area 10 Min to Downtown COMMERCIAL RENTALS METAIRIE 3020 VETERANS BLVD 3000 sg ft for lease off Causeway Blvd. 1 story in small strip mall. A/C, Heat and Water included in lease. Call Rick, 504-486-8951. Kirschman Realty, LLC. 740 N RAMPART 1350 sq ft, zone VCC-2, across from Armstrong Arch, corner of St Ann. $1750. Contact: 504-908-5210 THERAPIST OFFICE SPACE Victorian Building in Lower Garden District. Fridays Only. Call 670-2575 for information Call (504) 483-3100 New Orleans, Louisiana 3 Br, 2.5 Ba. Approx 1800 sq ft. Lg fenced yard. Small pet OK $1200/mo plus deposits.. 504-442-0618 OLD METAIRIE 1/2 OFF FIRST MONTH OLD METAIRIE SECRET 1 or 2 BR, Sparkling Pool, Bike Path, 12’ x 24’ Liv.Rm, Sep Din, King Master, No Pets, No Sect 8, $699 & $799 . 504-236-5776 METAIRIE TOWERS To Advertise in REAL ESTATE Le Fleur De Lis Realty, LLC 4608 FAIRFIELD ST. Rent $970/mo 1BR, 1-1/2 BA, pool. Elec & cable incld, prkg. 24 hr Concierge Service- 914-882-1212. Where dreams come home Outstanding view of majestic wildlife. 2 story, 4BR, 3BA, study, upstairs loft. Bathrooms & kitchen updated. Deck, patio & porch. Quiet cul-de-sac. $419,900. 985-640-8775. 133 ABERDEEN DRIVE Cross Gates Beauty 4 br 2.5 ba . Beautiful landscaping. Big kitchen, den, formal din rm, office. No flood zone. Home wrnty. Carole Woodward, Keller Williams Realty. (504) 578-7691. 201 N. SILVER MAPLE DR Ashton Oaks, 4 BR, 2.5 BA. Gameroom, Kit with granite, wood floors down. Big Master ste, hi ceil. Never flooded. Home wrnty. Carole Woodward, Keller Williams Realty. (504)578-7691. 
Ann de Montluzin Farmer The Historic House, Luxury Home and Second Home Specialist Gambit > bestofneworleans.com > JUne 21 > 2011 CORPORATE RENTALS 1103 ROYAL UNIT A 1 bedroom, 1 bath, cen a/h, Jacuzzi tub, w/d, water incl. Furnished or unfurnished. $1500/mo. Avail June 1. Call for appt, 504-952-3131. broker 58 REAL ESTATE FOR RENT Stunning custom home in Grande Maison. 4 BR, 3.5 BA. On cul-de-sac lot backing greenspace. Gourmet kit, keeping rm, butler’s pantry, bonus rm, basketball court & more. $499,000. 504-248-0945.. com/114561 20152 PALM BLVD. To Advertise in 147 E RUELLE Residential /Commercial Sales and Leasing, Appraisals. (504) 895-1493 (504) 430-8737 farmeran@gmail.com Licensed in Louisiana for 32 years, building on a real estate heritage since 1905 1125 Lynnette Dr. Metairie 3 BR, 2 BA, Covered Patio. $154,900 2156 Euclid St.,Terrytown Lot for Sale. 6534 sq. ft. $55,000 Jennifer Z. LeBlanc Realtor/Broker Affordable Housing Certified Native of New Orleans great 124 Four O’clock Ln, Waggaman 3 BR, 2 BA $89,000 Cell/Office: (504) 975-1757 jennifer@lefleurdelisrealty.com 24/7 online resident services pet free friendly spaces off street parking fully enclosed access gates Features vary by community. River Ridge Metairie Baton Rouge Kenner Slidell Mandeville Mandeville Jackson, MS Picayune, MS 3626 Upperline Upr dplx, 3 br, 1.5 ba, wd flrs, cei fans, furn kit, w/d, off st pkg. Nice area. $1200/mo. Louis, 874-3195. ESPLANADE RIDGE 1208 N. GAYOSO Upper 2 BR, LR, DR, 1 BA, KIT, wood/ ceramic flrs, high ceilings, cen a/h, w/d hkups, $1150/mo. 432-7955. 2919 Lepage 2b/1b, living, din, furn kit, w/d, cen a/h, wd flrs, high cel. garage $1100/ mo, no dogs. 985-231-8597 FRENCH QUARTER/ FAUBOURG MARIGNY 1103 ROYAL UNIT A BYWATER 1023 PIETY ST 2 br, 2 full ba, w/d hkps, cen a/h, c-fans, fncd yd, avail now. $875. 888239-6566 or mballier@yahoo.com 1 bedroom, 1 bath, cen a/h, Jacuzzi tub, w/d, water incl. Furnished or unfurnished. $1500/mo. Avail June 1. 
Call for appt, 504-952-3131. 1200 sq.ft 1/1 FURNISHED $1800/month includes gas & water. Newly renov’t 1850’s bldg. Call 985807-5398. Pics @ vrbo.com/142813 GENTILLY SINGLE FAMILY HM BYWATER STUDIOS 2 apts available, one mid-July and one mid-August. Located between Chartres and Royal, furnished including linens, kitchen ware, tv, cable, wi-fi, bottled water...the works - $850/ mo, 900 for short term, free laundry on premises. Call Gloria 504-948-0323 CBD 339 CARONDELET LUXURY 1 BDRM APTS Newly renovated 1850’s bldg on CBD st car line. 600-1000 sq ft. $1200-$2000/mo. 18 Units. Catalyst Development L.L.C. Owner/Agent. . 504-648-7899 CITY PARK/BAYOU ST. JOHN 4228 ORLEANS AVE. 1/2 Dble 2 Sty, 2Bd, 1Ba, A/C, Refig, Stove, W/D, Garage. $1300/mo, 1-yr Lse Sec Dep, No Pets.. Call 225-8026554/ email dicklea@cox.net DOWNTOWN 1327 FRENCHMAN ST. Living room, 1 BR, kitchen, tile bath. No pets. $500/mo. Call 504-494-0970. IRISH CHANNEL 1/2 BLOCK TO MAGAZINE 1 BR $695/mo. 2 BR, $900/mo (2 BR includes utilities), hardwood/carpet floors. . 504-202-0381, 738-2492. LAKEVIEW/LAKESHORE LOWER GARDEN DISTRICT St. Andrew - O/S, gtd pkng, pool, laun, $775/mo & up 2833 MAGAZINE 1BR/1BA Mod kit, o/s pkng, pool, coin op laun, $800/mo 891-2420 1 BLK TO AUDUBON PARK 6230 Annunciation, 3 BR, 2 BA, furn kit, cen a/h, w/d, off st prkg, $1950, lease. Call 621-7795 UPTOWN/ GARDEN DISTRICT 1, 2 & 3 BEDROOMS AVAILABLE CALL 899-RENT 1 Blk to St. Charles 1205 ST CHARLES/$1075 577 S CARROLLTON 1026 LYONS ST 1510 CARONDELET 1 block to St. Charles 8401 WILLOW ST 1711 2nd St. Lrg 1b/1b, dish washer, w/d onsite, cent AC, marble mantels, patio $850/mo 895-4726 or 261-7611 2 br, 1 ba, furn kit, w/d hkps, cen a/h, hdwd flrs, 10’ ceils, off st parking. $1200/mo. Call 9-5, M-F, ASC Real Estate 504-439-2481 2011 GEN PERSHING Beautiful Neighborhood! 4129 VENDOME PLACE Beautifully renovated spacious home. 3/4 br, 3 BA, kit w/ ss appl. w/d, cen a/h, lg yard, small gar. $2500/mo. $1500 dep. 
Reference on V.2rc React Components theming?

Hello, I have just discovered Onsen UI and I would like to know about customization and theming of the provided React components. I can't find a reference/tutorial on the v2 RC about those topics, so here are my questions:

- How far can we customize the existing components? I am looking for something more mutable than material-ui.
- Which CSS methodology does Onsen use? (I saw some BEM in the default style sheet.)
- Does it use CSS modules, inline styles, or a BEM namespace to avoid naming conflicts?
- What is the workflow for theming?
- Is it compatible with JSS?

Thank you :)

@dagatsoin We use BEM to avoid name conflicts. You can build your own theme by customizing the Stylus files in the css-components directory. Just copy one of the existing themes and compile it with Stylus. Or you can override the current styles, but that would create some superfluous rules. You should be able to run the Stylus compiler on these files.

For those who ask: here is a demo repo which works with JSS:

```
npm install -g monaca # Install Monaca CLI - Onsen UI toolkit
monaca signup # Free sign up
git clone git@github.com:dagatsoin/JSS-OnseUI-example.git
cd JSS-OnseUI-example
npm install
monaca preview
```

@dagatsoin Looks very nice. I will definitely take a look at JSS
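The answer above mentions that Onsen UI relies on BEM naming to avoid class-name conflicts. Purely as an illustration of that naming scheme (this helper is hypothetical and not part of Onsen UI), a BEM class name composes a block, an optional element, and an optional modifier:

```javascript
// Compose a BEM class name: block__element--modifier.
// Illustrative only -- Onsen UI's real class names are defined in its Stylus sources.
function bem(block, element, modifier) {
  let name = block;
  if (element) name += '__' + element;
  if (modifier) name += '--' + modifier;
  return name;
}

console.log(bem('toolbar', 'title'));         // toolbar__title
console.log(bem('button', null, 'material')); // button--material
```

Because every generated name is prefixed by its block, two components can each define an element called title without clashing.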
https://community.onsen.io/topic/412/reference-on-v-2rc-react-components-theming
print() and breakpoint()

In this lesson, you'll learn how to debug in Python using print() and breakpoint(). breakpoint() is an alias that works in Python 3.7 and newer for import pdb; pdb.set_trace(), which works in all Python versions. pdb is Python's built-in debugger and helps you step line by line through your code. If you want to learn more, check out Python Debugging With pdb.

In the video, you saw float("inf"), which is a concise way to find the largest number (infinity). You can also use -float("inf"), which finds the smallest possible number. Check out the Python Built-in Documentation to learn more.

00:00 Let's look at two different ways to debug in Python. First, let's look at print statements. Let's write a function max() that takes in our list and returns the max number without using the built-in max() function. Let's do this iteratively.

00:14 Let's define a max_num number to be set at 0, loop through the numbers in our list, and then check if num > max_num,

00:25 max_num = num, and then return num. You might have already seen the bug here, but let's say it's an interview. You're under pressure, you're trying to work fast, and you miss the bug.

00:36 What do you do? Well, let's write a test real quick. 1, -2, 4—so, we're just going to find the max of 1, -2, 4, and then run our code. 4. So, it worked there.

00:50 Let's say the interviewer gives you another test, or even better, you come up with better edge case tests to really test your code. Let's try all negative numbers. -4.

01:03 That doesn't seem right, because the max of -1, -2, and -4 is -1. So, something's wrong here. I actually can't spot it right away, so I'm actually going to debug. max_num—oops.

01:17 Put this into the for loop. Save it, run it. -1, -2, -4. Why did we return -4? Oh! return num, here.

01:30 So, that actually wasn't the bug I intended. There's another one, but you can see that with printing, we were able to see that num should really be max_num.

01:39 Let's save it, run it.
-1 0, -2 0, -4 0, and 0. Okay, so now we're returning 0 as the max_num.

01:46 Why is that happening? Well, max_num is never changing. We can verify that by printing a string here, like 'entered if statement', saving it, running it, and noticing it never enters the if statement.

02:00 That's because negative numbers are always less than zero. So, what do we need to do? Well, we need to change max_num to be instantiated to -float('inf') (negative float 'inf').

02:11 Or, you could import the math module and find the smallest number that way—I think there's some variable there, but I like float('inf'). It's a little bit more explicit. Save it, run it, and we got -1. So that worked using print statements.

02:25 I like using print statements in for loops because you see multiple lines of output with just one click. But now, let's try to use the built-in debugger in Python, which is pdb.

02:35 So instead of printing, just write breakpoint() and it's a function, so you have to call it. And then, what will happen is as it's executing the code, once it hits the breakpoint() it will stop immediately there, and then you can print out variables and do all sorts of stuff.

02:51 Running it, it says line 5, if num > max_num:. It's right about to execute this code, but it hasn't yet. You can print out num, max_num, or lst—stuff like that, and then you can type n to go to the next line—line 6.

03:09 It's about to execute line 6. max_num, num, n—now, it is about to execute the next iteration in our list. max_num is now -1 and num is -1.

03:27 Now, num changed to -2, because we're on the next element, and max_num is still -1, et cetera, like this.

03:35 And then, just press n a bunch, and we're at the return statement.
03:41 So, using breakpoint() is very useful when there are many variables, or if there is very complicated logic, or if you're really not sure—and so you just put the breakpoint() in multiple spots in the code and just try to print stuff out—instead of the print statement, where you sort of have to know what you're looking for.

03:58 One more note is the word breakpoint() only works in Python 3.7, I believe—and newer. For older versions, you have to type out explicitly import pdb, pdb.set_trace().

04:11 breakpoint() just calls this—it's just a wraparound, so you don't have to type—I don't know—20 characters? But this is what you would use in Python 2 or any version before Python 3.7.

04:22 I will link a Real Python article on pdb, which will have all the commands and go way in-depth. I just wanted to expose you all to this great debugging tool. Also, if you're doing a phone interview on an online code editor like HackerRank, they probably don't have pdb.

04:37 So, pdb is really only if you're doing it on your computer and they're screen-sharing, or if you're on an onsite and you're using one of their computers. But a lot of times, interviewers may ask, "How would you debug this code?" and you can mention pdb. In the next video, you will learn about f-strings, which are a new feature in Python 3.8 and newer.

@Piotr thank you for the clarification. I will add a comment to the f-Strings video

If you want to learn more, here is a Real Python walkthrough video on pdb debugging: Python Debugging With pdb

Also in the video, I use float("inf") which is a very concise way to find the largest number (infinity). You can also do -float("inf") which finds the smallest possible number. See the Python Built-in Documentation for more.

Random question. Why start with -inf when you could just start with the first element from the list? I couldn't think of an edge case where this would break but maybe there is one?
Piotr on April 25, 2020 Actually, f-strings were added in 3.6.
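For reference, here is the function built up over the course of the lesson, with both fixes applied (returning max_num rather than num, and initializing to -float("inf")). It is renamed find_max here so it doesn't shadow the built-in max():

```python
def find_max(lst):
    """Return the largest number in lst without using the built-in max()."""
    max_num = -float("inf")  # the second fix: start below any possible element
    for num in lst:
        if num > max_num:
            max_num = num
    return max_num  # the first fix: the original version returned num


print(find_max([-1, -2, -4]))  # -1, as at the end of the walkthrough
```

As the question above suggests, starting from the first element also works for non-empty lists; -float("inf") simply keeps the loop body uniform for every element.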
https://realpython.com/lessons/print-and-breakpoint/
Note: This is a brief instruction for EMAN2 compilation on Windows with Visual C++. You will get a general idea of how to get EMAN2 compiled on Windows. The actual versions of these dependent libraries may be different.

1. Use cmake to generate a Visual Studio 2008 solution:

Download the cmake Windows installer cmake-2.8.1-win32-x86.exe from.
- Run cmake, select the build solution for "visual c++ 9.0", fill the EMAN2 source directory into "Where is the source code", and specify a directory for "Where to build the binaries".
- Click the configure button, then OK. You will get an error dialog. Then follow the next item to resolve all dependency libraries.

2. The dependent libraries for EMAN2 building:

Note: I suppose all packages are installed on the C: drive. You can adjust it according to your actual path.

Python
Download python-2.6.5.msi from, run it to install.
- set PYTHON_INCLUDE_PATH to C:/Python26/include
- set PYTHON_LIBRARY to C:/Python26/libs/python26.lib

Numpy
Download numpy-1.4.1-win32-superpack-python2.6.exe from, run it to install.
- set NUMPY_INCLUDE_PATH to C:/Python26/Lib/site-packages/numpy/core/include

Boost.python
Download boost_1_43_0.zip and boost-jam-3.1.18-1-ntx86.zip from.
- Unzip boost-jam-3.1.18-1-ntx86.zip; you will get an executable file bjam.exe. Put this file in your PATH (make sure you can execute it in a console window).
- Unzip boost_1_43_0.zip.
- Open a Visual Studio 2008 Command Prompt, cd C:\boost_1_43_0
- run bootstrap.bat
- bjam --with-python toolset=msvc link=shared threading=multi runtime-link=shared
- set BOOST_INCLUDE_PATH to C:/boost_1_43_0/include
- set BOOST_LIBRARY to C:/boost_1_43_0/bin.v2/libs/python/build/msvc-9.0/release/threading-multi/boost_python-vc80-mt-1_43.lib

HDF5
Download hdf5-1.8.4-patch1-win32-vs2008-ivf101-enc.zip from.
- Unzip hdf5-1.8.4-patch1-win32-vs2008-ivf101-enc.zip to get a folder hdf5lib.
- set HDF5_INCLUDE_PATH to C:/hdf5lib/include
- set HDF5_LIBRARY to C:/hdf5lib/dll/hdf5dll.lib
FFTW3
Download the precompiled FFTW Windows DLLs fftw-3.2.2.pl1-dll32.zip from.
- Unzip fftw-3.2.2.pl1-dll32.zip.
- Open a Visual Studio 2008 Command Prompt, cd c:\fftw-3.2.2.pl1-dll32. Following the instruction from, run this command: lib.exe /machine:i386 /def:libfftw3f-3.def
- set FFTW3_INCLUDE_PATH to C:/fftw-3.2.2.pl1-dll32
- set FFTW3_LIBRARY to C:/fftw-3.2.2.pl1-dll32/libfftw3f-3.lib

GSL
Download gsl-1.13-windows-binaries.zip from
- Unzip gsl-1.13-windows-binaries.zip to C:\gsl.
- set GSL_INCLUDE_PATH to C:/gsl/include
- set GSL_CBLAS_INCLUDE_PATH to C:/gsl/include
- set GSL_LIBRARY to C:/gsl/lib/gsl.lib
- set GSL_CBLAS_LIBRARY to C:/gsl/lib/cblas.lib

libtiff
Download tiff-3.8.2-1-lib.zip from, click the zip link close to "Developer files".
- Unzip tiff-3.8.2-1-lib.zip. On the C: drive, create the directories include and lib.
- Open a Visual Studio 2008 Command Prompt, cd c:\tiff-3.8.2-1-lib\lib
- Run this command: lib.exe /machine:i386 /def:libtiff.def
- Copy the header files in C:\tiff-3.8.2-1-lib\include to C:\include, and copy C:\tiff-3.8.2-1-lib\libtiff.lib to C:\lib.
- set TIFF_INCLUDE_PATH to C:/include
- set TIFF_LIBRARY to C:/lib/libtiff.lib
Download the binary tiff-3.8.2-1-bin.zip from. Save the libtiff3.dll for runtime use.

libjpeg
Download jpeg-6b-4-lib.zip from, click the zip link close to "Developer files".
- Unzip jpeg-6b-4-lib.zip to C:\jpeg-6b-4-lib.
- Open a Visual Studio 2008 Command Prompt, cd c:\jpeg-6b-4-lib\lib
- Run this command: lib.exe /machine:i386 /def:jpeg.def
- Copy the header files in C:\jpeg-6b-4-lib\include to C:\include, and copy C:\jpeg-6b-4-lib\lib\jpeg.lib to C:\lib.
- set JPEG_INCLUDE_PATH to C:/include
- set JPEG_LIBRARY to C:/lib/jpeg.lib
Download the binary jpeg-6b-4-bin.zip from. Save the jpeg62.dll for runtime use.

libpng
Download libpng-1.2.37-lib.zip from, click the zip link close to "Developer files".
- Unzip libpng-1.2.37-lib.zip to C:\libpng-1.2.37-lib.
- Open a Visual Studio 2008 Command Prompt, cd c:\libpng-1.2.37-lib\lib
- Run this command: lib.exe /machine:i386 /def:libpng12.def
- Copy the header files in C:/libpng-1.2.37-lib/include to C:\include, and copy C:/libpng-1.2.37-lib/lib/libpng.lib to C:\lib.
- set PNG_INCLUDE_PATH to C:/include
- set PNG_LIBRARY to C:/lib/libpng.lib
Download the binary libpng-1.2.37-bin.zip from. Save the libpng12.dll for runtime use.

zlib
Download zlib-1.2.3-win32-vs2008.zip from.
- Unzip zlib-1.2.3-win32-vs2008.zip to C:\zlib.
- Copy the header files in C:\zlib\include to C:\include, and copy zlib1.lib and zlib1.dll from C:\zlib\dll to C:\lib.
- set ZLIB_LIBRARY to C:/lib/zlib1.lib

szlib
Download szip-2.1-win32-vs2008-enc.zip from.
- Unzip szip-2.1-win32-vs2008-enc.zip to C:\szip.
- Copy the header files in C:\szip\include to c:\include, and copy C:\szip\lib\szlib.lib to C:\lib.
- set SZLIB_LIBRARY to C:/lib/szlib.lib

OpenGL
Download glext.h from into C:/Program Files/Microsoft SDKs/Windows/v6.0A/Include/GL.
- set GLU_INCLUDE_PATH to C:/Program Files/Microsoft SDKs/Windows/v6.0A/Include
- set GLU_LIBRARY to C:/Program Files/Microsoft SDKs/Windows/v6.0A/Lib/GlU32.Lib
- set GL_INCLUDE_PATH to C:/Program Files/Microsoft SDKs/Windows/v6.0A/Include
- set GL_LIBRARY to C:/Program Files/Microsoft SDKs/Windows/v6.0A/Lib/OpenGL32.Lib

FTGL
Download ftgl-2.1.3-rc5.tar.gz from. Download freetype-2.3.5-1-bin.zip from.
- set ENABLE_FTGL
- set FTGL_INCLUDE_PATH to C:/ftgl-2.1.3~rc5/src
- set FTGL_LIBRARY to C:/Download/ftgl-2.1.3-rc5/ftgl-2.1.3~rc5/msvc/build/ftgl_static_D.lib
- In "Advance view" mode, set ENABLE_STATIC_FTGL
- set FREETYPE_INCLUDE_PATH to C:/freetype-2.3.5-1-bin/include
- set FREETYPE_LIBRARY to C:/freetype-2.3.5-1-bin/lib/freetype.lib

Berkeley DB
The Berkeley DB and the following pybsddb are optional, because the Python interpreter already includes them. However, the bsddb that comes with the Python interpreter is not as stable as the current release.
We STRONGLY suggest you compile and install the current release of Berkeley DB and pybsddb; we do so for all our binary releases of EMAN2. A simple way to check whether you have Python's default bsddb or a self-compiled one: the self-compiled one will appear as bsddb3 in Python. So if you can "import bsddb3" in Python, you have your own Berkeley DB installed. Otherwise, you can only do "import bsddb".

Download the Berkeley DB source package (Berkeley DB 5.3.21.NC.zip) from Oracle. Do NOT download the Windows installer (Berkeley DB 5.3.21.msi); I never got it to work with EMAN2.
- Unzip the file. Open Visual C++ .NET 2008, choose File -> Open -> Project/Solution.... In the build_windows directory, select Berkeley_DB.sln and click Open.
- The Visual Studio Conversion Wizard will open automatically. Click the Finish button.
- On the next screen, click the Close button.
- Choose Release from the drop-down menu on the tool bar.
- Choose the Win32 platform configuration from the drop-down menu on the tool bar.
- To build, right-click on the Berkeley_DB solution and select Build Solution.
- Look for the build results in the directory build_windows\Win32\Release.
- You need build_windows\Win32\Release\libdb53.lib for your pybsddb compilation.
- Copy the runtime library libdb53.dll from build_windows\Win32\Release to your EMAN2\lib directory. This is required.
- Copy the executable files from db_archive.exe to db_verify.exe to your EMAN2\bin directory. This is optional; you may need those commands for some direct bsddb operations if things go wrong.

pybsddb
pyBSDDB is the Python wrapper for Berkeley DB. With this package, you can use Berkeley DB from Python.

Download the latest release of pyBSDDB from
- Extract it to get a folder which contains the pybsddb3 source and setup files, for example \bsddb3-5.3.0 like I get.
- Create a folder db in \bsddb3-5.3.0.
- Create a folder include in \bsddb3-5.3.0\db, and copy the following header files from your Berkeley DB folder to \bsddb3-5.3.0\db\include: \dbinc\queue.h clib_port.h db.h db_config.h db_cxx.h db_int.h dbstl_common.h
- Create a folder bin in \bsddb3-5.3.0\db, and copy the executable files from db_archive to test_cutest and libdb*.dll to this \bsddb3-5.3.0\db\bin directory.
- Create a folder lib in \bsddb3-5.3.0\db, and copy your Berkeley DB's build_windows\Win32\Release\libdb53.lib to the \bsddb3-5.3.0\db\lib directory.
- This is the tricky part: copy \bsddb3-5.3.0\db\lib\libdb53.lib to a second file in the same folder named \bsddb3-5.3.0\db\lib\libdb53s.lib; it's needed later in the compilation.
- Open a Visual Studio 2008 Command Prompt, cd into the folder \bsddb3-5.3.0
- Using an editor, open setup2.py and comment out the following three lines:

```python
assert (fullverstr[0], fullverstr[2]) in db_ver_list, (
    "pybsddb untested with this Berkeley DB version", ver)
print 'Detected Berkeley DB version', ver, 'from db.h'
```

and this line:

```python
runtime_library_dirs = [ libdir ],
```

- Run the following commands:

```
python.exe setup.py bdist --formats=wininst
python.exe setup.py install
```

- Congratulations! Now you have your Berkeley DB and pyBSDDB installed. You can check their versions in Python to confirm you have successfully installed them:

```python
import bsddb3
bsddb3.__version__    # this is the pyBSDDB version
bsddb3.db.version()   # this is the Berkeley DB version
```

3. Actions before build:

- In the cmake configuration page, you need to change CMAKE_CONFIGURATION_TYPES to Release. If you did not configure the build type to Release in cmake: in the Visual Studio 2008 menu, choose Build -> Configuration Manager, set the "Active Solution Configuration" to Release, then check the INSTALL and test check boxes.
- Right-click "Solution 'EMAN2'" in the left "Solution Explorer" pane, then choose "Build Solution". If you see all 27 projects build succeeded, then you got a successful EMAN2 build.
4. After compilation

We need to copy the runtime libraries into the EMAN2/lib directory:
- boost_python-vc90-mt-1_43.dll
- hdf5.dll
- libfftw3f-3.dll
- szip.dll
- zlib1.dll
- jpeg62.dll, duplicate it to jpeg.dll
- libtiff3.dll, change the name to libtiff.dll
- libpng12.dll

5. Install runtime libraries

IPython
Download ipython-0.10.win32-setup.exe from. Click and install. Download pyreadline-1.5-win32-setup.exe from. Click and install. This package is for proper color support in the IPython shell.

PyOpenGL
To install PyOpenGL, we need to install setuptools first. Download setuptools-0.6c11.win32-py2.6.exe from. Click and install. Download PyOpenGL-3.0.0b2.zip from
- Unzip PyOpenGL-3.0.1.zip to C:\PyOpenGL-3.0.1.
- Open a console window, cd C:\PyOpenGL-3.0.1, then type "python setup.py install".

Download PyQt-Py2.6-gpl-4.7.3-2.exe from. Click and install.

matplotlib
Download matplotlib-0.99.3.win32-py2.6.exe from, click and install.

6. Set up environment variables
- set EMAN2DIR to C:\EMAN2
- set PATH to C:\EMAN2\bin;C:\EMAN2\lib
- set PYTHONPATH to C:\EMAN2\lib

After everything is installed, type e2.py. If you see an error message like "The procedure entry point _Z7qstrcmpRK10QByteArrayS1_ could not be located in the dynamic link library QtCore4.dll.", that means you may have other Qt4 libraries installed. Remove them; using just the Qt4 libraries that come with PyQt4 will solve this problem.
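The environment variables from section 6 can be set in a cmd.exe session with the commands below (a sketch only; use setx or the System Properties dialog to make them permanent, and note that appending %PATH% here preserves the existing search path):

```bat
set EMAN2DIR=C:\EMAN2
set PATH=%EMAN2DIR%\bin;%EMAN2DIR%\lib;%PATH%
set PYTHONPATH=%EMAN2DIR%\lib
```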
https://blake.bcm.edu/emanwiki/COMPILE_EMAN2_VS2008?action=SlideShow&amp;n=all
In the last blog post about coding style, we dissected what the state of the art was regarding coding style checking in Python. As we've seen, Flake8 is a wrapper around several tools and is extensible via plugins, meaning that you can add your own checks. I'm a heavy user of Flake8 and rely on a few plugins to extend the check coverage of common programming mistakes in Python. Here's the list of the ones I can't work without. As a bonus, you'll find at the end of this post a sample of my go-to tox.ini file.

flake8-import-order

The name is quite explicit: this extension checks the order of your import statements at the beginning of your files. By default, it uses a style that I enjoy, which looks like:

```python
import os
import sys

import requests
import yaml

import myproject
from myproject.utils import somemodule
```

The builtin modules are grouped as the first ones. Then comes a group for the third-party modules that are imported. Finally, the last group manages the modules of the current project. I find this way of organizing module imports quite clear and easy to read.

To make sure flake8-import-order knows the name of your project module, you need to specify it in tox.ini with the application-import-names option.

If you beg to differ, you can use any of the other styles that flake8-import-order offers by default by setting the import-order-style option. You can obviously provide your own style.

flake8-blind-except

The flake8-blind-except extension checks that no except statement is used without specifying an exception type. The following excerpt is, therefore, considered invalid:

```python
try:
    do_something()
except:
    pass
```

Using except without any exception type specified is considered bad practice as it might catch unwanted exceptions. It forces the developer to think about what kind of errors might happen and should really be caught. In the rare case any exception should be caught, it's still possible to use except Exception anyway.
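To make the contrast concrete, here is a sketch of what the plugin nudges you toward: catch only the failure you actually expect from the call (the function and its error handling are illustrative, not from the original post):

```python
def parse_port(value):
    """Return value as a TCP port number, or None if it is not one."""
    try:
        port = int(value)
    except ValueError:  # the only failure we expect from int() here
        return None
    return port if 0 < port < 65536 else None


print(parse_port("8080"))  # 8080
print(parse_port("oops"))  # None
```

A bare except here would also swallow KeyboardInterrupt or a typo-induced NameError, which is exactly the kind of surprise the plugin exists to prevent.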
flake8-builtins

The flake8-builtins plugin checks that there is no name collision between your code and the Python builtin variables. For example, this code would trigger an error:

```python
def first(list):
    return list[0]
```

As list is a builtin in Python (to create a list!), shadowing its definition by using list as the name of a parameter in a function signature would trigger a warning from flake8-builtins. While the code is valid, it's a bad habit to override Python builtin functions. It might lead to tricky errors; in the above example, if you ever need to call list(), you won't be able to.

flake8-logging-format

This module is handy as it is still slapping my fingers once in a while. When using the logging module, it prevents you from writing this kind of code:

```python
mylogger.info("Hello %s" % mystring)
```

While this works, it's suboptimal as it forces the string interpolation. If the logger is configured to print only messages with a logging level of warning or above, doing a string interpolation here is pointless. Therefore, one should instead write:

```python
mylogger.info("Hello %s", mystring)
```

Same goes if you use format to do any formatting.

Be aware that contrary to other flake8 modules, this one does not enable the check by default. You'll need to add enable-extensions=G in your tox.ini file.

flake8-docstrings

The flake8-docstrings module checks the content of your Python docstrings for compliance with PEP 257. This PEP is full of small details about formatting your docstrings the right way, which is something you wouldn't be able to do without such a tool. A simple example would be:

```python
class Foobar:
    """A foobar"""
```

While this seems valid, the period is missing at the end of the docstring. Trust me, especially if you are writing a library that is consumed by other developers, this is a must-have.

flake8-rst-docstrings

This extension is a good complement to flake8-docstrings: it checks that the content of your docstrings is valid RST. It's a no-brainer, so I'd install it without question.
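For illustration (this example is mine, not from the post), here is a docstring in the spirit of both plugins: PEP 257-shaped, ending with a period, and using valid RST field-list syntax:

```python
def mean(values):
    """Return the arithmetic mean of *values*.

    :param values: A non-empty sequence of numbers.
    :returns: The mean, as a float.
    """
    return sum(values) / len(values)


print(mean([1, 2, 3]))  # 2.0
```

The exact set of rules enforced depends on the plugin versions and the convention you configure, so treat this as a sketch rather than a guarantee of a clean run.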
Again, if your project exports a documented API that is built with Sphinx, this is a must-have.

My standard tox.ini

Here's the standard tox.ini excerpt that I use in most of my projects. You can copy paste it and use it:

```ini
[testenv:pep8]
deps =
    flake8
    flake8-import-order
    flake8-blind-except
    flake8-builtins
    flake8-docstrings
    flake8-rst-docstrings
    flake8-logging-format
commands = flake8

[flake8]
exclude = .tox
# If you need to ignore some error codes in the whole source code
# you can write them here
# ignore = D100,D101
show-source = true
enable-extensions=G
application-import-names = <myprojectname>
```

Before disabling an error code for your entire project, remember that you can force flake8 to ignore a particular instance of the error by adding the # noqa tag at the end of the line.

If you have any flake8 extension that you think is useful, please let me know in the comment section!
https://julien.danjou.info/the-best-flake8-extensions/
The Serial Peripheral Interface or SPI bus is a synchronous serial data link that operates in full duplex mode. In other words, data can be sent and received at the same time. Devices communicate in master/slave mode, where the master device initiates the data exchange with one or more slaves. Multiple slave devices are allowed with individual slave select lines. The SPI bus specifies four logic signals:

- SCLK: Serial Clock (a clock signal that is sent from the master).
- MOSI: Master Output, Slave Input (data sent from the master to the slave).
- MISO: Master Input, Slave Output (data sent from the slave to the master).
- SS: Slave Select (sent from the master, active on low signal). Often paired with the Chip Select (CS) line on an integrated circuit that supports SPI.

In order to enable the SPI bus on the Raspberry Pi, uncomment the entry spi_bcm2708 in the file /etc/modprobe.d/raspi-blacklist.conf. Note that you will need to have root privileges to edit the file.

Because the Raspberry Pi board does not come with an analog-to-digital converter, the SPI bus can be used to communicate with a peripheral analog-to-digital converter chip that is reading an analog signal. For this exercise, you will need the following hardware:

The data sheet of the TLC549CP shows 8 pins, as shown in Figure 7-1. Note that the SPI connections reside on the right side of the chip, while the connections for measuring the analog signal are on the left side of the chip.

Figure 7-1 Pinouts for TLC549CP Analog-to-Digital Converter Chip

In order to connect the TLC549CP chip to the Raspberry Pi, the SPI connections must be connected as shown in Table 7-2. The other four pins must be connected to provide the analog voltage to measure. In this example, we are using a potentiometer (in effect, a variable resistor) to vary the amount of voltage being sent into the Analog In pin. Table 7-3 shows how to connect the remaining pins on the TLC549CP chip.
Note that in order to complete our circuit and provide power to the potentiometer, the Vref+ must be also connected to a 3.3V input, and the Vref- must be connected to a ground. The chip does not provide voltage. You can test the voltage that is being sent through the potentiometer with a voltmeter to ensure that the circuit is working properly. The completed circuit on the breadboard is shown in Figure 7-2.

Figure 7-2 Breadboard with the Analog-to-Digital Converter Circuit

Once this is completed, we can use the source code in Example 7-1 to test out the ADC chip.

Example 7-1 Testing Out the SPI Bus Connection

```java
import jdk.dio.Device;
import jdk.dio.DeviceManager;
import jdk.dio.spibus.SPIDevice;
import jdk.dio.spibus.SPIDeviceConfig;

import java.io.IOException;
import java.nio.ByteBuffer;
import java.util.logging.Level;
import java.util.logging.Logger;

import javax.microedition.midlet.MIDlet;

public class SPIExample1 extends MIDlet {

    public void startApp() {
        System.out.println("Preparing to open SPI device...");
        SPIDeviceConfig config = new SPIDeviceConfig(0, 0,
                SPIDeviceConfig.CS_ACTIVE_LOW, 500000, 3, 8,
                Device.BIG_ENDIAN); // was Peripheral.BIG_ENDIAN, a class that is never imported
        try (SPIDevice slave = (SPIDevice) DeviceManager.open(config)) {
            System.out.println("SPI device opened.");
            for (int i = 1; i <= 200; i++) { // 200 samples, as described below
                ByteBuffer sndBuf = ByteBuffer.wrap(new byte[]{0x00});
                ByteBuffer rcvBuf = ByteBuffer.wrap(new byte[1]);
                slave.writeAndRead(sndBuf, rcvBuf);
                // mask with 0xFF so readings above 127 do not print as negative numbers
                System.out.println("Analog to digital conversion at " + i
                        + " is: " + (rcvBuf.get(0) & 0xFF));
                Thread.sleep(1000);
            }
        } catch (IOException ioe) {
            // handle exception
        } catch (InterruptedException ex) {
            Logger.getLogger(SPIExample1.class.getName()).log(Level.SEVERE, null, ex);
        }
    }

    public void pauseApp() {
    }

    public void destroyApp(boolean unconditional) {
    }
}
```

This program is very simple: it opens up a connection to the Raspberry Pi SPI bus using a SPIDeviceConfig and writes a byte to the peripheral device: the ADC chip.
Since there is no input connection being sent from the master (the Raspberry Pi) to the slave (the ADC chip), this data is effectively ignored. The SPI bus will, concurrently, attempt to retrieve a byte of data from the chip. This byte is passed along the MISO line, which returns an 8-bit number that represents the current voltage level. This process will be repeated 200 times, with a one-second delay between each sampling on the bus. The program output looks like the following. As the program is running, try turning the dial on the potentiometer to vary the voltage that is being sent into the chip. Here, we are turning the voltage from higher to lower, and the ADC chip is representing this with a steady drop in the 8-bit value that is returned. Starting emulator in execution mode ... About the open device Device opened... Value for 1 is: 145 Value for 2 is: 143 Value for 3 is: 120 Value for 4 is: 113 Value for 5 is: 90 Value for 6 is: 75 Value for 7 is: 63
http://docs.oracle.com/javame/8.0/me-dev-guide/spi.htm
CC-MAIN-2017-43
refinedweb
824
57.77
Configure Your ASP.NET Core 1.0 Application Configure Your ASP.NET Core 1.0 Application Though it's not terribly hard once you know what you're doing, ASP.NET Core doesn't really make it easy to get started. Learn how to configure your ASP.NET Core app. Join the DZone community and get the full member experience.Join For Free an appsettigns.json and environment variables to the ConfigurationBuilder. In development mode, it also adds ApplicationInsights settings. If you take a look into the appsettings.json, you'll only find a ApplicationInsights key and some logging specific settings (if you choose these doesn't seem like a really useful way to provide the application settings to our application. And it looks almost like what we would have done in the previous verisions of ASP.NET. But the new configuration is much better. In previous versions, we created a settings facade to encapsulate the settings, to not access the configuration directly, and to get typed settings. Now,. When this the IOptions<AppSettings>: public class HomeController : Controller { private readonly AppSettings _settings; public HomeController(IOptions<AppSettings> settings) { _settings = settings.Value; } public IActionResult Index() { ViewData["Message"] = _settings.ApplicationTitle; return View(); } We can even do this these environments need another configuration, another connection string, mail settings, Azure access keys, whatever... Let's go back to the Startup.cs to have a look at. If we are running in Staging mode, the second settings file will be loaded and the existing settings will be overridden by the new one. We just need to specify the settings we want to override. Setting the flag optional to true means that the settings file doesn't need to exist. With this approach, out how to start. Published at DZone with permission of Juergen Gutsch , DZone MVB. See the original article here. Opinions expressed by DZone contributors are their own. 
https://dzone.com/articles/configure-your-aspnet-core-10-application?fromrel=true
1 Expressions, Variables & Constants
Written by Matt Galloway

Welcome to the book! In this first chapter, you're going to learn a few basics. You'll learn how code works first. Then you'll learn about the tools you'll be using to write Swift code. You'll then start your adventure into Swift by learning some basics such as code comments, arithmetic operations, constants and variables. These are some of the fundamental building blocks of any language, and Swift is no different.

The power of a computer comes mostly from the way it does its work: by running thousands to millions of simple instructions every second. A complex computer program such as your operating system, macOS (yes, that's a computer program too!), consists of many millions of instructions, as do the apps you use to browse the web or chat with your friends. Instead of writing individual instructions, you write source code (or just code) in a specific programming language, which in your case will be Swift. This code is put through a computer program called a compiler, which converts the code into those small machine instructions the CPU knows how to execute. Each line of code you write will turn into many instructions — some lines could end up being tens of instructions! Under the hood, the computer represents all of its data (numbers, text, images) with numbers in base 2, otherwise known as binary.

Swift is an extremely modern language. It incorporates the strengths of many other languages while ironing out some of their weaknesses. In years to come, programmers may look back on Swift as being old and crusty, too. But for now, it continues to improve and evolve.

This has been a brief tour of computer hardware, number representation and code, and how they all work together to create a modern program. That was a lot to cover in one section! Now it's time to learn about the tools you'll use to write in Swift as you follow along with this book.

Playgrounds
The set of tools you use to write software is called a toolchain. The part of the toolchain into which you write your code is known as the Integrated Development Environment (IDE).
The most commonly used IDE for Swift is called Xcode, and that's what you'll be using. Xcode includes a handy document type called a playground, which allows you to quickly write and test code without building a complete app. You'll use playgrounds throughout the book to practice coding, so it's important to understand how they work. That's what you'll learn during the rest of this chapter.

Creating a playground
To get started with a playground, click File ▸ New ▸ Playground. Xcode asks you to choose a platform and a template for the new playground; the templates are simply different starting points of code. For this book, choose whichever platform you wish. You won't be writing any platform-specific code; instead, you'll be learning the core Swift language. Next, give the playground a name. For example, while you're working through Chapter 1, you may want to name your playground Chapter1. Click Create to create and save the playground. Xcode then presents you with the playground, like so:

Even blank playgrounds don't start empty but have some basic starter code to get you going. Don't worry — you'll soon learn what this code means.

Playgrounds overview
At first glance, a playground may look like a rather fancy text editor. Well, here's some news for you: It is essentially just that! The previous screenshot highlights the first and most important things to know about:
- Source editor: This is the area in which you'll write your Swift code. It's much like a text editor such as Notepad or TextEdit. You'll notice the use of what's known as a monospaced font, meaning all characters are the same width. This makes the code much easier to read and format.
- Results sidebar: This area shows the results of your code. You'll learn more about how code executes as you read through the book. The results sidebar will be the main place you'll look to confirm your code is working as expected.
- Execution control: This control lets you run the entire playground file or clear state so you can run it again. By default, playgrounds do not execute automatically.
You can change this setting to execute with every change by long pressing on it and selecting "Automatically Run".
- Activity viewer: This shows the status of the playground. In the screenshot, it shows that the playground has finished executing and is ready to handle more code in the source editor. When the playground is executing, this viewer will indicate this with a spinner.
- Left panel control: This toggles the left panel. In the left panel you'll find a tree of resources for the playground starting with the main playground file, then additional sources and resources. This allows you to build very complex playgrounds that split up the sources into multiple files. Keep this closed for now.
- Right panel control: This toggles the right panel. In here you'll find information about the source file that's open. You'll usually keep this closed.
- Bottom panel control: This toggles the bottom panel. In here you'll find output from the running playground. You'll see this later.

You can turn on line numbers on the left side of the source editor by clicking Xcode ▸ Preferences… ▸ Text Editing ▸ Line Numbers. Line numbers can be very useful when you want to refer to parts of your code.

Playgrounds execute the code in the source editor from top to bottom. The play button floats next to each line as you move the cursor over it and lets you run from the beginning of the file up to and including the line you click. To force a re-execution, you can click on the Execution control button twice: once to stop and clear it, and again to rerun. Once the playground execution is finished, Xcode updates the results sidebar to show the results of the corresponding line in the source editor. You'll see how to interpret the results of your code as you work through the examples in this book.

Note: Under certain conditions, you may find Xcode incorrectly disables line-based execution. In these cases, just use the execution control button to run the entire playground.
Getting started with Swift
Now that you know how computers work and know what this "playground" thing is, it's time to start writing some Swift! You may wish to follow along with your own playground. Simply create one and type in the code as you go! First up is something that helps you organize your code. Read on!

Code comments
The Swift compiler generates executable code from your source code, but it skips over anything it recognizes as a comment. Swift, like most other programming languages, allows you to document your code through the use of what are called comments. These allow you to write any text directly alongside your code that the compiler ignores. A comment starting with // runs to the end of the line, while a comment bookended by /* and */ can span multiple lines. Use comments to explain how and why your code does what it does.

Printing out
You can also write text to the debug area at the bottom of the playground window with the print command. You can hide or show the debug area using the button highlighted with the red box in the picture above. You can also click View ▸ Debug Area ▸ Show Debug Area.

Arithmetic operations
In this chapter, you'll perform operations on numbers; in later chapters, you'll see operations for types other than numbers.

Simple operations
All operations in Swift use a symbol known as the operator to denote the type of operation they perform. Consider the four arithmetic operations you learned in your early school days: addition, subtraction, multiplication and division. For these simple operations, Swift uses the following operators:
- Add: +
- Subtract: -
- Multiply: *
- Divide: /

These operators are used like so:
2 + 6
10 - 2
2 * 4
24 / 3
Each of these lines is an expression, meaning each has a value. In these cases, all four expressions have the same value: 8. Notice how the code looks similar to how you would write the operations out on pen and paper.

You can enter these straight into your playground. The line numbers in light blue are ones that have not yet run. To run your code, click on the light blue play button on the last line next to the cursor. Upon running, the playground removes the blue sidebar from the lines that have run; you can also see the values of these expressions in the right-hand bar, known as the results sidebar.

If you want, you can remove the whitespace surrounding the operator:
2+6
When you make this change, the blue sidebar reappears to indicate which lines need to be rerun.
You can run again by clicking on the blue arrow or by using the shortcut Shift-Enter.

Note: Shift-Enter runs all of the statements up to the current cursor and advances to the next line. This makes it easy to keep hitting Shift-Enter and run the whole playground step-by-step. It's a great shortcut to commit to muscle memory.

Removing the whitespace is an all-or-nothing affair; you can't mix styles. For example:
2+6 // OK
2 + 6 // OK
2 +6 // ERROR
2+ 6 // ERROR
The first error will be:
Consecutive statements on a line must be separated by ';'
And for the second error you'll see:
'+' is not a postfix unary operator
You don't need to understand these error messages at the moment. Just be aware that you must have whitespace on both sides of the operator or no whitespace on either side! It's often easier to read expressions when you have white space on either side.

Swift also has a remainder operation. For integers, the % operator returns what's left over after a division. For decimal numbers, the truncatingRemainder(dividingBy:) method performs the division (here, 28 divided by 10) and then truncates the result, chopping off any extra decimals, and returns the remainder of that. The result is identical to % when there are no decimals. Swift also has operations that shift the binary digits of an integer left or right: << and >>.

Order of operations works just as you learned in school, with parentheses taking precedence. Take this expression in Swift:
((8000 / (5 * 10)) - 32) >> (29 % 5)

Swift also has a vast range of math functions for you to use when necessary. You never know when you need to pull out some trigonometry, especially when you're a pro at Swift and writing those complex games!

Note: Not all of these functions are part of Swift. The operating system provides some. Don't remove the import statement at the top of your playground, or these functions won't be available. For example, sin(45 * Double.pi / 180) and cos(135 * Double.pi / 180) convert an angle from degrees to radians and then compute the sine and cosine respectively. Notice how both make use of Double.pi, which is a constant Swift provides us, ready-made with pi to as much precision as is possible by the computer. Neat!

Then there's this:
(2.0).squareRoot() // 1.414213562373095
This computes the square root of 2. Did you know that the sine of 45° equals 1 over the square root of 2? Try it out!

Not mentioning these would be a shame:
max(5, 10) // 10
min(-5, -10) // -10
These compute the maximum and minimum of two numbers respectively.
If you're particularly adventurous you can even combine these functions like so:
max((2.0).squareRoot(), Double.pi / 2) // 1.570796326794897

Naming data
At its simplest, computer programming is all about manipulating data. Remember, everything you see on your screen can be reduced to numbers that you send to the CPU. Sometimes you represent and work with this data as various types of numbers, but other times the data comes in more complex forms such as text, images and collections. In your Swift code, you can give each piece of data a name you can refer to later. The name carries with it a type annotation that denotes what sort of data the name refers to, such as text, numbers, or a date. You'll learn about some of the basic types in this chapter, and you'll encounter many other types throughout this book.

Constants
Take a look at this:
let number: Int = 10
This declares a constant named number. Its type is Int, and its value is set to 10. Once you've declared a constant, you can't change its value. For example, consider the following code:
number = 0
This code produces an error:
Cannot assign to value: 'number' is a 'let' constant
In Xcode, you would see the error represented this way:

Constants are useful for values that aren't going to change. For example, if you were modeling an airplane and needed to refer to the total number of seats installed, you could use a constant. You might even use a constant for something like a person's age. Even though their age will change as their birthday comes around, you might only be recording it at one particular instant.

When writing number literals in Swift, you can optionally use underscores to make larger numbers more human-readable. The quantity and placement of the underscores is up to you.

Variables
When a value needs to change, you declare a variable with the var keyword instead of let. Unlike a constant, a variable can be reassigned, and the results sidebar shows the current value each time you've declared or reassigned a variable.

For variables and constants, follow these rules to case your names properly:
- Start with a lowercase letter.
- If the name is made up of multiple words, join them together and start every word after the first with an uppercase letter.

Swift even lets you use full Unicode characters, including emoji, in names, but doing so is likely to bring you more pain than amusement.
Special characters like these probably make more sense in data that you store rather than in Swift code; you'll learn more about Unicode in Chapter 9, "Strings."

Increment and decrement
A common operation that you will need is to be able to increment or decrement a variable. In Swift, you achieve this with the compound assignment operators: += adds a value to a variable in place, and -= subtracts one. The related *= and /= operators multiply and divide in place.

Challenges
Before moving on, here are some challenges to test your knowledge. If you haven't already opened Xcode, now's the time to create a new playground. It is best if you try to solve them yourself, but solutions are available if you get stuck. These came with the download or are available at the printed book's source code link listed in the introduction.

Challenge 1: Variables
Declare a constant Int called myAge and set it equal to your age. Also declare an Int variable called dogs and set it equal to the number of dogs you own. Then imagine you bought a new puppy and increment the dogs variable by one.

Challenge 2: Make it compile
Given the following code:
age: Int = 16
print(age)
age = 30
print(age)
Modify the first line so that it compiles. Did you use var or let?

Challenge 3: Compute the answer
Consider the following code:
let x: Int = 46
let y: Int = 10
Work out what answer equals when you add the following lines of code:
// 1
let answer1: Int = (x * 100) + y
// 2
let answer2: Int = (x * 100) + (y * 100)
// 3
let answer3: Int = (x * 100) + (y / 10)

Challenge 4: Add parentheses
Add as many parentheses as you can to the following calculation, ensuring that they don't change the result of the calculation.
8 - 4 * 2 + 6 / 3 * 4

Challenge 5: Average rating
Declare three constants called rating1, rating2 and rating3 of type Double and assign each a value. Calculate the average of the three and store the result in a constant named averageRating.

Challenge 6: Electrical power
The power of an electrical appliance is calculated by multiplying the voltage by the current. Declare a constant named voltage of type Double and assign it a value. Then declare a constant called current of type Double and assign it a value.
Finally calculate the power of the electrical appliance you've just created, storing it in a constant called power of type Double.

Challenge 7: Electrical resistance
The resistance of such an appliance can then be calculated (in a long-winded way) as the power divided by the current squared. Calculate the resistance and store it in a constant called resistance of type Double.

Challenge 8: Random integer
You can create a random integer number by using the function arc4random(). This picks a number anywhere between 0 and 4294967295. You can use the modulo operator to truncate this random number to whatever range you want. Declare a constant randomNumber and assign it a random number generated with arc4random(). Then calculate a constant called diceRoll and use the random number you just found to create a random number between 1 and 6. (Hint: You will need to include the line import Foundation to get access to arc4random(). If this method of creating a random number seems primitive, you are right! There is an easier, more clear and expressive way to generate random numbers you will learn about in Chapter 4.)

Key points
- Computers, at their most fundamental level, perform simple mathematics.
- A programming language allows you to write code, which the compiler converts into instructions that the CPU can execute.
- Computers operate on numbers in base 2 form, otherwise known as binary.
- The IDE you use to write Swift code is named Xcode.
- By providing immediate feedback about how code is executing, playgrounds allow you to write and test Swift code quickly and efficiently.
- Code comments are denoted by a line starting with // or multiple lines bookended with /* and */. You use comments to document your code.
- You can use print to write text to the debug area.
- The arithmetic operators are: Add: +, Subtract: -, Multiply: *, Divide: /, Remainder: %.
- Swift provides many math functions, such as min(), max(), squareRoot(), sin() and cos(). You will learn many more throughout this book.
- You can increment and decrement a variable in place with the compound assignment operators: +=, -=, *=, /=.
https://www.raywenderlich.com/books/swift-apprentice/v6.0/chapters/1-expressions-variables-constants
// link:
// Runtime: 0.080s
// Tag: Recursive, Dp

/*
 * File:   main.cpp
 * Author: shahab
 *
 * Created on August 5, 2011, 8:58 PM
 */

#include <cstdlib>
#include <cstdio>
#include <string>
#include <iostream>

using namespace std;

long long fibo [51]; // 51 entries so the loop below can safely write fibo[50]
string bfs [50];

void generateFibo () {
    fibo [0] = 1;
    fibo [1] = 1;
    for ( int i = 2; i <= 50; i++ )
        fibo [i] = fibo [i - 1] + fibo [i - 2];
}

int bfsOfPosition (int n, long long pos) {
    if ( n < 31 )
        return bfs [n] [pos] - '0';
    if ( pos < fibo [n - 2] )
        return bfsOfPosition (n - 2, pos);
    else
        return bfsOfPosition (n - 1, pos - fibo [n - 2]);
}

void generateBfs () {
    bfs [0] = "0";
    bfs [1] = "1";
    for ( int i = 2; i < 31; i++ )
        bfs [i] = bfs [i - 2] + bfs [i - 1];
}

int main(int argc, char** argv) {
    generateFibo ();
    generateBfs ();
    int testCase;
    scanf ("%d", &testCase);
    while ( testCase-- ) {
        long long n, start, end;
        scanf ("%lld %lld %lld", &n, &start, &end);
        if ( n > 48 ) {
            n = n % 2 ? 47 : 48;
        }
        if ( n < 31 )
            for ( long long i = start; i <= end; i++ )
                printf ("%c", bfs [n] [i]);
        else
            for ( long long i = start; i <= end; i++ )
                printf ("%d", bfsOfPosition (n, i));
        printf ("\n");
    }
    return 0;
}

2 thoughts on "UVa : 12041 (BFS (Binary Fibonacci String))"

Could you explain how you have implemented DP here?

@Niteesh Mehra There are 2 gotchas here
1. bfs string position query wont exceed 2^31. (i, j < 2^31 - 1)
for example, 50th bfs = 48th bfs + 49th bfs
that means, the 1st portion of the 50th bfs is exactly the same as the 48th bfs
so, if the 48th bfs length exceeds 2^31 then you don't need to find the 50th bfs
you can tell the answer of any question on the 50th bfs using the 48th bfs
so its enough to create up to the 48th bfs ... but that will exceed time and space complexity
thats why, i created up to 30 bfs, which is feasible in terms of space & time complexity
now we need to think about the 31st to 48th bfs
2. query won't exceed 10000.
( j - i <= 10000 )
so i ran a recursive function here, which returns 0 / 1
bfsOfPosition (int n, long long pos)
that is, whether the nth bfs contains 0 or 1 in the pos-th position
if ( n < 31 ) i've already calculated it, so just return the result
otherwise we need to determine whether the pos-th character of the nth bfs comes from the (n-2)th bfs or the (n-1)th bfs
for example, we want to find the 250th character of the 14th bfs
14th bfs length = 377
13th bfs length = 233
12th bfs length = 144
as we know 14th bfs = 12th bfs + 13th bfs
so the 250th character of the 14th bfs is definitely not in the 12th bfs part
the 250th character of the 14th bfs is the same as -> the (250 - 144)th character of the 13th bfs
hope it helps 🙂
my solution is not a good dp .. but its similar to a dp solution 😛
https://tausiq.wordpress.com/2011/09/05/uva-12041-bfs-binary-fibonacci-string/
SSL socket fails with OSError: -30592
- peterson79 last edited by
Hello! I am trying to connect to a node.js server on a ZEIT instance, zeit.co. ZEIT handles the SSL through an nginx proxy (I think). The following code works when connecting to Google but not the ZEIT server. I get a weird error code:

Traceback (most recent call last):
File "main.py", line 146, in networkThread
File "main.py", line 54, in connectToService
OSError: -30592

Line 54 is the ss.connect(...) line. Here is my connection code:

def connectToService():
    global socketQueue
    global serialQueue
    print("Creating Socket...")
    ss = socket.socket()
    print("Wrapping Socket...")
    ss = ssl.wrap_socket(ss)
    print("Connecting...")
    # this next line works
    # ss.connect(socket.getaddrinfo('', 443)[0][-1])
    # this line does not
    ss.connect(socket.getaddrinfo('thismeshserver-fhprkttlyv.now.sh', 443)[0][-1])
    print("Get...")
    cl = ss.makefile()
    buf = b""
    buf += "GET / HTTP/1.0\r\n"
    buf += "\r\n"
    cl.write(buf)
    while 1:
        l = cl.readline()
        print(l)
        if l == b"\r\n":
            break
    ss.close()

Where do I look up the -30592 error? Any other insight appreciated! Thank you in advance.
- peterson79 last edited by I haven't found a solution, a standard connection seems to work through a dedicated server but not though ZEIT. Probably due to some proxy or load balancer would be my guess... I haven't found a way to lookup the error either... - Devang Sharma last edited by @peterson79, have you solved the error? I'm facing similar problem while using SSL.
https://forum.pycom.io/topic/1888/ssl-socket-fails-with-oserror-30592/4?lang=en-US
> Ok so I have been perusing the manual, tutorials, and previous sliding door answers and so far I have not found anything applicable. I am trying to build a sci-fi style double sliding door. I am new to Unity and to C# scripting and Unity's variation of it, using C# as the primary scripting language for my scripts. The goal is a door frame assembly with 2 door objects that open and close in opposite directions when the assembly is interacted with. I have built a simple physical camera drone for moving about my scene and testing. Using information from one previous answer I have a door object attached to the frame assembly using a custom joint configured to act like a slide track. The door will slide using physics along its track when I bump into it with my camera drone. The door object is one of 2 that will be in the assembly; I only have 1 set up for testing purposes so far. The door object is a child object of the door frame. I have a pair of box colliders configured on the root object as interactable, defining an interaction area along the side edges of the door frame. I do have a pair of animations created, one for opening the door, one for closing the door. At this point I am stuck. I have a generic script created and attached to the root of the entire door assembly object. I am unable to figure out how to identify the child door object or to trigger movement when the interaction zones of the assembly are clicked on. Also not sure if I should use animations to run the doors, or use physics by imparting a force to the door to move it from open to close and vice versa. Eventually I do plan on having a mechanic where a door panel can be broken or removed, as well as for disabling the door mechanism so that it has to be manually manipulated by the player. Not sure if that matters now or not for determining the best method for moving the door objects for open/close?

Why are you using physics for this?
A simple animation would give the same effect, and you would not have to worry about the physics engine causing issues.

Answer by Senuska · Aug 29, 2017 at 07:39 PM

@Krahazik Best way to go about this (at least for me) was to create an empty object called "Sliding Door" and add functionality to it. I would put the meshes (geometry, models, take your pick) as children of the empty object called "Sliding Door". Then I create a script called "SlidingDoorController" on it, and add an Animator Component. I create a new Animation Controller called "Sliding Door Controller", and drag it into "Sliding Door's" animator component. I then create an empty child of "Sliding Door" and call it "Door Trigger". I add the BoxCollider Component to it. I position it in the middle of the doors and resize the BoxCollider to extend to either side of the door a certain distance (I found 2 works fine for a normal sized door). Make sure to also check the "Is Trigger" checkbox here. I add a component to the "Door Trigger" object just called "DoorTrigger".

That is all of the setup for the scripting done, now onto the real meat of this! First let's make it so that we can see if a player has gotten close enough to the door to open it. We do that in "DoorTrigger".

public class DoorTrigger : MonoBehaviour
{
    public enum DoorEvents
    {
        None,
        PlayerDetected,
    };

    private DoorEvents events = DoorEvents.None;

    public void OnTriggerEnter(Collider other)
    {
        if (other.tag == "Player")
        {
            events = DoorEvents.PlayerDetected;
        }
    }

    public void OnTriggerExit(Collider other)
    {
        if (other.tag == "Player")
        {
            events = DoorEvents.None;
        }
    }

    public DoorEvents Events
    {
        get { return events; }
    }
}

As a caveat, whatever you use as your player in this particular case MUST have the "Player" tag for the events to work correctly. Now "DoorTrigger" is done. Next we go back to "SlidingDoorController" to finish off the scripting.
public class SlidingDoorController : MonoBehaviour
{
    public List<DoorTrigger> doorTriggers;

    private Animator doorAnimator;
    private bool openDoor = false;

    // Use this for initialization
    void Start()
    {
        doorAnimator = GetComponent<Animator>();
    }

    // Update is called once per frame
    void Update()
    {
        foreach (DoorTrigger d in doorTriggers)
        {
            if (d.Events == DoorTrigger.DoorEvents.PlayerDetected)
            {
                if (!openDoor)
                {
                    openDoor = true;
                }
                break;
            }
            else
            {
                if (openDoor)
                {
                    openDoor = false;
                }
            }
        }
        doorAnimator.SetBool("openDoor", openDoor);
    }
}

Back in the Inspector for the "Sliding Door", drag and drop the "Door Trigger" child object into the "Door Triggers" field. Now you may get a warning or an error that there is not a boolean in the Animator component called "openDoor"; well, we are going to fix that right now!

Select the Animation Controller we made earlier and open it in the Animator window (under Window -> Animator). Add a parameter of type bool and call it "openDoor". We are now going to set up our animation states while we are here. Go ahead and right-click in the grey grid area and select "Create State -> Empty". The first one we make will be our default state and will be colored yellow. Rename the state you just made "Idle" for now. Create another state and call it "Open", and create one more state called "Closing". Change the "Speed" of the "Closing" state to -1.

Now that our states are set up, we have to connect them. Select the "Idle" state and right-click it. Select "Make Transition" and click the "Open" state we made. Select the new arrow we created. Uncheck the "Has Exit Time" checkbox, and add a condition to it (press the "+" button in the conditions area). It should automatically add the "openDoor" boolean to the conditions. Create a transition from "Open" to "Closing". Add a condition to it. Note: Make sure you change "openDoor" to false in this transition. Finally add one more transition from the "Closing" state to the "Idle" state.
Now we need to make the actual open animation. Since sliding doors open and close in the same manner, we can just make one animation and play the opening animation in reverse to close the doors.

Tip: For this next step make sure you can see the sliding door in the "Scene" window.

So select our "Sliding Door" and open the "Animation" window (Window -> Animation). There should be a button in the animation window to create a new Animation clip. Click it and save the clip as "OpenSlidingDoor". Click the "Add Property" button and select the position of the right door panel. Repeat this for the left door panel. Move the red line in the "Animation" window to the end of the clip (to the right). Select one of your door panels and move it to where its "open" position is. Repeat this for the other panel. Press the "Play" button in the "Animation" window to preview the animation.

Go back to the "Animator" window to see a new state has been added. Select our "Open" state and select the "Motion" field. Select our newly made "OpenSlidingDoor" animation clip. Repeat this for the "Closing" state.

To test this, add a sphere to the scene. Change the sphere's tag to "Player" or add it if it is not there. Add a "Rigidbody" component to the sphere, uncheck the "Use Gravity" checkbox, and check the "Is Kinematic" checkbox. Hit play in the editor and move the sphere in the Scene preview window toward the door to see if the door opens and closes. From here you can expand functionality where the door is locked or broken.

Think I might have missed something, as the door is not responding. Can not tell if anything is happening. Added some print commands to the DoorTrigger script, and it looks like it is detecting and reacting to my player object. It appears the animation is not being triggered.
https://answers.unity.com/questions/1400345/how-to-make-a-double-sliding-door.html
Previous Part: Fixed Point | Next Part: Physics

Getting Started
For this tutorial, we'll be using the same template from the 2nd tutorial:

Text Output
The very first article of this series made you write a Hello World application. This was also the last application that produced any text output (see Part 7: Debugging). Try this piece of code:

#include "game.h"
#include "surface.h"
#include "template.h"
#include <cstdio> // printf

namespace Tmpl8 {

float x = 200.0f, y = 0.0f, vx = 0.1f, vy = 0.0f;

void Game::Init()
{
}

void Game::Shutdown()
{
}

static Sprite rotatingGun(new Surface("assets/aagun.tga"), 36);
static int frame = 0;

void Game::Tick(float deltaTime)
{
    screen->Clear(0);
    screen->Box(x, y, x + 5, y + 5, 0xffffff);
    if ((vy += 0.02f, y += vy) > ScreenHeight) vy = -vy;
    if (((x += vx) < 0) || (x >= ScreenWidth)) vx = -vx;
}

};

Now add the following line to the Game::Tick method:

printf("X-position: %f\nY-position: %f\n", x, y);

What Happens?
The output of the printf command is directed to the console window that is behind your game window. Left-click with your mouse inside the console window (to select some text) to pause the running application; this will block the main thread and let you examine the contents of the console window. Right-click to continue running the application.

Printf
The printf function used in the previous section has a number of interesting codes in it. There's the %f, which is replaced by the contents of variable x. Then, there is the \n character: this sequence emits a new line, so printing continues on the next line. The printf command is pretty useful, especially if you know how to use it. Find out about the details of the printf function here: printf – C++ Reference. After reading about the printf function, you will learn that you can replace %f by %i to print an integer. You can also write %.2f to print a float with 2 decimals.
Printing to a String
Sadly, all this formatting goodness is not something we can use with the Surface::Print function, which just expects a plain string to print. We can however print to a string to bypass this issue:

void Game::Tick(float deltaTime)
{
    screen->Clear(0);
    char text[128];
    sprintf(text, "X-position: %f\nY-position: %f\n", x, y);
    screen->Print(text, 2, 2, 0xffffff);
    screen->Box(x, y, x + 5, y + 5, 0xffffff);
    if ((vy += 0.02f, y += vy) > ScreenHeight) vy = -vy;
    if (((x += vx) < 0) || (x >= ScreenWidth)) vx = -vx;
}

This time, we get all the benefits of printf, and use this to print the current position of the box to the game window.

Printing to a File
Once you know how to get text to the screen using the printf function and to a string using the sprintf function, moving on to files is easy. Try the following:

void Game::Tick(float deltaTime)
{
    screen->Clear(0);
    FILE* f = fopen( "positions.txt", "a" );
    fprintf( f, "X-position: %f\nY-position: %f\n", x, y );
    fclose( f );
    screen->Box(x, y, x + 5, y + 5, 0xffffff);
    if ((vy += 0.02f, y += vy) > ScreenHeight) vy = -vy;
    if (((x += vx) < 0) || (x >= ScreenWidth)) vx = -vx;
}

The fopen call takes a file name and a mode: "r", "w" and "a" open a file for reading, writing and appending, respectively.

Retrieving Data from a File
Before we continue, you need to make a file for testing purposes. Using notepad, create a file named settings.txt and place it in the directory where you unzipped TheTemplate.zip file. Put the following info in it:
xpos = 100
Now, in your Game::Init function, add the following code:

void Game::Init()
{
    FILE* f = fopen("settings.txt", "r");
    fscanf(f, "xpos = %f", &x);
    fclose(f);
}

When you run the application, the box now starts at x-position 100, read straight from the settings file. Writing binary data works in much the same way:

void Game::Init()
{
    FILE* f = fopen("bindat.bin", "wb");
    fwrite(&x, 4, 1, f);
    fwrite(&y, 4, 1, f);
    fclose(f);
}

This code will create a file for binary writing. Then, it writes two blocks of 4 bytes (the size of a float). The first fwrite call writes the value of x to the file, and the second one writes the value of y to the file.
Note that you need to know how large a variable is to be able to store it to a file. When in doubt, use the sizeof operator:

fwrite(&i, sizeof(i), 1, f);

Compared to just writing 4, this keeps working even when the size of the variable changes. A nice application of binary output is saving a screenshot as a Targa (.tga) image, which can be loaded into your favourite image editing application. A .tga file consists of a header and the actual image data. For the image data, you can simply save lines of pixels. For the header, you need the following data:

struct TGAHeader
{
    unsigned char ID, colmapt, imagetype; // ID length, color map type, image type
    unsigned char colmapspec[5];          // color map specification
    unsigned short xorigin, yorigin;      // image origin
    unsigned short width, height;         // image size in pixels
    unsigned char bpp, imagedesc;         // bits per pixel, image descriptor
};

The size of the header is (and absolutely must be) precisely 18 bytes. You can verify this again using sizeof(TGAHeader). Finally, you can set individual fields of this header:

header.ID = 0;

- Find one of your previous assignments and add a Game::ScreenShot method to it.
- Add code to your application that saves a screenshot whenever you press a specific key on the keyboard.

Optional: Use sprintf to produce file names like screenshot001.tga, where each new screenshot increments the last three digits of the filename.

Previous Part: Fixed Point | Next Part: Physics
https://www.3dgep.com/cpp-fast-track-15-fileio/
XULRunner Developer Preview Release Available 122 TeachingMachines writes "A stable developer preview release of XULRunner 1.8.0.1 is now available. Based on the Firefox 1.5.0.1 codebase, it is available for Windows, Mac OS X, and Linux. From the Mozilla Developer Center (beta). Help with programming with XUL and its related technologies can be found at XULPlanet. Beginning programmers will benefit especially from the XUL Tutorial. Also check out the XUL Element Reference to get an idea of what's available." A couple of other resources are worth mentioning. First, there is the XUL Programmer's Reference Manual, which covers interface elements for XUL version 1.0. "Rapid Application Development with Mozilla" is available for download at Bruce Perens' Open Source Series page. If you get the book, make sure to check out the errata. Unfortunately, the author Nigel McFarlane has passed away, so this is likely the final version. One final reference, "Creating Applications with Mozilla," is available here. For those individuals who are looking for an extremely powerful application framework that is relatively easy to use, Mozilla is definitely worth a look. Also worth mentioning (Score:3, Informative) XuulRunner (Score:1) Good work - a true cross-platform API with full interface features and themability. Windows, Linux, Mac OS X, Solaris. I should have a look at it someday; it might be an interesting platform for writing moderately complex GUI applications. How is the development tool environment, though? Whilst I like a terminal and build scripts, not everyone does, and mass uptake would be restricted greatly if there was no Eclipse/etc plugin. As a platform, or part Re:XuulRunner (Score:2) You don't build it. It is all Javascript and XML. So whatever tools you would use for that. What's an Eclipse plugin going to give you beyond syntax highlighting? What more would you need? I hardly se Re:XuulRunner (Score:2) And this is the problem.
With XUL, you're entirely limited to whatever the Mozilla developers found necessary to build a web browser/mail client. Don't get me wrong, XUL has a hell of a lot more rich GUI power than HTML, but it doesn't quite have the robustness of other application development platforms. If I were the XULRunner team, I would be looking to ditch Javascript Re:XuulRunner (Score:2) I don't think that's ever a direct problem. Javascript could certainly be a nicer language to work in, but it never prevents you from reaching your goal. It's not the scripting language that constrains you, but the underlying components. However, it is actually possible to write your own components, even in C++, and distribute them along with the rest of your code. Obviously you lose some of the advantage of XUL at this point, but if it's just one small feature you're missing you can code your own. And of Mozilla & Eclipse (Score:2) XULRunner future. (Score:4, Informative) XUL is a very good RAD tool. Much, much, much better than HTML, because with HTML you should care about styles and other miscellaneous problems, and because with HTML you badly emulate OS widgets, while with XUL you use OS widgets. Also, good bonuses are that it is easy to code with Javascript, and the integration with XML (indeed!). Re:XULRunner future. (Score:5, Interesting) I would classify XUL as a good GUI development tool. Its rapidity is quickly lost if one delves into any XPCOM backends. However, for simple, client-side, frontend GUI operations, XUL is a very, very useful tool. It gives you the ability of DHTML in a way that isn't a hack. Here's a good example [hevanet.com] of XUL's layout capabilities. In terms of pure layout, there's not really that much here that is different from HTML. However, when you get dynamic [faser.net], XUL really shines. People go on about AJAX, but XUL offers a huge amount of potential. Personally, I feel XUL's only Achilles heel is Javascript.
That language needs a serious overhaul if anyone is to be able to use it without all that hassle. As a GUI application development tool, I would expect XUL and XAML to replace older methods such as GTK and *shudder* Windows "Visual" code. It's faster, cleaner, makes more sense, and you don't need 300 lines of code plus libraries to draw a hello world window. Re:XULRunner future. (Score:2) You're in luck :-) Python bindings are coming to XUL. [mozillazine.org] Mozilla is working with ECMA on that too. See Brendan's comments about ECMAScript 4/JavaScript 2. Re:XULRunner future. (Score:1) Be honest with yourself here: how much of your experience with javascript involves those craptacular "this is how you do a rollover" tutorials? There are a lot of examples of people trying to use it like a real language, and when treated that way it is much nicer to work with. Re:XULRunner future. (Score:2) Re:XULRunner future. (Score:2) The MAB is the only non-Mozilla.org XUL-based tool anyone seems to mention. Are there many others? I also think it has great potential from what I've seen, but the lack of applications after all of this time seems odd... Re:XULRunner future. (Score:2) How so? There is nothing inherent about XUL that makes dynamic document generation easier. In fact, it's the same, using the DOM. Comparing AJAX to XUL is comparing apples to oranges. Care to elaborate? What is it about javascr Re:XULRunner future. (Score:2) XUL, unlike DHTML, is built from the ground up to be dynamic. Working with XUL button tags is a lot handier than essentially hacking HTML input tags to be buttons. Comparing AJAX to XUL is comparing apples to oranges. Because? XUL is just AJAX with DHTML replaced by something saner. Care to elaborate? What is it about javascript that is so bad? What hassle? I find it to be one of the Re:XULRunner future. (Score:2) Again, care to quantify this? If I want a button in HTML I use the HTML button tag.
To handle the click event you either describe onclick as an attribute or add an onclick handler via the DOM. What's the difference? The only things I can see making dynamic document generation easier are the stack tag (although only slightly, HTML has layers) Re:XULRunner future. (Score:2) Re:XULRunner future. (Score:2) [mozilla.org] Re:XULRunner future. (Score:2) Well stated. I wish people who keep trying to create these faux CSS-based widgets would understand this. Compared to something like XUL, they look and behave like cheap hacks. Re:XULRunner future. (Score:2) For me, in order for something to qualify as a tool for rapid development, it must have (at least) the following: Does anyone know how these XUL tools stack up along these dimen Oblig. Ghostbusters (Score:5, Funny) Or, as the Wikipedia points out: "There is no data, only XUL" Re:Oblig. Ghostbusters (Score:3, Interesting) This is what lost the browser wars (Score:5, Interesting) Back when the browser wars were in full swing and the Netscape source was just released, Netscape was at a huge disadvantage - they were fighting against Internet Explorer, which was bundled on every new desktop. However, they had an ace card - they were the browser of choice for ISPs. Back when everybody was on dial-up, the usual way to get on the Internet was to get disks or CDs from ISPs and run their installer. Typically, that also included Netscape, which was subsequently set to be the default. So while Microsoft had a browser installed by default on every desktop, Netscape was installed over the top of that for most people who signed up for dial-up service. Then the Netscape source was released, and Netscape 5 was overdue. There was missing code, so it didn't build.
Instead of filling in the bits that were missing, fixing the most prominent bugs, and releasing Netscape 5, practically everything was thrown away and they started again - to build a new platform based on Javascript and XML (and, oh yeah, with a browser I guess). XULRunner is the culmination of that process. However, this came at a cost. Throwing everything away and starting again set back the development by a huge amount - it took over four years to go from the public release of Netscape's code to the first release of Mozilla. In the meantime, Microsoft released three new versions of Internet Explorer. So what choice did ISPs have? Ship the outdated Netscape 4 to all their new customers? Ship a buggy prerelease Mozilla build to all their new customers? Pay Opera for every new customer? Or just bundle Internet Explorer? Of course they did the latter. The Mozilla developers threw away the only thing that could stop Internet Explorer from winning the browser wars... to build XULRunner. So yeah, it's a nice platform, and I'm sure I'll use it in the future. I'm already building one Firefox extension with the same tech. It's decent enough. But when I think of the stranglehold Internet Explorer has had on the market for so many years, and the pain that has caused me as a web developer, I can't help but think that the price was way, way too high for what is essentially just another cross-platform toolkit. Good job on building a GUI toolkit, Mozilla guys! I just wish you'd focused on building a web browser instead. Re:This is what lost the browser wars (Score:2) OK, you correctly point out the fact that tactics matter more than strategy when we live and die by the quarterly report. 
Now that the strategic investments in good infrastructure pay dividends, is it really necessary to force-feed any closed, OS-centric solutio Re:This is what lost the browser wars (Score:2) That would almost be as dramatic as you were hoping, if only "tactics" and "strategy" weren't synonyms [reference.com]. Seriously though, I don't see why you're trying to make this about open and closed source software. Mozilla basically said to Microsoft: "Here, do what you want with our market share while we go do something else for a while." If Redhat stopped all development on their OS for a Re:This is what lost the browser wars (Score:2) Re:This is what lost the browser wars (Score:4, Interesting) No, it's not a nice platform. I can assume you've noticed this based on your "it's decent enough" comment. It's a horrible platform. First off, JavaScript. It doesn't matter if you can use XUL from other languages, because parts of it are implemented in JavaScript. JavaScript is a horrible, horrible language. I recently discovered that JavaScript supports closures - which helped explain the horrible memory leaks I was experiencing with JavaScript. Stuff that was supposed to leave scope didn't, because it wound up in a closure. Lisp/Scheme developers know what a closure is. JavaScript developers probably don't. (Plus, closures that contain DOM objects leak memory. This is "WONTFIX" because IE does it too.) Unfortunately for me, I've never figured out exactly WHAT the closure takes with it. I know of no way to check the current environment to find out what your function accidentally wound up keeping. However, it does explain the "delete" keyword that had always confused me. Why do you need delete in a GCed language? Well, because without it, you can wind up with pointless variables that were supposed to be local that are accidentally kept in a closure! Next we have XUL and CSS. XUL isn't native - I think everyone's noticed that by now.
Firefox manages to goof up the scrollbars, so they don't match my theme. They also goof up form controls, so that they don't match my theme. Under Windows, certain controls don't act like Windows controls. I'm told the situation is even worse under OS X, but I've yet to convince management that I need an OS X machine to test on. Like you said, XUL was a horrible, horrible, horrible mistake. It should never have been made. They should have released Netscape 5 and worked on making a usable browser. Firefox is an interesting tech demo, but it's not something I'd want to support indefinitely. Quite literally the only reason people use Firefox at all over Opera is because it's open source. Were Firefox closed source, Opera would be the clear victor. Yes, I know: XUL is supposed to make cross-platform support easier. Instead it ensures that Firefox just feels wrong on all platforms. Because of XUL, the entire core browser is a giant mess of CSS, JavaScript, XML, and XPCOM. XUL is an interesting concept, but it just fails in implementation. The insane hacks required to make XUL appear to be native are proof enough that it just isn't a smart design. People often joke that Emacs is practically an operating system. With the release of XULRunner, Firefox has proved that it literally IS a complete operating environment. It contains all the libraries you need to write full applications. In a sane world, that would be called "bloat". Re:This is what lost the browser wars (Score:5, Informative) Decent enough for browser plugins. Decent enough if you are building an application that is very closely related to browsing. I wouldn't choose it for building general-purpose applications, no. Well, no, you might expect it to leave scope if you assumed Javascript worked like some other language that doesn't support closures, but that's not the way Javascript works, so it's not supposed to leave scope. Richard Cornford wrote a decent explanation [jibbering.com].
I think that's an implementation detail rather than anything intrinsic to XUL itself. There was an experimental "KaXUL" to implement XUL within KDE and Konqueror a few years back, but I don't think anything came of it. As far as I know, there's nothing stopping a XUL implementation from rendering XUL applications with native widgets, it's just the people who built the only functional implementation chose not to. Re:This is what lost the browser wars (Score:5, Insightful) I suspect that I am feeding a troll, but here goes... Your comment is much akin to the following: Bottom line: if you can't be bothered to learn the grammar of the language you are using -- hell, if you don't find learning new languages and grammatical concepts positively exciting -- perhaps software development is really not for you. You might want to look into becoming a manager. Re:This is what lost the browser wars (Score:2) If you look at XUL from a competitive standpoint, it maps most closely against Java/Swing, where one generally does not have to be overly concerned with object lifecycles and 'native objects' and memory leaks (not that you can completely ignore it, but in general there's less "gotchas" than there is with JS.) I'm a big fan of Javascript as a language, but can totally understand why someone from an RAD app-programming POV would see it as a drawback. Bottom line: Re:This is what lost the browser wars (Score:2) I personally would not call Swing RAD. In my experience, it's one of the slowest ways to Re:This is what lost the browser wars (Score:2) Not to mention VB6 [which lacks the functional elements but was plauged with the same reference-counting issues.] Of course, you didn't add "VB is a horrible language" to your list because, well, people have actually used it and agree totally. I'm not a Swing programmer, so I can't comment on efficiency. 
Just competitively, if someone is creating something for "cross-platform", "network-delivered" Re:This is what lost the browser wars (Score:1, Interesting) Let's start off with "var". WTF does "var" do?! Well, it makes future variable references local to that execution context. Try this: function foo() { i = 1; }; function bar() { i = 2; foo(); return i; }; What does bar() evaluate to? Re:This is what lost the browser wars (Score:2) There is, in a way. Everything you say in the first half of your comment can be mitigated by setting the javascript.options.strict Gecko option to true, which throws up warnings when you make those kinds of mistakes. Unfortunately, it can't just stop processing when it hits bad code like this because absolutely loads of pages out there make these kinds of mistakes in ways that don't actually cause problems. Re:This is what lost the browser wars (Score:1) You might want to try JSLint [jslint.com](documentation [jslint.com]) by Douglas Crockford. It checks that variables are defined before use, and also checks for other common mistakes (you can read about them in the documentation). Re:This is what lost the browser wars (Score:2) Dude, I figured out what "var" meant back in 1995/1996. Coincidentally, that was around the time I first learned JavaScript. Where have you been? You might as well write about the bugs Re:This is what lost the browser wars (Score:2) The major problem with JavaScript is that it's basically a Lisp/Scheme-like language with C-like syntax, making it prone to human misunderstanding. Add the fact that it was developed in what could only be described as "a rush", stopping only briefly to be quickly standardised (probably the only large step forward in the language's history), and you can see what's wrong with it. 
It was an admirable attempt, and it can be made to work in great, but the fact is that was just thrown together too hastily, and no Re:This is what lost the browser wars (Score:2) Re:This is what lost the browser wars (Score:2) I know C. I know JavaScript. You know what helped me avoid human misunderstanding? Learning the languages. Everything else is personal bias and hot air. Re:This is what lost the browser wars (Score:2) I'm not so convinced. I realise that one needs to learn any language to be able to use it, but looking like one while acting like another is just plain misleading. Re:This is what lost the browser wars (Score:2) Come to think of it, there are more things different between C and JavaScript than features in common. But if it makes you feel any better, just think of "var" as JavaScript's version of C's typed va Re:This is what lost the browser wars (Score:2) I'm not sure what post you're replying to, but somewhere in here I was complaining that I hated C. It's the fact that JS has C-like (by which I mean Java-like, C#-like, C++-like, etc.) syntax. I just feel there's implications of semantics (or lack of semantics) that just don't hold from using a similar syntax. My personal preferred languages are ML, Python, or C#/Java (yeah, sorry about that last "double trouble" pair, but I got used to them and they're very good for large-scale stuff). I'm not sure I consi Re:This is what lost the browser wars (Score:2) Sorry but that sounds like sour grapes to me from an Opera supporter. Most firefox users could care less about open source. They use firefox because its Free, its more secure than IE, and it has a fantastic extension system. You'll note that Mozilla only enjoyed limited success and that things only really took off when Firefox was developed. 
Bloat, m Re:This is what lost the browser wars (Score:2) Re:This is what lost the browser wars (Score:1) Although XUL looks like a flawn design, it works decently enough to run Firefox despite the hacks you describe... And it has also a great advantage: plugins written for Firefox are platform-less. You would not care about this if you use windows or linux x86, but I let you imagine what a relief it is f Re:This is what lost the browser wars (Score:2) I'm sorry, but you have no clue what you're talking about. This bug was fixed on the Gecko trunk back in September 2005 (as in, 4 months ago). The fix will be in Gecko 1.9 (and Firefox 3). It _might_ end up in Gecko 1.8.1 (and Firefox 2) if the remaining regressions are resolved fast enough. On a more general level, "IE sucks and has this bug" is not necessarily a reason not to fix the bug. Re:This is what lost the browser wars (Score:2, Interesting) Slow down and take a deap breath. There are no horrible languages, only horrible developers (or implementations). I happen to find ECMAScript to be a very powerful language. It includes dynamic prototypical inheritance, which is considerably more flexible and powerful than classical inheritance. It is fully object oriented, as everything is an object. True it is losely type, which can be both a benifit and a detriment, but that and the fact that it is inter Re:This is what lost the browser wars (Score:2) Re:This is what lost the browser wars (Score:2) Let's compare: which do you think is more difficult? I don't know about you, but I'm tempted to say that #1 is easier. Re:This is what lost the browser wars (Score:2) "but we'd have it six or seven years ago." And IE 7 would have already been out, code rot would have destroyed Netscape/Moz, and IE would have their current market share, and Netscape/Moz would have 0%, instead of the 10%+ (depend on which figures you look at, 10% is Re:This is what lost the browser wars (Score:2) That's a false dichotomy. 
You simply don't have to choose between being stuck with bad code forever and a complete rewrite from the ground up. If you have a crappy code base, you can rewrite it a bit at a time without giving up your entire market share in the process. Re:This is what lost the browser wars (Score:5, Insightful) Good job on building a GUI toolkit, Mozilla guys! I just wish you'd focused on building a web browser instead. I'm sure many people remember the line from mozilla.org -- "It's not a Browser, it's a Development Platform!!" Urg. Unfortunately, Mozilla (aka AOL) did not understand the fundamentals of what they were getting into. "Development Platforms" are far less about capabilities and a lot more about Tools. It's just confounding that they sunk soo much effort into developing XUL, and then never released documentation, never released a GUI builder, never really built a community. Even ignoring the overall irrationality of the AOL building their own GUI Toolkit, it's just totally bewildering that they only went 80% of the way there. If one wants to compete directly with Microsoft and Sun in the devtools market, you really have follow through, not just throw a bunch of code out on a FTP site and then wonder why nobody's using it. They really did throw away 50% marketshare with nothing to show for it. Re:This is what lost the browser wars (Score:2) The real problem was that AOL bought Netscape and they didn't understand the market or the technology. They should have kept a team working on current-gen technology to keep up the fight with IE while letting Mozilla grow in the background with another team. They certainly had the money to do it - just not the brains Re:This is what lost the browser wars (Score:2) Didn't AOL buy Netscape to get this expetise? And Netscape told them "Oh, Screw Communicator v5. Eric Raymond told us that with Open Source(TM), this Gecko/XUL stuff will be ready in a year, and Microsoft is going down!!" 
Honestly, I think AOL realized Nu Re:This is what lost the browser wars (Score:2) Re:This is what lost the browser wars (Score:2) Please see my other comment [slashdot.org]. That choice is not necessary. What is the language? (Score:2) Re:What is the language? (Score:5, Informative) XUL is an XML-based language that lets you define a user interface for a program. You hook it all together with lots of things web developers already know - it uses Javascript, the DOM 2 Event model, extensions to CSS, etc. You can use it to build stand-alone applications with XULRunner, or extend existing XUL applications like Firefox. Frequently when people talk about XUL, they mean the whole system that makes XUL work - which includes the Javascript, CSS etc. "XULRunner" doesn't "run" XUL, it takes the user interface definition files defined in XUL, and executes the Javascript, renders the CSS, etc, to make it all work properly. XUL is just one component in the grand scheme of things. XUL: WIkipedia (Score:5, Informative) Mod parent down (Score:2) Apologies (Score:2) I apologize for my unintentionally misleading statement. Parrot Does not have a XUL document. The intro to PUGS [pugscode.org] and HASKELL [pugscode.org] slides are XUL documents. They are both referenced from the PERL6 site. In my quest to find out more about PUGS and HASKELL I was thrown off course and had to learn about XUL first. There is no good purpose for that. A user should not be surprised by a new document format while on a quest for other information. And a fair question would be, where is the POD Perl 6 documen Re:Presentations (Score:2) Huh!? (Score:3, Interesting) Re:Huh!? (Score:2) I don't think XUL apps should really be used outside the browser. XUL is really designed to be an easy to program javascript frontend enviornment for clients. It should be a slim-client connecting to your main application on the server. Re:Huh!? (Score:1) But the browser is a XUL app. So is the mail client. 
That's one of the main points of XULrunner - to make it easier to make stand alone apps that use XUL for the GUI. Re:Huh!? (Score:2) Re:Huh!? (Score:2) API stability -- This allows you to distribute standalone XUL apps without worrying that the next Download size (Score:2, Informative) Standalone apps (Score:1) It would be better if Firefox and all the rest would use XULRunner too so there would be some consistance between the projects so it doesnt result in several versions of XUL and I have to have all of them installed at the same time. Re:Standalone apps (Score:2) please no user installation (Score:2) there should only be one way how software gets installed on a linux system, and that is through it's package manager. And now, on to TFM... Re:please no user installation (Score:1) You don't need to be root to install software for the current user, even on Windows. XULRunner is a package manager of sorts. Does anybody know if there are any plans for (eg) gentoo devs hook it into portage's build system? Re:please no user installation (Score:2) Which is something that Linux and the wider FOSS community really doesn't need - yet another way to manage packages of software. You've got several distro level package manegment systems (deb, rpm, ports etc), language level package systems (perl, python, etc) and now platform level packages. Why can't XULRunner just tap into the base package management system instead of doing it it's own way? It's hard to see how yet more package management is going to make things eas Re:please no user installation (Score:2) Re:please no user installation (Score:2) Re:please no user installation (Score:2) Re:please no user installation (Score:2) Actually, on Windows XP (SP2?), if you are in the wrong user permission group (i.e., Power Users), you can't properly install most software. It won't let you install apps in X:\Program Files, for example, because you won't be able to create directories or manipulate files in that tree. 
IT won't let you load stuff into X:\Windows or X:\Windows\System32, either. Yes, I was able to "install" some of the GNU utilities for Win32 Dev environment (Score:4, Insightful) I am also looking for a way to 'run' xul components without doing a full build, a visual studio perhaps, that could help with layouts and avoid all the annoying syntax errors. XUL itself is a markup language that is XML based and allows building visual components - dialogs, menues, buttons, tabs, grids, textboxes, etc. While you can open a half done HTML page in your browser and see what is going on, with XUL you have to build the package first and then you can see what's going on (an incorrect XML structure in this case will give you an error, XUL must be well-formed and valid.) XPCOM brings other challenges. It is a native library of services/components that can be accessed from javascript (or possibly other scripts) and that extend the functionality of the script to include things like file management, access to preference storage, window manipulation, etc. But you can't just run a compiler to see if you are doing everything correctly, you will only get errors in runtime. Actually, I think this is the biggest problem - all errors must be caught in run-time. Javascript, XUL, XPCOM work, XBL, everything can be built (there is nothing to building anyway, just packaging really,) but after the packaging errors have to be caught in runtime, and I think this is always the biggest problem for a programmer who is used to rely on compiler to quickly catch some of the problems before even starting the application. Maybe there needs to be a unit-testing framework created, that can help running unit tests on portions of the code without building the entire application and catching unit errors during execution of the entire application. Yes, actually, to think about it this could be a big help, especially for the new developers, who can be put off this entire platform because of lack of these tools. 
Re:Dev environment (Score:2) Why can't you edit the installed plugin directly and just restart the browser to test changes? When i was writing my XUL plugin, I just used Mozilla to test and kept firefox running for actual web browsing? XPCOM brings other challenges. It is a native library of services/components that can be accessed from javascript (or possibly other scripts) and that extend the functionality of the script to include things like file manage Re:Dev environment (Score:2) Welcome to the world of developing with scripted languages. - than Re:Dev environment (Score:2, Informative) As for making up small bits of XUL, the extension developer's extension has a editor (it basically has a frame that loads whatever you type into a textbox in). Doesn't work too well with d OK (Score:1) Re:OK (Score:1, Insightful) Re:OK (Score:2) "We're too l33t for GUI builders HUR HUR HUR!!!!11" (meanwhile 100 man-years of programming effort disappears into obscurity) Re:OK (Score:3) --- SER XULRunner SDK is in the roadmap (Score:2, Informative) Perhaps they could fix the installation methods (Score:3, Interesting) Re:Perhaps they could fix the installation methods (Score:2) > the world doesn't immediately twist itself to conform to you. Why yes. If your application is as unfriendly as mozilla's plugin installation, you can be sure I am not going to use it. While it doesn't hurt me much, it will hurt you if you are selling it. In mozilla's case I'm only complaining because I can't just ditch it and use something else; there is nothing else. > You make it sound like it's a randomly *changing* string. It' Xoices (Score:1) Re:Xoices (Score:1) "Ex You Ell Runner". Re:Xoices (Score:2) The problem is not how hard it is to pronounce, but rather how hard it is to guess how to pronounce it. And "Ex Yoo El Runner" is fairly hard to pronounce, anyway. Re:Xoices (Score:2) It's "Zewlrunner". 
XUL = Zuul from Ghostbusters [wikipedia.org], that's why the namespace for XUL documents is " e .is.only.xul [mozilla.org]". Similarly "XPI" is "zippy" - the install technology used to distribute XUL apps. Re:Xoices (Score:2) Good preview edition (Score:2) So what kinds of applications can one create? (Score:2, Interesting) 1. Can one do general-purpose GUI application development with Mozilla/XULrunner--using JavaScript instead of Python or Tcl as the programming language? (i.e. is Re:So what kinds of applications can one create? (Score:2) Re:So what kinds of applications can one create? (Score:2) Yes. And the hope is that soon you will be able to use JavaScript _or_ Python as the programming language. > 2. Does developing with this environment require one to do hacking in C++? (I'm not interested in hacking with C++.) The idea is that it should not. Some things are still not easily (or at all) doable without using C++, of course. Writing device Memory usage? (Score:2) Also, like a browser, if you open a second "window", you only have a small memory hit to add the extra page (plus rendered objects). Is this the same here? Re:Cause of death of Nigel McFarlane? (Score:3, Funny) For crying out loud guy, get a grip. Cheers, Ian Re:Count the occurrences of XUL in that summary. (Score:1) Re:Count the occurrences of XUL in that summary. (Score:1)
https://slashdot.org/story/06/02/20/0530211/xulrunner-developer-preview-release-available
Tutorial: Getting Started with Readable & Writable Stores in Svelte

If you're familiar with Redux or Vuex, then Svelte stores offer a similar feature for state management. If your app is getting complicated, then it becomes difficult for components to relay data between themselves. Moving it to a global data store is a better option. Here, we'll look at two store options that Svelte makes available: writable stores and readable stores.

Writable Store

Let's go ahead and create a global state management file in our Svelte project - let's call it store.js and import the writable function.

import { writable } from "svelte/store";

let counter = writable(1);

export { counter };

We've created a variable called counter, which is a writable store. counter now has the following self-explanatory methods:

set
update

Let's create a custom component called Nested.svelte and use the counter store we just created.

<script>
  import { counter } from "./store.js";
</script>

<div>
  counter value: {$counter}
</div>

Notice that during usage, we prefix the variable with $, which is Svelte's auto-subscription syntax for stores.

Let's wrap it up by importing the component in the App.svelte file and create a method to write the counter variable to observe reactivity across nested components.

<script>
  import Nested from "./Nested.svelte";
  import { counter } from "./store.js";

  function incrementCounter() {
    counter.update(n => n + 1);
  }
</script>

<div>
  <button on:click={incrementCounter}>Update</button>
  <Nested />
</div>

The counter uses an update method that takes a function whose parameter is the current value of the writable store and returns the modified value. If we run this app, we should be able to see the value inside the Nested component getting updated as we click on the button.

While we're at it, let's go ahead and add a reset button to App.svelte.

function resetCounter() {
  counter.set(1);
}

<button on:click={resetCounter}>Reset</button>

resetCounter uses the set method of our writable store.

Now, the writable function also supports a second argument, which is also a function. Here's the signature for that function:

writable(value: any, (set: (value: any) => void) => () => void)

This function is fired when the first subscriber is created, and it returns another function that is fired when the last subscription to the variable is destroyed. Let's see that in action. In our store.js, add the second argument to the writable function:

let counter = writable(1, () => {
  console.log("First subscriber added!");

  return () => {
    console.log("Last subscriber deleted :(");
  };
});

To test this, we'll mount and unmount the Nested component to observe the behavior, in App.svelte:

<script>
  // ...
  let flag = false;

  function toggleMount() {
    flag = !flag;
  }
</script>

<!-- ... -->
<button on:click={toggleMount}>Mount/Unmount</button>
{#if flag}
  <Nested />
{/if}
</div>

Readable Store

Svelte also offers the readable function, which allows for creating readable stores whose values cannot be updated from other components. The value has to be set from within the store. Let's try this out; modify store.js:

import { readable } from "svelte/store";

let initialVal = Math.floor(Math.random() * 100);

let counter = readable(initialVal, (set) => {
  let incrementCounter = setInterval(() => {
    let newVal = Math.floor(Math.random() * 100);
    set(newVal);
  }, 1000);

  return () => {
    clearInterval(incrementCounter);
  };
});

export { counter };

Here the readable counter is set with the initialVal, which is passed as the first argument. The second argument is the same as with writable stores, but this time it's a required parameter, since without it there would be no other way to access the counter value to reset it. In this example, we generate random numbers between 0 and 100 and assign them to counter by using the set method; update is not available. This is a simple demo, but in real apps readable stores can use the second argument to make API calls and, based on some logic, set the value. This will re-render the components that are subscribed to this store.

As you saw, by using writable and readable stores in Svelte, we can achieve a basic form of global state management pretty easily! ✨
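Both kinds of store follow the same simple contract under the hood: an object with a subscribe method that immediately calls each new subscriber with the current value and returns an unsubscribe function. A minimal plain-JavaScript sketch of that contract (an illustration only, not Svelte's actual implementation) looks like this:

```javascript
// Minimal sketch of the writable-store contract (not Svelte's real code).
// subscribe() calls the new subscriber immediately, then on every change;
// it returns an unsubscribe function, which is what `$counter` relies on.
function writable(value) {
  const subscribers = new Set();
  return {
    subscribe(fn) {
      fn(value);              // new subscribers get the current value at once
      subscribers.add(fn);
      return () => subscribers.delete(fn);
    },
    set(next) {
      value = next;
      for (const fn of subscribers) fn(value);
    },
    update(updater) {
      this.set(updater(value));
    },
  };
}

// Usage: mirrors counter.update(n => n + 1) and counter.set(1) from above.
const counter = writable(1);
const seen = [];
const unsubscribe = counter.subscribe(v => seen.push(v));
counter.update(n => n + 1);  // seen: [1, 2]
counter.set(1);              // seen: [1, 2, 1]
unsubscribe();
counter.set(99);             // no longer observed; seen stays [1, 2, 1]
```

Seeing the contract spelled out makes it clearer why `$counter` in a component is just sugar for calling subscribe on mount and the returned unsubscribe on destroy.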
https://www.digitalocean.com/community/tutorials/svelte-svelte-store
17 December 2008 15:52 [Source: ICIS news] By Nigel Davis

LONDON (ICIS news)--It is too early to herald the death of the quarterly ethylene contract in Europe.

Pressure has built steadily for monthly olefins settlements, with the protagonists, largely the big European producers, seeking to play catch-up with higher priced naphtha. At a time of rising feedstock prices, the ethylene, propylene and butadiene makers had been seeking to pass on higher costs as fast as possible.

The more influential customers were, however, largely having none of it. Facing longer term contracts downstream, they were not in any hurry to take on the job of absorbing oil/naphtha price volatility in their own businesses.

But a great deal has changed in only a few months as plastics and chemicals demand downstream from the cracker has slumped, a consequence of the credit crunch and gathering economic gloom.

The opportunity for change has presented itself particularly at the end of the current quarter, given the huge disparity between feedstock naphtha and quarterly European olefins contract prices. The current margin for ethylene based on naphtha feed is more than €1,200/tonne on a fourth quarter ethylene contract price of €1,120/tonne.

Ethylene and propylene producers have been making great money on the contract price, but only if they have been able to shift volumes. Indeed, the cost/price environment has been such that analysts have been forced to re-base graphs showing the differential between olefins and naphtha as the spread has shot off the chart.

This anachronistic situation could not last. Across the petrochemicals business the hope has been maintained that feedstock costs and product prices could stabilise with oil sometime around the turn of the year.
The outlook may be far from rosy, but price spreads down the olefins chains could become more realistic in January as ethylene and propylene prices come down in response to the weakened naphtha price and poor demand for the major olefins.

Just last week, one strong advocate of monthly olefins pricing, INEOS, reiterated its support for a monthly contract. The quarterly pricing mechanism, which INEOS said could lead to huge corrections in the monomer price, was no longer acceptable to the company's derivative businesses or their customers.

We are seeing the start of those huge corrections now. Initial European ethylene and propylene monthly contracts for January had been agreed down €600/tonne ($822/tonne) and €523/tonne respectively by a key producer and integrated consumer, ICIS news reported on Tuesday. This translates to close to €900 off the current margin. It has been called a "table-thumping" correction.

Both parties to the deal are well-known advocates of a monthly system and had agreed what was called a "straightforward monthly price valid for January only." A monthly European olefins supply contract was last publicly reported in December 2006, although a handful of subsequent deals have been kept private and confidential.

The market is currently split, with several key consumers and one key producer still firmly in favour of a quarterly pricing structure. A switch to a widely accepted public monthly settlement may yet be some time off, but the world appears to be turning much more sharply against the current status quo.

In the short term, at least, those holding out for the traditional quarterly approach in olefins will find themselves in a difficult place. There are one or two non-integrated olefins producers in western Europe and four major consumers who take part in the quarterly contract negotiations. Companies can, of course, be net sellers or buyers and be integrated.
INEOS and LyondellBasell are net olefins buyers; Dow Chemical, Shell and INEOS are merchant sellers.

($1 = €0.72)
http://www.icis.com/Articles/2008/12/17/9180243/insight-europes-quarterly-ethylene-mechanism-is-under-pressure.html
Type: Posts; User: raj874

hello i have one issue i made one windows program so now i want when my exe execute with some perameter in commandprompt it generate new exe with this perameter include and if not...

thx sir :)

First of all hello every one Now i m face one problem in c windows programming is i want identify which windows os is run in pc by c programing so how can i do it ? one time more i explain...

65565 is buffer size :(

#!/usr/bin/python
import socket
sock = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_TCP)
while 1:
    print sock.recvfrom(65565)
sock.close()

i run that prog but why its gives...

hello can any one told ma in detail whats differrence between WSASocket() and socket() function i know msdn tell it but i cant understand whats overlapped attributes so plz tell me in detail thx in...

yes u r right i solve my prblem thx :)

thx for it ur last line my answer but i have one noob type question this function work when window open at desktop?????

basically findwindow mean find window but i think its requires

error = the specified module could not be found

FindWindow(NULL,"Windows Task Manager"); not working in c language any one tell me why here i use title of task manager i know first perameter i can also use but i wanna use second one i use dev...

thx i solved problem :) and also loadlibrary works fine sorry for double post this small matter

Actully i didnt debug it but i checked it when i display the messagebox in installhook function in dll like it its works

BOOL _declspec (dllexport) installhook()
{
    hook =...

i made one program in c for keyboard hooking but its not working i made first one dll and after made one exe which load dll it is sample code

/*THIS IS DLL CODE Replace "dll.h" with the name of...
http://forums.codeguru.com/search.php?s=192b3c71355c5c3fc8ebfc1341af2586&searchid=6137093
Some minor news on logging: I upgraded the code to use WrapLog 1.1, which mostly means that logging messages include level, timestamp and thread.

Regards, Thomas.

I've released a tiny logging package called WrapLog at <>. We should be able to integrate this into BrowserLauncher2.

The basic idea is that the library uses a generic logger interface which the client application developer can easily override by inserting his own Logger class in the classpath. WrapLog already comes with wrappers for Log4j and java.util.logging, but also includes a simple logger that writes to System.out without the need for any libraries.

I decided to put it under a BSD-style license because developers probably will like to customize the code for their project. (I didn't bother to come up with a nice and flexible design; this is supposed to be lean and mean, there are already powerful but bulky logging packages around.)

That way, we can add the 2 relevant classes from WrapLog to the BrowserLauncher2 CVS, and include them in the BrowserLauncher2.jar. So client application developers need to add only one JAR to their classpath.

I intend to provide a logger that logs warnings and errors to System.out/err. If client application developers don't like that, they can download the WrapLog.jar and create their own logger.

Kind regards, Thomas.

Hi, I added simple logging to BrowserLauncherTestApp, but you can use it in any class. Basic pattern:

import net.sf.wraplog.Logger;

class MyClass {
    private static Logger logger = Logger.getLogger(MyClass.class);
    ...
    void blah() {
        logger.error("cannot blah: " + actual + " must be " + expected);
    }
}

The current implementation just logs debug/info messages to System.out, and warning/error messages to System.err. But this is easy to change or extend; see <> for details.

Thomas.
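The wrapper idea described here (a tiny generic logger with a default System.out implementation that applications can swap for Log4j or java.util.logging) can be sketched in a few lines. This is an illustration of the pattern only, not WrapLog's actual source; all class and method names below are made up:

```java
// Sketch of a pluggable "wrapper logger": one small abstract class,
// one trivial default implementation. A real wrapper for Log4j or
// java.util.logging would subclass AbstractLogger the same way.
public class LoggerSketch {
    static abstract class AbstractLogger {
        static final int DEBUG = 0, INFO = 1, WARN = 2, ERROR = 3;
        int level = WARN;  // default: warnings and errors only

        abstract void write(int messageLevel, String message);

        void log(int messageLevel, String message) {
            if (messageLevel >= level) {
                write(messageLevel, message);
            }
        }
    }

    // The "no external libraries" logger: warnings/errors to System.err,
    // everything else to System.out.
    static class SystemLogger extends AbstractLogger {
        void write(int messageLevel, String message) {
            (messageLevel >= WARN ? System.err : System.out).println(message);
        }
    }

    public static void main(String[] args) {
        AbstractLogger logger = new SystemLogger();
        logger.log(AbstractLogger.DEBUG, "ignored at the default level");
        logger.log(AbstractLogger.ERROR, "cannot blah: 1 must be 2");
    }
}
```

The point of keeping the abstract class this small is exactly what the email says: two classes can be copied into a project's own tree, and replacing the logger is just a matter of subclassing.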
http://sourceforge.net/p/browserlaunch2/mailman/message/104913/
Dynamic generation of web graphics using Java2D
801540, Jun 20, 2011 2:48 PM
Does anyone have some thoughts on using Java2D to generate web graphics, like buttons?

1. Re: Dynamic generation of web graphics using Java2D
801313, Jun 21, 2011 1:21 PM (in response to 801540)
Unless you're talking about Applets, you can't really use Java SE to create buttons that appear in a web browser. You may want to check out a JavaScript tutorial about handling events on images.

2. Re: Dynamic generation of web graphics using Java2D
801540, Jun 21, 2011 3:46 PM (in response to 801313.

3. Re: Dynamic generation of web graphics using Java2D
801313, Jun 21, 2011 6:04 PM (in response to 801540)
user10816810 wrote: My thought is that this is a waste of time.
I don't think it would be that hard to do, but I don't see the point. Rather, I'd recommend spending your time looking at capacities that match what you are trying to do in Flash and HTML 5. But if you really want the Metal buttons (I assume we're talking about the Metal L&F, correct) then you gotta do what you gotta do.

4. Re: Dynamic generation of web graphics using Java2D
801540, Jun 21, 2011 10:55 PM (in response to 801313)
I am not talking about Swing, AWT, Flash, or HTML 5. I am just talking about using a BufferedImage object to create .gif files so that I can automate the creation of button graphics to use in html files. For instance, if I need a button, i.e. <img src="something.gif" />, I don't want to have to open Photoshop to do it. I just want to generate a script using a BufferedImage object and graphics2d. This has nothing to do with applets, JFrames, JButtons, etc. Again, I just want the ability to create GIF formatted files to use for buttons in web pages. For example, I need three graphics files to represent the pressed, rollover and normal states. It's tedious to constantly update these files. MetalUI has nothing to do with my question. I am not talking about look and feels.
Metal is an example of an effect I would like to achieve using Java 2D. Photoshop has styles to represent Metal, plastic, gradients, etc. I am talking about effects like that, not Look and Feel. I am not talking about Swing. You can make a web graphic in Photoshop to represent a button in Photoshop, but that is task driven. Is there a way to do the same thing in Java2D to automate the process? Are there examples somewhere? Is anyone good with creating these kinds of graphics that could whip up some examples?
Is HTML 5 mainstream? Is Flash mainstream? I just want basic every day generic GIF files. I am dealing with a commercial web site, I can not consider something that isn't mainstream yet. If this is not possible, then I will just keep using Photoshop. However, I think this type of automation would save a lot of time. Thank you.

5. Re: Dynamic generation of web graphics using Java2D
StanislavL, Jun 22, 2011 12:20 AM (in response to 801540)
I think the BufferedImage based approach is ok. All you need is a JButton with proper text created. Then set size to getPreferredSize() and call paint() passing the Graphics instance from the BufferedImage. Or you can create a JFrame, add the JButton there, and paint the desired portion of the JFrame. I have implemented something like this: painted book page pairs as .png images. Your server should have some Windows system support to let you render fonts.

6. Re: Dynamic generation of web graphics using Java2D
801540, Jun 22, 2011 11:56 AM (in response to 801540)
Hi.
I put together an example of what I am trying to do: here it is:

<pre>
import java.awt.BasicStroke;
import java.awt.Color;
import java.awt.Font;
import java.awt.Graphics2D;
import java.awt.LinearGradientPaint;
import java.awt.MultipleGradientPaint;
import java.awt.Paint;
import java.awt.RenderingHints;
import java.awt.Shape;
import java.awt.Stroke;
import java.awt.geom.AffineTransform;
import java.awt.geom.GeneralPath;
import java.awt.geom.Point2D;
import java.awt.geom.RoundRectangle2D;
import java.awt.image.BufferedImage;
import java.io.File;
import java.util.Iterator;
import javax.imageio.ImageIO;
import javax.imageio.ImageWriter;
import javax.imageio.stream.ImageOutputStream;

public class ButtonExample {
    static public void main(String args[]) throws Exception {
        int width = 200, height = 50;
        BufferedImage bi = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = bi.createGraphics();
        g.fillRect(0, 0, width - 1, height - 1);
        //////////////////////////////////////////////////////////////////
        g.setRenderingHint(RenderingHints.KEY_ANTIALIASING, RenderingHints.VALUE_ANTIALIAS_ON);
        Shape shape = null;
        Paint paint = null;
        Stroke stroke = null;
        //int width = getWidth();
        //int height = getHeight();
        paint = new Color(0, 0, 0, 255);
        shape = new RoundRectangle2D.Double(0, 0, width, height, 26, 26);
        g.setPaint(paint);
        g.fill(shape);
        paint = new Color(0, 0, 0, 255);
        shape = new RoundRectangle2D.Double(1, 1, width - 2, height - 2, 15.0, 15.0);
        g.setPaint(paint);
        paint = new Color(136, 136, 136, 136);
        stroke = new BasicStroke(1.0f, 0, 0, 4.0f, null, 0.0f);
        shape = new RoundRectangle2D.Double(1, 1, width - 2, height - 2, 15.0, 15.0);
        g.setPaint(paint);
        g.setStroke(stroke);
        g.draw(shape);
        paint = new LinearGradientPaint(new Point2D.Double(321.125, 205.1853790283203),
                new Point2D.Double(321.125, 300.3125),
                new float[]{0.0f, 1.0f},
                new Color[]{new Color(255, 255, 255, 255), new Color(0, 0, 0, 0)},
                MultipleGradientPaint.CycleMethod.NO_CYCLE,
                MultipleGradientPaint.ColorSpaceType.SRGB,
                new AffineTransform(1.0f, 0.0f, 0.0f, 1.0f, -189.4317169189453f, -220.40145874023438f));
        shape = new GeneralPath();
        ((GeneralPath) shape).moveTo(15.1308, 1.8172914);
        ((GeneralPath) shape).curveTo(9.820799, 1.8172914, 2.1307993, 4.507291, 2.1307993, 13.817291);
        ((GeneralPath) shape).lineTo(2.1307993, 31.78604);
        ((GeneralPath) shape).curveTo(39.09004, 37.30802, 86.78241, 40.69229, width / 2, 40.69229);
        ((GeneralPath) shape).curveTo(width / 2, 40.69229, width / 2 + 30, 38.130333, width - 1, 33.84854);
        ((GeneralPath) shape).lineTo(width - 1, 11.817291);
        ((GeneralPath) shape).curveTo(width - 1, 6.507291, width - 5, 2.8172914, width - 13, 1.8172914);
        ((GeneralPath) shape).lineTo(18.1308, 1.8172914);
        ((GeneralPath) shape).closePath();
        g.setPaint(paint);
        g.fill(shape);
        g.setFont(new Font("Microsoft Sanserif", Font.BOLD, 20));
        g.setColor(new Color(0, 0, 0));
        g.drawString("testing", width / 4 + 1, 31);
        g.setColor(new Color(0, 255, 0));
        g.drawString("testing", width / 4, 30);
        Iterator imageWriters = ImageIO.getImageWritersByFormatName("GIF");
        ImageWriter imageWriter = (ImageWriter) imageWriters.next();
        File file = new File("c:\\button.gif");
        ImageOutputStream ios = ImageIO.createImageOutputStream(file);
        imageWriter.setOutput(ios);
        imageWriter.write(bi);
    }
}
</pre>

This outputs "button.gif". The graphics quality isn't that great, because I am not that good with visuals/graphics. This is the part I need help with. I need to come up with good effects, like plastic (gel), gradients, metal, etc. (Just like in Photoshop). This is also useful for creating rounded rectangles/outlines for web page sections. That is another thing that is tedious to do in Photoshop, because you have to deal with slicing up the image. Thanks.
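For the "gel"/gradient effect the thread is after, the essential ingredients are antialiasing, a rounded rectangle for the body, and a vertical white-to-transparent LinearGradientPaint over the top half for the highlight. The sketch below strips the example above down to just those pieces; it writes PNG rather than GIF, since GIF's 256-color palette tends to band smooth gradients (the colors and sizes are arbitrary placeholders):

```java
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.LinearGradientPaint;
import java.awt.RenderingHints;
import java.awt.Shape;
import java.awt.geom.Point2D;
import java.awt.geom.RoundRectangle2D;
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

// Headless-safe sketch: solid base color plus a white-to-transparent
// "gel" highlight over the top half of a rounded-rectangle button.
public class GelButton {
    public static void main(String[] args) throws Exception {
        int w = 200, h = 50;
        BufferedImage img = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = img.createGraphics();
        g.setRenderingHint(RenderingHints.KEY_ANTIALIASING,
                           RenderingHints.VALUE_ANTIALIAS_ON);

        Shape button = new RoundRectangle2D.Double(0, 0, w - 1, h - 1, 16, 16);
        g.setPaint(new Color(0, 90, 180));   // arbitrary base color
        g.fill(button);

        // Highlight: white fading to fully transparent down the top half.
        g.setPaint(new LinearGradientPaint(
                new Point2D.Float(0, 0), new Point2D.Float(0, h / 2f),
                new float[] {0f, 1f},
                new Color[] {new Color(255, 255, 255, 180),
                             new Color(255, 255, 255, 0)}));
        g.fill(new RoundRectangle2D.Double(2, 2, w - 5, h / 2.0, 14, 14));

        g.dispose();
        ImageIO.write(img, "PNG", new File("button.png"));
    }
}
```

Generating the pressed and rollover states is then just a matter of re-running the same code with a darker base color or a stronger highlight alpha.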
https://community.oracle.com/thread/2243002?tstart=75
Alright, so im storing a random number to a variable using the GetRand function from the tutorial. What im wanting to do is get a different random number assigned to that variable.. Please forgive the crude example program, im sure im missing a header or something. Id really appreciate any help that you all are willing to offer.

Code:
#include <cstdio>
#include <cstdlib>
#include <iostream>
#include <ctime>

int GetRand(int min, int max)
{
    static int Init = 0;
    int rc;
    if (Init == 0)
    {
        /*
         * As Init is static, it will remember it's value between
         * function calls. We only want srand() run once, so this
         * is a simple way to ensure that happens.
         */
        srand(time(NULL));
        Init = 1;
    }
    /*
     * Formula:
     * rand() % N <- To get a number between 0 - N-1
     * Then add the result to min, giving you
     * a random number between min - max.
     */
    rc = (rand() % (max - min + 1) + min);
    return (rc);
}

int newnumber;
int number = getrand(1,12345);

int main()
{
    cout<<"Your current random number is: "<<number<<endl;
    cout<<"Enter '1' for a new number: ";
    cin>>newnumber;
    if(newnumber==1)
    {
        cout<<"Your new random number is: "<<number<<endl;
    }
    system("PAUSE");
    return 0;
}
http://cboard.cprogramming.com/cplusplus-programming/101286-resetting-random-numbers-same-variable.html
NavMeshAgent.destination

Gets or attempts to set the destination of the agent in world-space units.

Getting: Returns the destination set for this agent.
• If a destination is set but the path is not yet processed, the position returned will be a valid navmesh position that's closest to the previously set position.
• If the agent has no path or requested path - returns the agent's position on the navmesh.
• If the agent is not mapped to the navmesh (e.g. Scene has no navmesh) - returns a position at infinity.

Setting: Requests the agent to move to the valid navmesh position that's closest to the requested destination.
• The path result may not become available until after a few frames. Use pathPending to query for outstanding results.
• If it's not possible to find a valid nearby navmesh position (e.g. Scene has no navmesh), no path is requested. Use SetDestination and check the return value if you need to handle this case explicitly.

using UnityEngine;
using UnityEngine.AI;

[RequireComponent(typeof(NavMeshAgent))]
public class FollowTarget : MonoBehaviour
{
    public Transform target;
    Vector3 destination;
    NavMeshAgent agent;

    void Start()
    {
        // Cache agent component and destination
        agent = GetComponent<NavMeshAgent>();
        destination = agent.destination;
    }

    void Update()
    {
        // Update destination if the target moves one unit
        if (Vector3.Distance(destination, target.position) > 1.0f)
        {
            destination = target.position;
            agent.destination = destination;
        }
    }
}
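As the note above suggests, when the Scene may lack a navmesh it is safer to call SetDestination and check its boolean return value rather than assign to destination directly. A sketch of that pattern (Unity-engine C#, so it only runs inside Unity; the helper name is made up):

```csharp
// Hypothetical helper: falls back gracefully when no path can be requested,
// e.g. because the Scene has no navmesh near the target.
bool TryMoveTo(NavMeshAgent agent, Vector3 target)
{
    if (!agent.SetDestination(target))
    {
        Debug.LogWarning("No valid navmesh position near " + target);
        return false;
    }
    return true;  // path requested; poll agent.pathPending for the result
}
```

This keeps the "no navmesh" case explicit instead of silently producing a position at infinity.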
https://docs.unity3d.com/ScriptReference/AI.NavMeshAgent-destination.html
In the past I’ve tried building “frameworks.” They never did what I expected them to do. But this past week I’ve worked on another one and it’s actually shaped up reasonably well. Calling it “TriadNME” – it is a haXe/NME/Flash10 2d platforming engine with applicability to other 2d games. Some notes on interesting elements of the design and things I discovered in the process:

- Mostly based on stealing existing good code from old projects. Proven code is golden, and possibly worth more than the architecture itself. I would actually consider Triad a second generation framework (the first being some bits of code for timing, the entity system, build system, etc.)

- Very centered around the haXe dynamic/static typing mix. Entities are anonymous objects, with a component system based on “conventions” of data layout (field names, an array of component destructors, etc.) attached to them. I’ve used this for about a year on many projects – it feels like the right mix most of the time.

- Lax approach to performance to favor reusable functionality. Essentially, I give myself room to use a layer or two of indirection (callbacks, lookups, etc.), but try not to go much deeper. So, e.g., instead of a hardcoded timer I now use an Alarm class with callbacks. Character controllers, camera movements, etc. are similarly based on callbacks, thus it’s easy to go from (e.g.) flipscreen camera to scrolling and centered around the player. If I want to regain performance from those things – I can usually rewrite the specific functionality.

- Minimal contextual instancing. By this I mean – at the start I prepare all the core structures (whether as singleton instances or as static classes) and never destroy or replace them. Instead I call reset methods and toggle visibility on the Flash displaylist. As a result, I never have to litter tests in the code for whether a certain kind of data is available, because it always is. To change contexts, I disable some functionality.
As a result, there’s a whole body of code that simply doesn’t exist, since disabling is (in general) easier than destruction. This goes hand-in-hand with the approach to performance.

- My approach to algorithms has generally improved. I’ve started making my own “xxxTools” static classes for common algorithms – so I have some for math/trig/probability now, some bits for strings and hashes, some bits specific to my AABB/tile collision system. Conversions between measurements (degrees/radians, pixels/tiles) and alignments (centered, left, right, above, below) are a major part of the toolbox and my core data structures tend to include a lot of conversion methods. Similarly, more structural algorithms like “array of objects with name fields” to “string hash of objects” are starting to come into play. In the past I had a horrible tendency to rewrite simple algorithms and inline stuff that didn’t warrant the customization, but hopefully I’ve come around on that.

- [haXe-specific] – I use many more anonymous structures. HaXe is structurally typed and lets me add a bit of extra type checking just by wrapping everything in a little structure – e.g. most of my points are {x:Int,y:Int} or {x:Float,y:Float}, instead of being a class. This adds a bit more flexibility in writing the data because I can just inline the structure into a function call instead of going through writing a class, a new() method, etc. Doing this lowers performance for the Flash target somewhat, but adds a ton of code clarity.

- While heaping on engine-level features with sprites, rendering, physics, I specifically tried to engineer them so that they’re easy to rip out if I don’t want to use them for LD. This had the effect of making them modular without an explicitly “modular/pluggable” system design – a field on the main class, one call to initialize, one call to reset, one call to update.
Take those out and the system is essentially gone, although other things might reference them – but that’s more of an “integration” problem and usually just requires more things to be deleted.

- The in-game console is increasingly central to the workflow. I try to cram as many in-game tools as I can because – besides the obvious iteration improvements – it has a knock-on effect on the architecture where I need to keep data in both editable and playable forms, thus it starts decomposing towards the “real” constraints. The console works as a way to quickly expose editing functionality. This is something I could continue to refine considerably with tool-specific namespaces, unix-esque pipes, etc.

- I opted for a tilesheet format that uses a “smallest size” of tile and rebuilds larger ones from that, similar to the character-mapped graphics of older consoles (NES, SMS, etc.). This has some interesting effects on editability – with an in-game editor the downsides are mostly covered, but the benefits in internal architecture are surprising: the structure of graphics access is greatly clarified when you have three ways to inspect a tile sheet – by set name, by set index, by tile index. It works particularly well with the autotiling algorithms I’ve made (one of the main reasons I started down this path). It also means that the framework should port more easily to a 3D-accelerated setting since only the rendering methods have to change – the original data for sprites and tiles is already a nice NPOT-sized thing.

I still have some things I want to fix up and test before I declare this ready – sometime tomorrow I’ll publish it, with a little video.

“Mostly based on stealing existing good code from old projects.” Pretty much – all of my libraries work like that, small bits of code I’ve pulled out of a project I did in a quick and dirty fashion and then tidied up when the pain points emerged.
Stick them together and you have a framework that can be used to build similar things again – use them independently and they can help you not solve the problem twice.
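The anonymous-structures point above is easy to show concretely: haXe's structural typing means any value with matching fields satisfies a typedef, so call sites can pass inline literals without defining a class. A small sketch (haXe code, so it needs the haXe compiler; the names are illustrative):

```haxe
typedef PointF = { x:Float, y:Float };

class Demo {
    // Any object with numeric x/y fields satisfies PointF - no class needed.
    static function length(p:PointF):Float {
        return Math.sqrt(p.x * p.x + p.y * p.y);
    }

    static function main() {
        // The structure is inlined right at the call site.
        trace(length({ x: 3.0, y: 4.0 }));  // 5
    }
}
```

This is the "little structure" trade-off described above: slightly slower on the Flash target than a concrete class, but with type checking and no boilerplate constructor.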
http://ludumdare.com/compo/2011/12/15/notes-on-framework-building/
- NAME - Synopsis - Overview - Actions - Valued and unvalued symbols - Nodes - Action context - Parse trees, parse results and parse series -. It is important to note that, when Perl closures are used for the semantics, they must be visible in the scope where the semantics are resolved. The action names are usually specified with the grammar, but action resolution takes place in the recognizer's value and reset_evaluation methods. This can sometimes be a source of confusion, because if a Perl closure is visible when the action is specified, but goes out of scope before the action name is resolved, resolution will fail. is one that cannot be relied on. It may, for example, vary from instance to instance, either randomly or according to an arbitrary pattern. rule node has a "whatever" value."; return $action_object; } ## end sub do_S In addition to the per-parse-tree variable and their child arguments, rule evaluation closures also have access to context variables. $Marpa::R2::Context::grammar is set to the grammar being parsed. $Marpa::R2::Context::rule is the ID of the current rule. Given the rule ID, an application can find its LHS and RHS symbols using the grammar's rule() method. "whatever" value. If the action name begins with the two characters " ::", then it is a reserved action name and must be resolved as such. The current reserved action names are " ::whatever" and " ::undef". If the reserved action name is not one of those, Marpa throws an exception. Explicit resolution The recognizer's closures named argument allows the user to directly control the mapping from action names to actions. The value of the closures named argument is a reference to a hash whose keys are action names and whose hash values closure. closure. But if the user wants to leave the rule evaluation closures in the main namespace, she can specify "main" as the value of the actions named argument. 
But it can be good practice to keep the rule evaluation closures in their own namespace, particularly if the application is not small.
https://metacpan.org/pod/release/JKEGL/Marpa-R2-2.021_006/pod/Semantics.pod
Make an environment for Swift practice

I need to learn Swift again, and of course an Xcode project is one option for practicing Swift. But I want to use a simple environment just to try Swift programming.

I have some options to set up a Swift practice environment:
- Playground
- Command line and a text editor
- REPL
- Linux Docker with REPL

Playground

A Playground is a simple way to test a Swift program. This is an Xcode tool.

Command line and text editor

Create a file, then compile and run it.

hello.swift:

import Foundation
print("Hello")

Compile the code (with the swiftc command):

swiftc hello.swift

Run the executable:

./hello

If you want to run without compiling:

swift hello.swift

REPL

This is a dialog-based programming environment, like IronPython. To start the REPL:

swift

Welcome to Apple Swift version 5.1.3 (swiftlang-1100.0.282.1 clang-1100.0.33.15). Type :help for assistance.
1>

Linux Docker with REPL

We can create an isolated environment with a Linux Docker container. Swift supports Linux (Ubuntu), too. Docker images are provided by many people; this is an example (swift-docker):

docker pull swift
docker run --security-opt seccomp=unconfined -it swift

You can use the Swift REPL on Linux.
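Beyond running the stock image interactively, a small Dockerfile makes the Linux environment reproducible. This is a sketch only: the file name hello.swift matches the example above, and the base-image tag is a placeholder for whatever Swift version you want.

```dockerfile
# Sketch: build an image that compiles and runs a single Swift file.
FROM swift:5.1

WORKDIR /app
COPY hello.swift .

# Compile at image-build time; run the resulting binary by default.
RUN swiftc hello.swift -o hello
CMD ["./hello"]
```

Build with `docker build -t swift-practice .` and run with `docker run --rm swift-practice`, so the whole compile/run cycle stays inside the container.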
https://daiji110.com/2020/02/09/swift-practice-environment/
This article lives in: Intro Let's say you have a FastAPI application... or actually, any other type of web application, including a Panel dashboard with Pandas DataFrames and Bokeh visualizations, or a Streamlit application. These are, in the end, web applications. You could think of many other examples. Now let's say it all works well locally, on your machine. But in most cases, the purpose of these web apps is to be available on the real web (not only on your machine), so that others can actually access them. So you need to "deploy" them somewhere, on a remote server. And then you would want to have secure communication between your app clients (web browsers, mobile apps, etc.) and your server web application. So, you should have HTTPS. 🔒 But although it might sound like a simple "option" to enable, it's quite more complex than that... and Traefik can help you a lot. I have been a long-time fan of Traefik, even before creating FastAPI. And recently I had the chance to make an event/webinar with them. 🎉 You can watch the recording of the video here on the Traefik resources' website. About HTTPS HTTPS is quite more complex than "enabling an option". The protocol any of your applications will need to "talk" is actually the same HTTP, so you don't have to change anything in your web apps to change from HTTP to HTTPS. But that HTTP communication has to go through a secure connection (TLS/SSL), that's where the "S" in HTTPS comes from, "HTTP Secure". There's a whole process required, including acquiring HTTPS (TLS/SSL) certificates. But fortunately, Let's Encrypt provides them for free... you just have to set everything up. But then, "setting everything up" including acquiring the certificates, installing them where appropriate, renewing them every three months, etc. It's all a relatively complex process. But Traefik can do all that for you. To quickly learn how HTTPS works from the consumer's perspective, I highly encourage you to go and check HowHTTPS.works. 
Then you can go and read the short summary of what you need to know as a developer in the FastAPI docs: About HTTPS. Domain name HTTPS is tied to a domain name because the TLS certificate is for that specific domain name. So, you need to have one or buy one. I buy my domains at Name.com, it's quite cheap and it has worked quite well for me. Remote server You will also need a "cloud" or remote server. It's frequently called a "VPS" for "virtual private server". It's a "private server" because you get a full Linux system with full control of it (contrary to a "shared hosting"). And it's "virtual" because what providers do is create a virtual machine and make it available for you, instead of installing a real physical server, that's why they are affordable. For simplicity, I would suggest these providers: I personally have things in each one of those. They all work great, they have a simple and nice user experience, and are quite cheap. Even $5 or $10 USD a month is enough to have one of the small servers up and running. You can also go and use one of the giant cloud infrastructure providers if you want, learn all their terminology and components, set up all the accounts, permissions, etc. And then use them. But for this example, I would suggest one of the three above as it will be a lot simpler. DNS records When you create a remote server, it will have a public IP. But now you need to configure your domain to point to that IP, so that when your users go to your domain, they end up talking to your remote server in its IP. There's a set of "records" that do that, they are called "DNS records" (DNS for "Domain Name System"). Those records are stored in "Name Servers". All of these cloud providers above have free Name Servers, so you can use them to store that information about pointing domains to IPs. Tip: those same DNS records are also used for configuring email, and other related small things. 
Note: all these Name Server and DNS changes are automatically copied and replicated through the web so that everyone in the world knows where to access the information about your domain, and then, with that, they will know to which IP they should talk to when interacting with your domain. Because that replication takes some time, after you save some of these changes, they can take from minutes to hours to be ready. Name Servers The first step is in your "registrar" (the entity that sold you the domain, e.g. Name.com). In there, you define what are the Name Servers for your domain. You will probably first want to remove the default Name Servers. After buying a domain, the default Name Servers are normally the ones for the same registrar (e.g. Name.com), and normally all they do is have DNS records to point the domain to a placeholder page full of ads, but they normally don't allow you to create DNS records (like pointing the domain to an IP address). So, you will probably want to remove those default Name Servers and add the ones for your VPS provider. E.g. you could add the Name Servers for DigitalOcean: ns1.digitalocean.com ns2.digitalocean.com ns3.digitalocean.com DNS Records After you configure the Name Servers for your domain to be the ones for your cloud provider, you can now go to that cloud provider and set up the DNS Records. Depending on your cloud provider, they will have some section to configure "domains", "domain zones", or "networks", in the end, they all refer to the same configurations for DNS records for a specific domain. So, the next step is to create a configuration there for your specific domain (sometimes called a "domain zone"). Then, inside of that domain configuration, you need to add a DNS record to point any web communication to your cloud server. There are several types of DNS records, the one we need is an A record, when you are creating a DNS record, those are normally the default type as they are the most important one. 
An A record has an IP and a hostname. The IP would be the one for your remote server. You might need to go to the section in the dashboard where your server is located to copy that IP. The hostname would be your domain or any sub-domain. So, if you bought example.com, you can set the record to example.com, or to somesubdomain.example.com or also a.long.sub.domain.example.com. In most cases, you can even use *.example.com, which will match any sub-domain and point it to the IP you specify.

You can create multiple A records, one for each domain or sub-domain. And each of them can point to different IPs. That's also why you see some applications that use several domains, like dashboard.example.com and api.example.com, to handle different parts of the same system in different servers.

Note: depending on the provider, you might need to use the symbol @ in the hostname to mean "the same domain I'm configuring", so, for the domain configuration for example.com, creating an A record with some IP and the hostname @ would mean "point the same domain example.com to that IP address".

Wait

You might have to wait some time for these DNS changes to replicate. You can test if your computer already has access to the most recent version of your records with the tool ping from the command line. For example, checking for the domain tiangolo.com:

$ ping tiangolo.com
PING tiangolo.com (104.198.14.52) 56(84) bytes of data.
64 bytes from 52.14.198.104.bc.googleusercontent.com (104.198.14.52): icmp_seq=1 ttl=103 time=204 ms
64 bytes from 52.14.198.104.bc.googleusercontent.com (104.198.14.52): icmp_seq=2 ttl=103 time=226 ms

You can see the IP address is 104.198.14.52. If that's what you just configured, congrats! The DNS records are ready. 🎉

Check the video

From this point, you should be able to follow the video recording with all the explanations. So I'll keep the rest of this post as simple as possible, mainly showing you the config files so you can copy all the examples.
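If you prefer to script this check, the same lookup can be done from Python's standard library. This is just an illustrative sketch, not part of the deployment; "localhost" is used below only so the snippet runs without network access, and you would substitute your own domain:

```python
import socket

def resolve(hostname: str) -> str:
    """Return the IPv4 address that a hostname currently resolves to,
    using the system resolver (the same one ping uses)."""
    return socket.gethostbyname(hostname)

# Substitute your own domain here, e.g. resolve("example.com"), and
# compare the result against the IP you configured in the A record.
# "localhost" is a stand-in so this runs anywhere.
print(resolve("localhost"))
```

If the printed IP matches the A record you created, the change has propagated to your resolver.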
Simple FastAPI app

Let's start with a basic FastAPI app. I'm assuming that you know a bit about FastAPI; if you don't, feel free to check the documentation, it is written as a tutorial. If you want to see the explanation step by step, feel free to check the video.

The basic app we will use is in a file at ./app/main.py, with:

from fastapi import FastAPI

app = FastAPI()

@app.get("/")
def read_main():
    return {"message": "Hello World of FastAPI with Traefik"}

Dockerfile

We will use Docker to deploy everything. So, make sure you install it. Then we need a file at ./app/Dockerfile with:

FROM tiangolo/uvicorn-gunicorn-fastapi:python3.8

COPY ./app /app/

Notice that we are using the official FastAPI Docker image: tiangolo/uvicorn-gunicorn-fastapi:python3.8. The official base Docker image does most of the work for us, so we just have to copy the code inside. Make sure you have Docker installed on your local computer and in the remote server.

Prepare your cloud server

- Connect to your remote server from your terminal with SSH, it could be something like: ssh root@fastapi-with-traefik.example.com
- Update the list of package versions available: apt update
- Upgrade the packages to the latest version: apt upgrade

Docker Compose

We are using Docker Compose to manage all the configurations. So make sure you install Docker Compose locally and on the remote server.

To prevent Docker Compose from hanging, install haveged: apt install haveged

Technical Details: Docker Compose uses the internal pseudo-random number generators of the machine. But in a freshly installed/created cloud server, it might not have enough of that "randomness". And that could make the Docker Compose commands hang forever, waiting for enough "randomness" to use. haveged prevents/fixes that issue.

After that, you can check that Docker Compose works correctly.

Docker Compose files

For all the detailed explanations of the Docker Compose files, check the video recording.
Make sure you update the domains from example.com to use yours, and the email to register with Let's Encrypt; you will receive notifications about your expiring certificates at that email address. Also, make sure you add the right DNS records for your main application and for the Traefik dashboard, and update them in the Docker Compose files accordingly.

Here are the Docker Compose files if you want to easily copy them.

docker-compose.traefik.yml:

services:
  traefik:
    # Use the latest v2.3.x Traefik image available
    image: traefik:v2.3
    ports:
      # Listen on port 80, default for HTTP, necessary to redirect to HTTPS
      - 80:80
      # Listen on port 443, default for HTTPS
      - 443:443
    restart: always
    labels:
      # Enable Traefik for this service, to make it available in the public network
      - traefik.enable=true
      # Define the port inside of the Docker service to use
      - traefik.http.services.traefik-dashboard.loadbalancer.server.port=8080
      # Make Traefik use this domain in HTTP
      - traefik.http.routers.traefik-dashboard-http.entrypoints=http
      - traefik.http.routers.traefik-dashboard-http.rule=Host(`traefik.fastapi-with-traefik.example.com`)
      # Use the traefik-public network (declared below)
      - traefik.docker.network=traefik-public
      # traefik-https the actual router using HTTPS
      - traefik.http.routers.traefik-dashboard-https.entrypoints=https
      - traefik.http.routers.traefik-dashboard-https.rule=Host(`traefik.fastapi-with-traefik.example.com`)
      - traefik.http.routers.traefik-dashboard-https.tls=true
      # Use the "le" (Let's Encrypt) resolver created below
      - traefik.http.routers.traefik-dashboard-https.tls.certresolver=le
      # Use the special Traefik service api@internal with the web UI/Dashboard
      - traefik.http.routers.traefik-dashboard-https.service=api@internal
      # https-redirect middleware to redirect HTTP to HTTPS
      - traefik.http.middlewares.https-redirect.redirectscheme.scheme=https
      - traefik.http.middlewares.https-redirect.redirectscheme.permanent=true
      # traefik-http set up only to use the middleware to redirect to https
      - traefik.http.routers.traefik-dashboard-http.middlewares=https-redirect
      # admin-auth middleware with HTTP Basic auth
      # Using the environment variables USERNAME and HASHED_PASSWORD
      - traefik.http.middlewares.admin-auth.basicauth.users=${USERNAME?Variable not set}:${HASHED_PASSWORD?Variable not set}
      # Enable HTTP Basic auth, using the middleware created above
      - traefik.http.routers.traefik-dashboard-https.middlewares=admin-auth
    volumes:
      # Add Docker as a mounted volume, so that Traefik can read the labels of other services
      - /var/run/docker.sock:/var/run/docker.sock:ro
      # Mount the volume to store the certificates
      - traefik-public-certificates:/certificates
    command:
      # Enable Docker in Traefik, so that it reads labels from Docker services
      - --providers.docker
      # Do not expose all Docker services, only the ones explicitly exposed
      - --providers.docker.exposedbydefault=false
      # Create an entrypoint "http" listening on port 80
      - --entrypoints.http.address=:80
      # Create an entrypoint "https" listening on port 443
      - --entrypoints.https.address=:443
      # Create the certificate resolver "le" for Let's Encrypt, using this email for registration
      - --certificatesresolvers.le.acme.email=admin@example.com
      # Store the Let's Encrypt certificates in the mounted volume
      - --certificatesresolvers.le.acme.storage=/certificates/acme.json
      # Use the TLS Challenge for Let's Encrypt
      - --certificatesresolvers.le.acme.tlschallenge=true
    networks:
      # Use the public network declared below
      - traefik-public

volumes:
  # Create a volume to store the certificates, even if the container is recreated
  traefik-public-certificates:

networks:
  traefik-public:
    external: true

docker-compose.yml:

services:
  backend:
    build: ./
    restart: always
    labels:
      # Enable Traefik for this specific "backend" service
      - traefik.enable=true
      # Define the port inside of the Docker service to use
      - traefik.http.services.app.loadbalancer.server.port=80
      # Make Traefik use this domain in HTTP
      - traefik.http.routers.app-http.entrypoints=http
      - traefik.http.routers.app-http.rule=Host(`fastapi-with-traefik.example.com`)
      # Use the traefik-public network (declared below)
      - traefik.docker.network=traefik-public
      # Make Traefik use this domain in HTTPS
      - traefik.http.routers.app-https.entrypoints=https
      - traefik.http.routers.app-https.rule=Host(`fastapi-with-traefik.example.com`)
      - traefik.http.routers.app-https.tls=true
      # Use the "le" (Let's Encrypt) resolver
      - traefik.http.routers.app-https.tls.certresolver=le
      # https-redirect middleware to redirect HTTP to HTTPS
      - traefik.http.middlewares.https-redirect.redirectscheme.scheme=https
      - traefik.http.middlewares.https-redirect.redirectscheme.permanent=true
      # Middleware to redirect HTTP to HTTPS
      - traefik.http.routers.app-http.middlewares=https-redirect
      - traefik.http.routers.app-https.middlewares=admin-auth
    networks:
      # Use the public network created to be shared between Traefik and
      # any other service that needs to be publicly available with HTTPS
      - traefik-public

networks:
  traefik-public:
    external: true

docker-compose.override.yml:

services:
  backend:
    ports:
      - 80:80

networks:
  traefik-public:
    external: false

Start the stacks

There are many approaches for putting your code and Docker images on your server. You could have a very sophisticated Continuous Integration system. But for this example, using a simple rsync would be enough. For example:

rsync -a ./* root@fastapi-with-traefik.example.com:/root/code/fastapi-with-traefik/

Then, inside of your server, make sure you create the Docker network:

docker network create traefik-public

Next, create the environment variables for HTTP Basic Auth.

- Create the username, e.g.: export USERNAME=admin
- Create an environment variable with the password, e.g.: export PASSWORD=changethis
- Use openssl to generate the "hashed" version of the password and store it in an environment variable: export HASHED_PASSWORD=$(openssl passwd -apr1 $PASSWORD)

And now you can start the Traefik Docker Compose stack:

docker-compose -f docker-compose.traefik.yml up

Next, start the main Docker Compose stack:

docker-compose -f docker-compose.yml up -d

Check your app

After that, if everything worked correctly (and probably it didn't work correctly the first time 😅), you should be able to check your new application live at your domain, something like:

https://fastapi-with-traefik.example.com

And the Traefik dashboard at:

https://traefik.fastapi-with-traefik.example.com

And the Traefik dashboard would be protected by HTTP Basic Auth, so no one can go and tamper with your Traefik.

Celebrate 🎉

Congrats! That's a very stable way to have a production application deployed. You can probably improve that a lot: add Continuous Integration, monitoring, logging, use a complete cluster of machines instead of a single one (e.g.
use Kubernetes instead of Docker Compose), etc. There's no limit to adding more stuff and improving it all... But with this, you already have the minimum to serve your users a secure application. And as your deployment is based on Docker, and can be replicated easily and quickly, you could destroy that server, create a new one from scratch, and be live again in minutes. Because it doesn't depend on that specific server. All the important configurations and setup are in your Docker Compose files. And all the important logic and setup of the actual app are in the Docker image (with the Dockerfile). And Docker itself is taking care of having your application running, restarting it after failures or reboots, etc. Dessert 🍰 Do you want a bit more? Check the source code for this blog post, including the latest version of the app and config files, including a basic example with Panel, and one with Streamlit. ✨ Learn More Here are some extra resources: - HowHTTPS.works. - FastAPI docs: HTTPS for developers. - Event video recording in Traefik resources. - Source code in GitHub. - Traefik docs. - FastAPI docs. I hope that was useful! 🚀 About me Hey! 👋 I'm Sebastián Ramírez (tiangolo). You can follow me, contact me, see what I do, or use my open source code: Discussion (1) traefik.http.routers.app-https.middlewares=admin-auth should we use above command inside docker-compose.yml? i got response with 401 UNAUTHORIZED everytime i request to my FastAPI app..
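One note on the admin-auth middleware, related to the question in the discussion above: HTTP Basic auth works by the client sending an Authorization header containing base64(username:password), and any request arriving without that header is answered with 401 Unauthorized. If admin-auth is attached to the app router in docker-compose.yml, every request to the FastAPI app needs this header, which is exactly the 401 behavior described in the comment. A small illustrative Python sketch (the credentials are the placeholder values from the tutorial, not anything you should ship):

```python
import base64

def basic_auth_header(username: str, password: str) -> str:
    """Build the Authorization header value a client must send to get
    past an HTTP Basic auth check like Traefik's basicauth middleware."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return f"Basic {token}"

# The placeholder credentials used earlier in the tutorial.
header = basic_auth_header("admin", "changethis")
print(header)
```

Note that the server never stores this value: Traefik compares the decoded password against the apr1 hash generated with openssl, and TLS is what keeps the base64-encoded credentials from being read in transit.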
https://practicaldev-herokuapp-com.global.ssl.fastly.net/tiangolo/deploying-fastapi-and-other-apps-with-https-powered-by-traefik-5dik
The QDLBrowserClient class displays rich-text containing QDLLinks. More... #include <QDLBrowserClient> Inherits QTextBrowser. The QDLBrowserClient class displays rich-text containing QDLLinks. QDLBrowserClient manages and activates QDLLinks in collaboration with a rich-text document in QTextBrowser and QDLClient. QDLBrowserClient hooks into the QTextBrowser setSource() method so that QDLLinks can be activated on QDL sources when a user clicks or selects the anchor in the text. See also QDLClient and QDLEditClient. Constructs a QDLBrowserClient and attaches it to parent. The QDLClient is identified by name, which should be unique within a group of QDLClients. name should only contain alpha-numeric characters, underscores and spaces. Destroys a QDL Browser Client object. Activates the QDLLink specified by link, which should be in the form QDL://<clientName>:<linkId>. Loads the links in str into the client object. The str is the base64 encoded binary data of the links created by QDL::saveLinks(). See also verifyLinks(). Verifies the correctness of the links stored by the client object. This method determines if QDLLinks are broken, ensures all stored links have anchor text in the parent widget's text, and anchor text for unstored links is removed from the parent widget's text. This method should only be called after QTextBrowser::setText() and loadLinks() have been called. See also loadLinks().
https://doc.qt.io/archives/qtextended4.4/qdlbrowserclient.html
import "golang.org/x/crypto/scrypt"

Package scrypt implements the scrypt key derivation function as defined in Colin Percival's paper "Stronger Key Derivation via Sequential Memory-Hard Functions".

Code:

// DO NOT use this salt value; generate your own random salt. 8 bytes is
// a good length.
salt := []byte{0xc8, 0x28, 0xf2, 0x58, 0xa7, 0x6a, 0xad, 0x7b}

dk, err := scrypt.Key([]byte("some password"), salt, 1<<15, 8, 1, 32)
if err != nil {
    log.Fatal(err)
}
fmt.Println(base64.StdEncoding.EncodeToString(dk))

Output:

lGnMz8io0AUkfzn6Pls1qX20Vs7PGN6sbYQ2TQgY12M=

The recommended parameters for interactive logins as of 2017 are N=32768, r=8 and p=1. The parameters N, r, and p should be increased as memory latency and CPU parallelism increases; consider setting N to the highest power of 2 you can derive within 100 milliseconds. Remember to get a good random salt.

Package scrypt imports 3 packages and is imported by 300 packages. Updated 2017-12-08.
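As a side note, the same KDF is available in Python's standard library as hashlib.scrypt, which is handy for cross-checking parameters against the Go example above. This is an illustration of the algorithm, not the Go API; note that Python requires raising maxmem for these parameters, since the algorithm needs 128*N*r = 32 MiB of working memory:

```python
import base64
import hashlib

# Same inputs as the Go example: N = 1<<15, r = 8, p = 1, 32-byte key.
# DO NOT reuse this salt; generate a random salt per password.
salt = bytes([0xC8, 0x28, 0xF2, 0x58, 0xA7, 0x6A, 0xAD, 0x7B])

dk = hashlib.scrypt(
    b"some password",
    salt=salt,
    n=1 << 15,                # CPU/memory cost, must be a power of 2
    r=8,                      # block size
    p=1,                      # parallelism
    maxmem=64 * 1024 * 1024,  # headroom above the 128*N*r = 32 MiB required
    dklen=32,                 # derived key length in bytes
)

# scrypt is deterministic, so this prints the same value as the Go example.
print(base64.b64encode(dk).decode())
```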
https://godoc.org/golang.org/x/crypto/scrypt
The team blog of the Expression Blend and Design products.

Building a basic media viewer with controls for volume, balance and interacting with the playback

- Overview
- Creating the application
- Task 1: Creating the components for the PlayPause button
- Task 2: Set up the states and triggers on PlayPause
- Task 3: Add the media, and define how PlayPause interacts with it
- Task 4: Add the Volume and Balance controls
- Task 5: Databind the VolumeLabel

Overview

In this tutorial, you will build a basic-looking but functional video player entirely within Microsoft® Expression™ Interactive Designer. You will use templates and triggers to create a custom control that displays the state of the video playback and other parameters. You will explore some of Interactive Designer's path-editing controls, including learning how to interact with the individual vertices of a shape by changing a rectangle into a triangle. You will also use data binding to create controls to interact with the media file. Finally, you will create and use a value converter to modify how a data-bound relationship between two elements is displayed graphically.

The MediaControl will display the media in the center of the application when the PlayPause button (the blue button in the center) is clicked. When the button is clicked, it will change from the 'Play' icon to the 'Pause' icon, and clicking it again will pause the video on the current frame. The BalanceControl on the left will adjust the balance between the left and right speakers, and the VolumeControl on the right will increase or decrease the volume. The VolumeControl also shows the volume in a more conventional 0-10 value, which users are familiar with, instead of the 0.0-1.0 format that Windows Presentation Foundation (WPF) uses.

This tutorial serves as a starting point for playing and interacting with media in WPF applications.
Additional controls, such as 'seeking' to a particular point in the media, or allowing the user to choose the media file to display, may be covered in later tutorials or they can be exercises for the reader. As you work in Interactive Designer, your changes take place in memory and not on your hard drive, so be sure to save early and often. If a palette is collapsed, expand it by clicking Expand on its title bar or simply double-click the title bar. On the File menu, point to New and then click Project. The Create New Project dialog box is displayed. Select the Standard Application (.exe) project type and name the project EIDMediaViewer. Set Language to whatever language you like. We will be writing a small amount of code later on, but the code will be provided in Visual Basic and C# alternatives. Choose the language you're more comfortable with, should you wish to do more experimentation on your own. For convenience, a newly-created project is initially stored in a temporary folder. If there are unsaved changes when you close Interactive Designer then you will be prompted to save the new project in a permanent location. In the Library palette (View | Library), choose the PresentationFramework library from the drop-down list and then select CheckBox from the list of element names (tip: click in the list and press C until CheckBox is selected). The pointer will turn into a crosshair. Draw a CheckBox by clicking on the artboard and dragging a bounding box inside the scene. Release the mouse button when the bounding box is the size, and in the position, that you want the play/pause button to be. You have a choice how you create a new element. You can draw the element freehand as you did in the previous step. Alternatively, you can simply double-click an element name in the Library palette to have Interactive Designer insert the new element with a default size and layout. 
The CheckBox is named CheckBox by default, but its purpose will be clearer if you rename it PlayPause. An element has to be selected before it can be renamed. Look in the structure view in the Timeline palette (View | Timeline) and notice how the background behind the CheckBox entry indicates that it is already selected. This is a workflow convenience for newly-created elements. So now just click the CheckBox entry in the structure view and wait a moment for the name to become editable. Type PlayPause and press ENTER. Newly-created elements are automatically selected in the Timeline palette’s structure view. In other cases you can select the element by expanding the tree if necessary and then clicking the element’s name. Another way to rename an element is first to select it and then use the Properties palette (View | Properties). You can edit the Name of the element either in the large text box at the top of the palette or in the property grid (within the Misc category if you have Categorized the properties). The property grid lists property names in its first column and corresponding values in the second column. In either of these places, edit the text and then press ENTER or TAB when you’re done. An element can be selected by clicking its name in the structure view. Before you click to select an element on the artboard, or to perform certain other artboard tasks with it, you need first to click Selection in the Tools palette (View | Tools). As a scene develops in complexity, using the structure view to select is recommended. Right-click PlayPause and create a new template (Edit Template | Create Empty Template...). This will instruct Expression to create an empty template for our control, but still to treat it fundamentally as a CheckBox. We're going to create Triggers in our template to change how the control looks based on its state. The state in this case being whether or not the control is checked. 
We could edit a copy of the template instead, and this would leave the triggers that normally exist on the CheckBox in place for us. But, since we're only interested in a few of these, it makes more sense to begin with an empty template. The Create ControlTemplate Resource dialog is displayed. By creating a new template, we are creating a new re-usable resource. The key of any resource is the name by which it is uniquely identified within its resource dictionary. In Key, type PlayPauseTemplate. The Define in: option indicates which resource dictionary to define your new template in. A resource dictionary can be defined at a point on a range from very local to very global and this determines the scope in which the resource is visible and available for re-use. If we wanted to use the same control on multiple scenes in our application, we'd define our template in the Application Root. But, since we're only using it on this scene, leave the Define in: option as This document. TargetType is the type of element to which this template will apply. In this case we want the template to apply to any CheckBox so leave the setting as it is. Press OK. Notice now that the timeline has changed to indicate that we're editing a template. In this mode, work done on the artboard will apply to the PlayPauseTemplate. If you inadvertently exit this mode and you wish to return to editing the template then right-click PlayPause, point to Edit Template, and then click Edit Template. The next steps are to build up a new template for the CheckBox which we named PlayPause. CheckBoxes are designed to have a single element of content within them. Since we want to have multiple visual elements in our control, add a Canvas to the control to put them in. This is done most easily by double-clicking Canvas in the Library palette, instead of dragging it out. When created by double-clicking, the Canvas will take up the entire space of our template. The Canvas is a control in the PresentationFramework. 
Let's rename the new canvas PlayPauseTemplateRoot. This is done the same way we renamed the CheckBox earlier in the tutorial. Double-Click the PlayPauseTemplateRoot in the timeline. This will set it as the activated element, so that what we subsequently draw or add to the art-board will be created inside it. Unless we activate PlayPauseTemplateRoot, the root of the template (named Template in the timeline) will remain activated and any new element we create will become its child and will replace the Canvas. When it is activated, PlayPauseTemplateRoot will appear with a yellow outline in the timeline. Add an ellipse to PlayPauseTemplateRoot. Select the Ellipse tool , and draw a new ellipse inside the canvas. You can hold down SHIFT to constrain the ellipse to be a perfect circle. The basic, empty ellipse isn't too exciting, so let's change the appearance of it. Bring up the Appearance palette if it's not already visible (View | Appearance). First, let's use a Radial Gradient. Of course, you can choose whatever appearance you like: this is simply one way to do it. Click Radial Gradient Brush from the brush editor control. When editing a radial gradient, you can create new gradient stops, move gradient stops, and reposition them with the gradient editor. We're not going to cover everything you can do with the gradient editor and Appearance palette in this tutorial. Change the first gradient stop to a dark blue, and the second gradient stop to a lighter blue. Now, we're going to design the Play and Pause symbols. First, we'll create the Pause symbol. If you like, you can zoom into the template. Select PlayPauseTemplateRoot in the timeline, drop down the Zoom list (lower-left corner of the art-board), and select Fit To Screen. This will cause the PlayPauseTemplateRoot to take up the entire view in our artboard. Now the ellipse is much larger, and easier to work with. When you're done, you can reset the view to 100%, to see how everything looks at normal scale. 
A Pause symbol consists of two vertical bars. Now, inside the ellipse - to the left of center - draw a tall, thin Rectangle. This is the left bar of our Pause symbol. For ease later, let's rename it PauseLeft. To color the rectangle solid black, select Solid Color Brush in the Appearance palette, and set the color to black. Verify that PlayPauseTemplateRoot is still the activated element. Then select PauseLeft and Copy and Paste it. You can do this either with Edit | Copy and Edit | Paste, or with CTRL+C and CTRL+V. In the scene, it looks like nothing happened. When you copy an element, all of the properties of that element, including its position, are duplicated. The new rectangle is exactly on top of the original rectangle. You can see the new element by looking in the timeline, and observing that PauseLeft_Copy has been created. Rename PauseLeft_Copy to PauseRight. We want to drag the new rectangle to the right of the original without changing its vertical position. To constrain movement horizontally or vertically, you can hold down SHIFT either before or during the drag operation. With the Selection tool, and holding down SHIFT, drag the rectangle named PauseRight to the right of the rectangle named PauseLeft and drop it when you're happy with its position. Since these two rectangles together constitute the Pause symbol, we can group them to make them easier to work with. Select both rectangles by selecting one of them, and holding down SHIFT while selecting the other. You can group either from the Arrange | Group command, or simply press CTRL+G. This combines the two elements into a single element that can be worked with as a unit. In the timeline, you'll see that a new element has appeared, named Group. If you expand it, you'll see that PauseLeft and PauseRight are the two children of it. Rename Group to Pause. You can also move Pause around, to get it positioned exactly where you want it. 
When Pause is resized, all the elements within it will be resized proportionally. You can also double-click Pause in the timeline, to make it the activated container. Once this is done, you can go back into PauseLeft and PauseRight to reposition them until you get your control exactly how you want it. If you activate Pause, when you're done working with it, remember to activate PlayPauseTemplateRoot before continuing. Now, with Pause selected, go to the Properties palette and find the Visibility property (It's in the Misc category, if you're using the Categorized view). Set Visibility to Hidden. We are hiding the Pause symbol now so that we can draw the Play symbol. Later in the tutorial we'll see how to switch the visibility of the two symbols so that they show at the appropriate time. The next step is the Play symbol. Make sure the PlayPauseTemplateRoot is still activated (it should have a yellow border around it). Draw a rectangle in the center of the ellipse. The rectangle is named Rectangle, but let's rename it to Play. Since we don't want a rectangle, but a triangle, we're going to edit the geometry of the rectangle by turning it into a path. This technique can be used with any of the built-in Expression Interactive Designer shapes. Start by selecting the rectangle and converting it to a Path. Use the Tools | Convert To Path command. The rectangle has been turned into a path and it has kept its shape. It has been moved into the top-left corner of the template but we can reposition it later. Now we can edit the path so that it has any shape we like. A rectangle has four vertices, and a triangle has three. The first step is to delete one of the vertices. Select the Pen tool . When you select the Pen tool, the adorners on our rectangle change. These are the individual points of the geometry of our rectangle. 
We want to delete one of these, so place the mouse cursor over the lower right vertex until the mouse pointer changes to a pen icon with a minus sign beside it. This cursor indicates that the vertex under the pointer will be deleted. When you see this, left-click to delete the vertex. Next, select the Subselection tool. With this tool, we can move individual vertices. Left-click on the upper right vertex, and drag it down to the middle of the right edge of our bounding box. You can use SHIFT to constrain the movement of the vertex to vertical only. You'll get a live preview of how the shape will look as you drag. Release the mouse button, and the rectangle has now been changed into a triangle. With the Selection tool, position the triangle in the center of our button, to get it right where we want it. Then, as we did with the Pause button, set the Visibility in the property palette to Hidden.

Now we have all the components for our PlayPause control. You can make it look like a play button by making the Play element visible, and you can make it look like a pause button by making the Pause group visible. The next step is to explain to WPF when it should look like each of these. To do this, we're going to define States on our template. Each State will have Triggers which will be evaluated while the application is running, and whenever all the Triggers are true, it will cause the button to look the way we specify.

The Checked property on PlayPause will indicate whether the video is playing or not. When the PlayPause is Checked, the video is playing. When it's Unchecked, the video will be paused. So, logically, when our control is checked and the video is playing, we want it to look like a pause button. When the control is unchecked, and the video is not playing, we want it to look like a play button. Expression should still be editing the Template for the PlayPauseTemplate.
If not, right click PlayPause in the timeline or on the scene, and select Edit Template | Edit Template. In the timeline, we need to add a State. Click Create New State . The first thing to notice is that there's a new tab in the timeline. At the moment, it says "[No triggers]", which means that WPF doesn't know when the State is active. We need to add Triggers that define the State. A State can have any number of Triggers, but in this case, we only need one. If the Timeline Properties palette is not visible, bring it up with View | Timeline Properties. This palette has two tabs. Make sure that General is selected. Click Add on the Timeline Properties palette. A row, representing a Trigger is now added, and this is where we specify the condition for the State. The left side is the property or field on the control that we're interested in, and the right side is the value we want to compare against. For now, we need to know if the control is checked or not, which is determined by the IsChecked property on PlayPause. So, in the left field, type IsChecked. When you press ENTER, a couple things happen. First, Expression Interactive Designer recognizes that IsChecked is a property that can be either true or false, so it chooses False as a default value. Additionally, the State is renamed "!IsChecked". The ! symbol indicates "Not", so in this case, the state's name means "Not IsChecked". When this State is selected in the timeline, any changes will only be displayed when PlayPause is not checked. When PlayPause is not checked, the video is not playing. This means we want the Play element visible. Select Play in the timeline. In the property palette, set Visibility to Visible. So, what we've specified now, is that when the IsChecked property on PlayPause is False, we want Play visible, and everything else in the default state. 
In WPF terms, what we've said is that the !IsChecked State is active when the IsChecked property on PlayPause is false, and when it is active, the Visibility of the Play element should be Visible. Next, we want to define the IsChecked state, which is active when the IsChecked property on PlayPause is true, and when active, the Visibility of the Pause group should be Visible. Now, click Add State again. Add a Trigger, but this time, specify that IsChecked is True. This will cause the state to be named IsChecked (notice that the ! character is not there). Just as we did with Play, set the Visibility on Pause to be Visible. We have now created our control. Run the application, and look what happens when you click the control. It should switch from looking like a Play button to a Pause button and back. Our PlayPause control is complete.

For this tutorial, our application is going to have the media element in the project itself. This sample application could be expanded to allow the user to select a media source (a different video clip), but for purposes of this tutorial, we're going to make it so the application has a single video to display. The next step in the tutorial is to add the media, and hook PlayPause up to it. Expression Interactive Designer needs all the assets your program will use to be explicitly added to the project. Media files are considered assets, so we need to add our video to the project. Go to Project | Add Existing Item... A dialog will come up, asking for the location of the item you'd like to work with. Browse to any WMV file on your computer. In the March 2006 CTP, Expression Interactive Designer only supports WMV files, but additional file formats should be supported in future versions.

We need to create a timeline for the media element. By default, Expression will assume that when a new media file is added, you want it to begin playing as soon as the application is started. This means that it will be added to the OnLoaded timeline.
Although this is the most common case, it is not the case for this tutorial, so we're going to make a timeline specifically for our media element. Click the Create New Timeline button. When the dialog comes up to ask for a name, name the timeline MediaTimeline. Once the asset is added, it should be visible in the Projects palette. Double click on the asset to add it to the scene. Alternatively, with the media file selected, you can Right-Click the media file, and choose Insert Into Scene. You'll notice that a new Media Element has been created, with a name that is derived from the name of the media file. In my case, I used Expression.wmv, so the media element is named "expression_wmv". Whatever the media element is named in your scene, rename it MediaElement. A note at this point, MediaTimeline is currently active. This means that any changes you make to the scene are interpreted as part of MediaTimeline. So, for instance, if you were to resize MediaElement, it will be resized at the beginning of MediaTimeline, not actually on the scene. If you need to change the size, position or any properties of any elements, you should switch to the None tab in the timeline first. If you've changed the current timeline, make sure MediaTimeline is selected again. We now need to hook our PlayPause control to MediaTimeline. Bring up the Timeline Properties palette again. If it's not visible, you can find it under View | Timeline Properties. A few words about timelines. A timeline is always in one of three states. It can be Stopped, which is the default state, Playing or Paused. When a timeline is Stopped, nothing can be done with it, except to Begin it. You can't Resume or Pause a Stopped timeline. When you Begin a timeline, it will always begin playing at the beginning, and the timeline is logically considered to be Playing. When a timeline is Playing, you can do two things with it. Pausing a Playing timeline will cause it to pause. 
It will leave the last frame of the media file on screen, and the timeline is now considered Paused. Alternatively, you can End a timeline. This will put the timeline in the Stopped state, and what is displayed varies depending on other properties on the Media Element. Normally, this will be the frame that was displayed when the timeline was ended. Finally, when a timeline is Paused, you can either Resume the timeline, where it will begin the timeline again at the point where it was Paused, putting it in the Playing state. Alternatively, you can End the timeline. Ending a Paused timeline works the same way as Ending a timeline when it is Playing. When our application starts, MediaTimeline will be Stopped. Consider what would happen if we told the PlayPause to Begin MediaTimeline. Then, every time the user hit the Play icon, the media clip would restart at the beginning. This isn't how we want PlayPause to work. We want the play button to Resume MediaTimeline. But, we can't Resume the timeline, until it is in the Paused state. And when the application starts, the timeline is Stopped. The solution, in this case, is to Begin the timeline, and immediately Pause it. So, in the Timeline Properties palette, we want to add an Event. Make sure the General tab is selected. The panel is the same one we used before to set up the States earlier, but this time, as you can see by the panel, we're editing Events now. Three combo boxes appear. From left to right, they indicate the element on the scene that the timeline event is associated with, the event for that element to receive to cause the timeline event to occur, and the action to perform on the timeline. Select DocumentRoot in the first combo box, Loaded in the second combo box, and Begin in the third. This is stating that when the DocumentRoot is Loaded, we will Begin MediaTimeline. Expression knows that we want to apply the event to MediaTimeline because that's the timeline that is selected when we specified the event. 
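The timeline rules just described form a small state machine. Here is an illustrative sketch of those rules in Python (this is not the WPF API, just a model we wrote to make the legal transitions explicit):

```python
class Timeline(object):
    """Models the Stopped / Playing / Paused rules described above."""

    def __init__(self):
        self.state = "Stopped"   # a timeline always starts out Stopped

    def begin(self):
        # Begin is legal from any state, and always restarts from the start.
        self.state = "Playing"

    def pause(self):
        if self.state != "Playing":
            raise ValueError("can only Pause a Playing timeline")
        self.state = "Paused"

    def resume(self):
        if self.state != "Paused":
            raise ValueError("can only Resume a Paused timeline")
        self.state = "Playing"

    def end(self):
        if self.state == "Stopped":
            raise ValueError("a Stopped timeline can only be Begun")
        self.state = "Stopped"
```

With this model, the Begin-then-Pause trick is just begin() followed by pause(), after which resume() and pause() can alternate freely.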
Whenever an application starts, all of the Loaded events on the DocumentRoot happen immediately. So, logically, by doing this, MediaTimeline begins immediately. If you run the application now, you'll see that the media starts playing immediately. However, this is not what we want. We'll add a second event in the Timeline Properties palette. This time, select DocumentRoot and Loaded again, but in the third combo box, select Pause. When multiple events occur at the same time, they are always executed in order from top to bottom. Keep in mind what we discussed before, and look at what we've just told our application to do. First, we Begin MediaTimeline, which puts it into the state where we can Pause or End it. Then, we Pause the timeline. So, the first frame of our media is displayed, but it's Paused, so we can Resume then Pause it at will. Now, with our MediaTimeline ready to be Resumed and Paused, we can hook up PlayPause. Select PlayPause , and hit Add in the Timeline Properties palette again. This time, PlayPause is put into the left box for us automatically, since it's selected. In the middle box, select Checked. Finally, in the last box, select Resume. When PlayPause is not checked, we decided that this means that the media is not playing. So, when we Check the button, we're switching from the state where the media is not playing to when it is playing. So, we Resume the timeline. Similarly, we need to add the event for when PlayPause is Unchecked. In this case, we want to Pause the timeline. Run your application now. If everything has been done correctly, you should now be able to click the button that has the Play icon, to cause the media to start playing, and Pause the video by clicking the button again. The most common mistake, if your application isn't working correctly, is to check the events. Remember that you need to Begin then Pause the video when the DocumentRoot gets Loaded, then you need to Pause and Resume the timeline with PlayPause. 
You should never be Ending the timeline, and the only time you Begin it, is when the DocumentRoot is Loaded. Our application now shows the video, and allows the user to control when it's playing and when it isn't, but we want some more control for the user. In this step, we're going to allow the user to adjust the volume and balance. The BalanceControl is an analog control, which means that the user can nudge it however they want. But our VolumeControl will be a digital control, which has 11 specific settings, numbered from 0 to 10. The user can't set the control between two ticks. Both controls are sliders, but have different behaviors, based on the properties we set. Additionally, we're going to display the volume to the user, but we will use a Value Converter to show the value to the user in the manner we wish, instead of the way the Windows Presentation Framework uses it. First, make sure you're not editing any templates or have any timelines selected. If you're editing a template, click Return Scope To Root . Do this until DocumentRoot is the top item displayed in the timeline. Then, make sure the None tab is selected in the timeline. The Playhead and the red timing ruler should not be visible. We'll start with BalanceControl, since it's the simpler of the two controls. Locate the Slider control in the Library Palette. (View | Library if it's not already visible). The Slider is in the PresentationFramework assembly. Drag the Slider out to the left of PlayPause, and rename it BalanceControl. By default, a Slider is used to represent a floating point number between 0 and 10. Media elements in the Windows Presentation Framework represent audio balance as a floating point number between -1.0 and +1.0, where -1.0 is exclusively the left speaker and +1.0 is exclusively the right. In other words, we need to set properties on BalanceControl, so that it can interact with the Media element in a way that the Windows Presentation Framework understands. 
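As an aside, mapping one numeric range onto another (for example, a slider's default 0 to 10 range onto the -1 to +1 balance range) is just a linear rescale. Here is a quick illustrative helper in Python; it is not part of the project, since setting Minimum and Maximum on the slider, as we do next, lets WPF handle the mapping for us:

```python
def rescale(x, in_lo, in_hi, out_lo, out_hi):
    """Linearly map x from the range [in_lo, in_hi] onto [out_lo, out_hi]."""
    fraction = (x - in_lo) / float(in_hi - in_lo)
    return out_lo + fraction * (out_hi - out_lo)
```

For example, rescale(5, 0, 10, -1, 1) gives 0.0, the centered balance value.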
Open the Properties palette (View | Properties if it's not already visible). First, find the Minimum and Maximum properties. Set the Minimum to -1 and the Maximum to 1. This is the only setting you need to change for the BalanceControl. You can change the SmallChange or LargeChange values, if you expect the user to want to change the value with the keyboard, but for this tutorial, we can leave them alone. It's always best to label controls, so the user knows their function. Select Label from the Library palette. Labels are also in the PresentationFramework assembly. Create one in the artboard above the BalanceControl. Change the name of the Label to BalanceLabel. In the Properties palette, find the Content property for the BalanceLabel and set it to Balance. Finally, you can adjust the location of the BalanceLabel wherever it looks best. Feel free to experiment with other properties on the BalanceLabel, such as the FontFamily or FontSize to get the appearance more to your liking. Now, drag out another Slider on the right side of the PlayPause button, and name this one VolumeControl. Just as we did with BalanceControl, we need to change the Minimum and Maximum on VolumeControl to be in a format that the Volume property on the media element understands. The Volume in the Windows Presentation Framework is expressed as a floating point number between 0 and 1. So, set the Minimum and Maximum on VolumeControl to 0 and 1 respectively. For the VolumeControl, we want to supply some tick marks to make it easier for users to see where the values are. The TickFrequency property tells VolumeControl how to display the ticks. The first tick will always be placed at the Minimum value, and then another tick will be placed every TickFrequency thereafter. There will also be a tick placed at the Maximum value, even if a tick wouldn't normally be put there. So, consider if we set TickFrequency to 0.1. We would get a tick at 0, 0.1, 0.2, 0.3, and so on...
This would create 11 ticks total. And if we think about it, for VolumeControl, we want 11 ticks, one for each value between 0 and 10. So, set the TickFrequency to 0.1. Finally, by default, Sliders don't display the ticks. We need to set the TickPlacement property. Any value other than None will show the ticks, and it's purely aesthetic whether we put the ticks above or below the slider. For this tutorial, I chose TopLeft, but feel free to use whichever setting you like best. If you test the application now, you'll find that VolumeControl can go to any position, including between the ticks. For BalanceControl, this is fine, since we might want the user to have a more analog experience, but, for this tutorial, we're going to confine the user to only the ticks on the VolumeControl. Locate the IsSnapToTickEnabled property in the Properties palette. If you have properties categorized, it's in the Behavior property group. Set this property to True. Now, if you test the application, the thumb on the slider will only visit the positions where the tick marks are. Draw out another Label, this time over the VolumeControl, and name it VolumeLabel. You can change the properties on it to match BalanceLabel, but don't worry about the Content of it. The BalanceLabel has a Content that doesn't change, so we can just set it. But, for the VolumeLabel we're going to databind the value, so it changes as we change the VolumeControl. Now, we've created our controls, but we need to explain to the application how they interact with MediaElement. First, select MediaElement. Locate the Volume property in the Properties palette, and click on it with the right mouse button. A context menu will be displayed, and on that menu, select Databind.... The Create Data Binding dialog comes up, and on this panel, we'll explain what the relationship between the Volume on the MediaElement and the Value of the VolumeControl is.
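To double-check the tick arithmetic above (Minimum 0, Maximum 1, TickFrequency 0.1 should produce 11 ticks), a few lines of Python suffice. This is purely illustrative; WPF computes the tick positions internally:

```python
def tick_values(minimum, maximum, frequency):
    """Ticks at minimum, then every `frequency`, plus one at maximum."""
    ticks = []
    i = 0
    while True:
        v = minimum + i * frequency
        if v >= maximum - 1e-9:        # tolerance for floating point drift
            break
        ticks.append(round(v, 10))
        i += 1
    ticks.append(maximum)              # the Maximum value always gets a tick
    return ticks
```

tick_values(0, 1, 0.1) returns 11 values, one per volume setting from 0 to 10.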
If you look at the top of the dialog, you'll see that we need to specify the source for MediaElement.Volume. The Value property on VolumeControl is that source. So, select ElementProperty first, since the source is a property on an element. A list of all the scene elements on our scene is displayed on the left, and VolumeControl appears in that list. Select it. The Value of the VolumeControl is our source, so scroll the Properties list until you find Value and select it. Once this is done, click Finish. In the same way, we need to databind the Balance on MediaElement to the Value on BalanceControl. Click Balance in the Properties palette with the right mouse button, and select Databind... again. Choose to bind to an Element Property, choose the BalanceControl as the Scene Element, and choose Value as the property. Click Finish to create the binding. Now test the application. You'll see that you can now change the balance and the volume of your media with the two sliders. By this point, we have an application that shows a video, allows the user to pause and restart the video, and even lets the user adjust the volume and balance of playback. The last thing we want to do is change the VolumeLabel to display the current value. However, the VolumeControl represents the volume in a different format from how we want to present it to the user. We're going to use databinding and value converters to not only update the Content of the VolumeLabel as we change the VolumeControl, but also present the value the way we want. The first step here is to create the value converter. For this, we're going to write a small amount of code, but it's pretty simple. The first thing we need is a code document to put the value converter in. From the menu, select File | New | Code File. The first time you do this, Expression may take a couple moments to prepare the coding environment. This is normal, and it shouldn't take more than a couple moments.
Once the code editor comes up, replace the contents with the following code. (Make sure you use the correct code, depending on whether you're using C# or Visual Basic.)

C#:

using System;
using System.IO;
using System.Net;
using System.Globalization;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Data;
using System.Windows.Media;
using System.Windows.Media.Animation;
using System.Windows.Navigation;

namespace ValueConverterDemo
{
    public class SliderToVolume : IValueConverter
    {
        public object Convert(object input, Type type, object parameter, CultureInfo cultureInfo)
        {
            double value = System.Convert.ToDouble(input);
            return "Volume: " + Math.Round(value * 10).ToString();
        }

        public object ConvertBack(object input, Type type, object parameter, CultureInfo cultureInfo)
        {
            return null;
        }

        public SliderToVolume()
        {
        }
    }
}

Visual Basic:

Imports System
Imports System.IO
Imports System.Net
Imports System.Globalization
Imports System.Windows
Imports System.Windows.Controls
Imports System.Windows.Data
Imports System.Windows.Media
Imports System.Windows.Media.Animation
Imports System.Windows.Navigation

Namespace ValueConverterDemo
    Public Class SliderToVolume
        Implements IValueConverter

        Public Sub New()
        End Sub

        Public Function Convert(ByVal value As Object, ByVal targetType As Type, ByVal parameter As Object, ByVal culture As Globalization.CultureInfo) As Object Implements IValueConverter.Convert
            Dim myValue As Double
            myValue = System.Convert.ToDouble(value)
            Return "Volume: " + Math.Round(myValue * 10).ToString
        End Function

        Public Function ConvertBack(ByVal value As Object, ByVal targetType As Type, ByVal parameter As Object, ByVal culture As Globalization.CultureInfo) As Object Implements IValueConverter.ConvertBack
            Return String.Empty
        End Function
    End Class
End Namespace

You'll need to build your project, so that Expression Interactive Designer knows about the new Value Converter that you just created. You can do this by pressing CTRL+SHIFT+B, or by selecting Project | Build Project.
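The arithmetic inside Convert is easy to sanity-check outside the project. Here is the same mapping sketched in Python (note that .NET's Math.Round uses banker's rounding, so inputs that land exactly on .5 could differ from other rounding schemes; the checks below avoid that edge):

```python
def slider_to_volume(value):
    """Mirror of SliderToVolume.Convert: maps 0.0-1.0 to 'Volume: 0'..'Volume: 10'."""
    return "Volume: %d" % int(round(value * 10))
```

For example, a slider Value of 0.7 is displayed as "Volume: 7".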
We need to databind the Content of the VolumeLabel to the Value of the VolumeControl with our Value Converter. Select the VolumeLabel on the scene. Click on the Content property with the right mouse button, and select Databind.... As before, we want our binding source to be an Element Property. So, select that at the top of the Create Data Binding dialog. Select VolumeControl in the list of Scene Elements. Select Value in the list of Properties. Now, we need to specify the Value Converter to use. At the bottom of the dialog, expand More Binding Options.... As you can see, Expression Interactive Designer is not using a Value Converter. If you open the combo box to specify a Value Converter, our new converter is not displayed. We need to explain where our Value Converter is. Click the ... button. The Add value converter dialog is presented to us and with it, we specify the Value Converter we want to use. At the top of the list, you should see the EIDMediaViewer. This represents the assembly that contains our entire project, including any code that we've specified. If our Value Converter were in another assembly, say one supplied by a developer, we'd select that assembly instead. However, our Value Converter is defined in our project, so expand the EIDMediaViewer assembly. Once you expand the assembly, you'll see our ValueConverterDemo namespace and right under it, the SliderToVolume object. Highlight SliderToVolume by clicking on it, and press Ok. Our ValueConverter is set to SliderToVolume for us, since we just added it. Press Finish, now that the databinding is complete. Press F5 to test the project. Notice that as you change the VolumeControl, the VolumeLabel updates as we specified in our Value Converter. The "Volume:" string is displayed, and our volume is shown as a number between 0 and 10 instead of 0 and 1. This completes the tutorial.
Lecture 11 — Decisions Part 2

Overview — Logic and Decision Making, part 2

- In this lecture, we will first talk about program structure and debugging.
- Then, we will talk about logic in more detail.
  - Logic is the key to getting programs right
  - Boolean logic
  - Nested if statements
  - Assigning boolean variables

Part 1: Program Structure

Programming requires four basic skills:

- Develop a solution.
  - Start with small steps, solve them and then add complexity.
- Structure your code.
  - Abstract out repeated code into functions.
  - Make your code easy to read and modify. Use meaningful variable names and break complex operations into smaller parts.
  - Do not hard code values, so that it is easy to change the program.
  - Include comments for important functions, but do not clutter your code with unnecessary comments. If a function makes an assumption about the validity or type of its inputs, this is a good thing to document in your code.
  - If your function is meant to return a value, make sure it always returns a value.
- Test your code.
  - Find all possible cases that your program should work for. As programs get larger, this is increasingly hard.
  - If you cannot check for all inputs, then you must check your code for meaningful inputs: regular (expected) inputs and edge cases (inputs that can break your code).
- Debug your code.
  - Never assume an untested part of your code is bug free.
  - Learn syntax well so that you can spot and fix syntax errors fast.
  - Semantic errors are much harder to find and fix. You need strategies to isolate where the error is.
    - Print output before and after crucial steps.
    - Look at where your program crashed.
    - Use a debugger.
    - Simplify the problem. Remove parts of your code until you no longer have the error.
    - Test parts of your code separately and, once you are convinced they are bug free, concentrate on other parts.

We will attempt to write an unbreakable program in today's lecture.
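As a taste of the "unbreakable" theme, here is one small building block: validating a string before converting it to an integer. The helper below is our own illustration, not from the lecture:

```python
def is_positive_int(text):
    """Return True if the string text represents a positive integer."""
    text = text.strip()
    # isdigit() rejects signs, decimal points, and non-numeric characters,
    # so only strings like "7" or "123" pass; then we rule out zero.
    return text.isdigit() and int(text) > 0
```

A program can keep re-prompting with raw_input until this returns True, instead of crashing on bad input.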
Help with debugging: Infinite Loops and Other Errors

One important danger with while loops is that they may not stop! For example, it is possible that the following code runs "forever". How?

n = int(raw_input("Enter a positive integer ==> "))
sum = 0
i = 0
while i != n:
    sum += i
    i += 1
print 'Sum is', sum

Program organization

Visualize code as having two main parts: the main body and the functions that help the main code. Make sure your functions do only one thing, so that they are easy to test and can be used in many places. We will discuss the above by going through the following example. Note the new addition to the program structure:

if __name__ == "__main__":
    # Put the main body of the program below this line

When we execute a program directly, the above line has no effect. But it helps to separate the main body of the program from the function definitions. When we import a program from another program (like the utility code I have been giving you), any code below the above line is skipped! This allows programs to work both as modules and as stand-alone code. It is best to use this section for test code for your modules.

Example: Random Walk

Many numerical simulations, including many video games, involve random events. Python includes a module, random, to generate numbers at random. In particular, suppose a (very drunk) person takes a step to the left or a step to the right, completely at random (equally likely to go left or right), in one time unit.

- If the person is on a platform with a fixed number of steps and starts in the middle, this process is repeated until s/he falls off either end (reaches step 0 or steps past the far end).
- How long does this take?

Many variations on this problem appear in physical simulations. We can simulate a step in two ways:

- If random.random() returns a value less than 0.5, step to the left; otherwise step to the right.
- If random.randint(0,1) returns 0, then step left; otherwise, step right.
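Either of the two simulation methods just listed boils down to a function that returns -1 (a step left) or +1 (a step right) with equal probability. A sketch (the function names are ours):

```python
import random

def random_step():
    # Method 1: compare a uniform float in [0, 1) against 0.5
    if random.random() < 0.5:
        return -1      # step left
    else:
        return 1       # step right

def random_step_v2():
    # Method 2: draw a random integer, 0 or 1
    if random.randint(0, 1) == 0:
        return -1      # step left
    else:
        return 1       # step right
```

The starter code below can then add random_step() (or the v2 version) to the current location on each iteration.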
We’ll write the code in class, starting from the following:

import random

# Print the output
def print_platform( iteration, location, width ):
    before = location-1
    after = width-location
    platform = '_'*before + 'X' + '_'*after
    print "%4d: %s" %(iteration,platform),
    raw_input( ' <enter>')      # wait for an <enter> before the next step

#######################################################################

if __name__ == "__main__":

    # Get the width of the platform
    n = int( raw_input("Input width of the platform ==> ") )

Review:

- We will work through the example of writing a boolean expression to decide whether or not two rectangles with sides parallel to each other and to the x and y axes intersect.

Short-Circuited Boolean Expressions

Python only evaluates expressions as far as needed to make a decision. Therefore, in a boolean expression of the form

ex1 and ex2

ex2 will not be evaluated if ex1 evaluates to False. Think about why. Also, in a boolean expression of the form

ex1 or ex2

ex2 will not be evaluated if ex1 evaluates to True. This "short-circuiting" is common across many programming languages.

Let's try to check for valid input.

x = raw_input("Enter a positive number ==> ")

Part 1 Exercises

- Write a boolean expression to determine if two circles, with centers at locations x0, y0 and x1, y1 and radii r0 and r1, intersect.
- Suppose you have a tuple that stores the semester and year a course was taken, as in

  when = ('Spring',2013)

  Write a function that takes two such tuples as arguments and returns True if the first tuple represents an earlier semester and False otherwise. The possible semesters are 'Spring' and 'Fall'.

Part 2 — More Complicated/Nested If Statements

We can place if statements inside of other if statements. To illustrate:

if ex1:
    if ex2:
        blockA
    elif ex3:
        blockB
elif ex4:
    blockD
else:
    blockE

We can also work back and forth between this structure and a single-level if, elif, elif, else structure. We will work through this example in class.
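As a concrete, made-up illustration of moving between the two structures, here is the same quadrant classification written both ways:

```python
def quadrant_nested(x, y):
    # Nested form: one decision per level
    if x >= 0:
        if y >= 0:
            return 'NE'
        else:
            return 'SE'
    else:
        if y >= 0:
            return 'NW'
        else:
            return 'SW'

def quadrant_flat(x, y):
    # Single-level form: each elif runs only when the earlier tests fail
    if x >= 0 and y >= 0:
        return 'NE'
    elif x >= 0:
        return 'SE'
    elif y >= 0:
        return 'NW'
    else:
        return 'SW'
```

The two functions always agree; the flattening works precisely because elif branches are reached only when the preceding conditions are false.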
Example: Ordering Three Values

- Suppose three siblings, Dale, Erin and Sam, have heights stored in the variables hd, he and hs, respectively.
- We'd like code to print the names in order of height from greatest to least.
- We'll consider doing this with nested if statements and with a single-level if, elif structure.

Part 2 Exercises

- Check that a string contains a float.
- Suppose we represent the date as three integers in a tuple giving the month, the day and the year, as in

  d = (2, 26, 2013)

  Write a Python function called is_earlier that takes two date tuples and returns True if the first date, d1, is earlier than the second, d2. It should return False otherwise. Try to write this in three different ways:

  1. Nested ifs,
  2. if - elif - elif - else,
  3. A single boolean expression.

Part 3: Storing Conditionals

Sometimes we store the result of a boolean expression in a variable for later use:

f = float(raw_input("Enter a Fahrenheit temperature: "))
is_below_freezing = f < 32.0
if is_below_freezing:
    print "Brrr. It is cold"

We use this to

- Make code clearer
- Avoid repeating tests

Example from the Textbook

Doctors often assess a patient's health risk from attributes such as age and weight (here captured by the booleans young and slim):

if young:
    if slim:
        risk = 'low'
    else:
        risk = 'medium'
else:
    if slim:
        risk = 'medium'
    else:
        risk = 'high'

Part 3 Exercises

- Rewrite the previous code without the nested if's. There are several good solutions.
- Make the code for checking whether two circles intersect unbreakable.
- Write code to check if two rectangles intersect.

Summary

- Think carefully about all the cases your program must handle.
- If statements can be structured in many ways; choose the structure that makes your logic clearest.
I've had serious problems figuring out where the OnAfterInstall event goes. Let me explain myself. I created a C# project which compiles perfectly, and built it in Release mode. After that, I created a Setup Project using the wizard. I have added an extra dialog, which lets the user choose between two languages. Now, my problem is that I want to store that language in the registry (or the app.config file, the easier the better), and I've read that you need to detect it within the OnAfterInstall method in an inherited class of Installer. Now, where should I put that class? Logic tells me it goes in the C# project, but it complains that neither Context nor the Installer class exist. When I add this class to the Setup Project, it doesn't complain, but it doesn't work after that. Here's the class:

using System;
using System.Collections;
using System.Configuration.Install;
using Microsoft.Win32;

public class Install : Installer
{
    public Install()
    {
    }

    protected override void OnAfterInstall(IDictionary savedState)
    {
        string lang = Context.Parameters["lang"];

        RegistryKey key = Registry.LocalMachine;
        using (key = key.CreateSubKey(@"SOFTWARE\MyCompany\MyApp"))
        {
            key.SetValue("lang", lang);
            key.Close();
        }

        base.OnAfterInstall(savedState);
    }
}
https://codedump.io/share/wpKsM1P81cRk/1/where-does-onafterinstall-event-go
Adding Phone Number to User Model

Apologies if this question is too general. I'm relatively new to Rails and development in general. I am creating a Rails app that uses Twilio for SMS verification. Users sign up for the app using their name and phone number. That phone number then gets verified by a pin delivered via SMS. I am having a tough time figuring out whether it would be easier and/or better to use a phone_number model and then create an association between users and phone numbers using belongs_to and has_one, or if I can make the phone number a part of the User model. I am using Rails 4.2.1.

Here is my User model:

    class User < ActiveRecord::Base
      has_secure_password
      validates_presence_of :name
      validates_presence_of :phone_number
      validates_uniqueness_of :phone_number
    end

Users controller:

    class UsersController < ApplicationController
      def new
        @user = User.new
      end

      def create
        @user = User.new(user_params)
        if @user.save
          session[:user_id] = @user.id
          redirect_to root_url, notice: "Saved"
        else
          render 'new'
        end
      end

      private

      def user_params
        params.require(:user).permit(:name, :phone_number, :password, :password_confirmation)
      end
    end

This is the Twilio tutorial I've been attempting to follow:

Am I overthinking this? Is a user really just a phone_number with a name? If I do wind up having to create a phone number model/controller, how will I modify my routes? And how would that change my sign up page, which right now includes this:

    <div class="field">
      <%= f.label :phone_number %><br />
      <%= f.text_field :phone_number %>
    </div>

Answers

You can make the phone number part of the user model and just use the params[:user][:phone_number] parameter to create it. It is really up to you and what kind of user experience you are trying to build. Does the user need to verify his/her phone number during registration? Does the user have to do it after creating an account? In the first case, you can make the phone number a separate model which belongs to the user.

You put all the methods for the Twilio integration in the phone_number model and use them in the users_controller. You can make the verification step simpler by avoiding AJAX and putting it as the next step of the user registration. Here's how it might look:

In your model:

    class PhoneNumber < ActiveRecord::Base
      belongs_to :user
      ...
    end

In your users controller:

    def create
      @user = User.new(permitted_params)
      # Note: @user has no id until it is saved, so save it before
      # associating the phone number with it.
      @phone_number = PhoneNumber.find_or_create_by(user_id: @user.id,
                                                    phone_number: params[:user][:phone_number])
      @phone_number.generate_pin
      @phone_number.send_pin
      ...
    end
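The answer above calls @phone_number.generate_pin and @phone_number.send_pin without defining them; neither is a Rails built-in, so they have to be written in the PhoneNumber model. Here is a hedged sketch of what generate_pin might look like (the zero-padded 4-digit format is an assumption, and the class is shown standalone rather than as an ActiveRecord model):

```ruby
require 'securerandom'

class PhoneNumber
  attr_reader :pin

  # Hypothetical helper: generate and remember a zero-padded
  # 4-digit PIN to be delivered by SMS.
  def generate_pin
    @pin = format('%04d', SecureRandom.random_number(10_000))
  end
end
```

A send_pin counterpart would then hand @pin to the Twilio REST client, as in the tutorial the question refers to.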
http://unixresources.net/faq/32129608.shtml
Python

Getting Python

In order to program in Python you need the Python interpreter. If it is not already installed, or if the version you are using is obsolete, you will need to obtain and install Python using the methods below:

Python 2 vs Python 3

Windows

By default, the Cygwin installer for Windows does not include Python in the downloads. However, it can be selected from the list of packages.

Installing Python on Mac

CPython ships with IDLE; however, IDLE is not considered user-friendly.[1] For Linux, KDevelop and Spyder are popular. For Windows, PyScripter is free, quick to install, and comes included with PortablePython.

Keeping Up to Date

Interactive mode

Python mode (e.g. a ~/pythonpractice directory, added to your shell rc file, for example ~/.bashrc).

Built-in Data types

Numbers

Python 2.x supports four distinct numeric types.

Strings

String operations

Equality

There are two quasi-numerical operations which can be done on strings: addition and multiplication. String addition is just another name for concatenation. String multiplication is repetitive addition, i.e. repeated concatenation. So:

    >>> c = 'a'
    >>> c + 'b'
    'ab'
    >>> c * 5
    'aaaaa'

Containment

Sets in Python at a glance:

    set1 = set()                    # A new empty set
    set1.add("cat")                 # Add a single member
    set1.update(["dog", "mouse"])   # Add several members
    if "cat" in set1:               # Membership test
        set1.remove("cat")
    # set1.remove("elephant") - throws an error
    print set1
    for item in set1:               # Iteration AKA for each element
        print item
    print "Item count:", len(set1)  # Length AKA size AKA item count
    isempty = len(set1) == 0        # Test for emptiness
    set8 = set1.copy()              # Copy
    set8.clear()                    # Clear AKA empty AKA erase
    print set1, set8, isempty

Membership Testing

Iteration Over Sets

We can also have a loop move over each of the items in a set. However, since sets are unordered, it is undefined which order the iteration will follow.

    >>> s = set("blerg")
    >>> for n in s:
    ...     print n,
    ...
    r b e l g

Set Operations

Any element which is in both s1 and s2 will appear in their intersection:

    >>> s1 = set([4, 6, 9])
    >>> s2 = set([1, 6, 8])
    >>> s1.intersection(s2)
    set([6])
    >>> s1 & s2
    set([6])
    >>> s1.intersection_update(s2)
    >>> s1
    set([6])

Union

Set Difference

Exercises

Comparison

Numbers, strings and other types can be compared for equality/inequality and ordering:

    >>> 2 == 3
    False
    >>> 3 == 3
    True
    >>> 2 < 3
    True
    >>> "a" < "aa"
    True

Identity

The operators is and is not test for object identity: x is y is true if and only if x and y are references to the same object in memory. x is not y yields the inverse truth value. Note that an identity test is more stringent than an equality test, since two distinct objects may have the same value.

    >>> [1,2,3] == [1,2,3]
    True
    >>> [1,2,3] is [1,2,3]
    False

For the built-in immutable data types (like int, str and tuple) Python uses caching mechanisms to improve performance, i.e., the interpreter may decide to reuse an existing immutable object instead of generating a new one with the same value. The details of object caching are subject to change between different Python versions and are not guaranteed to be system-independent, so identity checks on immutable objects like 'hello' is 'hello', (1,2,3) is (1,2,3), 4 is 2**2 may give different results on different machines.

Augmented Assignment

- ↑ What's New in Python 2.2
- ↑ PEP 238 -- Changing the Division Operator

Default Argument Values

    # list1 gets cleared
    print list1
    list1 = [1, 2]
    print evilGetLength(list1[:])  # Pass a copy of list1
    print list1

Calling Functions

- 4.6. Defining Functions, The Python Tutorial, docs.python.org

Scoping

Variables

Variables in Python are automatically declared by assignment. Variables are always references to objects, and are never typed. Variables exist only in the current scope or global scope. When they go out of scope, the variables are destroyed, but the objects to which they refer are not (unless the number of references to the object drops to zero).

Scope is delineated by function and class blocks. Both functions and their scopes can be nested. So therefore

    def foo():
        def bar():
            x = 5          # x is now in scope
            return x + y   # y is defined in the enclosing scope later
        y = 10
        return bar()       # now that y is defined, bar's scope includes y

Now when this code is tested,

    >>> foo()
    15
    >>> bar()
    Traceback (most recent call last):
      File "<pyshell#26>", line 1, in -toplevel-
        bar()
    NameError: name 'bar' is not defined

The name 'bar' is not found because a higher scope does not have access to the names lower in the hierarchy.

It is a common pitfall to fail to look up an attribute (such as a method) of an object (such as a container) referenced by a variable before the variable is assigned the object. In its most common form:

    >>> for x in range(10):
            y.append(x)  # append is an attribute of lists

    Traceback (most recent call last):
      File "<pyshell#46>", line 2, in -toplevel-
        y.append(x)
    NameError: name 'y' is not defined

Here, to correct this problem, one must add y = [] before the for loop.

Exceptions

Recovering and continuing with finally

All built-in Python exceptions

Exotic uses of exceptions

Input and output

Input

File Objects

File Output

Modules

The second statement means that all the elements in the math namespace are added to the current scope.

Creating a Module From a File

The easiest way to create a module.

External links

Classes

Defining a Class

In order to access a member of an instance of a class, use the syntax <class instance>.<member>. It is also possible to access the members of the class definition with <class name>.<member>.

Methods

Like all object oriented languages, Python provides for inheritance. Inheritance is a simple concept by which a class can extend the facilities of another class, or in Python's case, multiple other classes. Use the following format for this:

    class ClassName(superclass1, superclass2, superclass3, ...):
        ...

The subclass will then have all the members of its superclasses. If a method is defined in the subclass and in the superclass, the member in the subclass will override the one in the superclass. In order to use the method defined in the superclass, ...

Special encapsulation is private.

Metaclasses

In Python, classes are themselves objects. Just as other objects are instances of a particular class, classes themselves are instances of a metaclass.

The split function splits a string based on a given regular expression:

    >>> import re
    >>> mystring = '1. First part 2. Second part 3. Third part'
    >>> re.split(r'\d\.', mystring)
    ['', ' First part ', ' Second part ', ' Third part']

Escaping

The different flags used with regular expressions:

Pattern objects

- Python re documentation - Full documentation for the re module, including pattern objects and match objects

GUI Programming

There are various GUI toolkits to start with: Tkinter, ...

Authors

Authors of the Python textbook.

Crystal Space is accessible from Python in two ways: (1) as a Crystal Space plugin module in which C++ code can call upon Python code, and in which Python code can call upon Crystal Space; (2) as a pure Python module named 'cspace' which one can 'import' from within Python programs. To use the first option, load the 'cspython' plugin as you would load any other Crystal Space plugin, and interact with it via the SCF 'iScript' interface.

Files

Getting the current working directory: os.getcwd()
Changing the current working directory: os.chdir(r'...')

-> see Python Programming/Databases

This code creates the problem shown below:

    ImportError: DLL load failed: The specified procedure could not be found.

MySQL connection in Python -> see Python Programming/Databases

SQLAlchemy in Action

References

- ↑ Hammond, M.; Robinson, A. (2000). Python Programming on Win32. O'Reilly. ISBN 1-56592-621-8.
- ↑ Lemburg, M.-A. (2007). "Python Database API Specification v2.0". Python.

External links

- SQLAlchemy
- SQLObject
- PEP 249 - Python Database API Specification v2.0
- Database Topic Guide on python.org

Web Page Harvesting

Installing the Python development package ensures that you can use the line #include <Python.h> in C source code. On other systems, like openSUSE, the needed package is called python-devel and can be installed using zypper:

    $ sudo zypper install python-devel swig

    #include <CGAL/Cartesian.h>
    #include <CGAL/Range_segment_tree_traits.h>
    #include <CGAL/Range_tree_k.h>

    typedef CGAL::Cartesian<double> K;
    typedef CGAL::Range_tree_map_traits_2<K, char> Traits;
    typedef CGAL::Range_tree_2<Traits> Range_tree_2_type;
http://en.m.wikibooks.org/wiki/Python_Programming/Live_print_version
Next: Alternative Buses, Previous: Receiving Method Calls, Up: Top [Contents][Index]

Signals

Signals are one-way messages. They carry input parameters, which are received by all objects which have registered for such a signal.

This function is similar to dbus-call-method. The difference is that there are no returning output parameters. The function emits signal on the D-Bus bus. bus is either the symbol :system or :session.

With this function, an application registers for a signal on the D-Bus bus. bus is either the symbol :system or :session. :pathN arguments can be used for object path wildcard matches as specified by D-Bus, while an :argN argument requires an exact match.

:arg-namespace string
:path-namespace string
:eavesdrop

dbus-register-signal returns a Lisp object, which can be used as an argument to dbus-unregister-object to remove the registration.

The handler must therefore define one single string argument. Plugging a USB device into your machine, when registered for the 'DeviceAdded' signal, will show you which objects the GNU/Linux hal daemon adds.

For backward compatibility, a broadcast message is also emitted if service is the known or unique name Emacs is registered under on the D-Bus bus.

For backward compatibility, the arguments args can also be just strings. They stand for the respective arguments of signal in their order, and are used for filtering as well. A nil argument might be used to preserve the order.
http://www.gnu.org/software/emacs/manual/html_node/dbus/Signals.html
#include <opencv2/flann/heap.h>

Priority Queue Implementation

The priority queue is implemented with a heap. A heap is a complete (full) binary tree in which each parent is less than both of its children, but the order of the children is unspecified.

Constructor.
Params: sz = heap size

Clears the heap.

Tests if the heap is empty.
Returns: true if the heap is empty, false otherwise

Insert a new element into the heap. We select the next empty leaf node, and then keep moving any larger parents down until the right location is found to store this element.
Params: value = the new element to be inserted in the heap

Returns the node of minimum value from the heap (top of the heap).
Params: value = out parameter used to return the min element
Returns: false if the heap is empty

Returns: heap size
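cvflann::Heap is internal to OpenCV, but the behavior documented above (insert keeps the smallest element at the top; popMin removes it) is exactly that of a standard min-heap, so the same semantics can be sketched with std::priority_queue from the C++ standard library. This is an illustration of the concept, not the OpenCV implementation:

```cpp
#include <functional>
#include <queue>
#include <vector>

// Drain a min-heap built from xs; elements come out in ascending order,
// mirroring repeated popMin() calls on cvflann::Heap.
std::vector<int> drain_min_heap(const std::vector<int>& xs) {
    std::priority_queue<int, std::vector<int>, std::greater<int>> heap(
        xs.begin(), xs.end());
    std::vector<int> out;
    while (!heap.empty()) {         // emptiness test, as in Heap::empty()
        out.push_back(heap.top());  // current minimum, as in popMin()
        heap.pop();
    }
    return out;
}
```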
https://docs.opencv.org/4.1.1/d0/d71/classcvflann_1_1Heap.html
Develop SAP SAPUI5 Application for SAP BTP on Cloud Foundry

You will learn

- How to create a new SAPUI5 application for SAP Business Technology Platform (BTP), Cloud Foundry environment
- How to configure your Cloud Foundry settings in SAP Web IDE
- How to build and deploy your application to Cloud Foundry

Prerequisites

- Make sure you have access to the trial version of SAP Web IDE Full-Stack.
- To access Web IDE, go through the Prepare SAP Web IDE for Cloud Foundry Development tutorial.

Create, configure, build, and deploy a simple application on Cloud Foundry in SAP Web IDE Full-Stack.

In SAP Web IDE Full-Stack, right-click your workspace and choose New > Project from Template. In the template wizard that opens, in the Environment dropdown list, make sure that Cloud Foundry is selected and that the Category is Featured; otherwise the SAPUI5 Application tile will not appear. Scroll down, click the SAPUI5 Application tile and then click Next.

In the Basic Information screen, in the Module Name field, enter FioriDemo. In the Namespace field, enter mynamespace and then choose Next. In the Template Customization screen, accept the default values shown below and choose Finish.

A new MTA project called mta_FioriDemo containing the FioriDemo HTML5 module now appears in your SAP Web IDE workspace. When developing apps in the Cloud Foundry environment, you create a Multi-Target Application (MTA) file in SAP Web IDE. Each SAP Fiori app is developed as an SAPUI5 module of the MTA. You can alternatively choose the Multi-Target Application template, which will create an MTA project structure, and then add new modules to the project.

Now you need to open the layout editor in SAP Web IDE to easily make a few changes. Choose FioriDemo > webapp > view and right-click the View1.view.xml file that you created in the wizard in the previous step. Choose Open Layout Editor. Now you will make some changes using the layout editor, with no need to do any coding.

In the layout editor, in the Controls pane, in the Search box at the top, enter Text to filter the controls list. Select the Text control. Drag the Text control and drop it on the View control in the canvas to the right. Select the Text control, and in the Properties pane on the right, in the Text property, clear the default text and enter SAP Fiori on Cloud Foundry. Save your work by clicking either the Save or Save All icon located at the top of the workspace.

Now, before you can build and deploy your new application, check your Cloud Foundry preferences. Open the Preferences perspective in SAP Web IDE by clicking the Preferences icon and then select Cloud Foundry. In the pane on the right, select the API endpoint, organization and space for your project. If you are using a trial account, these values are automatically populated. Click Save.

Now you need to run your new application to test it. But first, check the project settings to make sure that Cloud Foundry is enabled for your project. By default, the target environment in your run configuration is set to Cloud Foundry. In the workspace, right-click the FioriDemo folder, then choose Run > Run Configurations. In the Run Configurations for FioriDemo window that opens, click + and then select Run as Web Application. Click the Configuration and then select index.html as the File Name from the dropdown list. Click Save and Run.

Now you need to build your application. In your workspace, right-click the mta_FioriDemo folder and choose Build > Build. The build process creates a multi-target archive (MTAR) file in your workspace that packages all the project modules for deployment.

Now you need to deploy your application to SAP BTP, Cloud Foundry environment. In your workspace, locate and right-click the new mta_FioriDemo_0.0.1.mtar file in the mta_archives folder, and select Deploy > Deploy to SAP BTP. The Deploy to SAP BTP dialog box opens. The fields are automatically populated. Click Deploy.

The deployment process takes a few minutes. You can see that the deployment is still in progress in the status bar at the bottom right of your screen. When the deployment process is complete, you should see a notification in the console at the bottom of your screen and also at the top right of the screen.

Now you can access your deployed application in the SAP Business Technology Platform cockpit. The steps below show you how to create a URL that you can use to access your new application.

From the Tools menu, click SAP BTP Cockpit. Click Home [Europe (Rot)-Trial] at the top of the screen. Click Enter your Trial Account. Click the trial subaccount box, assuming you are working on the trial version of SAP Web IDE. Otherwise, your subaccount will have a different name. Click Spaces in the side navigation panel and then click the number link to your Cloud Foundry spaces. Click your space box to open it.

On your Applications page, you should see your new application in the list: mta_FioriDemo_appRouter, with a Started status. Click this link. A new page opens: Application: mta_FioriDemo_appRouter - Overview. Right-click the URL under Application Routes and save the URL in a text file, such as in Notepad or Notes.

In your text editor, add the following suffix to the URL that you saved: /mynamespaceFioriDemo-1.0.0/index.html

The construct of the final URL is: <URL_from_application_overview_page>/<project_name>-<application_version>/index.html

You can use this URL in any browser to access your new application.
https://developers.sap.com/tutorials/cp-cf-fioriapps-create.html
HDFS does not really meet your needs. I think that MapR's solution would. I will contact you off-line to give details.

On Thu, Oct 6, 2011 at 3:35 PM, Hemant kulkarni <kulkarnihemant@gmail.com> wrote:

> Hi all,
> We are a small software development firm working on data backup software. We have a backup product which copies data from the client machine to a data store. Currently we provide specialized hardware to store data (1-3 TB disks and servers). We want to provide a solution to some customers (a mining company) with the following requirements:
> 1] Huge data storage capacity (initially starting with 100 TB, but should be easy to increase)
> 2] Initially this facility is used as data storage, but in future the company plans to add data processing software (some MapReduce jobs)
> 3] Most of the data is unstructured (mostly images, text files and videos)
> 4] Many times data is a duplicate of some original, so we need de-duplication
> 5] Mostly data is added every time (daily backup) and occasionally read (write new data every day, read weekly)
> 6] Data copied is in terms of files (every backup is 100,000 files; each file is some MB, some files in KB)
> 7] This is data storage, so latency requirements are not very strict
> 8] Some part of the data has very high HA requirements and should be copied to data centers outside the country on a timely basis (weekly, but the data size is small, a few TB)
> 9] Currently we provide some sort of HSM (Hierarchical Storage Management). The company needs something similar in the new solution
> 10] Single namespace and versioning of files is another requirement
>
> As I understood, HDFS doesn't directly suit such storage due to the following design considerations:
> 1] Large number of small files
> 2] Duplicate data
> 3] Write-once-read-many design
>
> Here are my questions:
> 1] Does HDFS support our client requirements? Or at least, can it be configured to suit our needs?
> 2] Is there any customization of HDFS (if possible) which will serve the purpose?
>
> Is there any other solution which will work?
>
> All thoughts/suggestions are welcome.
>
> Regards,
> Hemant.
http://mail-archives.apache.org/mod_mbox/hadoop-hdfs-dev/201110.mbox/%3CCAND0qzvRNyHyq52epL2VYoMTiG-uCmLWWpJ5u2oojDP220dWeQ@mail.gmail.com%3E