Upgrade Guides¶

Compatibility Policy¶

Slick requires Scala 2.10 or 2.11. (For Scala 2.9 please use ScalaQuery, the predecessor of Slick.)

Slick version numbers consist of an epoch, a major and minor version, and possibly a qualifier (for milestone, RC and SNAPSHOT versions). For release versions (i.e. versions without a qualifier), backward binary compatibility is guaranteed between releases with the same epoch and major version (e.g. you could use 2.1.2 as a drop-in replacement for 2.1.0 but not for 2.0.0). Slick Extensions requires at least the same minor version of Slick (e.g. Slick Extensions 2.1.2 can be used with Slick 2.1.2 but not with Slick 2.1.1). Binary compatibility is not preserved for slick-codegen, which is generally used at compile time.

We do not guarantee source compatibility, but we try to preserve it within the same major release. Upgrading to a new major release may require some changes to your sources. We generally deprecate old features and keep them around for a full major release cycle (i.e. features which become deprecated in 2.1.0 will not be removed before 2.2.0), but this is not possible for all kinds of changes. Release candidates have the same compatibility guarantees as the final versions to which they lead. There are no compatibility guarantees whatsoever for milestones and snapshots.

Upgrade from 1.0 to 2.0¶

Slick 2.0 contains some improvements which are not source compatible with Slick 1.0. When migrating your application from 1.0 to 2.0, you will likely need to perform changes in the following areas.

Code Generation¶

Instead of writing your table descriptions or plain SQL mappers by hand, in 2.0 you can now automatically generate them from your database schema. The code generator is flexible enough to customize its output to fit exactly what you need. More info on code generation.
Table Descriptions¶

In Slick 1.0 tables were defined by a single val or object (called the table object) and the * projection was limited to a flat tuple of columns that had to be constructed with the special ~ operator:

    // --------------------- Slick 1.0 code -- does not compile in 2.0 ---------------------
    object Suppliers extends Table[(Int, String, String)]("SUPPLIERS") {
      def id = column[Int]("SUP_ID", O.PrimaryKey)
      def name = column[String]("SUP_NAME")
      def street = column[String]("STREET")
      def * = id ~ name ~ street
    }

In Slick 2.0 you need to define your table as a class that takes an extra Tag argument (the table row class) plus an instance of a TableQuery of that class (representing the actual database table). Tuples for the * projection can use the standard tuple syntax:

    class Suppliers(tag: Tag) extends Table[(Int, String, String)](tag, "SUPPLIERS") {
      def id = column[Int]("SUP_ID", O.PrimaryKey)
      def name = column[String]("SUP_NAME")
      def street = column[String]("STREET")
      def * = (id, name, street)
    }
    val suppliers = TableQuery[Suppliers]

You can import TupleMethods._ to get support for the old ~ syntax. The simple TableQuery[T] syntax is a macro which expands to a proper TableQuery instance that calls the table's constructor (new TableQuery(new T(_))).

In Slick 1.0 it was common practice to place extra static methods associated with a table into that table's object. You can do the same in 2.0 with a custom TableQuery object:

    object suppliers extends TableQuery(new Suppliers(_)) {
      // put extra methods here, e.g.:
      val findByID = this.findBy(_.id)
    }

Note that a TableQuery is a Query for the table. The implicit conversion from a table row object to a Query that could be applied in unexpected places is no longer needed or available. All the places where you had to use the raw table object in Slick 1.0 have been changed to use the table query instead, e.g. inserting (see below) or foreign key references.
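Because the table query is itself a Query, the usual collection-style combinators compose on it directly. A minimal sketch, assuming the Suppliers/suppliers definitions above and a Slick 2.0 driver import (the column values and the byStreet name are invented for illustration):

```scala
// Assumed driver import, e.g.:
// import scala.slick.driver.H2Driver.simple._

// TableQuery is a Query, so combinators apply without any implicit conversion:
val byStreet = suppliers
  .filter(_.street === "Industrial Ave") // WHERE STREET = ...
  .sortBy(_.name.asc)                    // ORDER BY SUP_NAME
  .map(s => (s.id, s.name))              // SELECT SUP_ID, SUP_NAME

// Executing it still requires a session, e.g.:
// db.withSession { implicit session => byStreet.list }
```

This is only an API sketch; running it needs a configured Database and driver.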
The method for creating simple finders has been renamed from createFinderBy to findBy. It is defined as an extension method for TableQuery, so you have to prefix the call with this. (see the code snippet above).

Mapped Tables¶

In 1.0 the <> method for bidirectional mappings was overloaded for different arities so you could directly pass a case class's apply method to it:

    // --------------------- Slick 1.0 code -- does not compile in 2.0 ---------------------
    def * = id ~ name ~ street <> (Supplier _, Supplier.unapply)

This is no longer supported in 2.0; the mapping functions now take a single tuple argument, so you pass the case class's tupled constructor instead (see the mapped table example below).

Profile Hierarchy¶

Slick 1.0 provided two profiles, BasicProfile and ExtendedProfile. These two have been unified in 2.0 as JdbcProfile. Slick now provides more abstract profiles, in particular RelationalProfile, which does not have all the features of JdbcProfile but is supported by the new HeapDriver and DistributedDriver. When porting code from Slick 1.0, you generally want to switch to JdbcProfile when abstracting over drivers. In particular, pay attention to the fact that BasicProfile in 2.0 is very different from BasicProfile in 1.0.

Inserting¶

In Slick 1.0 you used to construct a projection for inserting from the table object:

    // --------------------- Slick 1.0 code -- does not compile in 2.0 ---------------------
    (Suppliers.name ~ Suppliers.street) insert ("foo", "bar")

Since there is no raw table object any more in 2.0, you have to use a projection from the table query:

    suppliers.map(s => (s.name, s.street)) += ("foo", "bar")

Note the use of the new += operator for API compatibility with Scala collections. The old name insert is still available as an alias. Slick 2.0 will now automatically exclude AutoInc fields by default when inserting data.
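The finder produced by findBy is a pre-compiled query function that is applied to a key value rather than invoked on a row object. A hedged usage sketch, assuming the suppliers object with findByID from the table-description section and a Slick 2.0 driver import (the lookup helper name is invented):

```scala
// Assumed driver import, e.g.:
// import scala.slick.driver.H2Driver.simple._

// findByID was defined as: val findByID = this.findBy(_.id)
// Applying it yields a parameterized, pre-compiled query:
def lookup(id: Int)(implicit session: Session): Option[(Int, String, String)] =
  suppliers.findByID(id).firstOption
```

The SQL for the finder is compiled once and reused for every id passed in.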
In 1.0 it was common to have a separate projection for inserts in order to exclude these fields manually:

    // --------------------- Slick 1.0 code -- does not compile in 2.0 ---------------------
    case class Supplier(id: Int, name: String, street: String)

    object Suppliers extends Table[Supplier]("SUPPLIERS") {
      def id = column[Int]("SUP_ID", O.PrimaryKey, O.AutoInc)
      def name = column[String]("SUP_NAME")
      def street = column[String]("STREET")
      // Map a Supplier case class:
      def * = id ~ name ~ street <> (Supplier.tupled, Supplier.unapply)
      // Special mapping without the 'id' field:
      def forInsert = name ~ street <> (
        { case (name, street) => Supplier(-1, name, street) },
        { sup => (sup.name, sup.street) }
      )
    }
    Suppliers.forInsert.insert(mySupplier)

This is no longer necessary in 2.0. You can simply insert using the default projection and Slick will skip the auto-incrementing id column:

    case class Supplier(id: Int, name: String, street: String)

    class Suppliers(tag: Tag) extends Table[Supplier](tag, "SUPPLIERS") {
      def id = column[Int]("SUP_ID", O.PrimaryKey, O.AutoInc)
      def name = column[String]("SUP_NAME")
      def street = column[String]("STREET")
      def * = (id, name, street) <> (Supplier.tupled, Supplier.unapply)
    }
    val suppliers = TableQuery[Suppliers]
    suppliers += mySupplier

If you really want to insert into an AutoInc field, you can use the new methods forceInsert and forceInsertAll.

Pre-compiled Updates¶

Slick now supports pre-compilation of updates in the same manner as selects; see Compiled Queries.

Database and Session Handling¶

In Slick 1.0, the common JDBC-based Database and Session types, as well as the Database factory object, could be found in the package slick.session. Since Slick 2.0 is no longer restricted to JDBC-based databases, this package has been replaced by the new DatabaseComponent (a.k.a. backend) hierarchy.
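As a sketch of what such a pre-compiled query looks like in 2.0 (assuming the suppliers table query above and a Slick 2.0 driver import; byStreet and db are placeholder names):

```scala
// Assumed driver import, e.g.:
// import scala.slick.driver.H2Driver.simple._

// The function from parameters to Query is compiled to SQL once,
// instead of on every execution:
val byStreet = Compiled { (street: Column[String]) =>
  suppliers.filter(_.street === street).map(_.name)
}

// Each invocation reuses the compiled SQL with a new parameter:
// db.withSession { implicit session =>
//   byStreet("Industrial Ave").list
// }
```

The same Compiled wrapper covers updates as well as selects, which is what the "pre-compiled updates" feature above refers to.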
If you work at the JdbcProfile abstraction level, you will always use a JdbcBackend, from which you can import the types that were previously found in slick.session. Note that importing simple._ from a driver will automatically bring these types into scope.

Dynamically and Statically Scoped Sessions¶

Slick 2.0 still supports both thread-local dynamic sessions and statically scoped sessions, but the syntax has changed to make the recommended way of using statically scoped sessions more concise. The old threadLocalSession is now called dynamicSession, and the overloads of the associated session handling methods withSession and withTransaction have been renamed to withDynSession and withDynTransaction respectively. If you used this pattern in Slick 1.0:

    // --------------------- Slick 1.0 code -- does not compile in 2.0 ---------------------
    import scala.slick.session.Database.threadLocalSession

    myDB withSession {
      // use the implicit threadLocalSession here
    }

You have to change it for Slick 2.0 to:

    import slick.jdbc.JdbcBackend.Database.dynamicSession

    myDB withDynSession {
      // use the implicit dynamicSession here
    }

On the other hand, due to the overloaded methods, Slick 1.0 required an explicit type annotation when using the statically scoped session:

    myDB withSession { implicit session: Session =>
      // use the implicit session here
    }

This is no longer necessary in 2.0:

    myDB withSession { implicit session =>
      // use the implicit session here
    }

Again, the recommended practice is NOT to use dynamic sessions. If you are uncertain whether you need them, the answer is most probably no. Static sessions are safer.

Mapped Column Types¶

Slick 1.0's MappedTypeMapper has been renamed to MappedColumnType. Its basic form (using MappedColumnType.base) is now available at the RelationalProfile level (with more advanced uses still requiring JdbcProfile).
The idiomatic use in Slick 1.0 was:

    // --------------------- Slick 1.0 code -- does not compile in 2.0 ---------------------
    case class MyID(value: Int)
    implicit val myIDTypeMapper =
      MappedTypeMapper.base[MyID, Int](_.value, new MyID(_))

This has changed to:

    case class MyID(value: Int)
    implicit val myIDColumnType =
      MappedColumnType.base[MyID, Int](_.value, new MyID(_))

If you need to map a simple wrapper type (as shown in this example), you can now do that in an easier way by extending MappedTo:

    case class MyID(value: Int) extends MappedTo[Int]
    // No extra implicit required any more
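To show where such a mapped type ends up being used, here is a hedged sketch of a table whose key column uses the MappedTo wrapper directly (the Orders table and its column names are invented for the example):

```scala
// Assumed driver import, e.g.:
// import scala.slick.driver.H2Driver.simple._

case class MyID(value: Int) extends MappedTo[Int]

// The wrapper type can appear directly in column definitions:
class Orders(tag: Tag) extends Table[(MyID, String)](tag, "ORDERS") {
  def id = column[MyID]("ORDER_ID", O.PrimaryKey) // stored as the underlying Int
  def item = column[String]("ITEM")
  def * = (id, item)
}
val orders = TableQuery[Orders]

// Queries stay type-safe: compare MyID values, not raw Ints
// orders.filter(_.id === MyID(42))
```

The database column holds the underlying Int; MyID only exists on the Scala side, which prevents accidentally mixing ids from different tables.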
http://slick.lightbend.com/doc/3.0.0/upgrade.html
This post was written by Subramanian Ramaswamy and Andrew Pardoe, Senior Program Managers on the .NET Native team.

Will this support a WPF desktop application?

@Sean: The .NET Native Preview released today only targets Store apps, but we're working on improving native compilation for all .NET apps. We've already answered this question, and a lot of others, by the way, at the Microsoft .NET Native FAQ: msdn.microsoft.com/…/dn642499.aspx.

Would this require a certain version of the .NET Framework on the target machine?

@John: Also answered in the FAQ 🙂 With the .NET Native Developer Preview, apps will get deployed on end-user devices as fully self-contained natively compiled code, and will not have a dependency on the .NET Framework on the target device/machine. So, no .NET Framework required on the target machine with .NET Native.

This is great! And it makes a lot of sense on the server side, for mission-critical applications, where performance matters a lot.

Yes, this could really help our desktop WPF app! Will I be able to compile a C++/CLI DLL with .NET Native and then link that with my native C++ .exe?

Let's hope the team moves on from WinStore silliness to Desktop asap.

Fantastic! Question: once this is baked, what reason would I have to NOT use it for a store app? What about other apps (once supported)? Why would I want the .NET Framework to ever be required on the target machine?

Are there any plans for Azure support?

When a .NET application is compiled for native, are there also some kind of PDB symbols generated as well? I am concerned about supporting such an application after deployment. In the event of a bug / hang / crash, dumps will need to be taken. There must be a reasonable way of mapping the native code back to the original source in .NET. Could you please elaborate a bit on how you plan to support this scenario?

@jps: The .NET Native Preview currently allows development only in C#.
@Kent: There's no reason to not use it for a Store app. Start evaluating it now! 🙂 This preview only supports the Windows Store profile, and the .NET Framework supports a wider set of APIs. Keep in mind that not all scenarios are suited for static compilation; thus, we are also investing heavily in our dynamic compilation story simultaneously (check out RyuJIT @ blogs.msdn.com/…/ryujit-ctp2-getting-ready-for-prime-time.aspx).

@Jamy: Currently this preview only supports Windows Store apps. However, as we mentioned earlier, we are continuing to evolve and improve native compilation for the wide range of .NET applications.

@Ben: The .NET Native compiler goes to great lengths to make the mapping back to source code possible while debugging, even with native code! The symbol information is passed through at every stage of compilation to make this possible. And yes, the compiler does generate a PDB that allows you to debug crash dumps.

@Subramanian: "The symbol information is passed through at every stage of compilation to make this possible. And yes, the compiler does generate a PDB that allows you to debug crash dumps." Awesome. Thanks. Was wondering that as well.

I can't find this in the Eclipse IDE?

Will .NET Native be able to support dynamically loading assemblies? We have this even in our Windows Store app.

Would be very nice to have this in WPF applications so we can use it in the real world.

This is really extremely good news! Wow, fantastic and really appreciated. If released in time for standard .NET apps, will that also solve obfuscation / reverse engineering issues? Right now it is really easy to steal the source code of my application with free tools…

Can't wait to deploy this to my clients for desktop apps (not Windows Store apps)!

Sounds good! Is reflection supported? And can I use generics in virtual methods?
I wish it would be available for WPF applications soon…

Can you please stop releasing new technologies as Windows Store only to try and force developers to adopt the crap fest you used to call Metro? Brilliant on a phone/tablet, an abomination on the desktop. The DirectX AppWizard in VS2013 is another example – how about one for desktop apps? I'm seriously getting worried about the direction all this is going. After over 20 years of Windows development, I'm starting to seriously learn Linux as I begin to see no future employability on Windows.

Nice to see Microsoft admitting that JIT was not the way to go. For those of us that stayed with native code it just shows we were right. Now bring back an updated VB6 too. visualstudio.uservoice.com/…/3440221-bring-back-classic-visual-basic-an-improved-versi

Why hasn't my previous comment showed up yet? It contained two fair questions about perf in specific situations.

All: Many of these questions are addressed in the FAQ: msdn.microsoft.com/…/dn642499.aspx

@Rogier: Because .NET Native uses the C++ optimizer to generate binaries, it eliminates the need for an obfuscator in most cases. Do note that you can reverse-engineer even C++ assembly code, but it's far, far harder.

@Belleve: Reflection is supported to the extent that the Windows Store supports reflection. And generic virtual methods are supported as well. Check out the MSDN documentation for some good details there.

@ST: We're actually heavily investing in our JIT as well. Bing for 'RyuJIT' for a lot of details. Different scenarios require different capabilities. We're building best-of-breed on both ends of the spectrum.

@Ultrahead: I'm sorry, I don't know why your comment isn't showing. We don't delete comments. Could you please try reposting? If that doesn't work, please send me a mail at firstname.lastname@microsoft.com and I'll post it for you.
Ok, read the FAQ: one of the questions referred to perf improvements for interop calls, so it has been answered 🙂 Now, about the scenario: let's say I create a library with C# and I want the apps/games that consume it to take advantage of .NET Native. How does the compilation process work here? I mean, I guess that a .NET project in Visual Studio will not be able to directly reference a .NET Native library, right? Or will it? If not, and a standard .NET version of the library must be provided for referencing in VS, will the .NET Native compiler also compile and link all referenced assemblies to native bits?

There are many, many VB6 programmers who want to see VB6 back in a new version (a parallel version to Visual Studio). I do programming in both VB6 and VB.NET and I think a new version of VB6 would be welcomed.

@Ultrahead: .NET Native currently compiles all referenced code into a single MSIL image that gets compiled by the C++ compiler, so you're correct, a .NET project can't directly reference a .NET Native library. Your last statement is correct – an MSIL version of the library is referenced and it's all statically linked in and compiled. Libraries present an interesting challenge for static compilation as they complicate the heuristics around knowing what metadata and types are referenced. For example, the library may expect to create a generic type out of a type that doesn't exist in the library itself. Short answer: if you are a library author and you want to help make .NET Native work well for library authors, please mail dotnetnative@microsoft.com. We'd love to work with you.

Unfortunately this is about 10 years too late. .NET is becoming less relevant as focus shifts to the Web and languages such as TypeScript take over.

Thanks for the answer. Actually, I'm a videogame dev and .NET Native really interests me.
When working with C#, my code consumes third-party libs, like for instance SharpDX (either directly, or indirectly through APIs like MonoGame's, WaveEngine's, etc.). So I was thinking of how or to what extent .NET Native can benefit my games by also compiling and linking these third-party libs, resulting in a perf boost with results close to pure native code (even with managed memory). Would love to see a version of SharpDX optimized for .NET Native compilation. Let's hope Alex is reading this (I'll let him know in case he hasn't yet).

If I compile my Win Store application to native, can I upload this native app to the store? If so, this means that the app will be much harder to reverse-engineer, right? When do you expect the Win Phone 8.1 version to be released? In fact, is it usable for production at all right now?

@Ultrahead: We definitely aim to have applications that use third-party libraries like SharpDX and MonoGame work very well with .NET Native. One of the main ways that library authors can make their libraries .NET Native friendly is to make sure that their use of reflection is self-describing, by providing a library-specific .rd.xml file. The preview SDK ships with some .rd.xml files for several popular libraries we've seen in use in the Windows Store (including SharpDX). While improving our ability to automatically detect reflection uses will be a point of focus as the .NET Native SDK continues development, having libraries provide .rd.xml files themselves is the biggest thing that they can do in order to make it easier for applications to use them with .NET Native. We'll have more information on these library .rd.xml files coming shortly in a series of blog posts by one of the devs on our team who worked on getting store apps that use SharpDX and MonoGame up and running.

Can anyone at Microsoft please explain why WPF is ignored in favor of terrible store/modern/metro? Why was VS made so painful to look at?
It's like a visual warning saying "avoid this crap!" Having spent a lot of time with HTML5 canvas, I can see why you can only do solid color/image brushes without knowing the position of the element, so when you make the argument that it's a "design language" you lose all credibility with me. We will never, ever, ever develop store apps; it's a complete debacle. And when you try to force adoption by not supporting WPF or even responding to direct requests, are you really sitting around in your stupid meetings thinking "We'll just say, blah blah blah [insert BS specious argument here], sorry not at this time"? Do you think we are morons? We have a huge investment in .NET/WPF/Silverlight and all we get is more games and double talk from you guys. Let me assure you, many more moves like this and we will have no option but to move on, no matter how painful/expensive!

@Alexander: In the developer preview, we currently do not allow uploading of the native binaries to the store. When .NET Native gets closer to completion, the idea is similar to how Windows Phone apps get compiled. You would upload MSIL to the app store, and the app store will run the .NET Native compilers to produce the native binaries that would download to end-user devices. The final binaries that come out of the .NET Native compilers have code that looks much like the code that the C++ compiler produces, with a similar bar for reverse engineering. (Not at all coincidentally, the native code generated by the .NET Native tool chain is actually produced by the back end of the Visual C++ compiler.) Right now we're focused on Windows Store apps, but as we continue development on the .NET Native tool chain you can certainly imagine that we'll add other natural targets such as the phone. We don't have a date for that at this point, unfortunately.

After more than a decade we end up with… Delphi. A fast compiled language with a nice syntax. Anders must be pleased.
Release it under an open-source licence like MS did with TypeScript.

@Shawn: Great! Looking forward to it.

First of all, this is awesome news and I can't wait to see more and get better performance, especially for desktop applications! Now I have a curious question: basically this provides the performance of C++ and kind of turns C# into the new C++ of the future, right? Now, what would happen to C++ after .NET gets better? I mean, looking at the beauty of C# as well as the other nice features it has, and now this, I personally find it hard to even touch C++ myself for future applications. Productivity is a key feature that has high priority in a world that is moving fast, but C++ is still used in some scenarios where performance has just higher priority. Now that we have this, and if it continues to improve, I personally see an obscure future for C++ at some point in time, at least on Windows. Correct me if I'm wrong, please.

@John Wernd: As a former member of the C++ team I can tell you C++ still has a future : ) Different languages are great for different scenarios, and C++ still holds the gold standard for consistently fast code (if only because people don't write assembly anymore…) But you're right, the performance .NET Native makes the productivity of C# available to a large range of developers.

So will Microsoft please tell us what the future is for the VB language? Neither .NET Native nor Xamarin's product supports it. Is it over for VB?

@David: VB support is coming, don't fear! .NET Native is in preview. C# first, then the world!

Gotta admit that I love this comment: "…, the performance .NET Native makes the productivity of C# available to a large range of developers."

@Andrew Pardoe: Thank you for the quick response! I work as a C++ developer myself, but I have to admit that I'm a WPF/C#/MVVM lover.
I didn't mean that C++ is going to die anytime soon, but that considering .NET Native's improvements over the years, in combination with a language like C#, productivity, XAML, MVVM and a whole bunch of other stuff it has to offer, I see many developers switching in the future and using C# more. One of the primary excuses not to use C# is performance, and if .NET Native brings such advantages, companies will pick up C# quickly, as nowadays software needs to be done faster. When I think about how easy it is to develop robust apps in WPF and then think about MFC, which we use daily at work, I can only conclude that they are worlds apart. I love C++ as well, but the C# beauty got me many years ago :). The company I work for would love to use C# but due to performance it hasn't been an option. Now with .NET Native things could finally change in the future. Talking of which, how far can we go in future versions of C# and .NET Native when it comes to low level? I'm not that far with my C# knowledge just yet, but it will be interesting to know.

I just forgot one last question from my previous comment. What about availability for Linux or OS X? Personally, I've been waiting to see WPF applications running on something other than Windows. I can see a bright future there because people will still go back to Windows and use Visual Studio and other great tools. Having a compiler for other systems as well as cross-platform development would be awesome. I love what Mono and Xamarin are doing, but to be honest it feels kind of basic compared to the real thing, like a robust WPF application rather than some standard designed window on Linux. Is there a future in that area? Such a move could be a breakthrough for the recently announced "Unified Apps".

Wasn't this part of the original concept? Somehow I left WinCE 2000 in Denver thinking this was the message being delivered.

@John Wernd: "The company I work for would love to use C# but due to performance it hasn't been an option."
– Being on the .NET perf team, I would love to know more details about the scenarios that you find slow. Can you please send me email at karel.zikmund@you-know-where.com?

What kind of operating system support?

We all know the CLR manages the memory, so what about this? .NET allocating many Large Object Heap objects (bigger than 85K bytes) can cause an OutOfMemoryException, even in a program that does not leak memory and which never requires more than a fixed amount of memory to perform an operation. Does .NET Native solve this? The Dangers of the Large Object Heap…/the-dangers-of-the-large-object-heap

About time! One of the most restrictive 'features' of managed apps is gone; more and more natively compiled store apps are coming.

So how is this different from ngen'ing assemblies, apart from removing the dependency on having the CLR installed? How does the native code do heap allocation? Does it use a GC or does it somehow use malloc?

This looks very promising. I managed to use C# in a very unusual fashion to get decent performance in a real-time number crunching application (sdrsharp.com). Supposing the same auto-vectorization techniques are used in this new compiler back end, I expect some parts of my DSP code will leverage the new compiler to compete with hand-tuned SIMD code. Congratulations for this achievement.

Amazing! 😀 Thank you SO MUCH!!! Gonna try this today 🙂 Oh, why the hell is this for apps only 🙁 Can't make any use of that… Will WinForms & Console be supported anytime soon?

Very interesting… so C# is now a sugary syntax over the top of the C++ compiler. About time too 🙂 Now, I think I know the answer to this ("no"), but can it take a native library and include/import/link it in with a .NET Native application? Would it be possible to expose an ordinary .lib file to a .NET application so we could do this, either directly or by creating a thin wrapper? If so, you could write all your libraries as native code and link them to either .NET or C++ programs!
And that means I could take some of my favourite OSS libraries (e.g. ffmpeg) and use them in .NET apps. Please say you'll at least look into this! Good stuff though; I always said best tool for the job – C++ backends with .NET fronts made sense. Now even .NET agrees with me 🙂

Give us more control over GC and memory management. Most overhead comes from checking references between objects, and we want to disable this checking on a given object (and its subobjects) to improve performance. This could be done by a method GC.SetCheckReferences(obj, bool). Additionally, give us the option to remove a particular object from memory on demand, by adding a method GC.Remove(obj). These improvements do not break any compatibility and can improve performance in multiple areas. On the other hand, .NET Native only reduces startup time (in the most significant way), which can be easily achieved by traditionally ngen-izing IL binaries at installation time. This way .NET Native is useless. The latest lesson MS gained from DX11 vs Mantle leads to a simple equation: more control = more speed. Do it with .NET.

Folks, lots of the questions you're asking are addressed in the .NET Native FAQ: msdn.microsoft.com/…/dn642499.aspx

@JadeB: There is a GC included in .NET Native apps. Type safety is important to us, and memory safety is a key part of type safety.

@The_Assimilator: There's lots of good information in the FAQ and in questions already answered here.

@Aron Parker: Windows Store apps right now. It's just a preview, and just a preview for a v1. Everything is on our radar.

@AndyB: All app code currently gets compiled and linked into a single assembly. Download the tools and try them out : )

@TBRMDEV: Thank you for the suggestions. We do want to preserve type safety, and we firmly believe that garbage-collected memory can be a performance enhancement over managing your own memory. You might like reading about Raymond Chen's adventures comparing the two.
There's a good reference/index on Rico Mariani's blog: blogs.msdn.com/…/416151.aspx

@Andrew Pardoe: "VB support is coming, don't fear! .NET Native is in preview. C# first, then the world!" Will classic VB6 be part of the world? Cheers,

@Miquel Matas: Microsoft will support your existing Visual Basic 6 components and applications through the lifetime of Windows 7 client and 2008 R2 server. msdn.microsoft.com/…/ms788229.aspx

@TBRMDEV: ".NET Native only reduces startup time (in the most significant way), which can be easily achieved by traditionally ngen-izing IL binaries at installation time." We quote the comparison with NGen because having NGen images is the typical situation end users experience with .NET Windows Store apps – NGen images are automatically generated by AutoNGen for all .NET Windows Store apps on Windows 8+ (see blogs.msdn.com/…/got-a-need-for-speed-net-apps-start-faster.aspx).

@Joelihn: "The Dangers of the Large Object Heap" What about the Large Object Heap compaction that was introduced in .NET 4.5.1?…/large-object-heap-compaction-should-you-use-it

Can someone please explain just what "C++ performance" is when it comes to .NET? .NET is JIT-compiled, not interpreted. Presumably a JIT compiler can/should achieve the same, if not better, performance than an ahead-of-time compiler, since the JIT may know more information about the user's local system. Why not simply improve the JIT compiler to include all of these "C++ optimizer" benefits that are being touted about?

@Shaun: "C++ performance" refers to the fact that we're optimizing .NET code with the C++ optimizer. We're doing other things to improve the performance – read all about it on the download page at aka.ms/dotnetnative. We're also improving our JIT compiler. Different technologies suit different scenarios.
Read about the new JIT at blogs.msdn.com/…/ryujit-the-next-generation-jit-compiler.aspx

@Andrew Pardoe: Visual Basic 6 "It just works" support is provided on Vista, Server 2008, Windows 7 and Windows 8 – giving support until 2023. msdn.microsoft.com/…/ms788708.aspx In practice, VB6 applications should continue running as long as Windows uses the Windows API. But the support is for the VB6 runtime, not the VB6 IDE. While the IDE does work on Windows 7 and 8.x, this isn't really enough. Microsoft should update VB6 to include the 64-bit modifications added to VBA7. It is expensive to rewrite legacy VB6 applications, and this fact is holding back many XP users from upgrading to Windows 7. visualstudio.uservoice.com/…/3440221-bring-back-classic-visual-basic-an-improved-versi

Will this remove .NET Framework dependencies? E.g. will this work on an XP machine without installing the .NET Framework?

It's time I start to learn C# :O In the FAQ I see this: "Even though the binaries are natively compiled, we maintain the benefits of managed code type safety (and thus Garbage Collection) and the full C# exception model". I don't understand who manages the memory and code if it also says that the .NET Framework (i.e. the CLR) is not needed.

@Sten2005: UserVoice is the best suggestion I can give you for VB6 support. Sorry o_o

@Johnattan: There is a GC included in every .NET Native app.

Hi guys, I'm the development team lead of WaveEngine and this is so interesting for us. If there is a possibility to participate with you to check .NET Native with our engine, let me know who I can contact. Thanks!

This looks absolutely awesome. Hope this continues to be supported and developed fully.

@Javier Canton Ferrero: Please send mail to dotnetnative at Microsoft dot com. I'll pick it up. We're looking forward to working with library vendors!

YESSSSSSSSSSSSSSSS! Hey, great work with the library!
I was trying to do some startup time measurements in my app based on your suggestions here: msdn.microsoft.com/…/dn643729%28v=vs.110%29.aspx ("Measuring Startup Improvement with .NET Native"). I made two versions of my app, one with the standard compiler and the other using the .NET Native compiler. I was able to measure startup time using the standard compiler – the events AppInitialized and MainPageInitialized showed up in the PerfView log. As soon as I switched to the app compiled with .NET Native, those events stopped showing up in the log. I am using the same AppEventSource class implementation and calling the log in the same places inside the app. (The ManifestData event still shows in the log for both apps, but that's not very useful…) Any ideas what's wrong? Did I miss a step somewhere? I appreciate your help! Igor

Is the native managed runtime open source: MRT.dll (Managed Runtime)? For core compiler and language runtime technology, MS needs to be open source end to end, like Apple is with Clang/LLVM. (The Windows Runtime – the core of the Modern experience – can be closed source, as Apple is with the GUI part of its platform.)

Will it support CodeDOM? I mean, would I still be able to compile code at runtime and make NATIVE executables? I have been asking for this compiler for years, so I'm glad to know about this project. …/Regarding_the_future_of_C_.php

So, now that SIMD is around the corner for the upcoming jitter, will we also have SIMD for .NET Native code?

@SharpDev: Currently CodeDOM isn't supported. CodeDOM requires compiling code during execution, so it doesn't work in a fully static environment. When we bring .NET Native to scenarios requiring CodeDOM we'll make this work, but it necessarily requires dynamically creating the code (e.g. JIT) that the program emits.

@Ultrahead: SIMD is available already in the RyuJIT CTP3. .NET Native will use vector instructions in its generated code.

@Igor Sorry about the trouble using the EventSource class.
You uncovered a bug here. There's an issue that prevents events that don't have event arguments from being emitted. This means that calls like WriteEvent(1) won't work in this release of .NET Native. (Note that '1' is the event ID and not an event argument.) The workaround is to just add a dummy argument to the event, using one of the other overloads of WriteEvent. For example, take calls like WriteEvent(1) and turn them into WriteEvent(1, ""). I have filed the bug internally and I expect it to be fixed in the next release of .NET Native. We'll also update the documentation in the interim.

@Andrew Pardoe "UserVoice is the best suggestion I can give you for VB6 support. Sorry o_o" The lack of VB6 support from Microsoft has been with us for a long time. With good news from Microsoft on .NET Native and the open sourcing of Roslyn, I can only assume that VB6 wasn't mentioned because no one at Microsoft thought about it. According to the TIOBE index, VB6 is still more popular than VB.NET – 16 years after the latest release. So why doesn't Microsoft either update VB6 or, if they won't do that, open-source it? At least Microsoft could give an answer rather than passing the question around. Remember there are lots of VB6 applications out there still in everyday use – this is one of the reasons end users aren't upgrading from Windows XP. Here is the issue on UserVoice: visualstudio.uservoice.com/…/3440221-bring-back-classic-visual-basic-an-improved-versi

@Andrew Pardoe [MSFT] "We do want to preserve type safety and we firmly believe that garbage-collected memory can be a performance enhancement over managing your own memory". In most cases – yes. But experienced programmers could use the suggested features to reach performance levels unreachable today.

@Karel Zikmund You can improve NGen with this 'superior C++ back end compiler' and copy these optimized CLR libraries to the user's machine.
This way you can also avoid the 'duplicate the same code with each generated assembly' problem. The result is the same as .NET Native, without the limitations. I do not see the benefit of being able to run native-compiled apps without installing .NET. Is there any Windows without .NET installed? If .NET Native converts IL to CPU code, why is it limited to C# (even if temporarily)?

@Sten2005: Yes, there are well over 5000 votes in support of bringing back VB6. I'm sorry I don't have a better answer for you. Soma and ScottGu both have blogs 🙂

@TBRMDEV: We believe in the benefits of garbage-collected memory, not just the performance aspects but also the correctness aspects. We believe our optimization strategy will benefit apps in these scenarios. And we see a benefit to deploying apps independent of the installed runtime. I'm not sure what I can say except that we think we made pretty reasonable design decisions with .NET Native and we'll adjust those as we see how it works. As for why C#, even if temporarily: we have a limited number of resources and potentially infinite work. .NET Native is in preview. Some things are missing right now, such as other languages and x86 support.

@TBRMDEV: It occurs to me I can address your last point a bit more specifically. Each managed language generates slightly different patterns in MSIL. The VB compiler generates exception filters, for example, and F# generates a bunch of tail calls. MSIL from one language is a subset of all valid MSIL, so it was less work for us to focus on one language first.

@Andrew Pardoe, to be honest I'm not sure if I get your answer quite right. Most of my applications (desktop) compile and create temporary EXEs at runtime. I know that is currently not supported, but is there a plan to make the same thing possible with .NET Native? I'd rather have all my applications compiled to native, but if that is not going to be possible then I'm afraid I will be staying with the current .NET Framework.
@SharpDev – this developer preview of .NET Native is targeted at Windows Store apps for the time being. Windows Store applications code against the .NET Core profile, which does not include the CodeDOM types. Therefore, apps which use CodeDOM currently will not work with this tool set. That being said, this is an early developer preview, and as we expand our scenarios beyond Windows Store we will need to look at all sorts of dynamic code scenarios, including CodeDOM, to see how they fit together with .NET Native.

@Shawn & Andrew Is the native minimal runtime (mrt100.dll) open source?

@kevin – the runtime portion of .NET Native is not open source at this time.

@shawn Open sourcing the native runtime would definitely help get developer buy-in. It's the missing piece of the puzzle.

Can Silverlight code be compiled with .NET Native?

@iamveritas – the current developer preview is for Windows Store apps for Windows 8.1 only. As we continue to develop the product, we'll prioritize and consider other scenarios as well – but for now only Store apps are supported.

So why can't this just be a better set of pre-JIT optimizations? Seems silly to me.

THIS IS YET AGAIN SHAMEFUL – VB.NET IS SENT TO THE BOTTOM OF THE LINE YET AGAIN AND TREATED AS A SECOND CLASS LANGUAGE. There is no way in hell I would ever go back to C# to write code; just the idea of coding in C# takes all my motivation to code away. I absolutely love coding in VB.NET, a modern language, not outdated by 40 years like C/C++/C#.
If MS burns us with this and VB.NET isn't supported in .NET Native 1.0 RTM, I am totally leaving the .NET Framework. I've almost had enough of being treated as an afterthought, especially when during the 2001 .NET launch VB was the popular language and the tech heads at MS single-handedly (each one of them individually, without orders from anyone) did their bit to kill VB and popularize C# – by using C#, giving examples in C#, coding for C# first, etc. – even when VB was the popular language. And this is what happened. I'm so sick of it; people are forced to use C# because of people like you making decisions like this.

@"WHEN WILL WE GET VB.NET SUPPORT?" I feel your pain. After the Sept 2011 Build conference I jumped on the "power and performance" modern C++ bandwagon and have invested much time and energy in learning it (since then). I was surprised to not find it supported (as yet) and am reading the signs carefully – they read that C# is where the investment is going to be made, and if you want to play, this is the place to be. I think misery loves company, because at least we know we are not suffering alone. A turning point for me (in accepting that I can never become an expert) was Rocky Lhotka's article "Keeping Sane in Wake of Technology's Pace"; an excerpt follows: "In 2007 I found myself becoming increasingly unhappy, feeling more and more overwhelmed by the rate of change. I felt like I was slipping farther and farther behind. However, the rate of change is outside the control of any of us individually. So this year, rather than fight or despair about it, I've chosen to embrace and revel in the change." visualstudiomagazine.com/…/keeping-sane-in-wake-of-technologys-pace.aspx I'll embrace the change and hope my framework can be completed before this one becomes obsolete, i.e., a new one doesn't come out in the next year and I jump on that bandwagon…

We've definitely heard the feedback regarding VB support.
One thing to keep in mind is that this is an early developer preview of .NET Native, and it is by no means yet a feature-complete product. This first preview focused on C# Windows Store apps as a way to get .NET Native bootstrapped. Over the coming months, we'll be looking at our list of prioritized scenarios and continue to bring those online. You should in no way take the fact that .NET Native does not yet have VB support to mean anything about whether it will ever have VB support in the future. -Shawn

@"WHEN WILL WE GET VB.NET SUPPORT?" "THIS IS YET AGAIN SHAMEFUL – VB.NET IS SENT TO THE BOTTOM OF THE LINE YET AGAIN AND TREATED AS A SECOND CLASS LANGUAGE." And that's exactly where it belongs.

Without any results and any feedback from us… It is not nice to censor my comment… The general opinion is there already…

Open sourcing the .NET Native runtime would be the ultimate act of good faith from MS. I haven't been interested in MS tech since 2002, when I first checked out .NET – but decided Web 2.0/Ajax was a better client-side route. Native .NET unified apps are really compelling, but this core tech needs to be open source in its entirety.

Once this is released for desktop apps, will the native executables be compatible with older versions of Windows?

@Ivan – right now the preview tool chain targets Windows Store for Windows 8.1. It's difficult to say what the requirements would be when new scenarios get added in the future, since those haven't yet been sorted out and designed. For our information when we start working on those designs, what versions of Windows would you be interested in us supporting?

Having this only for Windows Store apps is like not having it at all. None of my customers want Windows Store apps. They want WPF. They would love to use the Windows Store SDK as long as they could deploy the apps themselves, but no dice.
So, when it is available for WPF, and ASP.NET, and WCF, and everything else in the Microsoft world that actually matters, get back to me and I'll be delighted to try it out.

I don't see this .NET Native option anywhere. I opened up a new VB.NET Windows Store application and changed the architecture to x64, but I don't see the option anywhere!! What else do we need to do?

@John – The current developer preview SDK supports C# Windows Store apps only, so the menu option to enable .NET Native will not show up for a VB.NET app yet. VB is on the list of items that we are prioritizing for future preview releases, but in this current version you'll need to use a C# project in order to experiment with .NET Native.

Yes, .NET Native is a great thing – if it also works for desktop. This is the question. The FAQ reads as if this were the case, but will it happen within a reasonable time? Because, I think, this is what many people are interested in, due to two crucial advantages .NET Native offers, starting with performance. And let's be honest: most commercial programming is done for the desktop, as it is THE platform designed for productivity. So can you tell us something about your priorities list and whether the desktop is a near or rather far-away target for .NET Native?

Hey Shawn, I guess you mean when desktop support "comes". As I understood it, .NET Native does not support the desktop right now. However, I share your opinion regarding the necessity of desktop support. And I also am very interested in some kind of timeline or priority that tells us if and/or when we can expect .NET Native for desktop apps.

We're absolutely interested in getting many of the benefits of .NET Native to desktop applications. This project is still in its early phases, to be sure – what we gave out at //build is an early developer preview of .NET Native. One of the benefits we have from starting with .NET applications in the Windows Store is the fact that the .NET Core profile is a limited subset of the full .NET Framework.
That allows us to bootstrap those applications more quickly, since there is simply less functionality that we need to build to get those applications working. WPF applications have access to the full .NET Framework, which is a much larger target and therefore requires much more time to get up and running. I can't predict future timelines, but I would say that you should expect .NET Native to come online for the smaller profile frameworks first as it works its way up to bigger frameworks. That being said, we definitely know that desktop WPF application developers are very interested in the benefits of .NET Native, and getting those benefits to you is something that we do want to do in the longer term. One thing that will help guide us is to see which of the benefits are most interesting to you as a desktop developer. From Shawn's comments it looks like he's interested in being able to distribute native code without IL, and in getting the performance of the code generated from the C++ compiler. Is that the same functionality you're interested in, Timothy, or are there other features of .NET Native that are appealing to you?

Absolutely. Performance is always welcome. However, one must admit that there has been great progress since the first .NET version regarding execution time. Combined with the progress in computer technology, I think that for most applications (except "niche apps", for instance in finance), execution speed is at least sufficient. Code security, in my opinion, is a much more important advantage for commercial applications. And offering the same natural security in C# that C++ offers would be awesome.

Bring back VB6 in parallel with VB.NET.

If a .NET Native app is independent from the .NET Framework, can it work on Android/iOS?

@Tristan – currently .NET Native runs on Windows only. However, you might be interested in checking out Xamarin, which provides an ahead-of-time compiler for C# code that does build for Android and iOS targets.
@ShawnFarkas, we would like to see .NET Native support for WPF desktop apps in order to improve cold startup time. More importantly, though, we do embedded development work. We would love to be able to share more code between desktop apps and embedded apps, but .NET CF on certain ARM chips runs like a pig. Having .NET Native compilation for .NET embedded software seems like a great fit to enable these apps to run on resource-constrained (read cheap – very cheap) processors. A big +1 for adding native compilation to desktop apps as soon as possible.

When will IIS be available for ARM processors? 🙂

Is this supported on Win 7, or will it be in the future?

Really looking forward to .NET Native for desktop applications. It will provide a significant advantage to startups, as we can leverage the advantages of C# – i.e. quick development time, an extensive set of base class libraries, etc. – while protecting our code from disassembly, which is the only reason we are currently using C++. Please make it available for desktop apps ASAP! Right now, general developers just aren't aware of this initiative, otherwise there would have been a lot more vocal supporters.

This ".NET Native" seems to be yet another JIT compiler. If MS is the only one that can compile native images, it is still a JIT compiler. It is not the "native" compiler desktop developers have been waiting for. ".NET Native" still requires the entire application's IL code to leave the vendor's hands. I hope it turns out to be more than just "smoke and mirrors." I am sure that speeding up Windows Store applications is a high priority. But I would think that MS could spare a few programmers for developing comprehensive code protection software for vendors. Many vendors have already spent thousands on obfuscation products which simply cannot obfuscate all important code, due to reflection and other .NET features.
A native compiler for vendors would not have to support the entire .NET desktop library if the unsupported parts could still be used as IL and obfuscated. Perhaps that would require collaboration with the RyuJIT team? Yes, speed is important too. I understand… Microsoft's concentration is on "the cloud", subscription services, mobile devices, and enabling every "Tom, Dick and Harry" to write simple applications. I get it… that is where the money and power is. Money from commercial developers cannot possibly compete with all of the subscription fees and the commissions for every application "Tom, Dick and Harry" sells. Commercial developers have been "left out in the cold" for many, many years now. I do not see that ever changing. Microsoft is a big company… can't they "take over the cloud" and also throw desktop/client-server software developers a bone? Some applications just do not belong "in the cloud." OK… my soapbox has collapsed. 🙂

I think one of the next targets for ".NET Native" should be the .NET Micro Framework. It is an even smaller subset of .NET. A natively compiled app linked to the native framework and stripped of unused code would make .NET MF smaller and much faster. Including the JIT compiler and the unused framework code could still be an option. Visual Studio native compilation for non-.NET source code sections is really a necessary feature for .NET MF to succeed. Later, add in easy Visual Studio setup for the popular in-circuit programmers and a Microsoft debugger/profiler. These features, along with IntelliSense for the native code libraries, would give Microsoft a very desirable embedded systems programming environment.

I have the weirdest boner right now.

Six months and still not for VB.NET and desktop?!! When will you prepare this tool for all .NET apps? Thanks.

Are there any updates about the native compiler project? When will it be officially released to the public?

This is all cool and great.
I am waiting to see this working and hope there are no bugs. However, I would love to see all the .NET Framework functionality in an unmanaged environment without GC, reflection, etc. What I mean is: you at MS have left C++ too far behind in time while caring only for .NET. If C++ had all these shiny functionalities well organized into libraries as in .NET, you would make A LOT of people happy. What I mean is having, for example, a C++ class called "Path" with static members like "Combine()", "Exists()", etc., without having to dig into header files and call global Win32 functions as you would normally do. I really like C#, but honestly, I doubt that any C# app will ever run as fast and efficiently as a native C++ app. What you are trying to do is cool, but it also complicates things. It might also be good to have a native API that is somewhat friendlier and more productive than what C++ has had so far (namely: Win32 API, STL, MFC, etc.). I think that most of the .NET namespaces could be transferred (as functionality) into C++, allowing people who have the ability to write apps in 100% native C++ to do so, but in a more humane programming world. C++ is still good; only its libraries are relics of the past that need to be replaced ASAP. It is the base code of the .NET Framework that holds most of the joy, not .NET itself. Missing things like reflection could be handled or worked around with much more ease in an unmanaged world compared to what you are trying to do right now in .NET to make apps run faster. At least that's my take on this. Certainly Apple and Google don't think that native code or C++ is dead, unlike MS. In fact, and correct me if I am wrong, I think that they don't get along with managed languages so much. And from your direction, I guess that you are starting to notice that your apps are not as performant as theirs.
I think it would be best if you created a variant of the C++ language that supported some of the syntax/features of .NET, as you did with C++.NET (e.g. namespaces, foreach, etc.), and just let it be native from its System namespace bottom up to its top. Then make it work with XAML for its UI, and I am sure many of your disappointed developers will be happy again. And one more thing: do not stop improving WPF. All your technologies that you leave for dead, like MFC, WinForms, WPF, etc., are still used to develop apps, and they will still be used in the future. Declaring them idle/dead/whatever does not promote your phone market, but turns developers against you. I am writing too much… I know…

I think it's too late for announcing this great tool, at least for desktop. Currently Windows Vista, 7, 8 and 10 all already have .NET installed. But from an optimization view, it's still a great tool. Is the preview free to download?

What are the timescales for expanding this to work with Windows Forms apps?

Bleh, this is taking forever to release on the desktop platform. Disappointed.

Will this eventually support native compilation in Visual Basic .NET? I hope it will!!

Do you have a ballpark date for desktop applications?

@GypsyPrince: Visual Basic is now supported for Windows 10 UWP apps. Try it out in VS 2015 RC!

@GeoffUnique: Unfortunately, we don't have a date to share for desktop support. VS 2015 will only support Windows 10 UWP applications, but we're going to look at expanding to other scenarios in future versions of Visual Studio.

We've had native compilation in Delphi since 1995. Anything less is, well, less. M$ catches up 20 years later.

.NET Native – the way I feel, Microsoft is not too likely to provide it?
https://blogs.msdn.microsoft.com/dotnet/2014/04/02/announcing-net-native-preview/
#include <iostream>
using namespace std;

int main() {
    double x = 1;
    double y = 2;
    cout << "result: " << (x == y) << endl;
    return 0;
}

Compile this program after placing your choice of boolean operator in the test segment "(x == y)". Notice the differences in the compiled C++ version. As you try out different operators, your compiler will tell you that some of the operators cannot be applied to double variables. You will also notice the C++ program reports "true" as 1 and "false" as 0. In C++, any nonzero value counts as "true," while zero is "false." This expression --

while(1) {
    cout << "Bart Simpson is not the name of a world conqueror." << endl;
}

-- will run forever, because the expression "(1)" is true, and it is always true. The test is entirely meaningless, because the thing being tested is a constant that evaluates as "true." The program that contains this expression will never stop on its own.
http://arachnoid.com/cpptutor/data1.html
GOALS: Note that examples used are not the exact context of my project, but are very similar, and therefore simplified for the sake of this post I am trying to use WP-API to use WordPress categories, post titles, and post content to generate a dynamic React app. Each piece of information plays a part in the generation of the React app: The WordPress categories are used to populate a sidebar navigation component in the React App. Each WordPress category will act as an item in this navigation. Think of this component as the table of contents. The post titles will be submenu items to each category. So, using recipes as an example, if one of the category items on the menu was “Mexican Food”, the post titles, or submenu items, could be “Tacos”, “Enchiladas”, “Tamales”. Sticking to the Mexican food example, the post content will be displayed in a main content area depending on which submenu item you click on. So, if you were to click on “Tacos” in the sidebar menu component, the main content would populate with the recipe for “Tacos”. This main content area is its own individual component. PROBLEM: The main problem is that I am able to get the items to populate on the sidebar navigation component without much issue using React hooks, useEffect, and fetch. However, once I click on the submenu item, the recipe will not populate on the main content area. RESEARCH FINDINGS: I was able to establish that useEffect is somehow not setting my state. I declare two variables which process the JSON data I get by fetching the WP-API. I then want to set the results into my component state using a setPost function. However, this setPost function doesn’t set the state. Below I’ll post code examples for clarity. CODE: This is my current Main Content component, which receives props. Specifically, it receives the ‘match’ prop from React Router, which contains the URL slug for the specific WordPress post. 
I use this:

import React, { useState, useEffect } from 'react'

function MainContent(props) {
  console.log(props)

  useEffect(() => {
    fetchPost()
  }, [])

  const [post, setPost] = useState([])

  const fetchPost = async () => {
    console.log('Fetched Main Content')
    const fetchedPost = await fetch(`{props.match.params.wpslug}`)
    const postContent = await fetchedPost.json()
    console.log(postContent[0])
    setPost(postContent[0])
    console.log(post)
  }

  return (
    <div>
      {/* <h1 dangerouslySetInnerHTML={{__html: post.title.rendered}}></h1> */}
    </div>
  )
}

export default MainContent

Here are the results of the 1st console log, which contains the contents of my match prop:

{
  isExact: true,
  params: { wpslug: "tacos" },
  path: "/:wpslug",
  url: "/tacos"
}

The result of my 3rd console log, console.log(postContent[0]), is an object which contains every single piece of detail for that specific post, correctly. After that, I use setPost(postContent[0]) to save this information to my post state. To confirm this, I run console.log(post), which returns a simple [], an empty array.

RESULTS & EXPECTATIONS: What I expected is that the content in setPost(postContent[0]) would be properly saved in the post state so that I could use that content to render content on the page. However, the actual result is that nothing gets saved to the state, and any time I click on other categories, such as "Tamales" or "Enchiladas", the URL DOES update, but it never fetches information again after the first fetch. I know this is a long-winded question, but anyone who could help this poor newbie would be an absolute savior! Thanks in advance!

First let's take a look at your second problem/expectation: it never fetches information after the first fetch. The useEffect function may take two parameters: the callback function and an array of dependencies. The effect function will only be called when one of its dependencies has changed. If the second parameter is omitted, the function will run on every re-render.
Your array is empty; that means the effect will run when the component is first mounted, and after that it will not re-run. To resolve that problem you need to add your dependencies correctly. In your case you want to re-fetch if props.match.params.wpslug changes:

useEffect(() => {
  fetchPost()
}, [props.match.params.wpslug])

Now for the problem of seemingly not setting the state correctly. There seems to be nothing wrong with setting/updating state in your example. The problem is that the 'post' variable is, for the current render cycle, already set to [] and will not change until the next cycle (you should and did mark it as const, so it cannot change its value). I will try to explain two cycles here.

/* First run, the mounting of the component */
function MainContent(props) {
  const slug = props.match.params.wpslug;
  const [post, setPost] = useState([])   // 1. post is currently []

  useEffect(() => {
    // 3. the useEffect callback will be called, i.e. fetching posts
    fetchPost()
  }, [slug])

  const fetchPost = async () => {
    const fetchedPost = await fetch(`{slug}`)
    const postContent = await fetchedPost.json()
    // 4. you change the state of this component,
    // so React will run the next render cycle of this component
    setPost(postContent[0])
  }

  return (
    <div>
      <h1>{post.title}</h1> {/* 2. React will tell the browser to render this */}
    </div>
  )
}

/* Second run, initiated by setting the post state */
function MainContent(props) {
  const slug = props.match.params.wpslug;
  const [post, setPost] = useState([])   // 1. post is now {title: 'TestPost'}

  useEffect(() => {
    // on this cycle the useEffect callback will not run, because 'slug' did not change
    fetchPost()
  }, [slug])

  const fetchPost = async () => {
    const fetchedPost = await fetch(`{slug}`)
    const postContent = await fetchedPost.json()
    setPost(postContent[0])
  }

  return (
    <div>
      <h1>{post.title}</h1> {/* 2. React will tell the browser to render this */}
    </div>
  )
}

Here () is a very interesting article about the new useEffect hook in which some of the problems you encountered are addressed.
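The dependency-array behavior described above can also be sketched outside React. The following is NOT React's real implementation — just a simplified stand-alone model (plain Node, no React) of the comparison logic: the effect re-runs only when some dependency changed, using Object.is, which is roughly what React does under the hood.

```javascript
// Simplified model of how a dependency array gates an effect.
function makeEffectRunner() {
  let prevDeps = null; // deps from the previous "render"
  let runs = 0;        // how many times the effect has actually run

  return function useEffectModel(effect, deps) {
    const depsChanged =
      prevDeps === null ||  // first render: always run
      deps === undefined || // no array given: run on every render
      deps.some((d, i) => !Object.is(d, prevDeps[i]));

    if (depsChanged) {
      effect();
      runs++;
    }
    prevDeps = deps === undefined ? null : deps;
    return runs;
  };
}

// Empty array: the effect runs once on mount and never again.
const mountOnly = makeEffectRunner();
mountOnly(() => {}, []); // render 1 -> effect runs
mountOnly(() => {}, []); // render 2 -> skipped

// Slug in the array: the effect re-runs only when the slug changes.
const onSlugChange = makeEffectRunner();
onSlugChange(() => {}, ['tacos']);   // runs (mount)
onSlugChange(() => {}, ['tacos']);   // skipped (same slug)
onSlugChange(() => {}, ['tamales']); // runs (slug changed)
```

This is why the original component with `[]` fetched only once even though the URL kept changing: from the dependency array's point of view, nothing ever changed.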
https://coded3.com/why-wont-useeffect-run-my-api-fetch-anymore-and-save-results-to-state/
Re: A JLabel's Size
- From: "A. Bolmarcich" <aggedor@xxxxxxxxxxxxxxxxxxxx>
- Date: Wed, 07 Mar 2007 19:22:26 -0000

On 2007-03-07, Jason Cavett <jason.cavett@xxxxxxxxx> wrote:

On Mar 7, 11:45 am, Knute Johnson <nos...@xxxxxxxxxxxxxxxxxxxxxxx> wrote:

Jason Cavett wrote: Something I'm curious about… When I create a JLabel and place it into a JPanel/JDialog/some other component, it always appears at the correct size (just large enough to display the text plus an appropriate boundary around the text). But when I attempt to get that JLabel's size, it returns 0 width and 0 height unless I specifically set the size of the JLabel. Shouldn't the size have been set correctly upon the addition of the text to the label? I'm confused about how this is working and why it's working that way.

The component has to be realized first. It gets realized when the container it is in is either packed or its size is set.

Excuse the quibble, but the size of the component is set by pack() as a side effect of pack() validating (laying out) the Window. Validating a Container causes the LayoutManager of the Container to set the sizes of the Components in the Container. (I'll skip the detail of the Window being made displayable.) In the example program, if the invocation of pack() is replaced by setSize(100, 200), then the width of the label remains zero until after setVisible(true) is invoked on the JFrame. Making the JFrame visible has the side effect of validating it.

import java.awt.*;
import java.awt.event.*;
import javax.swing.*;

public class test8 {
    public static void main(String[] args) {
        Runnable r = new Runnable() {
            public void run() {
                JFrame f = new JFrame();
                f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                JLabel l = new JLabel("Hello World!");
                f.add(l, BorderLayout.CENTER);
                System.out.println(l.getWidth());
                f.pack();
                System.out.println(l.getWidth());
                f.setVisible(true);
            }
        };
        EventQueue.invokeLater(r);
    }
}

-- Knute Johnson

Okay, that makes sense. But why is "preferred size" recognized?
> For example, I can do something like this...
>
>     JLabel label = new JLabel("Hello World");
>     label.setSize(label.getPreferredSize());
>
> I never set the preferred size on my own. Is something being done
> underneath the covers with preferred size?

With Java Swing, there is a lot being done underneath the covers. For a text label, the getPreferredSize() method determines what size the Component needs to be to draw 1) the text of the label in the font being used for the text and 2) the border.
http://coding.derkeiler.com/Archive/Java/comp.lang.java.gui/2007-03/msg00077.html
(sublime-text-community-packages. ... lRing.html), but this does not appear to work with the ST2 alpha. Does anybody know what would have to be done to get this to work with ST2? It would be so great to finally have the kill ring in a modern editor.

I've just glanced through the code, and it seems to use API functions unavailable at the moment in v2. I'm not familiar with Emacs' kill ring, so I'm not sure whether the plugin could be rewritten to work around that. Sublime 2 has introduced a few new concepts in the API, like the EventListener class and the Edit object (source: memory), which haven't been explained yet. If you want to poke around and have a go, though, bring up the Python console (ctrl + ~) and dir() away: sublime (m), sublime_plugin (m), view (o), window (o), et al. The major change breaking compatibility with v1.x plugins is the new naming conventions in accordance with PEP 8. Other than that, you can still peruse the v1.x API docs at sublimetext.com/docs/api-reference to get started. I'm pretty sure there will be more info on the API pretty soon. HTH,
Guillermo

As far as I can tell, EventListener is just a class that defines callbacks such as on_selection_modified.

Guillermo: I'm not sure which functions you were looking at in the killring plugin source that Sublime Text 2 doesn't have yet. I glanced through the killring plugin code, and it looked pretty simple. Were there any in particular that stood out?

stiang: you could probably just grab the code from that page and modify it to work with ST2. If you'd like, I could take a shot at this; I'm trying to get better with the ST2 API.

In particular, I was referring to the quick panel. Maybe it isn't essential for this plugin, but as far as I know, it hasn't made it into the API yet?

I too thought it hadn't, but just this morning I was pleasantly surprised to find it while reading through help(sublime.Window) - it's right at the bottom, so I saw it right when I typed the command:

    import sublime

    def onDone(string):
        print string

    v = sublime.active_window().show_input_panel("Kill ring index:", "0", onDone, 0, 0)

...aaaaaaaand obviously you were talking about the quick panel, whereas I found the input panel. So... never mind. You could still do it, but you'd have to use the input panel instead of the quick panel.

If you would be willing to take a crack at it, I would be super grateful! My understanding of ST (not to mention my Python fu) is too sketchy at the moment. It would be great to be able to evaluate ST2 for a while with kill ring support before diving deeper and figuring out how to tweak/write plugins. My understanding is that the only command that requires showing a panel is yank-any. This is the least important command as far as I am concerned, so if there is a problem with the quick panel (which I am assuming is needed to show a selection of regions for yank-any), skipping that particular functionality is perfectly fine with me.

Maybe these features can be merged into the editor? I think some things from killring could be merged with the original idea of the editor.

Actually, I was able to get EmacsKillRing working on my own, sans the quick panel to show yank choices (I get "AttributeError: 'Window' object has no attribute 'show_quick_panel'"). It wasn't nearly as difficult as I had imagined. But there is one change I'd like to make. Setting the mark currently gives no visual cue beyond a message in the status bar. I would prefer the behaviour you get on newer Emacsen, where setting the mark anchors a (visible) selection that updates as you move the cursor around. The selection should continue to update until you abort by pressing ctrl-g or perform a command (like kill-region). Is this possible in ST2?

If the event system can tell me when the cursor moves, I suppose it could be possible to listen for cursor movements, then check if a mark has been set, and if so, update the visible selection, no? If this sounds feasible, is there some documentation on the (new) event system or an example I could peruse?

You're looking for on_selection_modified and the syntax highlighting API (an example of the latter for ST2 is in my sublimeflakes module). You can get the positions of the cursor(s) and selection(s), then define highlight regions.

Thanks, that worked like a charm! I haven't yet understood how I can change the background color of the region (it appears to stay the same no matter what scope I try), but that's not terribly important. Sublime Text is a lot of fun to hack around with...

@stiang: do you plan to release your code on GitHub or something like this? Thanks!

Sure, you can pick it up here: github.com/stiang/EmacsKillRing
https://forum.sublimetext.com/t/emacskillring-plugin-for-st2/1113/4
Ramkumar Menon's Blog (Comments)

Re: Select for update, nowait, skip locked (guest, 2012-05-24)
  Hi Ram, simple and clear illustrations. Could you also include the concept of MarkReservedValue in distributed polling? Appreciate your article. Cheers, Dev...

Re: Neighboring Siblings? (guest, 2011-10-24)
  And there is no "sibling-or-self" axis, such as, for example, "following-sibling-or-self", which causes problems when you are trying to find a certain node and its following siblings. A work-around that works when there are preceding siblings (not part of the match) is to move to the immediate preceding sibling before doing a following-sibling call. But, of course, when there are no such preceding siblings, you can't use that technique.

Re: Scanning text files using java.util.Scanner (guest, 2011-10-06)
  This URL is not available now!! :-(

Re: DBAdapter - java.sql.SQLException: ORA-00932: inconsistent datatypes: expected - got CLOB (Dodge, 2011-09-22)
  You'll get the same error if you use "GROUP BY clobcolumn" as well. Took me a long time to figure that out after one of our VARCHAR2 columns got changed to a CLOB.

Re: When Java Meets SOA (guest, 2011-09-18)
  It would be useful to understand the performance impact of each case.

Re: Spring Framework Samples (Ramkumar Menon, 2011-09-15)
  Good to hear back from you, and thank you for your feedback, Clemens. I am in the process of putting together an updated sample for the async use case. Do let us know any other recommendations as well!

Re: Spring Framework Samples (Clemens Utschig, 2011-09-15)
  I think there was one as well with async interactions - which the component is capable of as well.

Re: Dynamically updating bpel.xml properties within your BPEL Process (guest, 2011-08-23)
  I want to know why the below property is used in the bpel.xml file:
    <property name="Invocation Mode">local</property>
  Thanks in advance.

Re: Specifying null content in your XML Document using nillable and xsi:nil (guest, 2011-07-13)
  Thanks for the help, saved me a headache. If it's not specced in the definition, you will get an error returned when using SOAP. Usually used for validation.
https://blogs.oracle.com/rammenon/feed/comments/atom
QFileDialog::getOpenFileName messes up cursor stack?!

I have a slot that's directly connected to a button click (so it is executed in the Qt GUI thread), and every time I call QFileDialog::getOpenFileName, no matter whether it is given a parent or not, calling QApplication::setOverrideCursor anywhere afterwards won't work unless I call QApplication::processEvents after it. If I leave the call to getOpenFileName out, setOverrideCursor works perfectly fine as expected, even without processEvents. How can that be? I was debugging through the internal Qt sources, looking at whether getOpenFileName in some way modifies the cursor stack, but I wasn't able to find anything. Am I missing something obvious?

kshegunov (Qt Champions 2017):
Hello, QWidget and its subclasses are not reentrant, nor thread safe. Make sure that any call you make to the GUI part is from the GUI thread. Then, a direct connection is exactly as if you called the function yourself, so I see no reason for one way to work and the other not to. Still, if you could provide some source, it would be helpful. Kind regards.

Didn't I just state in my very first sentence that everything happens in the GUI thread, so multi-threading issues can completely be ruled out? Anyway, I tried to create a minimal example and can clearly reproduce it. I created a QMainWindow in Qt Designer and just added a simple QPushButton to its central widget. The implementation looks like this:

    class TestWindow : public QMainWindow, public Ui::TestWindow
    {
        Q_OBJECT
    public:
        TestWindow() : QMainWindow()
        {
            setupUi(this);
        }

        void init()
        {
            connect(pushButton, &QPushButton::clicked, [] {
                QFileDialog::getOpenFileName(nullptr);
                QApplication::setOverrideCursor(Qt::WaitCursor);
                QThread::currentThread()->msleep(3000);
                QApplication::restoreOverrideCursor();
            });
        }
    };

Chris Kawa (Moderators):
Apparently it's a problem in setOverrideCursor and not in the dialog.
You can change the dialog to something like QMessageBox::warning() or even WinAPI's MessageBox() and it will happen too. I did not dig deep enough, but it seems to be related to the fact that showing any kind of dialog/popup and then hiding it leaves the app as not active until events are processed, and this somehow prevents the cursor from being set. The call stack is a jungle, so with the little effort I made I wasn't able to pinpoint the issue exactly.

kshegunov (Qt Champions 2017):
"Didn't I just state in my very first sentence that everything happens in the GUI thread so multi-threading issues can completely be ruled out?"
I'm just trying to help, no need to get snippy. I missed the "directly" part of your post, sorry. As for your problem, I've tested both the 5.5.1 and 5.6 Qt (both x64) builds I have available on my system (Debian, 4.2.0 kernel) and both work as expected. I cancel the file dialog and after a bit the cursor returns to its proper state. I did remove the setupUi and the form, but I don't expect that to be the problem. Here's the source I use for testing:

    #include <QMainWindow>
    #include <QVBoxLayout>
    #include <QPushButton>
    #include <QFileDialog>
    #include <QApplication>
    #include <QThread>

    class TestWindow : public QMainWindow
    {
        Q_OBJECT
    public:
        TestWindow() : QMainWindow()
        {
            setCentralWidget(new QWidget);
            QVBoxLayout * layout = new QVBoxLayout(centralWidget());
            QPushButton * button = new QPushButton(centralWidget());
            layout->addWidget(button);
            connect(button, &QPushButton::clicked, [] {
                QFileDialog::getOpenFileName();
                QApplication::setOverrideCursor(Qt::WaitCursor);
                QThread::currentThread()->msleep(3000);
                QApplication::restoreOverrideCursor();
            });
        }
    };

Good point, I should have mentioned that I'm still running Qt 5.3.1 on Win8, MSVC x64. I'll try linking with the newest Qt libs. Which Qt version were you using, Chris?

Update: Qt 5.5 also doesn't work for me.

Chris Kawa (Moderators):
I'm on Qt 5.5.1, Win 10, VS2015.
It looks like a very particular scenario and not a general issue, and thus easy to miss, especially since it's related to how a particular window manager works (the offending code seems to be in the Windows platform plugin). I think it's a good candidate for a bug report.

It has already been opened multiple times and even been fixed for Qt 4; see #42117, #40122 and #6525.

Chris Kawa (Moderators):
Oh, then all that's left is to work around it and wait for a fix, or fix it and submit a patch ;)
https://forum.qt.io/topic/61642/qfiledialog-getopenfilename-messes-up-cursor-stack
The average C++ developer has several things to do as part of his or her daily chores: developing new software, debugging other people's code, creating a test plan, developing tests per the plan, managing a regression suite, and so on. Juggling multiple roles can eat away precious time. To help, this article provides 10 effective methods that can increase your productivity.

The examples in this article use tcsh version 6 as a reference, but the ideas are portable to all variants of UNIX® shells. This article also refers to several open source tools available for the UNIX platform.

Keep the data safe

Using rm -rf * or its variants at the shell prompt is probably the most common source of several lost work hours for a UNIX developer. There are several ways to help in this cause: alias rm, cp, and mv to their interactive variants in $HOME/.aliases, then source this file during system startup. Depending on the login shell, this could mean putting source $HOME/.aliases inside .cshrc (for C/tcsh shells) or in .profile or .bash_profile (for the Bourne shell), as shown in Listing 1.

Listing 1. Setting aliases for rm, cp, and mv

    alias rm 'rm -i'
    alias cp 'cp -i'
    alias mv 'mv -i'

Yet another option, specific to users of the tcsh shell, is to add the following line to your startup scripts:

    set rmstar on

If you ever issue rm * with the rmstar variable set, you are prompted to confirm your decision, as shown in Listing 2.

Listing 2. Using the rmstar shell variable in tcsh

    arpan@tintin# pwd
    /home/arpan/IBM/documents
    arpan@tintin# set rmstar on
    arpan@tintin# rm *
    Do you really want to delete all files? [n/y] n

However, if you use the rm, cp, or mv commands with the -f option, interactive mode is superseded. A more effective way is to create your own copy of these UNIX commands and, for deletion, use a predefined folder such as $HOME/.recycle_bin to hold the data. Listing 3 shows a sample script, called saferm, that accepts only file and folder names.

Listing 3.
Outline of a safe-rm script

    #!/usr/bin/tcsh
    if (! -d ~/.recycle_bin) then
        mkdir ~/.recycle_bin
    endif
    mv $1 ~/.recycle_bin/$1.`date +%F`

Automatic backup of data

Comprehensive policy measures are needed to restore data. Data backup might occur nightly or every couple of hours, depending on the requirement. By default, $HOME and all its subdirectories should be backed up using a cron job and kept in a previously disclosed file system area. Note that only the system administrator should have Write or Execute permission to the backed-up data. The following cron entry succinctly describes this point:

    0 20 * * * /home/tintin/bin/databackup.csh

The script backs up the data every day at 20:00. The data backup script is shown in Listing 4.

Listing 4. The data backup script

    cd /home/tintin/database/src
    tar cvzf database-src.tgz.`date +%F` database/ main/ sql-processor/
    mv database-src.tgz.`date +%F` ~/.backup/
    chmod 000 ~/.backup/database-src.tgz.`date +%F`

Yet another policy would be to maintain some file system areas on the network with easy-to-follow names like /backup_area1, /backup_area2, and so on. Developers who want their data backed up should create directories or files in these areas. It is also important to understand that such areas must have the sticky bit turned on, similar to /tmp.

Browse source code

Using the cscope utility, available for download at no charge (see Resources for a link), is a great way to discover and browse existing sources. Cscope requires a list of files (C or C++ headers, source files, flex and bison files, inline sources such as .inl files, and so on) to create its own database. When the database has been created, it provides a neat interface to the source code listings. Listing 5 shows how to build a cscope database and invoke it.

Listing 5. Using cscope to build a source database and invoke it

    arpan@tintin# find . \
        -name "*.[chyli]*" > cscope.files
    arpan@tintin# cscope -b -q -k
    arpan@tintin# cscope -d

The -b option of cscope makes it create the internal database; the -q option makes it create an index file for faster searches; the -k option means that cscope does not look into system headers while searching (otherwise, the result would be overwhelming for even the most trivial of searches). Using the -d option invokes the cscope interface, as shown in Listing 6.

Listing 6. The cscope interface

    Cscope version 15.5                     Press the ? key for help

    Find this C symbol:
    Find this global definition:
    Find functions called by this function:
    Find functions calling this function:
    Find this text string:
    Change this text string:
    Find this egrep pattern:
    Find this file:
    Find files #including this file:

To exit cscope, press Ctrl-D. You use the Tab key to switch between the data cscope lists and the cscope options (for example, Find this C symbol, Find this file). Listing 7 shows the screen snapshot when you search for a file whose name contains database. To look into an individual file, you press 0, 1, and so on, accordingly.

Listing 7. Cscope output when searching for a file named database

    Cscope version 15.5                     Press the ? key for help

    File
    0 database.cpp
    1 database.h
    2 databasecomponents.cpp
    3 databasecomponents.h

    Find this C symbol:
    Find this global definition:
    Find functions called by this function:
    Find functions calling this function:
    Find this text string:
    Change this text string:
    Find this egrep pattern:
    Find this file:
    Find files #including this file:

Learn to debug legacy code with doxygen

To effectively debug code that was developed by someone else, it pays to understand the overall hierarchy of the existing software: the classes and their hierarchy, the global and static variables, and the public interface routines. The doxygen utility (see Resources for a link) is probably the best-in-class tool for extracting class hierarchies from existing sources.
To run doxygen on a project, first run doxygen -g at the shell prompt. This command generates a file called Doxyfile in the current working directory, which must be edited manually. When it has been edited, you re-run doxygen on Doxyfile. Listing 8 shows a sample run log.

Listing 8. Running doxygen

    arpan@tintin# doxygen -g
    arpan@tintin# ls
    Doxyfile
    ...
    [after editing Doxyfile]
    arpan@tintin# doxygen Doxyfile

Doxyfile has several fields that you need to understand. Some of the more important fields are:

- OUTPUT_DIRECTORY. The generated documentation files are kept in this directory.
- INPUT. This is a space-separated list of all the source files and folders whose documentation must be generated.
- RECURSIVE. Set this field to YES when the source code listing is hierarchical. Then, instead of specifying all the folders in INPUT, simply specifying the top-level folder does the required job.
- EXTRACT_ALL. This field must be set to YES to indicate to doxygen that documentation should be extracted from those classes and functions that are undocumented.
- EXTRACT_PRIVATE. This field must be set to YES to indicate to doxygen that private data members of classes should be included in the documentation.
- FILE_PATTERNS. Unless the project does not adhere to the usual C or C++ source and header extension styles, such as .c, .cpp, .cc, .cxx, .h, or .hpp, you don't need to add anything to this field.

Note: Doxyfile has several other fields that you must study, depending on the project requirements and the detail of documentation required. Listing 9 shows a sample Doxyfile.

Listing 9. Sample Doxyfile

    OUTPUT_DIRECTORY = /home/tintin/database/docs
    INPUT            = /home/tintin/project/database
    FILE_PATTERNS    =
    RECURSIVE        = yes
    EXTRACT_ALL      = yes
    EXTRACT_PRIVATE  = yes
    EXTRACT_STATIC   = yes

Use STL and gdb

Most sophisticated pieces of software developed in C++ use classes from the C++ Standard Template Library (STL).
Unfortunately, debugging code inundated with classes from the STL isn't easy, and the GNU Debugger (gdb) often complains of missing information, fails to display the relevant data, or even crashes. To circumvent this problem, use an advanced feature of gdb: the capability to add user-defined commands. For example, consider the code snippet in Listing 10, which uses a vector and displays its contents.

Listing 10. Using an STL vector in C++ code

    #include <vector>
    #include <iostream>
    using namespace std;

    int main ()
    {
        vector<int> V;
        V.push_back(9);
        V.push_back(8);
        for (int i = 0; i < V.size(); i++)
            cout << V[i] << "\n";
        return 0;
    }

Now, while you're debugging the program, if you want to figure out the length of the vector, you can run V._M_finish - V._M_start at the gdb prompt, where _M_finish and _M_start are pointers to the end and beginning of the vector, respectively. However, this requires that you understand the internals of the STL, which may not always be feasible. As an alternative, I recommend gdb_stl_utils (available for download at no charge), which defines several user-defined commands in gdb, such as p_stl_vector_size, which displays the size of a vector, or p_stl_vector, which displays the contents of a vector. Listing 11 describes p_stl_vector, which iterates over the data demarcated by the _M_start and _M_finish pointers.

Listing 11. Using p_stl_vector to display the contents of a vector

    define p_stl_vector
        set $vec = ($arg0)
        set $vec_size = $vec->_M_finish - $vec->_M_start
        if ($vec_size != 0)
            set $i = 0
            while ($i < $vec_size)
                printf "Vector Element %d: ", $i
                p *($vec->_M_start+$i)
                set $i++
            end
        end
    end

For a list of the commands defined by gdb_stl_utils, run help user-defined at the gdb prompt.

Speed up compile time

Making a clean build of the sources for any reasonably complicated piece of software eats into your productive time. One of the best tools for speeding up the compilation process is ccache (see Resources for a link).
Ccache acts as a compiler cache, which means that if a file isn't changed during compilation, it's retrieved from the tool's cache. This results in a tremendous benefit when a user makes changes in a header file and typically invokes make clean; make. Because ccache doesn't judge whether a file is fit for re-compilation using just the time stamp, precious build time is saved. Here's a sample use of ccache:

    arpan@tintin# ccache g++ foo.cxx

Internally, ccache generates a hash that, among other things, takes into consideration the pre-processed version of the source file (obtained using g++ -E), the command-line options used to invoke the compiler, and so on. The compiled object file is stored against this hash in the cache. Ccache defines several environment variables to allow for customization:

- CCACHE_DIR. Here, ccache stores the cached files. By default, the files are stored in $HOME/.ccache.
- CCACHE_TEMPDIR. Here, ccache stores temporary files. This folder should be in the same file system as $CCACHE_DIR.
- CCACHE_READONLY. If the ever-increasing size of the cache folder is a problem, setting this environment variable is useful. If you enable this variable, ccache doesn't add any files to the cache during compilation; however, it still uses the existing cache to look for object files.

Use Valgrind and Electric-Fence with gdb to stop memory errors

C++ programming has several pitfalls, most notably memory corruption. Two open source tools for use in the UNIX environment, Valgrind and Electric-Fence, work in tandem with gdb to help close in on memory errors. Here's a brief guide on how to use these tools.

Valgrind

The easiest way to use Valgrind on a program is to run it at the shell prompt, followed by the usual program options. Note that for optimal results, you should run the debug version of the program.

    arpan@tintin# valgrind <valgrind options> <program name> <program option1> <program option2> ..
Valgrind reports several common memory errors, like incorrect freeing of memory (allocation using malloc and freeing using delete), using variables with uninitialized values, and deleting the same pointer twice. The sample code shown in Listing 12 has an obvious array overwrite problem.

Listing 12. Sample C++ memory corruption issue

    int main ()
    {
        int* p_arr = new int[10];
        p_arr[10] = 5;
        return 0;
    }

Valgrind and gdb work in tandem. Using the --db-attach=yes option in Valgrind, it's possible to directly invoke gdb while Valgrind is running. For example, when Valgrind is invoked on the code in Listing 12 with the --db-attach option, it invokes gdb the instant it first encounters a memory issue, as shown in Listing 13.

Listing 13. Attaching gdb during Valgrind execution

    ==5488== Conditional jump or move depends on uninitialised value(s)
    ==5488==    at 0x401206C: strlen (in /lib/ld-2.3.2.so)
    ==5488==    by 0x4004E35: _dl_init_paths (in /lib/ld-2.3.2.so)
    ==5488==    by 0x400305A: dl_main (in /lib/ld-2.3.2.so)
    ==5488==    by 0x400F87D: _dl_sysdep_start (in /lib/ld-2.3.2.so)
    ==5488==    by 0x4001092: _dl_start (in /lib/ld-2.3.2.so)
    ==5488==    by 0x4000C56: (within /lib/ld-2.3.2.so)
    ==5488==
    ==5488== ---- Attach to debugger ? --- [Return/N/n/Y/y/C/c] ---- n
    ==5488==
    ==5488== Invalid write of size 4
    ==5488==    at 0x8048466: main (test.cc:4)
    ==5488==  Address 0x4245050 is 0 bytes after a block of size 40 alloc'd
    ==5488==    at 0x401ADEB: operator new[](unsigned) (m_replacemalloc/vg_replace_malloc.c:197)
    ==5488==    by 0x8048459: main (test.cc:3)
    ==5488==
    ==5488== ---- Attach to debugger ? --- [Return/N/n/Y/y/C/c] ----

Electric-Fence

Electric-Fence is a set of libraries for detecting buffer overflows and underflows in a gdb-based environment. In the event of an erroneous memory access, this tool, in conjunction with gdb, points to the exact instruction in the source code that caused the access. For example, for the code in Listing 12, with Electric-Fence turned on, Listing 14 shows the gdb behaviour.

Listing 14.
Electric-Fence, showing the exact area in the sources that caused a crash

    (gdb) efence
    Enabled Electric Fence
    (gdb) run
    Starting program: /home/tintin/efence/a.out
    Electric Fence 2.2.0 Copyright (C) 1987-1999 Bruce Perens <bruce@perens.com>
    Electric Fence 2.2.0 Copyright (C) 1987-1999 Bruce Perens <bruce@perens.com>

    Program received signal SIGSEGV, Segmentation fault.
    0x08048466 in main () at test.cc:4
    4           p_arr[10] = 5;

After Electric-Fence installation, add the following lines to the .gdbinit file:

    define efence
        set environment EF_PROTECT_BELOW 0
        set environment LD_PRELOAD /usr/lib/libefence.so.0.0
        echo Enabled Electric Fence\n
    end

Use gprof for profiling

One of the most common programming tasks is improving code performance. To do this, it is important to figure out which sections of the code take the maximum time to execute. In technical terms, this is known as profiling. The GNU profiler tool, gprof (see Resources for a link), is both easy to use and packed with a number of useful features. To collect profile information for a program, the first step is to specify the -pg option when invoking the compiler:

    arpan@tintin# g++ database.cpp -pg

Next, run the program as you would during the normal course. At the end of a successful run (that is, a run with no crash and no call to the _exit system call), the profile information is written to a file named gmon.out. After the gmon.out file is generated, you run gprof on the executable, as shown below. Note that if no executable name is mentioned, a.out is assumed by default. Likewise, if no profile-data file name is mentioned, gmon.out is assumed to be present in the current working directory.

    arpan@tintin# gprof <options> <executable name> <profile-data-file name> > outfile

By default, gprof displays its output on standard output, so you need to redirect it to a file. Gprof provides two sets of information: the flat profile and the call graph, both of which form part of the output file.
The flat profile shows the total amount of time spent in each function. Cumulative seconds indicate the total time spent in a function plus the time spent in other functions called from it. Self seconds indicate the time accounted for by this function alone.

Display source listings in gdb

It's quite common to find developers debugging code over a remote connection that is slow enough not to support a graphical interface like the Data Display Debugger (DDD) for gdb. In such situations, the Ctrl-X-A key combination in gdb proves a lifesaver of sorts, because it displays the source listing during debug. To return to the gdb prompt, use the Ctrl-W-A key combination. Yet another option is to invoke gdb with the -tui option: this directly launches the text-mode source listing. Listing 15 shows gdb being invoked in text mode.

Listing 15. Using gdb source listing in text mode

     3  using namespace std;
     4
     5  int main ()
     6  {
B+>  7      vector<int> V;
     8      V.push_back(9);
     9      V.push_back(8);
    10      for (int i=0; i < V.size(); i++)
    11          cout << V[i] << "\n";
    12      return 0;
    13  }
    14
--------------------------------------------------------------------------------------
child process 6069    In: main    Line: 7    PC: 0x804890e
(gdb) b main
Breakpoint 1 at 0x804890e: file test.cc, line 7.
(gdb) r

Maintain orderly source listings using CVS

Different projects have different coding styles. For example, some advocate the use of tabs in code, while some do not. It is important, however, that all developers adhere to the same set of coding standards. Quite often, this is not the case in the real world. With a version-control system like Concurrent Versions System (CVS), such standards can be enforced effectively by subjecting the file about to be checked in to a list of coding guidelines. To accomplish this task, CVS comes with a set of predefined trigger scripts that come into action when certain user actions occur.
The format for the trigger scripts is simple:

<REGULAR EXPRESSION> <PROGRAM TO RUN>

One of the predefined trigger scripts is the commitinfo file located within the $CVSROOT folder. To check whether the file about to be checked in contains tabs, here's how the commitinfo file would look:

ALL /usr/local/bin/validate-code.pl

The commitinfo file recognizes the ALL keyword (it means that every file being committed should have this check run on it; it's possible to customize the set of files on which these checks are run). The associated script is run on the file to check for source-code guidelines.

Conclusion

This article discussed several freely available tools that can help increase the productivity of C++ developers, both new and experienced. For further details on the individual tools, check out the Resources section.

Resources

Learn

- Learn more about integrating STL with gdb.
- Learn more about the cscope project.
- Visit the doxygen site.
- Find information on how to run Valgrind, along with tool internals, at the Valgrind site.
- Download Electric-Fence.
- Download ccache.
http://www.ibm.com/developerworks/aix/library/au-aixnirvana/index.html
INVENTORY MANAGER PROCEDURES

[1]

TABLE OF CONTENTS

10000 – Inventory Manager Position Agreement ........ 1
10000.1 – Inventory Manager Job Description ........ 3
10000.2 – Action Codes on Inventory Items ........ 4
10000.3 – Alicam Ordering and Monitoring ........ 5
10001 - Areas of Responsibility for Inventory ........ 6
10001.1 – BUDGETING INVENTORY ........ 10
10002 - COUNTING INVENTORY ........ 11
10002.1 – Controlling Inventory Guidelines ........ 19
10003 – Dental Chews Included with Dental Cleanings ........ 23
10004 - "Hospital Use" items ........ 24
10004.1 - Inventory in a Nutshell ........ 26
10004.2 – New Product Requests ........ 28
10005 - ORDERING ........ 30
10005.1 – Pricing and Markup Guidelines ........ 33
10006 - RECEIVING ORDERS ........ 35
10006.1 – Retail Products ........ 37
10006.2 – Returns ........ 39
10007 - Special Orders ........ 40
10008 - Storeroom Organization Policy ........ 46
10009 - Vendors ........ 47
10010 – Weekly Checklist for Inventory Manager ........ 49

[2]

10000 – Inventory Manager Position Agreement

POSITION TITLE: Inventory Manager
SUPERVISOR'S POSITION: Director of Administration

RESULTS STATEMENT: Ensures the hospital is properly supplied and stocked with inventory while following the proper written protocol and staying within the budget.

PRIMARY STATS:
1. Inventory cost as percentage of gross revenue

EVIDENCE OF PROPER CONTROL:
1. Inventory and Supplies – accuracy and cost as a percentage of gross revenue are within the budgeted range (Cost of Goods Sold between 18-23% of VSD). Following inventory order points and order quantities will keep it in this range.
2.
We almost NEVER run out of products that are regularly stocked.
3. Special order products for clients are ordered and supplied to clients as outlined in the procedure.
4. METICULOUS organization and labeling of ALL inventory and supplies throughout the hospital.

[1]

TACTICAL RESPONSIBILITIES:
1. Inventory procedures are all followed exactly as outlined in procedures.
2. Any required judgments needing to be made on special pricing, new products, etc. are all run through the Director of Administration IN WRITING before decisions are made.

STANDARDS:
1. All work will be performed in accordance with Hospital Policies.
2. All work will be performed according to the General Hospital Procedures.
3. Supervisor will be notified of any issues to be resolved or deadlines that cannot be met well in advance of the due date.

SIGNATURES:

Statement of the position holder: I accept the accountabilities of this position and agree to produce the results, perform the work, and meet the standards set forth in the Hospital Policies and General Procedures.

Date: _________ Signature: __________________________ Printed Name: _____________________

Statement of the position holder's supervisor: I agree to provide a working environment, necessary resources, and appropriate training to enable the accountabilities of this position (result, work, standards) to be accomplished.

Date: _________ Signature: __________________________ Printed Name: _________________

[2]

10000.1 – Inventory Manager Job Description

DAILY:
o Monitor vendor drug prices.
o Properly change costs and client prices in AVImark.
o Follow the appropriate written ordering and receiving protocol.
o Change item prices based on invoices with cost changes, using appropriate markups.
o Inventory Binders: We have four inventory binders, all kept in the exec office except when currently in use by the inventory manager:
   Orders – This binder is kept organized UP TO THE MINUTE WITHOUT FAIL when working with any of the four inventory binders.
   Invoices – This is for invoices on received orders, with documentation on each showing the orders have been properly checked in with prices, expirations, etc.
   Weekly Areas of Inventory – This is kept up to date at ALL TIMES, showing clearly that each area of the hospital has been counted properly, by the due date, and that inventory is updated.
   Controlled Drug Invoices – These are for invoices on controlled drugs only.

WEEKLY:
   Assures all areas of inventory have been counted by designated staff each Wednesday by 5pm.
   Submits the "Weekly Area of Inventory Checklist" to the Director of Admin by 5 PM EVERY THURSDAY. This form can be found under "Forms" in the hospital manual.
   Assures all areas of inventory have been counted and adjusted in AVImark by 10am on Thursdays.
   Orders must be placed before 11am on Tuesdays and Thursdays.
   Purchases all in-store needed supplies and inventory ONCE weekly by 5pm Thursday. Purchases via clinic credit card or check, NOT through personal accounts. Ask ED or DOA for the credit card. Store purchases are to be made at Walmart or Costco. Office supplies are to be ordered through OfficeMax when possible.
   Inventory Manager should be scheduled off the floor for IM duties on Tuesdays from 9-11am, Wednesdays from 12-2pm, and Thursdays from 6:45-10am.

MONTHLY AND QUARTERLY:
o See inventory procedure 10002 for monthly and quarterly counts – completed by March 31st, June 30th, Sept 30th, and Dec 31st.
o Before the first day of each month, get the Inventory Department Checklist form located in Forms in the Procedures Manual. You will need to change the date at the top of the form and change the weekly dates. To change the weekly dates:
1. Click on the first date in each column and department and change to the first due date.
2. Repeat for each Department and Save.
3. The form will stay in the Inventory Binder until the end of the month.
4. Turn the completed Inventory Department Checklist in to the Director of Administration by the last day of the month.
[3]

10000.2 – Action Codes on Inventory Items

Every medication that has to be counted and put into a prescription bottle has an Rx Fee. The Action Code is P.
Examples: Rimadyl, Thyrosyn

Any liquid medication that is poured from the original bottle into a prescription bottle has a Handling Fee. The Action Code is H.
Example: Metronidazole

Controlled medications have a Controlled Substance Fee. The Action Code is C.
Examples: Tramadol, Buprenorphine

[4]

10000.3 – Alicam Ordering and Monitoring

We order the Alicam as needed on a patient-by-patient basis AFTER the client has paid for the service with treatment code 2593 "Alicam Monitoring and Consult" in AVImark. It is paid by credit card or appropriate debit card AFTER being approved by the ED, D of A, or owner. Update the vet that requested the order IN WRITING of when to expect the order to arrive.

To order an Alicam call: Joshua Ahern with Infiniti Medical
Main: 650-327-5000
Direct: 817-304-9738

[5]

10001 - Areas of Responsibility for Inventory

Result Statement: All areas of the hospital with supplies, pet food, and vaccines are accurately counted and adjusted in AVImark every week without fail. All purchase orders for supplies in AVImark are accurate. The inventory manager never has to shop for supplies on an emergency basis.

1. Each staff member is responsible for turning in their updated INVENTORY REPORT (AVImark > Inventory button > File > Print > Inventory Report. Select your category and click Print) to the Inventory Manager by 5 PM on Wednesday. There should be a cover page stapled to the front of this report stating the date and time inventory was ADJUSTED to show accurate numbers. If it is not turned in by this time, the employee will lose 5 hours per week for the next 4 weeks. This happens by reporting the missing area to the Director of Admin, who will deduct the hours and give a correction to the staff member in writing. These should be complete BEFORE this time. Excuses such as vacation, busy week, sent home early, etc.
will not be accepted, because they have a week to complete it and can turn it over to someone else while they are on vacation. We have EIGHT areas of responsibility throughout the hospital. You need to have all 8 sheets turned in every week.
1. Bathing Supplies – Bather
2. Cleaning Supplies – Kennel
3. Pet Foods – Kennel Manager (currently being handled by Executive Director)
4. Vaccines, Lab Supplies, AND Client Info Packets and Brochures – Tech or TA
5. Surgery Supplies – Tech
6. Treatment Supplies AND Parasite Preventatives – Tech or Tech Assistant (TA)
7. Reception – Receptionist or Lead Receptionist

2. Each staff member should be aware of order points on the items we DO NOT SELL. An item should turn red when we have at least a ONE WEEK SUPPLY, not less. This avoids the inventory manager having to go buy something at the last minute. The order points on the items WE SELL have already been set. Example: If we have less than a week's supply and it is NOT RED in AVImark, we have a problem! Tell the inventory or hospital manager.

3. Each staff member should also be aware that the inventory sheet column that reads "AVImark adjustment" (on the cover page listed in step 1 above) is the most critical part. Example: We have 35 paper towels in stock. AVImark shows that we have 20. The weekly sheet should have a +15 listed in the correct column.

[6]

4. To complete the weekly count and adjustment for that area(s): The assigned staff member will print their assigned inventory list. Print the inventory list using the following instructions:
   Go to AVImark
   At the top, go to RX
   Go to "File"
   Go to "Print"
   Go to "Inventory Report"
   Choose the category that pertains to you
   Check "Fiscal Inventory Format"
   Check "Preview" and click Print. Your list will come up; verify that it is the list you need.
   If your list is correct, go to the top left and click the printer icon.
   Using the list, go to the area in the hospital where the inventory is kept and verify that the numbers on the count list are correct.
   Verify expiration dates; write the expiration date next to each item on the list (if it applies).
   Check for any items not yet posted (in blue). They are still considered on hand.
   o For example: If you have six bags of Purina OM on the shelves and one bag up front waiting for a client to pick up, it is on their chart but not sold yet, so it is considered part of the current inventory because it is still in the building and not paid for.
   Write the count next to that item.
   Update the numbers in your area of AVImark using the following instructions:
   Go to the area you need to change under RX
   Pull up the item to update by highlighting it
   Double click (left side of mouse)
   The "Change" screen will come up. Go to "On hand" and, to correct the variance, subtract by using – and add by using +.
   Repeat as necessary, updating all of the areas of inventory that need to be updated.
   Once all of your counts have been updated, initial your count sheets and put the sheets in the Inventory Manager's mail box by 8pm every Monday.

Discrepancies: When conducting inventory under assigned sections, the assigned staff member must investigate discrepancies (quantity on hand versus actual count is short/over).

[7]

If short – make sure that stock has been counted in all regular storage areas
   o Example: Prescription Diets are up front in the waiting area and in the Electric Room
If stock has been counted in all regular storage areas – research whether inventory has been entered properly
   o Check the variance against existing invoices
      Btl count
      Amt received
   o Did we receive a delivery on the day of the count which has not been entered into AVImark?

[8]

5. The Inventory Manager will make sure AVImark is adjusted weekly on Thursday by noon. He/she will also make sure the order sheets are turned in weekly.
The inventory manager turns in the "Weekly Areas of Inventory Checklist" to the Director of Admin by 5 PM every Thursday. This form can be found in the hospital manual under the "Forms" or "Inventory Manager" folders.

6. The Inventory Manager will place orders and do any needed shopping at least once weekly as needed. There should never be a need for emergency shopping if the above procedures are followed.

7. The Department Managers or Inventory Manager will train each employee correctly in counting their areas of responsibility weekly, filling out the weekly form correctly, and monitoring the quantities we use on a weekly basis.

Drill/Quiz for Weekly Areas of Responsibility Training (VERBAL answering and discussing):
1. Name the 7 areas of the hospital under "Weekly Areas of Responsibility".
2. How many sheets should you get each week from the individuals? When are they due?
3. What would you do if you don't have all _ sheets from the individuals? How are you going to assure you will have all the sheets turned in each week?
4. What spreadsheet do you turn in every week about this? Where do you find it? Who do you turn it in to? What is the deadline for YOU to turn it in each week?
5. When is inventory adjusted for the amount we actually have in stock? Who does it?
6. Inventory items in AVImark turn red when we need them. How many days of a supply do we need to have when it turns red? Why? Explain how it turns red.
7. Describe a couple of things that could happen if:
   a. We don't adjust AVImark for what we actually have in an area.
   b. An item turns red when we only have two days of a product left.

[9]

10001.1 – BUDGETING INVENTORY

Result: Accurately counted and updated Cost of Goods Sold in AVImark will keep the hospital within the budgeted 18-23%.

Inventory is the second highest expense to the hospital behind payroll. It is CRITICAL that we do not over-order or under-order.
If the inventory manager does the following well, we should always be within the budgeted range for Cost of Goods Sold (COGS) of 18-23%:
1. Inventory is accurate in AVImark at all times (10001, 10002).
2. Order points and order quantities are followed. The order book is organized (10005).
3. Orders are received, counted, and price/cost adjustments are made correctly (10006).
4. The Exec Director and/or Director of Admin are consulted on price specials, expired products, changing order points/quantities, and new products (10004.1 item 2d).

Over-ordering costs too much, creates trouble making room for all products, and leaves products on the shelf that will expire before they are sold. It also ties up money for the hospital that is sitting on the shelves. This money could be used for pay raises, bonus programs, equipment, more staff, etc.

Under-ordering causes us to be out of stock on certain products. This causes inconvenience for the doctors, clients, and staff. It leads to a lack of confidence and loyalty in our clients. They perceive that we are not well-managed. It wastes MANY hours of staff time. In some cases it can lead to improper treatment of pets. This can mean more pain, sickness, or death in serious cases.

Drill:
1. Why is staying within budget so important?
2. What are some things that can happen when there is too much product on hand? Not enough product?

[10]

10002 - COUNTING INVENTORY

Result Statement:
1. To assure we:
   a. Have accurate inventory counts
   b. Have accurate expiration dates
   c. Return (those we can get credit on) or handle (keep on hand, if safe, based on a doctor's decision, or dispose of) any expired medication/supplies
2. To assure our TOP 50 SOLD items are counted and corrected monthly by the 15th of each month
3. To assure ALL ITEMS/SUPPLIES are counted and corrected quarterly by Jan. 2, Apr. 1, July 1, and Oct. 1. Note: Only the items WE SELL need to be counted quarterly.
The other items are counted and adjusted weekly based on the "weekly areas of inventory".

[11]

Top 50 Items – by the first of each month:

Monthly inventory counts are done using the "Top 50" list, turned in to the Director of Administration before the 21st of the month, and updated under the Inventory tab in AVImark.
1. Go to AVImark
2. Choose Work With
3. Choose Practice Analysis (a new window will pop up)
4. In the new window choose:
5. Choose Reports
6. Highlight Top 50 items sold
7. Change Date Value to reflect 1 month (previous month's sales)
8. Choose printer
9. Run
10. Sign off and give the list to the Director of Administration

[12]

Weekly Inventory Counts – due by 8pm every Monday

Weekly inventory counts in the following categories are done by assigned staff members:
1. Bathing
2. Cleaning
3. Lab
4. Reception
5. Retail/OTC
6. Rx/Retail Food
7. Surgery
8. Treatment
9. Vaccines
10. Flea & HW Preventatives
11. Client Info Packets

The assigned staff member will print their assigned inventory list. In AVImark:
i. Choose Work With
ii. Choose Inventory Lists
iii. Choose File, then Print, then Inventory Report
iv. Organize by Category
v. Under Options, choose Sort by Fiscal Format, then Print

The assigned staff member will count the pills, check the expiration date, and write the count and expiration date next to each item they are in charge of. They make the changes/corrections in AVImark, sign the inventory pages, and turn them in to the Inventory Manager by close of business Mondays.

Questionable Discrepancies: When conducting inventory under assigned sections, the assigned staff member must investigate discrepancies (quantity on hand versus actual count is short/over).
1. If short – make sure that stock has been counted in all regular storage areas
   a. Example: Prescription Diets are up front in the waiting area and in the Electric Room
2. If stock has been counted in all regular storage areas – research whether inventory has been inputted properly
   a.
Check the variance against existing invoices
      i. Btl count
      ii. Amt received
   b. Did we receive a delivery on the day of the count which has not been inputted into AVImark?

[13]

   c. Do we have stock up front that has not been picked up and paid for by the client yet – Food & Rx?
   d. Did we charge the client appropriately for the amount or size dispensed?
   e. The worst-case scenario would be an act of pilferage for missing stock – research and speak with Upper Management.

[14]

Quarterly Inventory Counts – by Jan. 2, Apr. 1, July 1, and Oct. 1

1. Scheduling the count. Hint: Your weekly checklist will be a reminder for you not to forget.
   a. Schedule with appropriate managers every quarter to be completed by Jan. 2, April 1, July 1, and Oct. 1 each year.
   b. The date must be chosen BEFORE the 5th of the month PRIOR, and a note given to each department manager. The department manager will schedule each staff member for inventory counting on the staff schedule that is due to the Director of Administration by the 10th.
   c. A note is given to Dept Heads (or those making the staff schedule) by Nov. 5, Feb. 5, May 5, and Aug. 5 to SCHEDULE appropriate quarterly inventory counts by the dates above.
      i. The note will state WHO is scheduled (those with weekly areas of inventory counts who have been here over 90 days). You will have up to 8 people, which should make it go quickly.
      ii. The note will state WHEN they are to be scheduled (4:30-7:30 on a Sat, or 2:30-8:30 or 7-10am on a Sun).
      iii. The note will state that this count should be scheduled in a way that we are NOT SCHEDULING OVERTIME.
   d. It must be 100% completed before leaving.
   e. The Lead Tech or Inventory Manager must count the lock box/controlled drugs while the doctor is present (to get the drug box keys). The Lead Tech or Inventory Manager must be scheduled off the floor while counting.

2. Completing the count:
   a. Only the pharmacy, injectables, and the lock box will be counted. The other areas of inventory are counted weekly.
   b.
The Inventory Manager will print the inventory lists and assign each one for a staff member to count. In AVImark:
      i. Choose Work With
      ii. Choose Inventory Lists
      iii. Choose File, then Print, then Inventory Report
      iv. Sort by Fiscal Format, then Print
   c. The staff member will count the pills, check the expiration date, and write the count and expiration date next to each item they are in charge of. They make the changes/corrections in AVImark, sign the inventory pages, and turn them in to the Inventory Manager.
   d. Double check any counted items that may still be in the reception area waiting to be picked up, or with a patient still in the hospital. Basically, any meds that are still IN BLUE in AVImark are counted as STILL IN THE HOSPITAL.
   e. If a staff member finishes their list, the Inventory Manager will give them another list or have them help another staff member.
   f. The Inventory Manager will review all counts, sign off that inventory is complete, and turn the lists in to the Director of Administration.

[15]

3. The inventory manager completes the vaccine tables steps while the staff is completing the above counting (see below).
4. Day/time stamp the current quarterly inventory count follow-up in AVImark under the patient "Inventory Counts".
5. Sign off on the bottom of each page of the entire list and submit the list to the Director of Administration's inbox.
6. No one leaves until this entire process is complete.
7. Expired Medications:
   a. Fill out the left side of the form "Expired Medications List and Action Plan" found in the manual under "Forms – Misc".
   b. E-mail this completed form to the owner or DVM medical director.
   c. The owner or DVM medical director will decide what actions to take for each medication, fill out the right side of the form, and e-mail it back to the executive director. The ED will create a project and deadline for the Inv. Mgr to complete these steps.

Vaccine Tables
1. Schedule to be completed by Jan. 2, April 1, July 1, and Oct. 1 each year
2. Go to "Work With"
3.
Choose "System Tables"
4. Choose "Vaccine Tables"
5. Enter quarterly expiration dates and lot #'s for vaccines
6. Once completed…time stamp follow-up and post

[16]

DRILL/QUIZ:
1. Why do we do quarterly inventory counts? What would happen if we didn't? If we were late doing it?
2. When are quarterly inventory counts done?
3. When are they scheduled?
4. Who needs to be given a note of when inventory is scheduled? When? How will you remember to do this?
5. Who is required to be scheduled?
6. Who counts the lock box and when?
7. Who is the inventory list given to after the Inventory Manager signs off that inventory is complete?
8. Demonstrate to your trainer by doing a "Print Preview" in AVImark of the inventory, for both the complete list to count (without the weekly areas of inv) and the vaccine tables. Print only page 1.
9. Take page one, pick ONE item, go count it, check the expiration date, write the differences on the list, and adjust both in AVImark. Sign the bottom of the page. Turn it in to the D of A inbox. This is what everyone will do over and over again until it's done that evening.

[17]
[18]

10002.1 – Controlling Inventory Guidelines

Inventory is best kept at a level where running out is always imminent, but rarely happens. This will afford the hospital the best return on the inventory carried.

Stock Levels – this is the best level of inventory to keep. Typically you will want to keep anywhere from 2 to 4 weeks (.5 to 1 month) on hand. The lower the better. To determine how much you sell of a particular item, you would:
1. Run a report in AVImark to find out how much you sold during a certain period.
   a. If you will be using this info off-hand, it is best to go with 10 months. Then all you have to do is move the decimal.
   b. If it will be input into a spreadsheet, then choose the most recent 12 months.
2. Divide the amount sold by the time period: Sold / time = use
3. Once you have the use per month, you can then decide on how much you need to keep.
   a.
Max – should be set at a 1 month supply
   b. Min – should be set at 2 weeks or 1 unit, depending on the need or pack size
4. Use these numbers to set stock levels and reorder points.
5. When you reorder, you should not push your stock level above a 1 month supply (if possible).

Reorder Point: The stock level which triggers an order. (Usually, 1 unit or a 1-2 week supply)
Reorder Amount: The amount that, when added to the supply on hand, leads to the desired stock level (2-4 weeks)

Inventory levels and use can vary seasonally, but can also vary based on new introductions to the market or a new associate joining the practice.

Overstock and overlapping drugs
   Overstock should be returned as soon as possible.
   Look for drugs that may be used in the same circumstances and work with the doctors to remove one of the drugs from the list of what we carry.
   - NSAIDs and FOOD are great examples

[19]
[20]

10003 - Food Rack (Front)

[Shelf-layout chart: placement of canine and feline diet cases, single cans, and bags (6 lb through 32 lb) by shelf row, covering EN, OM, NF, DM, PD, PV, SO, LP, K/D, W/D, A/D, TD, Pro Plan, RC Dental, and Urinary products.]

[21]
[22]

10003 – Dental Chews Included with Dental Cleanings

Result Statement: To ensure great patient dental health and client satisfaction by assuring we are always stocked with complimentary sample chews for dental packages.
Primary Responsible Position: Inventory Manager

When: When the inventory order point is at or below the required quantity, the chews will be ordered.

Items Needed:
   Individual chews
   Branded stickers
   Specific bags to hold the sample chews

How:
   Pull the needed chews from the retail stock.
   Adjust the inventory item in AVImark:
   o REMOVE the correct amount from the retail on-hand qty.
   o ADD the correct amount TO the sample on-hand qty.
   Fill the bags with 5 of the (same size) chews and label with the correct branded sticker.
   Place the filled bags and leftover supplies in the ICU cabinet labeled "Oravet Sample Bags".

Quiz/Drill Questions for "Dental Chews Included with Dental Cleanings":
1. Where do you pull the Oravets from when needed for the samples?
2. How many go in a bag?
3. What goes on the bag?
4. Pretend you need more sample bags of the smallest sized chews for dental cleanings. Show your trainer what you would do in AVImark.
5. Where do you put the samples and leftover supplies?
6. How will you know when you need to make sample bags?

[23]

10004 - "Hospital Use" items

Result: To keep an accurate count of inventory.

Any items being used internally (items we also sell to clients) are to be accounted for, for both inventory and financial purposes. This form must be completed by the staff member who takes the item off the shelf for hospital use. The completed form is put in the Inventory Manager's box. The Inventory Manager will enter the item in AVImark.

Date: _____________________________
Product: ______________________________________
Reason: ________________________________________________________________
_________________________________________________________________________

For Inventory Manager:
Entered in AVImark: Yes or No    Date: ________________________
Inventory Manager Signature: ______________________________________

AVImark Instructions
1. Enter "Julius" into the client selection field
2. Choose "Kent Julius"
3. At the bottom of the account choose "Hospital Use"
4.
Hit "F2", then type either the item # or item description and choose from the list
5. Enter the quantity used and "DO" for doctor
6. Hit "Done"
7. Highlight the item and choose "Notes" from the side column
8. Time stamp, then notate the manager that approved it and what the item is being used for
9. Put completed forms into the Director of Administration's box weekly, on Monday by 5pm.

Examples of Hospital Use items are:
- E/N dog food bought for kennel use
- Pill Pockets for exam room use
- Treats for exam room use

Drill:
- Why do we need to account for hospital use items?
- What would happen if items were used and not entered in AVImark as hospital use?
- Who completes the Hospital Use form? Who enters the item in AVImark? When and where are the forms given to the Director of Administration?

10004.1 - Inventory in a Nutshell

Result: An accurately stocked and organized hospital. Clients and pets receiving high-quality customer service and care.

1. Accurate and Organized at ALL TIMES
   a. Accurate – staff can see what we have in less than two minutes
      i. Items are listed in AVImark, or
      ii. In the area labeled to be checked in.
   b. Organized
      i. Storeroom organization: back-stock and orders received (10008)
      ii. EVERY SINGLE ITEM HAS A PLACE (hospital policies priority #3)
      iii. EVERY ITEM HAS A LABELED PLACE
      iv. Even items needing action or answers have a labeled place (I don't knows/action/returns)
      v. Order Book is meticulously neat and accurate every MINUTE of every DAY.
2. Placing Orders
   a. Twice weekly; additions to this procedure, including order book details (10005)
   b. Based on order points and quantities set in AVImark by Execs
   c. Order book meticulously accurate and organized
   d. Orders needing JUDGMENT are approved by execs IN WRITING and in the book.
3. Receiving Orders
   a. Storeroom organization procedure (10008)
   b. Order Receiving procedure (10006)
4. Counting Inventory
   a. Daily
      i.
Client purchases are accurately entered in AVImark and automatically adjusted when sold
      ii. Order Receiving procedure – daily (10006)
   b. Weekly areas of inventory – adjusted in AVImark and turned in on time (10001)
   c. Monthly – Execs and the CPA office monitor all inventory numbers to assure COGS stays within the budget of 18-23% of VSD.
   d. Quarterly Counts – Hospital-wide inventory accurately counted and adjusted, including expiration dates (10002)

Drill:
1. Why is it so important that the inventory counts are accurate in AVImark?
2. Explain the importance of the weekly and quarterly inventory counts.
3. What is the budgeted % of COGS?

10004.2 – New Product Requests

Result Statement: All NEW items or supplies requested are only ordered if they improve patient care, client service, or employee efficiency AND IMPROVE the financial condition of the hospital.

Resources Needed: All financials on the new product and any existing product it is replacing; "New Product Request" form completely filled out and approved

Primary Responsible Position: Inventory Manager

Steps needed:
1. Gather all financial information (cost, price, order pack, order quantity) on the new product.
2. Gather all financial information on the product that this new product REPLACES, if applicable.
3. Gather all the vendor information, including how long it routinely takes to get this product.
4. Gather all financial information on the AMOUNT of this product we will use and how it helps the hospital financially.
5. Completely fill out the "New Product Request" form found in the hospital manual under "Forms – Misc".
6. Turn in the form to both the Director of Admin AND the ED for approval within 5 days.
7. If approved by BOTH the ED and the D of A:
   a. Enter all the information into AVImark (order qty, order pack, cost, price, etc.)
   b. Create a proper place for the new product and label this space.
   c. If applicable, create a labeled space for the back stock.
   d.
If applicable, create a WRITTEN plan for how any existing product will be replaced BEFORE using this new product.
   e. Order the product per procedure.
   f. Receive the product per procedure.
   g. If applicable, follow the WRITTEN plan above for how you will REPLACE the other product before using this new one.
8. File the completed "New Product Request" form in the inventory binder kept in the administration office.

Drill/Quiz:
1. Why do we have this procedure for new products?
2. What would happen if we didn't?
3. How does this procedure help YOU?
4. Drill through at least one pretend new product with all steps: filling out the "New Product Request" form, demo pieces, a pretend area for the new product, etc.
   a. Example: The current eye med costs $12 a bottle. We can get the exact same one as a generic for $4 a bottle and sell the same number per month. We cannot return the name-brand one. (Hint: We should make MORE net profit using this product and ALSO save our clients money. WIN-WIN!)

10005 - ORDERING

Result Statement: COMPLETE and ACCURATE orders are placed without fail every Tuesday and Thursday, assuring we don't double order or MISS anything needing to be ordered.

HIGHLIGHTS AND GUIDELINES:
- Routine orders are placed twice weekly, on Tuesdays and Thursdays, during scheduled inventory management time.
- The order list is printed through AVImark – see below.
- Items on the order list that must be purchased in person are all purchased EVERY THURSDAY with a shopping trip by the inventory manager during scheduled inventory management time.
- Items on the purchase order that are already on order are notated on the line with the date ordered and the company ordered from.
- Before ordering, look in the binder at the LAST placed "ORDER LIST". Highlight and notate items already ordered.

How to print the "Order List"
1. Go to "Work With" in AVImark
2. Click on "Inventory List"
3. Click on "File" in the top left corner
4. Click on "Order"
5. Click on "New"
6.
Click on "Items from all categories"
7. Click on "Continue"
8. Click "Order" in the top left corner
9. Click "Print"
10. Click "Done"

How to place an order

Vendors to call to place orders:
- The vendor phone numbers are in AVImark in QA.
- As each item is ordered BY PHONE, the line item on the purchase order is highlighted in BLUE.
- The original is 3-hole punched and placed in the "Inventory Binder" in reverse chronological order.
- If an item is out of stock or on back order, write the date the order was placed to the right of the line item.
- If an item has already been ordered but not received, write the date it was ordered to the right of the line item.
- Zoetis Vaccines and Revolution are ordered directly from Zoetis, NOT MWI! Assure we get the VGP Discount!

Vendor List:
o Roadrunner (Oti-Pak E)……..1-877-518-4589
o Henry Schein……..1-631-843-5500
o Specialty Pet Products (Candles)……..1-866-540-7457
o Deter-X……..1-314-282-2776
o Eco-88……..1-281-851-8251
o Pet Detect (hospital collars)……..1-888-224-4408
o Vet Apparel (Rx labels)……..1-800-922-1456 (item # 997)
o NovaCopy……..214-276-0730 (DAL00738)
o IDEXX……..1-888-794-3399
o Patterson (Kennel Kare cleaner)……..1-978-353-6000
o Don't Let It Break……..214-587-9401

Vendors to order online:
- Log into the vendor website using the username and password.
- Items ordered online are highlighted on the "order list" in YELLOW.
- After the order is complete, print the order, 3-hole punch it, and place it in the binder under the correct vendor.
- "Order Lists" are fully highlighted with notations and placed in the binder under the section labeled "Order Lists".

Vendor List: (username for all is [email protected])
o MWI……..mwivet.com
  - DO NOT ORDER ZOETIS VACCINES OR REVOLUTION THROUGH MWI. ORDER THEM DIRECTLY THROUGH ZOETIS. We get a discount this way!
o Zoetis……..zoetisus.com
  - ORDER ZOETIS VACCINES AND REVOLUTION DIRECTLY THROUGH ZOETIS, not from MWI. Make sure we get the VGP Discount!
o Royal Canin……..gateway.royalcanin.us

Drill:
1. How do you know what to order?
2.
What days do you order?
3. Where do you find the vendor phone numbers?
4. Why do you order twice a week? Why not once a week? Why not 4 times a week?
5. Why not wait until ALL areas of weekly inventory are turned in to place an order?
6. Who do you order Zoetis Vaccines and Revolution from?

10005.1 Pricing and Markup Guidelines

Result Statement: The fee schedule reflects our medical standard of care and target client demographic; it also represents the value of our services and products. Setting fees appropriately and charging for all of the services we provide is critical to our practice's financial success, and it gives us the resources we need to continue providing quality patient care and excellent client service.

General Prescription Markups – roughly 3X cost, which is listed in AVImark under the "general" tab for each item as "markup percent". Example: a markup percent of 170 means the client pays cost plus 170% (100 + 170 = 270% of cost total).

As a general rule: Prescription markups are typically 3X cost. The special circumstances where this does not apply are listed below. There will be instances where we have to make judgment calls based on market demand, internet pharmacies, competition, etc. In cases requiring judgment or analysis, the executive director or owner needs to be involved in deciding the markup, listing it below, and e-mailing the new list to the owner for the master copy of the manual.

Prescription Fees "P" and Handling Fees "H":
- P – In general, pills that are counted, or meds that are handled outside of the original packaging, get a prescription fee (P code in AVImark) in addition to the markup.
- H – In general, medications that need to be mixed up or compounded by us get the handling fee (H code in AVImark) in ADDITION to the prescription code (P) and markup in AVImark.

Outside Labs (under treatment list): Typically 2.5X cost, with certain exceptions for special pricing, fecal tests, heartworm tests, etc.
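The markup-percent arithmetic above can be checked with a short sketch. This is illustrative only (the cost figures are made up), not AVImark's actual computation; it just shows how a "markup percent" of 170 yields a client price of 270% of cost (roughly the "3X cost" rule), and how the 2.5X outside-lab rule corresponds to a markup percent of 150.

```python
def client_price(cost, markup_percent):
    """Client price from an AVImark-style markup percent:
    price = cost * (100 + markup%) / 100."""
    return round(cost * (100 + markup_percent) / 100, 2)

# Markup percent of 170 -> client pays 270% of cost (about 3X).
print(client_price(10.00, 170))   # 27.0

# Outside labs are typically 2.5X cost, i.e. a markup percent of 150.
print(client_price(10.00, 150))   # 25.0
```

A quick sanity check like this is also handy when auditing items in AVImark whose markup percent was entered by hand.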
Prepackaged shoppable (Proin, Rimadyl, Metacam) – varies depending on internet pharmacy costs, but is typically about 100% markup.

The affordability of veterinary generic drugs provides multiple benefits for our practice. Our profitability improves because veterinary generic drugs cost less than brand-name drugs; stocking our pharmacy with veterinary generics reduces overhead costs and lets us price drugs competitively with local big-box commercial pharmacies (shoppable) and online retailers, so fewer of our clients' prescriptions walk out the door. By offering veterinary generic drugs, we can provide affordable pet medicines conveniently for our clients and retain the benefits of those sales in our practice. With affordable veterinary generics, we can continue to provide our clients with one-stop service: individualized pet healthcare recommendations, high-quality patient care, and a convenient hospital pharmacy.

Pricing Shoppable Items:
- When switching from a name-brand to a generic product, price the generic so that the client price and markup percentage equate to 10% more net profit than the name brand per tablet.
- Bulk Shoppable Purchases (unopened bottles: 60 count, 100 count, etc.) are priced with appropriate price breaks, without the current Rx fee. To do this, go under the Inventory Item, General tab, Break Qty section. Compare online pharmacy prices for that item and select the bulk quantity price to be within $10.00 of the online price.

Generic Human Label Markups: ALL are 3X cost, which is listed in AVImark under the "general" tab for each item as "markup percent".

Heartworm/Flea combos: Typically around 100% markup, but this also depends on internet pharmacies.

Clomipramine: As of Aug 2014, markup is 100% without a prescription fee. The generic is no longer available through MWI. The client price is cheaper through VFC or other online pharmacies.
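The "10% more net profit than the name brand per tablet" rule under Pricing Shoppable Items can be sketched as a calculation. This is one plausible reading of that rule (target profit per tablet = 110% of the brand's profit per tablet), with hypothetical cost figures, and is not an official pricing formula from the manual.

```python
def generic_price(brand_cost, brand_price, generic_cost):
    """Price a generic so net profit per tablet is 10% more than the brand's.
    (One interpretation of the 'Pricing Shoppable Items' rule; illustrative.)"""
    brand_profit = brand_price - brand_cost
    return round(generic_cost + 1.10 * brand_profit, 2)

# Hypothetical tablet: the brand costs $1.00 and sells for $2.00 ($1.00 profit).
# The generic costs $0.40, so the target profit is $1.10 per tablet.
print(generic_price(1.00, 2.00, 0.40))   # 1.5
```

Note how this achieves the WIN-WIN from the drill above: the client pays less than the brand price while the hospital's per-tablet profit goes up.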
10006 - RECEIVING ORDERS

Result Statement: Orders are regularly received (up to 7 days a week), are placed in the correct labeled area, and are processed and checked in the same day (or by noon the next day). This happens even if the inventory manager is off for multiple days. AVImark accurately reflects all received orders. All products and supplies are kept in the correct labeled areas at all times – before, during, and after check-in.

Orders Received: All received orders are placed within the taped section on the floor of the storeroom labeled "Orders Received". This includes pet food, MWI plastic bins, paper towels, etc. They STAY in this area until they are checked into AVImark and stocked in the appropriate area.

Order check-in: Received orders are checked in and stocked DAILY by 4pm by either the Inventory Manager, Assistant Inventory Manager, Dir of Admin, or ED.

Inventory Receiving Protocol
1. Instruct vendors to drop all shipments in the labeled area of the storeroom.
2. Reconcile shipments to packing slips/invoices – verify each item received against the invoice to ensure correct reconciliation. Initial the invoice upon reconciliation. Controlled Drug items require 2 sets of initials: have a tech or TA confirm we received the controlled drug and initial next to the line item. Controlled drugs must be stored in the lockbox immediately.
   a. Make sure the items on the invoice/packing slip are in the box, then initial the top right corner of the invoice/packing slip.
   b. Enter each item in AVImark by using the "+" sign, entering the quantity, and clicking done. Initial to the left of each item after the quantity is entered.
   c. Adjust prices and expiration dates in AVImark. Initial to the right of each item after the change is made.
   d. Cost decreases are entered, but the client cost remains the same.
   e. Cost increases are entered, and the client cost is increased by the Inventory Manager. Cost increases need to be turned in to the Director of Administration in writing.
On a full piece of paper, write or type the name of the product and the AVImark code for the product, and place it in the DOA's mailbox.
   f. Once complete, file the invoice/packing slip in the inventory received binder in reverse chronological order (most current on top). Controlled drug invoices must be filed in the controlled drug binder.
3. Review out-of-stock and special request forms and fill them, then contact the client and document in AVImark. Refer to procedure number 10007.
4. As retail items are removed from boxes, they are to be priced with the price gun on the top right corner of the product (there may be a rare exception, such as framed art) and placed in the reception retail area. Location: Main Display; Large Chest by Grooming Salon for excess stock items.
5. Items are stocked utilizing the First In, First Out method. New product goes behind old product.

If an order is received after 4 PM, it will be checked in and stocked prior to noon the next day.

Quiz and Drill:
1. How do you document that the order is checked into AVImark?
2. What do you do with the invoices or packing slips? Which one? Where?
3. How are orders received when you are off work?
4. Who checks in orders when you are not here?
5. What if something comes in at 6 PM? Do we check it in right then?
6. Where do the orders that are received go? All of them?
7. How are you going to assure the staff knows who can check in orders? When?
8. What do they do if a box comes in, it's not checked in, and a client wants it? What if it's Saturday at 3:45 PM and it's your off day?
9. How do you handle special orders or out-of-stock medication request forms?
10. What would you do if you find unpacked orders scattered in the receiving room?
11. Show us how you check in an order. Demonstration with an old invoice.
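The ordering (10005) and receiving (10006) procedures both rest on the order points and order quantities kept in AVImark: an item whose on-hand quantity is at or below its order point shows up on the order list. A minimal sketch of that logic follows. It is illustrative only – this is not AVImark's actual behavior, and the item records are hypothetical, borrowed from the retail examples in section 10006.1.

```python
# Sketch: generate an "order list" from order points.
# Illustrative only; item data is hypothetical, not live AVImark records.

items = [
    {"name": "HB Rope Tug Large", "on_hand": 1,   "order_point": 4,  "order_qty": 3},
    {"name": "HB Tennis Balls",   "on_hand": 105, "order_point": 15, "order_qty": 20},
]

# An item needs ordering when on-hand has fallen to or below its order point.
order_list = [
    (i["name"], i["order_qty"]) for i in items if i["on_hand"] <= i["order_point"]
]
print(order_list)   # [('HB Rope Tug Large', 3)]
```

Only the rope tug appears: its on-hand count (1) is at or below its order point (4), while the tennis balls (105 on hand) are well above theirs.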
10006.1 – Retail Products

Result Statement: To assure great customer satisfaction and financial viability of the hospital by offering a selection of safe, high-quality retail pet products that clients readily want and regularly purchase.

Primary Responsible Position: Inventory Manager

Why: Clients trust us and look to us to offer and recommend retail products for their pets that are better than the average products found in pet stores. We offer a niche of products that have been specially selected by our doctors or executives.

How: Retail products and the associated vendors are handled just the same as other "items" in AVImark, but with retail tax. They all have markups, costs, prices, order points, order quantities, etc. They have descriptions in AVImark that should be obvious. Order points and quantities are determined the same as with all items (sales, shipping time, etc.).

Each retail item has an accurate price tag, either in the upper right-hand corner on the FRONT of the product or on the hanging label/tag. This helps receptionists quickly find the item in AVImark. It is very important that the receptionists know exactly how to accurately and efficiently add these items to a client's invoice and check out.

See below for lists of various vendors, retail products, markups, etc.

Vendor / Product / Markup %:
- Harry Barker / Dog Toys / 100%, rounded to the nearest dollar
- Harry Barker / Toy Bins / 100%, rounded to the nearest dollar
- Harry Barker / Waste Bags & Packs / 100%, rounded to the nearest dollar
- Harry Barker / Shampoo, Fragrance / 100%, rounded to the nearest dollar

Retail Item Examples as of April 2016:

Product / Cost / Price / Order Point / Order Pk / O. Qty / On Hd:
- HB Small Cotton Bone / $4 / $8 / 5 / 3 / 6 / 18
- HB Cotton Alligator / $7 / $15 / 5 / 3 / 6 / 6
- HB Bucket / $7 / $14 / 2 / 1 / 3 / 4
- HB Toy Bin Blue/Tan / $9 / $18 / 2 / 3 / 3 / 4
- HB Toy Bin Empire / $10 / $20 / 2 / 3 / 3 / 3
- HB Boat Shoe/Slipper S / $6 / $12 / 4 / 3 / 3 / 12
- HB Boat Shoe/Slipper L / $7 / $14 / 4 / 3 / 3 / 11
- HB Rope Tug Small / $3 / $6 / 4 / 3 / 3 / 12
- HB Rope Tug Medium / $4 / $8 / 4 / 3 / 3 / 10
- HB Rope Tug Large / $5 / $10 / 4 / 3 / 3 / 1
- HB Canvas Bone / $6 / $12 / 4 / 3 / 3 / 6
- HB Rubber Tug Toy / $8 / $16 / 4 / 3 / 3 / 9
- HB Tennis Balls / $1 / $1.50 / 15 / 10 / 20 / 105
- HB Shampoo / $7 / $14 / 1 / 3 / 3 / 2
- HB Refreshing Spray / $5 / $10 / 1 / 3 / 3 / 3
- HB Leopard Dispenser / $7.50 / $15 / 1 / 3 / 3 / 4
- HB Leopard Waste Bag / $0.75 / $1.50 / 5 / 18 / 18 / 18

Drill/Quiz for "Retail Products":
1. Why do we have a full procedure for ordering and stocking retail products this way?
2. What would happen if we just put in one big item and had the receptionists enter the price off of the price tag when a client wants to buy each product?
3. How will this procedure make things easier for YOU?

10006.2 – Returns

Result Statement: Any product (supply or item) that needs to be returned to the vendor is returned quickly and without creating clutter ANYWHERE. Credit is given back to LVH, and we have written proof of this credit.

Resources Needed: "To Be Returned" shelves, "Returns" form.

Primary Responsible Position: Inventory Manager

Steps:
1. Place the product to be returned on the shelves labeled "To Be Returned" in the associates' office. DO NOT PLACE THE PRODUCT ANYWHERE ELSE! THIS CREATES CLUTTER!
2. Fill out the "Returns" form.
3. Order the CORRECT product.
4. Make a copy of the form and put it in the D of A's message box.
5. File the original form in the inventory binder.
6. Prepare the product for the return (MWI to pick up, FedEx box, etc.)
   a. MWI returns ready for pickup go in the phone room, under the dry erase board, in the labeled spot.
   b. FedEx, UPS, and other packages are also kept in this same spot AFTER they are packaged and ready for pickup.
7. Assure the product has been picked up or shipped.
8. Assure the credit is on our account.
Attach WRITTEN proof to the form.
9. Complete the "Returns" form. File it in the inventory manager binder.
10. Turn in a copy of the completed "Returns" form to the D of A message box.

Drill/Quiz (open book):
1. Why do we have this procedure? How does it help YOU?
2. What would happen without this procedure?
3. Go through a drill of this procedure using the form and demo pieces. Explain to the trainer where you're putting things.
4. The procedure is a pass when the trainee has demonstrated it can be done independently and completely.

10007 - Special Orders

Result Statement: All special orders are ordered and available to the client when the product arrives. This ensures the clients and pets receive high-quality customer service.

Primary Accountable Position: Inventory Manager
Participating Positions: Doctor, Department Heads, Execs

Note of Interest: In many cases it will save everyone time, including the client, to NOT order through us. Call in the product to a local pharmacy (compounding pharmacy) or our online pharmacy. In these cases, we don't have to order, price, or receive anything. The client will get it themselves or get it delivered to their home. If you do the above, you can skip all the steps below.

How to Place Special Orders:

When a client makes a special request for food or other items, get a Special Request Form located in "Forms - Misc" on "staff shares" under "Procedures Manual (date)" on all computers. Complete steps 1-5 on the form.

Reception:
1. Informs the client that our inventory manager will call the client back to verify the price and expected date of delivery.
2. Places the form on the Special Order board (located to the right of Dr. Julius's office).

Inventory Manager:
1. Check the board and place special orders DAILY (on the days he/she works, even in another role), with the exception of food. Food orders will be placed on Tuesdays.
2. Get full payment by phone before ordering.
3.
Post this payment to the account as a pre-payment/credit for the full amount.
4. Notify the client of the expected time of food orders, as they will take longer. Call the client or tell them in person TODAY!!

In some cases, a doctor, manager, or exec may need to order this product if the patient needs it earlier and the inventory manager is not working. They still follow these same steps.

Inventory manager places the order:
1. Complete steps 6-8 on the form.
2. Re-post on the Special Order board.

Receiving Orders:
1. Complete step 9 on the form; note it in the client's account in AVImark.
2. Tape the Special Order form to the product.
3. Remove the Special Order form from the product when the client has picked it up. (All needed documentation is already in the patient's file by using the above steps.)

How to Price It and Enter in AVImark:
- In most cases, non-compounded items will need to have a NEW item put into AVImark with the proper cost and price and an order qty of zero.
- For compounded drugs (Roadrunner Pharmacy, for example), enter a treatment on the patient record of "8056 Misc Fee". Change the name to the correct name and size/strength. Change the price to the accurate client price (generally 2.5X cost).

What if the client doesn't pick up:
1. Between 7-10 days after the initial call (telling the client that we have the product in stock), the 2nd call will be made (Step 10).
   a. The receptionist will contact the owner.
   b. Annotate and post the client communication in AVImark.
2. Between 10-13 days after the initial call, the 3rd call will be made (Step 11).
   a. The receptionist will contact the owner.
   b. Annotate and post the client communication in AVImark.
3. After 21 days:
   a. The receptionist will notify the Inventory Manager of products not picked up.
   b. The Inventory Manager will:
      1. Return to Stock – items that we already regularly stock (has an "order qty")
      2. Return to Vendor – items that are not regularly stocked (has an "order qty" of ZERO in AVImark).

Drill for "Special Orders":
1.
Where are the Special Order forms kept?
2. Demonstrate how to print the form.
3. Where does the Special Order form go for the Inventory Manager to be able to see that there's a requested item?
4. When does the Inventory Manager check the Special Order board for orders? How often?
5. After the Inventory Manager finds the special order form, what do they do? If they call the client, what do they do?
6. Do we get payment now or later? When? How? Do we put the charge in as a pre-payment, or with the item as if they already got it?
7. Why is it important that all sections of the form are completed?
8. Why/when would a special order be returned?
9. Why do we have a special order procedure?
10. Demonstrate printing the form, pretending we have a special order, through all steps.
11. What happens if we don't do this procedure completely?

Special Order Products:
- Artificial Tears
- Baytril Otic – We have to order 6 at a time. Can we use something else? $$
- Bitter Orange
- Cefa Drops
- Cerulytic
- Chondroflex per chew – 120 ct or 240 ct only
- Clomipramine 25 mg
- Deramaxx 100 mg – 30 ct bottle
- Deramaxx 100 mg – 90 ct bottle
- Deramaxx 75 mg – 90 ct bottle
- Dermazole 16 oz.
- Digoxin Elixir 0.05 mg/ml
- Frontline Cats and Kittens
- Gentomycin Ophth. Ointment
- Glipizide
- Goodwinol Ointment
- Hyliderm Shampoo
- Mometamax 15 gm
- Napahazoine Eye Ointment
- Nitroglycerine Ointment
- Piroxicam
- Potassium Bromide Chewable Tablets or Oral Solution
- Propanthiline Tabs
- Rimadyl 100 mg – 180 ct bottle
- Rimadyl 100 mg – 60 ct bottle
- Rimadyl 25 mg – 180 ct bottle
- Rimadyl 25 mg – 60 ct bottle
- Rimadyl 75 mg – 180 ct bottle
- Skunk Off Spray
- Spironolactone 25 mg tabs
- VIP Fly Repellant Ointment
- Zeniquin

10008 - Storeroom Organization Policy

Result Statement: Every item in the storeroom has a properly labeled space and is kept in that space AT ALL TIMES.
"Everything has a place where it lives."
- All orders will be checked in on the invoice that comes with the order, signed, and placed in the inbox.
- The food will be stocked in the appropriate spot on the shelves with the correct label, with an arrow above or underneath the food depending on the shelf placement.
- ALL inventory to be stocked will be placed in the "TO BE STOCKED" labeled area on the storeroom floor.
- All empty MWI blue boxes will be stacked on the floor right in front of the ladder so the driver will pick them up. This area is labeled "EMPTY MWI BINS".
- All boxes will be broken down and taken to the dumpster immediately after they have been opened and unpacked.
- Every item in the hospital has an area where it is kept. The area must be labeled for the item, even if it has yet to be checked in.
- In order to fit all back-stock pet food on the shelves, two types of food will often share one shelf. Both labels will be there.

EVERYTHING IS LABELED. NOTHING IS OUT OF PLACE. EVER.

Drill/Quiz:
1. Why do we want to have the storeroom meticulously organized?
2. Where do received orders go before they are checked in?
3. Where do the following items go AFTER they are checked in:
   a. Paper Towels
   b. Pet Food
   c. Pills
   d. CET Chews
4. After the blue MWI plastic boxes are emptied, where do they go?
5. After cardboard boxes are emptied, where do they go? When?
6. How are you going to assure this area stays in place on days/times you are not working?
7. What will you do if we suddenly stop ordering one kind of food and start ordering 10 bags/week of another?

10009 - Vendors

Primitives by Kathy: UN: legacyveterinary PW: animal321!
For our BOX SIGNS. We have various sizes and prices we keep in stock. Basically, we order the best sellers for dogs and cats and the sizes/prices that sell best. The client price is DOUBLE OUR COST. It is important to make sure the product has a price tag on the back and is labeled with the "size" in AVImark.
Harry Barker: Phone: 1-800-444-2779, Fax: 843-763-2030; Customer Number: 162457 (as of April 2016)

Phil Winter: Pet Report Cards and Travel Sheets
Phil Winter's Marketing Communications, 3245 University Ave. Suite 1-525, San Diego, CA 92104
Ph 619-280-7712, Ph 800-803-8832

Zoetis: For Puppy/Kitten Packs and Rimadyl Trials: call/email Brett Hallman directly: 214-886-1334 or

Meagher Graphics: For hospital bags, hats, jackets: Robyn Stanford at [email protected]

Harry Barker: Retail Pet Supplies: 800-244-7779 or 843-766-8686

Amazon.com: Bathroom paper towel rolls

AAA Termite and Pest Control: 972-359-7733

Diamond Lawn Service: 972-417-3831

Vet2Pet App: to order more marketing materials for the app. They are free to us!

MWI:

Viva Concepts: Referral Cards, Hospital Brochures
Ryan Tabib [email protected]
Jeff Sims [email protected]
888-340-1840
Closed Bug 1316236 Opened 6 years ago Closed 6 years ago

graphic glitch on .com after upload image

Categories (Core :: CSS Parsing and Computation, defect, P3)
Tracking () mozilla53
People (Reporter: alice0775, Assigned: xidorn)
Details (Keywords: regression, Whiteboard: [sitewait] [css] [platform-rel-Imgur])
Attachments (5 files)

Build Identifier: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:51.0) Gecko/20100101 Firefox/51.0 ID:20161108004019

Affects Aurora 51.0a2 and Nightly 52.0a1. Does not affect 50.0rc2.

Reproducible: 80%

Steps To Reproduce:
1. Open (no need to log in)
2. Click the [New post] button at the top of the page so that the drop target is shown
3. Drag and drop an image file from Explorer or the desktop onto the drop target, or click the [Browse] button, choose an image file, and then click [Open] in the file picker
4. Observe the graphics

Actual Results:
Graphic glitch. See screencast.

Regression window:
Triggered by: Bug 1274158

I can reproduce this on Windows with Firefox 51. Probably having a too small perspective reveals some graphics issue.

STR:
1. Click [click] and observe the graphics
2. Repeat step 1

Actual Results:
Graphic glitch, though the graphics glitch is quite varied.

Flags: needinfo?(milan)

If we accept the premise of bug 1274158 - that we want zero perspective to be used, not ignored - then this is the consistent behaviour. I'm going to close this as WONTFIX; if we want to go back to the behaviour pre bug 1274158, we should probably reopen the discussion in that bug - it has the right audience. I will put a note in there about this bug.

Status: NEW → RESOLVED
Closed: 6 years ago
Flags: needinfo?(milan)
Resolution: --- → WONTFIX
Version: 51 Branch → 49 Branch

Let me reopen this while we're figuring out if we should just back out bug 1274158, or if this is a tech-evangelism issue.

Status: RESOLVED → REOPENED
Resolution: WONTFIX → ---
Version: 49 Branch → 51 Branch

(In reply to Xidorn Quan [:xidorn] (UTC+10) from comment #3)
> I can reproduce this on Windows with Firefox 51.
> Probably having a too small perspective reveals some graphics issue.

I haven't looked at why these show up at values < 5px, but they do on all browsers. This is really a question of backing out bug 1274158 or tech evangelism.

Component: Graphics → Layout

(In reply to Milan Sreckovic [:milan] from comment #6)

After the animation finishes, the glitch remains in Firefox until scrolling the page/switching tabs, but not in the other browsers. So, at least, the graphics problem is an actual bug in Firefox.

Component: Layout → Graphics

I believe you're talking about the imgur scenario, right? Yes, since that one is using perspective(0), it will behave differently in Firefox from everywhere else. We designed it to behave differently, as of bug 1274158. If we want to match what other browsers do with perspective(0), we have to undo bug 1274158, or at least change what we're doing.

Comment 8 actually talks about replacing perspective(0) with perspective(1px) or perspective(2px) in the "semi-reduced.html" example. In Firefox, the behaviour is the same (close). In Chrome, the modified file then behaves the same as Firefox. So, we have to choose - we either want perspective(0) to behave like perspective("close to zero") - which is the change that bug 1274158 introduced - or we want it to behave like Chrome. Right now, we have the consistency of "zero behaves like it's close to zero"; Chrome does not. That may not be a good enough reason to actually keep the modified behaviour, but it comes down to the CSS handling changes in bug 1274158.

Component: Graphics → Layout

(In reply to Milan Sreckovic [:milan] from comment #10)
> I believe you're talking about imgur scenario, right?
> behaviour, but it comes down to the CSS handling changes in bug 1274158.

No, both the imgur scenario and the semi-reduced html.
Tested with the semi-reduced html (but with perspective(0) changed from 0 to 1px):
- During the animation: the glitch appears in all browsers.
- After the animation completes: the glitch remains only in Firefox (after scrolling the page, the glitch is gone). The other browsers show no glitch.
But here's a demo w/ perspective(0) that glitches out on my mac as well: in global.css imgur has a bunch of animations (removed the prefixed versions):

@keyframes flipin {
  from { transform:perspective(500px) translate(-50%,0) rotateX(-90deg) rotateY(0); opacity:.5 }
  to { transform:perspective(0) translate(-50%,115%) rotateX(0) rotateY(0); opacity:1 }
}
@keyframes bounceup {
  from { transform:perspective(0) translate(-50%,115%) rotateX(0) rotateY(0); opacity:1 }
  to { transform:perspective(0) translate(-50%,98%) rotateX(0) rotateY(0); opacity:1 }
}
@keyframes stayput {
  from,to { transform:perspective(0) translate(-50%,98%) rotateX(0) rotateY(0); opacity:1 }
}
@keyframes bouncedown {
  from { transform:perspective(0) translate(-50%,98%) rotateX(0) rotateY(0); opacity:1 }
  to { transform:perspective(0) translate(-50%,115%) rotateX(0) rotateY(0); opacity:1 }
}
@keyframes flipout {
  from { transform:perspective(0) translate(-50%,115%) rotateX(0) rotateY(0); opacity:1 }
  to { transform:perspective(500px) translate(-50%,0) rotateX(-90deg) rotateY(0); opacity:5px }
}

I would guess they're trying to "reset" perspective w/ the 0 value. But they can just remove all the instances of "perspective(0)" to get the same result -- I've tested locally.

Whiteboard: [needscontact]

Yes, in Firefox perspective(0) behaves like perspective(1px). The "can't reproduce on OS X" is the "keeps looking wrong after animation finishes and we start scrolling". Or are you seeing that problem on OS X?

> Or are you seeing that problem on OS X?

Nah, not as described in the original STR. I can see the glitch occasionally on OSX w/ this demo however: (might have to hover on or off a few times). By the way, I'm guessing the effect they want is a perspective(infinity) type of thing. At least, in the semi-reduced case, that's what gives me a good-looking result (slides in from the left), and it matches in all browsers. Adam, do we have contacts at imgur?
Flags: needinfo?(astevenson) Karl reached out 16 days ago to imgur in another bug, but no response yet. We can find more contacts to reach out to. Flags: needinfo?(astevenson) Would be good to look for more contacts, it seems. Contacted @briankassouf by email. Seems like the imgur person from might be another good option. Note that they recorded the issue for this bug. > As for the more recently noted one, I opened a ticket for us to investigate (in FF 50.1.0 in OS X 10.12.2 I don't see the tooltip at all...) but can't really comment on any timeline to resolution. URL: Whiteboard: [needscontact] → [sitewait] [css] Priority: -- → P3 I can't repro the glitch anymore, would you mind verifying Alice0775? Flags: needinfo?(alice0775) (In reply to Mike Taylor [:miketaylr] from comment #25) > I can't repro the glitch anymore, would you mind verifying Alice0775? I can still reproduce the problem with STR comment#0 and attachment 8808923 [details] on Latest Nightly53.0a1[1]. [1] Mozilla/5.0 (Windows NT 10.0; WOW64; rv:53.0) Gecko/20100101 Firefox/53.0 ID:20170104030214 Flags: needinfo?(alice0775) OK, thanks. (Interesting that it didn't repro for me in Win/53.) Dees, do you have any contacts here? We're having trouble reaching them. Flags: needinfo?(dchinniah) According to Comment #24, Karl was in touch with imgur and they filed an internal ticket for this. (In reply to Milan Sreckovic [:milan] from comment #28) > Dees, do you have any contacts here? We're having trouble reaching them. I don't. However if :karlcow's contact doesn't move it forward — I could try other avenues. Let me know once you have feedback... platform-rel: --- → ? Flags: needinfo?(dchinniah) Whiteboard: [sitewait] [css] → [sitewait] [css] [platform-rel-Imgur] Note that it was __only__ 3 weeks ago. And the contact said explicitly > … but can't really comment on any timeline to resolution. — It is recorded. They know about it. 
I can try to send a gentle reminder if you want, but I would not want to be annoying too. Companies have priorities too. :) The other option is to treat perspective(0) as an identity transform rather than being perspective(small-number). ... which is what the CSSWG just agreed to do, given that it seems like the most compatible thing. (i.e., treating perspective(0) as perspective(infinity), including for animation, where perspective is currently animated as a matrix but should be animated by interpolating the *reciprocal* of the argument). Component: Desktop → Layout: Web Painting Flags: needinfo?(xidorn+moz) Product: Tech Evangelism → Core Version: Firefox 51 → 51 Branch Given that CSSWG decided to treat perspective(0) different from perspective(calc(0)), I think it is actually not a Web Painting issue, but a style system issue, that we need to handle them differently. To achieve that, we probably need to change ProcessPerspective (in nsStyleTransformationMatrix.cpp) to handle the zero value specifically in advance. If we also need to do the same thing for the perspective property (which I doubt), we would probably need to add a new flag for SetCoord in nsRuleNode.cpp to handle zero coord value and zero value from calc differently. Probably not very hard to change... when I have time :/ Component: Layout: Web Painting → CSS Parsing and Computation Per comment #34, the behavior between perspective(0) & perspective(calc(0)) is different. I doubt that we can fix this in 51. Mark 51 fix-optional and let's try to fix this in 52. Comment on attachment 8827273 [details] Bug 1316236: Treat zero perspective as inf perspective. I bet this would not fix this issue. This issue is not really about rendering perspective(0), but about animating between perspective(0) and other perspective value. Attachment #8827273 - Flags: review?(xidorn+moz) → review- OK, I'll take a look at this tomorrow. I may have an idea about how to fix it elegantly. 
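As an aside, the reciprocal interpolation described above is easy to sketch outside of Gecko. In this illustrative Python helper (interp_perspective is an invented name, not a Gecko function), a depth of 0 is treated as infinity, whose reciprocal is 0, so interpolating between perspective(0) and a finite perspective stays smooth:

```python
def interp_perspective(d0, d1, t):
    # Interpolate perspective depths by interpolating the *reciprocal*
    # of the argument, per the CSSWG resolution described above.
    # Depth 0 is treated as infinity, whose reciprocal is 0.
    r0 = 0.0 if d0 == 0 else 1.0 / d0
    r1 = 0.0 if d1 == 0 else 1.0 / d1
    r = (1 - t) * r0 + t * r1
    return float("inf") if r == 0 else 1.0 / r

# Halfway between perspective(0) (treated as infinity) and perspective(500px),
# the reciprocal is 0.001, i.e. a depth of roughly 1000px:
print(interp_perspective(0.0, 500.0, 0.5))
```

Interpolating the raw depth instead would jump straight from a huge value toward 500px; interpolating the reciprocal is what makes "zero means infinity" animate sensibly.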
Assignee: nobody → xidorn+moz Flags: needinfo?(xidorn+moz) (In reply to Xidorn Quan [:xidorn] (UTC+10) from comment #37) > Comment on attachment 8827273 [details] > Bug 1316236: Treat zero perspective as inf perspective. > > > > I bet this would not fix this issue. This issue is not really about > rendering perspective(0), but about animating between perspective(0) and > other perspective value. OK, except that it does fix it for me, and it does so by treating perspective(0) as perspective(inf) which is what I thought was the proposed solution. So, I'll take your bet :) OK, I think you are right that this should fix the issue, as the interpolation computation of perspective function all goes to that function eventually. I was also worrying about handling of perspective(calc(0)) which, according to CSSWG's minutes, should be handled differently from perspective(0). But after digging our code a bit, I think that requirement is hard to implement, and I don't think that actually adds much value. So I guess something like this patch should be enough. The only thing may need fix on top of your patch is the distance calculation, which doesn't use that function, and currently clamp zero to epsilon. I'm not sure whether we should check "> 0" or ">= epsilon". The minimum positive value float can represent is ~1.4e-45, whose reciprocal is infinite, while epsilon is ~1.19e-7, whose reciprocal is still finite (~8.39e6). (In reply to Xidorn Quan [:xidorn] (UTC+10) from comment #40) > I was also worrying about handling of perspective(calc(0)) [...] > I don't think that actually adds much value. I was going to ask that you raise this question with the CSSWG, but it looks like you already have here: Thanks! Comment on attachment 8828618 [details] Bug 1316236 - Treat perspective(0) as perspective(infinity). 
r=me with nits addressed: ::: layout/reftests/w3c-css/submitted/transforms/perspective-zero.html:14 (Diff revision 1) > +#ref { > + background: red; Minor naming issue (which makes this test a little harder to reason about): "#ref" feels like a misleading name for this element, especially when paired with #test on the other element. It *suggests* that this is some sort of reference for the #test element, showing the expected rendering. But it's not at all -- we're actually expecting #ref to be covered up here (which is why it's red) -- and if any of it is visible, then that means something went wrong. Maybe something like #coverMe or #shouldBeCovered would be a clearer ID for this element? ::: layout/reftests/w3c-css/submitted/transforms/perspective-zero.html:21 (Diff revision 1) > + /* If perspective(0) is invalid, #test would not create a stacking > + * context, and #ref would be placed on top of #test showing red. > + * If perspective(0) is handled as perspective(epsilon) rather than > + * perspective(infinity), #test would be invisible. */ This comment describes two possible bad outcomes but doesn't indicate the expected good outcome. Perhaps start with that? (something along the lines of "This transform should [...] so that it covers up [...]" ::: layout/reftests/w3c-css/submitted/transforms/reftest.list:2 (Diff revision 1) > == transform-containing-block-dynamic-1b.html containing-block-dynamic-1-ref.html > == perspective-containing-block-dynamic-1a.html containing-block-dynamic-1-ref.html > == perspective-containing-block-dynamic-1b.html containing-block-dynamic-1-ref.html > + > +== perspective-zero.html green.html Perhaps best to delete the blank line separating this new test from the rest of the file? I don't see why we'd want to visually separate this one test from the others here. ::: layout/style/nsStyleTransformMatrix.h:33 (Diff revision 1) > * A helper to generate gfxMatrixes from css transform functions. 
> */ > namespace nsStyleTransformMatrix { > > + // Function for applying perspective() transform function. We treat > + // any value smaller than epsilon as perspective(infinite), which s/infinite/infinity/, I think. ("infinite" is an adjective; "infinity" is the noun for the quantity) ::: layout/style/nsStyleTransformMatrix.h:35 (Diff revision 1) > namespace nsStyleTransformMatrix { > > + // Function for applying perspective() transform function. We treat > + // any value smaller than epsilon as perspective(infinite), which > + // follows CSSWG's resolution on perspective(0). See bug 1316236. > + inline void PerspectiveMatrix(mozilla::gfx::Matrix4x4& aMatrix, float aDepth) Perhaps this should be named "ApplyPerspectiveToMatrix", since that's what it does? The current name, "PerspectiveMatrix()", sounds more like a function that *returns* a perspective matrix to me, rather than a function that modifies one of its parameters. Attachment #8828618 - Flags: review?(dholbert) → review+ Pushed by xquan@mozilla.com: Treat perspective(0) as perspective(infinity). r=dholbert Status: REOPENED → RESOLVED Closed: 6 years ago → 6 years ago status-firefox53: --- → fixed Resolution: --- → FIXED Target Milestone: --- → mozilla53 Comment on attachment 8828618 [details] Bug 1316236 - Treat perspective(0) as perspective(infinity). Approval Request Comment [Feature/Bug causing the regression]: bug 1274158 [User impact if declined]: may see glitch on some websites with CSS transition or animation [Is this code covered by automated tests?]: a new test is added in this patch [Has the fix been verified in Nightly?]: just landed. [Needs manual test from QE? If yes, steps to reproduce]: no. [List of other uplifts needed for the feature/fix]: n/a [Is the change risky?]: may not be very risky. shouldn't be riskier than bug 1274158 which is already in beta. [Why is the change risky/not risky?]: it only covers a specific case. 
This patch is changing the behavior of exactly the same case as bug 1274158, but to match other browsers' behavior. As far as no more regression shows up with bug 1274158, this patch shouldn't be riskier. [String changes made/needed]: n/a Attachment #8828618 - Flags: approval-mozilla-beta? Attachment #8828618 - Flags: approval-mozilla-aurora? remote: vendor-imports/mozilla/mozilla-central-reftests/transforms/green.html status changed to 'Needs Work' due to error: remote: Not linked to a specification. This presumably either needs to have -ref in the filename or be in a reference (?) subdirectory. Flags: needinfo?(xidorn+moz) Pushed by xquan@mozilla.com: followup - Move green.html into reference subdirectory. ^ Moved the reference file. Flags: needinfo?(xidorn+moz) platform-rel: ? → + Comment on attachment 8828618 [details] Bug 1316236 - Treat perspective(0) as perspective(infinity). fix visual glitch on animations, beta52+ Attachment #8828618 - Flags: approval-mozilla-beta? Attachment #8828618 - Flags: approval-mozilla-beta+ Attachment #8828618 - Flags: approval-mozilla-aurora? Flags: in-testsuite+ (In reply to Xidorn Quan [:xidorn] (UTC+10) from comment #50) > ^ Moved the reference file. Thanks; that fixed it. I'm responsible for the code on imgur that led to this issue. My intention was to reset the perspective but did so incorrectly due to ignorance. I'm submitting a fix now that sets the perspective to none, by omitting perspective as advised above.
https://bugzilla.mozilla.org/show_bug.cgi?id=1316236
Questions re porting my Flixel game to HaxeFlixel

Hi, Currently I'm facing problems converting all imports from original Flixel to HaxeFlixel and not sure how I should change them. I've converted my code from as3 to Haxe with the as3hx tool. For example, what should import *.flixel.FlxRect; be? My best guess, based on the documentation, was that FlxRect now lives in flixel.math, but import flixel.math won't work.

This may have already been tested by you, since I can't exactly comprehend things right now, but did you try "import flixel.math.FlxRect" or "import flixel.math.*"?

@test84 You can search classes from here : from left panel :)

We are making a MMORPG w/HaxeFlixel => Click Here to go to the Facebook Page

@KnotUntied I solved that but now faced a more serious one, FlxWeapon. I know it's in addons but import flixel.addons.*; doesn't help. Nor does import flixel.addons.FlxWeapon; or import flixel.addons.weapon.FlxWeapon; I don't understand it, isn't it in the package flixel.addons.weapon?

@eminfedar Yes, the problem is how to use them. See my answer to @KnotUntied.

@KnotUntied Yes, I just mentioned it in my previous reply. It returns: Type not found : flixel.addons.weapon.FlxWeapon And yes, I've installed flixel-addons.

How about "import flixel.addons.weapon.FlxTypedWeapon"? I checked the addons library's changelog, and apparently FlxWeapon was abstracted to FlxTypedWeapon.

Fixed it, had to add <haxelib name="flixel-addons" /> to the project.xml file.

I'm glad the problem's solved.

...and mine still isn't. weeps
http://forum.haxeflixel.com/topic/119/questions-re-porting-my-flixel-game-to-haxeflixel
Writing your first Django app, part 4

This document is for Django's SVN release, which can be significantly different from previous releases. Get old docs here: 0.96, 0.95.

This tutorial begins where Tutorial 3 left off. We're continuing the Web-poll application and will focus on simple form processing and cutting down our code.

Write a simple form

Let's update our poll detail template ("polls/detail.html") from the last tutorial, so that the template contains an HTML <form> element:

<h1>{{ poll.question }}</h1>
{% if error_message %}<p><strong>{{ error_message }}</strong></p>{% endif %}
<form action="/polls/{{ poll.id }}/vote/" method="post">
{% for choice in poll.choice_set.all %}
    <input type="radio" name="choice" id="choice{{ forloop.counter }}" value="{{ choice.id }}" />
    <label for="choice{{ forloop.counter }}">{{ choice.choice }}</label><br />
{% endfor %}
<input type="submit" value="Vote" />
</form>

When somebody selects one of the radio buttons and submits the form, it'll send the POST data choice=3 (if they selected the choice whose ID is 3). This is HTML Forms 101.

- We set the form's action to /polls/{{ poll.id }}/vote/, and we set method="post".
- forloop.counter indicates how many times the for tag has gone through its loop. For more information, see the documentation for the "for" tag.

Now, let's create a Django view that handles the submitted data and does something with it. Remember, in Tutorial 3, we created a URLconf for the polls application that includes this line:

(r'^(?P<poll_id>\d+)/vote/$', 'mysite.polls.views.vote'),

So let's create a vote() function in mysite/polls/views.py:

from django.shortcuts import get_object_or_404, render_to_response
from django.http import HttpResponseRedirect
from django.core.urlresolvers import reverse
from mysite.polls.models import Choice, Poll

For more information about reverse(), see the URL dispatcher documentation. As mentioned in Tutorial 3, request is a HTTPRequest object. For more on HTTPRequest objects, see the request and response documentation.

Now, go to /polls/1/ in your browser and vote in the poll. You should see a results page that gets updated each time you vote.
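The body of the vote() function itself does not survive in the text above. Based on the surrounding description (get_object_or_404, redisplaying polls/detail.html with an error_message, and redirecting through reverse() after a successful vote), a plausible reconstruction looks like the following; treat it as a sketch consistent with this era of Django rather than the tutorial's verbatim code:

```python
def vote(request, poll_id):
    p = get_object_or_404(Poll, pk=poll_id)
    try:
        selected_choice = p.choice_set.get(pk=request.POST['choice'])
    except (KeyError, Choice.DoesNotExist):
        # Redisplay the poll voting form with an error message.
        return render_to_response('polls/detail.html', {
            'poll': p,
            'error_message': "You didn't select a choice.",
        })
    else:
        selected_choice.votes += 1
        selected_choice.save()
        # Return an HttpResponseRedirect after successfully dealing with
        # POST data, so a user hitting Back doesn't re-post the vote.
        return HttpResponseRedirect(reverse('mysite.polls.views.results',
                                            args=(p.id,)))
```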
If you submit the form without having chosen a choice, you should see the error message. Use generic views: Less code is better. First, open the polls/urls.py URLconf. It looks like this, according to the tutorial so far: from django.conf.urls.defaults import * urlpatterns = patterns('mysite.polls.views', (r'^$', 'index'), (r'^(?P<poll_id>\d+)/$', 'detail'), (r'^(?P<poll_id>\d+)/results/$', 'results'), (r'^(?P<poll_id>\d+)/vote/$', 'vote'), ) Change it like so: from django.conf.urls.defaults import * from mysite.polls.models import Poll info_dict = { 'queryset': Poll.objects.all(), }'), ) We’re using two generic views here: object_list and object_detail. Respectively, those two views abstract the concepts of “display a list of objects” and “display a detail page for a particular type of object.” - Each generic view needs to know what data it will be acting upon. This data is provided in a dictionary. The queryset key in this dictionary points to the list of objects to be manipulated by the generic view. - The object_detail generic view expects the ID value captured from the URL to be called "object_id", so we’ve changed poll_id to object_id for the generic views. - We’ve added a name, poll_results, to the results view so that we have a way to refer to its URL later on (see the documentation about naming URL patterns for information). We’re also using the url() function from django.conf.urls.defaults here. It’s a good habit to use url() when you are providing a pattern name like this. By default, the object_detail generic view uses a template called <app name>/<model name>_detail.html. In our case, it’ll use the template "polls/poll_detail.html". Thus, rename your polls/detail.html template to polls/poll_detail.html, and change the render_to_response() line in vote(). Similarly, the object_list generic view uses a template called <app name>/<model name>_list.html. Thus, rename polls/index.html to polls/poll_list.html. 
Because we have more than one entry in the URLconf that uses object_detail for the polls app, we manually specify a template name for the results view: template_name='polls/results.html'. Otherwise, both views would use the same template. Note that we use dict() to return an altered dictionary in place. Note all() is lazy It might look a little frightening to see Poll.objects.all() being used in a detail view which only needs one Poll object, but don’t worry; Poll.objects.all() is actually a special object called a QuerySet, which is “lazy” and doesn’t hit your database until it absolutely has to. By the time the database query happens, the object_detail generic view will have narrowed its scope down to a single object, so the eventual query will only select one row from the database. If you’d like to know more about how that works, The Django database API documentation explains the lazy nature of QuerySet objects. In previous parts of the tutorial, the templates have been provided with a context that contains the poll and latest_poll_list context variables. However, the generic views provide the variables object and object_list as context. Therefore, you need to change your templates to match the new context variables. Go through your templates, and modify any reference to latest_poll_list to object_list, and change any reference to poll to object. You can now delete the index(), detail() and results() views from polls/views.py. We don’t need them anymore — they have been replaced by generic views. The vote() view is still required. However, it must be modified to match the new context variables. In the render_to_response() call, rename the poll context variable to object. The last thing to do is fix the URL handling to account for the use of generic views. In the vote view above, we used the reverse() function to avoid hard-coding our URLs. Now that we’ve switched to a generic view, we’ll need to change the reverse() call to point back to our new generic view. 
We can’t simply use the view function anymore — generic views can be (and are) used multiple times — but we can use the name we’ve given: return HttpResponseRedirect(reverse('poll_results', args=(p.id,))) Run the server, and use your new polling app based on generic views. For full details on generic views, see the generic views documentation. The tutorial ends here for the time being. But check back soon for the next installments: - Advanced form processing - Using the RSS framework - Using the cache framework - Using the comments framework - Advanced admin features: Permissions - Advanced admin features: Custom JavaScript In the meantime, you can read through the rest of the Django documentation and start writing your own applications. Questions/Feedback If you notice errors with this documentation, please open a ticket and let us know! Please only use the ticket tracker for criticisms and improvements on the docs. For tech support, ask in the IRC channel or post to the django-users list.
http://www.djangoproject.com/documentation/tutorial04/
Code: Transcript:

We import TensorFlow as tf.

import tensorflow as tf

Then we print out what version of TensorFlow we are using. We are using 1.0.1.

print(tf.__version__)

Next, let's import NumPy as np

import numpy as np

And then print out what version of NumPy we are using.

print(np.__version__)

We are using NumPy 1.13.3.

In this video, we are going to create a placeholder tensor using the TensorFlow placeholder operation. No data will be provided for the tensor until the execution within a TensorFlow session. At that point, we will use a feed_dict argument which will provide the data that will fill the placeholder tensor. All right, let's get started.

In order to do that, let's start by defining a TensorFlow placeholder tensor that will eventually hold data of data type 32-bit signed integers and have a shape of 1x2x3.

placeholder_ex = tf.placeholder(tf.int32, shape=(1, 2, 3))

We create this placeholder tensor using the TensorFlow placeholder operation and assign it to the Python variable placeholder_ex. So you see tf.placeholder. We give it the data type tf.int32, and we give it a shape of 1x2x3, and we assign it to the Python variable placeholder_ex.

When we print out what our placeholder_ex Python variable has

print(placeholder_ex)

We see that it's a tensor. It's a placeholder. The shape is 1x2x3. And the data type is int32.

Let's also define a TensorFlow addition operation where we add this placeholder tensor to itself.

placeholder_sum_ex = tf.add(placeholder_ex, placeholder_ex)

We create the addition using the tf.add operation and assign it to the Python variable placeholder_sum_ex. So we use tf.add(placeholder_ex, placeholder_ex) and assign it to the Python variable placeholder_sum_ex.

Let's print out our placeholder_sum_ex Python variable to see what's in it.

print(placeholder_sum_ex)

We see that the operation is add, the shape is 1x2x3, and the data type is int32.
Because TensorFlow’s add operation does element-wise addition of tensors, this makes sense. Now that we’ve created our TensorFlow variable, it’s time to run the computational graph. We launch the graph in the session. sess = tf.Session() Then we initialize all the global variables in the graph. sess.run(tf.global_variables_initializer()) Now that we’ve done that, let’s print our placeholder_ex tensor in the TensorFlow session to see the values. print(sess.run(placeholder_ex)) Yikes! We get an error that says, “You must feed a value for placeholder tensor ‘Placeholder’ with dtype int32 and shape [1,2,3].” A TensorFlow placeholder tensor is just that, a placeholder, so printing it doesn’t make any sense and that’s why we get an error. Let’s also check out the placeholder_sum_ex tensor in a TensorFlow session to see the value. print(sess.run(placeholder_sum_ex)) We get the same thing. We get an error that says, “You must feed a value for placeholder tensor ‘Placeholder’.” This is expected because placeholder_sum_ex is just the sum of the placeholder tensor with itself. So having it not have any values when we evaluate it, adding it to itself would definitely still not have any values. So to properly use the placeholder tensor, we have to pass a feed_dict to it. Since we know the placeholder is an int32 tensor expecting a 1x2x3 shape, we can create a NumPy multidimensional array to feed into it. mda_ex = np.ones((1,2,3), dtype="int32") To do that, we’re going to create a 1x2x3 NumPy array filled with 1s. So we’re using np.ones and we’re going to give it the data type int32, and we’re going to assign it to the Python variable mda_ex. mda_ex Now that we’ve done that, let’s print the evaluation of the placeholder_sum_ex when we use this mda_ex array in the feed_dict. print(sess.run(placeholder_sum_ex, feed_dict={placeholder_ex: mda_ex})) So we see print(sess.run(placeholder_sum_ex). For the feed_dict, we’re going to pass in the placeholder_ex. 
We want to use this NumPy multidimensional array. When we do that, we see that the result is a 1x2x3 tensor whose value is all 2s, which is what we expect because the placeholder tensor was full of 1s, so when we added it to itself, 1+1 is 2, and because it’s element-wise addition, all the element places are the integers 2. Finally, we close the TensorFlow session to release the TensorFlow resources used within the session. sess.close() That is how you create a TensorFlow placeholder tensor and then when it needs to be evaluated, pass a NumPy multidimensional array into the feed_dict so that the values are used within the TensorFlow session to fill out the TensorFlow placeholder tensor.
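Since tf.add is element-wise, the arithmetic the session performs can be checked without TensorFlow at all. The following plain-Python sketch (no TensorFlow or NumPy required) mirrors feeding a 1x2x3 array of ones into the placeholder and adding it to itself:

```python
# A 1x2x3 "tensor" of ones, as nested lists, standing in for mda_ex.
mda_ex = [[[1, 1, 1], [1, 1, 1]]]

def add_elementwise(a, b):
    # Recursively add nested lists element by element, like tf.add.
    if isinstance(a, list):
        return [add_elementwise(x, y) for x, y in zip(a, b)]
    return a + b

result = add_elementwise(mda_ex, mda_ex)
print(result)  # [[[2, 2, 2], [2, 2, 2]]]
```

This is exactly the all-twos 1x2x3 result the transcript describes: 1+1 is 2 in every element position.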
https://aiworkbox.com/lessons/create-a-tensorflow-placeholder-tensor
Hello All, I'm having trouble figuring out a way to convert the row returned from a basic execute statement into a named tuple/dictionary. The idea is to be able to access each field via row['column_name']. I'm currently looking at creating a factory, but still a complete noob to Python and cx_Oracle.

cur.execute("""SELECT * FROM V$INSTANCE""")
#print 'Query Executed..'
row = cur.fetchone()
#print 'Cursor Loaded.'
#print result[0], result[1]
list = {}
list['host_name'] = row['host_name']
print row['host_name']

Thanks in advance for your assistance. Jan S.

Hello All, Managed through some weird keyword search to find a nice little article. Here is the solution.

def rows_to_dict_list(cursor):
    columns = [i[0] for i in cursor.description]
    return [dict(zip(columns, row)) for row in cursor]

cur = oracle.cursor()
cur.execute("""SELECT * FROM V$DATABASE""")
data = rows_to_dict_list(cur)
for row in data:
    print row['PROTECTION_LEVEL']

Thanks Jan S.

ps - I hate these new editors, Python requires indentations, but that does not mean I want to create an html table from it.
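The zip-based pattern in that solution is plain DB-API and does not need Oracle to demonstrate. Here is a self-contained sketch in which FakeCursor is an invented stand-in for a cx_Oracle cursor (a real cursor.description holds 7-item tuples whose first element is the column name):

```python
def rows_to_dict_list(cursor):
    # cursor.description gives one tuple per column; item 0 is the name.
    # Pair each row's values with those names to build dictionaries.
    columns = [d[0] for d in cursor.description]
    return [dict(zip(columns, row)) for row in cursor]

class FakeCursor:
    """Minimal stand-in for a DB-API cursor, for demonstration only."""
    description = [("HOST_NAME",), ("PROTECTION_LEVEL",)]
    _rows = [("db01", "MAXIMUM PERFORMANCE")]
    def __iter__(self):
        return iter(self._rows)

data = rows_to_dict_list(FakeCursor())
print(data[0]["HOST_NAME"])  # db01
```

With a real cx_Oracle cursor you would simply pass the cursor object in place of FakeCursor after executing the query.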
https://community.oracle.com/message/11335908
Contents In this course you will learn how to program self-driving cars! Specifically, in this unit, you will learn how to program a localizer, which is a directional aid used to assist machines in making informed navigation decisions. Over the past decade computer scientists have developed methods using sensors, radars and software to program vehicles with the ability to sense their own location, the locations of other vehicles and navigate a charted course. Stanford University Professor and director of the Stanford Artificial Intelligence Lab, Sebastian Thrun's autonomous cars, Stanley and Junior illustrate the progress that has been made in this field over the last decade. Additionally, the Google Driverless Car Project seeks to further research and development to make driverless cars a viable option for people everywhere. Driverless cars sound neat, huh? Want to program your own? Let's get started! Localization is the ability for a machine to locate itself in space. Consider a robot lost in space. Within its environment, how can the robot locate where it is in space? Rather than install a GPS device in our robot, we are going to write a program to implement localization. Our localization program will reduce the margin of error considerably, compared to a GPS device whose margin of error can be as high as ten meters. For our robot to successfully and accurately navigate itself through space, we are looking for a margin of error between two and ten centimeters. Imagine a robot resides in a one-dimensional world, so somewhere along a straight line, with no idea where it is in this world. For an example of such a world we can imagine a long, narrow hallway where it is only possible to move forward or backwards; sideways motion is impossible. Since our robot is completely clueless about its location, it believes that every point in this one- dimensional world is equally likely to be its current position. 
We can describe this mathematically by saying that the robot's probability function is uniform (the same) over the sample space (in this case, the robot's one-dimensional world). If we were to draw a graph of this probability function with probability on the vertical axis and location on the horizontal axis, we would draw a straight, level line. This line describes a uniform probability function, and it represents the state of maximum confusion. Assume there are three landmarks, which are three doors that all look alike and we can distinguish a door from a non-door area. If the robot senses it is next to a door, how does this affect our belief — or the probability that the robot is near a door? In the new function, there are three bumps aligned with the location of the doors. Since the robot has just sensed that it is near a door, it assigns these locations greater probability (indicated by the bumps in the graph) whereas, all of the other locations have decreased belief. This function represents another probability distribution, called the posterior belief where the function is defined after the robot's sense measurement has been taken. The posterior function is the best representation of the robot's current belief, where each bump represents the robot's evaluation of its position relative to a door. However, the possibility of making a bad measurement constantly looms over robotics, and over the course of this class we will see various ways to handle this problem. If the robot moves to the right a certain distance, we can shift the belief according to the motion. Notice that all of the bumps also shift to the right, as we would expect. What may come as a surprise, however, is that these bumps have not shifted perfectly. They have also flattened. This flattening is due to the uncertainty in robot motion: since the robot doesn't know exactly how far it has moved, it's knowledge has become less precise and so the bumps have become less sharp. 
When we shift and flatten these bumps, we are performing a convolution. Convolution is a mathematical operation that takes two functions and measures their overlap. To be more specific, it measures the amount of overlap as you slide one function over another. For example, if two functions have zero overlap, the value of their convolution will be equal to zero. If they overlap completely, their convolution will be equal to one. As we slide between these two extreme cases, the convolution will take on values between zero and one. The animations here and here should help clarify this concept. There is also a forum post explaining these in everyday analogies. In our convolution, the first function is the belief function (labeled "posterior" above), and the second is the function which describes the distance moved, which we will address in more depth later. The result of this convolution is the shifted and flattened belief function shown above. Now, assume that after the robot moves it senses itself right next to a door again so that the measurement is the same as before. Just like after our first measurement, the sensing of a door will increase our probability function by a certain factor everywhere where there is a door. So, should we get the same posterior belief that we had after our first measurement? No! Because unlike with our first measurement, when we were in a state of maximum uncertainty, this time we have some idea of our location prior to sensing. This prior information, together with the second sensing of a door, combine to give us a new probability distribution, as shown in the bottom graph below. In this graph we see a few minor bumps, but only one sharp peak. This peak corresponds to the second door. We have already explained this mathematically, but let's think intuitively about what happened. First, we saw the first door. This led us to believe that we were near some door, but we didn't know which. Then we moved and saw another door. 
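The shift-and-flatten step described above is, concretely, a discrete convolution of the belief with a small motion kernel. Here is a minimal Python sketch in the spirit of this course; the world is cyclic, and the 0.8/0.1/0.1 kernel is an illustrative choice of motion uncertainty rather than a value prescribed by the text:

```python
def move(p, step, p_exact=0.8, p_under=0.1, p_over=0.1):
    # Convolve the belief p with an inexact-motion kernel in a cyclic world.
    # Most probability mass shifts exactly `step` cells; some lands one cell
    # short or one cell long, which is what flattens the bumps.
    n = len(p)
    q = []
    for i in range(n):
        s = p_exact * p[(i - step) % n]
        s += p_under * p[(i - step + 1) % n]
        s += p_over * p[(i - step - 1) % n]
        q.append(s)
    return q

belief = [0, 1.0, 0, 0, 0]   # certain the robot is in cell 1 (zero-indexed)
print(move(belief, 2))       # mass centered on cell 3, slightly spread out
```

Repeated moves without sensing spread the distribution further and further, back toward the state of maximum confusion.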
We saw two doors in a row! So of course our probability function should have a major peak near the only location where we would expect to see two doors in a row. Once again, it's important to note that we still aren't certain of our location, but after two measurements we are more sure than we were after one or zero measurements. What do you think would happen as we made more and more measurements? Congratulations! You now understand both probability and localization! The type of localization that you have just learned is known as Monte Carlo localization, also called histogram filters. If a robot can be in one of five grid cells, labeled x~i~ for i = 1 ... 5, what is the probability for each cell? Modify the empty list: p = [] such that p becomes a uniform distribution over five grid cells, expressed as a vector of five probabilities. For example, the probability that the robot is in cell two can be written as: p[2] = 0.2 and, given that each cell has the same probability that the robot will be in it, you can also write the probability that the robot is in cell five as: p[5] = 0.2 For a refresher on Python, the links here will be helpful. A simple solution is to specify each element in the list p:

p = [0.2, 0.2, 0.2, 0.2, 0.2]
print p
[0.2, 0.2, 0.2, 0.2, 0.2]

Modify your code to create probability vectors p of arbitrary size n. For example, use n = 5 to help us verify that our solution matches the previous solutions. Use a for loop:

# define global variables p and n
p = []
n = 5
# use a for loop to visit each cell i in a world of length n
for i in range(n):
    # append n elements, each of size 1/n
    # remember to use floating point numbers
    p.append(1./n)
print p

If we forget to use floating point operations, Python will interpret our division as integer division.
Since 1 divided by 5 is 0 with remainder 1, our incorrect result would be [0, 0, 0, 0, 0]. Now we are able to make p a uniform probability distribution regardless of the number of available cells, specified by n. Now that we can establish the robot's initial belief about its location in the world, we want to update these beliefs given a measurement from the robot's sensors. Examine the measurement of the robot in a world with five cells, x1 through x5. Say we know that externally two of the cells (x2 and x3) are colored red, and the other three (x1, x4 and x5) are green. We also know that our robot senses that it resides in a red cell. How will this affect the robot's belief about its location in the world? Knowing that the robot senses itself in a red cell, let's update the belief vector such that we are more likely to reside in a red cell and less likely to reside in a green cell. To do so, come up with a simple rule to represent the probability that the robot is in a red or a green cell, based on the robot's measurement of 'red':

red cells: multiply by 0.6
green cells: multiply by 0.2

For now, we have chosen these somewhat arbitrarily, but since 0.6 is three times larger than 0.2, this should increase the probability of being in a red cell by something like a factor of three. Keep in mind that it is possible that our sensors are incorrect, so we do not want to multiply the green cell probability by zero. What is the probability that the robot is in a red cell and the probability that it's in a green cell? Find the probability for each cell by multiplying the previous belief (0.2 for all cells) by the new factor, whose value depends on the color of the cell. For red cells, 0.2 * 0.6 = 0.12. For green cells, 0.2 * 0.2 = 0.04. So are these our probabilities? No! These are not our probabilities, because they do not add up to one. Since we know we are in some cell, the individual probabilities must add up to one.
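The arithmetic above can be checked in a few lines of Python. This is a sketch using the same numbers as the text; the variable names are mine:

```python
prior = [0.2, 0.2, 0.2, 0.2, 0.2]
world = ['green', 'red', 'red', 'green', 'green']
pHit, pMiss = 0.6, 0.2

# multiply each prior by 0.6 for red cells and 0.2 for green cells
unnormalized = [p * (pHit if c == 'red' else pMiss)
                for p, c in zip(prior, world)]
s = sum(unnormalized)
print(unnormalized)  # red cells get 0.12, green cells get 0.04
print(s)             # 0.36, so this is not yet a distribution

# dividing by the sum restores a proper probability distribution
posterior = [q / s for q in unnormalized]
print(posterior)     # red cells 1/3 each, green cells 1/9 each
```

Note that the posterior sums to exactly one, which is what the normalization step guarantees.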
What we have now is called an unnormalized probability distribution. So, how do we normalize our distribution? First, find the current sum of the individual probabilities. Then, divide each entry by that sum to obtain a proper probability distribution. A probability distribution is a function that represents the probability of any variable, i, taking on a specific value. Compute the sum of the values to see if you have a proper probability distribution: the sum should be one. Since the sum of the cells is not one, our updated distribution is not a proper probability distribution. To obtain a probability distribution, divide each number in each box by the sum of the values, 0.36. This is a probability distribution written as: P(X~i~|Z) and read as: the posterior distribution of place X~i~ given measurement Z, Z being the measurement from the robot's sensors. This is an example of conditional probability. If you need more help on this subject, this video may help. Starting with our state of maximum confusion p = [0.2, 0.2, 0.2, 0.2, 0.2] we want to write a program that will multiply each entry by the appropriate factor: either pHit or pMiss. We can start off by not worrying about whether this distribution will sum to one (whether it is normalized). As we showed in the last few questions, we can easily normalize later. Write a piece of code that outputs p after multiplying pHit and pMiss at the corresponding places. One way to do this is to go through all five cases, 0-4, and multiply in the pMiss or pHit factor manually:

p = [0.2, 0.2, 0.2, 0.2, 0.2]
pHit = 0.6
pMiss = 0.2
p[0] = p[0] * pMiss
p[1] = p[1] * pHit
p[2] = p[2] * pHit
p[3] = p[3] * pMiss
p[4] = p[4] * pMiss
print p
[0.04000000000000001, 0.12, 0.12, 0.04000000000000001, 0.04000000000000001]

Note: This is not the most elegant solution. We know that this is not a valid distribution because it does not sum to one. Before we normalize this distribution we need to calculate the sum of the individual probabilities. Modify the program so that you get the sum of all the p's.
Use the Python sum function: sum(p), which returns 0.36. Let's make our code more elegant by introducing a variable, world, which specifies the color of each of the cells, red or green.

p = [0.2, 0.2, 0.2, 0.2, 0.2]
# introduce the variable world
world = ['green', 'red', 'red', 'green', 'green']
pHit = 0.6
pMiss = 0.2

Furthermore, let's say that the robot senses that it is in a red cell, so we define the measurement Z to be red:

p = [0.2, 0.2, 0.2, 0.2, 0.2]
world = ['green', 'red', 'red', 'green', 'green']
# define the measurement Z to be red
Z = 'red'
pHit = 0.6
pMiss = 0.2

Define a function to be the measurement update, called sense, which takes as input the initial distribution p and the measurement Z. Enable the function to output the non-normalized distribution q, in which q reflects the non-normalized product of the input probability (0.2) with pHit or pMiss, in accordance with whether the color in the corresponding world cell is red (hit) or green (miss).

# define a function sense
def sense(p, Z):
    return q

print sense(p, Z)

When you return q, you should expect to get the same vector answer as in question 6, but now we will compute it using a function. This function should work for any possible Z (red or green) and any valid p. One way to do this is as follows:

def sense(p, Z):
    q = []
    for i in range(len(p)):
        hit = (Z == world[i])
        q.append(p[i] * (hit * pHit + (1 - hit) * pMiss))
    return q

print sense(p, Z)

This code sets hit equal to one when the measurement is the same as the current entry in world and 0 otherwise. The next line appends to the list q an entry that is equal to p[i]*pHit when hit = 1 and p[i]*pMiss when hit = 0. This gives us our non-normalized distribution. A "binary flag" refers to the following piece of code: (hit * pHit + (1-hit) * pMiss). The name comes from the fact that when the term (hit) is 1, the term (1-hit) is zero, and vice-versa.
Modify this code so that it normalizes the output of the function sense, so that it adds up to one:

def sense(p, Z):
    q = []
    for i in range(len(p)):
        hit = (Z == world[i])
        q.append(p[i] * (hit * pHit + (1 - hit) * pMiss))
    # first, compute the sum of vector q, using the sum function
    s = sum(q)
    for i in range(len(p)):
        # normalize by going through all the elements in q and dividing by s
        q[i] = q[i] / s
    return q

print sense(p, Z)
[0.1111111111111111, 0.3333333333333332, 0.3333333333333332, 0.1111111111111111, 0.1111111111111111]

Try typing green into Z, your sensory measurement variable, and re-run your code to see if you get the correct result.

p = [0.2, 0.2, 0.2, 0.2, 0.2]
world = ['green', 'red', 'red', 'green', 'green']
# make 'green' your measurement Z and re-run your code
Z = 'green'

The result is approximately [0.27, 0.09, 0.09, 0.27, 0.27]. This output looks good. Now the green cells all have higher probabilities than the red cells. The "division by 44" referred to in the video comes from the normalization step where we divide by the sum. The sum in this case was 0.44. Modify the code in such a way that there are multiple measurements, by replacing Z with a measurements vector. Assume the robot is going to sense red, then green. Can you modify the code so that it updates the probability twice, and gives you the posterior distribution after both measurements are incorporated, so that any sequence of measurements, regardless of length, can be processed?

p = [0.2, 0.2, 0.2, 0.2, 0.2]
world = ['green', 'red', 'red', 'green', 'green']
# replace Z with a measurements vector; the robot senses red, then green
measurements = ['red', 'green']

# as often as there are measurements, use the following for loop
for k in range(len(measurements)):
    # grab the kth element, apply it to the current belief p,
    # and update that belief into itself
    p = sense(p, measurements[k])
print p

This code defines the function sense and then calls that function once for each measurement and updates the distribution. Sensing red and then green exactly cancels out, and we get back the uniform distribution:

[0.2, 0.2, 0.2, 0.2, 0.2]

Suppose there is a distribution
over cells such as the one shown. We know the robot moves to the right, and will assume the world is cyclic. So, when the robot reaches the right-most cell, it will circle around back to the first, or left-most, cell. Assuming the robot moves exactly one position to the right, what is the posterior probability distribution after the motion? Everything shifts to the right one cell. Define a function move with an input distribution p and a motion number U, where U is the number of grid cells moving to the right or to the left. Program a function that returns a new distribution q, where if U = 0, q is the same as p. Changing the probability for cell two from zero to one will allow us to see the effect of the motion:

p = [0, 1, 0, 0, 0]
world = ['green', 'red', 'red', 'green', 'green']
measurements = ['red', 'green']

def move(p, U):
    return q

for k in range(len(measurements)):
    p = sense(p, measurements[k])
print move(p, 1)

One solution:

def move(p, U):
    q = []  # start with an empty list
    for i in range(len(p)):  # go through all the elements in p
        # construct q element by element by accessing the
        # corresponding element of p, which is shifted by U
        q.append(p[(i - U) % len(p)])
    return q

for k in range(len(measurements)):
    p = sense(p, measurements[k])
print move(p, 1)

The (i-U) in the above function may be confusing at first. If you are moving to the right, you may be tempted to use a plus sign. Don't! Instead of thinking of it as p shifting to a cell in q, think of this function as q grabbing from a cell in p. For example, if we are shifting p one to the right, each cell in q has to grab its value from the cell in p that is one to the left. Assume the robot executes its action correctly with high probability (0.8); with small probability (0.1 each) it will overshoot or undershoot. Note that this is a more accurate model of real robot motion. Given a prior distribution, can you give the distribution after the motion? Use this formula: As expected, the probability distribution has shifted and spread out. Moving causes a loss of information.
This time, given that cells two and four have a value of 0.5, fill in the posterior distribution. We answer this in the same way as in the previous question, but this time we have to take into account that cell 5 can be arrived at in two ways: by cell 4 undershooting or cell 2 overshooting. Given a uniform distribution, fill in the distribution after motion, using the formula. We can answer this question in two ways. Modify the move procedure to accommodate the added probabilities:

p = [0, 1, 0, 0, 0]
world = ['green', 'red', 'red', 'green', 'green']
measurements = ['red', 'green']
pHit = 0.6
pMiss = 0.2
# add exact probability
pExact = 0.8
# add overshoot and undershoot probabilities
pOvershoot = 0.1
pUndershoot = 0.1

def move(p, U):
    q = []
    for i in range(len(p)):
        q.append(p[(i - U) % len(p)])
    return q

The solution introduces an auxiliary variable s:

def move(p, U):
    q = []
    for i in range(len(p)):
        s = pExact * p[(i - U) % len(p)]
        s = s + pOvershoot * p[(i - U - 1) % len(p)]
        s = s + pUndershoot * p[(i - U + 1) % len(p)]
        q.append(s)
    return q

print move(p, 1)
[0.0, 0.1, 0.8, 0.1, 0.0]

This function accommodates the possibility of undershooting or overshooting our intended move destination by going through each cell in p and appropriately distributing its probability over three cells in q (the cell it was intended to go to, a distance U away, and the overshoot and undershoot cells). What happens if the robot never senses but executes the motion to the right forever? What will be the limit, or stationary, distribution in the end? A distribution is stationary when it doesn't change over time. In the case of the moving robot, this corresponds to the probability distribution before a move being the same as the distribution after the move. The fact that the result is a uniform distribution should not be surprising. Remember Inexact Motion 3, where we saw that when we act on the uniform distribution with the function move, we get back the uniform distribution.
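The inexact move can be exercised on both quiz cases above. The sketch below (the name move_inexact is mine) applies it to the prior with 0.5 in cells two and four with a commanded move of two cells, showing cell five collecting probability two ways, and then checks that the uniform distribution is stationary:

```python
pExact, pOvershoot, pUndershoot = 0.8, 0.1, 0.1

def move_inexact(p, u):
    # each destination cell collects probability from three sources:
    # the exact move, the undershoot, and the overshoot (cyclic world)
    n = len(p)
    q = []
    for i in range(n):
        s = pExact * p[(i - u) % n]
        s += pOvershoot * p[(i - u - 1) % n]
        s += pUndershoot * p[(i - u + 1) % n]
        q.append(s)
    return q

# quiz case: 0.5 in cells two and four, move of two cells; cell five
# receives mass two ways (cell 4 undershoots, cell 2 overshoots)
posterior = move_inexact([0.0, 0.5, 0.0, 0.5, 0.0], 2)
print(posterior)  # cell five ends up with 0.05 + 0.05 = 0.1

# stationary check: moving the uniform distribution leaves it
# unchanged (up to floating point noise)
print(move_inexact([0.2] * 5, 1))
```

This makes the stationary-distribution argument tangible: every cell both gives away and receives exactly 0.8 + 0.1 + 0.1 = 1.0 of a uniform share, so nothing changes.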
Moving doesn't affect the uniform distribution, and this is exactly the definition of a stationary state! We can also solve for the probability of each cell by realizing that each cell is the possible destination of three other cells. This is the method used in the lecture and shown below. Of course, regardless of our method, we get the same answer: the uniform distribution. Write code that makes the robot move twice, starting with the initial distribution:

p = [0, 1, 0, 0, 0]
p = move(p, 1)
p = move(p, 1)
print p
[0.01, 0.01, 0.16, 0.66, 0.16]

The result is a vector where 0.66 is the largest value, not 0.8 anymore. This is expected: two moves have flattened and broadened our distribution. Write a piece of code that moves 1,000 steps and gives you the final distribution.

# write a loop for 1,000 steps
for k in range(1000):
    p = move(p, 1)
print p
# the final distribution is 0.2 in each cell, as expected

After 1,000 moves, we have lost essentially all information about the robot's location. We need to be continuously sensing if we want to know the robot's location. Localization, specifically the Monte Carlo localization method that we are using here, is nothing but the repetition of sensing and moving. There is an initial belief that is fed into the loop; if you sense first, it enters the cycle at the sense step. Localization cycles through move and sense. Every time the robot moves it loses information, and every time it senses it gains information. Entropy: a measure of the information that the distribution has. Entropy and information entropy are both fascinating topics. Feel free to read more about them! Given the motions one and one, which means the robot moves right and then right again: motions = [1, 1] Compute the posterior distribution if the robot first senses red, then moves right by one, then senses green, then moves right again.
Start with a uniform prior distribution:

p = [0.2, 0.2, 0.2, 0.2, 0.2]

for k in range(len(measurements)):
    p = sense(p, measurements[k])
    p = move(p, motions[k])
print p

The robot most likely started in grid cell three, which is the right-most of the two red cells. Can you modify the previous question so that the robot senses red twice? What do you think will be the most likely cell the robot will reside in? How will this probability compare to that of the previous question? The most likely cell is cell four (not three! Don't forget we move after each sense). This program is the essence of the Monte Carlo localization approach used by Google's self-driving car. Localization involves a robot continuously updating its belief about its location over all possible locations. Stated mathematically, we could say the robot is constantly updating its probability distribution over the sample space. For a real self-driving car, this means that all locations on the road are assigned a probability. If the AI is doing its job properly, this probability distribution should have two qualities: Our Monte Carlo localization procedure can be written as a series of steps: Multiply this distribution by the results of our sense measurement. Normalize the resulting distribution. Keep in mind that the sense step always increased our knowledge and the move step always decreased our knowledge about our location. Extra Information: You may be curious about how we can use individual cells to represent a road. After all, a road isn't neatly divided into cells for us to jump between: mathematically, we would say the road is not discrete; it is continuous. Initially, this seems like a problem. Fortunately, whenever we have a situation where we don't need exact precision (and remember that we only need 2-10 cm of precision for our car) we can chop a continuous distribution into pieces, and we can make those pieces as small as we need.
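Putting all the pieces from this unit together, the complete sense-and-move cycle can be written as one self-contained program. This is a sketch assembled from the functions developed above, run on the red-then-green, move-right-twice scenario:

```python
p = [0.2, 0.2, 0.2, 0.2, 0.2]
world = ['green', 'red', 'red', 'green', 'green']
measurements = ['red', 'green']
motions = [1, 1]
pHit, pMiss = 0.6, 0.2
pExact, pOvershoot, pUndershoot = 0.8, 0.1, 0.1

def sense(p, Z):
    # measurement update: weight by pHit/pMiss, then normalize
    q = [p[i] * (pHit if world[i] == Z else pMiss) for i in range(len(p))]
    s = sum(q)
    return [x / s for x in q]

def move(p, U):
    # motion update: inexact cyclic shift
    q = []
    for i in range(len(p)):
        s = pExact * p[(i - U) % len(p)]
        s += pOvershoot * p[(i - U - 1) % len(p)]
        s += pUndershoot * p[(i - U + 1) % len(p)]
        q.append(s)
    return q

for k in range(len(measurements)):
    p = sense(p, measurements[k])
    p = move(p, motions[k])

# the peak lands in the last cell: consistent with having started at
# the second red cell (cell three) and then moved right twice
print(p)
```

Each pass through the loop is one cycle of the localization loop described above: sense (gain information), then move (lose information).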
In the next unit we will discuss Kalman filters, which use continuous probability distributions to describe beliefs. Remember that a probability of zero means an event is impossible, while a probability of one means it is certain. Therefore, we know all probabilities must satisfy 0 ≤ P(x) ≤ 1, and the probabilities of all outcomes must sum to one. The answer to the first quiz is 0.8. If the probability for x~1~ is P(x~1~) = 0, what is the probability of x~2~, P(x~2~)? It must be 1. In both of these problems we used the fact that the sum of all probabilities must add to one. Probability distributions must be normalized. Given the distribution shown, fill in the value of the fifth grid cell. Bayes' Rule is a tool for updating a probability distribution after making a measurement. The equation for Bayes' Rule looks like this: The left hand side of this equation should be read as "the probability of x after observing Z." In probability, we often want to update our probability distribution after making a measurement Z. Sometimes, this isn't easy to do directly. Bayes' Rule tells us that this probability (the left hand side of the equation) is equal to another probability (the right side of the equation), which is often easier to calculate. In those situations, we use Bayes' Rule to rephrase the problem in a way that is solvable. To use Bayes' Rule, we first calculate the non-normalized probability distribution, which is given by: P(Z|x)P(x) and then divide that by the total probability of making measurement Z, which we call our normalizer: P(Z) This may seem a little confusing at first. If you are still a little unsure about Bayes' Rule, continue on to the next example to see how we put it into practice.
Suppose there exists a certain kind of cancer that is very rare: P(C) = 0.001. Suppose we have a test that is pretty good at identifying cancer in patients with the disease (it does so with 0.8 probability), but occasionally (with probability 0.1) misdiagnoses cancer in a healthy patient. Using this information, can you compute the probability of cancer, given that you receive a positive test? The answer is 0.0079: there is a 0.79 in 100 chance that, despite the positive test result, you have cancer. You can apply the same mechanics as before. The non-normalized result of Bayes' Rule is the product of the prior probability, P(C), multiplied by the probability of a positive test given cancer, P(POS|C). Plug in the probabilities: 0.001 * 0.8 = 0.0008. The non-normalized probability for the opposite event, given a positive test, is: P(¬C|POS) = 0.999 * 0.1 = 0.0999. The normalizer is the sum of the non-normalized probability of having cancer after a positive test and of not having cancer after a positive test: 0.0008 + 0.0999 = 0.1007. Dividing the non-normalized probability, 0.0008, by the normalizer, 0.1007, gives us the answer: 0.0079. Use a time index to indicate after and before motion: compute this by looking at all the grid cells the robot could have come from one time step earlier (t-1), and index those cells with the letter j, which in our example ranges from 1 to 5. For each of those cells, look at the prior probability, and multiply it by the probability of moving from cell x~j~ to cell x~i~, P(x~i~|x~j~). Summing over j gives us the probability that we will be in cell x~i~ at time t. This looks similar to the usual way of stating the theorem of total probability, where A represents a place i at time t, and the B's are all possible prior locations. This is often called the Theorem of Total Probability. Coin Toss: The probability of tossing a fair coin and getting heads (H) or tails (T) is: P(T) = P(H) = 0.5. If the coin comes up tails, you accept, but if it comes up heads, you flip it again and then accept the result.
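The cancer-test arithmetic above can be verified in a few lines. This sketch uses the same numbers as the text; the variable names are mine:

```python
# Bayes' Rule for the rare-cancer test
p_c = 0.001           # prior probability of cancer, P(C)
p_pos_given_c = 0.8   # probability of a positive test given cancer
p_pos_given_nc = 0.1  # probability of a false positive

# non-normalized probabilities for each hypothesis
joint_c = p_c * p_pos_given_c          # 0.0008
joint_nc = (1 - p_c) * p_pos_given_nc  # 0.0999

# the normalizer is the total probability of a positive test, P(POS)
normalizer = joint_c + joint_nc        # 0.1007

p_c_given_pos = joint_c / normalizer
print(round(p_c_given_pos, 4))  # 0.0079
```

The striking conclusion survives the check: even after a positive test, the chance of actually having this rare cancer is under one percent, because the disease is so rare that false positives dominate.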
What is the probability that the final result is heads? The probability of throwing heads in step two, P(H^2^), depends upon having thrown heads in step one, which we can write as a condition: P(H^2^|H^1^)P(H^1^). To this, add the probability of throwing heads in step two given tails in step one, P(H^2^|T^1^), multiplied by the probability of throwing tails in step one, P(T^1^). Written as: P(H^2^) = P(H^2^|H^1^)P(H^1^) + P(H^2^|T^1^)P(T^1^). The last term of this equation, P(H^2^|T^1^)P(T^1^), is the product of the probability of heads given a tail on the first throw with the probability of tails on the first throw. Since we are told that we stop flipping when we get a tails on the first throw, P(H^2^|T^1^) = 0 and we can ignore this term. Our equation becomes: P(H^2^) = P(H^2^|H^1^)P(H^1^), which is 0.5 * 0.5 = 0.25. Note that the superscripts next to H and T in this problem indicate whether we are talking about the first or second toss. They are not exponents. Note: We can also approach this problem as a probability tree, where the probability of moving down any branch is 1/2. We can see that the only way to arrive at heads is to proceed down the heads path twice. Say you have two coins: one is fair and the other is loaded. The probability of flipping heads for each coin is shown below. There is a 50% chance of flipping either the fair or the loaded coin. If you throw heads, what is the probability that the coin you flipped is fair? The question, what is the probability of the fair coin given that you observe H, can be written as a conditional probability. Use Bayes' Rule because you are making observations. The non-normalized probability, which we will represent with a lowercase p with a bar over it, of getting the fair coin is the product P(H|fair)P(fair). The non-normalized probability of NOT getting the fair coin, but getting the loaded coin, is P(H|loaded)P(loaded). When you find the sum of these, the result is the normalizer. Now, we divide our non-normalized probability by this sum to obtain our answer.
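The fair-versus-loaded computation follows the same Bayes' Rule pattern. The loaded coin's heads probability appears in a figure not reproduced here, so the value 0.9 below is a stand-in assumption for illustration only:

```python
p_fair = 0.5      # 50% chance of picking either coin
p_h_fair = 0.5    # fair coin shows heads half the time
p_h_loaded = 0.9  # ASSUMED value; the lesson's figure gives the real number

# non-normalized probabilities (the "p-bar" quantities in the text)
bar_p_fair = p_fair * p_h_fair               # P(H|fair)P(fair)
bar_p_loaded = (1 - p_fair) * p_h_loaded     # P(H|loaded)P(loaded)

# normalize by the total probability of observing heads
p_fair_given_h = bar_p_fair / (bar_p_fair + bar_p_loaded)
print(p_fair_given_h)
```

With the assumed 0.9, observing heads makes the fair coin less likely than the loaded one (about 0.36), which matches the intuition that heads is evidence for the coin that produces heads more often.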
This is what you've learned in this class: You are now able to make a robot localize, and you have an intuitive understanding of probabilistic methods called Filters. Next class we will learn about:
https://www.udacity.com/wiki/cs373/unit-1
Wolfgang Meier's open source eXist database is probably the most popular native XML database available today (which is not at all the same thing as saying it's the best). eXist is written in the Java™ programming language and runs on most major platforms. Programs interface with eXist through its bundled HTTP server. SOAP, XML-RPC, and RESTful interfaces are all provided, and through these you can submit XPath, XQuery, and XUpdate requests to the core server. Command-line and GUI clients are also available. eXist requires Java 1.4 or later; otherwise, all necessary dependencies are bundled (a nice touch). In fact, installing eXist is shockingly easy for a server-side open source project. A lot of other projects, closed and open source, might learn from it. The installer is built with IzPack. The distribution is a single JAR archive. To install eXist, just run the archive like so: The installer brings up a GUI that asks you where you want to install the eXist directory. I put it in /home/elharo/eXist. The eXist/bin directory contains the necessary startup scripts. To launch the server, execute startup.sh (UNIX®) or startup.bat (Microsoft® Windows®): This command runs the server on port 8080 and begins serving the files in /eXist. You can connect to eXist from any Web browser. For instance, I installed eXist on eliza.elharo.com, so I can connect to it at the following URL: (Don't try this at home -- my firewall will block you. You'll have to connect to your own server.) Initially, you'll see the eXist documentation, as well as some samples that you can try out. eXist isn't really a Web server; it just uses one as a convenient interface to the underlying database server. The package also includes independent GUI clients and programming APIs that you can use to perform various operations. You can even browse it from Microsoft Windows Explorer using WebDAV. For initial experimentation, it's probably easiest to use the simple GUI client. 
To launch the client, execute client.sh (UNIX) or client.bat (Windows) from the eXist/bin directory: As you can see in Figure 1, by default the client tries to connect to an eXist database running on the localhost on port 8080. You can specify a different host and port in the URL text field. The same window also asks you for a username and a password. By default, the username is admin; you can leave the password field blank. (Haven't software companies learned by now to not ship servers with default usernames and passwords?) Figure 1. Connect to eXist After you've logged in, the client displays the GUI shown in Figure 2. Initially, eXist comes with one collection, called system, in which the user information is stored. You want to stay out of this collection for now. Instead, create a new collection for your documents by selecting File > New Collection. I created a collection named books. To open the collection, double-click it in the GUI. After you open a collection, to upload documents, click the icon that looks like a bent piece of paper with a plus sign next to it. Figure 2. The eXist admin client I first uploaded a couple of small documents, and the database accepted them without complaint. I then tried to upload the complete text of my book Processing XML with Java. This operation failed silently, with no error message. Uploading through the Web interface instead of the GUI client also failed. However, that interface showed me a stack trace to help debug the problem. It turned out that eXist didn't resolve the relative URL used in the document type declaration. To load documents with external DTD subsets, you must manually install the DTDs on the server's filesystem and edit a catalog file to tell the database where they are; then, you have to restart the database server to make it reload the catalog file. This is a major hassle, although you normally only need to install each different DTD once. 
eXist works best if your documents either don't use DTDs or use only a small number of infrequently changed DTDs. eXist supports both XPath and XQuery (see Resources for more information on both). eXist uses the XQuery syntax from the November 2003 XQuery working draft. Work is ongoing to update the database to use the syntax from more recent working drafts. The differences between the drafts for basic For-Let-Where-Order-Return (FLWOR) queries aren't large. To enter queries against a collection, click the little binoculars icon in the GUI client to bring up the window shown in Figure 3.

Figure 3. eXist query window

Annoyingly, copy and paste functions don't work in this interface, so you have to manually type in all queries. Of course, this program is really just for testing and experiments -- you wouldn't use it for heavy-duty interaction with the database any more than you'd type raw SQL into an Oracle database. After you have a fairly good idea of the queries that you want to run, you can write programs that generate and submit the queries algorithmically, as I discuss next.

Write programs that interface with eXist

IBM®, Oracle, and the other members of the JSR 225 expert group are currently working to define an API that will do for XQuery what JDBC does for SQL. However, until this process is finished and the API is implemented in eXist, it will be necessary to use eXist's native API. You can access this API through SOAP, XML-RPC, WebDAV, or HTTP interfaces. Any API that supports one of these protocols can communicate with eXist. For instance, you can use JAX-RPC to talk to eXist over SOAP or java.net to talk to it over HTTP. The RESTful HTTP interface is the simplest and most broadly available of the options. For example, suppose you want to find all para elements in the books collection that contain the word "XSLT." The XQuery in Listing 1 locates all such elements.

Listing 1.
A sample XQuery

You GET this query from the following URL: Here, eliza.elharo.com is the network host on which the database is running; 8080 is the port; /exist/servlet/db identifies the Web app, the servlet, and the database, respectively; and books is the specific collection you're querying in that database. eXist allows nested collections. For instance, the books collection might contain separate fiction and nonfiction collections, which are available at the following URLs: For the purposes of this article, however, you want to query all the books, both fiction and nonfiction. The XQuery is sent as the value of the _query field in the URL's query string (the part of the URL after a question mark). It must be percent-encoded in the usual way (for example, spaces become %20, the double quotation mark becomes %22, and so forth). Thus, you can send the query in Listing 1 to the server by GETting the following URL: The server sends back the query results wrapped in an exist:result element like the one in Listing 2.

Listing 2. Results of sample query

Other optional query string variables control whether the results are pretty printed, what elements wrap the results, how many matches return (by default, eXist only returns the first 10 hits), and so forth. Because this is all done with HTTP GET, you can make this query simply by typing the appropriate URL into a Web browser. Of course, any software library that speaks HTTP can also send this query and get back the result as a stream of XML. If you were to write this query in the Java language, you might use the URLEncoder class to encode the query string, the URL class to submit it, and XOM to process the results, as shown in Listing 3.

Listing 3. Query eXist in Java code

An HTTP interface like this one is completely language independent. You can easily reproduce the functionality in Listing 3 in Perl, Python, C, C#, or any other language that has a simple HTTP library and some XML support.
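As the article notes, any language with an HTTP library can issue the same query. The sketch below builds the RESTful query URL in Python using only the standard library; the host, port, collection name, and the exact XQuery text are illustrative stand-ins modeled on the article's example, and the `_query` parameter is the one the article describes:

```python
import urllib.parse

# An XQuery in the spirit of Listing 1: para elements mentioning "XSLT"
# (illustrative text; your actual query will differ)
query = 'for $p in //para where contains($p, "XSLT") return $p'

# percent-encode the query and build the RESTful URL for the books
# collection, following the URL layout described in the article
params = urllib.parse.urlencode({'_query': query})
url = 'http://eliza.elharo.com:8080/exist/servlet/db/books?' + params
print(url)

# Against a running eXist server, you would then simply GET this URL:
#   import urllib.request
#   xml = urllib.request.urlopen(url).read()
```

Note that urlencode handles the percent-encoding automatically, including the %22 for double quotation marks that the article calls out.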
One of the most effective ways to query such a database is to write an XSLT stylesheet that formats the results. XQuery allows you to get information out of the database. But what about putting data in? This is even easier. Instead of sending a GET request, you send a PUT request. The URL where you PUT the data is the URL where the document will be placed inside the database; the body of the request is the document to store. For example, the Java code in Listing 4 grabs the RSS feed from the Cafe con Leche Web site and puts it in the syndication collection with the name 20050401.

Listing 4. Insert documents into eXist with Java code

PUTting new documents into the database typically requires authentication. eXist's REST interface supports HTTP Basic authentication. The Java language supports this through the java.net.Authenticator class. Complete details would take this discussion a little too far afield; but in brief, you have to subclass Authenticator with a class that knows (or knows how to ask for) the username and password for the database, and then install an instance of this subclass as the system default authenticator. Need to remove a document from the collection? Just send a DELETE request to the appropriate URL, as shown in Listing 5.

Listing 5. Delete a document in eXist

Again, in practice you also need to supply a username and a password via an Authenticator object. The final and trickiest operation is to modify information in the database. For example, suppose I change my e-mail address from elharo@metalab.unc.edu to elharo@macfaq.com. Therefore, I want to change all occurrences of the old address in the database to the new one. XQuery doesn't provide this capability, so eXist uses XUpdate instead. The XUpdate query in Listing 6 makes the change.

Listing 6. Using XUpdate to update documents in eXist

Because this operation changes a resource, you need to use the POST method to send it to the server. You post to the URL of the document you want to change and give the XUpdate instructions in the body of the request.
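The PUT and DELETE operations described above are plain HTTP verbs, so they are just as easy from Python as from Java. This sketch prepares both requests with Basic authentication; the URL, document body, and admin credentials are illustrative stand-ins (they mirror the article's examples, not a real server), and the network calls are left commented out:

```python
import urllib.request

# Hypothetical document and target URL; adjust for your own server.
doc = b'<?xml version="1.0"?><rss version="2.0"><channel/></rss>'
url = 'http://eliza.elharo.com:8080/exist/servlet/db/syndication/20050401'

# PUT stores the request body at the URL; DELETE removes the document.
# Both typically require HTTP Basic authentication, supplied here
# through an opener instead of Java's Authenticator class.
password_mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
password_mgr.add_password(None, url, 'admin', '')  # example credentials
auth_handler = urllib.request.HTTPBasicAuthHandler(password_mgr)
opener = urllib.request.build_opener(auth_handler)

put = urllib.request.Request(url, data=doc, method='PUT')
put.add_header('Content-Type', 'application/xml')
# opener.open(put)  # uncomment against a running eXist server

delete = urllib.request.Request(url, method='DELETE')
# opener.open(delete)
```

The design mirrors the REST principle the article emphasizes: the URL names the resource, and the HTTP verb names the operation, so no eXist-specific client library is required.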
I've just hit the highlights of the REST interface. It also includes instructions to create and drop collections, to specify how the query results are formatted, and to supply user credentials. Nor is HTTP the only interface to eXist. eXist also has native APIs for Perl, PHP, and the Java language, along with generic WebDAV, SOAP, and XML-RPC interfaces. Broad API support is one of the particular strengths of eXist.

Performance, robustness, and stability

eXist is not the fastest database on the planet. You can easily use a stopwatch to measure the time it takes to load a medium-sized document, even on fast hardware connecting to a local database. Query speed is of similar quality. Complex queries over moderately large collections give you enough time to brew a cup of coffee. To improve both document loading and query times, you can give eXist more memory. The default configuration that ships with eXist specifies settings that are appropriate for machines with about 256 MB of memory. If you have a beefier server, you can modify the conf.xml file to allocate more memory. To tune the database, you can add indexes. By default, eXist indexes element and attribute nodes as well as the full text of the document. You can specify additional range indexes for particular node sets that are likely to occur in your queries. For instance, if you know that you are likely to do a lot of queries that look at para elements, you can define an index on //para. This tells eXist to precompute and store the values of all the para elements in the document because they're likely to be needed later. Still, eXist is mostly suitable for small collections where speed isn't critical. If you have gigabyte-sized documents or you process thousands of transactions per hour, plan to look elsewhere. Similarly, I'm not sure I'm ready to trust my critical data to eXist. I haven't personally experienced any database corruption.
However, the developers are still finding and fixing database corruption problems more frequently than I'm comfortable with. On the plus side, eXist does make it quite easy to back up the database. Very importantly, the backup format saves the contents in real textual XML, not some proprietary binary format; this means that in a worst-case scenario, you can fix problems with a text editor. If you make frequent archival backups, eXist is unlikely to do anything that makes the data irretrievable. Feature-wise, eXist suffices for basic needs and includes some unexpected lagniappes such as XInclude support. Transactions, rollover, fallback, and similar enterprise-level features are all missing (transactions are on the "to do" list); but many applications don't need such advanced functionality. One of my biggest concerns about eXist (or any other XQuery-based native XML database, for that matter) is the stability of the underlying standards and APIs. This article is based on the latest beta of eXist, from November 2004, which is based on the XQuery drafts from November 2003. The version of eXist now in CVS has made quite a few backwards-incompatible changes that are not yet fully documented. More changes will come in the future, both in eXist and in the W3C specs it depends on. Do not put eXist into production unless you're comfortable with frequent updates that will require you to retest and rewrite some of your own code. The more data you have, the more important it becomes to use some sort of database system to manage it. If the data is XML, a solid native XML database is an obvious choice. Is eXist such a solid system? Sadly, the answer is no. eXist is an interesting research project that might develop into a useful tool in a year or two. However, it's hard to recommend in its current state. Documentation is incomplete and often misleading. Error messages are nonexistent.
(Note to programmers everywhere: Exception stack traces don't count as decent error messages -- and sometimes eXist doesn't even give you those.) GUIs violate user interface standards at every turn. Basic features like copy and paste are omitted. During the very basic testing I did for this article, I encountered multiple bugs. eXist isn't finished yet. It's currently classified as a beta. Many of the problems I encountered might be fixed before version 1.0 ships, but that won't happen tomorrow. I know some people use eXist for real work today, and that worries me. Either they're very lucky, or they carefully craft their queries and documents to avoid eXist's bugs. If you're interested in contributing to a worthwhile open source project, eXist is a good candidate. However, the same incompleteness that makes it a fun project for programmers with time on their hands makes it unsuitable for production systems.

- Download eXist from SourceForge.
- eXist sits on top of the Cocoon application server from the XML Apache Project and bundles the Jetty servlet engine. However, it can be integrated into other servlet containers, such as the Apache Jakarta Project's Tomcat.
- The eXist installer was built with Julien Ponge's open source IzPack.
- Read Elliotte Rusty Harold's book Processing XML with Java (Addison Wesley Professional, 2002) online or buy it on paper.
- Explore Java Network Programming (O'Reilly Media, 2004) and its explanation of how the URL and URLConnection classes talk to HTTP servers such as eXist's REST interface.
- Read Ronald Bourret's solid introduction to using XML with various types of database systems.
- Check out how IBM and Oracle lead the expert group for Java Specification Request 225, XQuery API for Java. Currently, an early draft review of this specification is available.
- Printed out, the W3C's XQuery specs run to hundreds of pages. The author recommends that you start with the XML Query Use Cases.
- Learn more about the XML Path Language (XPath) by reading the W3C Recommendation.
- Read the XUpdate specification.
- The author's server Eliza is named after Eliza de la Zeur in Neal Stephenson's Baroque Cycle.
- Read the previous installments of Elliotte Rusty Harold's Managing XML data column here on developerWorks.
- Find hundreds more XML resources on the developerWorks XML zone.
- Learn how you can become an IBM Certified Developer in XML and related technologies.

You can contact the author, Elliotte Rusty Harold, at elharo@metalab.unc.edu.
http://www.ibm.com/developerworks/xml/library/x-mxd5/index.html
If you are a non-programmer and you want to learn a programming language, then Python is the best choice for you. And if you already program and want to learn a new language, you will find Python interesting.

Why you should learn Python | Benefits of Python

1.) Python is easy to learn: The Python programming language is easy to learn. If you are coming from a Java, C, or C++ background, you must write quite a few lines of code before printing hello world. In Python, you just write:

print("Hello World")

It will print hello world. You don't need to write classes, the way Java requires. You don't need to include anything. It is as simple as that. You don't need to worry about curly braces or semicolons; Python does not use curly braces (except in dictionaries). When you write a function, you don't need to use curly braces like in other programming languages:

def hello_function():
    print("Hello from a function")

hello_function()

In Python, you do have to worry about indentation: all the code in a block must be indented, otherwise it will throw an error.

2.) Portable and extensible: Python code is portable; you can use Python code on any operating system without any error. If you have written your code on macOS, you can run that code on Windows or Linux. You don't have to worry about anything. Python also allows you to integrate with Java or .NET, and it can invoke C or C++ libraries. How cool is that? This cool feature excites other developers to learn Python.

3.) Python is used in the Artificial Intelligence field: Artificial Intelligence is the future of mankind. Python is the main language used in Artificial Intelligence. Fields like Machine Learning and Deep Learning are most popular nowadays, and almost everyone uses Python for this kind of technology. TensorFlow, PyTorch, scikit-learn, and many more libraries are available for Deep Learning and Machine Learning.

4.) Python pays well: Python jobs are high-paying jobs across the globe.
Fields like data science and deep learning pay huge amounts of money to developers. In the US, Python developers are the 2nd highest-salaried people, with an average of $103,000 per year. That is a hell of a lot of money.

5.) Big companies use Python: Companies like Google, Netflix, Facebook, Instagram, Mozilla, and Dropbox use Python, and they pay a lot of money. Most of them use Python for Machine Learning and Deep Learning. Netflix uses a recommendation system to recommend movies to you; more than 75% of the movies/web series you watch on Netflix are recommended by Python.

6.) Python is used in web development: Django is an awesome framework for web development. If you want to make a web application quickly, then Django is for you. Django is based on the Model, View, Template architecture. Its primary goal is to ease the creation of complex, database-driven websites. The framework emphasizes reusability and "pluggability" of components, less code, low coupling, rapid development, and the principle of don't repeat yourself. It also gives you a default admin panel.

7.) Python has an amazing ecosystem: The Python Package Index (PyPI) hosts thousands of third-party modules for Python. Both Python's standard library and the community-contributed modules allow for endless possibilities. Python developers keep updating packages all the time. Even big companies do: Google made TensorFlow, which it maintains.

8.) Python is used everywhere: You can make websites, software, GUI applications, Android applications, games, and much more. Some of the fields where Python is used:

1.) Data Science
2.) Scientific Computing and Mathematical Computing
3.) Finance and Trading
4.) Web Development
5.) Gaming
6.) GUI applications
7.) Security and Penetration Testing
8.) Scripting
9.) GIS software
10.) Microcontrollers

Conclusion: Why you should learn Python

This is my list of why you should learn Python. I am also a Python developer. I hope you liked my list.
If you find any error, don't forget to mail me. Thank you for reading.
http://www.geekyshadow.com/2020/07/07/top-10-reasons-learn-python/
XSL: Adding namespace prefix to elements read from external file. Discussion in 'XML', started by johanneskrueger@gmx.net.
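The thread bodies were not preserved here, but the problem in the title (elements read from an external file lacking a namespace prefix) can also be sketched outside XSLT, for example with Python's xml.etree. The namespace URI, the "ext" prefix, and the sample document below are all made up for illustration:

```python
import xml.etree.ElementTree as ET

NS = "http://example.com/ns"      # assumed target namespace
ET.register_namespace("ext", NS)  # prefix to use when serializing

# Stand-in for elements parsed from an external file, with no namespace.
doc = ET.fromstring("<root><item>a</item><item>b</item></root>")

# Move every element into the namespace; ElementTree encodes the
# namespace in the tag itself as {uri}localname.
for el in doc.iter():
    if not el.tag.startswith("{"):
        el.tag = "{%s}%s" % (NS, el.tag)

print(ET.tostring(doc, encoding="unicode"))
```

On serialization, ElementTree emits the registered prefix and declares it once on the root element.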
http://www.thecodingforums.com/threads/xsl-adding-namespace-prefix-to-elements-read-from-external-file.392814/
A Python version. I did not look at Wirth's best solution; I tried to solve it without looking at his method. This was my first attempt and it worked immediately. A more elegant version, that uses recursion.

[…] today's exercise, our goal is to write an algorithm that, given an alphabet and a length, generates all […]

My Haskell solution (see for a version with comments):

My Haskell solution. I'm not sure if it has optimum running time. Running "generate n" generates all possible sequences for a given N. If you run 'take 1 (generate n)', the laziness of Haskell should allow a fairly fast generation of a new solution.

split xs n = (take n xs, take n (drop n xs))

-- given ys is legal, checks that (y:ys) is also legal
check xs = foldl (\a (ys, zs) -> if ys == zs then True else a) False
                 (map (split xs) [1..((length xs) `div` 2)])

alphabet = [1,2,3]

generate 0 = [[]]
generate n = foldl (\ys zs -> if check zs then ys else zs:ys) []
                   (concat (map (\x -> map (\y -> y:x) alphabet) (generate (n-1))))

lists _ 0 = [[]]
lists set n = [ l' | l <- lists set (n-1), x <- set, let l' = l ++ [x], ok l' ]
  where ok l' = and [ a /= b | j <- [1 .. n `quot` 2],
                      let (a,b) = splitAt j $ drop (n-j-j) l' ]

main = print $ lists "123" 10

Let the regular expression engine do the hard work...

#!perl
use strict;
use warnings;

# $> perl solve.pl 5 3
# Found 198 solutions
my @asolutions = generate(
    $ARGV[0],         # N (first cmd line argument)
    [1 .. $ARGV[1]]   # Alphabet (number generated from second cmd line argument)
);
printf("Found %d solutions\n", scalar @asolutions);

sub generate {
    my ( $target_length, $alphabet, $head ) = @_;
    # Base case: our head is the right length
    return $head if (length ($head//'')) == $target_length;
    # General case
    return map  { generate( $target_length, $alphabet, $_ ) }
           grep { ! /(.{2})\1/ }
           map  { ($head // "") . $_ } @$alphabet;
}

If you want it to exclude single-item adjacent sequences, you can change the {2} to a {1}.

If you harden the problem a bit and try to avoid repeated permutations you get to the abelian-square-free problem: an abelian square means a non-empty word uv, where u and v are permutations of each other. For example, abccba contains a repeated permutation of a, b, and c.

Another Python solution. It uses the fact that you only have to check that you're not creating an adjacent repeated sequence with each new addition.

import sys

N = int(sys.argv[1])

def next(seq=(), seqs=set(),
         pair_idxs={(1,2): None, (1,3): None, (2,1): None,
                    (2,3): None, (3,1): None, (3,2): None}):
    if len(seq) == N:
        seqs.add(seq)
        return seqs
    elif len(seq) == 0:
        for i in range(1, 4):
            seqs = next(seq + (i,), seqs, pair_idxs)
    else:
        last = seq[-1]
        possibles = set([1, 2, 3])
        possibles.remove(last)
        for i in possibles:
            prev_pos = pair_idxs[(last, i)]
            if prev_pos == None or test(seq, i, prev_pos):
                new_pi = dict(pair_idxs)
                new_pi[(last, i)] = len(seq) - 1
                seqs = next(seq + (i,), seqs, new_pi)
    return seqs

def test(seq, i, prev_pos):
    sub_new = seq[prev_pos+2:] + (i,)
    sub_prev = seq[:prev_pos+2][-len(sub_new):]
    return sub_new != sub_prev

print next()

how come some solutions don't have the 3x and 2x sequences while some of the other solutions do? if you include the 3x and 2x sequences it looks like the length of the solution is generated by this sequence, which is kinda cool:

Simple recursive solution in Python 3.
Here is a quick solution in Python 3:

#!/usr/bin/env python
import re
import itertools

VALID_CHARS = ['1', '2', '3']

def no_repetition(s):
    return re.search(r'(.+)\1', s) == None

def valid_strings(n):
    dictionary = (''.join(x) for x in itertools.product(VALID_CHARS, repeat=n))
    dictionary = filter(no_repetition, dictionary)
    for word in dictionary:
        print(word)

def main():
    valid_strings(3)
    valid_strings(5)
    valid_strings(10)

if __name__ == "__main__":
    main()

Here are my usual solutions in Python and Racket: Generating non-repeating strings. It's just the brute-force solutions for now; I think I'll try to write up a smarter solution that bails out early tomorrow.

@Ben: That actually makes perfect sense. If you parse the jargon, ternary is base three (thus the alphabet A = {a, b, c}) and a squarefree word is one without any adjacent repeated subwords. So exactly what the problem asks for.

@Mike: I think that yield from alone is just about enough for me to give Python 3.3+ another try over Python 2.7. That's really helpful.

My C recursive solution.

Sorry, but I don't think I got the problem definition right. Do the following also belong to an N=5 sequence? 13121 13123 13212 13213 13231, and also sequences starting with 2 and 3...

My C recursive solution.

A divide and conquer approach in Python:

I'm not entirely sure what you mean by "statement separator" vs. "statement terminator", but semicolons in Pascal are used as statement delimiters in (almost) exactly the same way as C. The difference in Pascal is that the final statement of a lexical block does not need to be terminated with a semicolon; the end of the block terminates the expression. Also, similar to C, Pascal does not require you to use the block delimiters "begin" and "end" (curly braces in C) for single-expression blocks. A good example demonstrating all of these behaviors is found in his print procedure: the call to "change" is not delimited with a semicolon because it is the last statement of the if-block.
Similarly, "extend" is not delimited with a semicolon because it is the last statement of the else-block. Also, the else-block contains only a single expression, so the block delimiters "begin" and "end" are not used at all. For clarity, the above could be rewritten with the following equivalent. I personally find the lack of semicolons and block delimiters to be slightly confusing and more difficult to maintain, so, unlike Wirth, I choose to use them for all blocks and all expressions.

New to Python/programming. First I created all the possible combinations. Then I test each one and return the valid ones. My results are in lists, but I'm sure they could easily be changed to strings if that is 'required'.

def wirth(n):
    a = 1
    b = 2
    c = 3
    chars = [a, b, c]
    validseq = []
    allseq = [[a], [b], [c]]
    i = 0
    while i < n-1:  ## build all combinations of the 3 characters
        newseq = []
        for e in allseq:
            x = e + [a]
            newseq.append(x)
            y = e + [b]
            newseq.append(y)
            z = e + [c]
            newseq.append(z)
        allseq = newseq
        i += 1
    for e in allseq:
        testresult = testvalidity(e, n)
        if testresult:
            validseq.append(testresult)
    print validseq

def testvalidity(e, n):
    i = 1
    while i < n:
        j = 0
        while j < min(i, n-i):
            if e[(i-1)-j:i] == e[i:i+j+1]:
                return None
            j += 1
        i += 1
    return e

My Java solution here: (hamidj.wordpress.com)

My Mathematica translation of Wirth's algorithm:

Just wondering if anyone has tested for a max N? My program generates quite a few solutions (non-unique) with N=21, but zero at 22. Has anyone else noticed this, or is there a problem with my program? Thanks.

If you follow the link from the exercise to codepad, and modify the script given there, you will find 691 solutions with n=22. Something is wrong with your program.

Wow, no one came up with a C++ version? ;( Or even a C# version? I'm looking for a heuristic algorithm to add its source to my code so that the heuristic function can be used.
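Since several commenters disagreed about the exact rule and asked for other languages, here is a compact brute-force sketch (mine, not from the thread) of the strict reading: reject any word containing an adjacent repeated block of any length. The counts 12 and 18 for lengths 3 and 4 over a three-letter alphabet can be checked by hand (12 = 3*2*2 words with no equal neighbors; 18 = 24 such words of length 4 minus the 6 of the form abab):

```python
from itertools import product

def squarefree(word):
    # Reject any adjacent repeated block: word[i:i+k] == word[i+k:i+2k].
    n = len(word)
    return not any(word[i:i + k] == word[i + k:i + 2 * k]
                   for i in range(n)
                   for k in range(1, (n - i) // 2 + 1))

def count(n, alphabet="123"):
    # Brute force over all |alphabet|**n words of length n.
    return sum(1 for w in product(alphabet, repeat=n)
               if squarefree("".join(w)))

print(count(3), count(4))  # 12 18
```

Note this counts words, not the looser rule used by the Perl solution above, which only excluded repeated blocks of length two.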
https://programmingpraxis.com/2012/12/11/stepwise-program-development-a-heuristic-algorithm/
09 March 2012 07:21 [Source: ICIS news] SINGAPORE (ICIS)--South Korea's LG Chem has restarted its phenol-acetone plant and downstream bisphenol A (BPA) lines at Yeosu, a company source said. The phenol-acetone plant and its downstream No 1 and No 2 BPA lines were shut on 3 March because of feedstock propylene supply issues, the source said. The phenol-acetone plant, which can produce 300,000 tonnes/year of phenol and 180,000 tonnes/year of acetone, is currently running at 100%, the source said. Its 150,000 tonne/year No 1 BPA line has resumed full operation, the source said. The 150,000 tonne/year No 2 BPA unit is ramping up production, with full operation anticipated next
http://www.icis.com/Articles/2012/03/09/9539928/s-koreas-lg-chem-restarts-yeosu-phenol-acetone-bpa-plants.html
On Mar 16, 5:29pm, "David I. Dalva" wrote: > Subject: Packet filtering and FTP > Summary: Cisco "established" keyword breaks FTP-DATA. > > I am having FTP trouble when I configure my Cisco to only permit established > TCP connections above port 1024. When a new (random) port is created for > FTP-DATA (e.g., as the result of a "dir"), the Cisco prohibits the connection > since it doesn't meet the "established" criteria. > > Does anybody know what the port range is for randomly allocated ports, or > another way to get around this problem? > > Dave Dalva <dave @ tis . com> > Trusted Information Systems, Inc. > Glenwood, MD 21738 > +1 301 854-6889 > +1 301 854-5363 FAX >-- End of excerpt from "David I. Dalva" dave: talk to one of your associates, marcus ranum...I am sure he has some ideas along this line... -- Bryan D. Boyle |Physical: Exxon Research, Annandale, NJ 08801 #include <disclaimer> |Logical: bdboyle @ erenj . com < USENET: Post to exotic, distant machines. Meet exciting, > < unusual people. And flame them. >
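For readers landing on this decades-old thread: the now-standard workaround is passive-mode FTP, in which the client opens the data connection outbound, so a packet filter that only permits established or client-initiated connections no longer breaks FTP-DATA. This was not what the original poster asked, but it is how the problem is usually avoided today; in Python's ftplib (no connection is actually made in this sketch):

```python
from ftplib import FTP

ftp = FTP()          # no host given, so nothing is dialed yet
ftp.set_pasv(True)   # passive mode: the client initiates the data connection
print(ftp.passiveserver)
```

With passive mode off, the server would connect back to a random client port above 1024, which is exactly the traffic the Cisco "established" rule rejects.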
http://www.greatcircle.com/lists/firewalls/mhonarc/firewalls.199303/msg00045.html
User talk:Leo

Translation group

Hello Leo, I would like to form a group for the French translation! Are you interested? You can contact me by email, answer on your discussion page, or answer on mine. --Feng (talk) 06:35, 5 April 2017 (UTC)

Translation pages

Hi Leo, You can join us on the #gentoo-wiki channel (IRC) to exchange ideas, ask questions, etc. In fact, I disagree with your translation of the term "handbook". Therefore, a common glossary should be created to avoid having different translations. I would have liked to create a common space for the translation activity. This space will improve the translation and the cooperation (shared knowledge). Unfortunately, I currently do not have much time available. I have already started a terminology page (with Vodinos). The Russian translation team already has a common glossary (cf. the history). Temporarily, I suggest you learn how to use the wiki or improve your wiki skills. If you want, you can also define and propose a structure for the common space. You can also share your ideas for the wiki on the main page. --Feng (talk) 08:13, 8 April 2017 (UTC)

- Hi Feng,
- Thank you for your comments about my translations. Please forgive my incorrect term for "handbook"; I fixed it by looking at the other user's translation. My translations were wrong because I didn't understand what "handbook" really means. I'll take a look at the terminology page.
- Thank you for your comments, and please forgive my errors and/or my ignorance about how the Gentoo wiki works.

I suggest that you make a glossary grouping all the technical terms you have translated. Thus, our translation will be improved. --Feng (talk) 20:18, 18 April 2017 (UTC)

Translation notes

I have already translated (recently) the main page of the handbook. I'm going to update this translation. The last translation was "cleared" because the English article was updated. It would be better to translate the handbook after establishing an organization.
A list of articles to be translated in priority order, and the associated verification tasks, should be defined. For example, the handbook is an important set of articles that should be reviewed by several people (I have defined the "Role" status in the following table to achieve this). Currently, the organization of the French translation is not clearly defined and established. You can get a list of pages marked for translation on the page Special:PageTranslation. It would be great if you proposed, on this talk page, several pages to translate and effectively translated one of these pages; then, I hope I will have enough time to suggest improvements or make comments. --Feng (talk) 13:03, 8 April 2017 (UTC)

- Hello,
- I'll take a look at this page, but how do I tell you which pages I chose? There is no "Talk" or "Discuss" button on this page. Should I write it on the User talk:Leo page?
- Please forgive my (maybe stupid) questions; it's the first time I have helped with a translation project.
- --Leo (talk) 17:00, 8 April 2017 (UTC)
- You can write a comment on your talk page (the discussion is open here), you can send me an email (see the sidebar), or you can talk with me on IRC (most frequently on #gentoo-wiki, sometimes on #gentoofr). --Feng (talk) 18:43, 8 April 2017 (UTC)
- Hi,
- Thank you for your answer. I'll first begin by translating the Fontconfig page. --Leo (talk) 15:22, 10 April 2017 (UTC)
- Hello,
- As I have finished translating the Fontconfig page, I'd like to translate the Awesome page. Is that okay? --Leo (talk) 17:25, 18 April 2017 (UTC)
- I believe the article named "Awesome" can be an exercise if you try to improve the previous translations, so I think it is okay. Your translation of the "fontconfig" article needs to be reviewed by someone else. I also recommend that you keep the term "Use Flag" as it is (cf. this thread).
--Feng (talk) 20:08, 18 April 2017 (UTC)

- Hello, thank you for your message and for your translation; I have consequently changed the "Use Flag" translation in Fontconfig. By the way, is there a way to tell the translator what changes I have made? --Leo (talk) 15:22, 22 April 2017 (UTC)

Gentoo Wiki Guidelines

Leo, you are allowed to create a page with content in the main namespace, but you are not allowed to create an empty page. This type of "edit" should only occur in your userspace!!! Your contribution to the French translation is welcomed here: User:Feng/Gentoo Traduction! --Feng (talk) 18:01, 16 April 2017 (UTC)

- Hello Feng,
- Please forgive me, I didn't realize I was doing something wrong. Did you successfully delete it? What should I write on User:Feng/Gentoo Traduction? Is there a formatted way to write in it? --Leo (talk) 17:18, 18 April 2017 (UTC)
- The situation is a bit embarrassing because I do not know when the main page will be written and whether it will be redirected to this location. We must follow the wiki guidelines to edit the wiki - Help:Contents! I will inquire about the page. --Feng (talk) 19:48, 18 April 2017 (UTC)
https://wiki.gentoo.org/wiki/User_talk:Leo
Java Exercises: Reads n digits (given) chosen from 0 to 9 and prints the number of combinations

Java Basic: Exercise-228 with Solution

Write a Java program that reads n digits (given) chosen from 0 to 9 and prints the number of combinations where the sum of the digits equals another given number (s). Do not use the same digit twice in a combination. For example, the combinations where n = 3 and s = 6 are as follows:

1 + 2 + 3 = 6
0 + 1 + 5 = 6
0 + 2 + 4 = 6

Input: two integers, the number of digits (n) and their sum (s), separated by a single space on one line. Input 0 0 to exit.

Sample Solution:

Java Code:

import java.util.*;

public class Main {
    public static void main(String[] args) {
        Scanner stdIn = new Scanner(System.in);
        System.out.println("Input number of combinations and sum (separated by a space in a line):");
        int n = stdIn.nextInt();
        int s = stdIn.nextInt();
        int c1 = comnum(0, n, s, 0);
        System.out.println("Number of combinations:");
        System.out.println(c1);
    }

    // Count combinations of distinct digits from i..9: at each digit,
    // either use it (n decreases, partial sum p grows) or skip it.
    public static int comnum(int i, int n, int s, int p) {
        if (s == p && n == 0) {
            return 1;
        }
        if (i >= 10) {
            return 0;
        }
        if (n < 0) {
            return 0;
        }
        if (p > s) {
            return 0;
        }
        int c1 = comnum(i + 1, n - 1, s, p + i); // use digit i
        int c2 = comnum(i + 1, n, s, p);         // skip digit i
        return c1 + c2;
    }
}

Sample Output:

Input number of combinations and sum (separated by a space in a line):
3 6
Number of combinations:
3
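As a quick cross-check of the recursion (not part of the original exercise), the same count can be computed in Python with itertools, since each combination is just a set of distinct digits from 0 to 9:

```python
from itertools import combinations

def count_combinations(n, s):
    # Count the n-element subsets of the digits 0..9 whose sum is s.
    return sum(1 for combo in combinations(range(10), n)
               if sum(combo) == s)

print(count_combinations(3, 6))  # 3, matching the sample output
```

For n = 3 and s = 6 this finds exactly the three combinations listed above: {1, 2, 3}, {0, 1, 5}, and {0, 2, 4}.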
https://www.w3resource.com/java-exercises/basic/java-basic-exercise-228.php
Higher-Order Components. In this tutorial, we will learn what a HOC is, its basic structure, some use cases, and finally an example. Note: basic knowledge of React and JavaScript will come in handy as you work through this tutorial.

Higher-Order Functions In JavaScript

Before jumping into HOCs in React, let's briefly discuss higher-order functions in JavaScript. Understanding them is critical to understanding our topic of focus. Higher-order functions in JavaScript take some functions as arguments and return another function. They enable us to abstract over actions, not just values. They come in several forms, and they help us to write less code when operating on functions and even arrays. The most interesting part of using higher-order functions is composition. We can write small functions that handle one piece of logic. Then, we can compose complex functions by using the different small functions we have created. This reduces bugs in our code base and makes our code much easier to read and understand. JavaScript has some of these functions already built in. Some examples of higher-order functions are the following:

- .forEach(): iterates over every element in an array with the same code, but does not change or mutate the array, and returns undefined.
- .map(): transforms an array by applying a function to all of its elements, and then building a new array from the returned values.
- .reduce(): executes a provided function for each value of the array (from left to right).
- .filter(): checks every single element in an array to see whether it meets certain criteria, as specified in the filter method, and then returns a new array with the elements that match the criteria.
Many higher-order functions are built into JavaScript, and you can make your own custom ones.

An Example Of A Custom Higher-Order Function

Suppose we are asked to write a function that formats integers as currencies, including some customization of specifying the currency symbol and adding a decimal separator for the currency amount. We can write a higher-order function that takes the currency symbol and also the decimal separator. This same function would then format the value passed to it with the currency symbol and decimal separator. We would name our higher-order function formatCurrency.

const formatCurrency = function (currencySymbol, decimalSeparator) {
  return function (value) {
    const wholePart = Math.trunc(value / 100);
    let fractionalPart = value % 100;
    if (fractionalPart < 10) {
      fractionalPart = '0' + fractionalPart;
    }
    return `${currencySymbol}${wholePart}${decimalSeparator}${fractionalPart}`;
  };
};

formatCurrency returns a function with a fixed currency symbol and decimal separator. We then pass the formatter a value, and format this value with the function by extracting its whole part and the fractional part. The returned value of this function is constructed by a template literal, concatenating the currency symbol, the whole part, the decimal separator, and the fractional part.

Let's use this higher-order function by assigning a value to it and seeing the result.

> getLabel = formatCurrency( '$', '.' );
> getLabel( 1999 )
"$19.99" //formatted value
> getLabel( 2499 )
"$24.99" //formatted value

You might have noticed that we created a variable named getLabel, then assigned our formatCurrency higher-order function to it, and then passed the currency formatters to the function, which are the currency symbol and a decimal separator. To make use of the function, we call getLabel, which is now a function, and we pass in the value that needs to be formatted. That's all! We have created a custom higher-order function of our own.

What Is A Higher-Order Component?

A higher-order component (HOC) is an advanced technique for reusing component logic in React.
HOCs take one or more components as arguments and return a new, upgraded component. Sounds familiar, right? They are similar to higher-order functions, which take some functions as arguments and produce a new function. HOCs are commonly used to design components with certain shared behavior, in a way that makes them connected differently than the normal state-to-props pattern.

Facts About HOCs

- We don't modify or mutate components. We create new ones.
- A HOC is used to compose components for code reuse.
- A HOC is a pure function. It has no side effects, returning only a new component.

Here are some examples of real-world HOCs you might have come across:

Structure Of A Higher-Order Component

A HOC is structured like a higher-order function:

- It is a component.
- It takes another component as an argument.
- Then, it returns a new component.
- The component it returns can render the original component that was passed to it.

The snippet below shows how a HOC is structured in React:

import React from 'react';

// Take in a component as argument WrappedComponent
const higherOrderComponent = (WrappedComponent) => {
  // And return another component
  class HOC extends React.Component {
    render() {
      return <WrappedComponent />;
    }
  }
  return HOC;
};

We can see that higherOrderComponent takes a component (WrappedComponent) and returns another component inside of it. With this technique, whenever we need to reuse a particular component's logic for something, we can create a HOC out of that component and use it wherever we like.

Use Cases

In my experience as a front-end engineer who has been writing React for a while now, here are some use cases for HOCs.

Show a loader while a component waits for data

Most of the time, when building a web application, we need to use a loader of some sort that is displayed while a component is waiting for data to be passed to its props.
We could easily use an in-component solution to render the loader, which would work, but it wouldn't be the most elegant solution. Better would be to write a common HOC that can track those props; while those props haven't been injected or are in an empty state, it can show a loading state.

To explain this properly, let's build a list of repositories fetched from a public API, and handle the list's loading state so that our users don't panic when the API we are getting data from takes a long time to respond.

Let's generate a React app:

npx create-react-app repos-list

A basic list component can be written as follows:

// List.js
import React from 'react';

const List = (props) => {
  const { repos } = props;
  if (!repos) return null;
  if (!repos.length) return <p>No repos, sorry</p>;
  return (
    <ul>
      {repos.map((repo) => {
        return <li key={repo.id}>{repo.full_name}</li>;
      })}
    </ul>
  );
};

export default List;

The code above is a list component. Let's break it down into tiny bits so that we can understand what is happening.

const List = (props) => {};

Above, we initialize our functional component, named List, and pass props to it.

const { repos } = props;

Then, we destructure a constant named repos from our component's props, so that it can be used to render our component.

if (!repos) return null;
if (!repos.length) return <p>No repos, sorry</p>;

Above, we are saying that if the repos prop hasn't been populated yet, the component should return null. We are also carrying out a conditional render here: if the repos prop is an empty array, the component should render "No repos, sorry" in the browser.

return (
  <ul>
    {repos.map((repo) => {
      return <li key={repo.id}>{repo.full_name}</li>;
    })}
  </ul>
);

Here, we map over the repos array and return a list item for each repo's full name, with a unique key for each entry.

Now, let's write a HOC that handles loading, to make our users happy.
// withLoading.js
import React from 'react';

function WithLoading(Component) {
  return function WithLoadingComponent({ isLoading, ...props }) {
    if (!isLoading) return <Component {...props} />;
    return <p>Hold on, fetching data might take some time.</p>;
  };
}

export default WithLoading;

This displays the text "Hold on, fetching data might take some time." while the app is still fetching data and the props are being injected into state. We make use of isLoading to determine whether the wrapped component should be rendered.

Now, in your App.js file, you can use the wrapped list without adding any loading logic to List itself:

import React from 'react';
import List from './components/List.js';
import WithLoading from './components/withLoading.js';

const ListWithLoading = WithLoading(List);

class App extends React.Component {
  state = {
    loading: false,
    repos: null,
  };

  componentDidMount() {
    this.setState({ loading: true });
    fetch(``)
      .then((json) => json.json())
      .then((repos) => {
        this.setState({ loading: false, repos: repos });
      });
  }

  render() {
    return (
      <ListWithLoading
        isLoading={this.state.loading}
        repos={this.state.repos}
      />
    );
  }
}

export default App;

The code above is our entire app. Let's break it down to see what is happening.

class App extends React.Component {
  state = {
    loading: false,
    repos: null,
  };

  componentDidMount() {
    this.setState({ loading: true });
    fetch(``)
      .then((json) => json.json())
      .then((repos) => {
        this.setState({ loading: false, repos: repos });
      });
  }

All we are doing here is creating a class component named App, then initializing state with two properties: loading and repos. The initial state of loading is false, while the initial state of repos is null.

Then, when our component is mounting, we set the loading state to true and immediately make a fetch request to the API URL that holds the data we need to populate our List component. Once the request is complete, we set loading back to false and populate the repos state with the data we have pulled from the API request.
const ListWithLoading = WithLoading(List);

Here, we create a new component named ListWithLoading by passing our List component into the WithLoading HOC that we created.

render() {
  return (
    <ListWithLoading
      isLoading={this.state.loading}
      repos={this.state.repos}
    />
  );
}

Above, we render the ListWithLoading component, which is our List component supercharged by the WithLoading HOC. We also pass the loading and repos state values as props to the component.

While the page is still trying to pull data from the API, our HOC renders the loading text in the browser. When loading is done and the props are no longer in an empty state, the repos are rendered to the screen.

Conditionally Render Components

Suppose we have a component that needs to be rendered only when a user is authenticated — it is a protected component. We can create a HOC named withAuth() to wrap that protected component, and then do a check in the HOC that renders the protected component only if the user has been authenticated.

A basic withAuth() HOC, according to the example above, can be written as follows:

// withAuth.js
import React from "react";

export function withAuth(Component) {
  return class AuthenticatedComponent extends React.Component {
    isAuthenticated() {
      return this.props.isAuthenticated;
    }

    /**
     * Render
     */
    render() {
      const loginErrorMessage = (
        <div>
          Please <a href="">login</a> in order to view this part of the application.
        </div>
      );

      return (
        <div>
          { this.isAuthenticated() === true ? <Component {...this.props} /> : loginErrorMessage }
        </div>
      );
    }
  };
}

export default withAuth;

The code above is a HOC named withAuth. It takes a component and returns a new component, named AuthenticatedComponent, that checks whether the user is authenticated. If the user is not authenticated, it renders the loginErrorMessage element; if the user is authenticated, it renders the wrapped component.
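The gating logic in withAuth does not depend on React itself. Stripped of JSX, the same idea can be sketched in plain JavaScript (all names and strings below are illustrative, not from the article):

```javascript
// Framework-free sketch of the withAuth idea: wrap a render function so it
// only runs when an isAuthenticated flag is set on the incoming props.
const withAuth = (render) => (props) =>
  props.isAuthenticated ? render(props) : 'Please login to view this.';

const secretView = () => 'This is only viewable by authenticated users.';
const guardedView = withAuth(secretView);

console.log(guardedView({ isAuthenticated: true }));
console.log(guardedView({ isAuthenticated: false }));
```

The wrapper decides what to produce; the protected view itself never has to know about authentication.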
Note: this.props.isAuthenticated has to be set from your application's logic. (Or else use react-redux to retrieve it from the global state.)

To make use of our HOC in a protected component, we'd use it like so:

// MyProtectedComponent.js
import React from "react";
import { withAuth } from "./withAuth.js";

export class MyProtectedComponent extends React.Component {
  /**
   * Render
   */
  render() {
    return (
      <div>
        This is only viewable by authenticated users.
      </div>
    );
  }
}

// Now wrap MyProtectedComponent with the withAuth HOC
export default withAuth(MyProtectedComponent);

Here, we create a component that is viewable only by users who are authenticated. We wrap that component in our withAuth HOC to protect it from users who are not authenticated.

Provide Components With Specific Styling

Continuing the use case above, based on whatever UI state you get from the HOC, you can render specific styles for specific UI states. For example, if the need arises in multiple places for styles like backgroundColor, fontSize and so on, they can be provided via a HOC by wrapping the component with one that just injects props with the specific className.

Take a very simple component that renders "hello" and the name of a person. It takes a name prop and some other props that can affect the rendered JavaScript XML (JSX).

// A simple component
const HelloComponent = ({ name, ...otherProps }) => (
  <div {...otherProps}>Hello {name}!</div>
);

Let's create a HOC named withStyling that adds some styling to the "hello" text.

const withStyling = (BaseComponent) => (props) => (
  <BaseComponent
    {...props}
    style={{ fontWeight: 700, color: 'green' }}
  />
);

In order to make use of the HOC on our HelloComponent, we wrap the HOC around the component.
We create a pure component, named EnhancedHello, and assign it the result of applying the HOC to our HelloComponent, like so:

const EnhancedHello = withStyling(HelloComponent);

To see the change to our HelloComponent, we render the EnhancedHello component:

<EnhancedHello name='World' />

Now, the output of our HelloComponent becomes this:

<div style={{ fontWeight: 700, color: 'green' }}>Hello World!</div>

Provide A Component With Any Prop You Want

This is a popular use case for HOCs. We can study our code base and note what reusable prop is needed across components. Then, we can have a wrapper HOC to provide those components with the reusable prop.

Let's use the example above:

// A simple component
const HelloComponent = ({ name, ...otherProps }) => (
  <div {...otherProps}>Hello {name}!</div>
);

Let's create a HOC named withNameChange that sets the name prop on a base component to "New Name" unless it is overridden.

const withNameChange = (BaseComponent) => (props) => (
  <BaseComponent
    name="New Name"
    {...props}
  />
);

In order to use the HOC on our HelloComponent, we wrap the HOC around the component, create a pure component named EnhancedHello2, and assign it the result of applying the HOC to our HelloComponent, like so:

const EnhancedHello2 = withNameChange(HelloComponent);

To see the change to our HelloComponent, we can render the EnhancedHello2 component like so:

<EnhancedHello2 />

Now, the text in our HelloComponent becomes this:

<div>Hello New Name!</div>

To change the name prop, all we have to do is this:

<EnhancedHello2 name='Shedrack' />

The text in our HelloComponent becomes this:

<div>Hello Shedrack!</div>

Let's Build A Higher-Order Component

In this section, we will build a HOC that takes a component that has a name prop, and then we will make use of the name prop in our HOC.

So, generate a new React app with create-react-app, like so:

npx create-react-app my-app

After it is generated, replace the code in your index.js file with the following snippet.

import React from 'react';
import { render } from 'react-dom';

const Hello = ({ name }) => <h1>
  Hello {name}!
</h1>;

function withName(WrappedComponent) {
  return class extends React.Component {
    render() {
      return <WrappedComponent name="Smashing Magazine" {...this.props} />;
    }
  };
}

const NewComponent = withName(Hello);

const App = () => <div>
  <NewComponent />
</div>;

render(<App />, document.getElementById('root'));

Once you have replaced the code in your index.js file, you should see "Hello Smashing Magazine!" on your screen.

Let's go through the snippet bit by bit.

const Hello = ({ name }) => <h1>
  Hello {name}!
</h1>;

Here, we create a functional component that has a prop called name. In this functional component, we render "Hello" and the value of the name prop in an h1 tag.

function withName(WrappedComponent) {
  return class extends React.Component {
    render() {
      return <WrappedComponent name="Smashing Magazine" {...this.props} />;
    }
  };
}

Above, we create a higher-order component named withName(). It returns an anonymous class component that renders the component wrapped in the HOC, and we assign a value to the name prop of the wrapped component.

const NewComponent = withName(Hello);

Here, we create a new component named NewComponent. We use the HOC that we created and pass it the functional component that we created at the start of the code base, named Hello.

const App = () => <div>
  <NewComponent />
</div>;

render(<App />, document.getElementById('root'));

All we are doing above is creating another functional component, named App. It renders the NewComponent that we upgraded with our HOC in a div. Then, we use the react-dom render function to display the component in the browser.

That's all we need to do! Our withName function takes a component as an argument and returns a HOC. A few months from now, if we decide to change things around, we only have to edit our HOC.

Conclusion

I hope you've enjoyed working through this tutorial. You can read more about higher-order components in the references listed below.
If you have any questions, leave them in the comments section below. I’ll be happy to answer every one. Resources And References
https://www.fvwebsite.design/fraser-valley-website-design/higher-order-components-in-react-smashing-magazine/
can we write automated tests for all of this? Well... I have so many answers for that. First, you could unit test your message classes. I don't normally do this... because those classes tend to be so simple... but if your class is a bit more complex or you want to play it safe, you can totally unit test this. More important are the message handlers: it's definitely a good idea to test these. You could write unit tests and mock the dependencies or write an integration test... depending on what's most useful for what each handler does. The point is: for message and message handler classes... testing them has absolutely nothing to do with messenger or transports or async or workers: they're just well-written PHP classes that we can test like anything else. That's really one of the beautiful things about messenger: above all else, you're just writing nice code. But functional tests are more interesting. For example, open src/Controller/ImagePostController.php. The create() method is the upload endpoint and it does a couple of things: like saving the ImagePost to the database and, most important for us, dispatching the AddPonkaToImage object. Writing a functional test for this endpoint is actually fairly straightforward. But what if we wanted to be able to test not only that this endpoint "appears" to have worked, but also that the AddPonkaToImage object was, in fact, sent to the transport? After all, we can't test that Ponka was actually added to the image because, by the time the response is returned, it hasn't happened yet! Let's get the functional test working first, before we get all fancy. Start by finding an open terminal and running: composer require phpunit --dev That installs Symfony's test-pack, which includes the PHPUnit bridge - a sort of "wrapper" around PHPUnit that makes life easier. When it finishes, it tells us to write our tests inside the tests/ directory - brilliant idea - and execute them by running php bin/phpunit. 
That little file was just added by the recipe and it handles all the details of getting PHPUnit running. Ok, step one: create the test class. Inside tests, create a new Controller/ directory and then a new PHP Class: ImagePostControllerTest. Instead of making this extend the normal TestCase from PHPUnit, extend WebTestCase, which will give us the functional testing superpowers we deserve... and need. The class lives in FrameworkBundle but... be careful because there are (gasp) two classes with this name! The one you want lives in the Test namespace. The one you don't want lives in the Tests namespace... so it's super confusing. It should look like this. If you choose the wrong one, delete the use statement and try again. But.... while writing this tutorial and getting mad about this confusing part, I created an issue on the Symfony repository. And I'm thrilled that by the time I recorded the audio, the other class has already been renamed! Thanks to janvt who jumped on that. Go open source! Anyways, because we're going to test the create() endpoint, add public function testCreate(). Inside, to make sure things are working, I'll try my favorite $this->assertEquals(42, 42). Notice that I didn't get any auto-completion on this. That's because PHPUnit itself hasn't been downloaded yet. Check it out: find your terminal and run the tests with: php bin/phpunit This little script uses Composer to download PHPUnit into a separate directory in the background, which is nice because it means you can get any version of PHPUnit, even if some of its dependencies clash with those in your project. Once it's done... ding! Our one test is green. And the next time we run: php bin/phpunit it jumps straight to the tests. And now that PHPUnit is downloaded, once PhpStorm builds its cache, that yellow background on assertEquals() will go away. To test the endpoint itself, we first need an image that we can upload. 
Inside the tests/ directory, let's create a fixtures/ directory to hold that image. Now I'll copy one of the images I've been uploading into this directory and name it ryan-fabien.jpg. There it is. The test itself is pretty simple: create a client with $client = static::createClient() and an UploadedFile object that will represent the file being uploaded: $uploadedFile = new UploadedFile() passing the path to the file as the first argument - __DIR__.'/../fixtures/ryan-fabien.jpg - and the filename as the second - ryan-fabien.jpg. Why the, sorta, "redundant" second argument? When you upload a file in a browser, your browser sends two pieces of information: the physical contents of the file and the name of the file on your filesystem. Finally, we can make the request: $client->request(). The first argument is the method... which is POST, then the URL - /api/images - we don't need any GET or POST parameters, but we do need to pass an array of files. If you look in ImagePostController, we're expecting the name of the uploaded file - that's normally the name attribute on the <input field - to literally be file. Not the most creative name ever... but sensible. Use that key in our test and set it to the $uploadedFile object. And... that's it! To see if it worked, let's just dd($client->getResponse()->getContent()). Testing time! Find your terminal, clear the screen, deep breath and... php bin/phpunit Got it! And we get a new id each time we run it. The ImagePost records are saving to our normal database because I haven't gone to the trouble of creating a separate database for my test environment. That is something I normally like to do. Remove the dd(): let's use a real assertion: $this->assertResponseIsSuccessful(). This nice method was added in Symfony 4.3... and it's not the only one: this new WebTestAssertionsTrait has a ton of nice new methods for testing a whole bunch of stuff! If we stopped now... this is a nice test and you might be perfectly happy with it. But... 
there's one part that's not ideal. Right now, when we run our test, the AddPonkaToImage message is actually being sent to our transport... or at least we think it is... we're not actually verifying that this happened... though we can check manually right now. To make this test more useful, we can do one of two different things. First, we could override the transports to be synchronous in the test environment - just like we did with dev. Then, if handling the message failed, our test would fail. Or, second, we could at least write some code here that proves that the message was at least sent to the transport. Right now, it's possible that the endpoint could return 200... but some bug in our code caused the message never to be dispatched. Let's add that check next, by leveraging a special "in memory" transport.
https://symfonycasts.com/screencast/messenger/functional-test
The Unreasonable Effectiveness of Dynamic Typing for Practical Programs

Summary

Robert Smallshire explores the disconnect between the dire outcomes predicted by advocates of static typing versus the near absence of type errors in real world systems built with dynamic languages.

Bio

Robert Smallshire is a founding director of Sixty North, a software product and consulting business in Norway. He has worked in architecture and technical management roles for several companies providing tools in the energy sector for dealing with the masses of information flowing from today's digital oil fields. Robert is a regular speaker at conferences, meetups and corporate software events.

"BUILD STUFF" Conference is a Software Development Conference created for developers, team leaders, software architects and technical project managers. Our goal is to bring world class speakers to share innovations, latest developments, new trends and directions in the software development world to the Baltics.

Are there advanced type systems? by Arturo Hernandez

The second problem is that the productivity when first writing a program does not necessarily represent the full cost of a system. When the original programmers of a dynamically typed system have left, then the cost equation may change.

Static vs dynamic typing by Serge Slepov

As to the point of the lecture, I didn't see it. We were promised to be shown scientific/experimental proof of dynamic languages making programmers happier and more productive. What the experiments have shown, though, is that there is some percentage (1-2%) of type errors amongst all the issues logged for dynamic language projects. That's the errors that could have been avoided by using a statically typed language.
I spent 50 minutes waiting to hear what the tradeoffs were of static typing and at the end of the video an argument was presented where Robert was pointing at a slide with some C++ code and saying it took him days to write it. I didn't see the slide (fire that cameraman!) but I am guessing it was showing a case of having to specify an iterator type explicitly. C# or F# don't have this problem while being statically typed (with support for dynamic typing too, if you prefer). Still not convinced. What about Type Inference? by Faisal Waris Have been coding F# for over 3 years. It definitely has the feel of a dynamically typed language but gives you all the support of a statically type one. Hmm I use Python and CoffeeScript by aoeu256 . class MyLongClass: # Actually happened to me : ( def lol(a): return 1 # 500 lines later def lol(a): return 5 or this: myclass.lol = 5 myclass.lal = 3 # mistyped lol or this: lol = 3 oll = 24 # i meant to redefine lol if you use strings for "enums" instead of class namespaces you can also mis-spell it, but if you use classes: enemyType = EnemyType.Witch These are all "Write Errors", rather than "Read Errors". If you assume they are equally as likely as "Read Errors" then it bumps it up to 5%-10%. Still not good enough for Static Typing : ). Fixes for write errors: <-- Maybe use ":" for defining things, and "=" for redefining things, and disallow class and def to redefining things. lol: 3 lal = 5 Why don't Dynamic languages have "interactive coercion & typing, and spelling check" where when you write 5 + "5" it asks you whether to fix it(modifying the file, and changing the function) or to error out. You can also automatically fix most errors by saying if you are adding an int and string it will ask you if you want to make both a str or int. If you get a NameError, KeyError, or AttributeError it retries the line of code by spelling checking it over the local variables/Attributes. 
If you get an AttributeError (duck-typing error) or a TypeError because of wrong argument order, all you have to do to automatically fix the error is to retry calling with a different argument order and see if the error goes away (and you can do this all the way to the top of the stack). I heard LISP has Edit & Continue, but do Clojure and other languages also have it? I wrote a Python Edit & Continue decorator for individual functions.

A waste of time by tim frian

The speaker cloaks his disdain and lack of understanding of static type systems and theory behind some pseudo-scientific experiments and lame analogies. Don't waste your time watching this. Instead, I recommend presentations by Daniel Spiewak or Martin Odersky on the subject.

An Unreasonable Failure of Domain Modeling by Jared Hester

His types should have been:

[<Measure>] type kg
[<Measure>] type m
[<Measure>] type s
[<Measure>] type N = kg m/s^2
[<Measure>] type J = N * m
[<Measure>] type Radian
[<Measure>] type Torque = J / Radian

Had he done this, the compiler would have rejected his attempts to add torque to energy.
http://www.infoq.com/presentations/dynamic-static-typing
Getting Started

For each algorithm you can see the examples in the provided colab or in the repo itself. For VAD we also provide streaming examples for a single stream and multiple streams.

import torch
torch.set_num_threads(1)
from pprint import pprint

model, utils = torch.hub.load(repo_or_dir='snakers4/silero-vad',
                              model='silero_vad',
                              force_reload=True)

(get_speech_ts, _, read_audio, _, _, _) = utils

files_dir = torch.hub.get_dir() + '/snakers4_silero-vad_master/files'
wav = read_audio(f'{files_dir}/en.wav')  # full audio

# get speech timestamps from full audio file
speech_timestamps = get_speech_ts(wav, model, num_steps=4)
pprint(speech_timestamps)

Latency

All speed tests were run on an AMD Ryzen Threadripper 3960X using only 1 thread:

torch.set_num_threads(1)  # pytorch
ort_session.intra_op_num_threads = 1  # onnx
ort_session.inter_op_num_threads = 1  # onnx

Streaming latency depends on 2 factors:

- num_steps — number of windows to split each audio chunk into. Our post-processing class keeps the previous chunk in memory (250 ms), so the new chunk (also 250 ms) is appended to it. The resulting big chunk (500 ms) is split into num_steps overlapping windows, each 250 ms long;
- the number of audio streams.

So the batch size for streaming is num_steps * number of audio streams. The time between receiving new audio chunks and getting results is shown in the picture below.

Throughput

RTS (seconds of audio processed per second, real-time speed, or 1 / RTF) for full audio processing depends on num_steps (see the previous paragraph) and batch size (bigger is better).

VAD Quality Benchmarks

We use random 250 ms audio chunks for validation. The speech to non-speech ratio among chunks is about ~50/50 (i.e. balanced). Speech chunks are sampled from real audios in four different languages (English, Russian, Spanish, German), then random background noise is added to some of them (~40%).
Since our VAD (only the VAD; the other networks are more flexible) was trained on chunks of the same length, the model's output is just one float from 0 to 1 — the speech probability. We use speech probabilities as thresholds for the precision-recall curve. This can be extended to 100 — 150 ms; chunks shorter than 100 — 150 ms cannot be distinguished as speech with confidence.

Webrtc splits audio into frames, and each frame has a corresponding number (0 or 1). We use 30 ms frames for webrtc, so each 250 ms chunk is split into 8 frames; their mean value is used as a threshold for the plot.
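The streaming batch-size rule above is simple arithmetic; as a quick sketch (the concrete values here are illustrative, not benchmark settings from the README):

```javascript
// Streaming batch size = num_steps * number of audio streams (rule above).
const numSteps = 4;     // overlapping windows per 500 ms chunk
const numStreams = 10;  // concurrent audio streams being processed

const batchSize = numSteps * numStreams;
console.log(batchSize); // 40
```

Doubling either the number of windows or the number of streams doubles the batch submitted to the model, which is why both factors show up in the latency discussion.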
https://habr.com/en/post/537276/
Shawn Hargreaves Blog

One of the 2.0 features I am personally happiest about is the addition of custom parameters to control how your content gets built. This is something we always wanted to do, but didn't have time for in v1, so it feels great to be able to finally finish our original design!

The most obvious impact of this change is that the previous muddle of TextureProcessor, SpriteTextureProcessor, and ModelTextureProcessor classes has been replaced with a single improved TextureProcessor. Here it is in the Visual Studio properties window:

See that little + box to the left of the "Content Processor" label? Guess what happens if you expand it:

Tada! If you don't like our default texture format or color keying behavior, you can now easily change this without having to write any custom processor code.

Even better, remember the hoops you had to jump through to customize how models built their textures? Not any more. The built-in ModelProcessor now provides parameters controlling not only the 3D model processing, but also how to build any textures that are used on the model:

It is easy to add custom parameters to your own processors. Simply create a public property:

[ContentProcessor]
public class MyProcessor : ContentProcessor<MyType, MyOtherType>
{
    public float NumberOfCats
    {
        get { return numberOfCats; }
        set { numberOfCats = value; }
    }

    float numberOfCats = 23;

    ...

This will automatically show up in Visual Studio when your processor is selected. For bonus points you can include attributes to customize how the property is displayed and initialized:

[DefaultValue(23.0f)]
[DisplayName("Number of cats")]
[Description("Sets how many cats you wish to splice afore the mizzen")]
public float NumberOfCats { ... }

I'm excited about the possibilities this will open up for more powerful and reusable content processors. Imagine if the VegetationProcessor from the Billboard Sample let you choose what textures to use, or if the NormalMapProcessor from the Sprite Effects Sample let you choose how bumpy a normalmap to create...
Imagine if the VegetationProcessor from the Billboard Sample let you choose what textures to use, or if the NormalMapProcessor from the Sprite Effects Sample let you choose how bumpy a normalmap to create... From the pictures above, by setting "Generate tangent frames" to "true", the processor then (re)generates the normal, tangent and binormal vectors for the model, or is it something different? I'm not sure how the PropertyGrid control would handle this, but if it's "gracefully", why not merge Color Key Color and Color Key Enabled to just be a nullable Color Key? sorry to sound newbish but I was just wondering how you create a public property. I see the option to add a new content processor. Any direction in this subject would be greatly appreciated. On another note that's awesome that we can change these attributes so easy :) Just a heads up for anyone new to C# trying to use this information. You need to also include the following line to your list of assemblies if you wish to support the [DefaultValue()], [DisplayName()] and [Description()] method attributes. using System.ComponentModel; Hi Shawn, I have the following code in a Custom Texture Processor class: private Color colorKeyColor = Color.White; [DisplayName("Color Key Color")] //[DefaultValue(Color.Magenta)] [Description("Color Key")] public Color ColorKeyColor { get { return colorKeyColor; } set { colorKeyColor = value; } } The problem is that the ColorKeyColor prop only shows up as a textbox in the Property Grid, how to I get it to have nested R,G,B,A properties like the standard Texture Processor? Also If I uncomment the "[DefaultValue(Color.Magenta)]" line I get an error. How can I set the default Color for this property in the PropertyGrid? Thanks!! - Chris W OK, it looks like you have to type the Color into the text box in the format: R,B,G,A Then after you compile the nested R,B,G,A props become available. I take it the "[DefaultValue(Color.Magenta)]" entry won't work though? 
Looks like I answered my own question again. You need to do: [DefaultValue(typeof(Color), "0, 255, 0, 255")] In xna 3.1, is it possible to set the processor parameters in code. The context is if i was use msBuild to build content, it seems like i'm stuck using the default values for the model processor. mike: I would recommend the forums on creators.xna.com for this question. Blog post comments aren't really the best place for posting code snippets etc!
http://blogs.msdn.com/b/shawnhar/archive/2007/11/26/content-processor-parameters-in-xna-game-studio-2-0.aspx?Redirected=true
Borislav Hadzhiev

Last updated: Jul 25, 2022

The error "Invalid hook call. Hooks can only be called inside the body of a function component" occurs for multiple reasons:

- having mismatching versions of react and react-dom;
- having multiple versions of the react package in the same project;
- calling a component as a function, e.g. App() instead of <App />.

Open your terminal in your project's root directory and update the versions of your react and react-dom packages to make sure the versions match and you aren't using an outdated version.

# 👇️ with NPM
npm install react@latest react-dom@latest

# 👇️ ONLY if you use TypeScript
npm install --save-dev @types/react@latest @types/react-dom@latest

# ----------------------------------------------

# 👇️ with YARN
yarn add react@latest react-dom@latest

# 👇️ ONLY if you use TypeScript
yarn add @types/react@latest @types/react-dom@latest --dev

If the error persists, try to delete your node_modules and package-lock.json (not package.json) files, re-run npm install and restart your IDE. The error is often caused by having multiple versions of react in the same project.

# 👇️ delete node_modules and package-lock.json
rm -rf node_modules
rm -f package-lock.json

# 👇️ clean npm cache
npm cache clean --force

npm install

Here is another example of how the error occurs.

import {useState} from 'react';

// 👇️ Don't call components as functions 👇️
App();

export default function App() {
  // ⛔️ Warning: Invalid hook call. Hooks can only be
  // called inside the body of a function component.
  const [count, setCount] = useState(0);

  return (
    <div>
      <h1>Hello world</h1>
    </div>
  );
}

The problem above is that we called the App component as a function. You should only use components with the JSX syntax, e.g. <App country="Austria" age="30" />, and not App({country: 'Austria', age: 30}). If you have a class, convert it to a function to be able to use hooks.

Here is an example of how the error is caused in a function that is neither a component nor a custom hook.
```javascript
import {useState, useEffect} from 'react';

// 👇️ not a component nor a custom hook,
// so it can't use hooks
function counter() {
  const [count, setCount] = useState(0);

  useEffect(() => {
    console.log('hello world');
  }, []);
}
```

The counter function starts with a lowercase c, so React doesn't consider it a component, because all component names must start with a capital letter. It also isn't a custom hook, because its name doesn't start with use, e.g. useCounter.

We are only able to use hooks inside of function components or custom hooks, so one way to be able to use hooks is to rename counter to useCounter.

```javascript
import {useState, useEffect} from 'react';

function useCounter() {
  const [count, setCount] = useState(0);

  useEffect(() => {
    console.log('hello world');
  }, []);
}
```

React now recognizes useCounter as a custom hook, so it allows us to use hooks inside of it, as the documentation states:
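As an aside, this naming convention is also how React's lint tooling tells components and hooks apart. A simplified sketch of that classification (hypothetical helpers, not React's actual implementation):

```javascript
// Hypothetical helpers mimicking how hooks-lint rules classify names.
// Components must start with an uppercase letter; custom hooks must
// start with "use" followed by an uppercase letter.
function isComponentName(name) {
  return /^[A-Z]/.test(name);
}

function isHookName(name) {
  return /^use[A-Z]/.test(name);
}

console.log(isComponentName('counter')); // false: not a component
console.log(isHookName('useCounter'));   // true: a custom hook
```

Under these rules, counter is neither a component nor a hook, which is exactly why calling useState inside it is rejected.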
https://bobbyhadz.com/blog/react-invalid-hook-call-hooks-can-only-be-called-inside-body
puma_pay 3.0.6

puma_pay: ^3.0.6

puma_pay #

This package allows your app to manage in-app products and subscriptions.

Installation #

Add the following in your pubspec.yaml file under dependencies:

```yaml
puma_pay: ^2.0.1
```

Add the Puma Pay middleware in your app (src/main.dart):

```dart
final store = Store<MainState>(
  mainReducer,
  initialState: initialState,
  middleware: [
    PumaPayMiddleware<MainState>().createMiddleware(),
    mainMiddleware(),
  ],
);
```

Add the Puma Pay reducer in your app's main reducer (src/redux/mainReducer.dart):

```dart
final mainReducer = combineReducers<MainState>([
  kitReducers,
]);

MainState kitReducers(MainState state, action) {
  return state.rebuild((b) => b
    ..pumaPayState.replace(pumaPayReducer(state.pumaPayState, action))
    ..accountKitState
        .replace(accountKitReducer(state.accountKitState, action)));
}
```

Add PumaPayState in your mainState (src/redux/mainState.dart):

```dart
PumaPayState get pumaPayState;
```

Don't forget to init pumaPayState in the mainState initial() function:

```dart
static MainState initial() {
  final builder = MainStateBuilder();
  builder.pumaPayState = PumaPayState.initial().toBuilder();
  return new MainState((b) => b
    ..pumaPayState.replace(PumaPayState.initial())
    // other stuff
  );
}
```

Getting Started #

1. Init the package #

In your src/app.dart you should init Puma Pay (using the StoreConnector onInitialBuild handler, for example). You just need to dispatch the action InitPayAction. The InitPayAction takes 4 parameters:

productsID: List<String> — IDs of your products (make sure the IDs are the same on iOS and Android).

verifyEndpoint: String? — All restored products should be verified by Apple or Google, because we cannot trust the restored product list provided by the package in-app-purchase. That is why your API should implement an endpoint that queries /verifyReceipt for Apple or GET purchases.subscriptions for Google.
Your endpoint must be a POST that accepts a body like this:

```dart
{
  "productID": STRING,
  "storeName": "android" || "ios",
  "data": STRING
}
```

appleAppStorePassword: String? — If you don't use an endpoint on your backend, you can still query the Apple /verifyReceipt endpoint directly, even if it is not recommended. In that case, this field is required to perform the API call.

appStoreSandbox: bool? — If you don't use an endpoint on your backend and you want to use the App Store sandbox.

As soon as you dispatch the action, the package will init and fetch all products listed in productsID from the different stores.

2. PumaPayState #

Your MainState includes a PumaPayState that contains all you need:

4 flags:

storeAvailable [boolean]: Whether the device store (App Store or Google Play) is available or not.

productsLoading [boolean]: Describes whether the app is still fetching the full products from the stores.

restoringPurchase [boolean]: Describes whether the app is restoring all the purchases.

purchasePending [boolean]: Describes whether a product is being purchased.

1 list:

products [Product]: list of fetched products. A Product has the following shape:

```dart
String get id;
String get title;
String get description;
String get price;
double get rawPrice;
String get currencySymbol;
String get currencyCode;
BuiltList<PurchaseDetail> get purchaseDetails;
```

The purchaseDetails list is filled when you perform a RestorePurchaseAction, which puts all related items in this list. The PurchaseDetail object has this shape:

```dart
String? get purchaseID;
String get productID;
String? get transactionDate;
String get status;
bool? get valid;
```

Note that the valid boolean is nullable. You should interpret its value like this:

true: Your backend has verified the integrity of the purchase. You can safely deliver the related content to the end user.

false: Your backend has rejected the integrity of the purchase (it could be expired or fraudulent). You should not deliver the content.
null: If you have a backend, it is still verifying the purchase (a response is expected soon); you should wait before delivering the content. If you DON'T have a backend, you can consider the purchase valid (though its authenticity cannot be guaranteed).

3. Actions #

GetProductsAction: Query the stores to get the Product objects.

RestorePurchaseAction: Restore the products and query your backend, if any, to verify their integrity.

PurchaseProductAction: Purchase a product (productID needed).

4. Extensions #

extensions/extProduct.dart: This extension lets you know right away whether a given product is valid:

```dart
import 'package:puma_pay/extensions/extProduct.dart';

...

final myProduct = store.state.pumaPayState.products.first;
if (myProduct.isValid()) {
  // do stuff
}
```

5. Debug Page #

The package comes with a complete debug page: PumaPayDebugContainer. You can add this container to your app and view the important parts of PumaPayState.
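As an aside, the rules above for the nullable valid flag boil down to a small decision table. The sketch below is purely illustrative and language-neutral (JavaScript here; shouldDeliver is a made-up helper name, not part of puma_pay):

```javascript
// Hypothetical helper encoding the README's rules for `valid`:
// true  -> backend verified the purchase: deliver the content
// false -> backend rejected it (expired/fraudulent): don't deliver
// null  -> still verifying if a backend exists (wait);
//          otherwise trust the store and deliver
function shouldDeliver(valid, hasBackend) {
  if (valid === true) return 'deliver';
  if (valid === false) return 'reject';
  return hasBackend ? 'wait' : 'deliver';
}

console.log(shouldDeliver(true, true));   // 'deliver'
console.log(shouldDeliver(false, true));  // 'reject'
console.log(shouldDeliver(null, true));   // 'wait'
console.log(shouldDeliver(null, false));  // 'deliver'
```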
https://pub.dev/packages/puma_pay
Understanding the BackgroundWorker Component

Let's switch to the code-behind of the Windows form and do the coding. First, import the following namespace:

```vb
Imports System.ComponentModel
```

When the Start button is clicked, you first initialize some of the controls on the form. You also change the cursor to an hourglass (Cursors.WaitCursor) so that the user knows the application is working. You then get the BackgroundWorker component to spin off a separate thread using the RunWorkerAsync() method. You pass the number entered by the user as the parameter for this method:

```vb
Private Sub btnStart_Click( _
   ByVal sender As System.Object, _
   ByVal e As System.EventArgs) Handles btnStart.Click

   lblResult.Text = ""
   btnCancel.Enabled = True
   btnStart.Enabled = False
   ProgressBar1.Value = 0
   Me.Cursor = Cursors.WaitCursor
   BackgroundWorker1.RunWorkerAsync(txtNum.Text)
End Sub
```

The DoWork event of the BackgroundWorker component will invoke the SumNumbers() function (which I will define next) in a separate thread. This event (DoWork) is fired when you call the RunWorkerAsync() method (as was done in the previous step).

```vb
Private Sub BackgroundWorker1_DoWork( _
   ByVal sender As System.Object, _
   ByVal e As System.ComponentModel.DoWorkEventArgs) _
   Handles BackgroundWorker1.DoWork

   'This method will run on a thread other than the UI thread.
   'Be sure not to manipulate any Windows Forms controls created
   'on the UI thread from this method.
   Dim worker As System.ComponentModel.BackgroundWorker = _
      CType(sender, System.ComponentModel.BackgroundWorker)
   e.Result = SumNumbers(CType(e.Argument, Double), worker, e)
End Sub
```

The SumNumbers() function basically sums up all the numbers from 0 to the number specified. It takes in three arguments: the number to sum up to, the BackgroundWorker, and the DoWorkEventArgs.
Note that within the For loop, you check whether the user has clicked the Cancel button (the event will be defined later in this article) by checking the value of the CancellationPending property. If the user has cancelled the process, set e.Cancel to True. For every ten iterations, I also calculate the progress completed so far. If there is progress (when the current progress percentage is greater than the last one recorded), I update the progress bar by calling the ReportProgress() method of the BackgroundWorker component. You should not call the ReportProgress() method unnecessarily, as frequent calls to update the progress bar will freeze the UI of your application.

It is important to note that in this method (which was invoked by the DoWork event), you cannot directly access the Windows controls, as they are not thread-safe. Trying to do so will trigger a runtime error, a useful feature new in Visual Studio 2005.

```vb
Function SumNumbers( _
   ByVal number As Double, _
   ByVal worker As System.ComponentModel.BackgroundWorker, _
   ByVal e As DoWorkEventArgs) As Double

   Dim lastPercent As Integer = 0
   Dim sum As Double = 0
   For i As Double = 0 To number
      '---check if user cancelled the process
      If worker.CancellationPending = True Then
         e.Cancel = True
      Else
         sum += i
         If i Mod 10 = 0 Then
            Dim percentDone As Integer = i / number * 100
            '---update the progress bar if there is a change
            If percentDone > lastPercent Then
               worker.ReportProgress(percentDone)
               lastPercent = percentDone
            End If
         End If
      End If
   Next
   Return sum
End Function
```

The ProgressChanged event is invoked whenever the ReportProgress() method is called.
In this case, the ProgressChanged event handler updates the progress bar:

```vb
Private Sub backgroundWorker1_ProgressChanged( _
   ByVal sender As Object, ByVal e As ProgressChangedEventArgs) _
   Handles BackgroundWorker1.ProgressChanged

   '---updates the progress bar
   ProgressBar1.Value = e.ProgressPercentage
End Sub
```

The RunWorkerCompleted event is fired when the thread (SumNumbers(), in this case) has completed running. Here you print the result accordingly and change the cursor back to the default:

```vb
Private Sub BackgroundWorker1_RunWorkerCompleted( _
   ByVal sender As Object, _
   ByVal e As System.ComponentModel.RunWorkerCompletedEventArgs) _
   Handles BackgroundWorker1.RunWorkerCompleted

   If Not (e.Error Is Nothing) Then
      MsgBox(e.Error.Message)
   ElseIf e.Cancelled Then
      MsgBox("Cancelled")
   Else
      lblResult.Text = "Sum of 1 to " & _
         txtNum.Text & " is " & e.Result
   End If
   btnStart.Enabled = True
   btnCancel.Enabled = False
   Me.Cursor = Cursors.Default
End Sub
```

Finally, when the user clicks the Cancel button, you cancel the process by calling the CancelAsync() method:

```vb
Private Sub btnCancel_Click( _
   ByVal sender As System.Object, _
   ByVal e As System.EventArgs) _
   Handles btnCancel.Click

   ' Cancel the asynchronous operation.
   BackgroundWorker1.CancelAsync()
   btnCancel.Enabled = False
End Sub
```

To test the application, press F5, enter a large number (say, 9999999), and click the Start button. You should see the progress bar updating and the cursor changed to an hourglass. When the process is completed, the result will be printed in the Label control (see Figure 4).

Figure 4. Testing the application

Wei-Meng Lee (Microsoft MVP) is a technologist and founder of Developer Learning Solutions, a technology company specializing in hands-on training on the latest Microsoft technologies.
http://archive.oreilly.com/pub/a/dotnet/2005/07/25/backgroundworker.html?page=2
This is the mail archive of the cygwin mailing list for the Cygwin project.

On Thu, May 22, 2008 at 05:48:14PM -0400, Mike Marchywka wrote:
>> From: dave.korn@artimi.com
>> Date: Thu, 22 May 2008 16:15:16 +0100
>
>Thanks. If you are interested, let me try to give a chronology of follow-on events as best as I can reconstruct:
>
>1:::::::::::::::::::::::::::::::::::::::::::
>> Wrong place to look, surely?
>
>Oddly enough, that helped :)
>OK, so I copied sockio.h and also needed ioccom.h. Just to be clear, my install is messed up so I'm not sure this problem applies to anyone else, but I'm just being complete in case it matters.

As the sys/linux above implies, this is linux-specific code. It has no bearing for cygwin.

>2:::::::::::::::::::::::::::::::::::::::::::::::
>
>Then, I encountered a define problem. This seems to have occurred elsewhere, so I thought I would post the issue with the definition of ssize_t:
>
>This turned out to be flagged with __CYGWIN__ and also easy to fix in the makefile.
>
>3::::::::::::::::::::::::::::::::::::::::::::::::::::::
>
>But, then I had a missing definition for IFF_POINTOPOINT, which does seem to be an issue with cygwin. According to this, the symbol should be defined in if.h:
>
>my if.h needed to be modified,
>
>#define IFF_LOOPBACK 0x8 /* is a loopback net */
>//
>#define IFF_POINTOPOINT 0x10 /* mjm, */
>//
>#define IFF_NOTRAILERS 0x20 /* avoid use of trailers */
>
>4::::::::::::::::::::::::::::::::::::::::::::
>Next, a problem with INET6_ADDRSTRLEN. I found this and implemented the suggestion,
>
>#if defined(__CYGWIN__)
>#define INET_ADDRSTRLEN 16
>#define INET6_ADDRSTRLEN 46
>#endif /* __CYGWIN__ */
>
>which again seemed to work, but I have no idea what other problems may turn up if this isn't right.
>
>5::::::::::::::::::::::::::::::::::::::::::::
>
>Then I finally encountered a big link problem and determined that some pieces were built with and without -mno-cygwin.
>I was finally able to stop it from complaining by grepping all the libraries for the missing symbols and just randomly adding stuff but, duh, the executable wouldn't run. I can probably figure this out but I've never built anything with -mno-cygwin before so it will probably take a while.

Sorry, but you're very confused by this point. Compiling with -mno-cygwin means that you can't use ANY cygwin headers. You won't be able to create an executable which works this way.

When Dave said "...cygwin doesn't supply or support <sys/sockio.h>" he wasn't kidding. It isn't supplied, and copying it from a random source and randomly making changes is not going to work.

cgf
https://cygwin.com/ml/cygwin/2008-05/msg00440.html
GWT in Action

Michael J. Ross writes. Read on for the rest of Michael's review. Written by Robert Hanson and Adam Tacy, this book was published by Manning Publications on 5 June 2007, under the ISBN 1933988231. Michael J. Ross is a Web developer, freelance writer, and the editor of PristinePlanet.com's free newsletter. You can purchase GWT in Action from amazon.com. Slashdot welcomes readers' book reviews -- to see your own review here, read the book review guidelines, then visit the submission page. The more I learn about JavaScript... (Score:5, Insightful) I'm not really sure I see the point of GWT, given that JavaScript is actually a more powerful language at this point. I'm almost tempted to suggest that someone build a webserver platform around a JavaScript interpreter. Well, almost. The problem with JavaScript is that it's horribly slow to execute. But then, you still have that problem, even if it's JavaScript generated from Java code. Re: (Score:2, Funny) duh... IIS and asp @ Language=JavaScript Re: (Score:3, Interesting) Depends on your engine. I've put some thought into the same sort of server-side Javascript engine as you have. Someone else mentioned the Netscape Enterprise Server. That was... not so great. I was thinking more along the lines of building on top of a Java Servlet Engine. The Mozilla Rhino [mozilla.org] Javascript Engine actually compiles the JS down to Java byte code. That wouldn't be all that special, except that the JVM then JITs that to native code. Wh Re: (Score:2) How fast is it, really? Compiling down to Java bytecode doesn't necessarily make it faster. Consider a Bash script -- for most of what people use Bash for, you're not going to get any significant speed improvement out of "compiling" it, if you could. I think JavaScript could be very fast Re: (Score:3, Interesting) But then, you still have that problem, even if it's JavaScript generated from Java code. That's not really what's going on.
GUI stuff happens in Javascript. Anything really complicated gets sent over to Java. That's why it's called an AJAX framework, not a Javascript framework. I like Javascript, but it has the same problem as PHP: no namespaces. Using it as a real language would cause the same problems you see there: security issues, hard-to-read code, difficulty refactoring, interoperability problems, etc. You can actually see those problems and the various mechanisms to overcome them in Re: (Score:2) You can work around this by writing object-oriented Javascript, like this: The only problem you run into is if you want to call a member function as a result of an event. I haven't found a good way to do this without using some sort of global varia Re: (Score:2) this and that (Score:2) But look up Douglas Crockford. He had one way of solving this: var that = this. Since Javascript supports closures, you can then refer to that, even in nested functions in which "this" gets overwritten. Re: (Score:2) PHP has classes... (Score:2) I mean, there are plenty of things to hate about PHP, and even object-oriented PHP, but you can no longer complain that it doesn't have namespaces. And Javascript is just far too flexible not to let you hack together namespaces on your own. Watch this (sorry about the lack of indentations; stupid Slashdot): Re: (Score:2) - Differences in browser object models make it counter-intuitive to write cross-browser JavaScript code; GWT can take care of these differences for you. Re: (Score:2) This may be more of a DOM issue than a javascript issue, but I find a lot of situations where you don't get an error at all. It just doesn't do what you instructed. Sometimes execution just halts in that method and sometimes it doesn't. Also, don't get me started on javascript's this variable, which is a complete joke. Re: (Score:2) You want a good debugger. Or, hell, just look at Firefox's error log -- it'll show you errors in HTML and CSS, also.
It's completely easy to work around, also. Would you care to explain what ab Re: (Score:2) So what? Most of the errors I catch at compile time in languages like Java are things that would not be errors in languages like Javascript. And in any case, what is so magically better about compile time? So can Dojo, the Yahoo UI library, and so on. I understand the Re: (Score:2) Netscape did that, back in the "four days", google up "server-side javascript". > Well, almost. The problem with JavaScript is that it's horribly slow to execute. Bullshit. The types of JOBS people usually try to do with JavaScript may be slow, however. Or the native methods that the javascript programs are using might be... But that's a far cry from making it a slow language. It's not substantially s Re: (Score:2) That's just goofy. Spidermonkey is quite a bit slower than Rhino. They both contain runtime compilers, but Spidermonkey has its own interpreter. Rhino compiles down to Java bytecode which is then JITed by the JVM. If you want performance, use Rhino! Re: (Score:2) Re: (Score:2) Now that IS interesting, I wasn't aware of that. Thanks! ATM, Spidermonkey seems to perform VERY well for my needs. I have used NJS in the past and been satisfied with it (from a speed perspective) as well. Of course, when I profile my javascript code... only a VERY SMALL amount of time is spent in the JavaScript interpreter. Wall time is mostly consumed doing pesky things like I/O. So I tend to conce Re: (Score:2) I'm not really sure what you're asking here, so I'll just give a generic answer. Rhino compiles scripts to Java classes based on Rhino's hashtable-like data-model classes. Java classes can be passed to Rhino either by extending/implementing the data-model classes [mozilla.org], or by using the Livescript-like Javascript APIs [mozilla.org]. Native libraries are incompatible with Java save for when they are mapped to a class through JNI. So if you want to link in a library Half-true.
(Score:2) In practice, the more powerful the language is, and the simpler a naive interpreter is, the harder it can be to make a good compiler. Consider JavaScript -- you have to carry around enough baggage to be able to support random 'eval' statements. Or Perl -- common wisdom dictates creating classes as a hash table, because there's no such thing as actual object members in Perl. Technically, you could make a P Re: (Score:2) The first thing I'd like are some of my favorite features from Common Lisp, like first class functions, ability to do procedural, functional, and object oriented programming with the same language, dynamic typing, macros, closures, and lexical scoping. Having a C-like syntax would be good for people who are used to C or Perl and don't want to learn about s-expressions. Automatic memory management is a must. So I Re: (Score:2) > But if you ever use it while only targeting a single browser, it's a dream to work with, > and all of the annoyances go away. I use JavaScript frequently, and have for years. I don't write much for the browser any more, but I often use JavaScript for general purpose projects. I'm a hardcore C hacker, know enough LISP to write macros for emacs, and I totally agree with your assessment. About the only thing I RE Re: (Score:2) I think the coolest thing potentially is being able to send the commands to a real unix server and then pipe the output back to the Javascript shell. Obviously this has potentially serious security implications, but with a secure setup could be extrem Inheritance... (Score:2) Re: (Score:2) Funny thing is, so does, say, Perl (probably to an even greater degree... how do you do macros in Javascript? I know Perl allows script pre-processing modules). However, both languages suffer from a flaw that I think makes it difficult to write large scale applications: weak typing. Dynamic typing, that kicks ass, absolutely.
But *weak* typing creates a whole class of errors easily avoided with a strong type system, just so the program Re: (Score:2) Um, you understand that GWT is not a language, it is a framework, right? Java is the language. And JavaScript is most CERTAINLY not more powerful than Java. Don't give in to the temptation because it would make you look quite foolish. -matthew Re: (Score:2) Really? What features are missing from Javascript that are in Java? Because I could come up with a very long list of things that are nowhere to be found in Java, but are actually cleverly hid in Javascript. Only because it's been done already. However, you just claimed Java is Re: (Score:2) I'm not really sure I see the point of GWT, given that JavaScript is actually a more powerful language at this point. I think "the point of GWT" is clearly stated on their front page [google.com]. The ability to write your web application more like a real application. The ability to share client and server side code. The possibility of using a real debugger. Support for browser history (haven't seen this for AJAX before), just to name a few. Re: (Score:2) Millions of dollars and 5 years of Java IDE development by some of the best IDE developers in the world. Also the Hotspot JIT Compiler and class libraries. There are many 'languages' that Java developers would probably prefer to use if they had had the same massive cross-industry investment as Java - like Ruby, Python or Smalltalk. Matt Re: (Score:3, Informative) I agree with you that Javascript is a nicer language than Java and more powerful to boot, but I'm using GWT for one big reason: dead code elimination. The GWT compiler takes advantage of the static analysis possibilities in Java and the closed-world nature of a web page and produces dramatically optimized Javascript. Th
I imagine that we could do better, were the same thing attempted today. Re: (Score:3, Informative) Not. Java doesn't have closures. Java functions aren't first-class objects. JavaScript, OTOH, isn't typed. There's loads of differences between them. All I'd want from JavaScript is more performance, namespaces, and possibly some very basic native OO concepts so that I can define an inherited class without three or four declarations and maybe have private and static members. Re: (Score:2) -matthew Re: (Score:2) I agree, Javascript is fine as a client side language only. My point wasn't that we need to turn it into a server side language (although this is already being done [wikipedia.org] in a proof-of-concepty way, so it's sort of too late to make this point), but rather that improving Javascript isn't going to magically turn it into Java. Re: (Score:3, Insightful) more info here [mozilla.org] Re: (Score:2) Not needed in a dynamically-typed language. See above. That's a property of the interpreter, not the language. Really? So Java has anonymous subroutines, anonymous classes, classes/objects/functions/methods all as first-class variables, closures, dynamic typing, the eval() keyword, and the ability to add stuff to ex Re: (Score:2) Would you care to explain? I've used all of these languages to some degree, and Javascript seems no better or worse than the others (well, depending on your preferences... Python and Ruby are both dynamically, strongly typed, and Javascript is dynamically, weakly typed. If you're prejudiced against the latter, then of course you'll prefer the Re: (Score:2) I'll bet you just don't know how to work with it. Go read Douglas Crockford. There are a few quirks, here and there, that I really don't like, but they are things I can live with. As the sibling asked, could you be specific? Re: (Score:2) I suppose it depends on whether the price of orange juice directly affects the outcome of soccer games played in the Miami Orange Bowl. 
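To make the thread's closure debate concrete, here is a minimal JavaScript sketch of the two features under discussion, first-class functions and closures (illustrative only; makeAdder is a made-up example, not from any comment above):

```javascript
// Functions are values: makeAdder returns a new function that
// closes over its argument n (a closure). Java of that era offered
// neither of these directly, which is the commenters' point.
function makeAdder(n) {
  return function (x) {
    return x + n;
  };
}

var add5 = makeAdder(5);  // a function stored in a variable
var add2 = makeAdder(2);

console.log(add5(3)); // 8
console.log(add2(3)); // 5
```

The same closure mechanism is what makes the "var that = this" idiom mentioned earlier in the thread work: a nested function can always reach variables captured from its enclosing scope.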
Re: (Score:2) In other words, it does something like Dojo, Yahoo Web Toolkit, and so on, only in Java instead of Javascript. And Java is, without question, a more restrictive and less powerful language than Javascript. Re: (Score:2) Re: (Score:2) Re: (Score:2) I must be missing what, exactly, you're trying to do here -- it's really easy to just search for a method name, for one thing. If it's more complicated than that, I suppose you could examine the call stack from a debugger... Re: (Score:2) Or you can set up methods with types, if the method breaks, callees break too. So it scales very well in terms of number of developers. Now let's say you have your javascript method that 10 developers call in 20 different ways. Let's say you change that method signature. You have just hosed the code base and pissed off 10 developers, or you can spend 10-30 mins updatin I Still don't know what GWT is (Score:5, Informative) It's so simple, but... (Score:2, Troll) Hell. No. Re: (Score:3, Informative) You don't actually write any SOAP code, but rather use Java Remoting to call across the wire, getting a Java object back that you can manipulate. The GWT compiler translates that into a XML request and writes all the serialization/deserialization code and so on. In fact it doesn't seem to be able to call an XML web service Re: (Score:2) Re: (Score:2) Beyond hope (Score:5, Insightful) Come on, doesn't anybody read these? Re: (Score:2) Re:Beyond hope (Score:5, Funny) Just when you need mod points (Score:2) Re: (Score:2) Funny, that, I am using both as well, and I feel the same thing. The other way around. VS is way behind Eclipse on some features, and a lot more on others. VS 2005 now has acceptable support for operations on the abstract syntax tree, but nothing really powerful. Of course, VS has more functionality integrated. 
But Eclipse has its vast number of functional plug Re: (Score:2) Re: (Score:2) GWT *and* Java (Score:5, Informative) Before somebody grills me, the version of WindowBuilder Pro that I am using is a bit unstable and crashes Eclipse now and then. Lots of memory is also recommended (then again, if you are a developer, you need lots of memory anyway). Re: (Score:3, Informative) Nah, didn't think so Why should GWT be on my radar screen? (Score:3, Insightful) Re: (Score:2) It is a servlet, so it works with servlet containers. Like tomcat, resin, jetty, or those big ol JavaEE containers. I'm not sure that tells you anything about why to use it though. Oops. (Score:2) I want to write desktop apps with JS/GWT/whatever (Score:3, Interesting) JS is a great language and GWT is a great tool, especially the hosted development environment. But it will never reach its potential until it is a general purpose application programming language. Very nice toolkit with some problems (Score:4, Informative) The content on Google's GWT site [google.com] almost speaks for itself concerning the power of it, so I'll elaborate on some negative aspects I've encountered. I went to explore the possibility of a non-servlet based backend and very quickly encountered problems that are not really GWT's fault but a side effect of having a hosted environment: the browser is based on Mozilla and thus inherits the security restrictions of it. I could not make RPC calls to my backend web server because Mozilla disallows cross-domain access. I spent a lot of time trying to circumvent this, and did find some work-arounds but nothing that worked to my satisfaction (too cumbersome). GWT should offer a modified Mozilla with these restrictions removed, or even better, adopting Flash's excellent practise of looking for a crossdomain.xml [crossdomainxml.org] file in the root of the target webserver (to see whether access is allowed or not). 
Also, developing on an AMD64 based Linux I discovered that the hosting environment just doesn't work running from a 64bit JVM. Setting up a 32bit JVM isn't that difficult, but it's not a good solution (However, I talked to a GWT developer on IRC who claimed this issue was high on their priority list). Lastly, GWT is a nice environment, but it's still a bit young. It seems stable enough, but if you engage in a large GWT based project you may find yourself doing a lot of low-level client-side code unless their limited palette of widgets and other client-side features completely cover your needs Re: (Score:2) Distributing a specific web client to make it possible to view (some) web applications? That's not a good idea. Maybe for intranet use, but people won't go for a specific web client just to view some GWT based websites. The idea of a crossdomain.xml for use within JavaScri Google DB/LDAP API? (Score:2) Shameless plug - chess board diagram composer (Score:4, Interesting) Just a few hours ago I finished a small, mostly-for-fun project in GWT, and now I see a GWT-related story on slashdot. Surely it's not a coincidence and therefore I must pimp my project: a chess board diagram composer [jinchess.com]. Re: (Score:2) GWT in Python and Ruby (Score:3, Informative) So I want to give oyou two pointers to projects that could need help to transfer the idea of GWT from Java to Python and Ruby: Python: pyjamas - build AJAX apps in Python (like Google did for Java) [pyworks.org]. Ruby: Blog Entry from Michael Neumann, who tries to port GWT to Ruby [ntecs.de]. The Ruby stuff is in the very early stages Bye egghat. Re:Wow (Score:5, Funny) Re:Wow (Score:4, Interesting) Is it just me or is anyone else glad to see a review on Slashdot without a chapter by chapter summary? One of the most pointless, pedantic mistakes in book reviews is to summarize each chapter. Hoozah for the writer! Re:Wow (Score:4, Funny) Don't Use Google Web Toolkit. Worse than Wicket? 
(Score:4, Interesting) I did an evaluation of it. I kind of liked it, but in general don't like "lets make a servlet environment pretend its an applet", so I didn't endorse. Now, after months of using Wicket, I wish I had pimped for it more. Seriously. If you're gonna pretend it's an applet, then REALLY do it like GWT does, fake JVM and all. This Wicket stuff.... ugh. Keeping the html view and java in synch for anything complex is SO IRRITATING. Seriously, after years of Struts and stuff like Wicket, I don't see any real advantage over "Model 2+" (Servlet/JSP pairs) that I started with in 2001. There is a benefit to simplicity and writing HTML in HTML and writing Javascript in Javascript that over-engineers who are, frankly, a little OO-obsessed just don't get. Maybe years of Perl CGI has bent my brain, but the HTTP Request/Response paradigm just doesn't seem so awful that it needs to be (leaky) abstracted away. Re: (Score:2) Re:Worse than Wicket? (Score:4, Insightful) Maybe especially core Wicket, which isn't trying to do much but wrap HTML bits as objects. What really is the benefit of that? Nothing seems simpler and it often seems to be getting in the way-- maybe it's a mental gap problem, but often I end up having to put placeholder HTML in the page, and the set visibility = null. And the Ajax support is so-so. I mean, I think the fact that the static Wicket homepages often produce session timeout exceptions is pretty damning that it encourages poor web programming. Re: (Score:3, Interesting) Re: (Score:3, Informative) You're obviously missing the point indeed. What it aims to do besides providing abstractions is facilitate a stateless programming model that is safe by default and doesn't leak implementation details of widgets all over the place. Re: (Score:2) I haven't seen leaky PKs be too much of an issue. A lot of keys are just local identifiers, or your API should be doing requisite security checks.
Re: (Score:2, Insightful) And now Wicket In Action [manning.com] is available (partly) via early access. But agreed, documentation could be better, which holds true for most open source projects that depend on unpaid volunteers. But it is something you'll always have to be aware of. The application I'm working on, for instance, has a very fine graine Re: (Score:2) Sounds like your problems with Wicket are due to your total lack of knowledge about the framework. Hint: there's nothing called "visibility" in Wicket. There Re: (Score:2) Read my post (I think as kirkjerk) in the Apache mailing list. I know setVisibility takes a boolean. I've been using Wicket solidly for 6 weeks. Re:Worse than Wicket? (Score:5, Insightful) I couldn't agree more. I like my code structured, clean and simple, and layering abstraction upon abstraction is not a good way to achieve that. I'm also a big proponent of writing to the language's strengths. JavaScript has some annoying weaknesses (lack of namespaces), but things like its object prototyping are very powerful, and it seems silly to give that up. For all the browser inconsistencies, HTML + CSS is actually quite a nice layout tool. Getting them all to work together requires some organizational skills, but it pays off in ease of use and a higher level of control. These are tools a web developer really should be comfortable with anyway, so while I can see the utility of something like GWT if you're not, to really excel at it, you should be knowledgeable about them. Re: (Score:2, Interesting) Sometimes I worry that I come up with my philosophy about languages to suit my personal gripes and history, and that ultimately that limits me as a developer, but... Anytime you borrow someone else's toolkit that adds some kind of organization/abstraction over an existing technology... well, either you're losing bits of the functionality of the lower level, or what you're learning is about as complex as the lower level.
My strong personal preference is to write your OWN, app-specific organization. The cou Re:Worse than Wicket? (Score:4, Insightful) Re: (Score:2) Especially because usually you're just using a subset of the functionality, but to do it right, you have to "pay" for (in terms of learning complexity) much more than that. That's one reason I love APIs for the heavy lifting, but having a core of the Lowest Common Denominator of control code. For Java, this was Servlet/JSP pairs. Re: (Score:2) A relatively simple example - I've written several apps that produce PDF and Excel output - iText and Apache POI have been invaluable. Sure, they've been a pain at times, but they have been more than worth the trouble. They've saved me time and allowed me to produce Re: (Score:2) Re: (Score:2) Here's a funny thing - I go through the same questions, wondering if I'm holding on to old biases that have gone past their best-before date. Every couple of years I'll take a look around to see if I'm missing something, and invariably come to the conclusion that things Aren't Quite There Yet. Then I'll go refactor some of my own light-weight libraries and move on. Which makes me wonder how subject I am to NIH syndrome. But I get things done. The punch line is that I did cut my teeth on CGI (perl 4) and a ha Re: (Score:2) I'll just say that your criticisms of Wicket are so entirely opposite of my experience with the framework that it's frustrating and somewhat confusing to read your posts. So you do seem to be advocating the "write your own framework" approach, which I've already addressed to a certain degree. I'll add/reiterate that i Re: (Score:2) And trust me, I'm just as frustrated and confused by people who adore Wicket and say it has a great power/conceptual weight ratio! Especially because I'm concerned it points me to being a curmudgeon before my time, unwilling or unable to learn new things.
The last time I "rolled my own framework", which was just some tools to make "Model Two Plus" (servlets and JSP pairs; essentially MVC w/ lot Re: (Score:2) Well said! Same applies to SQL, and the clumsiness of object-relational mapping. SQL selects are awesome - why ruin them? Same with generating a printed document in Postscript vs. using an API; Postscript is text, and is easily mani Re: (Score:3, Insightful) You can't really write a web application using a (single) language, though; you need to have some degree of expertise in HTML and CSS AND Javascript AND whatever language(s) your server-side code is written in... I can see some appeal in toolkits that unify everything into just one language, for the developer's convenience. It's still pretty much impossible that the output of such a toolkit will be anywhere near as elegant or efficient as nativ Re: (Score:2) Oh, yeah, that's pretty much what I was trying to get at (unless I misunderstand). HTML and CSS and javascript and whatever server-side language you're using are all different beasts, and I personally prefer to meet each on their own terms. It might take a bigger learning investment, but the payoff is understanding everything that's going on, and being able to manipulate it at a fine-grained level. And you can take advantage of the strengths inherent in each, while occasionally offsetting their weaknesses w Re: (Score:2) I also find it easier to import 3rd-party javascript/DHTML widgets when I don't have to sweat wrapping them in Java objects. Re: (Score:2) Re: (Score:2) At this point I'm still willing to give GWT the benefit of the doubt. I hear your point that you can take my argument to an illogical extreme (let's just toggle panel switches!) but that doesn't mean the opposite is true.
I wouldn't want to write an applet by having to control individual pixels either, but the fact is web apps have a pretty good toolset: DHTML, Form Elements, CSS, and in my judgement, unless you embrace a full-on JVM-ish approach like GWT, then the Re: (Score:2, Informative) Sorry to hear it doesn't work for you. A little OO-obsessed though? Because the framework takes a stance to actually try to provide a real OO programming model where other frameworks simpl Re: (Score:2) I felt the same way about GWT -- it's just too much of a black box for my taste. Having said that, it's probably a good idea to reevaluate it now that I use a lot more Ajax and am far more comfortable with JavaScript than I was a year or so ago. Regarding Wicket, however, I couldn't disagree more. I think Wicket offers by far the best balance of productivity and maintainability compared to any action or component-based Java web framework out there. There are, however, a couple of caveats. 1) Wicket's Re: (Score:2) Again, it's the background thing. HTTP Request/Response seems like the most natural thing in the world to me, in part from years of CGI. But hell, from years of using the Web! Click, or submit, and get a page back. Or, nowadays, click, and some part of the page goes and updates, a la Ajax. And the Session model, a simple Map to st Re: (Score:3, Informative) Firstly, I had loads of problems. However, they were mainly down to broken stuff in the RC builds (all fixed now that it's out of beta) and the fact that I was learning Java as I went along. Nothing that was a major problem was down to the architecture of GWT. If I'd written pure javascript instead, then doubtless it'd have taken far longer and worked less well. GWT rocks for anything where yo Re: (Score:3, Interesting)
https://slashdot.org/story/07/08/29/1418229/gwt-in-action
Years ago, I built a Chrome extension to import transactions into Mint. Mint hasn’t been updated in nearly a decade at this point, and once it stopped connecting to my bank for over two months I decided to call it quits and switch to LunchMoney, which is improved frequently and has a lot of neat developer-focused features. However, I had years of historical data in Mint and I didn’t want to lose it when I transitioned. Luckily, Mint allows you to export all of your transaction data as a CSV and LunchMoney has an API. I’ve spent some time brushing up on my JavaScript knowledge in the past, and have since used Flow (a TypeScript competitor) in my work, but I’ve heard great things about TypeScript and wanted to see how it compared. Building a simple importer tool like this in TypeScript seemed like a great learning project, especially since the official bindings for the API are written in TypeScript.

Deno vs Node

Deno looks cool. It uses V8, Rust, supports TypeScript natively, and seems to have an improved REPL experience. I started playing around with it, but it is not backwards compatible with Node/npm packages, which is a non-starter for me. It still looks pretty early in its development and adoption. I hope Deno matures and is more backwards compatible in the future!

Learning TypeScript

- You can’t run TypeScript directly via node (this is one of the big benefits of Deno). There are some workarounds, although they all add another layer of indirection, which is the primary downfall of the JavaScript ecosystem in my opinion. ts-node looks like the easiest solution for running TypeScript without a compilation step. npm i ts-node will enable you to execute TypeScript directly using npx ts-node the_script.ts. However, if you use ESM you can’t use ts-node. This is a known issue, and although there’s a workaround it’s ugly, and it feels easier just to have a watcher compile in the background and execute the raw JS.
- .d.ts files within repos define types on top of raw JS. The reason this is done is to allow a single package to support both standard JavaScript and TypeScript: when you are using TypeScript, the .js and .d.ts files are included in the TypeScript compilation process.
- Use npx tsc --init to set up an initial tsconfig.json. I turned off strict mode; it’s easier to learn a new typing system without hard mode enabled.
- Under the hood, the TypeScript compiler transpiles TypeScript into JavaScript. If you attempt to debug a TypeScript file with node inspect -r ts-node/register, the code will look different and it’ll be challenging to copy/paste snippets to debug your application interactively.
- The same applies to debugging in a GUI like VS Code. You can enable sourcemaps, but the debugger is not smart enough to map variables dynamically for you when inputting strings into the console session. This is a massive bummer for me: I’m a big fan of REPL-driven development. When I can’t copy/paste snippets of code between my editor and REPL, it really slows me down.
- Similar to most other languages with gradual typing (python, ruby, etc.), there are ‘community types’ for each package. TypeScript is very popular, so many/most packages include types within the package itself. The typing packages need to be added to package.json. There’s a nice utility to do this automatically for you.
- If you want to be really fancy you can overload npm i and run typesync automatically.
- VS Code has great support for TypeScript: you can start a watcher process which emits errors directly into your VS Code status bar via cmd+shift+b.
- If you make any changes to tsconfig.json you’ll need to restart your watcher process.
- You can define a function signature that dynamically changes based on the input. For instance, if you have a configuration object, you can change the output of the function based on the structure of that object.
Additionally, you can inline-assign an object to a type, which is a nice change from other languages (ruby, python).
- Example of inline type assignment: {download: true} as papaparse.ParseConfig<Object>. In this case, Object is an argument into the ParseConfig type and changes the type of the resulting return value. Very neat!
- I ran into Element implicitly has an 'any' type because expression of type 'string' can't be used to index type 'Object'. No index signature with a parameter of type 'string' was found on type 'Object'. The solution was typing a map/object/hash as theVariable: { [key: string]: any }. I couldn’t change the any type of the value without causing additional typing errors, since the returning function was typed as a simple Object return.
- There’s a great, free, extensive book on TypeScript development. One of the most interesting pieces of TypeScript is how fast it’s improving. Just take a look at the changelog. Even though JavaScript isn’t the most well-designed language, "One by one, they are fixing the issues, and now it is an excellent product." A language that has wide adoption will iterate its way to greatness. There’s a polish that only high throughput can bring to a product, and it’s clear that after a very long time JavaScript is finally getting a high level of polish.

Linting with ESLint, Code Formatting with Prettier

- ESLint looks like the most popular JavaScript linting tool. It has lots of plugins and huge community support.
- You can integrate ESLint with prettier, which looks like the most popular code formatting tool.
- VS Code couldn’t run ESLint after setting it up. It had trouble loading /node_modules/espree/dist/espree.cjs. Restarting VS Code fixed the problem.
Here’s the VS Code settings.json that auto-fixed ESLint issues on save:

{
  "[typescript]": {
    "editor.formatOnSave": true,
    "editor.codeActionsOnSave": {
      "source.fixAll.eslint": true
    }
  },
  "eslint.validate": ["javascript"]
}

And here’s the .eslintrc.json which allowed ESLint, prettier, and ESM to play well together:

{
  "env": { "browser": false, "es2020": true },
  "extends": ["standard", "plugin:prettier/recommended"],
  "parser": "@typescript-eslint/parser",
  "parserOptions": { "ecmaVersion": 2020, "sourceType": "module" },
  "plugins": ["@typescript-eslint"],
  "rules": {}
}

Module Loading

As with most things in JavaScript-land, the module definition ecosystem has a bunch of different community implementations/conventions. It’s challenging to determine what the latest-and-best way to handle module definitions is. This was a great overview, and I’ve summarized my learnings below.
- require() == commonjs == CJS. You can spot modules in this format by the module.exports in their package. This was originally designed for backend JavaScript code.
- AMD == Asynchronous Module Definition. You can spot packages in this style by define(['dep1', 'dep2'], function (dep1, dep2) { at the header of the index package. Designed for frontend components.
- UMD == Universal Module Definition. Designed to unify AMD + CJS so both backend and frontend code could import a package. The signature at the top of a UMD-packaged module is messy and basically checks for define, module.exports, etc.
- import == ESM == ES Modules. This is the latest-and-greatest module system officially baked into ES6. It has wide browser adoption at this point. This is most likely what you want to use. import requires that module mode in TypeScript (or other compilers) is not set to commonjs.
- If you use ESM, your transpiled JS code will look a lot less garbled, and you’ll still be able to use the VS Code debugger.
The big win here is that your import variable names will be consistent with your original source, which makes it much easier to work with a REPL.
- There are certain compatibility issues between ESM and the rest of the older package types. I didn’t dig into this, but buyer beware.
- It looks like experimental support for loading modules from a URL exists. I hope this gets baked into the runtime. There are downsides (major security risks), but it’s great for getting the initial version of something done. This was one of the features I thought was neat about Deno: you could write a script with a single JavaScript file without creating a mess of package*, tsconfig.json, etc. files in a new folder.
- is a great tool for loading a JS file from any repo on GitHub.
- You’ll get Cannot use import statement inside the Node.js REPL, alternatively use dynamic import if you try to import inside of a repl. This is a known limitation. The workaround (when in es2020 mode) is to use await import("./out/util.js").
- When importing a commonjs-formatted package, you’ll probably need to import specific exports via import {SpecificExport} from 'library'. However, if the thing you want to import is just the default export, you’ll run into issues and probably need to modify the core library. Here’s an example commit which fixed the issue in the LunchMoney library.
- When importing a local file, you need to specify the .js (not the .ts) extension in the import statement: import { readCSV, prettyPrintJSON } from "./util.js";

Package Management

- You can install a package directly from a GitHub reference: npm i lunch-money/lunch-money-js
- You can’t put comments in package.json, which is terrible. There are lots of situations where you want to document why you are importing a specific dependency, or a specific forked version of a dependency.
- npm install -g npm to update to the latest npm version.
- By default, npm update only updates packages to the latest minor semver.
Use npx npm-check-updates -u && npm i to update all packages to the latest version. This is dangerous, and only makes sense if there are a small number of packages.
- is a great tool for helping decide which package to use.

JavaScript Learnings

- You’ll want to install underscore and use chain for data manipulation: _.chain(arr).map(...).uniq().value(). Lots of great tools you'd otherwise miss from ruby or python.
- ES6 introduced computed property names, so you can use a variable as an object key: { [variableKey]: variableValue }
- I had trouble getting papaparse to read a local file without using a callback. I hate callbacks; here’s a promise wrapper that cleaned this up for me.
- Merge objects with _.extend.
- The dotenv package didn’t seem to parse .env with exports in the file. Got tripped up on this for a bit.
- require can be used to load a JSON file, not just a javascript file. Neat!
- There are nice iterators now! for(const i in list)
- There’s array destructuring too: const [a, b] = [1, 2]
- Underscore JS has a nice memoize method. I hate the pattern of having a package-level variable for memoization. It just feels so ugly.
- There’s an in keyword that can be used with objects, but not arrays (at least in the way you’d expect).
- There’s a null-safe operator now. For instance, if you want to safely check a JSON blob for a field and set a default, you can now do something like const accounts = json_blob?.accounts || []
- You can iterate over the keys and values of an object using for (const [key, value] of Object.entries(object))
- is a neat project which transpiles JavaScript code into multiple languages.

Hacking & Debugging

- The most disappointing part of the node ecosystem is the REPL experience. There are some tools that (very) slightly improve it, but there’s nothing like iPython or Pry.
- nbd is dead and hasn’t been updated in years.
- node-help is dead as well, and just made it slightly easier to view documentation.
- node-inspector is now included in node and basically enables you to use Chrome devtools.
- local-repl looks neat, but also hasn’t been updated in about a year.
- The updated repl project wouldn’t load for me on v16.
- The debugging happy path seems to be using the GUI debugger.
- You can use toString() on a function to get the source code. A helpful alternative to show-source from ruby or ll from python. However, it has some gotchas:
  - It’s specifically discouraged, since it’s been removed from the standard.
  - Arguments and argument defaults are not specified.
- It’s not obvious how to list local variables in the CLI debugger. There’s a seemingly undocumented exec .scope that you can run from the debugger context (but not from a repl!).
- You can change the target to ES6 to avoid some of the weird JS transpiling stuff.
- Run your script with node inspect and then, before continuing, type breakOnUncaught to ensure that you can inspect any exceptions. I prefer terminal-based debugging; if you want to connect to a GUI (chrome or VS Code), use --inspect.
- There’s not a way I could find to add your own aliases to the debugger (i.e. c == continue == cont).
- It’s worth writing your own console.ts to generate a helpful repl environment to play with, with imports and some aliases defined. Unfortunately, this needs to be done on a per-project basis.
- You can’t redefine const variables in a repl, which makes it annoying to copy/paste code into a console.
  - It looks like there are some hacks you can use to strip out the const and replace it with a let before the copy/pasted code gets eval’d. This seems like a terrible hack, and should just be a native flag added to node.
- In more recent versions of node (at least 16 or greater), you can use await within a repl session.
- If you are in a debugger session, await does not work, unlike when you are in a standard node repl. You cannot resolve promises and therefore cannot interact with async code.
This is a known bug, will not be changed, and makes debugging async code interactively extremely hard. I'm very surprised this is still a limitation.
- console.dir is the easiest way to inspect all properties of an object within a REPL. This uses util.inspect under the hood, so you don’t need to import that package and remember the function arguments.
- There’s a set of functions only available in the web console. Most of these seem to be modeled after jQuery functions.

Open Questions

- How can I run commands without npx? Is there some shim I can add to my zsh config to conditionally load all npx-enabled bins when node_modules exists?
- Is there anything that can be done to make the repl experience better? This is my biggest gripe with JavaScript development.
  - looks interesting but seems dead (no commits in over a year).
  - This code looks like an interesting starting point for removing all const that are pasted into a repl.
- The number of configuration files you need in a repo just to get started is insane (tsconfig.json, package*.json, .eslintrc.json). Is there a better way to handle this? Some sort of single configuration file to rule them all?
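Several of the ES6/TypeScript features collected above (computed property names, index signatures, array destructuring, the null-safe operator with a default, and Object.entries) fit into one small sketch. The object shapes and variable names here are made up for illustration; they are not from the importer project itself:

```typescript
// Computed property name: a variable used as an object key. The index
// signature avoids the "no index signature with a parameter of type
// 'string'" error described above.
const variableKey = "status";
const record: { [key: string]: any } = { [variableKey]: "ok" };

// Array destructuring.
const [a, b] = [1, 2];

// Null-safe (optional chaining) access with a default, as in
// `json_blob?.accounts || []`.
const blob: { accounts?: number[] } = {};
const accounts = blob?.accounts || [];

// Iterate over the keys and values of an object.
const pairs: string[] = [];
for (const [key, value] of Object.entries(record)) {
  pairs.push(`${key}=${value}`);
}
```

Everything here is plain ES2020 apart from the type annotations, so the transpiled output stays close to the source.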
https://mikebian.co/learning-typescript-by-migrating-mint-transactions/?utm_source=rss&utm_medium=rss&utm_campaign=learning-typescript-by-migrating-mint-transactions
QtSDK 1.2 and QtWebKit 2.2 not working in QML
Hi all, I installed QtSDK 1.2, which includes QtWebKit 2.2, so in my QML I'd like to use QtWebKit 2.2. When I tried the demo version, I got the following error:
@ C:/QtSDK/Demos/4.7/declarative/webbrowser/webbrowser.qml:43:1: module "QtWebKit" version 2.2 is not installed import QtWebKit 2.2 @
But import QtWebKit 1.0 works fine. However, I have a problem with it: when I load an HTML file with in-page navigation like the code below, it does not work! The page is not scrolling to the bottom.
@ <a name="top" href="#bottom"> go to bottom</a> <a name="bottom" href="#top"> go to top</a> @
How do I solve the above problem? Any ideas? Thanks.
- DamianMilo
Has this problem been solved?
https://forum.qt.io/topic/14071/qtsdk-1-2-and-qtwebkit-2-2-not-working-in-qml
Results 1 to 3 of 3 Thread: Convert Word Tables (Excel 97)
- Join Date - Nov 2001 - Location - Australia - 1 - Thanks - 0 - Thanked 0 Times in 0 Posts
Convert Word Tables (Excel 97)
At work I have inherited some large Word 97 tables which I need to convert to Excel 97. Not usually a problem, even for a novice like me, but it is not working. In Word, some cells have multiple lines of data within the cell, with a return (Enter) at the end of each line (within the cell). On conversion in Excel this seems to have the effect of starting a new row each time a return is encountered (just like the end-of-row marker from the Word table). Is there a way to convert so that these multiple lines of data in a single cell from Word, with a return at the end of each line, become multiple lines within a single cell in Excel? Or close to it? Any help much appreciated.
- Join Date - Feb 2001 - Location - Dublin, Ireland, Republic of - 2,697 - Thanks - 1 - Thanked 0 Times in 0 Posts
Re: Convert Word Tables (Excel 97)
Try using Paste Special, and select Text. This will however result in the loss of the hard CR, which can be replaced manually unless there is too large a number of them. Excel 97 does not handle text pasting as well as Excel 2000. Andrew C
- Join Date - Jan 2001 - Location - West Long Branch, New Jersey, USA - 1,921 - Thanks - 6 - Thanked 9 Times in 7 Posts
Re: Convert Word Tables (Excel 97)
Jeff, There was another thread on this subject about a month after your posting. I felt a solution was needed, since I have this problem also, so I just posted a proposal for a solution in this reply on that thread. {edited to fix 2nd link} Let us know on that thread if this helps. Fred
http://windowssecrets.com/forums/showthread.php/14600-Convert-Word-Tables-%28Excel-97%29
#include <stdio.h>
size_t fwrite(const void *restrict ptr, size_t size, size_t nitems, FILE *restrict stream);

The file-position indicator for the stream (if defined) shall be advanced by the number of bytes successfully written. If an error occurs, the resulting value of the file-position indicator for the stream is unspecified. The st_ctime and st_mtime fields of the file shall be marked for update between the successful execution of fwrite() and the next successful completion of a call to fflush() or fclose() on the same stream, or a call to exit() or abort().

ERRORS
Refer to fputc().

The following sections are informative.

EXAMPLES
None.

APPLICATION USAGE
Because of possible differences in element length and byte ordering, files written using fwrite() are application-dependent, and possibly cannot be read using fread() by a different application or by the same application on a different processor.

RATIONALE
None.

FUTURE DIRECTIONS
None.

SEE ALSO
ferror(), fopen(), printf(), putc(), puts(), write(), the Base Definitions volume of IEEE Std 1003.1-2001, <stdio.h>
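A short sketch of calling the interface above (the helper name and sample data are illustrative, not part of the specification): fwrite() returns the number of items, not bytes, written, so the return value should always be checked against nitems.

```c
#include <stdio.h>

/* Write n doubles to fp with fwrite(). Returns the number of items
 * actually written; a value short of n indicates an error or a
 * partial write, which can be examined with ferror(). */
size_t write_doubles(FILE *fp, const double *data, size_t n)
{
    size_t written = fwrite(data, sizeof *data, n, fp);
    if (written < n && ferror(fp))
        perror("fwrite");
    return written;
}
```

Because element length and byte ordering may differ between platforms (as noted in the application-usage discussion), a file written this way should generally only be read back with fread() by the same application on the same processor.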
http://www.makelinux.net/man/3posix/F/fwrite
Modeling materials using density functional theory Copyright \copyright 2012--\the\year\ John Kitchin". Table of Contents - 1. Introduction to this book - 2. Introduction to DFT - 3. Molecules - 3.1. Defining and visualizing molecules - 3.2. Simple properties - 3.3. Simple properties that require single computations - 3.4. Geometry optimization - 3.5. Vibrational frequencies - 3.6. Simulated infrared spectra - 3.7. Thermochemical properties of molecules - 3.8. Molecular reaction energies - 3.9. Molecular reaction barriers - 4. Bulk systems - 4.1. Defining and visualizing bulk systems - 4.2. Computational parameters that are important for bulk structures - 4.3. Determining bulk structures - 4.4. TODO Using built-in ase optimization with vasp - 4.5. Cohesive energy - 4.6. Elastic properties - 4.7. Bulk thermodynamics - 4.8. Effect of pressure on phase stability - 4.9. Bulk reaction energies - 4.10. Bulk density of states - 4.11. Atom projected density of states - 4.12. Band structures - 4.13. Magnetism - 4.14. TODO phonons - 4.15. TODO solid state NEB - 5. Surfaces - 6. Atomistic thermodynamics - 7. Advanced electronic structure methods - 8. Databases in molecular simulations - 9. Acknowledgments - 10. Appendices - 11. Python - 12. References - 13. GNU Free Documentation License - 14. Index 1 Introduction to this book This book serves two purposes: 1) to provide worked examples of using DFT to model materials properties, and 2) to provide references to more advanced treatments of these topics in the literature. It is not a definitive reference on density functional theory. Along the way to learning how to perform the calculations, you will learn how to analyze the data, make plots, and how to interpret the results. This book is very much "recipe" oriented, with the intention of giving you enough information and knowledge to start your research. In that sense, many of the computations are not publication quality with respect to convergence of calculation parameters. 
You will read a lot of python code in this book. I believe that computational work should always be scripted. Scripting provides a written record of everything you have done, making it more probable you (or others) could reproduce your results or report the method of its execution exactly at a later time. This book makes heavy use of many computational tools including: - Python - Atomic Simulation Environment (ase) - numpy - scipy - matplotlib - emacs - git This book is available at - vasp This is the Python module used extensively here. It is available at The DFT code used primarily in this book is VASP. Similar code would be used for other calculators, e.g. GPAW, Jacapo, etc… you would just have to import the python modules for those codes, and replace the code that defines the calculator. Review all the hyperlinks in this chapter. 2 Introduction to DFT A comprehensive overview of DFT is beyond the scope of this book, as excellent reviews on these subjects are readily found in the literature, and are suggested reading in the following paragraph. Instead, this chapter is intended to provide a useful starting point for a non-expert to begin learning about and using DFT in the manner used in this book. Much of the information presented here is standard knowledge among experts, but a consequence of this is that it is rarely discussed in current papers in the literature. A secondary goal of this chapter is to provide new users with a path through the extensive literature available and to point out potential difficulties and pitfalls in these calculations. A modern and practical introduction to density functional theory can be found in Sholl and Steckel sholl-2009-densit-funct-theor. A fairly standard textbook on DFT is the one written by Parr and Yang parr-yang. The Chemist's Guide to DFT koch2001 is more readable and contains more practical information for running calculations, but both of these books focus on molecular systems. 
The standard texts in solid state physics are by Kittel kittel and Ashcroft and Mermin ashcroft-mermin. Both have their fine points, the former being more mathematically rigorous and the latter more readable. However, neither of these books is particularly easy to relate to chemistry. For this, one should consult the exceptionally clear writings of Roald Hoffmann hoffmann1987,RevModPhys.60.601, and follow these with the work of N\o{}rskov and coworkers hammer2000:adv-cat,greeley2002:elect. In this chapter, only the elements of DFT that are relevant to this work will be discussed. An excellent review on other implementations of DFT can be found in Reference freeman1995:densit, and details on the various algorithms used in DFT codes can be found in Refs. payne1992:iterat,Kresse199615. One of the most useful sources of information has been the dissertations of other students, perhaps because the difficulties they faced in learning the material are still fresh in their minds. Thomas Bligaard, a coauthor of Dacapo, wrote a particularly relevant thesis on exchange/correlation functionals bligaard2000:exchan-correl-funct and a dissertation illustrating the use of DFT to design new alloys with desirable thermal and mechanical properties bligaard2003:under-mater-proper-basis-densit. The Ph.D. thesis of Ari Seitsonen contains several useful appendices on k-point setups and convergence tests of calculations, in addition to a thorough description of DFT and analysis of calculation output seitsonen2000:phd. Finally, another excellent overview of DFT and its applications to bimetallic alloy phase diagrams and surface reactivity is presented in the PhD thesis of Robin Hirschl hirschl2002:binar-trans-metal-alloy-their-surfac.

2.1 Background

In 1926, Erwin Schrödinger published the first accounts of his now famous wave equation pauling1963. He later shared the Nobel prize with Paul A. M. Dirac in 1933 for this discovery.
Schrödinger's wave function seemed extremely promising, as it contains all of the information available about a system. Unfortunately, most practical systems of interest consist of many interacting electrons, and the effort required to find solutions to Schrödinger's equation increases exponentially with the number of electrons, limiting this approach to systems with a small number of relevant electrons, \(N \lesssim O(10)\) RevModPhys.71.1253. Even if this rough estimate is off by an order of magnitude, a system with 100 electrons is still very small, for example, two Ru atoms if all the electrons are counted, or perhaps ten Pt atoms if only the valence electrons are counted. Thus, the wave function method, which has been extremely successful in studying the properties of small molecules, is unsuitable for studies of large, extended solids. Interestingly, this difficulty was recognized by Dirac as early as 1929, when he wrote "The underlying physical laws necessary for the mathematical theory of a large part of physics and the whole of chemistry are thus completely known, and the difficulty is only that the application of these laws leads to equations much too complicated to be soluble." dirac1929:quant-mechan-many-elect-system. In 1964, Hohenberg and Kohn showed that the ground state total energy of a system of interacting electrons is a unique functional of the electron density PhysRev.136.B864. By definition, a function returns a number when given a number. For example, in \(f(x)=x^2\), \(f(x)\) is the function, and it equals four when \(x=2\). A functional returns a number when given a function. Thus, in \(g(f(x))=\int_0^\pi f(x) dx\), \(g(f(x))\) is the functional, and it is equal to two when \(f(x)=\sin(x)\). 
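To make the distinction concrete, here is a short Python illustration (my own sketch, not a DFT calculation): a functional implemented as a Python function that takes another function as its argument and returns a number.

```python
import math

def g(f, n=10000):
    """Evaluate the functional g(f) = integral of f(x) from 0 to pi,
    using a simple midpoint rule."""
    dx = math.pi / n
    return sum(f((i + 0.5) * dx) for i in range(n)) * dx

# g maps an entire function to a single number
print(g(math.sin))            # ~2, as in the text
print(g(lambda x: x ** 2))    # ~pi^3/3
```

Changing the input function changes the number that comes out, which is exactly the sense in which the total energy is a functional of the density.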
Hohenberg and Kohn further identified a variational principle that appeared to reduce the problem of finding the ground state energy of an electron gas in an external potential (i.e., in the presence of ion cores) to that of the minimization of a functional of the three-dimensional density function. Unfortunately, the definition of the functional involved a set of 3N-dimensional trial wave functions. In 1965, Kohn and Sham made a significant breakthrough when they showed that the problem of many interacting electrons in an external potential can be mapped exactly to a set of noninteracting electrons in an effective external potential PhysRev.140.A1133. This led to a set of self-consistent, single particle equations known as the Kohn-Sham (KS) equations:

\begin{equation} \label{eq:KS}
\left(-\frac{1}{2}\nabla^2 + v_{eff}(\mathbf{r})\right)\varphi_j(\mathbf{r}) = \epsilon_j \varphi_j(\mathbf{r})
\end{equation}

with

\begin{equation} \label{eq:veff}
v_{eff}(\mathbf{r}) = v(\mathbf{r}) + \int \frac{n(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|} d\mathbf{r}' + v_{xc}(\mathbf{r})
\end{equation}

where \(v(\mathbf{r})\) is the external potential and \(v_{xc}(\mathbf{r})\) is the exchange-correlation potential, which depends on the entire density function. Thus, the density needs to be known in order to define the effective potential so that Eq. \eqref{eq:KS} can be solved. \(\varphi_j(\mathbf{r})\) corresponds to the \(j^{th}\) KS orbital of energy \(\epsilon_j\). The ground state density is given by:

\begin{equation} \label{eq:density}
n(\mathbf{r}) = \sum_j |\varphi_j(\mathbf{r})|^2
\end{equation}

To solve Eq. \eqref{eq:KS} then, an initial guess is used for \(\varphi_j(\mathbf{r})\) which is used to generate Eq. \eqref{eq:density}, which is subsequently used in Eq. \eqref{eq:veff}. This equation is then solved for \(\varphi_j(\mathbf{r})\) iteratively until the \(\varphi_j(\mathbf{r})\) that result from the solution are the same as the \(\varphi_j(\mathbf{r})\) that are used to define the equations, that is, the solutions are self-consistent. Finally, the ground state energy is given by:

\begin{equation} \label{eq:dftEnergy}
E = \sum_j \epsilon_j - \frac{1}{2} \iint \frac{n(\mathbf{r}) n(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|} d\mathbf{r}\, d\mathbf{r}' - \int v_{xc}(\mathbf{r}) n(\mathbf{r}) d\mathbf{r} + E_{xc}[n(\mathbf{r})]
\end{equation}

where \(E_{xc}[n(\mathbf{r})]\) is the exchange-correlation energy functional. Walter Kohn shared the Nobel prize in Chemistry in 1998 for this work RevModPhys.71.1253. The other half of the prize went to John Pople for his efforts in wave function based quantum mechanical methods RevModPhys.71.1267.
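The self-consistency cycle described above is a fixed-point iteration: guess, construct the equations, solve, and repeat until input and output agree. Here is a schematic sketch of that control flow with a single number standing in for the density (the update function and mixing parameter here are illustrative choices of mine, not what a real DFT code does internally):

```python
import math

def scf(update, guess, mix=0.5, tol=1e-10, maxiter=200):
    """Iterate x -> update(x) with linear mixing until self-consistent."""
    x = guess
    for i in range(maxiter):
        x_new = update(x)
        if abs(x_new - x) < tol:
            return x_new, i  # converged value and iteration count
        # mix old and new values to damp oscillations, in the same
        # spirit as density mixing in real DFT codes
        x = (1 - mix) * x + mix * x_new
    raise RuntimeError('not converged in {0} iterations'.format(maxiter))

# a toy update whose self-consistent solution satisfies x = cos(x)
x, niter = scf(math.cos, guess=1.0)
print('x = {0:1.6f} after {1} iterations'.format(x, niter))
```

The returned value satisfies x = update(x) to within the tolerance, just as the converged KS orbitals reproduce the potential that generated them.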
Provided the exchange-correlation energy functional is known, Eq. \eqref{eq:dftEnergy} is exact. However, the exact form of the exchange-correlation energy functional is not known, thus approximations for this functional must be used.

2.2 Exchange correlation functionals

The two main types of exchange/correlation functionals used in DFT are the local density approximation (LDA) and the generalized gradient approximation (GGA). In the LDA, the exchange-correlation functional is defined for an electron in a uniform electron gas of density \(n\) PhysRev.140.A1133. It is exact for a uniform electron gas, and is anticipated to be a reasonable approximation for slowly varying densities. In molecules and solids, however, the density tends to vary substantially in space. Despite this, the LDA has been used very successfully in many systems. It tends to predict overbonding in both molecular and solid systems fuchs1998:pseud, and it tends to make semiconductor systems too metallic (the band gap problem) perdew1982:elect-kohn-sham. The generalized gradient approximation includes corrections for gradients in the electron density, and is often implemented as a corrective function of the LDA. The form of this corrective function, or "exchange enhancement" function, determines which functional it is, e.g. PBE, RPBE, revPBE, etc. hammer1999:improv-pbe. In this book the PBE GGA functional is used the most. N\o{}rskov and coworkers have found that the RPBE functional gives superior chemisorption energies for atomic and molecular bonding to surfaces, but that it gives worse bulk properties, such as lattice constants, compared to experimental data hammer1999:improv-pbe. Finally, new types of functionals continue to appear in the literature. The so-called hybrid functionals, such as B3LYP, are more popular with gaussian basis sets (e.g. in Gaussian), but they are presently inefficient with planewave basis sets. None of these other types of functionals were used in this work.
For more details see Chapter 6 in Ref. koch2001 and Thomas Bligaard's thesis on exchange and correlation functionals bligaard2000:exchan-correl-funct. 2.3 Basis sets Briefly, VASP utilizes planewaves as the basis set to expand the Kohn-Sham orbitals. In a periodic solid, one can use Bloch's theorem to show that the wave function for an electron can be expressed as the product of a planewave and a function with the periodicity of the lattice ashcroft-mermin:\begin{equation} \psi_{n\mathbf{k}}(\mathbf{r})=\exp({i\mathbf{k}\cdot\mathbf{r}})u_{n\mathbf{k}}(\mathbf{r}) \end{equation} where \(\mathbf{r}\) is a position vector, and \(\mathbf{k}\) is a so-called wave vector that will only have certain allowed values defined by the size of the unit cell. Bloch's theorem sets the stage for using planewaves as a basis set, because it suggests a planewave character of the wave function. If the periodic function \(u_{n\mathbf{k}}(\mathbf{r})\) is also expanded in terms of planewaves determined by wave vectors of the reciprocal lattice vectors, \(\mathbf{G}\), then the wave function can be expressed completely in terms of a sum of planewaves payne1992:iterat:\begin{equation} \psi_i(\mathbf{r})=\sum_\mathbf{G} c_{i,\mathbf{k+G}} \exp(i\mathbf{(k+G)\cdot r}). \end{equation} where \(c_{i,\mathbf{k+G}}\) are now coefficients that can be varied to determine the lowest energy solution. This also converts Eq. \eqref{eq:KS} from an integral equation to a set of algebraic equations that can readily be solved using matrix algebra. In aperiodic systems, such as systems with even one defect, or randomly ordered alloys, there is no periodic unit cell. Instead one must represent the portion of the system of interest in a supercell, which is then subjected to the periodic boundary conditions so that a planewave basis set can be used. It then becomes necessary to ensure the supercell is large enough to avoid interactions between the defects in neighboring supercells. 
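The planewave expansion above can be tested numerically without any DFT machinery. In this sketch (mine, with a made-up smooth periodic function standing in for \(u_{n\mathbf{k}}\)), a function with period \(2\pi\) is expanded in the basis \(e^{inx}\) and reconstructed with an increasing cutoff \(|n| \le N\); the maximum error falls off rapidly with the cutoff:

```python
import cmath
import math

M = 256  # sampling points over one period
xs = [2 * math.pi * j / M for j in range(M)]
fs = [math.exp(math.cos(x)) for x in xs]  # a smooth periodic test function

def coeff(n):
    """Planewave (Fourier) coefficient c_n, approximated by a
    discrete sum over the sample points."""
    return sum(f * cmath.exp(-1j * n * x) for f, x in zip(fs, xs)) / M

def max_error(N):
    """Reconstruct f from planewaves with |n| <= N; return the max error."""
    cs = {n: coeff(n) for n in range(-N, N + 1)}
    err = 0.0
    for x, f in zip(xs, fs):
        approx = sum(c * cmath.exp(1j * n * x) for n, c in cs.items()).real
        err = max(err, abs(approx - f))
    return err

for N in (1, 2, 4, 8):
    print(N, max_error(N))
```

For a smooth function the coefficients decay very quickly, which is the same reason a finite planewave cutoff works in practice; sharply varying functions (like core electron wave functions) decay much more slowly, which motivates the pseudopotentials discussed later.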
The case of the randomly ordered alloy is virtually hopeless as the energy of different configurations will fluctuate statistically about an average value. These systems were not considered in this work, and for more detailed discussions the reader is referred to Ref. makov1995:period-bound-condit. Once a supercell is chosen, however, Bloch's theorem can be applied to the new artificially periodic system. To get a perfect expansion, one needs an infinite number of planewaves. Luckily, the coefficients of the planewaves must go to zero for high energy planewaves, otherwise the energy of the wave function would go to infinity. This provides justification for truncating the planewave basis set above a cutoff energy. Careful testing of the effect of the cutoff energy on the total energy can be done to determine a suitable cutoff energy. The cutoff energy required to obtain a particular convergence precision is also element dependent, shown in Table tab:pwcut. It can also vary with the "softness" of the pseudopotential. Thus, careful testing should be done to ensure the desired level of convergence of properties in different systems. Table tab:pwcut refers to convergence of total energies. These energies are rarely considered directly, it is usually differences in energy that are important. These tend to converge with the planewave cutoff energy much more quickly than total energies, due to cancellations of convergence errors. In this work, 350 eV was found to be suitable for the H adsorption calculations, but a cutoff energy of 450 eV was required for O adsorption calculations. Bloch's theorem eliminates the need to calculate an infinite number of wave functions, because there are only a finite number of electrons in the unit (super) cell. However, there are still an infinite number of discrete k points that must be considered, and the energy of the unit cell is calculated as an integral over these points. 
It turns out that wave functions at k points that are close together are similar, thus an interpolation scheme can be used with a finite number of k points. This also converts the integral used to determine the energy into a sum over the k points, which are suitably weighted to account for the finite number of them. There will be errors in the total energy associated with the finite number of k, but these can be reduced and tested for convergence by using higher k-point densities. An excellent discussion of this for aperiodic systems can be found in Ref. makov1995:period-bound-condit. The most common schemes for generating k points are the Chadi-Cohen scheme PhysRevB.8.5747, and the Monkhorst-Pack scheme PhysRevB.13.5188. The use of these k point setups amounts to an expansion of the periodic function in reciprocal space, which allows a straight-forward interpolation of the function between the points that is more accurate than with other k point generation schemes PhysRevB.13.5188. 2.4 Pseudopotentials The core electrons of an atom are computationally expensive with planewave basis sets because they are highly localized. This means that a very large number of planewaves are required to expand their wave functions. Furthermore, the contributions of the core electrons to bonding compared to those of the valence electrons is usually negligible. In fact, the primary role of the core electron wave functions is to ensure proper orthogonality between the valence electrons and core states. Consequently, it is desirable to replace the atomic potential due to the core electrons with a pseudopotential that has the same effect on the valence electrons PhysRevB.43.1993. There are essentially two kinds of pseudopotentials, norm-conserving soft pseudopotentials PhysRevB.43.1993 and Vanderbilt ultrasoft pseudopotentials PhysRevB.41.7892. In either case, the pseudopotential function is generated from an all-electron calculation of an atom in some reference state. 
In norm-conserving pseudopotentials, the charge enclosed in the pseudopotential region is the same as that enclosed by the same space in an all-electron calculation. In ultrasoft pseudopotentials, this requirement is relaxed and charge augmentation functions are used to make up the difference. As its name implies, this allows a "softer" pseudopotential to be generated, which means fewer planewaves are required to expand it. The pseudopotentials are not unique, and calculated properties depend on them. However, there are standard methods for ensuring the quality and transferability (to different chemical environments) of the pseudopotentials PhysRevB.56.15629. VASP provides a database of PAW potentials PhysRevB.50.17953,PhysRevB.59.1758.

2.5 Fermi Temperature and band occupation numbers

At absolute zero, the occupancies of the bands of a system are well-defined step functions; all bands up to the Fermi level are occupied, and all bands above the Fermi level are unoccupied. There is a particular difficulty in the calculation of the electronic structures of metals compared to semiconductors and molecules. In molecules and semiconductors, there is a clear energy gap between the occupied states and unoccupied states. Thus, the occupancies are insensitive to changes in the energy that occur during the self-consistency cycles. In metals, however, the density of states is continuous at the Fermi level, and there are typically a substantial number of states that are close in energy to the Fermi level. Consequently, small changes in the energy can dramatically change the occupation numbers, resulting in instabilities that make it difficult to converge to the occupation step function.
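This sensitivity can be seen in a toy example (mine, with made-up numbers): a state 1 meV below the Fermi level versus 1 meV above it. With an essentially sharp step function the occupation jumps from 1 to 0, while a Fermi-Dirac distribution at a finite smearing width changes only slightly.

```python
import math

def occupation(e, e_fermi=0.0, sigma=0.1):
    """Fermi-Dirac occupation of a state at energy e (eV) with smearing
    width sigma (eV); sigma -> 0 recovers the zero-temperature step."""
    x = (e - e_fermi) / sigma
    # guard against overflow for states far from the Fermi level
    if x > 50:
        return 0.0
    if x < -50:
        return 1.0
    return 1.0 / (math.exp(x) + 1.0)

# a state 1 meV below vs 1 meV above the Fermi level
for sigma in (1e-6, 0.1):
    below = occupation(-0.001, sigma=sigma)
    above = occupation(+0.001, sigma=sigma)
    print(sigma, below, above, below - above)
```

With the tiny smearing width the occupations are essentially 1 and 0, so a 2 meV shift during the SCF cycle flips the occupation completely; with a 0.1 eV width the occupations change by well under one percent.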
A related problem is that the Brillouin zone integral (which in practice is performed as a sum over a finite number of k points) that defines the band energy converges very slowly with the number of k points due to the discontinuity in occupancies in a continuous distribution of states for metals gillan1989:calcul,Kresse199615. The difficulty arises because the temperature in most DFT calculations is at absolute zero. At higher temperatures, the DOS is smeared across the Fermi level, resulting in a continuous occupation function over the distribution of states. A finite-temperature version of DFT was developed PhysRev.137.A1441, which is the foundation on which one solution to this problem is based. In this solution, the step function is replaced by a smoothly varying function such as the Fermi-Dirac function at a small, but non-zero temperature Kresse199615. The total energy is then extrapolated back to absolute zero. 2.6 Spin polarization and magnetism There are two final points that need to be discussed about these calculations, spin polarization and dipole corrections. Spin polarization is important for systems that contain net spin. For example, iron, cobalt and nickel are magnetic because they have more electrons with spin "up" than spin "down" (or vice versa). Spin polarization must also be considered in atoms and molecules with unpaired electrons, such as hydrogen and oxygen atoms, oxygen molecules and radicals. For example, there are two spin configurations for an oxygen molecule, the singlet state with no unpaired electrons, and the triplet state with two unpaired electrons. The oxygen triplet state is lower in energy than the oxygen singlet state, and thus it corresponds to the ground state for an oxygen atom. A classically known problem involving spin polarization is the dissociation of a hydrogen molecule. In this case, the molecule starts with no net spin, but it dissociates into two atoms, each of which has an unpaired electron. 
See section 5.3.5 in Reference koch2001 for more details on this. In VASP, spin polarization is not considered by default; it must be turned on, and an initial guess for the magnetic moment of each atom in the unit cell must be provided (typically about one Bohr-magneton per unpaired electron). For Fe, Co, and Ni, the experimental values are 2.22, 1.72, and 0.61 Bohr-magnetons, respectively kittel, and are usually good initial guesses. See Reference PhysRevB.56.15629 for a very thorough discussion of the determination of the magnetic properties of these metals with DFT. Thus, a magnetic initial guess usually must be provided to get a magnetic solution. Finally, unless an adsorbate is on a magnetic metal surface, spin polarization typically does not need to be considered, although the gas-phase reference state calculation may need to be done with spin polarization. The downside of including spin polarization is that it essentially doubles the calculation time.

2.7 Recommended reading

The original papers on DFT are PhysRev.136.B864,PhysRev.140.A1133. Kohn's Nobel Lecture RevModPhys.71.1253 and Pople's Nobel Lecture RevModPhys.71.1267 are good reads. This paper by Hoffmann RevModPhys.60.601 is a nice review of solid state physics from a chemist's point of view. All calculations in this book were performed using VASP Kresse199615,PhysRevB.54.11169,PhysRevB.49.14251,PhysRevB.47.558 with the projector augmented wave (PAW) potentials provided in VASP.

3 Molecules

In this chapter we consider how to construct models of molecules, how to manipulate them, and how to calculate many properties of molecules. For a nice comparison of VASP and Gaussian see paier:234102.

3.1 Defining and visualizing molecules

We start by learning how to define a molecule and visualize it. We will begin with defining molecules from scratch, then reading molecules from data files, and finally using some built-in databases in ase.
3.1.1 From scratch When there is no data file for the molecule you want, or no database to get it from, you have to define your atoms geometry by hand. Here is how that is done for a CO molecule (Figure fig:co-origin). We must define the type and position of each atom, and the unit cell the atoms are in. from ase import Atoms, Atom from ase.io import write # define an Atoms object atoms = Atoms([Atom('C', [0., 0., 0.]), Atom('O', [1.1, 0., 0.])], cell=(10, 10, 10)) print('V = {0:1.0f} Angstrom^3'.format(atoms.get_volume())) write('images/simple-cubic-cell.png', atoms, show_unit_cell=2) V = 1000 Angstrom^3 Figure 2: Image of a CO molecule with the C at the origin. \label{fig:co-origin} There are two inconvenient features of the simple cubic cell: - Since the CO molecule is at the corner, its electron density is spread over the 8 corners of the box, which is not convenient for visualization later (see Visualizing electron density). - Due to the geometry of the cube, you need fairly large cubes to make sure the electron density of the molecule does not overlap with that of its images. Electron-electron interactions are repulsive, and the overlap makes the energy increase significantly. Here, the CO molecule has 6 images due to periodic boundary conditions that are 10 Å away. The volume of the unit cell is 1000 Å\(^3\). The first problem is easily solved by centering the atoms in the unit cell. The second problem can be solved by using a face-centered cubic lattice, which is the lattice with the closest packing. We show the results of the centering in Figure fig:co-fcc, where we have guessed values for \(b\) until the CO molecules are on average 10 Å apart. Note the final volume is only about 715 Å\(^3\), which is smaller than the cube. This will result in less computational time to compute properties. 
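The fcc-cell volume quoted above can be checked by hand: the volume of any cell is the absolute value of the determinant of the matrix of lattice vectors, and for the vectors \([b, b, 0]\), \([b, 0, b]\), \([0, b, b]\) this works out to \(2b^3\). A quick sketch of the check (mine; this is effectively what ase.Atoms.get_volume computes):

```python
import numpy as np

b = 7.1
cell = np.array([[b, b, 0.],
                 [b, 0., b],
                 [0., b, b]])

# cell volume is the absolute value of the determinant of the cell matrix
V = abs(np.linalg.det(cell))
print('V = {0:1.0f} Ang^3'.format(V))  # ~716, matching atoms.get_volume()
print(2 * b ** 3)                      # analytic result for this cell shape
```

So with b = 7.1 the volume is about 716 Å\(^3\), consistent with the ase output.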
from ase import Atoms, Atom from ase.io import write b = 7.1 atoms = Atoms([Atom('C', [0., 0., 0.]), Atom('O', [1.1, 0., 0.])], cell=[[b, b, 0.], [b, 0., b], [0., b, b]]) print('V = {0:1.0f} Ang^3'.format(atoms.get_volume())) atoms.center() # translate atoms to center of unit cell write('images/fcc-cell.png', atoms, show_unit_cell=2) V = 716 Ang^3 Figure 3: CO in a face-centered cubic unit cell. \label{fig:co-fcc} At this point you might ask, "How do you know the distance to the neighboring image?" The ag viewer lets you compute this graphically, but we can use code to determine this too. All we have to do is figure out the length of each lattice vector, because these are what separate the atoms in the images. We use the numpy module to compute the distance of a vector as the square root of the sum of squared elements. from ase import Atoms, Atom import numpy as np b = 7.1 atoms = Atoms([Atom('C', [0., 0., 0.]), Atom('O', [1.1, 0., 0.])], cell=[[b, b, 0.], [b, 0., b], [0., b, b]]) # get unit cell vectors and their lengths (a1, a2, a3) = atoms.get_cell() print('|a1| = {0:1.2f} Ang'.format(np.sum(a1**2)**0.5)) print('|a2| = {0:1.2f} Ang'.format(np.linalg.norm(a2))) print('|a3| = {0:1.2f} Ang'.format(np.sum(a3**2)**0.5)) |a1| = 10.04 Ang |a2| = 10.04 Ang |a3| = 10.04 Ang 3.1.2 Reading other data formats into a calculation ase.io.read supports many different file formats: Known formats: ========================= =========== format short name ========================= =========== GPAW restart-file gpw Dacapo netCDF output file dacapo Old ASE netCDF trajectory nc Virtual Nano Lab file vnl ASE pickle trajectory traj ASE bundle trajectory bundle GPAW text output gpaw-text CUBE file cube XCrySDen Structure File xsf Dacapo text output dacapo-text XYZ-file xyz VASP POSCAR/CONTCAR file vasp VASP OUTCAR file vasp_out SIESTA STRUCT file struct_out ABINIT input file abinit V_Sim ascii file v_sim Protein Data Bank pdb CIF-file cif FHI-aims geometry file aims FHI-aims output file 
aims_out VTK XML Image Data vti VTK XML Structured Grid vts VTK XML Unstructured Grid vtu TURBOMOLE coord file tmol TURBOMOLE gradient file tmol-gradient exciting input exi AtomEye configuration cfg WIEN2k structure file struct DftbPlus input file dftb CASTEP geom file cell CASTEP output file castep CASTEP trajectory file geom ETSF format etsf.nc DFTBPlus GEN format gen CMR db/cmr-file db CMR db/cmr-file cmr LAMMPS dump file lammps Gromacs coordinates gro ========================= =========== You can read XYZ file format to create ase.Atoms objects. Here is what an XYZ file format might look like: #+include: molecules/isobutane.xyz The first line is the number of atoms in the file. The second line is often a comment. What follows is one line per atom with the symbol and Cartesian coordinates in Å. Note that the XYZ format does not have unit cell information in it, so you will have to figure out a way to provide it. In this example, we center the atoms in a box with vacuum on all sides (Figure fig:isobutane). from ase.io import read, write atoms = read('molecules/isobutane.xyz') atoms.center(vacuum=5) write('images/isobutane-xyz.png', atoms, show_unit_cell=2) Figure 4: An isobutane molecule read in from an XYZ formatted data file. \label{fig:isobutane} 3.1.3 Predefined molecules ase defines a number of molecular geometries in the ase.data.molecules database. For example, the database includes the molecules in the G2/97 database curtiss:1063. This database contains a broad set of atoms and molecules for which good experimental data exists, making them useful for benchmarking studies. See this site for the original files. The coordinates for the atoms in the database are MP2(full)/6-31G(d) optimized geometries. Here is a list of all the species available in ase.data.g2. You may be interested in reading about some of the other databases in ase.data too. 
from ase.data import g2

keys = g2.data.keys()

# print in 3 columns
for i in range(len(keys) / 3):
    print('{0:25s}{1:25s}{2:25s}'.format(*tuple(keys[i * 3: i * 3 + 3])))

isobutene CH3CH2OH CH3COOH COF2 CH3NO2 CF3CN CH3OH CCH CH3CH2NH2 PH3 Si2H6 O3 O2 BCl3 CH2_s1A1d Be H2CCl2 C3H9C C3H9N CH3CH2OCH3 BF3 CH3 CH4 S2 C2H6CHOH SiH2_s1A1d H3CNH2 CH3O H BeH P C3H4_C3v C2F4 OH methylenecyclopropane F2O SiCl4 HCF3 HCCl3 C3H7 CH3CH2O AlF3 CH2NHCH2 SiH2_s3B1d H2CF2 SiF4 H2CCO PH2 OCS HF NO2 SH2 C3H4_C2v H2O2 CH3CH2Cl isobutane CH3COF HCOOH CH3ONO C5H8 2-butyne SH NF3 HOCl CS2 P2 C CH3S O C4H4S S C3H7Cl H2CCHCl C2H6 CH3CHO C2H4 HCN C2H2 C2Cl4 bicyclobutane H2 C6H6 N2H4 C4H4NH H2CCHCN H2CCHF cyclobutane HCl CH3OCH3 Li2 Na CH3SiH3 NaCl CH3CH2SH OCHCHO SiH4 C2H5 SiH3 NH ClO AlCl3 CCl4 NO C2H3 ClF HCO CH3CONH2 CH2SCH2 CH3COCH3 C3H4_D2d CH CO CN F CH3COCl N CH3Cl Si C3H8 CS N2 Cl2 NCCN F2 CO2 Cl CH2OCH2 H2O CH3CO SO HCOOCH3 butadiene ClF3 Li PF3 B CH3SH CF4 C3H6_Cs C2H6NH N2O LiF H2COH cyclobutene LiH SiO Si2 C2H6SO C5H5N trans-butane Na2 C4H4O SO2 NH3 NH2 CH2_s3B1d ClNO C3H6_D3h Al CH3SCH3 H2CO CH3CN

Some other databases include the ase.data.s22 for weakly interacting dimers and complexes, and ase.data.extra_molecules which has a few extras like biphenyl and C60. Here is an example of getting the geometry of an acetonitrile molecule and writing an image to a file. Note that the default unit cell is a 1 Å × 1 Å × 1 Å cubic cell. That is too small to use if your calculator uses periodic boundary conditions. We center the atoms in the unit cell and add 6 Å of vacuum on each side. In the write command we use the option show_unit_cell=2 to draw the unit cell boundaries. See Figure fig:ch3cn.

from ase.structure import molecule
from ase.io import write

atoms = molecule('CH3CN')
atoms.center(vacuum=6)
print('unit cell')
print('---------')
print(atoms.get_cell())

write('images/ch3cn.png', atoms, show_unit_cell=2)

unit cell --------- [[ 13.775328 0. 0. ] [ 0.
13.537479 0. ] [ 0. 0. 15.014576]] Figure 5: A CH3CN molecule in a box. \label{fig:ch3cn} It is possible to rotate the atoms with ase.io.write if you wanted to see pictures from another angle. In the next example we rotate 45 degrees about the $x$-axis, then 45 degrees about the $y$-axis. Note that this only affects the image, not the actual coordinates. See Figure fig:ch3cn-rot. from ase.structure import molecule from ase.io import write atoms = molecule('CH3CN') atoms.center(vacuum=6) print('unit cell') print('---------') print(atoms.get_cell()) write('images/ch3cn-rotated.png', atoms, show_unit_cell=2, rotation='45x,45y,0z') unit cell --------- [[ 13.775328 0. 0. ] [ 0. 13.537479 0. ] [ 0. 0. 15.014576]] Figure 6: The rotated version of CH3CN. \label{fig:ch3cn-rot} If you actually want to rotate the coordinates, there is a nice way to do that too, with the ase.Atoms.rotate method. Actually there are some subtleties in rotation. One rotates the molecule an angle (in radians) around a vector, but you have to choose whether the center of mass should be fixed or not. You also must decide whether or not the unit cell should be rotated. In the next example you can see the coordinates have changed due to the rotations. Note that the write function uses the rotation angle in degrees, while the rotate function uses radians. 
from ase.structure import molecule from ase.io import write from numpy import pi atoms = molecule('CH3CN') atoms.center(vacuum=6) p1 = atoms.get_positions() atoms.rotate('x', pi/4, center='COM', rotate_cell=False) atoms.rotate('y', pi/4, center='COM', rotate_cell=False) write('images/ch3cn-rotated-2.png', atoms, show_unit_cell=2) print('difference in positions after rotating') print('atom difference vector') print('--------------------------------------') p2 = atoms.get_positions() diff = p2 - p1 for i, d in enumerate(diff): print('{0} {1}'.format(i, d)) difference in positions after rotating atom difference vector -------------------------------------- 0 [-0.65009456 0.91937255 0.65009456] 1 [ 0.08030744 -0.11357187 -0.08030744] 2 [ 0.66947344 -0.94677841 -0.66947344] 3 [-0.32532156 0.88463727 1.35030756] 4 [-1.35405183 1.33495444 -0.04610517] 5 [-0.8340703 1.33495444 1.2092413 ] Figure 7: Rotated CH3CN molecule Note in this last case the unit cell is oriented differently than the previous example, since we chose not to rotate the unit cell. 3.1.4 Combining Atoms objects It is frequently useful to combine two Atoms objects, e.g. for computing reaction barriers, or other types of interactions. In ase, we simply add two Atoms objects together. Here is an example of getting an ammonia and oxygen molecule in the same unit cell. See Figure fig:combined-atoms. We set the Atoms about three Å apart using the ase.Atoms.translate function. from ase.structure import molecule from ase.io import write atoms1 = molecule('NH3') atoms2 = molecule('O2') atoms2.translate([3, 0, 0]) bothatoms = atoms1 + atoms2 bothatoms.center(5) write('images/bothatoms.png', bothatoms, show_unit_cell=2, rotation='90x') Figure 8: Image featuring ammonia and oxygen molecule in one unit cell. \label{fig:combined-atoms} 3.2 Simple properties Simple properties do not require a DFT calculation. They are typically only functions of the atom types and geometries. 
3.2.1 Getting cartesian positions If you want the \((x,y,z)\) coordinates of the atoms, use the ase.Atoms.get_positions. If you are interested in the fractional coordinates, use ase.Atoms.get_scaled_positions. from ase.structure import molecule atoms = molecule('C6H6') # benzene # access properties on each atom print(' # sym p_x p_y p_z') print('------------------------------') for i, atom in enumerate(atoms): print('{0:3d}{1:^4s}{2:-8.2f}{3:-8.2f}{4:-8.2f}'.format(i, atom.symbol, atom.x, atom.y, atom.z)) # get all properties in arrays sym = atoms.get_chemical_symbols() pos = atoms.get_positions() num = atoms.get_atomic_numbers() atom_indices = range(len(atoms)) print() print(' # sym at# p_x p_y p_z') print('-------------------------------------') for i, s, n, p in zip(atom_indices, sym, num, pos): px, py, pz = p print('{0:3d}{1:>3s}{2:8d}{3:-8.2f}{4:-8.2f}{5:-8.2f}'.format(i, s, n, px, py, pz)) # sym p_x p_y p_z ------------------------------ 0 C 0.00 1.40 0.00 1 C 1.21 0.70 0.00 2 C 1.21 -0.70 0.00 3 C 0.00 -1.40 0.00 4 C -1.21 -0.70 0.00 5 C -1.21 0.70 0.00 6 H 0.00 2.48 0.00 7 H 2.15 1.24 0.00 8 H 2.15 -1.24 0.00 9 H 0.00 -2.48 0.00 10 H -2.15 -1.24 0.00 11 H -2.15 1.24 0.00 () # sym at# p_x p_y p_z ------------------------------------- 0 C 6 0.00 1.40 0.00 1 C 6 1.21 0.70 0.00 2 C 6 1.21 -0.70 0.00 3 C 6 0.00 -1.40 0.00 4 C 6 -1.21 -0.70 0.00 5 C 6 -1.21 0.70 0.00 6 H 1 0.00 2.48 0.00 7 H 1 2.15 1.24 0.00 8 H 1 2.15 -1.24 0.00 9 H 1 0.00 -2.48 0.00 10 H 1 -2.15 -1.24 0.00 11 H 1 -2.15 1.24 0.00 3.2.2 Molecular weight and molecular formula molecular weight We can quickly compute the molecular weight of a molecule with this recipe. We use ase.Atoms.get_masses to get an array of the atomic masses of each atom in the Atoms object, and then just sum them up. 
from ase.structure import molecule

atoms = molecule('C6H6')
masses = atoms.get_masses()
molecular_weight = masses.sum()
molecular_formula = atoms.get_chemical_formula(mode='reduce')

# note use of two lines to keep length of line reasonable
s = 'The molecular weight of {0} is {1:1.2f} gm/mol'
print(s.format(molecular_formula, molecular_weight))

The molecular weight of C6H6 is 78.11 gm/mol

Note that the argument mode='reduce' for ase.Atoms.get_chemical_formula collects all the symbols to provide a molecular formula.

3.2.3 Center of mass

The center of mass (COM) is defined as: COM = \(\frac{\sum m_i \cdot r_i}{\sum m_i}\) The center of mass is essentially the average position of the atoms, weighted by the mass of each atom. Here is an example of getting the center of mass from an Atoms object using ase.Atoms.get_center_of_mass.

from ase.structure import molecule
import numpy as np

# ammonia
atoms = molecule('NH3')

# cartesian coordinates
print('COM1 = {0}'.format(atoms.get_center_of_mass()))

# compute the center of mass by hand
pos = atoms.positions
masses = atoms.get_masses()

COM = np.array([0., 0., 0.])
for m, p in zip(masses, pos):
    COM += m*p
COM /= masses.sum()
print('COM2 = {0}'.format(COM))

# one-line linear algebra definition of COM
print('COM3 = {0}'.format(np.dot(masses, pos) / np.sum(masses)))

COM1 = [ 0.00000000e+00 5.91843349e-08 4.75457009e-02] COM2 = [ 0.00000000e+00 5.91843349e-08 4.75457009e-02] COM3 = [ 0.00000000e+00 5.91843349e-08 4.75457009e-02]

You can see that these centers of mass, which are calculated by different methods, are the same.

3.2.4 Moments of inertia

The moment of inertia is a measure of resistance to changes in rotation. It is defined by \(I = \sum_{i=1}^N m_i r_i^2\) where \(r_i\) is the distance to an axis of rotation. There are typically three moments of inertia, although some may be zero depending on symmetry, and others may be degenerate.
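Before using the convenience function, it is worth evaluating the definition by hand once. For a linear molecule like CO\(_2\), the central C atom lies on the rotation axis and contributes nothing, so \(I = 2 m_O d^2\) for an axis through the C perpendicular to the molecular axis. The C-O distance of 1.178 Å used here is an assumption (backed out from the ase geometry), so treat the numbers as illustrative:

```python
# hand evaluation of I = sum_i m_i r_i^2 for CO2 about a perpendicular
# axis through the central C atom; the C (on the axis) contributes zero
m_O = 15.999  # mass of O in amu
d = 1.178     # assumed C-O bond length in Angstrom

I = 2 * m_O * d ** 2  # amu * Angstrom^2
print('I = {0:1.2f} amu*Ang^2'.format(I))  # close to ase's 44.45 for CO2
```

This agrees with the value ase reports for CO2 to within the uncertainty in the assumed bond length.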
There is a convenient function to get the moments of inertia: ase.Atoms.get_moments_of_inertia. Here are several examples of molecules with different types of symmetry:

from ase.structure import molecule

print('linear rotors: I = [0 Ia Ia]')
atoms = molecule('CO2')
print('   CO2 moments of inertia: {}'.format(atoms.get_moments_of_inertia()))
print('')

print('symmetric rotors (Ia = Ib) < Ic')
atoms = molecule('NH3')
print('   NH3 moments of inertia: {}'.format(atoms.get_moments_of_inertia()))
atoms = molecule('C6H6')
print('  C6H6 moments of inertia: {}'.format(atoms.get_moments_of_inertia()))
print('')

print('symmetric rotors Ia < (Ib = Ic)')
atoms = molecule('CH3Cl')
print('CH3Cl moments of inertia: {}'.format(atoms.get_moments_of_inertia()))
print('')

print('spherical rotors Ia = Ib = Ic')
atoms = molecule('CH4')
print('   CH4 moments of inertia: {}'.format(atoms.get_moments_of_inertia()))
print('')

print('unsymmetric rotors Ia != Ib != Ic')
atoms = molecule('C3H7Cl')
print(' C3H7Cl moments of inertia: {}'.format(atoms.get_moments_of_inertia()))

linear rotors: I = [0 Ia Ia]
   CO2 moments of inertia: [  0.          44.45384271  44.45384271]

symmetric rotors (Ia = Ib) < Ic
   NH3 moments of inertia: [ 1.71012426  1.71012548  2.67031768]
  C6H6 moments of inertia: [  88.77914641   88.77916799  177.5583144 ]

symmetric rotors Ia < (Ib = Ic)
CH3Cl moments of inertia: [  3.20372189  37.97009644  37.97009837]

spherical rotors Ia = Ib = Ic
   CH4 moments of inertia: [ 3.19145621  3.19145621  3.19145621]

unsymmetric rotors Ia != Ib != Ic
 C3H7Cl moments of inertia: [  19.41351508  213.18961963  223.16255537]

If you want to know the principal axes of rotation, simply pass vectors=True to the function, and it returns the moments of inertia and the principal axes.

from ase.structure import molecule

atoms = molecule('CH3Cl')
moments, axes = atoms.get_moments_of_inertia(vectors=True)
print('Moments = {0}'.format(moments))
print('axes = {0}'.format(axes))

Moments = [  3.20372189  37.97009644  37.97009837]
axes = [[ 0.  0.  1.]
 [ 0.  1.  0.]
 [ 1.  0.  0.]]

This shows the first moment is about the z-axis, the second moment is about the y-axis, and the third moment is about the x-axis.

3.2.5 Computing bond lengths and angles

A typical question we might ask is, "What is the structure of a molecule?" In other words, what are the bond lengths, angles between bonds, and similar properties? The Atoms object provides an ase.Atoms.get_distance method to make this easy. To calculate the distance between two atoms, you have to specify their indices, remembering that the index starts at 0.

from ase.structure import molecule

# ammonia
atoms = molecule('NH3')

print('atom symbol')
print('===========')
for i, atom in enumerate(atoms):
    print('{0:2d} {1:3s}'.format(i, atom.symbol))

# N-H bond length
s = 'The N-H distance is {0:1.3f} angstroms'
print(s.format(atoms.get_distance(0, 1)))

atom symbol
===========
 0 N
 1 H
 2 H
 3 H
The N-H distance is 1.017 angstroms

Bond angles are a little trickier. If we had vectors describing the directions between two atoms, we could use some simple trigonometry to compute the angle between the vectors: \(\vec{a} \cdot \vec{b} = |\vec{a}||\vec{b}| \cos\theta\). So we can calculate the angle as \(\theta = \arccos\left(\frac{\vec{a} \cdot \vec{b}}{|\vec{a}||\vec{b}|}\right)\); we just have to define our two vectors \(\vec{a}\) and \(\vec{b}\). We compute these vectors as the difference in positions of two atoms. For example, here we compute the H-N-H angle in an ammonia molecule. This is the angle between N-H\(_1\) and N-H\(_2\). In the next example, we use functions from numpy to perform the calculations, specifically the numpy.arccos, numpy.dot, and numpy.linalg.norm functions.
from ase.structure import molecule

# ammonia
atoms = molecule('NH3')

print('atom symbol')
print('===========')
for i, atom in enumerate(atoms):
    print('{0:2d} {1:3s}'.format(i, atom.symbol))

a = atoms.positions[0] - atoms.positions[1]
b = atoms.positions[0] - atoms.positions[2]

from numpy import arccos, dot, pi
from numpy.linalg import norm

theta_rad = arccos(dot(a, b) / (norm(a) * norm(b)))  # in radians
print('theta = {0:1.1f} degrees'.format(theta_rad * 180. / pi))

atom symbol
===========
 0 N
 1 H
 2 H
 3 H
theta = 106.3 degrees

Figure 9: Schematic of the vectors defining the H-N-H angle.

Alternatively, you could use ase.Atoms.get_angle. Note we want the angle between atoms with indices [1, 0, 2] to get the H-N-H angle.

from ase.structure import molecule
from numpy import pi

# ammonia
atoms = molecule('NH3')

print('theta = {0} degrees'.format(atoms.get_angle([1, 0, 2]) * 180. / pi))

theta = 106.334624232 degrees

3.2.5.1 Dihedral angles

There is support in ase for computing dihedral angles. Let us illustrate that for ethane. We will compute the dihedral angle between atoms 5, 1, 0, and 4. That is an H-C-C-H dihedral angle, and one can see visually (although not here) that these atoms have a dihedral angle of 60° (Figure fig:ethane-dihedral).

# calculate an ethane dihedral angle
from ase.structure import molecule
import numpy as np

atoms = molecule('C2H6')

print('atom symbol')
print('===========')
for i, atom in enumerate(atoms):
    print('{0:2d} {1:3s}'.format(i, atom.symbol))

da = atoms.get_dihedral([5, 1, 0, 4]) * 180. / np.pi
print('dihedral angle = {0:1.2f} degrees'.format(da))

atom symbol
===========
 0 C
 1 C
 2 H
 3 H
 4 H
 5 H
 6 H
 7 H
dihedral angle = 60.00 degrees

In this section we covered properties that require simple calculations, but not DFT calculations, to compute.
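As a footnote to the dihedral discussion above, the same quantity can be computed by hand from four positions: it is the angle between the planes spanned by consecutive bond vectors, obtained from their cross products. A NumPy sketch using the unsigned arccos convention (the four points below are a made-up staggered geometry, not the actual C2H6 coordinates from ASE):

```python
import numpy as np

def dihedral(p0, p1, p2, p3):
    """Unsigned dihedral angle (degrees) for four points p0-p1-p2-p3."""
    b1 = np.asarray(p1) - np.asarray(p0)
    b2 = np.asarray(p2) - np.asarray(p1)
    b3 = np.asarray(p3) - np.asarray(p2)
    n1 = np.cross(b1, b2)  # normal to the plane of (p0, p1, p2)
    n2 = np.cross(b2, b3)  # normal to the plane of (p1, p2, p3)
    cos_t = np.dot(n1, n2) / (np.linalg.norm(n1) * np.linalg.norm(n2))
    return np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))

# idealized staggered H-C-C-H arrangement: should give 60 degrees
p0 = [1.0, 0.0, -1.0]  # "H" on the first carbon
p1 = [0.0, 0.0, -1.0]  # first carbon
p2 = [0.0, 0.0, 1.0]   # second carbon
p3 = [np.cos(np.radians(60.0)), np.sin(np.radians(60.0)), 1.0]  # staggered "H"
print(dihedral(p0, p1, p2, p3))
```

An eclipsed arrangement (p3 rotated to [1, 0, 1]) gives 0° with the same function. Note this arccos form does not distinguish the sign of the rotation; signed conventions use arctan2 instead.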
3.3 Simple properties that require single computations

There are many properties that require only a single DFT calculation to obtain: the energy, forces, density of states, electron density, and electrostatic potential. This section describes some of these calculations and their analysis.

3.3.1 Energy and forces

Two of the most important quantities we are interested in are the total energy and the forces on the atoms. To get these quantities, we have to define a calculator and attach it to an ase.Atoms object so that ase knows how to get the data. After defining the calculator, a DFT calculation must be run.

Here is an example of getting the energy and forces from a CO molecule. The forces in this case are very high, indicating that this geometry is not close to the ground state geometry. Note that the forces are only along the \(x\)-axis, which is along the molecular axis. We will see how to minimize this force in Manual determination and Automatic geometry optimization with VASP. This is your first DFT calculation in the book! See ISMEAR, SIGMA, NBANDS, and ENCUT to learn more about these VASP keywords.

from ase import Atoms, Atom
from vasp import Vasp

co = Atoms([Atom('C', [0, 0, 0]),
            Atom('O', [1.2, 0, 0])],
           cell=(6., 6., 6.))

calc = Vasp('molecules/simple-co',  # output dir
            xc='PBE',
            nbands=6,
            encut=350,
            ismear=1,
            sigma=0.01,
            atoms=co)

print('energy = {0} eV'.format(co.get_potential_energy()))
print(co.get_forces())

energy = -14.69111507 eV
[[ 5.09138064  0.          0.        ]
 [-5.09138064  0.          0.        ]]

We can see what files were created and used in this calculation by printing the vasp attribute of the calculator.

from vasp import Vasp

print(Vasp('molecules/simple-co').vasp)

INCAR
-----
INCAR created by Atomic Simulation Environment
ENCUT = 350
LCHARG = .FALSE.
NBANDS = 6
ISMEAR = 1
LWAVE = .FALSE.
SIGMA = 0.01

POSCAR
------
C O
 1.0000000000000000
     6.0000000000000000    0.0000000000000000    0.0000000000000000
     0.0000000000000000    6.0000000000000000    0.0000000000000000
     0.0000000000000000    0.0000000000000000    6.0000000000000000
 1 1
Cartesian
  0.0000000000000000  0.0000000000000000  0.0000000000000000
  1.2000000000000000  0.0000000000000000  0.0000000000000000

KPOINTS
-------
KPOINTS created by Atomic Simulation Environment
0
Monkhorst-Pack
1 1 1
0.0 0.0 0.0

POTCAR
------
cat $VASP_PP_PATH/potpaw_PBE/C/POTCAR $VASP_PP_PATH/potpaw_PBE/O/POTCAR > POTCAR

3.3.1.1 Running a job in parallel

from ase import Atoms, Atom
from vasp import Vasp
from vasp.vasprc import VASPRC

VASPRC['queue.ppn'] = 4

co = Atoms([Atom('C', [0, 0, 0]),
            Atom('O', [1.2, 0, 0])],
           cell=(6., 6., 6.))

calc = Vasp('molecules/simple-co-n4',  # output dir
            xc='PBE',
            nbands=6,
            encut=350,
            ismear=1,
            sigma=0.01,
            atoms=co)

print('energy = {0} eV'.format(co.get_potential_energy()))
print(co.get_forces())

energy = -14.69072754 eV
[[ 5.09089107  0.          0.        ]
 [-5.09089107  0.          0.        ]]

3.3.1.2 Convergence with unit cell size

There are a number of parameters that affect the energy and forces, including the calculation parameters and the unit cell. We will first consider the effect of the unit cell on the total energy and forces. The reason the unit cell affects the total energy is that it can change the distribution of electrons in the molecule.
from vasp import Vasp
from ase import Atoms, Atom
import numpy as np
np.set_printoptions(precision=3, suppress=True)

atoms = Atoms([Atom('C', [0, 0, 0]),
               Atom('O', [1.2, 0, 0])])

L = [4, 5, 6, 8, 10]

energies = []
ready = True
for a in L:
    atoms.set_cell([a, a, a], scale_atoms=False)
    atoms.center()
    calc = Vasp('molecules/co-L-{0}'.format(a),
                encut=350,
                xc='PBE',
                atoms=atoms)
    energies.append(atoms.get_potential_energy())

print(energies)
calc.stop_if(None in energies)

import matplotlib.pyplot as plt
plt.plot(L, energies, 'bo-')
plt.xlabel('Unit cell length ($\AA$)')
plt.ylabel('Total energy (eV)')
plt.savefig('images/co-e-v.png')

[-15.35943747, -14.85641864, -14.68750595, -14.63202061, -14.65342838]

Figure 10: Total energy of a CO molecule as a function of the unit cell length.

Here there are evidently attractive interactions between the CO molecules, which lower the total energy for small box sizes. We have to decide what an appropriate volume for our calculation is, and the choice depends on the goal. We may wish to know the total energy of a molecule that is not interacting with any other molecules, e.g. in the ideal gas limit. In that case we need a large unit cell, so the electron density from the molecule does not extend outside the unit cell, where it would overlap with neighboring images. It pays to check for convergence.

The cost of running the calculation goes up steeply with increasing cell size. In the timings below, doubling the lattice vector from 4 to 8 Å nearly triples the computational time, and note that doubling a lattice vector increases the volume by a factor of 8 for a cube. The cost goes up because the number of planewaves that fit in the cube grows as the cube gets larger.

from vasp import Vasp

L = [4, 5, 6, 8, 10]
for a in L:
    calc = Vasp('molecules/co-L-{0}'.format(a))
    print('{0} {1} seconds'.format(a, calc.get_elapsed_time()))

4 10.748 seconds
5 11.855 seconds
6 15.613 seconds
8 28.346 seconds
10 45.259 seconds

Let us consider what the pressure in the unit cell is.
In the ideal gas limit we have \(PV = nRT\), which gives a pressure of zero at absolute zero. At non-zero temperatures, we have \(P = \frac{n}{V} RT\). Let us consider some examples. Since we count molecules per unit cell rather than moles, we use \(k_B\) in place of \(R\).

from ase.units import kB, Pascal
import numpy as np
import matplotlib.pyplot as plt

atm = 101325 * Pascal

L = np.linspace(4, 10)
V = L**3

n = 1  # one atom/molecule per unit cell
for T in [298, 600, 1000]:
    P = n / V * kB * T / atm  # convert to atmospheres
    plt.plot(V, P, label='{0}K'.format(T))

plt.xlabel('Unit cell volume ($\AA^3$)')
plt.ylabel('Pressure (atm)')
plt.legend(loc='best')
plt.savefig('images/ideal-gas-pressure.png')

Figure 11: Ideal gas pressure dependence on temperature and unit cell volume.

3.3.1.3 Convergence of ENCUT

The total energy and forces also depend on the computational parameters, notably ENCUT.

from ase import Atoms, Atom
from vasp import Vasp
import numpy as np
np.set_printoptions(precision=3, suppress=True)

atoms = Atoms([Atom('C', [0, 0, 0]),
               Atom('O', [1.2, 0, 0])],
              cell=(6, 6, 6))
atoms.center()

ENCUTS = [250, 300, 350, 400, 450, 500]

calcs = [Vasp('molecules/co-en-{0}'.format(en),
              encut=en,
              xc='PBE',
              atoms=atoms)
         for en in ENCUTS]

energies = [calc.potential_energy for calc in calcs]
print(energies)
calcs[0].stop_if(None in energies)

import matplotlib.pyplot as plt
plt.plot(ENCUTS, energies, 'bo-')
plt.xlabel('ENCUT (eV)')
plt.ylabel('Total energy (eV)')
plt.savefig('images/co-encut-v.png')

[-14.95250419, -14.71808896, -14.68750595, -14.66725733, -14.65604528, -14.65012078]

Figure 12: Dependence of the total energy of a CO molecule on ENCUT.

You can see in this figure that it takes a cutoff energy of about 400 eV to achieve a convergence level of about 10 meV, and that even at 500 eV the energy is still changing slightly. Keep in mind that we are generally interested in differences in total energy, and the differences tend to converge faster than a single total energy.
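This kind of convergence judgment is easy to automate: take the largest-cutoff energy as the reference and find the smallest ENCUT whose energy is within a chosen tolerance of it. A sketch using the energies printed above:

```python
# energies (eV) from the ENCUT scan above
encuts = [250, 300, 350, 400, 450, 500]
energies = [-14.95250419, -14.71808896, -14.68750595,
            -14.66725733, -14.65604528, -14.65012078]

def first_converged(encuts, energies, tol=0.01):
    """Smallest cutoff whose total energy is within tol (eV) of the last point.

    Returns None if no point other than the reference satisfies the tolerance."""
    ref = energies[-1]
    for en, e in zip(encuts, energies):
        if abs(e - ref) <= tol:
            return en
    return None

print(first_converged(encuts, energies, tol=0.01))  # 10 meV criterion
print(first_converged(encuts, energies, tol=0.05))  # looser 50 meV criterion
```

The answer depends strongly on the tolerance you choose, which should be set by the energy differences you ultimately care about.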
It is also important to note that it is usually a single element that determines the rate of convergence. The reason we do not just use a very high ENCUT all the time is that it is computationally expensive.

grep "Elapsed time (sec):" molecules/co-en-*/OUTCAR

molecules/co-en-250/OUTCAR:  Elapsed time (sec):  11.634
molecules/co-en-300/OUTCAR:  Elapsed time (sec):  14.740
molecules/co-en-350/OUTCAR:  Elapsed time (sec):  13.577
molecules/co-en-400/OUTCAR:  Elapsed time (sec):  16.310
molecules/co-en-450/OUTCAR:  Elapsed time (sec):  17.704
molecules/co-en-500/OUTCAR:  Elapsed time (sec):  11.658

3.3.1.4 Cloning

You may want to clone a calculation so you can change some parameter without losing the previous result. The clone function does this, and changes the calculator over to the new directory.

from ase import Atoms, Atom
from vasp import Vasp

calc = Vasp('molecules/simple-co')
print('energy = {0} eV'.format(calc.get_atoms().get_potential_energy()))

# This creates the directory and makes it the current working directory
calc.clone('molecules/clone-1')

calc.set(encut=325)  # this will trigger a new calculation
print('energy = {0} eV'.format(calc.get_atoms().get_potential_energy()))

energy = -14.69111507 eV
energy = -14.77117554 eV

3.3.2 Visualizing electron density

The electron density is a 3\(d\) quantity: for every \((x, y, z)\) point, there is a charge density. That means we need four numbers for each point: \((x, y, z)\) and \(\rho(x, y, z)\). Below we show an example (Figure fig:cd1) of plotting the charge density, and we consider some issues we have to deal with when visualizing volumetric data in unit cells with periodic boundary conditions. We will use the results from a previous calculation.
from vasp import Vasp
from enthought.mayavi import mlab
from ase.data import vdw_radii
from ase.data.colors import cpk_colors

calc = Vasp('molecules/simple-co')
calc.clone('molecules/co-chg')
calc.set(lcharg=True)
calc.stop_if(calc.potential_energy is None)

atoms = calc.get_atoms()
x, y, z, cd = calc.get_charge_density()

# make a white figure
mlab.figure(1, bgcolor=(1, 1, 1))

# plot the atoms as spheres
for atom in atoms:
    mlab.points3d(atom.x, atom.y, atom.z,
                  scale_factor=vdw_radii[atom.number] / 5.,  # this determines the size of the atom
                  resolution=20,
                  # a tuple is required for the color
                  color=tuple(cpk_colors[atom.number]),
                  scale_mode='none')

# plot the charge density as an isosurface
mlab.contour3d(x, y, z, cd)

mlab.view(azimuth=-90, elevation=90, distance='auto')
mlab.savefig('images/co-cd.png')

Figure 13: Charge density of a CO molecule that is located at the origin. The electron density that is outside the cell is wrapped around to the other corners. \label{fig:cd1}

If we take care to center the CO molecule in the unit cell, we get a nicer looking result.

from vasp import Vasp
from enthought.mayavi import mlab
from ase.data import vdw_radii
from ase.data.colors import cpk_colors
from ase import Atom, Atoms

atoms = Atoms([Atom('C', [2.422, 0.0, 0.0]),
               Atom('O', [3.578, 0.0, 0.0])],
              cell=(10, 10, 10))
atoms.center()

calc = Vasp('molecules/co-centered',
            encut=350,
            xc='PBE',
            atoms=atoms)
calc.set(lcharg=True)
calc.stop_if(calc.potential_energy is None)

atoms = calc.get_atoms()
x, y, z, cd = calc.get_charge_density()

mlab.figure(bgcolor=(1, 1, 1))

# plot the atoms as spheres
for atom in atoms:
    mlab.points3d(atom.x, atom.y, atom.z,
                  scale_factor=vdw_radii[atom.number] / 5.,
                  resolution=20,
                  color=tuple(cpk_colors[atom.number]),
                  scale_mode='none')

# plot the charge density as a transparent isosurface
mlab.contour3d(x, y, z, cd, transparent=True)

# this view was found empirically by iteration
mlab.view(azimuth=-90, elevation=90, distance='auto')
mlab.savefig('images/co-centered-cd.png')
mlab.show()

Figure 14: Charge density of a CO molecule centered in the unit cell. Now the electron density is centered in the unit cell. \label{fig:cd2}

TODO: how to make this figure

3.3.3 Visualizing electron density differences

Here we visualize how charge moves in a benzene ring when you substitute an H atom with an electronegative Cl atom.
#!/usr/bin/env python
from ase import *
from ase.structure import molecule
from vasp import Vasp

### Setup calculators
benzene = molecule('C6H6')
benzene.set_cell([10, 10, 10])
benzene.center()

calc1 = Vasp('molecules/benzene',
             xc='PBE',
             nbands=18,
             encut=350,
             atoms=benzene)
calc1.set(lcharg=True)

chlorobenzene = molecule('C6H6')
chlorobenzene.set_cell([10, 10, 10])
chlorobenzene.center()
chlorobenzene[11].symbol = 'Cl'

calc2 = Vasp('molecules/chlorobenzene',
             xc='PBE',
             nbands=22,
             encut=350,
             atoms=chlorobenzene)
calc2.set(lcharg=True)

calc2.stop_if(None in (calc1.potential_energy, calc2.potential_energy))

x1, y1, z1, cd1 = calc1.get_charge_density()
x2, y2, z2, cd2 = calc2.get_charge_density()

cdiff = cd2 - cd1
print(cdiff.min(), cdiff.max())

##########################################
##### set up visualization of charge difference
from enthought.mayavi import mlab
mlab.contour3d(x1, y1, z1, cdiff, contours=[-0.02, 0.02])
mlab.savefig('images/cdiff.png')

(-2.0821159999999987, 2.9688999999999979)

#!/usr/bin/env python
from ase import *
from ase.structure import molecule
from vasp import Vasp
import bisect
import numpy as np

def vinterp3d(x, y, z, u, xi, yi, zi):
    "Interpolate the point (xi, yi, zi) from the values at u(x, y, z)"
    p = np.array([xi, yi, zi])

    # 1D arrays of coordinates
    xv = x[:, 0, 0]
    yv = y[0, :, 0]
    zv = z[0, 0, :]

    # we subtract 1 because bisect tells us where to insert the
    # element to maintain an ordered list, so we want the index to the
    # left of that point
    i = bisect.bisect_right(xv, xi) - 1
    j = bisect.bisect_right(yv, yi) - 1
    k = bisect.bisect_right(zv, zi) - 1

    if i == len(x) - 1:
        i = len(x) - 2
    elif i < 0:
        i = 0
    if j == len(y) - 1:
        j = len(y) - 2
    elif j < 0:
        j = 0
    if k == len(z) - 1:
        k = len(z) - 2
    elif k < 0:
        k = 0

    # points at edge of cell.
    # We only need P1, P2, P3, and P5
    P1 = np.array([x[i, j, k], y[i, j, k], z[i, j, k]])
    P2 = np.array([x[i + 1, j, k], y[i + 1, j, k], z[i + 1, j, k]])
    P3 = np.array([x[i, j + 1, k], y[i, j + 1, k], z[i, j + 1, k]])
    P5 = np.array([x[i, j, k + 1], y[i, j, k + 1], z[i, j, k + 1]])

    # values of u at the corners of the cell
    u1 = u[i, j, k]
    u2 = u[i + 1, j, k]
    u3 = u[i, j + 1, k]
    u4 = u[i + 1, j + 1, k]
    u5 = u[i, j, k + 1]
    u6 = u[i + 1, j, k + 1]
    u7 = u[i, j + 1, k + 1]
    u8 = u[i + 1, j + 1, k + 1]

    # cell basis vectors -- not the unit cell, but the voxel cell containing the point
    cbasis = np.array([P2 - P1,
                       P3 - P1,
                       P5 - P1])

    # now get the interpolated point in terms of the cell basis
    s = np.dot(np.linalg.inv(cbasis.T), np.array([xi, yi, zi]) - P1)

    # now s = (sa, sb, sc), which are fractional coordinates in the vector space
    # next we do the interpolations
    ui1 = u1 + s[0] * (u2 - u1)
    ui2 = u3 + s[0] * (u4 - u3)
    ui3 = u5 + s[0] * (u6 - u5)
    ui4 = u7 + s[0] * (u8 - u7)

    ui5 = ui1 + s[1] * (ui2 - ui1)
    ui6 = ui3 + s[1] * (ui4 - ui3)

    ui7 = ui5 + s[2] * (ui6 - ui5)

    return ui7

### Setup calculators
calc = Vasp('molecules/benzene')
benzene = calc.get_atoms()
x1, y1, z1, cd1 = calc.get_charge_density()

calc = Vasp('molecules/chlorobenzene')
x2, y2, z2, cd2 = calc.get_charge_density()

cdiff = cd2 - cd1

# we need the x-y plane at z = 5
import matplotlib.pyplot as plt
from scipy import mgrid

X, Y = mgrid[0:10:25j, 0:10:25j]
cdiff_plane = np.zeros(X.shape)

ni, nj = X.shape
for i in range(ni):
    for j in range(nj):
        cdiff_plane[i, j] = vinterp3d(x1, y1, z1, cdiff,
                                      X[i, j], Y[i, j], 5.0)

plt.imshow(cdiff_plane.T,
           vmin=-0.02,  # min charge diff to plot
           vmax=0.02,   # max charge diff to plot
           cmap=plt.cm.gist_heat,  # colormap
           extent=(0., 10., 0., 10.))  # axes limits

# plot atom positions. It is a little tricky to see why we reverse the
# x and y coordinates. That is because imshow does that.
x = [a.x for a in benzene]
y = [a.y for a in benzene]
plt.plot(x, y, 'bo')

plt.colorbar()  # add colorbar
plt.savefig('images/cdiff-imshow.png')
plt.show()

3.3.4 Dipole moments

The dipole moment is a vector describing the separation of electronic (negative) and nuclear (positive) charge. The magnitude of this vector is the dipole moment, which has units of Coulomb-meters, or more commonly Debye. The symmetry of a molecule determines whether it has a dipole moment or not. Below we compute the dipole moment of CO. We must integrate the electron density to find the center of electronic charge, and compute a sum over the nuclei to find the center of positive charge.

from vasp import Vasp
from vasp.VaspChargeDensity import VaspChargeDensity
import numpy as np
from ase.units import Debye
import os

calc = Vasp('molecules/co-centered')
atoms = calc.get_atoms()
calc.stop_if(atoms.get_potential_energy() is None)

vcd = VaspChargeDensity('molecules/co-centered/CHG')
cd = np.array(vcd.chg[0])
n0, n1, n2 = cd.shape

s0 = 1.0 / n0
s1 = 1.0 / n1
s2 = 1.0 / n2

X, Y, Z = np.mgrid[0.0:1.0:s0,
                   0.0:1.0:s1,
                   0.0:1.0:s2]

C = np.column_stack([X.ravel(),
                     Y.ravel(),
                     Z.ravel()])

atoms = calc.get_atoms()
uc = atoms.get_cell()
real = np.dot(C, uc)

# now convert the arrays back to the unit cell shape
x = np.reshape(real[:, 0], (n0, n1, n2))
y = np.reshape(real[:, 1], (n0, n1, n2))
z = np.reshape(real[:, 2], (n0, n1, n2))

nelements = n0 * n1 * n2
voxel_volume = atoms.get_volume() / nelements
total_electron_charge = -cd.sum() * voxel_volume

electron_density_center = np.array([(cd * x).sum(),
                                    (cd * y).sum(),
                                    (cd * z).sum()])
electron_density_center *= voxel_volume
electron_density_center /= total_electron_charge

electron_dipole_moment = -electron_density_center * total_electron_charge

# now the ion charge center.
# We only need the Zval listed in the POTCAR
from vasp.POTCAR import get_ZVAL

LOP = calc.get_pseudopotentials()
ppp = os.environ['VASP_PP_PATH']

zval = {}
for sym, ppath, hash in LOP:
    fullpath = os.path.join(ppp, ppath)
    z = get_ZVAL(fullpath)
    zval[sym] = z

ion_charge_center = np.array([0.0, 0.0, 0.0])
total_ion_charge = 0.0
for atom in atoms:
    Z = zval[atom.symbol]
    total_ion_charge += Z
    pos = atom.position
    ion_charge_center += Z * pos

ion_charge_center /= total_ion_charge
ion_dipole_moment = ion_charge_center * total_ion_charge

dipole_vector = (ion_dipole_moment + electron_dipole_moment)
dipole_moment = ((dipole_vector**2).sum())**0.5 / Debye

print('The dipole vector is {0}'.format(dipole_vector))
print('The dipole moment is {0:1.2f} Debye'.format(dipole_moment))

The dipole vector is [ 0.02048406  0.00026357  0.00026357]
The dipole moment is 0.10 Debye

Note that a function using the code above exists in vasp, which makes it trivial to compute the dipole moment. Here is an example of its usage.

from vasp import Vasp
from ase.units import Debye

calc = Vasp('molecules/co-centered')
dipole_moment = calc.get_dipole_moment()
print('The dipole moment is {0:1.2f} Debye'.format(dipole_moment))

The dipole moment is 0.10 Debye

3.3.5 The density of states (DOS)

The density of states (DOS) gives you the number of electronic states (i.e., the orbitals) that have a particular energy. We can get this information from the calculation we ran earlier, without having to run another DFT calculation.

from vasp import Vasp
from ase.dft.dos import DOS
import matplotlib.pyplot as plt

calc = Vasp('molecules/simple-co')  # we already ran this!

dos = DOS(calc)
plt.plot(dos.get_energies(), dos.get_dos())
plt.xlabel('Energy - $E_f$ (eV)')
plt.ylabel('DOS')
plt.savefig('images/co-dos.png')

Figure 17: Density of states for a CO molecule.
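A common sanity check on any computed DOS is that integrating it up to the Fermi level recovers the number of occupied states: at 0 K, \(N = \int_{-\infty}^{E_f} g(E)\, dE\). A NumPy sketch with a synthetic Gaussian DOS (made-up data, not VASP output), normalized to hold 6 states:

```python
import numpy as np

# synthetic DOS: a Gaussian band centered 2 eV below E_f = 0, holding 6 states
E = np.linspace(-10, 5, 2000)  # energy grid relative to E_f (eV)
sigma = 0.5
g = 6.0 / (sigma * np.sqrt(2 * np.pi)) * np.exp(-(E + 2.0)**2 / (2 * sigma**2))

# occupied states at 0 K: integrate the DOS up to the Fermi level
occ = E <= 0.0
dE = E[1] - E[0]
n_electrons = np.sum(g[occ]) * dE  # simple rectangle-rule integration
print(n_electrons)  # close to 6, since the band lies ~4 sigma below E_f
```

The same check applied to a real calculation (using dos.get_energies() and dos.get_dos() above) is a quick way to catch normalization or smearing problems.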
3.3.6 Atom-projected density of states on molecules

Let us consider which states in the density of states belong to which atoms in a molecule. This can only be a qualitative consideration, because the orbitals on the atoms often hybridize to form molecular orbitals; e.g. in methane the \(s\) and \(p\) orbitals can form what we call \(sp^3\) orbitals. We can compute the atom-projected density of states in VASP, which is done by projecting the wave function onto localized atomic orbitals. Here is an example. We will consider the CO molecule.

To get the atom-projected density of states, we must set RWIGS for each atom. This parameter defines the radius of the sphere around the atom which cuts off the projection. The total density of states and projected density of states information comes from the DOSCAR file. Note that unlike the DOS, here we must run another calculation because we did not specify the atom-projected keywords above. Our strategy is to get the atoms from the previous calculation, and use them in a new calculation.

You could redo the calculation in the same directory, but you risk losing the results of the first step. That can make it difficult to reproduce a result. We advocate our approach of using multiple directories for the subsequent calculations, because it leaves a clear trail of how the work was done.

The RWIGS is not uniquely determined for an element. There are various natural choices, e.g. the ionic radius of an atom, or a value that minimizes the overlap of neighboring spheres, but these values can change slightly in different environments. You can also get spin-polarized atom-projected DOS, and magnetization-projected DOS. See for more details.
from vasp import Vasp
from ase.dft.dos import DOS
import matplotlib.pyplot as plt

# get the geometry from another calculation
calc = Vasp('molecules/simple-co')
atoms = calc.get_atoms()

calc = Vasp('molecules/co-ados',
            encut=300,
            xc='PBE',
            rwigs={'C': 1.0, 'O': 1.0},  # these are the cutoff radii for the projected states
            atoms=atoms)
calc.stop_if(calc.potential_energy is None)

# now get the results
dos = DOS(calc)
plt.plot(dos.get_energies(), dos.get_dos() + 10)

energies, c_s = calc.get_ados(0, 's')
_, c_p = calc.get_ados(0, 'p')
_, o_s = calc.get_ados(1, 's')
_, o_p = calc.get_ados(1, 'p')
_, c_d = calc.get_ados(0, 'd')
_, o_d = calc.get_ados(1, 'd')

plt.plot(energies, c_s + 6, energies, o_s + 5)
plt.plot(energies, c_p + 4, energies, o_p + 3)
plt.plot(energies, c_d, energies, o_d + 2)

plt.xlabel('Energy - $E_f$ (eV)')
plt.ylabel('DOS')
plt.legend(['DOS', 'C$_s$', 'O$_s$', 'C$_p$', 'O$_p$', 'C$_d$', 'O$_d$'],
           ncol=2, loc='best')
plt.savefig('images/co-ados.png')

Figure 18: Atom-projected DOS for a CO molecule. The total density of states and the \(s\), \(p\) and \(d\) states on the C and O are shown.

3.3.7 Electrostatic potential

This is an example of the so-called σ hole in a halogen bond. The coordinates for the CF3Br molecule were found at.
from vasp import Vasp
from ase import Atom, Atoms
from ase.io import write
from enthought.mayavi import mlab
from ase.data import vdw_radii
from ase.data.colors import cpk_colors

atoms = Atoms([Atom('C',  [ 0.0000,  0.0000, -0.8088]),
               Atom('Br', [ 0.0000,  0.0000,  1.1146]),
               Atom('F',  [ 0.0000,  1.2455, -1.2651]),
               Atom('F',  [ 1.0787, -0.6228, -1.2651]),
               Atom('F',  [-1.0787, -0.6228, -1.2651])],
              cell=(10, 10, 10))
atoms.center()

calc = Vasp('molecules/CF3Br',
            encut=350,
            xc='PBE',
            ibrion=1,
            nsw=50,
            lcharg=True,
            lvtot=True,
            lvhar=True,
            atoms=atoms)
calc.set_nbands(f=2)
calc.stop_if(calc.potential_energy is None)

x, y, z, lp = calc.get_local_potential()
x, y, z, cd = calc.get_charge_density()

mlab.figure(1, bgcolor=(1, 1, 1))  # make a white figure

# plot the atoms as spheres
for atom in atoms:
    mlab.points3d(atom.x, atom.y, atom.z,
                  scale_factor=vdw_radii[atom.number] / 5.,
                  resolution=20,
                  # a tuple is required for the color
                  color=tuple(cpk_colors[atom.number]),
                  scale_mode='none')

# plot the bonds. We want a line from C-Br, C-F, etc...
# We create a bond matrix showing which atoms are connected.
bond_matrix = [[0, 1],
               [0, 2],
               [0, 3],
               [0, 4]]

for a1, a2 in bond_matrix:
    mlab.plot3d(atoms.positions[[a1, a2], 0],  # x-positions
                atoms.positions[[a1, a2], 1],  # y-positions
                atoms.positions[[a1, a2], 2],  # z-positions
                [2, 2],
                tube_radius=0.02,
                colormap='Reds')

mlab.contour3d(x, y, z, lp)
mlab.savefig('images/halogen-ep.png')
mlab.show()

Figure 19: Plot of the electrostatic potential of CF3Br.

TODO: figure out how to do an isosurface of charge, colormapped by the local potential. See for examples of using VMD for visualization.

3.3.8 Bader analysis

Note: Thanks to @prtkm for helping improve this section ().

Bader analysis is a charge-partitioning scheme in which the charge is divided by surfaces of zero flux that define atomic basins of charge. The most modern way of calculating the Bader charges is with the bader program from Graeme Henkelman's group Henkelman2006354,doi.10.1021/ct100125x.
Let us consider a water molecule centered in a box. The strategy is first to run the calculation, and then to run the bader program on the results. We have to set laechg to True so that the all-electron core charges will be written out to files. Here we set up and run the calculation to get the densities first.

from vasp import Vasp
from ase.structure import molecule

atoms = molecule('H2O')
atoms.center(vacuum=6)

calc = Vasp('molecules/h2o-bader',
            xc='PBE',
            encut=350,
            lcharg=True,
            laechg=True,
            atoms=atoms)
print(calc.potential_energy)

-14.22250648

Now that the calculation is done, get the bader code and scripts from. We use this code to see the charges on the atoms.

from vasp import Vasp

calc = Vasp('molecules/h2o-bader')
calc.bader(ref=True, overwrite=True)

atoms = calc.get_atoms()
for atom in atoms:
    print('|{0} | {1} |'.format(atom.symbol, atom.charge))

|O | -1.2326 |
|H | 0.6161 |
|H | 0.6165 |

The results above are comparable to those from gpaw at. You can see that some charge has been "transferred" from H to O.

3.4 Geometry optimization

3.4.1 Manual determination of a bond length

The equilibrium bond length of a CO molecule is approximately the bond length that minimizes the total energy. We can find it by computing the total energy as a function of bond length and noting where the minimum is. Here is an example in VASP.

There are a few features to point out here. We want to compute five bond lengths, and each calculation is independent of all the others. vasp is set up to automatically handle jobs for you by submitting them to the queue. It raises a variety of exceptions to let you know what has happened, and you must handle these to control the workflow. We will illustrate this by the following examples.
from vasp import Vasp
from ase import Atom, Atoms

bond_lengths = [1.05, 1.1, 1.15, 1.2, 1.25]
energies = []

for d in bond_lengths:  # possible bond lengths
    co = Atoms([Atom('C', [0, 0, 0]),
                Atom('O', [d, 0, 0])],
               cell=(6, 6, 6))

    calc = Vasp('molecules/co-{0}'.format(d),  # output dir
                xc='PBE',
                nbands=6,
                encut=350,
                ismear=1,
                sigma=0.01,
                atoms=co)

    energies.append(co.get_potential_energy())
    print('d = {0:1.2f} ang'.format(d))
    print('energy = {0:1.3f} eV'.format(energies[-1] or 0))
    print('forces = (eV/ang)\n {0}'.format(co.get_forces()))
    print('')  # blank line

if None in energies:
    calc.abort()
else:
    import matplotlib.pyplot as plt
    plt.plot(bond_lengths, energies, 'bo-')
    plt.xlabel(r'Bond length ($\AA$)')
    plt.ylabel('Total energy (eV)')
    plt.savefig('images/co-bondlengths.png')

d = 1.05 ang
energy = -14.216 eV
forces = (eV/ang)
 [[-14.93017486   0.           0.        ]
 [ 14.93017486   0.           0.        ]]

d = 1.10 ang
energy = -14.722 eV
forces = (eV/ang)
 [[-5.81988086  0.          0.        ]
 [ 5.81988086  0.          0.        ]]

d = 1.15 ang
energy = -14.841 eV
forces = (eV/ang)
 [[ 0.63231023  0.          0.        ]
 [-0.63231023  0.          0.        ]]

d = 1.20 ang
energy = -14.691 eV
forces = (eV/ang)
 [[ 5.09138064  0.          0.        ]
 [-5.09138064  0.          0.        ]]

d = 1.25 ang
energy = -14.355 eV
forces = (eV/ang)
 [[ 8.14027842  0.          0.        ]
 [-8.14027842  0.          0.        ]]

Before continuing, it is worth looking at some other approaches to setting up and running these calculations. Here we consider a functional approach that uses list comprehensions extensively.

from vasp import Vasp
from ase import Atom, Atoms

bond_lengths = [1.05, 1.1, 1.15, 1.2, 1.25]

ATOMS = [Atoms([Atom('C', [0, 0, 0]),
                Atom('O', [d, 0, 0])],
               cell=(6, 6, 6))
         for d in bond_lengths]

calcs = [Vasp('molecules/co-{0}'.format(d),  # output dir
              xc='PBE',
              nbands=6,
              encut=350,
              ismear=1,
              sigma=0.01,
              atoms=atoms)
         for d, atoms in zip(bond_lengths, ATOMS)]

energies = [atoms.get_potential_energy() for atoms in ATOMS]
print(energies)

[-14.21584765, -14.72174343, -14.84115208, -14.69111507, -14.35508371]

We can retrieve the data similarly.
from vasp import Vasp bond_lengths = [1.05, 1.1, 1.15, 1.2, 1.25] calcs = [Vasp('molecules/co-{0}'.format(d)) for d in bond_lengths] energies = [calc.get_atoms().get_potential_energy() for calc in calcs] print(energies) [-14.21584765, -14.72174343, -14.84115208, -14.69111507, -14.35508371] from vasp import Vasp from ase.db import connect bond_lengths = [1.05, 1.1, 1.15, 1.2, 1.25] calcs = [Vasp('molecules/co-{0}'.format(d)) for d in bond_lengths] con = connect('co-database.db', append=False) for atoms in [calc.get_atoms() for calc in calcs]: con.write(atoms) Here we just show that there are entries in our database. If you run the code above many times, each time will add new entries. ase-db co-database.db id|age|user |formula|calculator| energy| fmax|pbc| volume|charge| mass| smax|magmom 1|12s|jkitchin|CO |vasp |-14.216|14.930|TTT|216.000| 0.000|28.010|0.060| 0.000 2|10s|jkitchin|CO |vasp |-14.722| 5.820|TTT|216.000| 0.000|28.010|0.017| 0.000 3| 9s|jkitchin|CO |vasp |-14.841| 0.632|TTT|216.000| 0.000|28.010|0.017| 0.000 4| 9s|jkitchin|CO |vasp |-14.691| 5.091|TTT|216.000| 0.000|28.010|0.041| 0.000 5| 7s|jkitchin|CO |vasp |-14.355| 8.140|TTT|216.000| 0.000|28.010|0.060| 0.000 Rows: 5 This database is now readable in Python too. Here we read in all the results. Later we will learn how to select specific entries. from ase.io import read ATOMS = read('co-database.db', ':') print([a[0].x - a[1].x for a in ATOMS]) print([atoms.get_potential_energy() for atoms in ATOMS]) [-1.0499999999999998, -1.09999998, -1.15000002, -1.2000000000000002, -1.2499999800000001] [-14.21584765, -14.72174343, -14.84115208, -14.69111507, -14.35508371] Now, back to the goal of finding the minimum. To find the minimum we could run more calculations, but a simpler and faster way is to fit a polynomial to the data, and find the analytical minimum. The results are shown in Figure fig:co-bondlengths. 
from vasp import Vasp
import numpy as np
import matplotlib.pyplot as plt

bond_lengths = [1.05, 1.1, 1.15, 1.2, 1.25]
energies = []
for d in bond_lengths:  # possible bond lengths
    calc = Vasp('molecules/co-{0}'.format(d))
    atoms = calc.get_atoms()
    energies.append(atoms.get_potential_energy())

# Now we fit an equation - cubic polynomial
pp = np.polyfit(bond_lengths, energies, 3)
dp = np.polyder(pp)  # first derivative - quadratic

# we expect two roots from the quadratic eqn. These are where the
# first derivative is equal to zero.
roots = np.roots(dp)

# The minimum is where the second derivative is positive.
dpp = np.polyder(dp)  # second derivative - line
secd = np.polyval(dpp, roots)

d_min = roots[secd > 0]
minE = np.polyval(pp, d_min)

print('The minimum energy is {0[0]} eV at d = {1[0]} Ang'.format(minE, d_min))

# plot the fit
x = np.linspace(1.05, 1.25)
fit = np.polyval(pp, x)

plt.plot(bond_lengths, energies, 'bo ')
plt.plot(x, fit, 'r-')
plt.plot(d_min, minE, 'm* ')
plt.legend(['DFT', 'fit', 'minimum'], numpoints=1)
plt.xlabel(r'Bond length ($\AA$)')
plt.ylabel('Total energy (eV)')
plt.savefig('images/co-bondlengths.png')

The minimum energy is -14.8458440947 eV at d = 1.14437582331 Ang

Figure 21: Energy vs CO bond length. \label{fig:co-bondlengths}

3.4.2 Automatic geometry optimization with VASP

It is generally the case that the equilibrium geometry of a system is the one that minimizes the total energy and forces. Since each atom has three degrees of freedom, you can quickly get a high dimensional optimization problem. Luckily, VASP has built-in geometry optimization using the IBRION and NSW tags. Here we compute the bond length for a CO molecule, letting VASP do the geometry optimization for us. Here are the most common choices for IBRION.

|IBRION | algorithm |
|1 | quasi-Newton (RMM-DIIS) relaxation |
|2 | conjugate gradient relaxation |
|3 | damped molecular dynamics |
|5, 6 | finite-difference vibrational calculations |
|7, 8 | density functional perturbation theory (vibrations) |

VASP applies a criterion for stopping a geometry optimization. When the change in energy between two steps is less than 0.001 eV (or 10*EDIFF), the relaxation is stopped. This criterion is controlled by the EDIFFG tag.
If you prefer to stop based on forces, set EDIFFG=-0.05, i.e. to a negative number. The units of force are eV/Å. For most work, a force tolerance of 0.05 eV/Å is usually sufficient.

from ase import Atom, Atoms
from vasp import Vasp

co = Atoms([Atom('C', [0, 0, 0]),
            Atom('O', [1.2, 0, 0])],
           cell=(6, 6, 6))

calc = Vasp('molecules/co-cg',
            xc='PBE',
            nbands=6,
            encut=350,
            ismear=1,
            sigma=0.01,  # this is small for a molecule
            ibrion=2,    # conjugate gradient optimizer
            nsw=5,       # do at least 5 steps to relax
            atoms=co)

print('Forces')
print('=======')
print(co.get_forces())

pos = co.get_positions()
d = ((pos[0] - pos[1])**2).sum()**0.5
print('Bondlength = {0:1.2f} angstroms'.format(d))

Forces
=======
[[-0.8290116 0. 0. ]
 [ 0.8290116 0. 0. ]]
Bondlength = 1.14 angstroms

3.4.3 Relaxation of a water molecule

It is not more complicated to relax more atoms; it just may take longer because there are more electrons and degrees of freedom. Here we relax a water molecule, which has three atoms.

from ase import Atoms, Atom
from vasp import Vasp

atoms = Atoms([Atom('H', [0.5960812, -0.7677068, 0.0000000]),
               Atom('O', [0.0000000, 0.0000000, 0.0000000]),
               Atom('H', [0.5960812, 0.7677068, 0.0000000])],
              cell=(8, 8, 8))
atoms.center()

calc = Vasp('molecules/h2o-relax-centered',
            xc='PBE',
            encut=400,
            ismear=0,  # Gaussian smearing
            ibrion=2,
            ediff=1e-8,
            nsw=10,
            atoms=atoms)

print(atoms.get_positions())
print("forces")
print('=======')
print(atoms.get_forces())

[[ 4.2981572 3.23149312 4. ]
 [ 3.70172616 4. 4. ]
 [ 4.2981572 4.76850688 4. ]]
forces
=======
[[ -3.49600000e-05 5.06300000e-05 0.00000000e+00]
 [ 6.99200000e-05 0.00000000e+00 0.00000000e+00]
 [ -3.49600000e-05 -5.06300000e-05 0.00000000e+00]]

from vasp import Vasp
calc = Vasp('molecules/h2o-relax-centered')
from ase.visualize import view
view(calc.traj)

3.5 Vibrational frequencies

3.5.1 Manual calculation of vibrational frequency

The principal idea in calculating vibrational frequencies is that we consider a molecular system as masses connected by springs. If the springs are Hookean, e.g. the force is proportional to the displacement, then we can readily solve the equations of motion and find that the vibrational frequencies are related to the force constants and the masses of the atoms.
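The linear-algebra step behind this is worth seeing in isolation: the force constants form a Hessian matrix, and the squared vibrational frequencies are the eigenvalues of its mass-weighted form. Here is a minimal numpy sketch for a 1-D diatomic; the masses are roughly those of C and O, and the spring constant is illustrative, not from a calculation.

```python
import numpy as np

# Toy 1-D diatomic: two masses joined by one Hookean spring.
# E = k/2 (x1 - x2)^2, so the Hessian is H_ij = d^2 E / dx_i dx_j.
m1, m2 = 12.011, 15.999   # roughly C and O masses (amu)
k = 10.0                  # spring constant (illustrative units)

H = k * np.array([[ 1.0, -1.0],
                  [-1.0,  1.0]])

# mass-weight the Hessian: M^{-1/2} H M^{-1/2}; eigenvalues are omega^2
Minv_sqrt = np.diag(1.0 / np.sqrt([m1, m2]))
w2 = np.linalg.eigvalsh(Minv_sqrt.dot(H).dot(Minv_sqrt))

# one eigenvalue is ~0 (rigid translation); the other is omega^2 = k/mu
mu = 1.0 / (1.0 / m1 + 1.0 / m2)
print(np.sqrt(w2.max()) / (2 * np.pi))  # numerical frequency
print(np.sqrt(k / mu) / (2 * np.pi))    # analytical sqrt(k/mu)/(2 pi)
```

With 3N coordinates the same recipe yields the 3N modes discussed later in this section; for the single spring it simply recovers the textbook result.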
For example, in a simple molecule like CO where there is only one spring, the frequency is:

\(\nu = \frac{1}{2\pi}\sqrt{k/\mu}\)

where \(\frac{1}{\mu} = \frac{1}{m_C} + \frac{1}{m_O}\) and \(k\) is the spring constant. We will compute the value of \(k\) from DFT calculations as follows: \(k = \frac{\partial^2E}{\partial x^2}\) at the equilibrium bond length. We actually already have the data to do this from Manual determination. We only need to fit an equation to the energy vs. bond-length data, find the minimum energy bond-length, and then evaluate the second derivative of the fitted function at the minimum. We will use a cubic polynomial for demonstration here. Polynomials are numerically convenient because they are easy to fit, and it is trivial to get the roots and derivatives of the polynomials, as well as to evaluate them at other points using numpy.polyfit, numpy.polyder, and numpy.polyval.

from vasp import Vasp
import numpy as np
from ase.units import *

bond_lengths = [1.05, 1.1, 1.15, 1.2, 1.25]
energies = []

for d in bond_lengths:
    calc = Vasp('molecules/co-{0}'.format(d))
    atoms = calc.get_atoms()
    energies.append(atoms.get_potential_energy())

# fit the data
pars = np.polyfit(bond_lengths, energies, 3)
xfit = np.linspace(1.05, 1.25)
efit = np.polyval(pars, xfit)

# first derivative; its roots are where the slope is zero
dpars = np.polyder(pars)
droots = np.roots(dpars)

# second derivative; the minimum is the root where it is positive
ddpars = np.polyder(dpars)
d_min = droots[np.polyval(ddpars, droots) > 0]

# curvature at minimum = force constant in SI units
k = np.polyval(ddpars, d_min) / (J / m**2)

# mu, reduced mass
from ase.data import atomic_masses

C_mass = atomic_masses[6] / kg
O_mass = atomic_masses[8] / kg

mu = 1.0 / (1.0 / C_mass + 1.0 / O_mass)

frequency = 1. / (2.
* np.pi) * np.sqrt(k / mu)

print('The CO vibrational frequency is {0[0]} Hz'.format(frequency))
print('The CO vibrational frequency is {0[0]} cm^{{-1}}'.format(frequency / 3e10))

import matplotlib.pyplot as plt
plt.plot(bond_lengths, energies, 'bo ')
plt.plot(xfit, efit, 'b-')
plt.xlabel(r'Bond length ($\AA$)')
plt.ylabel('Total energy (eV)')
plt.savefig('images/co-freq.png')

The CO vibrational frequency is 6.43186126691e+13 Hz
The CO vibrational frequency is 2143.95375564 cm^{-1}

This result is in good agreement with experiment. The procedure described above is basically how many vibrational calculations are performed. With more atoms, you have to determine a force constant matrix and diagonalize it. For more details, see wilson1955. In practice, we usually allow a packaged code to automate this, which we cover in Automated vibrational calculations.

We now consider how much energy is in this vibration. This is commonly called zero-point energy (ZPE), and it is defined as \(E_{ZPE} = \frac{1}{2} h \nu\) for a single mode, where \(h\) is Planck's constant (4.135667516e-15 eV*s).

c = 3e10  # speed of light cm/s
h = 4.135667516e-15  # eV*s
nu = 2143.6076625*c  # 1/s
E_zpe = 0.5*h*nu

print('E_ZPE = {0:1.3f} eV'.format(E_zpe))

E_ZPE = 0.133 eV

This is a reasonable amount of energy! Zero-point energy increases with increasing vibrational frequency, and tends to be very important for small atoms. A final note is that this analysis is in the "harmonic approximation". The frequency equation is the solution to a harmonic oscillator. If the spring is non-linear, then there are anharmonic effects that may become important, especially at higher temperatures.

3.5.2 TODO Automated vibrational calculations

VASP has built-in capability for performing vibrational calculations. We access the capability by using a new value for IBRION. The values of 5 and 6 calculate the Hessian matrix using finite differences. For IBRION=5, all atoms that are not constrained are displaced.
For IBRION=6, only symmetry inequivalent displacements are considered, which makes the calculations slightly cheaper. You can specify the number of displacements with NFREE. The default number of displacements is 2. You can also specify the size of the displacement with POTIM (the default is 0.015 Å).

# <<water-vib>>
# adapted from
from ase import Atoms, Atom
from vasp import Vasp
import ase.units

atoms = Atoms([Atom('H', [0.5960812, -0.7677068, 0.0000000]),
               Atom('O', [0.0000000, 0.0000000, 0.0000000]),
               Atom('H', [0.5960812, 0.7677068, 0.0000000])],
              cell=(8, 8, 8))
atoms.center()

calc = Vasp('molecules/h2o_vib',
            xc='PBE',
            encut=400,
            ismear=0,     # Gaussian smearing
            ibrion=6,     # finite differences with symmetry
            nfree=2,      # central differences (default)
            potim=0.015,  # default as well
            ediff=1e-8,   # for vibrations you need precise energies
            nsw=1,        # Set to 1 for vibrational calculation
            atoms=atoms)

print('Forces')
print('======')
print(atoms.get_forces())
print('')

calc.stop_if(calc.potential_energy is None)

# vibrational energies are in eV
energies, modes = calc.get_vibrational_modes()
print('energies\n========')
for i, e in enumerate(energies):
    print('{0:02d}: {1} eV'.format(i, e))

Forces
======
[[ 0.01810349 -0.03253721 -0.00127275]
 [-0.03620698 0. 0.0025455 ]
 [ 0.01810349 0.03253721 -0.00127275]]

energies
========
00: 0.475855773 eV
01: 0.46176517 eV
02: 0.196182182 eV
03: 0.007041992 eV
04: 0.002445078 eV
05: (0.000292003+0j) eV
06: (0.012756432+0j) eV
07: (0.01305212+0j) eV
08: (0.015976377+0j) eV

Note we get 9 frequencies here. Water has 3 atoms, with three degrees of freedom each, leading to 9 possible combinations of collective motions. Three of those collective motions are translations, i.e. where all atoms move in the same direction (either \(x\), \(y\) or \(z\)) and there is no change in the total energy of the molecule. Another three of those motions are rotations, which also do not change the total energy of the molecule. That leaves 3N-6 = 3 degrees of vibrational freedom where some or all of the bonds are stretched, resulting in a change in the total energy.
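The mode energies above are in eV; to compare them with wavenumbers, divide by \(hc\). A quick check on the three vibrational modes from the output above, using the same constants as elsewhere in this chapter:

```python
# Convert the three vibrational energies (eV) from the output above
# into wavenumbers (cm^-1) via nu = E / (h c).
h = 4.135667516e-15  # Planck constant, eV*s
c = 3e10             # speed of light, cm/s

energies_eV = [0.475855773, 0.46176517, 0.196182182]
wavenumbers = [e / (h * c) for e in energies_eV]
for e, nu in zip(energies_eV, wavenumbers):
    print('{0:1.4f} eV -> {1:.0f} cm^-1'.format(e, nu))
```

These land within a few cm-1 of the calculated values quoted in the list below; the small differences come from rounding of the constants.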
The modes of water vibration are (with our calculated values in parentheses):

- a symmetric stretch at 3657 cm-1 (3723)
- an asymmetric stretch at 3756 cm-1 (3836)
- and a bending mode at 1595 cm-1 (1583)

The results are not too far off, and more accurate frequencies may be possible with a smaller displacement (POTIM), or by using IBRION=7 or 8. Let us briefly discuss how to determine which vectors are vibrations and which are rotations or translations. One way is to visualize the modes. The vibrations are easy to spot. The rotations/translations are not always cleanly separable. This is an issue of accuracy and convergence. We usually do not worry about this because these modes are usually not important.

- mode 0 is an asymmetric stretch
- mode 1 is a symmetric stretch
- mode 2 is a bending mode
- mode 3 is a mixed translation/rotation
- mode 4 is a rotation
- mode 5 is a translation
- mode 6 is a rotation
- mode 7 is a partial translation
- mode 8 is a rotation

# <<h2o-vib-vis>>
from vasp import Vasp
import numpy as np

calc = Vasp('molecules/h2o_vib')
energies, modes = calc.get_vibrational_modes(mode=0, massweighted=True, show=True)

See for a more quantitative discussion of these modes, identifying them, and a method to project the rotations and translations out of the Hessian matrix.

3.5.2.1 Zero-point energy for multiple modes

For a molecule with lots of vibrational modes the zero-point energy is defined as the sum over all the vibrational modes:

\(E_{ZPE} = \sum_i \frac{1}{2} h \nu_i\)

Here is an example for water. Note we do not sum over the imaginary modes. We should also ignore the rotational and translational modes (some of those are imaginary, but some are just small).

from vasp import Vasp
import numpy as np

c = 3e10  # speed of light cm/s
h = 4.135667516e-15  # eV*s

# first, get the frequencies.
calc = Vasp('molecules/h2o_vib')
freq = calc.get_vibrational_frequencies()

ZPE = 0.0
for f in freq:
    if not isinstance(f, float):
        continue  # skip complex numbers
    nu = f * c  # convert to frequency
    ZPE += 0.5 * h * nu

print('The ZPE of water is {0:1.3f} eV'.format(ZPE))

# one liner
ZPE = np.sum([0.5 * h * f * c for f in freq if isinstance(f, float)])
print('The ZPE of water is {0:1.3f} eV'.format(ZPE))

Note the zero-point energy of water is also fairly high (more than 0.5 eV). That is because of the high frequency O-H stretches.

3.6 Simulated infrared spectra

At there is a recipe for computing the infrared vibrational spectroscopy intensities in VASP. We are going to do that for water here. First, we will relax a water molecule.

from ase import Atoms, Atom
from vasp import Vasp

atoms = Atoms([Atom('H', [0.5960812, -0.7677068, 0.0000000]),
               Atom('O', [0.0000000, 0.0000000, 0.0000000]),
               Atom('H', [0.5960812, 0.7677068, 0.0000000])],
              cell=(8, 8, 8))
atoms.center()

calc = Vasp('molecules/h2o_relax',
            xc='PBE',
            encut=400,
            ismear=0,  # Gaussian smearing
            ibrion=2,
            ediff=1e-8,
            nsw=10,
            atoms=atoms)

print('Forces')
print('===========================')
print(atoms.get_forces())

Forces
===========================
[[ -3.80700000e-05 5.32200000e-05 0.00000000e+00]
 [ 7.61400000e-05 0.00000000e+00 0.00000000e+00]
 [ -3.80700000e-05 -5.32200000e-05 0.00000000e+00]]

Next, we instruct VASP to compute the vibrational modes using density functional perturbation theory with IBRION=7. Note, this is different than in 3.5 where finite differences were used.
from vasp import Vasp # read in relaxed geometry calc = Vasp('molecules/h2o_relax') atoms = calc.get_atoms() # now define a new calculator calc = Vasp('molecules/h2o_vib_dfpt', xc='PBE', encut=400, ismear=0, # Gaussian smearing ibrion=7, # switches on the DFPT vibrational analysis (with # no symmetry constraints) nfree=2, potim=0.015, lepsilon=True, # enables to calculate and to print the BEC # tensors lreal=False, nsw=1, nwrite=3, # affects OUTCAR verbosity: explicitly forces # SQRT(mass)-divided eigenvectors to be printed atoms=atoms) print(calc.potential_energy) -14.22662275 To analyze the results, this shell script was provided to extract the results. #!/bin/bash # A utility for calculating the vibrational intensities from VASP output (OUTCAR) # (C) David Karhanek, 2011-03-25, ICIQ Tarragona, Spain () # extract Born effective charges tensors printf "..reading OUTCAR" BORN_NROWS=`grep NIONS OUTCAR | awk '{print $12*4+1}'` if [ `grep 'BORN' OUTCAR | wc -l` = 0 ] ; then \ printf " .. FAILED! Born effective charges missing! Bye! \n\n" ; exit 1 ; fi grep "in e, cummulative" -A $BORN_NROWS OUTCAR > born.txt # extract Eigenvectors and eigenvalues if [ `grep 'SQRT(mass)' OUTCAR | wc -l` != 1 ] ; then \ printf " .. FAILED! Restart VASP with NWRITE=3! Bye! 
\n\n" ; exit 1 ; fi EIG_NVIBS=`grep -A 2000 'SQRT(mass)' OUTCAR | grep 'cm-1' | wc -l` EIG_NIONS=`grep NIONS OUTCAR | awk '{print $12}'` EIG_NROWS=`echo "($EIG_NIONS+3)*$EIG_NVIBS+3" | bc` grep -A $(($EIG_NROWS+2)) 'SQRT(mass)' OUTCAR | tail -n $(($EIG_NROWS+1)) | sed 's/f\/i/fi /g' > eigenvectors.txt printf " ..done\n" # set up a new directory, split files - prepare for parsing printf "..splitting files" mkdir intensities ; mv born.txt eigenvectors.txt intensities/ cd intensities/ let NBORN_NROWS=BORN_NROWS-1 let NEIG_NROWS=EIG_NROWS-3 let NBORN_STEP=4 let NEIG_STEP=EIG_NIONS+3 tail -n $NBORN_NROWS born.txt > temp.born.txt tail -n $NEIG_NROWS eigenvectors.txt > temp.eige.txt mkdir inputs ; mv born.txt eigenvectors.txt inputs/ split -a 3 -d -l $NEIG_STEP temp.eige.txt temp.ei. split -a 3 -d -l $NBORN_STEP temp.born.txt temp.bo. mkdir temps01 ; mv temp.born.txt temp.eige.txt temps01/ for nu in `seq 1 $EIG_NVIBS` ; do let nud=nu-1 ; ei=`printf "%03u" $nu` ; eid=`printf "%03u" $nud` ; mv temp.ei.$eid eigens.vib.$ei done for s in `seq 1 $EIG_NIONS` ; do let sd=s-1 ; bo=`printf "%03u" $s` ; bod=`printf "%03u" $sd` ; mv temp.bo.$bod borncs.$bo done printf " ..done\n" # parse deviation vectors (eig) printf "..parsing eigenvectors" let sad=$EIG_NIONS+1 for nu in `seq 1 $EIG_NVIBS` ; do nuu=`printf "%03u" $nu` tail -n $sad eigens.vib.$nuu | head -n $EIG_NIONS | awk '{print $4,$5,$6}' > e.vib.$nuu.allions split -a 3 -d -l 1 e.vib.$nuu.allions temp.e.vib.$nuu.ion. 
for s in `seq 1 $EIG_NIONS` ; do
  let sd=s-1; bo=`printf "%03u" $s`; bod=`printf "%03u" $sd`;
  mv temp.e.vib.$nuu.ion.$bod e.vib.$nuu.ion.$bo
 done
done
printf " ..done\n"

# parse born effective charge matrices (born)
printf "..parsing eff.charges"
for s in `seq 1 $EIG_NIONS` ; do
 ss=`printf "%03u" $s`
 awk '{print $2,$3,$4}' borncs.$ss | tail -3 > bornch.$ss
done
mkdir temps02 ; mv eigens.* borncs.* temps02/
printf " ..done\n"

# parse matrices, multiply them and collect squares (giving intensities)
printf "..multiplying matrices, summing "
for nu in `seq 1 $EIG_NVIBS` ; do
 nuu=`printf "%03u" $nu`
 int=0.0
 for alpha in 1 2 3 ; do  # summing over alpha coordinates
  sumpol=0.0
  for s in `seq 1 $EIG_NIONS` ; do  # summing over atoms
   ss=`printf "%03u" $s`
   awk -v> exact.res.txt
 printf "."
done
printf " ..done\n"

# format results, normalize intensities
printf "..normalizing intensities"
max=`awk '(NR==1){max=$3} $3>=max {max=$3} END {print max}' exact.res.txt`
awk -v max="$max" '{printf "%03u %6.1f %5.3f\n",$1,$2,$3/max}' exact.res.txt > results.txt
printf " ..done\n"

# clean up, display results
printf "..finalizing:\n"
mkdir temps03; mv bornch.* e.vib.*.allions temps03/
mkdir temps04; mv z.ion* e.vib.*.ion.* temps04/
mkdir temps05; mv matr-* temps05/
mkdir results; mv *res*txt results/
let NMATRIX=$EIG_NVIBS**2
printf "%5u atoms found\n%5u vibrations found\n%5u matrices evaluated" \
 $EIG_NIONS $EIG_NVIBS $NMATRIX > results/statistics.txt

# fast switch to clean up all temporary files
rm -r temps*
cat results/results.txt

Note that the results above include the rotational and translational modes (modes 4-9). The following shell script removes those, and recalculates the intensities. Note that it appears to just remove the last 6 modes and recompute the intensities. It is not obvious that will always be the right way to do it, as the order of the eigenvectors is not guaranteed.
#!/bin/bash
# reformat intensities, just normal modes: 3N -> (3N-6)
printf "..reformatting and normalizing intensities"
cd intensities/results/
nlns=`wc -l exact.res.txt | awk '{print $1}' `; let bodylns=nlns-6
head -n $bodylns exact.res.txt > temp.reform.res.txt
max=`awk '(NR==1){max=$3} $3>=max {max=$3} END {print max}' temp.reform.res.txt`
awk -v max="$max" '{print $1,$2,$3/max}' temp.reform.res.txt > exact.reform.res.txt
awk -v max="$max" '{printf "%03u %6.1f %5.3f\n",$1,$2,$3/max}' temp.reform.res.txt > reform.res.txt
printf " ..done\n..normal modes:\n"
rm temp.reform.res.txt
cat reform.res.txt
cd ../..

..reformatting and normalizing intensities ..done
..normal modes:

The interpretation of these results is that the mode at 3713 cm-1 would be nearly invisible in the IR spectrum. Earlier we interpreted that as the symmetric stretch. In this mode, there is only a small change in the molecule dipole moment, so there is a small IR intensity. See also giannozzi:8537. For HREELS simulations see 0953-8984-22-26-265006.

The shell script above has been translated to a convenient python function in vasp.

from vasp import Vasp

calc = Vasp('molecules/h2o_vib_dfpt')
print('mode Relative intensity')
for i, intensity in enumerate(calc.get_infrared_intensities()):
    print('{0:02d} {1:1.3f}'.format(i, intensity))

mode Relative intensity
00 0.227
01 0.006
02 0.312
03 1.000
04 0.002
05 0.000
06 0.006
07 0.000
08 0.350

3.7 Thermochemical properties of molecules

ase.thermochemistry can be used to estimate thermodynamic properties of gases in the ideal gas limit. The module needs as input the geometry, the total energy, the vibrational energies, and some information about the molecular symmetry. We first consider an N2 molecule. The symmetry numbers are determined by the molecular point group springerlink-10.1007/s00214-007-0328-0. Here is a table of the most common ones.

| point group | symmetry number |
| C1, Ci, Cs | 1 |
| C2, C2v, C2h | 2 |
| C3v | 3 |
| C∞v | 1 |
| D∞h | 2 |
| D2h | 4 |
| D3h | 6 |
| Td | 12 |
| Oh | 24 |
from ase.structure import molecule
from ase.thermochemistry import IdealGasThermo
from vasp import Vasp

atoms = molecule('N2')
atoms.set_cell((10, 10, 10), scale_atoms=False)

# first we relax a molecule
calc = Vasp('molecules/n2-relax',
            xc='PBE',
            encut=300,
            ibrion=2,
            nsw=5,
            atoms=atoms)
electronicenergy = atoms.get_potential_energy()

# next, we get vibrational modes
calc2 = Vasp('molecules/n2-vib',
             xc='PBE',
             encut=300,
             ibrion=6,
             nfree=2,
             potim=0.15,
             nsw=1,
             atoms=atoms)
calc2.wait()
vib_freq = calc2.get_vibrational_frequencies()  # in cm^-1

# convert wavenumbers to energy
h = 4.1356675e-15  # eV*s
c = 3.0e10  # cm/s
vib_energies = [h*c*nu for nu in vib_freq]

print('vibrational energies\n====================')
for i, e in enumerate(vib_energies):
    print('{0:02d}: {1} eV'.format(i, e))

# now we can get some properties. Note we only need one vibrational
# energy since there is only one mode. This example does not work if
# you give all the energies because one energy is zero.
thermo = IdealGasThermo(vib_energies=vib_energies[0:1],
                        potentialenergy=electronicenergy,
                        atoms=atoms,
                        geometry='linear',
                        symmetrynumber=2,
                        spin=0)

# temperature in K, pressure in Pa, G in eV
G = thermo.get_gibbs_energy(temperature=298.15, pressure=101325.)
vibrational energies ==================== 00: 0.281619180732 eV 01: 0.0302718194691 eV 02: 0.0302718194691 eV 03: 6.20350125e-10 eV 04: 4.962801e-10 eV 05: 0.0 eV Enthalpy components at T = 298.15 K: =============================== E_pot -16.484 eV E_ZPE 0.000 eV Cv_trans (0->T) 0.039 eV Cv_rot (0->T) 0.026 eV Cv_vib (0->T) 0.000 eV (C_v -> C_p) 0.026 eV ------------------------------- H -16.394 eV =============================== Entropy components at T = 298.15 K and P = 101325.0 Pa: ================================================= S T*S S_trans (1 atm) 0.0015579 eV/K 0.464 eV S_rot 0.0007868 eV/K 0.235 eV S_elec 0.0000000 eV/K 0.000 eV S_vib 0.0000000 eV/K 0.000 eV S (1 atm -> P) -0.0000000 eV/K -0.000 eV ------------------------------------------------- S 0.0023447 eV/K 0.699 eV ================================================= Free energy components at T = 298.15 K and P = 101325.0 Pa: ======================= H -16.394 eV -T*S -0.699 eV ----------------------- G -17.093 eV ======================= Let us compare this to what is in the Nist webbook via the Shomate equations. import numpy as np A = 28.98641 B = 1.853978 C = -9.647459 D = 16.63537 E = 0.000117 F = -8.671914 G = 226.4168 H = 0.0 T = 298.15 t = T/1000. S = A*np.log(t) + B*t + C*t**2/2 + D*t**3/3 - E/(2*t**2) + G print('-T*S = {0:1.3f} eV'.format(-T*S/1000/96.4853)) -T*S = -0.592 eV This is reasonable agreement for the entropy. You will get different results if you use different exchange correlation functionals. 3.8 Molecular reaction energies 3.8.1 O2 dissociation The first reaction we consider is a simple dissociation of oxygen molecule into two oxygen atoms: O2 → 2O. The dissociation energy is pretty straightforward to define: it is the energy of the products minus the energy of the reactant. \(D = 2*E_O - E_{O_2}\). It would appear that we simply calculate the energy of an oxygen atom, and the energy of an oxygen molecule and evaluate the formula. Let us do that. 
3.8.1.1 Simple estimate of O2 dissociation energy from vasp import Vasp from ase import Atom, Atoms atoms = Atoms([Atom('O', [5, 5, 5])], cell=(10, 10, 10)) calc = Vasp('molecules/O', xc='PBE', encut=400, ismear=0, atoms=atoms) E_O = atoms.get_potential_energy() # now relaxed O2 dimer atoms = Atoms([Atom('O', [5, 5, 5]), Atom('O', [6.22, 5, 5])], cell=(10, 10, 10)) calc = Vasp('molecules/O2', xc='PBE', encut=400, ismear=0, ibrion=2, nsw=10, atoms=atoms) E_O2 = atoms.get_potential_energy() if None not in (E_O, E_O2): print('O2 -> 2O D = {0:1.3f} eV'.format(2 * E_O - E_O2)) O2 -> 2O D = 8.619 eV The answer we have obtained is way too high! Experimentally the dissociation energy is about 5.2 eV (need reference), which is very different than what we calculated! Let us consider some factors that contribute to this error. We implicitly neglected spin-polarization in the example above. That could be a problem, since the O2 molecule can be in one of two spin states, a singlet or a triplet, and these should have different energies. Furthermore, the oxygen atom can be a singlet or a triplet, and these would have different energies. To account for spin polarization, we have to tell VASP to use spin-polarization, and give initial guesses for the magnetic moments of the atoms. Let us try again with spin polarization. 3.8.1.2 Estimating O2 dissociation energy with spin polarization in triplet ground states To tell VASP to use spin-polarization we use ISPIN=2, and we set initial guesses for magnetic moments on the atoms with the magmom keyword. In a triplet state there are two electrons with spins of the same sign. 
from vasp import Vasp
from ase import Atom, Atoms

atoms = Atoms([Atom('O', [5, 5, 5], magmom=2)],
              cell=(10, 10, 10))

calc = Vasp('molecules/O-sp-triplet',
            xc='PBE',
            encut=400,
            ismear=0,
            ispin=2,  # turn spin-polarization on
            atoms=atoms)
E_O = atoms.get_potential_energy()
print('Magnetic moment on O = {0} Bohr magnetons'.format(atoms.get_magnetic_moment()))

# now relaxed O2 dimer
atoms = Atoms([Atom('O', [5, 5, 5], magmom=1),
               Atom('O', [6.22, 5, 5], magmom=1)],
              cell=(10, 10, 10))

calc = Vasp('molecules/O2-sp-triplet',
            xc='PBE',
            encut=400,
            ismear=0,
            ispin=2,   # turn spin-polarization on
            ibrion=2,  # make sure we relax the geometry
            nsw=10,
            atoms=atoms)
E_O2 = atoms.get_potential_energy()
print('Magnetic moment on O2 = {0} Bohr magnetons'.format(atoms.get_magnetic_moment()))

if None not in (E_O, E_O2):
    print('O2 -> 2O D = {0:1.3f} eV'.format(2 * E_O - E_O2))

Magnetic moment on O = 2.0000072 Bohr magnetons
Magnetic moment on O2 = 2.0000084 Bohr magnetons
O2 -> 2O D = 6.746 eV

This is much closer to accepted literature values for the DFT-GGA O\(_2\) dissociation energy. It is still more than 1 eV above an experimental value, but most of that error is due to the GGA exchange correlation functional. Some additional parameters that might need to be checked for convergence are the SIGMA value (it is probably too high for a molecule), as well as the cutoff energy. Oxygen is a "hard" atom that requires a high cutoff energy to achieve high levels of convergence.

3.8.1.2.1 Looking at the two spin densities

In a spin-polarized calculation there are actually two electron densities: one for spin-up and one for spin-down. We will look at the differences in these two through the density of states.

from vasp import Vasp
from ase.dft.dos import *

calc = Vasp('molecules/O2-sp-triplet')

dos = DOS(calc, width=0.2)
d_up = dos.get_dos(spin=0)
d_down = dos.get_dos(spin=1)
e = dos.get_energies()

ind = e <= 0.0  # integrate up to 0 eV
print('number of up states = {0}'.format(np.trapz(d_up[ind], e[ind])))
print('number of down states = {0}'.format(np.trapz(d_down[ind], e[ind])))

import pylab as plt
plt.plot(e, d_up, e, -d_down)
plt.xlabel('energy [eV]')
plt.ylabel('DOS')
plt.legend(['up', 'down'])
plt.savefig('images/O2-sp-dos.png')

number of up states = 6.11729553486
number of down states = 5.00000794208

Figure 23: Spin-polarized DOS for the O2 molecule. \label{fig:o2-sp-dos}

You can see in Figure fig:o2-sp-dos that there are two different densities of states for the two spins. One has 7 electrons in it (the blue lines), and the other has 5 electrons in it (the green line).
The difference of two electrons leads to the magnetic moment of 2 which we calculated earlier. Remember that only peaks in the DOS below the Fermi level are occupied. It is customary to set the Fermi level to 0 eV in DOS plots. The peaks roughly correspond to electrons. For example, the blue peak between -25 and -30 eV corresponds to one electron, in an orbital derived from the O 2s states, whereas the blue peak between -5 and -10 eV corresponds to three electrons.

3.8.1.3 Convergence study of the O2 dissociation energy

from vasp import Vasp
from ase import Atom, Atoms

encuts = [250, 300, 350, 400, 450, 500, 550]
D = []

for encut in encuts:
    atoms = Atoms([Atom('O', [5, 5, 5], magmom=2)],
                  cell=(10, 10, 10))
    calc = Vasp('molecules/O-sp-triplet-{0}'.format(encut),
                xc='PBE',
                encut=encut,
                ismear=0,
                ispin=2,
                atoms=atoms)
    E_O = atoms.get_potential_energy()

    # now relaxed O2 dimer
    atoms = Atoms([Atom('O', [5, 5, 5], magmom=1),
                   Atom('O', [6.22, 5, 5], magmom=1)],
                  cell=(10, 10, 10))
    calc = Vasp('molecules/O2-sp-triplet-{0}'.format(encut),
                xc='PBE',
                encut=encut,
                ismear=0,
                ispin=2,   # turn spin-polarization on
                ibrion=2,  # this turns relaxation on
                nsw=10,
                atoms=atoms)
    E_O2 = atoms.get_potential_energy()

    if None not in (E_O, E_O2):
        d = 2*E_O - E_O2
        D.append(d)
        print('O2 -> 2O encut = {0} D = {1:1.3f} eV'.format(encut, d))

if not D or None in D:
    calc.abort()

import matplotlib.pyplot as plt
plt.plot(encuts, D)
plt.xlabel('ENCUT (eV)')
plt.ylabel('O$_2$ dissociation energy (eV)')
plt.savefig('images/O2-dissociation-convergence.png')

O2 -> 2O encut = 250 D = 6.774 eV
O2 -> 2O encut = 300 D = 6.804 eV
O2 -> 2O encut = 350 D = 6.785 eV
O2 -> 2O encut = 400 D = 6.746 eV
O2 -> 2O encut = 450 D = 6.727 eV
O2 -> 2O encut = 500 D = 6.725 eV
O2 -> 2O encut = 550 D = 6.727 eV

Figure 24: Convergence study of the O2 dissociation energy as a function of ENCUT.
\label{fig:o2-encut} Based on these results (Figure fig:o2-encut), you could argue the dissociation energy is converged to about 2 meV at a planewave cutoff of 450 eV, and within 50 meV at 350 eV cutoff. You have to decide what an appropriate level of convergence is. Note that increasing the planewave cutoff significantly increases the computational time, so you are balancing level of convergence with computational speed. It would appear that planewave cutoff is not the cause for the discrepancy between our calculations and literature values. encuts = [250, 300, 350, 400, 450, 500, 550] print('encut (eV) Total CPU time') print('--------------------------------------------------------') for encut in encuts: OUTCAR = 'molecules/O2-sp-triplet-{0}/OUTCAR'.format(encut) f = open(OUTCAR, 'r') for line in f: if 'Total CPU time used (sec)' in line: print('{0} eV: {1}'.format(encut, line)) f.close() encut (eV) Total CPU time -------------------------------------------------------- 250 eV: Total CPU time used (sec): 1551.338 300 eV: Total CPU time used (sec): 2085.191 350 eV: Total CPU time used (sec): 2795.841 400 eV: Total CPU time used (sec): 2985.064 450 eV: Total CPU time used (sec): 5155.562 500 eV: Total CPU time used (sec): 4990.818 550 eV: Total CPU time used (sec): 5262.052 3.8.1.4 Illustration of the effect of SIGMA The methodology for extrapolation of the total energy to absolute zero is only valid for a continuous density of states at the Fermi level Kresse199615. Consequently, it should not be used for semiconductors, molecules or atoms. In VASP, this means a very small Fermi temperature (SIGMA) should be used. The O2 dissociation energy as a function of SIGMA is shown in Figure fig:sigma-o2-diss. A variation of nearly 0.2 eV is seen from the default Fermi temperature of \(k_bT=0.2\) eV and the value of \(k_bT=0.0001\) eV. However, virtually no change was observed for a hydrogen atom or molecule or for an oxygen molecule as a function of the Fermi temperature. 
It is recommended that the total energy be calculated at several values of the Fermi temperature to make sure the total energy is converged with respect to the Fermi temperature. We were not careful in selecting a good value for SIGMA in the calculations above. The default value of SIGMA is 0.2, which may be fine for metals, but it is not correct for molecules. SIGMA is the broadening factor used to smear the electronic density of states at the Fermi level. For a metal with a continuous density of states this is appropriate, but for molecules with discrete energy states it does not make sense. We are somewhat forced to use the machinery designed for metals on molecules. The solution is to use a very small SIGMA. Ideally you would use SIGMA=0, but that is not practical for convergence reasons, so we try to find what is small enough. Let us examine the effect of SIGMA on the dissociation energy here. from vasp import Vasp from ase import Atom, Atoms sigmas = [0.2, 0.1, 0.05, 0.02, 0.01, 0.001] D = [] for sigma in sigmas: atoms = Atoms([Atom('O',[5, 5, 5], magmom=2)], cell=(10, 10, 10)) calc = Vasp('molecules/O-sp-triplet-sigma-{0}'.format(sigma), xc='PBE', encut=400, ismear=0, sigma=sigma, ispin=2, atoms=atoms) E_O = atoms.get_potential_energy() # now relaxed O2 dimer atoms = Atoms([Atom('O',[5, 5, 5],magmom=1), Atom('O',[6.22, 5, 5],magmom=1)], cell=(10, 10, 10)) calc = Vasp('molecules/O2-sp-triplet-sigma-{0}'.format(sigma), xc='PBE', encut=400, ismear=0, sigma=sigma, ispin=2, # turn spin-polarization on ibrion=2, # make sure we relax the geometry nsw=10, atoms=atoms) E_O2 = atoms.get_potential_energy() if None not in (E_O, E_O2): d = 2 * E_O - E_O2 D.append(d) print('O2 -> 2O sigma = {0} D = {1:1.3f} eV'.format(sigma, d)) import matplotlib.pyplot as plt plt.plot(sigmas, D, 'bo-') plt.xlabel('SIGMA (eV)') plt.ylabel('O$_2$ dissociation energy (eV)') plt.savefig('images/O2-dissociation-sigma-convergence.png') O2 -> 2O sigma = 0.2 D = 6.669 eV O2 -> 2O sigma = 0.1 D = 
6.746 eV O2 -> 2O sigma = 0.05 D = 6.784 eV O2 -> 2O sigma = 0.02 D = 6.807 eV O2 -> 2O sigma = 0.01 D = 6.815 eV O2 -> 2O sigma = 0.001 D = 6.822 eV Figure 25: Effect of SIGMA on the oxygen dissociation energy. \label{fig:sigma-o2-diss} Clearly SIGMA has an effect, but it does not move the dissociation energy closer to the literature values! 3.8.1.5 Estimating singlet oxygen dissociation energy Finally, let us consider the case where each species is in the singlet state. from vasp import Vasp from ase import Atom, Atoms atoms = Atoms([Atom('O', [5, 5, 5], magmom=0)], cell=(10, 10, 10)) calc = Vasp('molecules/O-sp-singlet', xc='PBE', encut=400, ismear=0, ispin=2, atoms=atoms) E_O = atoms.get_potential_energy() print('Magnetic moment on O = {0} Bohr magnetons'.format(atoms.get_magnetic_moment())) # now the singlet O2 molecule atoms = Atoms([Atom('O', [5, 5, 5], magmom=0), Atom('O', [6.22, 5, 5], magmom=0)], cell=(10, 10, 10)) calc = Vasp('molecules/O2-sp-singlet', xc='PBE', encut=400, ismear=0, ispin=2, # turn spin-polarization on ibrion=2, # make sure we relax the geometry nsw=10, atoms=atoms) E_O2 = atoms.get_potential_energy() # verify magnetic moment print('O2 molecule magnetic moment = ', atoms.get_magnetic_moment()) if None not in (E_O, E_O2): print('O2 -> 2O D = {0:1.3f} eV'.format(2 * E_O - E_O2)) Magnetic moment on O = 0.0001638 Bohr magnetons ('O2 molecule magnetic moment = ', 0.0) O2 -> 2O D = 8.619 eV Let us directly compare their total energies: from vasp import Vasp calc = Vasp('molecules/O2-sp-singlet') print('singlet: {0} eV'.format(calc.potential_energy)) calc = Vasp('molecules/O2-sp-triplet') print('triplet: {0} eV'.format(calc.potential_energy)) singlet: -8.77378302 eV triplet: -9.84832389 eV You can see here that the triplet state is about 1 eV more stable than the singlet state. 3.8.1.6 Estimating triplet oxygen dissociation energy with low symmetry It has been suggested that breaking the spherical symmetry of the atom can result in a lower energy of the atom. The symmetry is broken by putting the atom off-center in a box. We will examine the total energy of an oxygen atom in a few geometries. First, let us consider variations of a square box.
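As an aside, the singlet-triplet splitting quoted above is easy to verify directly. This small sketch just re-uses the two total energies printed in the comparison above:

```python
# O2 total energies (eV) copied from the calculation outputs above
E_singlet = -8.77378302  # molecules/O2-sp-singlet
E_triplet = -9.84832389  # molecules/O2-sp-triplet

# a positive gap means the triplet is more stable
gap = E_singlet - E_triplet
print('singlet-triplet gap = {0:1.3f} eV'.format(gap))
```

This reproduces the roughly 1 eV stabilization of the triplet state noted above.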
from vasp import Vasp from ase import Atom, Atoms # square box origin atoms = Atoms([Atom('O', [0, 0, 0], magmom=2)], cell=(10, 10, 10)) pars = dict(xc='PBE', encut=400, ismear=0, sigma=0.01, ispin=2) calc = Vasp('molecules/O-square-box-origin', atoms=atoms, **pars) print('Square box (origin): E = {0} eV'.format(atoms.get_potential_energy())) # square box center atoms = Atoms([Atom('O', [5, 5, 5], magmom=2)], cell=(10, 10, 10)) calc = Vasp('molecules/O-square-box-center', atoms=atoms, **pars) print('Square box (center): E = {0} eV'.format(atoms.get_potential_energy())) # square box random atoms = Atoms([Atom('O', [2.13, 7.32, 1.11], magmom=2)], cell=(10, 10, 10)) calc = Vasp('molecules/O-square-box-random', atoms=atoms, **pars) print('Square box (random): E = {0} eV'.format(atoms.get_potential_energy())) Square box (origin): E = -1.51654778 eV Square box (center): E = -1.51654804 eV Square box (random): E = -1.5152871 eV There is no significant difference in these energies. The origin and center calculations are identical in energy. The meV variation in the random calculation is negligible. Now, let us consider some non-square boxes. 
# calculate O atom energy in orthorhombic boxes from vasp import Vasp from ase import Atom, Atoms # orthorhombic box origin atoms = Atoms([Atom('O', [0, 0, 0], magmom=2)], cell=(8, 9, 10)) calc = Vasp('molecules/O-orthorhombic-box-origin', xc='PBE', encut=400, ismear=0, sigma=0.01, ispin=2, atoms=atoms) print('Orthorhombic box (origin): E = {0} eV'.format(atoms.get_potential_energy())) # orthorhombic box center atoms = Atoms([Atom('O', [4, 4.5, 5], magmom=2)], cell=(8, 9, 10)) calc = Vasp('molecules/O-orthorhombic-box-center', xc='PBE', encut=400, ismear=0, sigma=0.01, ispin=2, atoms=atoms) print('Orthorhombic box (center): E = {0} eV'.format(atoms.get_potential_energy())) # orthorhombic box random atoms = Atoms([Atom('O', [2.13, 7.32, 1.11], magmom=2)], cell=(8, 9, 10)) calc = Vasp('molecules/O-orthorhombic-box-random', xc='PBE', encut=400, ismear=0, sigma=0.01, ispin=2, atoms=atoms) print('Orthorhombic box (random): E = {0} eV'.format(atoms.get_potential_energy())) Orthorhombic box (origin): E = -1.89375092 eV Orthorhombic box (center): E = -1.89375153 eV Orthorhombic box (random): E = -1.87999536 eV This is a surprisingly large difference in energy! Nearly 0.4 eV. This is precisely the amount of energy we were in disagreement with the literature values. Surprisingly, the "random" position is higher in energy, similar to the cubic boxes. Finally, we put this all together. We use a non-symmetric box for the O-atom. 
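For reference, the size of the symmetry-breaking stabilization described above can be quantified from the printed center-of-box energies. A quick sketch (values copied from the outputs above):

```python
# O-atom total energies (eV) copied from the calculation outputs above
E_cubic_center = -1.51654804   # 10 x 10 x 10 box
E_ortho_center = -1.89375153   # 8 x 9 x 10 box

diff = E_cubic_center - E_ortho_center
print('Energy lowering from symmetry breaking: {0:1.3f} eV'.format(diff))
```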
from vasp import Vasp from ase import Atom, Atoms atoms = Atoms([Atom('O', [5.1, 4.2, 6.1], magmom=2)], cell=(8, 9, 10)) calc = Vasp('molecules/O-sp-triplet-lowsym', xc='PBE', encut=400, ismear=0, sigma=0.01, ispin=2, atoms=atoms) E_O = atoms.get_potential_energy() print('Magnetic moment on O = {0} Bohr magnetons'.format(atoms.get_magnetic_moment())) # reuse the relaxed triplet O2 molecule from before calc = Vasp('molecules/O2-sp-triplet') atoms2 = calc.get_atoms() E_O2 = atoms2.get_potential_energy() print('Magnetic moment on O2 = {0} Bohr magnetons'.format(atoms2.get_magnetic_moment())) print('E_O: ', E_O) print('O2 -> 2O D = {0:1.3f} eV'.format(2 * E_O - E_O2)) Magnetic moment on O = 2.0000073 Bohr magnetons Magnetic moment on O2 = 2.0000084 Bohr magnetons ('E_O: ', -1.89307116) O2 -> 2O D = 6.062 eV This actually agrees within 30-50 meV of reported literature values, although still nearly an eV greater than the experimental dissociation energy. Note that with a different "random" position, we get the lower energy for the O atom. All the disagreement we had been seeing was apparently in the O atom energy. So, if you do not need the dissociation energy in your analysis, you will not see the error. Also note that this error is specific to there being a spherical atom in a symmetric cell. This is not a problem for most molecules, which are generally non-spherical. 3.8.1.7 Verifying the magnetic moments on each atom It is one thing to see the total magnetic moment of a singlet state, and another to ask what the magnetic moments on each atom are. In VASP you must use LORBIT = 11 to get the magnetic moments of the atoms written out.
from vasp import Vasp calc = Vasp('molecules/O2-sp-singlet') calc.clone('molecules/O2-sp-singlet-magmoms') calc.set(lorbit=11) atoms = calc.get_atoms() magmoms = atoms.get_magnetic_moments() print('singlet ground state') for i, atom in enumerate(atoms): print('atom {0}: magmom = {1}'.format(i, magmoms[i])) print(atoms.get_magnetic_moment()) calc = Vasp('molecules/O2-sp-triplet') calc.clone('molecules/O2-sp-triplet-magmoms') calc.set(lorbit=11) atoms = calc.get_atoms() magmoms = atoms.get_magnetic_moments() print() print('triplet ground state') for i, atom in enumerate(atoms): print('atom {0}: magmom = {1}'.format(i, magmoms[i])) print(atoms.get_magnetic_moment()) singlet ground state atom 0: magmom = 0.0 atom 1: magmom = 0.0 0.0 () triplet ground state atom 0: magmom = 0.815 atom 1: magmom = 0.815 2.0000083 Note the atomic magnetic moments do not add up to the total magnetic moment. The atomic magnetic moments are not really true observable properties. The moments are determined by a projection method that probably involves a spherical orbital, so the moments may be over- or underestimated. 3.8.1.8 Using a different potential It is possible we need a higher quality potential to get the 6.02 eV value quoted by many in the literature. Here we try the O_sv potential, which treats the 1s electrons as valence electrons. Note, however, that the ENMIN in the POTCAR is very high! grep ENMIN $VASP_PP_PATH/potpaw_PBE/O_sv/POTCAR In the following calculation, we let VASP select an appropriate ENCUT value. from vasp import Vasp from ase import Atom, Atoms atoms = Atoms([Atom('O', [5.1, 4.2, 6.1], magmom=2)], cell=(8, 9, 10)) calc = Vasp('molecules/O-sv', xc='PBE', ismear=0, sigma=0.01, ispin=2, setups=[['O', '_sv']], atoms=atoms) E_O = atoms.get_potential_energy() print('E_O: ', E_O) print('Magnetic moment on O = {0} Bohr magnetons'.format(atoms.get_magnetic_moment())) atoms = Atoms([Atom('O', [5, 5, 5], magmom=1), Atom('O', [6.22, 5, 5], magmom=1)], cell=(10, 10, 10)) calc = Vasp('molecules/O2-sv', xc='PBE', ismear=0, sigma=0.01, ispin=2, # turn spin-polarization on ibrion=2, # make sure we relax the geometry nsw=10, setups=[['O', '_sv']], atoms=atoms) E_O2 = atoms.get_potential_energy() print('Magnetic moment on O2 = {0} Bohr magnetons'.format(atoms.get_magnetic_moment())) print('O2 -> 2O D = {0:1.3f} eV'.format(2 * E_O - E_O2)) ('E_O: ', -1.57217591) Magnetic moment on O = 1.9999982 Bohr magnetons Magnetic moment on O2 = 2.0000102 Bohr magnetons O2 -> 2O D = 6.120 eV This result is close to other reported values. It is possibly not converged, since we let VASP choose the ENCUT value, and that value is the ENMIN value in the POTCAR.
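Before moving on, it is useful to collect the O2 dissociation energies computed so far in one place. This is just bookkeeping; the values are copied from the outputs above, and the labels are informal shorthand for the corresponding sections:

```python
# O2 -> 2O dissociation energies (eV) from the preceding sections
D_values = {'sigma=0.2, symmetric box': 6.669,
            'sigma=0.001, symmetric box': 6.822,
            'singlet O and O2': 8.619,
            'triplet, low-symmetry box': 6.062,
            'O_sv potential': 6.120}

# print from smallest to largest
for label in sorted(D_values, key=D_values.get):
    print('{0:30s} {1:1.3f} eV'.format(label, D_values[label]))
```

The low-symmetry and harder-potential results cluster near the commonly reported GGA values, while the singlet treatment is clearly off.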
Nevertheless, the point is that a harder potential does not fix the problem of overbinding in the O2 molecule. That is a fundamental flaw in the GGA exchange-correlation functional. 3.8.2 Water gas shift example We consider calculating the reaction energy of the water-gas shift reaction in this example. CO + H2O \(\leftrightharpoons\) CO2 + H2 We define the reaction energy as the difference in energy between the products and reactants. \(\Delta E = E_{CO_2} + E_{H_2} - E_{CO} - E_{H_2O}\) For now, we compute this energy simply as the difference in DFT energies. In the next section we will add zero-point energies and compute the energy difference as a function of temperature. For now, we simply need to compute the total energy of each molecule in its equilibrium geometry. from ase.structure import molecule from vasp import Vasp # first we define our molecules. These will automatically be at the coordinates from the G2 database. CO = molecule('CO') CO.set_cell([8, 8, 8], scale_atoms=False) H2O = molecule('H2O') H2O.set_cell([8, 8, 8], scale_atoms=False) CO2 = molecule('CO2') CO2.set_cell([8, 8, 8], scale_atoms=False) H2 = molecule('H2') H2.set_cell([8, 8, 8], scale_atoms=False) # now the calculators to get the energies c1 = Vasp('molecules/wgs/CO', xc='PBE', encut=350, ismear=0, ibrion=2, nsw=10, atoms=CO) eCO = CO.get_potential_energy() c2 = Vasp('molecules/wgs/CO2', xc='PBE', encut=350, ismear=0, ibrion=2, nsw=10, atoms=CO2) eCO2 = CO2.get_potential_energy() c3 = Vasp('molecules/wgs/H2', xc='PBE', encut=350, ismear=0, ibrion=2, nsw=10, atoms=H2) eH2 = H2.get_potential_energy() c4 = Vasp('molecules/wgs/H2O', xc='PBE', encut=350, ismear=0, ibrion=2, nsw=10, atoms=H2O) eH2O = H2O.get_potential_energy() if None in (eCO2, eH2, eCO, eH2O): pass else: dE = eCO2 + eH2 - eCO - eH2O print('Delta E = {0:1.3f} eV'.format(dE)) print('Delta E = {0:1.3f} kcal/mol'.format(dE * 23.06035)) print('Delta E = {0:1.3f} kJ/mol'.format(dE * 96.485)) Delta E = -0.723 eV Delta E = -16.672 
kcal/mol Delta E = -69.758 kJ/mol We estimated the enthalpy of this reaction at standard conditions to be -41 kJ/mol using data from the NIST webbook, which is a fair bit smaller in magnitude than what we calculated here. In the next section we will examine whether additional corrections are needed, such as zero-point and temperature corrections. It is a good idea to verify your calculations and structures are what you expected. Let us print them here. Inspection of these results shows the geometries were all relaxed, i.e., the forces on each atom are less than 0.05 eV/Å. from vasp import Vasp print('**** Calculation summaries') print('***** H2O') calc = Vasp('molecules/wgs/H2O') print('#+begin_example') print(calc) print('#+end_example') 3.8.2.1 Calculation summaries 3.8.2.1.1 H2O Vasp calculation in /home-research/jkitchin/dft-book-new-vasp/molecules/wgs/H2O INCAR created by Atomic Simulation Environment ENCUT = 350 LCHARG = .FALSE. IBRION = 2 ISMEAR = 0 LWAVE = .TRUE. SIGMA = 0.1 NSW = 10 O H 1.0000000000000000 8.0000000000000000 0.0000000000000000 0.0000000000000000 0.0000000000000000 8.0000000000000000 0.0000000000000000 0.0000000000000000 0.0000000000000000 8.0000000000000000 1 2 Cartesian 0.0000000000000000 0.0000000000000000 0.1192620000000000 0.0000000000000000 0.7632390000000000 -0.4770470000000000 0.0000000000000000 -0.7632390000000000 -0.4770470000000000 3.8.3 Temperature dependent water gas shift equilibrium constant To correct the reaction energy for temperature effects, we must compute the vibrational frequencies of each species, and estimate the temperature dependent contributions to vibrational energy and entropy. We will break these calculations into several pieces. First we do each vibrational calculation. After those are done, we can get the data and construct the thermochemistry objects we need to estimate the reaction energy as a function of temperature (at constant pressure).
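The conversion factors used in the reaction energy script above (23.06035 kcal/mol per eV and 96.485 kJ/mol per eV) follow directly from the elementary charge and Avogadro's number. Here is a quick check from fundamental constants:

```python
e = 1.602176634e-19   # C, elementary charge (1 eV = e joules per particle)
N_A = 6.02214076e23   # 1/mol, Avogadro's number

eV_to_kJ_per_mol = e * N_A / 1000.0        # J/mol -> kJ/mol
eV_to_kcal_per_mol = eV_to_kJ_per_mol / 4.184  # thermochemical calorie

print('1 eV = {0:1.3f} kJ/mol = {1:1.5f} kcal/mol'.format(
    eV_to_kJ_per_mol, eV_to_kcal_per_mol))
```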
3.8.3.1 CO vibrations from vasp import Vasp # get relaxed geometry calc = Vasp('molecules/wgs/CO') CO = calc.get_atoms() # now do the vibrations calc = Vasp('molecules/wgs/CO-vib', xc='PBE', encut=350, ismear=0, ibrion=6, nfree=2, potim=0.02, nsw=1, atoms=CO) calc.wait() vib_freq = calc.get_vibrational_frequencies() for i, f in enumerate(vib_freq): print('{0:02d}: {1} cm^(-1)'.format(i, f)) 00: 2064.699153 cm^(-1) 01: 170.409559 cm^(-1) 02: 170.409559 cm^(-1) 03: (1.171397+0j) cm^(-1) 04: (6.354831+0j) cm^(-1) 05: (6.354831+0j) cm^(-1) CO has only one vibrational mode (3N-5 = 6 - 5 = 1). The other 5 modes are 3 translations and 2 rotations. 3.8.3.2 CO2 vibrations from vasp import Vasp # get relaxed geometry calc = Vasp('molecules/wgs/CO2') CO2 = calc.get_atoms() # now do the vibrations calc = Vasp('molecules/wgs/CO2-vib', xc='PBE', encut=350, ismear=0, ibrion=6, nfree=2, potim=0.02, nsw=1, atoms=CO2) calc.wait() vib_freq = calc.get_vibrational_frequencies() for i, f in enumerate(vib_freq): print('{0:02d}: {1} cm^(-1)'.format(i, f)) 00: 2339.140984 cm^(-1) 01: 1309.517832 cm^(-1) 02: 639.625419 cm^(-1) 03: 639.625419 cm^(-1) 04: (0.442216+0j) cm^(-1) 05: (1.801034+0j) cm^(-1) 06: (1.801034+0j) cm^(-1) 07: (35.286745+0j) cm^(-1) 08: (35.286745+0j) cm^(-1) CO\(_2\) is a linear molecule with 3N-5 = 4 vibrational modes. They are the first four frequencies in the output above. 
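The mode counting used in these sections (3N-5 vibrational modes for a linear molecule, 3N-6 for a nonlinear one) is easy to tabulate with a small helper:

```python
def n_vib_modes(natoms, linear):
    """Number of true vibrational modes of a gas-phase molecule:
    3N - 5 if linear, 3N - 6 if nonlinear."""
    return 3 * natoms - (5 if linear else 6)

for name, natoms, linear in [('CO', 2, True), ('CO2', 3, True),
                             ('H2', 2, True), ('H2O', 3, False)]:
    print('{0}: {1} vibrational mode(s)'.format(name, n_vib_modes(natoms, linear)))
```

These counts tell us how many of the computed frequencies to keep; the remaining ones correspond to translations and rotations.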
3.8.3.3 H2 vibrations from vasp import Vasp # get relaxed geometry H2 = Vasp('molecules/wgs/H2').get_atoms() # now do the vibrations calc = Vasp('molecules/wgs/H2-vib', xc='PBE', encut=350, ismear=0, ibrion=6, nfree=2, potim=0.02, nsw=1, atoms=H2) calc.wait() vib_freq = calc.get_vibrational_frequencies() for i, f in enumerate(vib_freq): print('{0:02d}: {1} cm^(-1)'.format(i, f)) 00: 4484.933386 cm^(-1) 01: 0.0 cm^(-1) 02: 0.0 cm^(-1) 03: (1.5e-05+0j) cm^(-1) 04: (586.624928+0j) cm^(-1) 05: (586.624928+0j) cm^(-1) There is only one frequency of importance (the one at 4485 cm\(^{-1}\)) for the linear H2 molecule. 3.8.3.4 H2O vibrations from vasp import Vasp # get relaxed geometry H2O = Vasp('molecules/wgs/H2O').get_atoms() # now do the vibrations calc = Vasp('molecules/wgs/H2O-vib', xc='PBE', encut=350, ismear=0, ibrion=6, nfree=2, potim=0.02, nsw=1, atoms=H2O) calc.wait() vib_freq = calc.get_vibrational_frequencies() for i, f in enumerate(vib_freq): print('{0:02d}: {1} cm^(-1)'.format(i, f)) 00: 3846.373652 cm^(-1) 01: 3734.935388 cm^(-1) 02: 1573.422217 cm^(-1) 03: 16.562103 cm^(-1) 04: 8.00982 cm^(-1) 05: (0.375952+0j) cm^(-1) 06: (225.466583+0j) cm^(-1) 07: (271.664033+0j) cm^(-1) 08: (286.859818+0j) cm^(-1) Water has 3N-6 = 3 vibrational modes. 3.8.3.5 Thermochemistry Now we are ready. We have the electronic energies and vibrational frequencies of each species in the reaction.
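Each vibrational mode contributes a zero-point energy of \(\frac{1}{2} h c \tilde{\nu}\), which the thermochemistry objects below include automatically. As a quick sanity check, here is the zero-point energy of the CO stretch, using the frequency from the output above and the same constants used in the script below:

```python
h = 4.1356675e-15  # eV s, Planck constant
c = 3.0e10         # cm/s, speed of light (approximate, as used below)

nu = 2064.699153   # cm^-1, CO stretch from the vibrational output above
zpe = 0.5 * h * c * nu
print('CO zero-point energy = {0:1.3f} eV'.format(zpe))
```

About 0.13 eV; small, but not negligible on the scale of the reaction energy.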
ase.thermochemistry.IdealGasThermo from ase.thermochemistry import IdealGasThermo from vasp import Vasp import numpy as np import matplotlib.pyplot as plt # first we get the electronic energies c1 = Vasp('molecules/wgs/CO') E_CO = c1.potential_energy CO = c1.get_atoms() c2 = Vasp('molecules/wgs/CO2') E_CO2 = c2.potential_energy CO2 = c2.get_atoms() c3 = Vasp('molecules/wgs/H2') E_H2 = c3.potential_energy H2 = c3.get_atoms() c4 = Vasp('molecules/wgs/H2O') E_H2O = c4.potential_energy H2O = c4.get_atoms() # now we get the vibrational energies h = 4.1356675e-15 # eV * s c = 3.0e10 # cm / s calc = Vasp('molecules/wgs/CO-vib') vib_freq = calc.get_vibrational_frequencies() CO_vib_energies = [h * c * nu for nu in vib_freq] calc = Vasp('molecules/wgs/CO2-vib') vib_freq = calc.get_vibrational_frequencies() CO2_vib_energies = [h * c * nu for nu in vib_freq] calc = Vasp('molecules/wgs/H2-vib') vib_freq = calc.get_vibrational_frequencies() H2_vib_energies = [h * c * nu for nu in vib_freq] calc = Vasp('molecules/wgs/H2O-vib') vib_freq = calc.get_vibrational_frequencies() H2O_vib_energies = [h * c * nu for nu in vib_freq] # now we make a thermo object for each molecule, keeping only the true vibrational modes: 1 for CO, 4 for CO2, 1 for H2, 3 for H2O CO_t = IdealGasThermo(vib_energies=CO_vib_energies[0:1], potentialenergy=E_CO, atoms=CO, geometry='linear', symmetrynumber=1, spin=0) CO2_t = IdealGasThermo(vib_energies=CO2_vib_energies[0:4], potentialenergy=E_CO2, atoms=CO2, geometry='linear', symmetrynumber=2, spin=0) H2_t = IdealGasThermo(vib_energies=H2_vib_energies[0:1], potentialenergy=E_H2, atoms=H2, geometry='linear', symmetrynumber=2, spin=0) H2O_t = IdealGasThermo(vib_energies=H2O_vib_energies[0:3], potentialenergy=E_H2O, atoms=H2O, geometry='nonlinear', symmetrynumber=2, spin=0) # now we can compute G_rxn for a range of temperatures from 298 to 1000 K Trange = np.linspace(298, 1000, 20) # K P = 101325.
# Pa Grxn = np.array([(CO2_t.get_gibbs_energy(temperature=T, pressure=P) + H2_t.get_gibbs_energy(temperature=T, pressure=P) - H2O_t.get_gibbs_energy(temperature=T, pressure=P) - CO_t.get_gibbs_energy(temperature=T, pressure=P)) * 96.485 for T in Trange]) Hrxn = np.array([(CO2_t.get_enthalpy(temperature=T) + H2_t.get_enthalpy(temperature=T) - H2O_t.get_enthalpy(temperature=T) - CO_t.get_enthalpy(temperature=T)) * 96.485 for T in Trange]) plt.plot(Trange, Grxn, 'bo-', label='$\Delta G_{rxn}$') plt.plot(Trange, Hrxn, 'ro:', label='$\Delta H_{rxn}$') plt.xlabel('Temperature (K)') plt.ylabel(r'$\Delta G_{rxn}$ (kJ/mol)') plt.legend(loc='best') plt.savefig('images/wgs-dG-T.png') plt.figure() R = 8.314e-3 # gas constant in kJ/mol/K Keq = np.exp(-Grxn/R/Trange) plt.plot(Trange, Keq) plt.ylim([0, 100]) plt.xlabel('Temperature (K)') plt.ylabel('$K_{eq}$') plt.savefig('images/wgs-Keq.png') You can see a few things here. One is that at near 298K, the Gibbs free energy is about -75 kJ/mol. This is too negative compared to the experimental standard free energy, which we estimated to be about -29 kJ/mol from the NIST webbook. There could be several reasons for this disagreement, but the most likely one is errors in the exchange-correlation functional. The error in energy has a significant effect on the calculated equilibrium constant, significantly overestimating it. Figure 26: Temperature dependence of the equilibrium constant. 3.9 Molecular reaction barriers We will consider a simple example of the barrier for NH3 inversion. We have to create an NH3 molecule in the initial and inverted state (these have exactly the same energy), and then interpolate a band of images. Then, we use the NEB method sheppard:134106 to compute the barrier to inversion. The NEB class of methods are pretty standard, but other algorithms for finding barriers (saddle-points) exist that may be relevant olsen:9776. 
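The interpolation step mentioned above is, by default, simply linear: each interior image is placed on the straight line between the endpoint geometries. Here is a sketch of the idea for a single coordinate set (the numbers are hypothetical, not from the NH3 calculation):

```python
def interpolate_positions(p0, p1, nimages):
    """Linearly interpolate between endpoint coordinates p0 and p1,
    returning nimages points including both endpoints."""
    return [[a + (b - a) * i / (nimages - 1.0) for a, b in zip(p0, p1)]
            for i in range(nimages)]

# hypothetical z-coordinates of an H atom below/above the N plane
images = interpolate_positions([5.0, 5.0, 4.5], [5.0, 5.0, 5.5], 5)
print([p[2] for p in images])
```

The NEB forces then relax the interior images off this straight line toward the minimum energy path.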
3.9.1 Get initial and final states # compute initial and final states from ase import Atoms from ase.structure import molecule import numpy as np from vasp import Vasp from ase.constraints import FixAtoms atoms = molecule('NH3') constraint = FixAtoms(mask=[atom.symbol == 'N' for atom in atoms]) atoms.set_constraint(constraint) Npos = atoms.positions[0] # move N to origin atoms.translate(-Npos) atoms.set_cell((10, 10, 10), scale_atoms=False) atoms2 = atoms.copy() pos2 = atoms2.positions for i, atom in enumerate(atoms2): if atom.symbol == 'H': # reflect through z pos2[i] *= np.array([1, 1, -1]) atoms2.positions = pos2 # now move N to center of box atoms.translate([5, 5, 5]) atoms2.translate([5, 5, 5]) calcs = [Vasp('molecules/nh3-initial', xc='PBE', encut=350, ibrion=1, nsw=10, atoms=atoms), Vasp('molecules/nh3-final', xc='PBE', encut=350, ibrion=1, nsw=10, atoms=atoms2)] print([c.potential_energy for c in calcs]) 3.9.2 Run band calculation Now we do the band calculation. nudged elastic band # Run NH3 NEB calculations from vasp import Vasp from ase.neb import NEB from ase.io import read atoms = Vasp('molecules/nh3-initial').get_atoms() atoms2 = Vasp('molecules/nh3-final').get_atoms() # 5 images including endpoints images = [atoms] # initial state images += [atoms.copy() for i in range(3)] images += [atoms2] # final state neb = NEB(images) neb.interpolate() calc = Vasp('molecules/nh3-neb', xc='PBE', ibrion=1, encut=350, nsw=90, spring=-5.0, atoms=images) #calc.write_db(atoms, 'molecules/nh3-neb/00/DB.db') #calc.write_db(atoms2, 'molecules/nh3-neb/04/DB.db') images, energies = calc.get_neb() calc.stop_if(None in energies) print(images) print(energies) p = calc.plot_neb(show=False) import matplotlib.pyplot as plt plt.savefig('images/nh3-neb.png') [Atoms(symbols='NH3', positions=..., magmoms=..., cell=[10.0, 10.0, 10.0], pbc=[True, True, True], constraint=FixAtoms(indices=[0]), calculator=Vasp(...)), ..., Atoms(symbols='NH3', positions=..., magmoms=..., cell=[10.0, 10.0, 10.0], pbc=[True, True, True], calculator=Vasp(...))] [ 0.00000000e+00 1.26688520e-01 2.25038820e-01
1.26688620e-01 9.99999727e-09] Optimization terminated successfully. Current function value: -0.225039 Iterations: 15 Function evaluations: 30 The calculator view function shows you the band. from vasp import Vasp calc = Vasp('molecules/nh3-neb') calc.view() Figure 27: Nudged elastic band results for ammonia flipping. 3.9.3 Make a movie of the animation animation It is helpful sometimes to animate the nudged elastic band path. Here is a script to do that. I have not figured out how to embed the movie in this document. # make neb movie import os import numpy as np from ase.io import write from ase.visualize import view from vasp import Vasp calc = Vasp('molecules/nh3-neb') images, energies = calc.get_neb() # this rotates the atoms 90 degrees about the y-axis [atoms.rotate('y', np.pi/2.) for atoms in images] for i, atoms in enumerate(images): write('images/00{0}-nh3.png'.format(i), atoms, show_unit_cell=2) # animated gif os.system('convert -delay 50 -loop 0 images/00*-nh3.png images/nh3-neb.gif') # Shockwave flash os.system('png2swf -o images/nh3-neb.swf images/00*-nh3.png ') Figure 28: In principle this is an animated gif 4 Bulk systems The literature contains very informative comparisons of DFT codes for computing different bulk properties. 4.1 Defining and visualizing bulk systems 4.1.1 Built-in functions in ase As with molecules, ase provides several helper functions to create bulk structures. We highlight a few of them here. Particularly common ones are: - ase.lattice.cubic.FaceCenteredCubic - ase.lattice.cubic.BodyCenteredCubic - ase.lattice.hexagonal.Graphite - ase.lattice.compounds.NaCl For others, see the ase documentation. We start with a simple example, fcc Ag. By default, ase knows Ag is an fcc metal, and knows the experimental lattice constant. We have to specify the directions (vectors along each axis) to get something other than the default output. Here, the default fcc cell contains four atoms.
from ase.io import write from ase.lattice.cubic import FaceCenteredCubic atoms = FaceCenteredCubic('Ag') write('images/Ag-fcc.png', atoms, show_unit_cell=2) print(atoms) Lattice(symbols='Ag4', positions=..., cell=[4.09, 4.09, 4.09], pbc=[True, True, True]) An ase.lattice.bravais.Lattice object is returned! This is practically the same as an ase.atoms.Atoms object. Figure 29: A simple fcc Ag bulk structure in the conventional unit cell. Here we specify the primitive unit cell, which only has one atom in it. from ase.io import write from ase.lattice.cubic import FaceCenteredCubic atoms = FaceCenteredCubic('Ag', directions=[[0, 1, 1], [1, 0, 1], [1, 1, 0]]) write('images/Ag-fcc-primitive.png', atoms, show_unit_cell=2) print(atoms) Figure 30: A simple fcc Ag bulk structure in the primitive unit cell. We can use these modules to build alloy unit cells. The basic strategy is to create the base unit cell in one element and then selectively change some atoms to different chemical symbols. Here we examine an Ag3Pd alloy structure. from ase.io import write from ase.lattice.cubic import FaceCenteredCubic atoms = FaceCenteredCubic(directions=[[1, 0, 0], [0, 1, 0], [0, 0, 1]], size=(1, 1, 1), symbol='Ag', latticeconstant=4.0) write('images/Ag-bulk.png', atoms, show_unit_cell=2) # to make an alloy, we can replace one atom with another kind atoms[0].symbol = 'Pd' write('images/AgPd-bulk.png', atoms, show_unit_cell=2) Figure 31: A simple fcc Ag bulk structure in the traditional unit cell. Figure 32: A simple Ag3Pd bulk structure. To create a graphite structure we use the following code. Note that we have to specify the lattice constants (taken from the literature) because ase has C in the diamond structure by default. We show two views, because the top view does not show the spacing between the layers.
from ase.lattice.hexagonal import Graphite from ase.io import write atoms = Graphite('C', latticeconstant={'a': 2.4612, 'c': 6.7079}) write('images/graphite.png', atoms.repeat((2, 2, 1)), rotation='115x', show_unit_cell=2) write('images/graphite-top.png', atoms.repeat((2, 2, 1)), show_unit_cell=2) Figure 33: A top view of graphite. Figure 34: A side view of graphite. To get a compound, we use the following code. We have to specify the basis atoms to the function generating the compound, and the lattice constant. For NaCl we use the experimental lattice constant. from ase.lattice.compounds import NaCl from ase.io import write atoms = NaCl(['Na', 'Cl'], latticeconstant=5.65) write('images/NaCl.png', atoms, show_unit_cell=2, rotation='45x,45y,45z') Figure 35: A view of a NaCl crystal structure. 4.1.1.1 ase.spacegroup A final alternative to setting up bulk structures is ase.spacegroup. This is a concise way to setup structures if you know the following properties of the crystal structure: - Chemical symbols - Coordinates of the non-equivalent sites in the unit cell - the spacegroup - the cell parameters (a, b, c, alpha, beta, gamma) from ase.lattice.spacegroup import crystal # FCC aluminum a = 4.05 al = crystal('Al', [(0, 0, 0)], spacegroup=225, cellpar=[a, a, a, 90, 90, 90]) print(al) Atoms(symbols='Al4', positions=..., cell=[[4.05, 0.0, 0.0], [2.4799097682733903e-16, 4.05, 0.0], [2.4799097682733903e-16, 2.4799097682733903e-16, 4.05]], pbc=[True, True, True]) Here is rutile TiO2. from ase.lattice.spacegroup import crystal a = 4.6 c = 2.95 rutile = crystal(['Ti', 'O'], basis=[(0, 0, 0), (0.3, 0.3, 0.0)], spacegroup=136, cellpar=[a, a, c, 90, 90, 90]) print(rutile) 4.1.2 Using the Materials Project The Materials Project offers web access to a pretty large number of materials (over 21,000 at the time of this writing), including structure and other computed properties. You must sign up for an account at the website, and then you can access the information.
You can search for materials with lots of different criteria including formula, unit cell formula, by elements, by structure, etc… The website allows you to download the VASP files used to create the calculations. They also develop the pymatgen project (which requires python 2.7+). For example, I downloaded this cif file for a RuO\(_2\) structure (Material ID 825). #\#CIF1.1 ########################################################################## # Crystallographic Information Format file # Produced by PyCifRW module # # This is a CIF file. CIF has been adopted by the International # Union of Crystallography as the standard for data archiving and # transmission. # # For information on this file format, follow the CIF links at # ########################################################################## data_RuO2 _symmetry_space_group_name_H-M 'P 1' _cell_length_a 3.13970109 _cell_length_b 4.5436378 _cell_length_c 4.5436378 _cell_angle_alpha 90.0 _cell_angle_beta 90.0 _cell_angle_gamma 90.0 _chemical_name_systematic 'Generated by pymatgen' _symmetry_Int_Tables_number 1 _chemical_formula_structural RuO2 _chemical_formula_sum 'Ru2 O4' _cell_volume 64.8180127062 _cell_formula_units_Z 2 loop_ _symmetry_equiv_pos_site_id _symmetry_equiv_pos_as_xyz 1 'x, y, z' loop_ _atom_site_type_symbol _atom_site_label _atom_site_symmetry_multiplicity _atom_site_fract_x _atom_site_fract_y _atom_site_fract_z _atom_site_attached_hydrogens _atom_site_B_iso_or_equiv _atom_site_occupancy O O1 1 0.000000 0.694330 0.694330 0 . 1 O O2 1 0.500000 0.805670 0.194330 0 . 1 O O3 1 0.000000 0.305670 0.305670 0 . 1 O O4 1 0.500000 0.194330 0.805670 0 . 1 Ru Ru5 1 0.500000 0.500000 0.500000 0 . 1 Ru Ru6 1 0.000000 0.000000 0.000000 0 . 1 We can read this file in with ase.io.read. That function automatically recognizes the file type by the extension. 
from ase.io import read, write atoms = read('bulk/Ru2O4_1.cif') write('images/Ru2O4.png', atoms, show_unit_cell=2) Figure 36: An RuO2 unit cell prepared from a cif file. 4.2 Computational parameters that are important for bulk structures 4.2.1 k-point convergence In the section on molecules, we learned that the total energy is a function of the planewave cutoff energy (ENCUT) used. In bulk systems that is true also. There is also another calculation parameter you must consider, the k-point grid. The k-point grid is a computational tool used to approximate integrals of some property, e.g. the electron density, over the entire unit cell. The integration is performed in reciprocal space (i.e. in the Brillouin zone) for convenience and efficiency, and the k-point grid is where the property is sampled for the integration. The higher the number of sampled points, the more accurately the integrals are approximated. We will typically use a Monkhorst-Pack PhysRevB.13.5188 $k$-point grid, which is essentially a uniformly spaced grid in the Brillouin zone. Another less commonly used scheme is the Chadi-Cohen k-point grid PhysRevB.8.5747. The Monkhorst-Pack grids are specified as \(n1 \times n2 \times n3\) grids, and the total number of k-points is \(n1 \cdot n2 \cdot n3\). The computational cost is linear in the total number of k-points, so a calculation on a \(4 \times 4 \times 4\) grid will be roughly 8 times more expensive than on a \(2 \times 2 \times 2\) grid. Hence, one seeks again to balance convergence with computational tractability. Below we consider the k-point convergence of fcc Ag. 
from ase.lattice.cubic import FaceCenteredCubic from vasp import Vasp import numpy as np atoms = FaceCenteredCubic('Ag') KPTS = [2, 3, 4, 5, 6, 8, 10] TE = [] for k in KPTS: calc = Vasp('bulk/Ag-kpts-{0}'.format(k), xc='PBE', kpts=[k, k, k], # specifies the Monkhorst-Pack grid encut=300, atoms=atoms) TE.append(atoms.get_potential_energy()) if None in TE: calc.abort() import matplotlib.pyplot as plt # consider the change in energy from lowest energy state TE = np.array(TE) TE -= TE.min() plt.plot(KPTS, TE) plt.xlabel('number of k-points in each dimension') plt.ylabel('Total Energy (eV)') plt.savefig('images/Ag-kpt-convergence.png') Figure 37: k-point convergence of the total energy of fcc Ag. Based on this figure, we need at least a \(6 \times 6 \times 6\) k-point grid to achieve a convergence level of 50 meV or better. Note: the k-point convergence is not always monotonic like it is in this example, and sometimes very dense grids (e.g. up to \(20 \times 20 \times 20\)) are needed for highly converged properties such as the density of states in smaller unit cells. Oscillations in the total energy are typical, and it can be difficult to get high levels of convergence. The best practices are to use the same k-point sampling grid when computing energy differences where possible, and to use dense grids (high numbers of k-points) otherwise. It is important to check for convergence in these cases. As unit cells get larger, the number of k-points required becomes smaller. For example, if a \(1 \times 1 \times 1\) fcc unit cell shows converged energies with a \(12 \times 12 \times 12\) k-point grid, then a \(2 \times 2 \times 2\) fcc unit cell would show the same level of convergence with a \(6 \times 6 \times 6\) k-point grid. In other words, doubling the unit cell vectors results in a halving of the number of k-points. Sometimes you may see k-points described as k-points per reciprocal atom.
For example, a \(12 \times 12 \times 12\) k-point grid for a primitive fcc unit cell would be 1728 k-points per reciprocal atom. A \(2 \times 2 \times 2\) fcc unit cell has eight atoms in it, or 0.125 reciprocal atoms, so a \(6 \times 6 \times 6\) k-point grid has 216 k-points in it, or 216/0.125 = 1728 k-points per reciprocal atom, the same as we discussed before. In the k-point convergence example above, we used a \(6 \times 6 \times 6\) k-point grid on a unit cell with four atoms in it, leading to 864 k-points per reciprocal atom. If we had instead used the primitive unit cell, we would need either a \(9 \times 9 \times 9\) or \(10 \times 10 \times 10\) k-point grid to get a similar level of accuracy. In this case, there is no exact matching of k-point grids due to the difference in shape of the cells. 4.2.2 Effect of SIGMA In the self-consistent cycle of a DFT calculation, the total energy is minimized with respect to occupation of the Kohn-Sham orbitals. At absolute zero, a band is either occupied or empty. This discrete occupation results in discontinuous changes in energy with changes in occupation, which makes it difficult to converge. One solution is to artificially broaden the band occupancies, as if they were occupied at a higher temperature where partial occupation is possible. This results in a continuous dependence of energy on the partial occupancy, and dramatically increases the rate of convergence. SIGMA and ISMEAR affect how the partial occupancies of the bands are determined. Some rules to keep in mind: - The smearing methods were designed for metals. For molecules, semiconductors and insulators you should use a very small SIGMA (e.g. 0.01). - A standard value for metallic systems is SIGMA=0.1, but the best SIGMA may be material specific.
The consequence of this finite temperature is that additional bands must be included in the calculation to allow for the partially occupied states above the Fermi level; the number of extra bands depends on the temperature used. An example of the maximum occupancies of the bands for bulk Cu as a function of SIGMA is shown in Figure fig:sigma-occ. Obviously, as SIGMA approaches 0, the occupancy approaches a step function. It is preferable that the occupancy of several of the highest bands be zero (or at least of order \(1\times 10^{-8}\)) to ensure enough variational freedom was available in the calculation. Consequently, it is suggested that fifteen to twenty extra bands be used for a SIGMA of 0.20. In any case, you should verify that enough bands were used by examining the occupancies. It is undesirable to have too many extra bands, as this adds computational time. Below we show the effect of SIGMA on the band occupancies.

from vasp import Vasp
from ase import Atom, Atoms
import matplotlib.pyplot as plt
import numpy as np

a = 3.61

atoms = Atoms([Atom('Cu', (0, 0, 0))],
              cell=0.5 * a * np.array([[1.0, 1.0, 0.0],
                                       [0.0, 1.0, 1.0],
                                       [1.0, 0.0, 1.0]])).repeat((2, 2, 2))

SIGMA = [0.001, 0.05, 0.1, 0.2, 0.5]

for sigma in SIGMA:
    calc = Vasp('bulk/Cu-sigma-{0}'.format(sigma),
                xc='PBE',
                encut=350,
                kpts=[4, 4, 4],
                ismear=-1,
                sigma=sigma,
                nbands=9 * 8,
                atoms=atoms)
    if calc.potential_energy is not None:
        nbands = calc.parameters.nbands
        nkpts = len(calc.get_ibz_k_points())
        occ = np.zeros((nkpts, nbands))
        for i in range(nkpts):
            occ[i, :] = calc.get_occupation_numbers(kpt=i)
        max_occ = np.max(occ, axis=0)  # axis 0 is columns
        plt.plot(range(nbands), max_occ, label='$\sigma = {0}$'.format(sigma))

plt.xlabel('band number')
plt.ylabel('maximum occupancy (electrons)')
plt.ylim([-0.1, 2.1])
plt.legend(loc='best')
plt.savefig('images/occ-sigma.png')

Figure 38: Effects of SIGMA on the occupancies of the Cu system.
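The limiting behavior in Figure 38 can be reproduced with the Fermi-Dirac distribution itself (ISMEAR=-1 corresponds to Fermi smearing). This sketch evaluates the fractional occupancy of a state just below, at, and just above the Fermi level for decreasing σ; as σ shrinks, the occupancies approach the step function 1, 0.5, 0:

```python
import numpy as np

def fermi_occupancy(e, mu, sigma):
    """Fermi-Dirac occupancy of a state at energy e, for chemical
    potential mu and smearing width sigma (all in eV)."""
    return 1.0 / (np.exp((e - mu) / sigma) + 1.0)

mu = 0.0
for sigma in [0.5, 0.1, 0.01]:
    # occupancies 0.2 eV below, at, and 0.2 eV above the Fermi level
    occ = [fermi_occupancy(e, mu, sigma) for e in (-0.2, 0.0, 0.2)]
    print(sigma, ['{0:1.3f}'.format(o) for o in occ])
```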
4.2.3 The number of bands

In the last figure, it is evident that due to the smearing of the electronic states you need extra bands to accommodate the electrons above the Fermi level, and the higher the SIGMA value, the more bands you need. You need enough bands that the highest energy bands are unoccupied; if not, VASP will give you a warning that looks like this:

 -----------------------------------------------------------------------------
|                                                                             |
|  ADVICE TO THIS USER RUNNING 'VASP/VAMP' (HEAR YOUR MASTER'S VOICE ...):    |
|                                                                             |
|      Your highest band is occupied at some k-points! Unless you are         |
|      performing a calculation for an insulator or semiconductor, without    |
|      unoccupied bands, you have included TOO FEW BANDS!! Please increase    |
|      the parameter NBANDS in file 'INCAR' to ensure that the highest band   |
|      is unoccupied at all k-points. It is always recommended to             |
|      include a few unoccupied bands to accelerate the convergence of        |
|      molecular dynamics runs (even for insulators or semiconductors).       |
|      Because the presence of unoccupied bands improves wavefunction         |
|      prediction, and helps to suppress 'band-crossings.'                    |
|      Following all k-points will be listed (with the Fermi weights of       |
|      the highest band given in paranthesis) ... :                           |
|                                                                             |
|          6 (-0.01472)                                                       |
|          8 (-0.01413)                                                       |
|         13 (-0.01733)                                                       |
|         14 (-0.01838)                                                       |
|                                                                             |
|      The total occupancy of band no.  49 is  -0.00932 electrons ...         |
|                                                                             |
 -----------------------------------------------------------------------------

We tell VASP the number of bands to use with the NBANDS keyword. VASP will set NBANDS automatically if you do not provide a value, but this is in general bad practice (even though it is often done in this book!). There are a few general guidelines for setting NBANDS.
First we recognize that a band can only hold two electrons (one spin up, and one spin down) in a calculation without spin-polarization, or one electron per band for a spin-polarized calculation (note that spin-polarization doubles the number of bands). There absolutely must be enough bands to accommodate all the electrons, so the minimum number of bands is int(ceil(nelectrons/2)). Here is an example of what this equation does.

import numpy as np
print int(np.ceil(50 / 2.))
print int(np.ceil(51 / 2.))

25
26

However, due to the smearing, the minimum number of bands is almost never enough, and we always add more bands. The default behavior in VASP is NBANDS = NELECT/2 + NIONS/2 for non-spin-polarized calculations, and NBANDS = 0.6*NELECT for spin-polarized ones. These defaults do not always work, especially for small molecular systems where NIONS/2 may be only 1, or transition metals where it may be necessary to add up to 2*NIONS extra bands. To figure out how many bands you need, it is necessary to know how many electrons are in your calculation. Vasp.get_valence_electrons provides this for you. Alternatively, you can look in the Appendix for a table listing the number of valence electrons for each POTCAR file. Armed with this information you can set NBANDS the way you want.

],
 nbands=9,
 ibrion=2,
 isif=4,
 nsw=10,
 atoms=atoms)

print(calc.get_valence_electrons())
print(calc.potential_energy)

11.0
-3.73436945

For this calculation we need at least 6 bands (11/2 = 5.5, which rounds up to 6) and we need to include some extra bands. The default rule would only add half a band, which is not enough. We add three additional bands. This system is so small it does not substantially increase the computational cost. If you are too trifling to do that much work, you can use Vasp.set_nbands to automatically set the number of bands. This function takes an argument N to set the number of bands to N, or an argument f to set NBANDS according to the formula \(nbands = int(nelectrons/2 + len(atoms)*f)\). The default value of f is 1.5. If you want the default VASP behavior, set f=0.5.
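The formula behind set_nbands can be written as a small standalone helper. This is a sketch of the bookkeeping only (a hypothetical function, not the actual Vasp implementation):

```python
def nbands(nelectrons, natoms, f=1.5):
    """Suggest NBANDS: half a band per electron, plus f extra bands
    per atom.  f=0.5 mimics the default VASP heuristic for
    non-spin-polarized calculations."""
    return int(nelectrons / 2.0 + natoms * f)

# the Cu example above: 11 valence electrons in a 1-atom cell
print(nbands(11, 1))         # 7
print(nbands(11, 1, f=0.5))  # 6
```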
For transition metals, it may be required that f=2. This function does not consider whether the calculation is spin-polarized or not. Here is an example of using Vasp.set_nbands.

],
 ibrion=2,
 isif=4,
 nsw=10,
 atoms=atoms)

calc.set_nbands(f=7)
calc.write_input()  # you have to write out the input for it to take effect
print calc

*************** VASP CALCULATION SUMMARY ***************

Vasp calculation directory:
---------------------------
  [[/home-research/jkitchin/dft-book/bulk/alloy/cu]]
  : 13)

Note the defaults that were set.

-setnbands',
 xc='PBE',
 encut=350,
 kpts=[13, 13, 13],
 ibrion=2,
 isif=4,
 nsw=10,
 atoms=atoms)

calc.set_nbands(f=3)
calc.write_input()
print calc

*************** VASP CALCULATION SUMMARY ***************

Vasp calculation directory:
---------------------------
  [[/home-research/jkitchin/dft-book/bulk/alloy/cu-setnbands]]
  : 9)

You are, of course, free to use any formula you want to set the number of bands. Some formulas I have used in the past include:

- NBANDS = 0.65*NELECT + 10
- NBANDS = 0.5*NELECT + 15
- etc…

4.3 Determining bulk structures

What we typically mean by determining bulk structures includes the following:

- What is the most stable crystal structure for a material?
- What is the lattice constant of fcc Cu?
- What are the lattice parameters and internal atom parameters for TiO2?

All of these questions can often be addressed by finding the volume, shape and atomic positions that minimize the total energy of a bulk system. This is true at 0 K. At higher temperatures, one must minimize the free energy, rather than the internal energy.

4.3.1 fcc/bcc crystal structures

The fcc and bcc structures are simple. They only have one degree of freedom: the lattice constant. In this section we show how to calculate the equilibrium volume of each structure, and determine which one is more stable. We start with the fcc crystal structure of Cu. We will manually define the crystal structure based on the definitions in Kittel kittel (Chapter 1).
from vasp import Vasp
from ase import Atom, Atoms
import numpy as np

# fcc
LC = [3.5, 3.55, 3.6, 3.65, 3.7, 3.75]

fcc_energies = []
ready = True
for a in LC:
    atoms = Atoms([Atom('Cu', (0, 0, 0))],
                  cell=0.5 * a * np.array([[1.0, 1.0, 0.0],
                                           [0.0, 1.0, 1.0],
                                           [1.0, 0.0, 1.0]]))
    calc = Vasp('bulk/Cu-{0}'.format(a),
                xc='PBE',
                encut=350,
                kpts=[8, 8, 8],
                atoms=atoms)
    e = atoms.get_potential_energy()
    fcc_energies.append(e)

calc.stop_if(None in fcc_energies)

import matplotlib.pyplot as plt
plt.plot(LC, fcc_energies)
plt.xlabel('Lattice constant ($\AA$)')
plt.ylabel('Total energy (eV)')
plt.savefig('images/Cu-fcc.png')

print '#+tblname: cu-fcc-energies'
print r'| lattice constant ($\AA$) | Total Energy (eV) |'
for lc, e in zip(LC, fcc_energies):
    print '| {0} | {1} |'.format(lc, e)

Use the data in the table above to plot the total energy as a function of the lattice constant. Fit a cubic polynomial to the data, and find the volume that minimizes the total energy.

Figure 39: Total energy vs. fcc lattice constant for Cu.

It appears the minimum is near 3.65 Å. If you want to know the lattice constant that gives the lowest energy, you would fit an equation of state to the data. Here is an example using ase.utils.eos. See also the appendix equations of state.

from vasp import Vasp
from ase.utils.eos import EquationOfState

LC = [3.5, 3.55, 3.6, 3.65, 3.7, 3.75]

energies = []
volumes = []
for a in LC:
    calc = Vasp('bulk/Cu-{0}'.format(a))
    atoms = calc.get_atoms()
    volumes.append(atoms.get_volume())
    energies.append(atoms.get_potential_energy())

calc.stop_if(None in energies)

eos = EquationOfState(volumes, energies)
v0, e0, B = eos.fit()

print '''
v0 = {0} A^3
E0 = {1} eV
B  = {2} eV/A^3'''.format(v0, e0, B)

eos.plot('images/Cu-fcc-eos.png')

v0 = 11.9941760954 A^3
E0 = -3.73528237713 eV
B  = 0.862553823078 eV/A^3

3.63585568663

Figure 40: Total energy vs. volume for fcc Cu with fitted cubic polynomial equation of state.
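The cubic-polynomial exercise above can be sketched with numpy alone: fit E(a), differentiate the polynomial, and keep the real root with positive curvature inside the data range. The energies here are illustrative numbers with a minimum near 3.65 Å, not the VASP results:

```python
import numpy as np

# lattice constants (Ang) and illustrative total energies (eV)
LC = np.array([3.5, 3.55, 3.6, 3.65, 3.7, 3.75])
E = np.array([-3.65, -3.70, -3.73, -3.735, -3.72, -3.69])

# fit E(a) with a cubic polynomial
p = np.polyfit(LC, E, 3)

# stationary points are the roots of dE/da
dp = np.polyder(p)
roots = np.roots(dp)

# keep the real root inside the data range with positive curvature
d2p = np.polyder(dp)
a_min = [r.real for r in roots
         if abs(r.imag) < 1e-8
         and LC.min() <= r.real <= LC.max()
         and np.polyval(d2p, r.real) > 0][0]

print('a_min = {0:1.3f} Ang'.format(a_min))
```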
Before we jump into the bcc calculations, let us consider what range of lattice constants we should choose. The fcc lattice is close-packed, and the volume of the primitive cell is \(V = 1/4 a^3\), or about 11.8 Å^3/atom. The volume of the equilibrium bcc primitive cell will probably be similar to that. The question is: what bcc lattice constant gives that volume? The simplest way to answer this is to compute the answer. We will make a bcc crystal at the fcc lattice constant, and then compute the scaling factor needed to make it the right volume.

from ase import Atom, Atoms
import numpy as np

a = 3.61  # lattice constant
atoms = Atoms([Atom('Cu', [0, 0, 0])],
              cell=0.5 * a * np.array([[ 1.0,  1.0, -1.0],
                                       [-1.0,  1.0,  1.0],
                                       [ 1.0, -1.0,  1.0]]))

print 'BCC lattice constant = {0:1.3f} Ang'.format(a * (11.8 / atoms.get_volume())**(1. / 3.))

BCC lattice constant = 2.868 Ang

Now we run the equation of state calculations.

from vasp import Vasp
from ase import Atom, Atoms
import numpy as np

LC = [2.75, 2.8, 2.85, 2.9, 2.95, 3.0]

for a in LC:
    atoms = Atoms([Atom('Cu', [0, 0, 0])],
                  cell=0.5 * a * np.array([[ 1.0,  1.0, -1.0],
                                           [-1.0,  1.0,  1.0],
                                           [ 1.0, -1.0,  1.0]]))
    calc = Vasp('bulk/Cu-bcc-{0}'.format(a),
                xc='PBE',
                encut=350,
                kpts=[8, 8, 8],
                atoms=atoms)
    print(calc.potential_energy)

-3.59937543
-3.67930795
-3.71927399
-3.72637899
-3.70697046
-3.66645678

Finally, we will compare the two crystal structures.
from vasp import Vasp

# bcc energies and volumes
bcc_LC = [2.75, 2.8, 2.85, 2.9, 2.95, 3.0]
bcc_volumes = []
bcc_energies = []
for a in bcc_LC:
    calc = Vasp('bulk/Cu-bcc-{0}'.format(a))
    atoms = calc.get_atoms()
    bcc_volumes.append(atoms.get_volume())
    bcc_energies.append(atoms.get_potential_energy())

# fcc energies and volumes
fcc_LC = [3.5, 3.55, 3.6, 3.65, 3.7, 3.75]
fcc_volumes = []
fcc_energies = []
for a in fcc_LC:
    calc = Vasp('bulk/Cu-{0}'.format(a))
    atoms = calc.get_atoms()
    fcc_volumes.append(atoms.get_volume())
    fcc_energies.append(atoms.get_potential_energy())

import matplotlib.pyplot as plt
plt.plot(fcc_volumes, fcc_energies, label='fcc')
plt.plot(bcc_volumes, bcc_energies, label='bcc')
plt.xlabel('Atomic volume ($\AA^3$/atom)')
plt.ylabel('Total energy (eV)')
plt.legend()
plt.savefig('images/Cu-bcc-fcc.png')

# print table of data
print '#+tblname: bcc-data'
print '#+caption: Total energy vs. lattice constant for BCC Cu.'
print '| Lattice constant ($\AA$) | Total energy (eV) |'
print '|-'
for lc, e in zip(bcc_LC, bcc_energies):
    print '| {0} | {1} |'.format(lc, e)

Use the data for FCC and BCC Cu to plot the total energy as a function of the lattice constant.

Figure 41: Comparison of energies between fcc and bcc Cu. The fcc structure is lower in energy.

Note we plot the energy vs. atomic volume, because the lattice constants of the two crystal structures are very different. The plot also shows that the atomic volumes in the two structures are similar. What can we say here? The fcc structure has a lower energy than the bcc structure, so we can conclude the fcc structure is more favorable. In fact, fcc is the experimentally observed structure for Cu. Some caution is in order; if you run these calculations with a \(4 \times 4 \times 4\) $k$-point grid, the bcc structure appears more stable because the results are not converged! Compute the energy vs. volume for fcc and bcc Cu for different $k$-point grids.
Determine when each result has converged, and which structure is more stable. What can we say about the relative stability of fcc to hcp? Nothing, until we calculate the hcp equation of state.

4.3.2 Optimizing the hcp lattice constant

The hcp lattice is more complicated than the fcc/bcc lattices because there are two lattice parameters: \(a\) and \(c\), or equivalently \(a\) and \(c/a\). We will start by making a grid of values and find the set of parameters that minimizes the energy. See Figure fig:ru-e-ca.

from ase.lattice.hexagonal import HexagonalClosedPacked
from vasp import Vasp
import matplotlib.pyplot as plt

atoms = HexagonalClosedPacked(symbol='Ru',
                              latticeconstant={'a': 2.7, 'c/a': 1.584})

a_list = [2.5, 2.6, 2.7, 2.8, 2.9]
covera_list = [1.4, 1.5, 1.6, 1.7, 1.8]

for a in a_list:
    energies = []
    for covera in covera_list:
        atoms = HexagonalClosedPacked(symbol='Ru',
                                      latticeconstant={'a': a,
                                                       'c/a': covera})
        wd = 'bulk/Ru/{0:1.2f}-{1:1.2f}'.format(a, covera)
        calc = Vasp(wd,
                    xc='PBE',
                    # the c-axis is longer than the a-axis, so we use
                    # fewer kpoints.
                    kpts=[6, 6, 4],
                    encut=350,
                    atoms=atoms)
        energies.append(atoms.get_potential_energy())
    if not None in energies:
        plt.plot(covera_list, energies, label=r'a={0} $\AA$'.format(a))

plt.xlabel('$c/a$')
plt.ylabel('Energy (eV)')
plt.legend()
plt.savefig('images/Ru-covera-scan.png')

Figure 42: Total energy vs. \(c/a\) for different values of \(a\). \label{fig:ru-e-ca}

It looks like there is a minimum in the a=2.7 Å curve, at a \(c/a\) ratio of about 1.6. We can look at the same data in a contour plot, which shows more clearly that there is a minimum in all directions near that point (Figure fig:ru-contourf).
from vasp import Vasp
import matplotlib.pyplot as plt
import numpy as np

x = [2.5, 2.6, 2.7, 2.8, 2.9]
y = [1.4, 1.5, 1.6, 1.7, 1.8]

X, Y = np.meshgrid(x, y)
Z = np.zeros(X.shape)

for i, a in enumerate(x):
    for j, covera in enumerate(y):
        wd = 'bulk/Ru/{0:1.2f}-{1:1.2f}'.format(a, covera)
        calc = Vasp(wd)
        # rows of Z index c/a (y), columns index a (x)
        Z[j][i] = calc.potential_energy

calc.stop_if(None in Z)

cf = plt.contourf(X, Y, Z, 20, cmap=plt.cm.jet)
cbar = plt.colorbar(cf)
cbar.ax.set_ylabel('Energy (eV)')

plt.xlabel('$a$ ($\AA$)')
plt.ylabel('$c/a$')
plt.savefig('images/ru-contourf.png')
plt.show()

Figure 43: Contour plot of the total energy of hcp Ru for different values of \(a\) and \(c/a\). \label{fig:ru-contourf}

4.3.3 Complex structures with internal degrees of freedom

A unit cell has six degrees of freedom: the lengths of the three unit cell vectors, and the angles between them. There may additionally be internal degrees of freedom for the atoms. It is impractical to use the grid approach we used for hcp Ru on anything more complicated. Instead, we rely again on algorithms to optimize the unit cell shape, volume and internal degrees of freedom. It is usually not efficient to make a wild guess of the geometry and then turn VASP loose to optimize it. Instead, the following algorithm works pretty well.

- Find the volume (at constant shape, with relaxed ions) that minimizes the total energy (ISIF=2). The goal here is just to get an idea of where the right volume is.
- Using the results from step 1 as a starting point, perform a set of calculations at constant volume around the minimum from step 1, where the shape and internal atom positions are allowed to change (ISIF=4).
- Finally, do a final calculation near the minimum energy, allowing the volume to also change (ISIF=3).

This multistep process gives a converged structure pretty quickly. It is not foolproof, however, and for materials such as graphite it may not work well.
The problem with graphite is that it is a layered compound held together by weak van der Waals forces, which are not modeled well by typical GGA functionals. Thus the change in energy due to a volume change is larger in the plane of the graphite sheet than in the direction normal to the sheet. With a typical GGA, the sheets may just move apart until they do not interact any more. We will illustrate the process on a well-behaved system (rutile TiO2), which has two lattice parameters and one internal degree of freedom.

There are a few subtle points to mention in doing these calculations. The VASP manual recommends that you set PREC to 'high', and that ENCUT be set to 1.3*max(ENMAX) of the pseudopotentials. This is necessary to avoid problems caused by small basis sets when the volume changes, and by the Pulay stress. It is also important to ensure that the energies are reasonably converged with respect to the k-point grid. Hence, it can be a significant amount of work to do this right!

Let us start with determining the ENCUT value that is appropriate for TiO2.

grep ENMAX $VASP_PP_PATH/potpaw_PBE/Ti/POTCAR
grep ENMAX $VASP_PP_PATH/potpaw_PBE/O/POTCAR

ENMAX = 178.330; ENMIN = 133.747 eV
ENMAX = 400.000; ENMIN = 300.000 eV

According to the manual, we should use ENCUT = 1.3*400 = 520 eV for good results. Now we consider the k-point convergence. The lattice vectors of the rutile TiO2 structure are not all the same length, which means it is not essential that we use the same number of k-points in each direction. For simplicity, however, we do that here.

# step 1 frozen atoms and shape at different volumes
from ase import Atom, Atoms
import numpy as np
from vasp import Vasp
import matplotlib.pyplot as plt

'''
create a TiO2 structure from the lattice vectors at
This site does not exist anymore.
'''])

KPOINTS = [2, 3, 4, 5, 6, 7, 8]

energies = []
ready = True
for k in KPOINTS:
    calc = Vasp('bulk/tio2/kpts-{0}'.format(k),
                encut=520,
                kpts=[k, k, k],
                xc='PBE',
                sigma=0.05,
                atoms=atoms)
    energies.append(atoms.get_potential_energy())

calc.stop_if(None in energies)

plt.plot(KPOINTS, energies)
plt.xlabel('number of k-points in each vector')
plt.ylabel('Total energy (eV)')
plt.savefig('images/tio2-kpt-convergence.png')

Figure 44: k-point convergence of rutile TiO2.

A k-point grid of \(5 \times 5 \times 5\) appears suitable for reasonably converged results. Now we proceed with step 1: compute the total energy of the unit cell allowing internal degrees of freedom to relax, but keeping a constant cell shape.

# step 1 frozen atoms and shape at different volumes
from ase import Atom, Atoms
import numpy as np
from vasp import Vasp
import matplotlib.pyplot as plt

'''
create a TiO2 structure from the lattice vectors at
'''])

v0 = atoms.get_volume()
cell0 = atoms.get_cell()

factors = [0.9, 0.95, 1.0, 1.05, 1.1]  # to change volume by

energies, volumes = [], []
ready = True
for f in factors:
    v1 = f * v0
    cell_factor = (v1 / v0)**(1. / 3.)
    atoms.set_cell(cell0 * cell_factor, scale_atoms=True)
    calc = Vasp('bulk/tio2/step1-{0:1.2f}'.format(f),
                encut=520,
                kpts=[5, 5, 5],
                isif=2,  # relax internal degrees of freedom
                ibrion=1,
                nsw=50,
                xc='PBE',
                sigma=0.05,
                atoms=atoms)
    energies.append(atoms.get_potential_energy())
    volumes.append(atoms.get_volume())

calc.stop_if(None in energies)

plt.plot(volumes, energies)
plt.xlabel('Vol. ($\AA^3)$')
plt.ylabel('Total energy (eV)')
plt.savefig('images/tio2-step1.png')

print '#+tblname: tio2-vol-ene'
print '#+caption: Total energy of TiO_{2} vs. volume.'
print '| Volume ($\AA^3$) | Energy (eV) |'
print '|-'
for v, e in zip(volumes, energies):
    print '| {0} | {1} |'.format(v, e)

Figure 45: Total energy vs. volume for rutile TiO2 in step 1 of the optimization.

Now we know the minimum energy volume is near 64 Å^3.
You could at this point fit an equation of state to find that minimum. However, we now want to use these initial starting points for a second round of optimization, in which we allow the unit cell shape to change at constant volume: ISIF=4.

from vasp import Vasp

calc = Vasp('bulk/tio2/step1-0.90')
calc.clone('bulk/tio2/step2-0.90')

print calc.set(isif=4)
print calc.calculation_required()

clone: Atoms(symbols='Ti2O4', positions=..., magmoms=..., cell=[4.41041021, 4.41041021, 2.88537073], pbc=[True, True, True], calculator=SinglePointCalculator(...))
{}
False

from vasp import Vasp

factors = [0.9, 0.95, 1.0, 1.05, 1.1]  # to change volume by

energies1, volumes1 = [], []  # from step 1
energies, volumes = [], []    # for step 2

ready = True
for f in factors:
    calc = Vasp('bulk/tio2/step1-{0:1.2f}'.format(f))
    atoms = calc.get_atoms()
    energies1.append(atoms.get_potential_energy())
    volumes1.append(atoms.get_volume())

    calc.clone('bulk/tio2/step2-{0:1.2f}'.format(f))
    calc.set(isif=4)

    # You have to get the atoms again.
    atoms = calc.get_atoms()
    energies.append(atoms.get_potential_energy())
    volumes.append(atoms.get_volume())

print(energies, volumes)
calc.stop_if(None in energies)

import matplotlib.pyplot as plt
plt.plot(volumes1, energies1, volumes, energies)
plt.xlabel('Vol. ($\AA^3)$')
plt.ylabel('Total energy (eV)')
plt.legend(['step 1', 'step 2'], loc='best')
plt.savefig('images/tio2-step2.png')

([-51.82715553, -52.46235848, -52.76127768, -52.80903199, -52.67597935], [56.125418401558292, 59.243497055444799, 62.36157600817976, 65.47965475812434, 68.597733708605929])

Figure 46: Total energy vs. volume for step 2 of the unit cell optimization.

The take-away message here is that the total energy decreases slightly when we allow the unit cell shape to change, especially for the larger unit cell deformations. This has little effect on the minimum volume, but would have an effect on the bulk modulus, which is related to the curvature of the equation of state.
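Since the bulk modulus is the curvature of the equation of state at the minimum, \(B = V \, d^2E/dV^2\), it can be estimated from any smooth fit of E(V). Here is a sketch with an illustrative parabolic E(V) centered near the minimum found above (the numbers are made up for the example, not the TiO2 results):

```python
import numpy as np

# illustrative E(V): a parabola with a minimum at V0 = 64 Ang^3
V0, E0, k = 64.0, -52.8, 0.02   # curvature k in eV/Ang^6
V = np.linspace(56, 72, 9)
E = E0 + 0.5 * k * (V - V0)**2

# a quadratic fit recovers the curvature d2E/dV2 = 2*a
a, b, c = np.polyfit(V, E, 2)
vmin = -b / (2 * a)
B = vmin * 2 * a                 # B = V * d2E/dV2, in eV/Ang^3

print('V_min = {0:1.1f} Ang^3'.format(vmin))
print('B = {0:1.1f} GPa'.format(B * 160.21766))  # 1 eV/Ang^3 = 160.2 GPa
```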
At this point, you could fit an equation of state to the step 2 data, estimate the volume at the minimum, and recalculate the total energy at that volume. An alternative is a final calculation with ISIF=3, which optimizes the unit cell volume, shape and internal coordinates. It looks like the calculation at bulk/tio2/step2-1.05 is close to the minimum, so we will use that as a starting point for the final calculation.

from vasp import Vasp

calc = Vasp('bulk/tio2/step2-1.05')
calc.clone('bulk/tio2/step3')

calc = Vasp('bulk/tio2/step3', isif=3)
calc.wait()
print calc

from pyspglib import spglib
print '\nThe spacegroup is {0}'.format(spglib.get_spacegroup(calc.atoms))

*************** VASP CALCULATION SUMMARY ***************

Vasp calculation directory:
---------------------------
  [[/home-research/jkitchin/dft-book/bulk/tio2/step3]]

Unit cell:
----------
        x       y       z    |v|
  v0  4.661   0.000   0.000  4.661 Ang
  v1  0.000   4.661   0.000  4.661 Ang
  v2  0.000   0.000   2.970  2.970 Ang
  alpha, beta, gamma (deg): 90.0 90.0 90.0
  Total volume: 64.535 Ang^3

  Stress:    xx     yy     zz     yz     xz     xy
          -0.002 -0.002 -0.000 -0.000 -0.000 -0.000 GPa

 ID  tag  sym    x      y      z   rmsF (eV/A)
  0    0   Ti  0.000  0.000  0.000   0.00
  1    0   Ti  2.331  2.331  1.485   0.00
  2    0    O  1.420  1.420  0.000   0.00
  3    0    O  3.241  3.241  0.000   0.00
  4    0    O  3.751  0.910  1.485   0.00
  5    0    O  0.910  3.751  1.485   0.00

Potential energy: -52.8176 eV

INPUT Parameters:
-----------------
  pp     : PBE
  isif   : 3
  xc     : pbe
  kpts   : [5, 5, 5]
  encut  : 520
  lcharg : False
  ibrion : 1
  ismear : 1
  lwave  : False
  sigma  : 0.05
  nsw    : 50

Pseudopotentials used:
----------------------
  Ti: potpaw_PBE/Ti/POTCAR (git-hash: 39cac2d7c620efc80c69344da61b5c43bc16e9b8)
  O: potpaw_PBE/O/POTCAR (git-hash: 592f34096943a6f30db8749d13efca516d75ec55)

The spacegroup is P4_2/mnm (136)

This is the final result. You can see that the forces on all the atoms are less than 0.01 eV/Å, and the stress is also very small. The final volume is close to where we expect it to be based on steps 1 and 2. The space group is still correct.
The lattice vectors are close in length to the experimentally known values, and the angles between the vectors have not changed much. Looks good! As a final note, the VASP manual recommends that you do not use the final energy directly from a relaxation, but rather run a final static calculation with ISMEAR set to -5. Here we examine the effect.

from vasp import Vasp

calc = Vasp('bulk/tio2/step3')
atoms = calc.get_atoms()
print 'default ismear: ', atoms.get_potential_energy()

calc.clone('bulk/tio2/step4')
calc.set(ismear=-5, nsw=0)
atoms = calc.get_atoms()
print 'ismear=-5: ', atoms.get_potential_energy()

default ismear:  -52.81760338
ismear=-5:  -52.8004532

The difference here is about 17 meV, which does not seem significant here. I suspect the recommended practice stems from the early days when much smaller ENCUT values were used and changes in the number of basis functions were more significant.

4.3.4 Effect of XC on bulk properties

The exchange-correlation functional can significantly affect computed bulk properties. Here, we examine its effect on the bulk lattice constant of Pd (exp. 3.881 Å). An excellent review of this can be found in mattsson-084714. We examine several functionals. The xc keyword in Vasp is used to select the POTCARs. Let us consider the LDA functional first.
from vasp import Vasp
from ase import Atom, Atoms
from ase.utils.eos import EquationOfState
import numpy as np

LC = [3.75, 3.80, 3.85, 3.90, 3.95, 4.0, 4.05, 4.1]

volumes, energies = [], []
for a in LC:
    atoms = Atoms([Atom('Pd', (0, 0, 0))],
                  cell=0.5 * a * np.array([[1.0, 1.0, 0.0],
                                           [0.0, 1.0, 1.0],
                                           [1.0, 0.0, 1.0]]))
    calc = Vasp('bulk/Pd-LDA-{0}'.format(a),
                encut=350,
                kpts=[12, 12, 12],
                xc='LDA',
                atoms=atoms)
    e = atoms.get_potential_energy()
    energies.append(e)
    volumes.append(atoms.get_volume())

calc.stop_if(None in energies)

eos = EquationOfState(volumes, energies)
v0, e0, B = eos.fit()
print('LDA lattice constant is {0:1.3f} Ang'.format((4 * v0)**(1. / 3.)))

LDA lattice constant is 3.841 Ang

For a GGA calculation, it is possible to specify which functional you want via the GGA tag. This tag was designed to use the LDA POTCAR files, but with a GGA functional. We will consider four different functionals here.

from vasp import Vasp
from ase import Atom, Atoms
from ase.utils.eos import EquationOfState
import numpy as np

LC = [3.75, 3.80, 3.85, 3.90, 3.95, 4.0, 4.05, 4.1]

GGA = {'AM': 'AM05',
       'PE': 'PBE',
       'PS': 'PBEsol',
       'RP': 'RPBE'}

for key in GGA:
    volumes, energies = [], []
    for a in LC:
        atoms = Atoms([Atom('Pd', (0, 0, 0))],
                      cell=0.5 * a * np.array([[1.0, 1.0, 0.0],
                                               [0.0, 1.0, 1.0],
                                               [1.0, 0.0, 1.0]]))
        calc = Vasp('bulk/Pd-GGA-{1}-{0}'.format(a, key),
                    encut=350,
                    kpts=[12, 12, 12],
                    xc='LDA',
                    gga=key,
                    atoms=atoms)
        e = atoms.get_potential_energy()
        energies.append(e)
        volumes.append(atoms.get_volume())

    if None not in energies:
        eos = EquationOfState(volumes, energies)
        v0, e0, B = eos.fit()
        print '{1:6s} lattice constant is {0:1.3f} Ang'.format((4 * v0)**(1. / 3.), GGA[key])
    else:
        print energies, LC
        print '{0} is not ready'.format(GGA[key])

PBEsol lattice constant is 3.841 Ang
AM05   lattice constant is 3.841 Ang
RPBE   lattice constant is 3.841 Ang
PBE    lattice constant is 3.939 Ang

These results compare very favorably to those in mattsson-084714.
It is typical that LDA functionals underestimate lattice constants, and that GGAs tend to overestimate them. PBEsol and AM05 were designed specifically for solids, and for Pd these functionals do an exceptional job of reproducing the lattice constant. RPBE is particularly bad at the lattice constant, but it has been reported to be a superior functional for reactivity hammer1999:improv-pbe.

4.4 TODO Using built-in ase optimization with vasp

ASE has some nice optimization tools built into it. We can use them with vasp too. This example is adapted from this test. First, the VASP way.

from vasp import Vasp
from ase.lattice import bulk

Al = bulk('Al', 'fcc', a=4.5, cubic=True)

calc = Vasp('bulk/Al-lda-vasp',
            xc='LDA',
            isif=7,
            nsw=5,
            ibrion=1,
            ediffg=-1e-3,
            lwave=False,
            lcharg=False,
            atoms=Al)
print(calc.potential_energy)
print(calc)

-10.07430725

Vasp calculation in /home-research/jkitchin/dft-book/bulk/Al-lda-vasp

INCAR created by Atomic Simulation Environment
  ISIF = 7
  LCHARG = .FALSE.
  IBRION = 1
  EDIFFG = -0.001
  ISMEAR = 1
  LWAVE = .TRUE.
  SIGMA = 0.1
  NSW = 5

Al
 1.0000000000000000
     4.5000000000000000    0.0000000000000000    0.0000000000000000
     0.0000000000000000    4.5000000000000000    0.0000000000000000
     0.0000000000000000    0.0000000000000000    4.5000000000000000
   4
Cartesian
  0.0000000000000000  0.0000000000000000  0.0000000000000000
  0.0000000000000000  2.2500000000000000  2.2500000000000000
  2.2500000000000000  0.0000000000000000  2.2500000000000000
  2.2500000000000000  2.2500000000000000  0.0000000000000000

from vasp import Vasp

calc = Vasp('bulk/Al-lda-vasp')
calc.view()
print [atoms.get_volume() for atoms in calc.traj]
print [atoms.get_potential_energy() for atoms in calc.traj]

[91.124999999999986, 78.034123525818302, 72.328582812881763, 73.422437849114189, 73.368474506164134]
[-9.58448747, -10.02992063, -10.07180132, -10.07429962, -10.07430725]

Now, the ASE way.
from vasp import Vasp
from ase.lattice import bulk
from ase.optimize import BFGS as QuasiNewton

Al = bulk('Al', 'fcc', a=4.5, cubic=True)

calc = Vasp('bulk/Al-lda-ase',
            xc='LDA',
            atoms=Al)

from ase.constraints import StrainFilter
sf = StrainFilter(Al)

qn = QuasiNewton(sf, logfile='relaxation.log')
qn.run(fmax=0.1, steps=5)

print('Stress:\n', calc.stress)
print('Al post ASE volume relaxation\n', calc.get_atoms().get_cell())
print(calc)

Now for a detailed comparison:

from vasp import Vasp

atoms = Vasp('bulk/Al-lda-vasp').get_atoms()
atoms2 = Vasp('bulk/Al-lda-ase').get_atoms()

import numpy as np
cellA = atoms.get_cell()
cellB = atoms2.get_cell()
print((np.abs(cellA - cellB) < 0.01).all())

False

This could be handy if you want to use any of the optimizers in ase.optimize in conjunction with ase.constraints, which are more advanced than what is in VASP.

4.5 Cohesive energy

The cohesive energy is defined as the energy required to separate a solid at 0 K and 1 atm into neutral atoms in their ground electronic state. We will compute this for rhodium. Rh is normally an fcc metal, so we will use that structure and let VASP find the equilibrium volume for us.

])],
  cell=(7, 8, 9))

calc = Vasp('bulk/atomic-rh',
            xc='PBE',
            encut=350,
            kpts=[1, 1, 1],
            atoms=atoms)
atomic_energy = atoms.get_potential_energy()

calc.stop_if(None in (bulk_energy, atomic_energy))

cohesive_energy = atomic_energy - bulk_energy
print 'The cohesive energy is {0:1.3f} eV'.format(cohesive_energy)

The cohesive energy is 6.184 eV

According to Kittel kittel, the cohesive energy of Rh is 5.75 eV. There are a few reasons we may have a discrepancy here:

- The k-point grid used in the bulk state is not very dense. However, you can see below that the total energy is pretty well converged by a \(6 \times 6 \times 6\) $k$-point grid.
- We did not check for convergence with the planewave cutoff.
- We neglected spin on the atomic state.
Rh in the atomic state has the electronic structure [Kr] 4d^8 5s^1 and is a doublet. First we consider the k-point convergence.

from vasp import Vasp

calc = Vasp('bulk/atomic-rh')
atomic_energy = calc.potential_energy

calc = Vasp('bulk/bulk-rh')
atoms = calc.get_atoms()

kpts = [3, 4, 6, 9, 12, 15, 18]

calcs = [Vasp('bulk/bulk-rh-kpts-{0}'.format(k),
              xc='PBE',
              encut=350,
              kpts=[k, k, k],
              atoms=atoms)
         for k in kpts]

energies = [calc.potential_energy for calc in calcs]
calcs[0].stop_if(None in energies)

for k, e in zip(kpts, energies):
    print('({0:2d}, {0:2d}, {0:2d}):'
          ' cohesive energy = {1} eV'.format(k, e - atomic_energy))

( 3,  3,  3): cohesive energy = -4.76129426 eV
( 4,  4,  4): cohesive energy = -6.17915613 eV
( 6,  6,  6): cohesive energy = -6.20654198 eV
( 9,  9,  9): cohesive energy = -6.20118094 eV
(12, 12, 12): cohesive energy = -6.20897225 eV
(15, 15, 15): cohesive energy = -6.2091123 eV
(18, 18, 18): cohesive energy = -6.21007962 eV

Using only 1 k-point for the bulk energy is a terrible approximation! It takes at least a \(6 \times 6 \times 6\) grid to get the total energy converged to better than 10 meV. Note that we do not need to check the k-point convergence of the atomic state, because it is surrounded by vacuum on all sides, so there should not be any dispersion in the bands. We will examine the magnetic state next.

],
  magmom=1)],
  cell=(7, 8, 9))

calc = Vasp('bulk/atomic-rh-sp',
            xc='PBE',
            encut=350,
            kpts=[1, 1, 1],
            ispin=2,
            atoms=atoms)
atomic_energy = atoms.get_potential_energy()

calc.stop_if(None in [atomic_energy, bulk_energy])

cohesive_energy = atomic_energy - bulk_energy
print 'The cohesive energy is {0:1.3f} eV'.format(cohesive_energy)

The cohesive energy is 6.127 eV

Again, the value in Kittel kittel is 5.75 eV, which is close to this value. Finally, it is also possible there is a lower energy non-spherical atomic state; we did not check that at all (see Estimating triplet oxygen dissociation energy with low symmetry).
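A convergence check like the one above is easy to automate. This sketch (a hypothetical helper, not part of Vasp) uses the cohesive energies printed above and finds the smallest grid whose energy is within a tolerance of the densest grid:

```python
def first_converged(kpts, energies, tol=0.01):
    """Return the smallest k-point grid whose energy is within tol (eV)
    of the densest grid in the list, or None if none qualifies."""
    reference = energies[-1]
    for k, e in zip(kpts, energies):
        if abs(e - reference) < tol:
            return k
    return None

kpts = [3, 4, 6, 9, 12, 15, 18]
energies = [-4.76129426, -6.17915613, -6.20654198, -6.20118094,
            -6.20897225, -6.2091123, -6.21007962]

# converged to within 10 meV of the 18x18x18 result
print(first_converged(kpts, energies, tol=0.01))  # 6
```

Consistent with the discussion above, a 6 × 6 × 6 grid is the first one converged to better than 10 meV.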
4.6 Elastic properties

See this reference PhysRevB.65.104104. We seek the elastic constant tensor that relates stress (σ) and strain (ε) via \(\sigma = c \epsilon\). The stress and strain are six-component vectors, so \(c\) is a 6 × 6 symmetric matrix.

4.6.1 Fe elastic properties

As with molecular vibrations, we need a ground-state geometry. Let us get one for BCC Fe.

from vasp import Vasp
from ase.lattice.cubic import BodyCenteredCubic

atoms = BodyCenteredCubic(symbol='Fe')
for atom in atoms:
    atom.magmom = 3.0

from vasp.vasprc import VASPRC
VASPRC['mode'] = None

import logging
log = logging.getLogger('Vasp')
#log.setLevel(logging.DEBUG)

calc = Vasp('bulk/Fe-bulk',
            xc='PBE',
            kpts=[6, 6, 6],
            encut=350,
            ispin=2,
            isif=3,
            nsw=30,
            ibrion=1,
            atoms=atoms)
print(atoms.get_potential_energy())
print(atoms.get_stress())

-15.53472773
[ 0.00031141  0.00031141  0.00031141 -0.         -0.         -0.        ]

Ok, now with a relaxed geometry in hand, we proceed with the elastic constants. This is accomplished with IBRION=6 and ISIF ≥ 3 in VASP. See this reference (from the VASP page) Y. Le Page and P. Saxe, Phys. Rev. B 65, 104104 (2002) doi:10.1103/PhysRevB.65.104104 for more details.

from vasp import Vasp

calc = Vasp('bulk/Fe-bulk')
calc.clone('bulk/Fe-elastic')

calc.set(ibrion=6,
         isif=3,      # gets elastic constants
         potim=0.05,  # displacements
         nsw=1,
         nfree=2)
print(calc.potential_energy)

-15.52764065

Now, the results are written to the OUTCAR file. Actually, three sets of moduli are written out: 1) the elastic tensor for rigid ions, 2) the contribution from allowing the atoms to relax, and 3) the total elastic moduli, all in kBar.
The OUTCAR excerpt below shows the ionic relaxation contribution (the SYMMETRIZED and TOTAL blocks appear just before and after it in the file); for this calculation it is zero:

 ELASTIC MODULI CONTR FROM IONIC RELAXATION (kBar)
 Direction    XX          YY          ZZ          XY          YZ          ZX
 --------------------------------------------------------------------------------
 XX           0.0000      0.0000      0.0000      0.0000      0.0000      0.0000
 YY           0.0000      0.0000      0.0000      0.0000      0.0000      0.0000
 ZZ           0.0000      0.0000      0.0000      0.0000      0.0000      0.0000
 XY           0.0000      0.0000      0.0000      0.0000      0.0000      0.0000
 YZ           0.0000      0.0000      0.0000      0.0000      0.0000      0.0000
 ZX           0.0000      0.0000      0.0000      0.0000      0.0000      0.0000
 --------------------------------------------------------------------------------

Let us write a small script to extract the total elastic moduli from the OUTCAR file. First we find the line where the total elastic moduli start, then take the six lines that begin three lines after that. Finally, we parse out the matrix elements and cast them as floats.

import numpy as np

EM = []
with open('bulk/Fe-elastic/OUTCAR') as f:
    lines = f.readlines()
    for i, line in enumerate(lines):
        if line.startswith(' TOTAL ELASTIC MODULI (kBar)'):
            j = i + 3
            data = lines[j:j + 6]
            break

for line in data:
    EM += [[float(x) for x in line.split()[1:]]]

print(np.array(EM))

[[ 1125.1405  3546.8135  3546.8135     0.         0.         0.    ]
 [ 3546.8135  1125.1405  3546.8135     0.         0.         0.    ]
 [ 3546.8135  3546.8135  1125.1405     0.         0.         0.    ]
 [    0.         0.         0.      1740.2372     0.         0.    ]
 [    0.         0.         0.         0.      1740.2372     0.    ]
 [    0.         0.         0.         0.         0.      1740.2372]]

Fe is in a BCC crystal structure, which is high in symmetry. Consequently, many of the elements in the matrix are equal to zero. See for a lot of detail about Fe-Ni alloys and general theory about elastic constants. In the next section, we show how the code above is integrated into Vasp.

4.6.2 Al elastic properties

First, the relaxed geometry.
from vasp import Vasp
from ase.lattice.cubic import FaceCenteredCubic

atoms = FaceCenteredCubic(symbol='Al')

calc = Vasp('bulk/Al-bulk',
            xc='PBE',
            kpts=[12, 12, 12],
            encut=350,
            prec='High',
            isif=3,
            nsw=30,
            ibrion=1,
            atoms=atoms)
print(calc.potential_energy)

-14.97511793

Ok, now with a relaxed geometry at hand, we proceed with the elastic constants. This is accomplished with IBRION=6 and ISIF ≥ 3 in VASP.

from vasp import Vasp

calc = Vasp('bulk/Al-bulk')
calc.clone('bulk/Al-elastic')

calc.set(ibrion=6,
         isif=3,       # gets elastic constants
         potim=0.015,  # displacements
         nsw=1,
         nfree=2)
calc.wait(abort=True)

EM = calc.get_elastic_moduli()
print(EM)

c11 = EM[0, 0]
c12 = EM[0, 1]
B = (c11 + 2 * c12) / 3.0
print(B)

[[ 110.17099   59.54652   59.54652    0.         0.         0.     ]
 [  59.54652  110.17099   59.54652    0.         0.         0.     ]
 [  59.54652   59.54652  110.17099    0.         0.         0.     ]
 [   0.         0.         0.        11.52331    0.         0.     ]
 [   0.         0.         0.         0.        11.52331    0.     ]
 [   0.         0.         0.         0.         0.        11.52331 ]]

76.4213433333

This example shows the basic mechanics of getting the elastic constants. The C44 constant above is too low, and we probably need to check these constants for convergence with respect to k-points, planewave cutoff, and maybe the value of POTIM.

4.6.3 Manual calculation of elastic constants

It is possible to manually calculate single elastic constants; you just need to know which strain corresponds to the elastic constant. For the C11 elastic constant in a cubic system, we simply strain the cell along the x-axis, and then evaluate the second derivative at the minimum to calculate C11 like this.
\(C_{11} = \frac{1}{V_C}\frac{\partial^2 E^{tot}}{\partial \delta^2}\)

from vasp import Vasp
from ase.lattice.cubic import FaceCenteredCubic
import numpy as np
import matplotlib.pyplot as plt

DELTAS = np.linspace(-0.05, 0.05, 5)

calcs = []
volumes = []
for delta in DELTAS:
    atoms = FaceCenteredCubic(symbol='Al')
    cell = atoms.cell
    T = np.array([[1 + delta, 0, 0],
                  [0, 1, 0],
                  [0, 0, 1]])
    newcell = np.dot(cell, T)
    atoms.set_cell(newcell, scale_atoms=True)
    volumes += [atoms.get_volume()]
    calcs += [Vasp('bulk/Al-c11-{}'.format(delta),
                   xc='pbe',
                   kpts=[12, 12, 12],
                   encut=350,
                   atoms=atoms)]

Vasp.run()
energies = [calc.potential_energy for calc in calcs]

# fit a parabola
eos = np.polyfit(DELTAS, energies, 2)

# first derivative
d_eos = np.polyder(eos)
print(np.roots(d_eos))

xfit = np.linspace(min(DELTAS), max(DELTAS))
yfit = np.polyval(eos, xfit)

plt.plot(DELTAS, energies, 'bo', xfit, yfit, 'b-')
plt.xlabel('$\delta$')
plt.ylabel('Energy (eV)')
plt.savefig('images/Al-c11.png')

[ 0.00727102]

4.7 Bulk thermodynamics

We can predict temperature-dependent thermodynamic properties of bulk materials without too much effort. As with the thermochemical properties of ideal gases, we must use some simple models that we parameterize by DFT. Here we follow the example in Reference Shang20101040 for computing the thermal coefficient of expansion, heat capacity, enthalpy and entropy of Ni as a function of temperature. We start by computing the equation of state for fcc Ni.
from vasp import Vasp
from ase import Atom, Atoms
import numpy as np

# fcc
LC = [3.5, 3.55, 3.6, 3.65, 3.7, 3.75]

volumes, energies = [], []
for a in LC:
    atoms = Atoms([Atom('Ni', (0, 0, 0), magmom=2.5)],
                  cell=0.5 * a * np.array([[1.0, 1.0, 0.0],
                                           [0.0, 1.0, 1.0],
                                           [1.0, 0.0, 1.0]]))
    calc = Vasp('bulk/Ni-{0}'.format(a),
                xc='PBE',
                encut=350,
                kpts=[12, 12, 12],
                ispin=2,
                atoms=atoms)
    energies.append(calc.potential_energy)
    volumes.append(atoms.get_volume())

calc.stop_if(None in energies)

import matplotlib.pyplot as plt
plt.plot(LC, energies)
plt.xlabel('Lattice constant ($\AA$)')
plt.ylabel('Total energy (eV)')
plt.savefig('images/Ni-fcc.png')

4.8 Effect of pressure on phase stability

So far we have only considered relative stability at a pressure of 0 Pa. We now consider the relative stability of two phases under pressure. We will consider TiO\(_2\) in the rutile and anatase phases. The pressure is defined by:

\(P = -\left(\frac{\partial E}{\partial V}\right)_T\)

So if we have an equation of state \(E(V)\) we can calculate the pressure at any volume, or alternatively, given a pressure, compute the volume. Pressure can affect the energy of two phases differently, so that one may become stable under pressure. The condition where a phase transition occurs is when the pressure in the two phases is the same, which occurs at a common tangent. To show this, we need \(E_{rutile}(V)\) and \(E_{anatase}(V)\).
# run the rutile calculations
from vasp import Vasp
from ase import Atom, Atoms
import numpy as np

B = 'Ti'; X = 'O'; a = 4.59; c = 2.958; u = 0.305

'''
create a rutile structure from the lattice vectors at
spacegroup: 136 P4_2/mnm
'''

a1 = a * np.array([1.0, 0.0, 0.0])
a2 = a * np.array([0.0, 1.0, 0.0])
a3 = c * np.array([0.0, 0.0, 1.0])

atoms = Atoms([Atom(B, [0., 0., 0.]),
               Atom(B, 0.5 * a1 + 0.5 * a2 + 0.5 * a3),
               Atom(X, u * a1 + u * a2),
               Atom(X, -u * a1 - u * a2),
               Atom(X, (0.5 + u) * a1 + (0.5 - u) * a2 + 0.5 * a3),
               Atom(X, (0.5 - u) * a1 + (0.5 + u) * a2 + 0.5 * a3)],
              cell=[a1, a2, a3])

nTiO2 = len(atoms) / 3.
v0 = atoms.get_volume()
cell0 = atoms.get_cell()

volumes = [28., 30., 32., 34., 36.]  # vol of one TiO2

for v in volumes:
    atoms.set_cell(cell0 * ((nTiO2 * v / v0)**(1. / 3.)),
                   scale_atoms=True)
    calc = Vasp('bulk/TiO2/rutile/rutile-{0}'.format(v),
                encut=350,
                kpts=[6, 6, 6],
                xc='PBE',
                ismear=0,
                sigma=0.001,
                isif=2,
                ibrion=2,
                nsw=20,
                atoms=atoms)
    calc.update()

# run the anatase calculations
import numpy as np
from vasp import Vasp
from ase import Atom, Atoms

B = 'Ti'; X = 'O'; a = 3.7842; c = 2 * 4.7573; z = 0.0831

a1 = a * np.array([1.0, 0.0, 0.0])
a2 = a * np.array([0.0, 1.0, 0.0])
a3 = np.array([0.5 * a, 0.5 * a, 0.5 * c])

atoms = Atoms([Atom(B, -0.125 * a1 + 0.625 * a2 + 0.25 * a3),
               Atom(B, 0.125 * a1 + 0.375 * a2 + 0.75 * a3),
               Atom(X, -z * a1 + (0.25 - z) * a2 + 2. * z * a3),
               Atom(X, -(0.25 + z) * a1 + (0.5 - z) * a2 + (0.5 + 2 * z) * a3),
               Atom(X, z * a1 - (0.25 - z) * a2 + (1 - 2 * z) * a3),
               Atom(X, (0.25 + z) * a1 + (0.5 + z) * a2 + (0.5 - 2 * z) * a3)],
              cell=[a1, a2, a3])

nTiO2 = len(atoms) / 3.
v0 = atoms.get_volume()
cell0 = atoms.get_cell()

volumes = [30., 33., 35., 37., 39.]  # vol of one TiO2

for v in volumes:
    atoms.set_cell(cell0 * ((nTiO2 * v / v0)**(1. / 3.)),
                   scale_atoms=True)
    calc = Vasp('bulk/TiO2/anatase/anatase-{0}'.format(v),
                encut=350,
                kpts=[6, 6, 6],
                xc='PBE',
                ismear=0,
                sigma=0.001,
                isif=2,
                ibrion=2,
                nsw=20,
                atoms=atoms)
    calc.update()

Now we will fit cubic polynomials to the data.
# fit cubic polynomials to E(V) for rutile and anatase
from vasp import Vasp
import matplotlib.pyplot as plt
import numpy as np

np.set_printoptions(precision=2)

# anatase equation of state
volumes = [30., 33., 35., 37., 39.]  # vol of one TiO2 formula unit
a_volumes, a_energies = [], []
for v in volumes:
    calc = Vasp('bulk/TiO2/anatase/anatase-{0}'.format(v))
    atoms = calc.get_atoms()
    nTiO2 = len(atoms) / 3.0
    a_volumes.append(atoms.get_volume() / nTiO2)
    a_energies.append(atoms.get_potential_energy() / nTiO2)

# rutile equation of state
volumes = [28., 30., 32., 34., 36.]  # vol of one TiO2
r_volumes, r_energies = [], []
for v in volumes:
    calc = Vasp('bulk/TiO2/rutile/rutile-{0}'.format(v))
    atoms = calc.get_atoms()
    nTiO2 = len(atoms) / 3.0
    r_volumes.append(atoms.get_volume() / nTiO2)
    r_energies.append(atoms.get_potential_energy() / nTiO2)

# cubic polynomial fit to equation of state E(V) = pars*[V^3 V^2 V^1 V^0]
apars = np.polyfit(a_volumes, a_energies, 3)
rpars = np.polyfit(r_volumes, r_energies, 3)

print('E_anatase(V) = {0:1.2f}*V^3 + {1:1.2f}*V^2 + {2:1.2f}*V + {3:1.2f}'.format(*apars))
print('E_rutile(V) = {0:1.2f}*V^3 + {1:1.2f}*V^2 + {2:1.2f}*V + {3:1.2f}'.format(*rpars))

print('anatase epars: {0!r}'.format(apars))
print('rutile epars: {0!r}'.format(rpars))

# get pressure parameters P(V) = -dE/dV
dapars = -np.polyder(apars)
drpars = -np.polyder(rpars)

print('anatase ppars: {0!r}'.format(dapars))
print('rutile ppars: {0!r}'.format(drpars))

print()
print('P_anatase(V) = {0:1.2f}*V^2 + {1:1.2f}*V + {2:1.2f}'.format(*dapars))
print('P_rutile(V) = {0:1.2f}*V^2 + {1:1.2f}*V + {2:1.2f}'.format(*drpars))

vfit = np.linspace(28, 40)

# plot the equations of state
plt.plot(a_volumes, a_energies, 'bo', label='Anatase')
plt.plot(vfit, np.polyval(apars, vfit), 'b-')
plt.plot(r_volumes, r_energies, 'gs', label='Rutile')
plt.plot(vfit, np.polyval(rpars, vfit), 'g-')
plt.xlabel('Volume ($\AA^3$/f.u.)')
plt.ylabel('Total energy (eV/f.u.)')
plt.legend()
plt.xlim([25, 40])
plt.ylim([-27, -26])
plt.savefig('imag

Figure 49: Equations of state (E(V)) for anatase and rutile TiO\(_2\).

To find the conditions where a phase transition occurs, we have to find the common tangent line between the rutile and anatase phases. In other words, we have to solve these two equations:

\((E_{anatase}(V_1) - E_{rutile}(V_2))/(V_1 - V_2) = P_{anatase}(V_1)\)

\((E_{anatase}(V_1) - E_{rutile}(V_2))/(V_1 - V_2) = P_{rutile}(V_2)\)

This is a nonlinear algebra problem. We use scipy.optimize.fsolve to solve it.

from ase.units import GPa
from numpy import array, linspace, polyval

# copied from the polynomial fit above
anatase_epars = array([-1.06049246e-03, 1.30279404e-01,
                       -5.23520055e+00, 4.25202869e+01])
rutile_epars = array([-1.24680208e-03, 1.42966536e-01,
                      -5.33239733e+00, 3.85903670e+01])

# polynomial fits for pressures
anatase_ppars = array([3.18147737e-03, -2.60558808e-01, 5.23520055e+00])
rutile_ppars = array([3.74040625e-03, -2.85933071e-01, 5.33239733e+00])

def func(V):
    V1 = V[0]  # rutile volume
    V2 = V[1]  # anatase volume

    E_rutile = polyval(rutile_epars, V1)
    E_anatase = polyval(anatase_epars, V2)

    P_rutile = polyval(rutile_ppars, V1)
    P_anatase = polyval(anatase_ppars, V2)

    return [(E_anatase - E_rutile) / (V1 - V2) - P_anatase,
            (E_anatase - E_rutile) / (V1 - V2) - P_rutile]

from scipy.optimize import fsolve
x0 = fsolve(func, [28, 34])
print('The solutions are at V = {0}'.format(x0))
print('Anatase pressure: {0} GPa'.format(polyval(anatase_ppars, x0[1]) / GPa))
print('Rutile pressure: {0} GPa'.format(polyval(rutile_ppars, x0[0]) / GPa))

# illustrate the common tangent
import matplotlib.pyplot as plt

vfit = linspace(28, 40)
plt.plot(vfit, polyval(anatase_epars, vfit), label='anatase')
plt.plot(vfit, polyval(rutile_epars, vfit), label='rutile')
plt.plot(x0, [polyval(rutile_epars, x0[0]),
              polyval(anatase_epars, x0[1])],
         'ko-', label='common tangent')
plt.legend()
plt.xlabel('Volume ($\AA^3$/f.u.)')
plt.ylabel('Total energy (eV/f.u.)')
plt.savefig('images/eos-common-tangent.png')

The
solutions are at V = [ 31.67490656  34.60893508]
Anatase pressure: 4.5249494236 GPa
Rutile pressure: 4.52494942374 GPa

At a pressure of 4.5 GPa, we expect that anatase will start converting into rutile. Along this common tangent, a mixture of the two phases is more stable than either pure phase.

Figure 50: Illustration of the common tangent that shows the pressure where anatase and rutile coexist before anatase converts to rutile. \label{fig:tio2-cotangent}

there is some controversy about the most stable phase. add discussion here.

4.9 Bulk reaction energies

4.9.1 Alloy formation energies

In this section we will consider how to calculate the formation energy of an fcc Cu-Pd alloy and how to use that information to discuss relative stabilities. The kinds of questions we can easily answer are:

- Is the formation of an alloy at a particular composition and structure energetically favorable?
- Given two alloy structures at the same composition, which one is more stable?
- Given a set of alloy structures at different compositions, which ones are stable with respect to phase separation?

Each of these questions is answered by calculating the formation energy of the alloy from the parent metals. Thus, we will need the total energies of fcc Cu and fcc Pd, so to get started we compute those first. Rather than compute a full equation of state for these, we will rely on the built-in unit cell optimization algorithm in VASP (ISIF=3).

4.9.1.1 Basic alloy formation energy

# get bulk Cu and Pd energies.
<<pure-metal-components>>

from vasp import Vasp
from ase import Atom, Atoms

atoms = Atoms([Atom('Cu', [0.000, 0.000, 0.000])],
              cell=[[1.818, 0.000, 1.818],
                    [1.818, 1.818, 0.000],
                    [0.000, 1.818, 1.818]])

cuc = Vasp('bulk/alloy/cu',
           xc='PBE',
           encut=350,
           kpts=[13, 13, 13],
           nbands=9,
           ibrion=2,
           isif=3,
           nsw=10,
           atoms=atoms)
cu = cuc.potential_energy

atoms = Atoms([Atom('Pd', [0.000, 0.000, 0.000])],
              cell=[[1.978, 0.000, 1.978],
                    [1.978, 1.978, 0.000],
                    [0.000, 1.978, 1.978]])

pd = Vasp('bulk/alloy/pd',
          xc='PBE',
          encut=350,
          kpts=[13, 13, 13],
          nbands=9,
          ibrion=2,
          isif=3,
          nsw=10,
          atoms=atoms).potential_energy

print('Cu energy = {0} eV'.format(cu))
print('Pd energy = {0} eV'.format(pd))

Cu energy = -3.73437194 eV
Pd energy = -5.22003433 eV

Note that the Pd energy is more negative than the Cu energy. This does not mean anything significant. We cannot say Pd is more stable than Cu; it is not as if Cu could transmute into Pd! Next, we will consider which of two structures with composition CuPd is more stable. The coordinates for these structures came from the research of the author. The approach is pretty general: you must identify the coordinates and unit cell of the candidate structure, and then run a calculation to find the optimized geometry and unit cell. This may take some work, as previously described in the multistep process for optimizing a bulk system. Here the geometry is pretty close to optimized, so we can use the VASP optimization routines. We consider two structures with composition CuPd.
from vasp import Vasp
from ase import Atom, Atoms

atoms = Atoms([Atom('Cu', [0.000, 0.000, 0.000]),
               Atom('Pd', [-1.652, 0.000, 2.039])],
              cell=[[0.000, -2.039, 2.039],
                    [0.000, 2.039, 2.039],
                    [-3.303, 0.000, 0.000]])

calc = Vasp('bulk/alloy/cupd-1',
            xc='PBE',
            encut=350,
            kpts=[12, 12, 8],
            nbands=17,
            ibrion=2,
            isif=3,
            nsw=10,
            atoms=atoms)
cupd1 = atoms.get_potential_energy()

atoms = Atoms([Atom('Cu', [-0.049, 0.049, 0.049]),
               Atom('Cu', [-11.170, 11.170, 11.170]),
               Atom('Pd', [-7.415, 7.415, 7.415]),
               Atom('Pd', [-3.804, 3.804, 3.804])],
              cell=[[-5.629, 3.701, 5.629],
                    [-3.701, 5.629, 5.629],
                    [-5.629, 5.629, 3.701]])

calc = Vasp('bulk/alloy/cupd-2',
            xc='PBE',
            encut=350,
            kpts=[8, 8, 8],
            nbands=34,
            ibrion=2,
            isif=3,
            nsw=10,
            atoms=atoms)
cupd2 = atoms.get_potential_energy()

print('cupd-1 = {0} eV'.format(cupd1))
print('cupd-2 = {0} eV'.format(cupd2))

cupd-1 = -9.17593835 eV
cupd-2 = -18.07779325 eV

Looking at these energies, you might be tempted to say cupd-2 is more stable than cupd-1 because its energy is much lower. This is wrong, however, because cupd-2 has twice as many atoms as cupd-1. We should compare normalized total energies, that is, the energy normalized per CuPd formula unit or, alternatively, per atom in the unit cell. It does not matter which, as long as we normalize consistently. It is conventional in alloy calculations to normalize by the number of atoms in the unit cell.

from vasp import Vasp

calc = Vasp('bulk/alloy/cupd-1')
atoms = calc.get_atoms()
e1 = atoms.get_potential_energy() / len(atoms)

calc = Vasp('bulk/alloy/cupd-2')
atoms = calc.get_atoms()
e2 = atoms.get_potential_energy() / len(atoms)

print('cupd-1: {0} eV/atom'.format(e1))
print('cupd-2: {0} eV/atom'.format(e2))

cupd-1: -4.587969175 eV/atom
cupd-2: -4.5194483125 eV/atom

After normalizing by the number of atoms, we can see that cupd-1 is the more stable structure.
However, we are looking at total energies, and we might ask: is cupd-1 more stable than an unreacted mixture of the parent compounds, fcc Cu and Pd? In other words, is the reaction Cu + Pd \(\rightarrow\) CuPd exothermic for the two configurations we examined? Below, we show some pretty general code that computes these formation energies and normalizes them by the number of atoms in the unit cell.

from vasp import Vasp

# bulk energy 1
calc = Vasp('bulk/alloy/cu')
atoms = calc.get_atoms()
cu = atoms.get_potential_energy() / len(atoms)

# bulk energy 2
calc = Vasp('bulk/alloy/pd')
atoms = calc.get_atoms()
pd = atoms.get_potential_energy() / len(atoms)

calc = Vasp('bulk/alloy/cupd-1')
atoms = calc.get_atoms()
e1 = atoms.get_potential_energy()

# subtract bulk energies off of each atom in cell
for atom in atoms:
    if atom.symbol == 'Cu':
        e1 -= cu
    else:
        e1 -= pd

e1 /= len(atoms)  # normalize by number of atoms in cell

calc = Vasp('bulk/alloy/cupd-2')
atoms = calc.get_atoms()
e2 = atoms.get_potential_energy()

for atom in atoms:
    if atom.symbol == 'Cu':
        e2 -= cu
    else:
        e2 -= pd

e2 /= len(atoms)

print('Delta Hf cupd-1 = {0:1.2f} eV/atom'.format(e1))
print('Delta Hf cupd-2 = {0:1.2f} eV/atom'.format(e2))

Delta Hf cupd-1 = -0.11 eV/atom
Delta Hf cupd-2 = -0.04 eV/atom

The answer is yes. Both structures are energetically more favorable than an equal-composition mixture of the parent metals. The heat of formation of both structures is exothermic, but the cupd-1 structure is more stable than the cupd-2 structure. This is shown conceptually in Figure fig:alloy1.

Figure 51: Conceptual picture of two alloys with exothermic formation energies. The dashed line represents a composition-weighted average energy of the parent metals. E4 and E3 are energies associated with two different alloy structures at the same composition. Both structures are more stable than a mixture of pure metals with the same composition, but E3 is more stable than E4.
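The bookkeeping above can be condensed into a small helper; this is our own sketch (the function name and structure are not part of Vasp), using the per-atom reference energies and total energies printed earlier:

```python
# Formation energy per atom of an alloy relative to pure-metal
# references. e_total: total energy of the alloy cell (eV);
# symbols: chemical symbols in the cell; reference: per-atom
# reference energies of the pure metals.
def formation_energy(e_total, symbols, reference):
    for s in symbols:
        e_total -= reference[s]
    return e_total / len(symbols)

ref = {'Cu': -3.73437194, 'Pd': -5.22003433}  # fcc energies from above
e1 = formation_energy(-9.17593835, ['Cu', 'Pd'], ref)
e2 = formation_energy(-18.07779325, ['Cu', 'Cu', 'Pd', 'Pd'], ref)
print('cupd-1: {0:1.2f} eV/atom, cupd-2: {1:1.2f} eV/atom'.format(e1, e2))
```

This reproduces the -0.11 and -0.04 eV/atom values printed by the longer script.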
We will now examine another structure at a different composition and its stability.

from vasp import Vasp
from ase import Atom, Atoms

# parent metals
atoms = Vasp('bulk/alloy/cu').get_atoms()
cu = atoms.get_potential_energy() / len(atoms)

atoms = Vasp('bulk/alloy/pd').get_atoms()
pd = atoms.get_potential_energy() / len(atoms)

atoms = Atoms([Atom('Cu', [-3.672, 3.672, 3.672]),
               Atom('Cu', [0.000, 0.000, 0.000]),
               Atom('Cu', [-10.821, 10.821, 10.821]),
               Atom('Pd', [-7.246, 7.246, 7.246])],
              cell=[[-5.464, 3.565, 5.464],
                    [-3.565, 5.464, 5.464],
                    [-5.464, 5.464, 3.565]])

calc = Vasp('bulk/alloy/cu3pd-1',
            xc='PBE',
            encut=350,
            kpts=[8, 8, 8],
            nbands=34,
            ibrion=2,
            isif=3,
            nsw=10,
            atoms=atoms)
e3 = atoms.get_potential_energy()

Vasp.wait(abort=True)

for atom in atoms:
    if atom.symbol == 'Cu':
        e3 -= cu
    else:
        e3 -= pd

e3 /= len(atoms)
print('Delta Hf cu3pd-1 = {0:1.2f} eV/atom'.format(e3))

Delta Hf cu3pd-1 = -0.02 eV/atom

The formation energy is slightly exothermic, which means the structure is more stable than a mixture of the parent metals. However, let us consider whether the structure is stable with respect to phase separation into pure Cu and the cupd-1 structure. We define the following quantities: \(H_{f,Cu} = 0.0\) eV/atom at \(x_0 = 0\), and \(H_{f,cupd-1} = -0.12\) eV/atom at \(x_3 = 0.5\). The composition-weighted average at \(x_{Pd} = 0.25\) is:

\(H_f = H_{f,Cu} + \frac{x_0 - x}{x_0 - x_3}(H_{f,cupd-1} - H_{f,Cu})\)

x0 = 0.0
x3 = 0.5
x = 0.25

Hf1 = 0.0
Hf3 = -0.12

print('Composition weighted average = {0} eV'.format(Hf1 + (x0 - x) / (x0 - x3) * (Hf3 - Hf1)))

Composition weighted average = -0.06 eV

We find the composition-weighted formation energy of a mixture of pure Cu and cupd-1 is more favorable than the formation energy of cu3pd-1. Therefore, we expect that structure to phase separate into a mixture of pure Cu and cupd-1. Schematically, what we are seeing is shown in Figure \ref{fig:alloy-phase-separation}.
Figure 52: Illustration of an alloy structure with an exothermic formation energy that is not stable with respect to phase separation. The solid line shows the composition-weighted average energy of a mixture of Cu and cupd-1. Since the energy of cu3pd-1 is above the solid line, it is less favorable than a mixture of Cu and cupd-1 with the same composition. \label{fig:alloy-phase-separation}

Finally, let us consider one more structure with the Cu3Pd stoichiometry.

from vasp import Vasp
from ase import Atom, Atoms

# parent metals
cu = Vasp('bulk/alloy/cu')
cu_e = cu.potential_energy / len(cu.get_atoms())

pd = Vasp('bulk/alloy/pd')
pd_e = pd.potential_energy / len(pd.get_atoms())

atoms = Atoms([Atom('Cu', [-1.867, 1.867, 0.000]),
               Atom('Cu', [0.000, 0.000, 0.000]),
               Atom('Cu', [0.000, 1.867, 1.867]),
               Atom('Pd', [-1.867, 0.000, 1.86])],
              cell=[[-3.735, 0.000, 0.000],
                    [0.000, 0.000, 3.735],
                    [0.000, 3.735, 0.000]])

calc = Vasp('bulk/alloy/cu3pd-2',
            xc='PBE',
            encut=350,
            kpts=[8, 8, 8],
            nbands=34,
            ibrion=2,
            isif=3,
            nsw=10,
            atoms=atoms)
e4 = atoms.get_potential_energy()

Vasp.wait(abort=True)

for atom in atoms:
    if atom.symbol == 'Cu':
        e4 -= cu_e
    else:
        e4 -= pd_e

e4 /= len(atoms)
print('Delta Hf cu3pd-2 = {0:1.2f} eV/atom'.format(e4))

Delta Hf cu3pd-2 = -0.10 eV/atom

This looks promising: the formation energy is much more favorable than cu3pd-1, and it is below the composition-weighted formation energy of -0.06 eV/atom. Consequently, we conclude that this structure will not phase separate into a mixture of Cu and CuPd. We cannot say, however, whether there is a more stable phase not yet considered, or whether it might phase separate into two other phases. We also note here that we have ignored a few other contributions to alloy stability. We have only considered the electronic energy contributions to the formation energy.
At temperatures above absolute zero there are additional contributions, including configurational and vibrational entropy, which may stabilize some structures more than others. Finally, our analysis is limited to comparisons of the structures computed on the fcc lattice. In fact, it is known that the CuPd alloy forms a bcc structure. We did not calculate that structure, so we cannot say whether it is more or less stable than the obvious fcc structure we found.

Figure 53: Illustration that cu3pd-2 is more stable than cu3pd-1, and that it is more stable than a composition-weighted mixture of Cu and cupd-1. The dotted line shows the composition-weighted average energy of a mixture of Cu and cupd-1. Since cu3pd-2 is below the dotted line, it is more stable than the phase-separated mixture. \label{fig:alloy-phase-separation-2}

The construction of alloy phase diagrams is difficult. You are always faced with the possibility that there is a phase you have not calculated that is more stable than the ones you did calculate. One approach is to use a tool that automates the discovery of relevant structures, such as the Alloy Theoretic Automated Toolkit (ATAT) vandeWalle2002539,vandeWalle2009266, which uses a cluster expansion methodology.

4.9.2 Metal oxide oxidation energies

We will consider here the reaction 2 Cu2O + O2 \(\rightleftharpoons\) 4 CuO. The reaction energy is: \(\Delta E = 4E_{CuO} - 2E_{Cu_2O} - E_{O_2}\). We need to compute the energy of each species.
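Before running the calculations, the bookkeeping can be sketched with a small helper; the function and the per-formula-unit energies here are placeholders for illustration (the real values are computed in the following sections):

```python
# Generic reaction energy from per-formula-unit energies and
# stoichiometric coefficients (products positive, reactants negative).
def reaction_energy(coeffs, energies):
    return sum(c * energies[s] for s, c in coeffs.items())

# 2 Cu2O + O2 -> 4 CuO
coeffs = {'CuO': 4, 'Cu2O': -2, 'O2': -1}

# Illustrative per-formula-unit energies in eV; not the computed values.
E = {'CuO': -9.81, 'Cu2O': -13.64, 'O2': -9.84}

print('{0:1.2f} eV'.format(reaction_energy(coeffs, E)))
```

The only subtlety is remembering to normalize each total energy to a formula unit before combining them, which the scripts below do explicitly.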
4.9.2.1 Cu2O calculation

# run Cu2O calculation
from vasp import Vasp
from ase import Atom, Atoms

a = 4.27

atoms = Atoms([Atom('Cu', [0, 0, 0]),
               Atom('Cu', [0.5, 0.5, 0.0]),
               Atom('Cu', [0.5, 0.0, 0.5]),
               Atom('Cu', [0.0, 0.5, 0.5]),
               Atom('O', [0.25, 0.25, 0.25]),
               Atom('O', [0.75, 0.75, 0.75])])
atoms.set_cell((a, a, a), scale_atoms=True)

calc = Vasp('bulk/Cu2O',
            encut=400,
            kpts=[8, 8, 8],
            ibrion=2,
            isif=3,
            nsw=30,
            xc='PBE',
            atoms=atoms)
print(atoms.get_potential_energy())
print(atoms.get_stress())

-27.27469148
[-0.01018402 -0.01018402 -0.01018402 -0.         -0.         -0.        ]

4.9.2.2 CuO calculation

# run CuO calculation
from vasp import Vasp
from ase import Atom, Atoms
import numpy as np

# CuO lattice parameters
a = 4.6837
b = 3.4226
c = 5.1288
beta = 99.54 / 180 * np.pi
y = 0.5819

a1 = np.array([0.5 * a, -0.5 * b, 0.0])
a2 = np.array([0.5 * a, 0.5 * b, 0.0])
a3 = np.array([c * np.cos(beta), 0.0, c * np.sin(beta)])

atoms = Atoms([Atom('Cu', 0.5 * a2),
               Atom('Cu', 0.5 * a1 + 0.5 * a3),
               Atom('O', -y * a1 + y * a2 + 0.25 * a3),
               Atom('O', y * a1 - y * a2 - 0.25 * a3)],
              cell=(a1, a2, a3))

calc = Vasp('bulk/CuO',
            encut=400,
            kpts=[8, 8, 8],
            ibrion=2,
            isif=3,
            nsw=30,
            xc='PBE',
            atoms=atoms)
print(atoms.get_potential_energy())

-19.61568557

4.9.2.3 TODO Reaction energy calculation

from vasp import Vasp

# don't forget to normalize your total energy to a formula unit. Cu2O
# has 3 atoms, so the number of formula units in an atoms is
# len(atoms)/3.
calc = Vasp('bulk/Cu2O')
atoms1 = calc.get_atoms()
cu2o_energy = atoms1.get_potential_energy()

calc = Vasp('bulk/CuO')
atoms2 = calc.get_atoms()
cuo_energy = atoms2.get_potential_energy()

# make sure to use the same cutoff energy for the O2 molecule!
calc = Vasp('molecules/O2-sp-triplet-400')
atoms3 = calc.get_atoms()
o2_energy = atoms3.get_potential_energy()

calc.stop_if(None in [cu2o_energy, cuo_energy, o2_energy])

cu2o_energy /= (len(atoms1) / 3)  # note integer math
cuo_energy /= (len(atoms2) / 2)

rxn_energy = 4.0 * cuo_energy - o2_energy - 2.0 * cu2o_energy
print('Reaction energy = {0} eV'.format(rxn_energy))

Reaction energy = -2.11600154 eV

This is the reaction energy for 2 Cu2O + O2 \(\rightarrow\) 4 CuO. In PhysRevB.73.195107, the experimental reaction energy is estimated to be about -3.14 eV. There are a few reasons why our number does not agree with the experimental reaction energy. One reason is related to errors in the O2 dissociation energy, and another is related to localization of electrons in the Cu 3\(d\) orbitals PhysRevB.73.195107. The first error, the incorrect O2 dissociation energy, is a systematic error that can be corrected empirically PhysRevB.73.195107. Fixing the second error requires the application of DFT+U (see DFT+U).

The heat of reaction is reported to be 1000 J/g product for the reaction 2 CuO \(\rightleftharpoons\) Cu2O + 1/2 O2.

from ase import Atoms

atoms = Atoms('Cu2O')
MW = atoms.get_masses().sum()

H = 1.  # kJ/g

# convert to eV
print('rxn energy = {0:1.1f} eV'.format(-2 * H * MW / 96.4))

rxn energy = -3.0 eV

This is pretty close to the value in PhysRevB.73.195107 and might need a temperature correction to get agreement at 298 K.

4.10 Bulk density of states

The density of states refers to the number of electronic states in a particular energy range. The solution to Eq. \eqref{eq:KS} yields a set of Kohn-Sham (K-S) orbitals and an associated set of eigenvalues that correspond to the energies of these orbitals, neither of which has any known directly observable meaning RevModPhys.71.1253. The sum of the squared K-S orbitals, however, is equal to the electron density (Eq. \eqref{eq:density}), and the sum of the eigenvalues is a significant part of the total energy (Eq.
\eqref{eq:dftEnergy}). Thus, it seems reasonable to suppose these quantities have other significant relationships to physical observables. Perdew et al. showed that the highest occupied eigenvalue is equal to the ionization energy of a system within an exact density functional theory perdew1982:elect-kohn-sham, but their interpretation has been vigorously debated in the literature kleinman1997:signif-kohn-sham,perdew1997,kleinman1997:reply-commen-kohn-sham, and is only true for the exact exchange/correlation functional, not the approximate ones used in practice koch2001. Stowasser and Hoffmann discussed an approach to using the K-S orbitals in more traditional molecular orbital interpretations, but the results were primarily qualitative stowasser1999:what-kohn-sham. More recently, a DFT analog of Koopmans' theorem has been developed that formally identifies the eigenvalues with vertical ionization potentials, which can be measured with photoelectron spectroscopy gritsenko2002. Despite the arguments against ascribing physical meaning to the K-S orbitals and eigenvalues, it has become fairly standard, especially for solids, to use them to calculate the density of states (DOS) jones1989 [Sec. VI. B]. This has been found to yield reasonable results for the valence bands in metals, but poor results for tightly bound orbitals and band gaps perdew1982:elect-kohn-sham. A highly technical discussion of this issue can be found in Ref. perdew1981:self.

The density of states can be calculated by a sum over the k-points seitsonen2000:phd:

\begin{equation}\label{eq:dos}
\rho(\epsilon) = \sum_\mathbf{\mathrm{k}} \omega_\mathbf{\mathrm{k}} \sum_i \beta(\epsilon - \epsilon_{i\mathbf{\mathrm{k}}})
\end{equation}

where \(\omega_\mathbf{\mathrm{k}}\) is the weight associated with the k-point, and \(\beta\) is a broadening function, typically a gaussian function, to account for the finite number of k-points used in the calculations.
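Equation \eqref{eq:dos} maps directly onto numpy broadcasting; here is a self-contained sketch with made-up k-point weights and eigenvalues (in a real calculation these come from the calculator, as shown below):

```python
import numpy as np

# Gaussian-broadened DOS evaluated with broadcasting instead of loops.
# The weights and eigenvalues here are illustrative only.
width = 0.5
wk = np.array([0.25, 0.75])                 # k-point weights
e_kn = np.array([[-2.0, 0.5],               # eigenvalues per k-point (eV)
                 [-1.5, 1.0]])
energies = np.linspace(-4.5, 3.5, 400)

# shape (npoints, nkpts, nbands), then sum over k-points and bands
x = (energies[:, None, None] - e_kn[None, :, :]) / width
dos = (wk[None, :, None] * np.exp(-x**2)
       / (np.sqrt(np.pi) * width)).sum(axis=(1, 2))

print(dos.shape)
```

Each gaussian integrates to one, so the integrated DOS recovers the weighted band count; the broadcast array replaces the triple loop used in the script that follows.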
The amount of broadening is arbitrary, and should tend to zero as the number of k-points approaches infinity.

from vasp import Vasp
import numpy as np

npoints = 200
width = 0.5

def gaussian(energies, eik):
    x = (energies - eik) / width
    return np.exp(-x**2) / np.sqrt(np.pi) / width

calc = Vasp('bulk/pd-dos')

# kpt weights
wk = calc.get_k_point_weights()

# for each k-point there are a series of eigenvalues
# here we get all the eigenvalues for each k-point
e_kn = []
for i, k in enumerate(wk):
    e_kn.append(calc.get_eigenvalues(kpt=i))

e_kn = np.array(e_kn) - calc.get_fermi_level()

# these are the energies we want to evaluate the dos at
energies = np.linspace(e_kn.min(), e_kn.max(), npoints)

# this is where we build up the dos
dos = np.zeros(npoints)

for j in range(npoints):
    for k in range(len(wk)):  # loop over all kpoints
        for i in range(len(e_kn[k])):  # loop over eigenvalues in each k
            dos[j] += wk[k] * gaussian(energies[j], e_kn[k][i])

import matplotlib.pyplot as plt
plt.plot(energies, dos)
plt.savefig('images/manual-dos.png')
plt.show()

Figure 54: Density of states.

Here is a more convenient way to compute the DOS using ase.dft.

from vasp import Vasp
import matplotlib.pyplot as plt
from ase.dft import DOS

calc = Vasp('bulk/pd-dos')

dos = DOS(calc, width=0.2)
d = dos.get_dos()
e = dos.get_energies()

plt.plot(e, d)
plt.xlabel('energy (eV)')
plt.ylabel('DOS')
plt.savefig('images/pd-dos.png')

Figure 55: Total DOS for bulk Pd.

This DOS looks roughly as you would expect. The peak between -5 and 0 eV is the Pd d-band. The VASP manual recommends a final run be made with ISMEAR=-5, which uses the tetrahedron method with Blöchl corrections.
from vasp import Vasp
from ase.dft import DOS

calc = Vasp('bulk/pd-dos')
calc.clone('bulk/pd-dos-ismear-5')

bulk = calc.get_atoms()
calc.set(ismear=-5)
bulk.get_potential_energy()

dos = DOS(calc, width=0.2)
d = dos.get_dos()
e = dos.get_energies()

import pylab as plt
plt.plot(e, d)
plt.xlabel('energy [eV]')
plt.ylabel('DOS')
plt.savefig('images/pd-dos-ismear-5.png')

This is not notably different to me.

Figure 56: Total DOS for Pd computed with ISMEAR=-5

We can test for convergence of the DOS. The k-points are most important.

from ase import Atoms, Atom
from vasp import Vasp
Vasp.vasprc(mode=None)
# Vasp.log.setLevel(10)
import matplotlib.pyplot as plt
import numpy as np
from ase.dft import DOS

a = 3.9  # approximate lattice constant
b = a / 2.
bulk = Atoms([Atom('Pd', (0.0, 0.0, 0.0))],
             cell=[(0, b, b), (b, 0, b), (b, b, 0)])

kpts = [8, 10, 12, 14, 16, 18, 20]

calcs = [Vasp('bulk/pd-dos-k{0}-ismear-5'.format(k),
              encut=300,
              xc='PBE',
              kpts=[k, k, k],
              atoms=bulk) for k in kpts]

Vasp.wait(abort=True)

# this runs the calculations
for k, calc in zip(kpts, calcs):
    if calc.potential_energy is not None:
        dos = DOS(calc, width=0.2)
        d = dos.get_dos() + k / 4.0  # offset each curve for visibility
        e = dos.get_energies()
        plt.plot(e, d, label='k={0}'.format(k))

plt.xlabel('energy (eV)')
plt.ylabel('DOS')
plt.legend()
plt.savefig('images/pd-dos-k-convergence-ismear-5.png')
plt.show()

from vasp import Vasp
from ase.dft import DOS

# This seems very slow...
calc = Vasp('bulk/pd-dos-k20-ismear-5')
print DOS(calc, width=0.2)

<ase.dft.dos.DOS instance at 0x168a1ea8>

Figure 57: Convergence of the total DOS with k-points

4.11 Atom projected density of states

One major disadvantage of a planewave basis set is that it is difficult to relate the completely delocalized planewaves to localized phenomena such as bonding. Much insight into bonding has been gained by atomic/molecular orbital theory, which has carried over to the solid-state arena RevModPhys.60.601.
Consequently, several schemes have been developed to project the one-electron Kohn-Sham wave functions onto atomic wave functions sanchez-portal1995:projec,segall1996:popul,segall1996:popul-mp. In VASP, the one-electron wave functions can be projected onto spherical harmonic orbitals. The radial component of the atomic orbitals extends to infinity. In a solid, this means that the projection on one atom may overlap with the projection on a neighboring atom, resulting in double counting of electrons. Consequently, a cutoff radius was introduced, beyond which no contributions are included.

It is not obvious what the best cutoff radius is. If the radius is too small, it might not capture all of the electrons associated with the atom. However, if it is too large, it may include electrons from neighboring atoms. One might want to use different cutoff radii for different atoms, which have different sizes. Furthermore, the ideal cutoff radius for an atom may change in different environments, thus it would require an iterative procedure to determine it. This difficulty arises because the orbital-band occupations are not observable; thus how the electrons are divided up between atoms is arbitrary and, as will be seen later, is sensitive to the cutoff radius (and in other DFT implementations, the basis set). However, Mulliken orbital populations have been used successfully for many years to examine the qualitative differences between similar systems, and that is precisely what these quantities are used for here. Thus, a discussion of the analysis and results is warranted.

The s and p states in a metal are typically delocalized in space and more like free-electrons, whereas the d-orbitals are fairly localized in space and have been treated successfully with tight-binding theories such as extended H\"uckel theory RevModPhys.60.601, and linear muffin tin orbital theory ruban1997:surfac.
Consequently, the remaining discussion will be focused on the properties of the projected d-states. In this example, we consider how to get the atom-projected density of states (ADOS). We are interested in properties of the $d$-band on Pd, such as the $d$-band center and $d$-band width. You must set the RWIGS tag to get ADOS; these are the Wigner-Seitz radii for each atom.

By integrating the projected d-band up to the Fermi level, the d-band filling can be determined. It is not obvious what the electron count in the d-band should be for an atom in a metal. For a gas-phase, neutral metal atom in the ground state, however, the d-orbital electron count is well defined, so it will be used as an initial reference point for comparison kittel.

A powerful method for characterizing distributions is to examine various moments of the distribution (see Chapter 4 in Ref. cottrell1988 and Chapter 6 in Refs. ducastelle1991 and pettifor1992:elect-theor-alloy-desig). The \(n^{th}\) order moment, \(\mu_n\), of a distribution of states \(\rho(\epsilon)\) with respect to a reference \(\epsilon_o\) is defined by\begin{equation} \mu_n = \frac{\int_{-\infty}^\infty \epsilon^n \rho(\epsilon-\epsilon_o)d\epsilon} {\int_{-\infty}^\infty \rho(\epsilon-\epsilon_o)d\epsilon} \end{equation} In this work, the reference energy is always the Fermi level. The zeroth moment is just the total number of states; in this case it will be normalized to unity. The first moment is the average energy of the distribution, analogous to the center of mass for a mass density distribution. The second moment is the mean squared width of the distribution. The third moment is a measure of skewness and the fourth moment is related to kurtosis, but these moments are rarely used, and only the first and second moments are considered in this work.

It is important to note that these projected densities of states are not physical observables. They are the wavefunctions projected onto atomic orbitals.
For some situations this makes sense, e.g. the \(d\) orbitals are fairly localized and reasonably approximated by atomic orbitals. The \(s\) valence orbitals in a metal, in contrast, are almost totally delocalized. Depending on the cutoff radius (RWIGS) you choose, you can see very different ADOS.

from vasp import Vasp
import numpy as np
import matplotlib.pyplot as plt
from ase import Atoms, Atom

# fcc Pd in the primitive cell, as in the earlier DOS examples
a = 3.9  # approximate lattice constant
b = a / 2.
bulk = Atoms([Atom('Pd', (0.0, 0.0, 0.0))],
             cell=[(0, b, b), (b, 0, b), (b, b, 0)])

calc = Vasp('bulk/pd-ados',
            encut=300,
            xc='PBE',
            lreal=False,
            rwigs={'Pd': 1.5},  # wigner-seitz radii for ados
            kpts=[8, 8, 8],
            atoms=bulk)

# this runs the calculation
calc.wait(abort=True)

# now get results
energies, ados = calc.get_ados(0, 'd')

# we will select energies in the range of -10, 5
ind = (energies < 5) & (energies > -10)
energies = energies[ind]
dos = ados[ind]

Nstates = np.trapz(dos, energies)

occupied = energies <= 0.0
N_occupied_states = np.trapz(dos[occupied], energies[occupied])

# first moment
ed = np.trapz(energies * dos, energies) / Nstates

# second moment
wd2 = np.trapz(energies**2 * dos, energies) / Nstates

print 'Total # states = {0:1.2f}'.format(Nstates)
print 'number of occupied states = {0:1.2f}'.format(N_occupied_states)
print 'd-band center = {0:1.2f} eV'.format(ed)
print 'd-band width = {0:1.2f} eV'.format(np.sqrt(wd2))

# plot the d-band
plt.plot(energies, dos, label='$d$-orbitals')

# plot the occupied states in shaded gray
plt.fill_between(x=energies[occupied],
                 y1=dos[occupied],
                 y2=np.zeros(dos[occupied].shape),
                 color='gray', alpha=0.25)

plt.xlabel('$E - E_f$ (eV)')
plt.ylabel('DOS (arbitrary units)')
plt.savefig('images/pd-ados.png')

Total # states = 9.29
number of occupied states = 7.95
d-band center = -1.98 eV
d-band width = 2.71 eV

Figure 58: Atom projected $d$-band for bulk Pd. The shaded area corresponds to the occupied states below the Fermi level.
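The d-band center and width printed above are just the first and second moments evaluated with numpy.trapz. A synthetic check (the Gaussian shape and its parameters are made up for illustration, not taken from a calculation) shows the moment integrals recover known values:

```python
import numpy as np

# synthetic "d-band": Gaussian centered at -2.0 eV with sigma = 1.5 eV
center, sigma = -2.0, 1.5
e = np.linspace(-12.0, 8.0, 4001)
rho = np.exp(-(e - center)**2 / (2 * sigma**2))

norm = np.trapz(rho, e)                 # zeroth moment (normalization)
mu1 = np.trapz(e * rho, e) / norm       # first moment: band center
mu2 = np.trapz(e**2 * rho, e) / norm    # second moment about e = 0

print('center = {0:.2f} eV'.format(mu1))                       # -2.00
print('rms width = {0:.2f} eV'.format(np.sqrt(mu2 - mu1**2)))  # 1.50
```

Note that the Pd example reports \(\sqrt{\mu_2}\), i.e. the second moment taken about the Fermi level; the root-mean-square width about the band center is \(\sqrt{\mu_2 - \mu_1^2}\).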
4.11.1 Effect of RWIGS on ADOS

Here we examine the effect of changing RWIGS on the number of counted electrons, and properties of the d-band moments.

from vasp import Vasp
from ase.calculators.vasp import VaspDos
import numpy as np
import matplotlib.pyplot as plt

RWIGS = [1.0, 1.1, 1.25, 1.5, 2.0, 2.5, 3.0, 4.0, 5.0]

ED, WD, N = [], [], []

for rwigs in RWIGS:
    calc = Vasp('bulk/pd-ados')
    calc.clone('bulk/pd-ados-rwigs-{0}'.format(rwigs))
    calc.set(rwigs={'Pd': rwigs})
    if calc.potential_energy is None:
        continue

    # now get results
    ados = VaspDos(efermi=calc.get_fermi_level())
    energies = ados.energy
    dos = ados.site_dos(0, 'd')

    # we will select energies in the range of -10, 5
    ind = (energies < 5) & (energies > -10)
    energies = energies[ind]
    dos = dos[ind]

    Nstates = np.trapz(dos, energies)
    occupied = energies <= 0.0
    N_occupied_states = np.trapz(dos[occupied], energies[occupied])
    ed = np.trapz(energies * dos, energies) / np.trapz(dos, energies)
    wd2 = np.trapz(energies**2 * dos, energies) / np.trapz(dos, energies)

    N.append(N_occupied_states)
    ED.append(ed)
    WD.append(wd2**0.5)

plt.plot(RWIGS, N, 'bo', label='N. occupied states')
plt.legend(loc='best')
plt.xlabel('RWIGS ($\AA$)')
plt.ylabel('# occupied states')
plt.savefig('images/ados-rwigs-occupation.png')

fig, ax1 = plt.subplots()
ax1.plot(RWIGS, ED, 'bo', label='d-band center (eV)')
ax1.set_xlabel('RWIGS ($\AA$)')
ax1.set_ylabel('d-band center (eV)', color='b')
for tl in ax1.get_yticklabels():
    tl.set_color('b')

ax2 = ax1.twinx()
ax2.plot(RWIGS, WD, 'gs', label='d-band width (eV)')
ax2.set_ylabel('d-band width (eV)', color='g')
for tl in ax2.get_yticklabels():
    tl.set_color('g')

plt.savefig('images/ados-rwigs-moments.png')
plt.show()

Figure 59: Effect of the RWIGS on the number of occupied \(d\)-states.

You can see the number of occupied states increases approximately linearly here with RWIGS. This is due to overcounting of neighboring electrons. The d-band center and width also change.

4.12 Band structures

To compute a band structure we do two things. First, we compute the self-consistent electron density.
Then we compute the band structure at the desired $k$-points. We will use Si as an example (adapted from). First, we get the self-consistent electron density in a calculation. from vasp import Vasp from ase import Atom, Atoms from ase.visualize import view a = 5.38936 atoms = Atoms([Atom('Si', [0, 0, 0]), Atom('Si', [0.25, 0.25, 0.25])]) atoms.set_cell([[a / 2., a / 2., 0.0], [0.0, a / 2., a / 2.], [a / 2., 0.0, a / 2.]], scale_atoms=True) calc = Vasp('bulk/Si-selfconsistent', xc='PBE', prec='Medium', lcharg=True, lwave=True, kpts=[4, 4, 4], atoms=atoms) calc.run() Now, we run a new calculation along the k-point path desired. The standard VASP way of doing this is to modify the INCAR and KPOINTS file and rerun VASP. We will not do that. Doing that results in some lost information if you overwrite the old files. We will copy the old directory to a new directory, using code to ensure this only happens one time. from vasp import Vasp wd = 'bulk/Si-bandstructure' calc = Vasp('bulk/Si-selfconsistent') calc.clone(wd) kpts = [[0.5, 0.5, 0.0], # L [0, 0, 0], # Gamma [0, 0, 0], [0.5, 0.5, 0.5]] # X calc.set(kpts=kpts, reciprocal=True, kpts_nintersections=10, icharg=11) print calc.run() -3.62224484 We will learn how to manually parse the EIGENVAL file here to generate the band structure. The structure of the EIGENVAL file looks like this: head -n 20 bulk/Si-bandstructure/EIGENVAL 2 2 1 1 0.1956688E+02 0.3810853E-09 0.3810853E-09 0.3810853E-09 0.5000000E-15 1.000000000000000E-004 CAR unknown system 8 20 8 0.5000000E+00 0.5000000E+00 0.0000000E+00 0.5000000E-01 1 -1.826747 2 -1.826743 3 3.153321 4 3.153347 5 6.743989 6 6.744017 7 16.392596 8 16.393943 0.4444444E+00 0.4444444E+00 0.0000000E+00 0.5000000E-01 1 -2.669487 2 -0.918463 We can ignore the first five lines. 
f = open('bulk/Si-bandstructure/EIGENVAL', 'r') line1 = f.readline() line2 = f.readline() line3 = f.readline() line4 = f.readline() comment = f.readline() unknown, nkpoints, nbands = [int(x) for x in f.readline().split()] blankline = f.readline() band_energies = [[] for i in range(nbands)] for i in range(nkpoints): x, y, z, weight = [float(x) for x in f.readline().split()] for j in range(nbands): fields = f.readline().split() id, energy = int(fields[0]), float(fields[1]) band_energies[id - 1].append(energy) blankline = f.readline() f.close() import matplotlib.pyplot as plt for i in range(nbands): plt.plot(range(nkpoints), band_energies[i]) ax = plt.gca() ax.set_xticks([]) # no tick marks plt.xlabel('k-vector') plt.ylabel('Energy (eV)') ax.set_xticks([0, 10, 19]) ax.set_xticklabels(['$L$', '$\Gamma$', '$X$']) plt.savefig('images/Si-bandstructure.png') Figure 60: Calculated band-structure for Si. Next we will examine the connection between band structures and density of states. In this example, we will compute the band structure of TiO2 using a function built into vasp to do the analysis described above. from vasp import Vasp calc = Vasp('bulk/tio2/step3') print calc.get_fermi_level() calc.abort() n, bands, p = calc.get_bandstructure(kpts_path=[('$\Gamma$', [0.0, 0.0, 0.0]), ('X', [0.5, 0.5, 0.0]), ('X', [0.5, 0.5, 0.0]), ('M', [0.0, 0.5, 0.5]), ('M', [0.0, 0.5, 0.5]), ('$\Gamma$', [0.0, 0.0, 0.0])]) p.savefig('images/tio2-bandstructure-dos.png') Figure 61: Band structure and total density of states for TiO2. 4.12.1 create example showing band dispersion with change in lattice constant In this section, we examine the effect of the lattice constant on the band structure. Since the lattice constant affects the overlap of neighboring atoms, we expect that smaller lattice constants will show more dispersion, i.e. broader bands. Larger lattice constants, in contrast, should show narrower bands. We examine this in silicon. 
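Before running the calculations, the expected trend can be rationalized with a one-dimensional tight-binding sketch (everything here is illustrative: the hopping parameters and the assumed exponential decay of the hopping integral with spacing are not from VASP):

```python
import numpy as np

def bandwidth(a, t0=10.0, a0=1.0):
    # a 1-D tight-binding band eps(k) = -2 t cos(k a) has bandwidth 4 t;
    # assume the hopping integral t decays exponentially with spacing a
    t = t0 * np.exp(-a / a0)
    return 4.0 * t

for a in (4.7, 5.38936, 6.0):
    print('a = {0:.2f}: bandwidth = {1:.4f} eV'.format(a, bandwidth(a)))
# the bandwidth shrinks monotonically as the lattice constant grows
```

The smaller the spacing, the larger the orbital overlap, and the broader the bands, which is the trend we look for in the DFT band structures below.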
from vasp import Vasp
from ase import Atom, Atoms

calcs = []
for i, a in enumerate([4.7, 5.38936, 6.0]):
    atoms = Atoms([Atom('Si', [0, 0, 0]),
                   Atom('Si', [0.25, 0.25, 0.25])])
    atoms.set_cell([[a/2., a/2., 0.0],
                    [0.0, a/2., a/2.],
                    [a/2., 0.0, a/2.]], scale_atoms=True)
    calc = Vasp('bulk/Si-bs-{0}'.format(i),
                xc='PBE',
                lcharg=True,
                lwave=True,
                kpts=[4, 4, 4],
                atoms=atoms)
    print(calc.run())
    calcs += [calc]

Vasp.wait(abort=True)

for i, calc in enumerate(calcs):
    n, bands, p = calc.get_bandstructure(kpts_path=[('L', [0.5, 0.5, 0.0]),
                                                    ('$\Gamma$', [0, 0, 0]),
                                                    ('$\Gamma$', [0, 0, 0]),
                                                    ('X', [0.5, 0.5, 0.5])],
                                         kpts_nintersections=10)
    if p is not None:
        png = 'images/Si-bs-{0}.png'.format(i)
        p.savefig(png)

-7.55662509
-10.80024435
-10.13735105

Figure 62: Si band structure for a=4.7

Figure 63: Si band structure for a=5.38936

Figure 64: Si band structure for a=6.0

You can see the bands for a=6.0 are notably narrower (less dispersion) than for a=4.7, consistent with the reduced overlap between neighboring atoms at the larger lattice constant.

4.13 Magnetism

4.13.1 Determining if a magnetic solution is energetically favorable

We can force a total magnetic moment onto a unit cell and compute the total energy as a function of the total magnetic moment. If there is a minimum in the energy, then we know there is a lower energy magnetic solution than a non-magnetic solution. We use NUPDOWN to enforce the magnetic moment in the cell. Note that NUPDOWN can only be an integer. You cannot set it to be an arbitrary float.
from vasp import Vasp from ase.lattice.cubic import BodyCenteredCubic atoms = BodyCenteredCubic(directions=[[1, 0, 0], [0, 1, 0], [0, 0, 1]], size=(1, 1, 1), symbol='Fe') calc = Vasp('bulk/Fe-bcc-fixedmagmom-{0:1.2f}'.format(0.0), xc='PBE', encut=300, kpts=[4, 4, 4], ispin=2, nupdown=0, atoms=atoms) print(atoms.get_potential_energy()) -15.34226703 from vasp import Vasp from ase.lattice.cubic import BodyCenteredCubic atoms = BodyCenteredCubic(directions=[[1, 0, 0], [0, 1, 0], [0, 0, 1]], size=(1, 1, 1), symbol='Fe') NUPDOWNS = [0.0, 2.0, 4.0, 5.0, 6.0, 8.0] energies = [] for B in NUPDOWNS: calc = Vasp('bulk/Fe-bcc-fixedmagmom-{0:1.2f}'.format(B), xc='PBE', encut=300, kpts=[4, 4, 4], ispin=2, nupdown=B, atoms=atoms) energies.append(atoms.get_potential_energy()) if None in energies: calc.abort() import matplotlib.pyplot as plt plt.plot(NUPDOWNS, energies) plt.xlabel('Total Magnetic Moment') plt.ylabel('Energy (eV)') plt.savefig('images/Fe-fixedmagmom.png') Figure 65: Total energy vs. total magnetic moment for bcc Fe. You can see here there is a minimum in energy at a total magnetic moment somewhere between 4 and 5. There are two Fe atoms in the unit cell, which means the magnetic moment on each atom must be about 2.5 Bohr-magnetons. This is a good guess for a real calculation. Note that VASP recommends you overestimate the magnetic moment guesses if you are looking for ferromagnetic solutions. To run a spin-polarized calculation with initial guesses on each atom, we must set the magnetic moment on the atoms. Here we set it through the magmom attribute on the atom. In the example after this, we set it in the Atoms object. 
from vasp import Vasp
from ase.lattice.cubic import BodyCenteredCubic

atoms = BodyCenteredCubic(directions=[[1, 0, 0],
                                      [0, 1, 0],
                                      [0, 0, 1]],
                          size=(1, 1, 1),
                          symbol='Fe')

# set magnetic moments on each atom
for atom in atoms:
    atom.magmom = 2.5

calc = Vasp('bulk/Fe-bcc-sp-1',
            xc='PBE',
            encut=300,
            kpts=[4, 4, 4],
            ispin=2,
            lorbit=11,  # you need this for individual magnetic moments
            atoms=atoms)

e = atoms.get_potential_energy()
B = atoms.get_magnetic_moment()
magmoms = atoms.get_magnetic_moments()

print 'Total magnetic moment is {0:1.2f} Bohr-magnetons'.format(B)
print 'Individual moments are {0} Bohr-magnetons'.format(magmoms)

Total magnetic moment is -0.01 Bohr-magnetons
Individual moments are [-0.013 -0.013] Bohr-magnetons

4.13.2 Antiferromagnetic spin states

In an antiferromagnetic material, there are equal numbers of spin up and down electrons that align in a regular pattern, but pointing in opposite directions so that there is no net magnetism. It is possible to model this by setting the magnetic moments on each ase.Atom object.

from vasp import Vasp
from ase import Atom, Atoms

atoms = Atoms([Atom('Fe', [0.00, 0.00, 0.00], magmom=5),
               Atom('Fe', [4.3, 4.3, 4.3], magmom=-5),
               Atom('O', [2.15, 2.15, 2.15], magmom=0),
               Atom('O', [6.45, 6.45, 6.45], magmom=0)],
              cell=[[4.3, 2.15, 2.15],
                    [2.15, 4.3, 2.15],
                    [2.15, 2.15, 4.3]])

calc = Vasp('bulk/afm-feo',
            encut=350,
            prec='Normal',
            ispin=2,
            nupdown=0,  # this forces the total magnetic moment to zero
            lorbit=11,  # to get individual moments
            lreal=False,
            atoms=atoms)

print 'Magnetic moments = ', atoms.get_magnetic_moments()
print 'Total magnetic moment = ', atoms.get_magnetic_moment()

Magnetic moments = [-0.061 -0.061 0.063 0.063]
Total magnetic moment = -5e-06

You can see that even though the total magnetic moment is 0, there is a spin on both Fe atoms, and they are pointing in opposite directions. Hence, the sum of spins is zero, and this arrangement is called anti-ferromagnetic.
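A trivial sanity check on such initial guesses (plain Python; the moment values are the initial guesses from the scripts above) is that the antiferromagnetic pattern has zero net moment while a ferromagnetic guess does not:

```python
# initial magnetic moment guesses (Bohr-magnetons)
afm_guess = [5, -5, 0, 0]    # Fe up, Fe down, two O atoms
fm_guess = [2.5, 2.5]        # ferromagnetic bcc Fe guess from earlier

print(sum(afm_guess))  # 0: zero net moment, consistent with nupdown=0
print(sum(fm_guess))   # 5.0: finite net moment
```

Checking this before a run helps catch sign mistakes in the ordering pattern, which would otherwise silently converge to a different magnetic state.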
4.13.3 TODO NiO-FeO formation energies with magnetism

4.14 TODO phonons phonopy

4.15 TODO solid state NEB Caspersen10052005 Carter paper sheppard:074103 recent Henkelman paper

5 Surfaces

5.1 Surface structures

5.1.1 Simple surfaces

ase provides many utility functions to set up surfaces. Here is a simple example of an fcc111 Al surface. There are built-in functions for fcc111, bcc110, bcc111, hcp0001 and diamond111.

from ase.lattice.surface import fcc111
from ase.io import write
from ase.visualize import view

slab = fcc111('Al', size=(2, 2, 3), vacuum=10.0)

from ase.constraints import FixAtoms
constraint = FixAtoms(mask=[atom.tag >= 2 for atom in slab])
slab.set_constraint(constraint)

view(slab)
write('images/Al-slab.png', slab, rotation='90x', show_unit_cell=2)

Figure 66: An Al(111) slab with three layers and 20 Å of vacuum.

5.1.2 Vicinal surfaces

The vast majority of surface calculations are performed on flat surfaces. This is partially because these surfaces tend to have the lowest surface energies, and thus are likely to be experimentally observed. The flat surfaces, also known as low Miller index surfaces, also have small unit cells, which tends to make them computationally affordable. There are, however, many reasons to model the properties of surfaces that are not flat. You may be interested in the reactivity of a step edge, for example, or you may use the lower coordination of steps as a proxy for nanoparticle reactivity. Many stepped surfaces are not that difficult to make now. The main idea in generating them is described here. ase provides a general function for making vicinal surfaces. Here is an example of a (211) surface.

from ase.lattice.surface import surface
from ase.io import write

# Au(211) with 9 layers
s1 = surface('Au', (2, 1, 1), 9)
s1.center(vacuum=10, axis=2)

write('images/Au-211.png', s1.repeat((3, 3, 1)),
      rotation='-30z,90x',  # change the orientation for viewing
      show_unit_cell=2)

Figure 67: An Au(211) surface constructed with ase.
5.2 TODO Surface calculation parameters There is one important parameter that is different for surfaces than for bulk calculations, the k-point grid. Assuming you have followed the convention that the z-axis is normal to the surface, the k-point grids for slab calculations always have the form of \(M \times N \times 1\). To illustrate why, consider this example: from ase.lattice.surface import fcc111 from vasp import Vasp slab = fcc111('Al', size=(1, 1, 4), vacuum=10.0) calc = Vasp('surfaces/Al-bandstructure', xc='PBE', encut=300, kpts=[6, 6, 6], lcharg=True, # you need the charge density lwave=True, # and wavecar for the restart atoms=slab) n, bands, p = calc.get_bandstructure(kpts_path=[(r'$\Gamma$', [0, 0, 0]), ('$K1$', [0.5, 0.0, 0.0]), ('$K1$', [0.5, 0.0, 0.0]), ('$K2$', [0.5, 0.5, 0.0]), ('$K2$', [0.5, 0.5, 0.0]), (r'$\Gamma$', [0, 0, 0]), (r'$\Gamma$', [0, 0, 0]), ('$K3$', [0.0, 0.0, 1.0])], kpts_nintersections=10) if p is None: calc.abort() p.savefig('images/Al-slab-bandstructure.png') Figure 68: Band structure of an Al slab in the plane (path from Gamma to K1 to K2 to Gamma) and normal to the surface (Gamma to K3). Note the bands are flat in the direction normal to the surface, hence only one k-point is needed in this direction. 5.3 Surface relaxation When a surface is created, the bulk symmetry is broken and consequently there will be forces on the surface atoms. We will examine some consequences of this with a simple Al slab. First, we show there are forces on the slab atoms. from vasp import Vasp from ase.lattice.surface import fcc111 atoms = fcc111('Al', size=(1, 1, 4), vacuum=10.0) calc = Vasp('surfaces/Al-slab-unrelaxed', xc='PBE', kpts=[6, 6, 1], encut=350, atoms=atoms) print(atoms.get_forces()) [[ 0. 0. -0.01505445] [ 0. 0. 0.18818605] [ 0. 0. -0.18818605] [ 0. 0. 0.01505445]] Some points to note. The forces on the atoms have symmetry to them. That is because the slab is centered. 
Had the slab had an odd number of atoms, it is likely the center atom would have no forces on it. Next we consider the spacing between each layer in the slab. We do this for comparison later. from vasp import Vasp calc = Vasp('surfaces/Al-slab-unrelaxed') atoms = calc.get_atoms() print 'Total energy: {0:1.3f} eV'.format(atoms.get_potential_energy()) for i in range(1, len(atoms)): print '{0} deltaz = {1:1.3f} angstroms'.format(i, atoms[i].z - atoms[i-1].z) Total energy: -14.179 eV 1 deltaz = 2.338 angstroms 2 deltaz = 2.338 angstroms 3 deltaz = 2.338 angstroms To reduce the forces, we can let VASP relax the geometry. We have to make some decisions about how to relax the slab. One choice would be to relax all the atoms in the slab. If we do that, then there will be no atoms with bulk like spacing unless we increase the slab thickness pretty dramatically. It is pretty common to freeze some atoms at the bulk coordinates, and let the others relax. We will freeze the bottom two layers (defined by tags 3 and 4) and let the first two layers relax. To do that we add constraints to the slab. Note: the ase constraints are only partially used by Vasp. The ase.constraints.FixAtoms constraint gets written to the POSCAR file, and is then used internally in VASP. The only other constraint that VASP can use internally is ase.constraints.FixScaled. The other constraints are not written to the POSCAR and are not used by VASP. 
from ase.lattice.surface import fcc111

atoms = fcc111('Al', size=(2, 2, 4), vacuum=10.0)
print([atom.z for atom in atoms])
print([atom.z <= 13 for atom in atoms])

[9.9999999999999982, 9.9999999999999982, 9.9999999999999982, 9.9999999999999982, 12.338268590217982, 12.338268590217982, 12.338268590217982, 12.338268590217982, 14.676537180435968, 14.676537180435968, 14.676537180435968, 14.676537180435968, 17.01480577065395, 17.01480577065395, 17.01480577065395, 17.01480577065395]
[True, True, True, True, True, True, True, True, False, False, False, False, False, False, False, False]

from vasp import Vasp
from ase.lattice.surface import fcc111
from ase.constraints import FixAtoms

atoms = fcc111('Al', size=(1, 1, 4), vacuum=10.0)

constraint = FixAtoms(mask=[atom.tag >= 3 for atom in atoms])
atoms.set_constraint(constraint)

calc = Vasp('surfaces/Al-slab-relaxed',
            xc='PBE',
            kpts=[6, 6, 1],
            encut=350,
            ibrion=2,
            isif=2,
            nsw=10,
            atoms=atoms)
print(calc.potential_energy)
print(calc)

-14.17963819
Vasp calculation directory:
---------------------------
  [[/home-research/jkitchin/dft-book/surfaces/Al-slab-relaxed]]

Unit cell:
----------
        x       y       z      |v|
  v0   2.864   0.000   0.000   2.864 Ang
  v1   1.432   2.480   0.000   2.864 Ang
  v2   0.000   0.000  27.015  27.015 Ang
  alpha, beta, gamma (deg): 90.0 90.0 60.0
  Total volume: 191.872 Ang^3

  Stress:    xx     yy     zz     yz     xz     xy
           0.006  0.006  0.002 -0.000 -0.000 -0.000 GPa

 ID  tag  sym    x      y      z     rmsF (eV/A)
  0    4   Al  0.000* 0.000* 10.000*  0.00
  1    3   Al  1.432* 0.827* 12.338*  0.00
  2    2   Al  2.864  1.653  14.677   0.19
  3    1   Al  0.000  0.000  17.015   0.01

  Potential energy: -14.1796 eV

INPUT Parameters:
-----------------
  pp     : PBE
  isif   : 2
  xc     : pbe
  kpts   : [6, 6, 1]
  encut  : 350
  lcharg : False
  ibrion : 2
  ismear : 1
  lwave  : True
  sigma  : 0.1
  nsw    : 10

Pseudopotentials used:
----------------------
  Al: potpaw_PBE/Al/POTCAR (git-hash: ad7c649117f1490637e05717e30ab9a0dd8774f6)

You can see that atoms 2 and 3 (the ones we relaxed, because they have tags of 1 and 2, which are less than 3) now have very low forces on them
and it appears that atoms 0 and 1 have no forces on them. That is because the FixAtoms constraint works by setting the forces on those atoms to zero. We can see in the next example that the z-positions of the relaxed atoms have indeed relaxed and changed, while the positions of the frozen atoms did not change. Note there are two versions of the forces: the true forces, and the forces after constraints are applied. ase.atoms.Atoms.get_forces

from vasp import Vasp

calc = Vasp('surfaces/Al-slab-relaxed')
atoms = calc.get_atoms()

print 'Constraints = True: ', atoms.get_forces(apply_constraint=True)
print 'Constraints = False: ', atoms.get_forces(apply_constraint=False)

Constraints = True:  [[ 0.     0.     0.   ]
                      [ 0.     0.     0.   ]
                      [ 0.     0.    -0.049]
                      [ 0.     0.    -0.019]]
Constraints = False:  [[ 0.     0.    -0.002]
                       [ 0.     0.     0.069]
                       [ 0.     0.    -0.049]
                       [ 0.     0.    -0.019]]

from vasp import Vasp
from ase.lattice.surface import fcc111

calc = Vasp('surfaces/Al-slab-relaxed')
atoms = calc.get_atoms()

print 'Total energy: {0:1.3f}'.format(atoms.get_potential_energy())
for i in range(1, len(atoms)):
    print 'd_({0},{1}) = {2:1.3f} angstroms'.format(i, i - 1,
                                                    atoms[i].z - atoms[i - 1].z)

Total energy: -14.182
d_(1,0) = 2.338 angstroms
d_(2,1) = 2.309 angstroms
d_(3,2) = 2.370 angstroms

Depending on the layer there is either slight contraction or expansion. These quantities are small, and careful convergence studies should be performed. Note the total energy change from unrelaxed to relaxed is not that large in this case (only a few meV). This is usually the case for metals, where the relaxation effects are relatively small. In oxides and semiconductors, the effects can be large, and when there are adsorbates, the effects can be large also.
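The constraint bookkeeping above can be mimicked with a boolean mask (a numpy sketch; the force values are the unconstrained forces printed above, and the masking mirrors what FixAtoms does, without needing ase):

```python
import numpy as np

# unconstrained forces (eV/A) on the four-layer Al slab from above
forces = np.array([[0.0, 0.0, -0.002],
                   [0.0, 0.0,  0.069],
                   [0.0, 0.0, -0.049],
                   [0.0, 0.0, -0.019]])

fixed = np.array([True, True, False, False])  # bottom two layers frozen

constrained = forces.copy()
constrained[fixed] = 0.0  # FixAtoms zeroes the forces on frozen atoms

print(constrained)
```

Zeroing rather than removing the forces is why the optimizer never moves the frozen atoms, while the free atoms still feel their true forces.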
5.4 Surface reconstruction

We previously considered how relaxation can lower the surface energy. For some surfaces, a more extreme effect can reduce the surface energy: reconstruction. In a simple surface relaxation, the basic structure of a surface is preserved. However, sometimes there is a different surface structure that may have a lower surface energy. Some famous reconstructions include the Si(111)-(\(7\times7\)) reconstruction, the Pt(100) hex reconstruction PhysRevB.56.10518,PhysRevB.82.161418, and the Au(111) herringbone reconstruction. We will consider the (110) missing row reconstruction PhysRevB.83.075415. For some metals, especially Pt and Au, it is energetically favorable to form the so-called missing row reconstruction where every other row in the surface is "missing". It is favorable because it lowers the surface energy. Let us consider how we might calculate and predict that.

It is straightforward to compute the energy of a (110) slab, and of a (110) slab with one row missing. However, these slabs contain different numbers of atoms, so we cannot directly compare the total energies to determine which energy is lower. We have to consider where the missing row atoms have gone, so we can account for their energy. We will consider that they have gone into the bulk, so we consider the energy associated with the following transformation:

slab\(_{110}\) \(\rightarrow\) slab\(_{missing row}\) + bulk

Thus, if this change in energy: \(E_{bulk} + E_{slab_{missing row}} - E_{slab_{110}}\) is negative, then the formation of the missing row is expected to be favorable.

5.4.1 Au(110) missing row reconstruction

We first consider the Au(110) case, where the reconstruction is known to be favorable.
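The bookkeeping in this transformation reduces to a one-line helper. The slab and bulk energies below are the Au values reported in the following sections; the missing-row slab energy is a hypothetical value back-computed from the reported dE, so the numbers are illustrative only:

```python
def formation_energy(e_slab, e_missing_row, e_bulk):
    # dE = E_bulk + E_slab(missing row) - E_slab(110)
    # a negative dE means the missing-row reconstruction is favorable
    return e_bulk + e_missing_row - e_slab

# illustrative energies (eV); e_missing_row is hypothetical
dE = formation_energy(e_slab=-35.924, e_missing_row=-32.800, e_bulk=-3.194)
print('dE = {0:1.3f} eV'.format(dE))  # dE = -0.070 eV
```

Writing the sign convention down once avoids the easy mistake of comparing slabs with different atom counts directly.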
5.4.1.1 Clean Au(110) slab from ase.lattice.surface import fcc110 from ase.io import write from ase.constraints import FixAtoms from ase.visualize import view atoms = fcc110('Au', size=(2, 1, 6), vacuum=10.0) constraint = FixAtoms(mask=[atom.tag > 2 for atom in atoms]) atoms.set_constraint(constraint) view(atoms) from vasp import Vasp from ase.lattice.surface import fcc110 from ase.io import write from ase.constraints import FixAtoms atoms = fcc110('Au', size=(2, 1, 6), vacuum=10.0) constraint = FixAtoms(mask=[atom.tag > 2 for atom in atoms]) atoms.set_constraint(constraint) write('images/Au-110.png', atoms.repeat((2, 2, 1)), rotation='-90x', show_unit_cell=2) print Vasp('surfaces/Au-110', xc='PBE', kpts=[6, 6, 1], encut=350, ibrion=2, isif=2, nsw=10, atoms=atoms).potential_energy -35.92440066 Figure 69: The unreconstructed Au(110) surface viewed from the side. 5.4.1.2 Missing row in Au(110) from vasp import Vasp from ase.lattice.surface import fcc110 from ase.io import write from ase.constraints import FixAtoms atoms = fcc110('Au', size=(2, 1, 6), vacuum=10.0) del atoms[11] # delete surface row constraint = FixAtoms(mask=[atom.tag > 2 for atom in atoms]) atoms.set_constraint(constraint) write('images/Au-110-missing-row.png', atoms.repeat((2, 2, 1)), rotation='-90x', show_unit_cell=2) calc = Vasp('surfaces/Au-110-missing-row', xc='PBE', kpts=[6, 6, 1], encut=350, ibrion=2, isif=2, nsw=10, atoms=atoms) calc.update() Figure 70: Au(110) with the missing row reconstruction. 
5.4.1.3 Bulk Au

from vasp import Vasp
from ase.lattice.cubic import FaceCenteredCubic

atoms = FaceCenteredCubic(directions=[[0, 1, 1],
                                      [1, 0, 1],
                                      [1, 1, 0]],
                          size=(1, 1, 1),
                          symbol='Au')

print Vasp('bulk/Au-fcc',
           xc='PBE',
           encut=350,
           kpts=[12, 12, 12],
           atoms=atoms).potential_energy

-3.19446244

5.4.1.4 Analysis of energies

from vasp import Vasp

slab = Vasp('surfaces/Au-110').get_atoms()
missingrow = Vasp('surfaces/Au-110-missing-row').get_atoms()
bulk = Vasp('bulk/Au-fcc').get_atoms()

print 'natoms slab = {0}'.format(len(slab))
print 'natoms missing row = {0}'.format(len(missingrow))
print 'natoms bulk = {0}'.format(len(bulk))

print 'dE = {0:1.3f} eV'.format(missingrow.get_potential_energy()
                                + bulk.get_potential_energy()
                                - slab.get_potential_energy())

natoms slab = 12
natoms missing row = 11
natoms bulk = 1
dE = -0.070 eV

The missing row formation energy is slightly negative. The magnitude of the formation energy is pretty small, but just slightly bigger than the typical convergence errors observed, so we should cautiously conclude that the reconstruction is favorable for Au(110). We made a lot of shortcuts in computing this quantity, including using the experimental lattice constant of Au, not checking for convergence in k-points or planewave cutoff, and not checking for convergence with respect to slab thickness or number of relaxed layers.
5.4.2 Ag(110) missing row reconstruction 5.4.2.1 Clean Ag(110) slab from vasp import Vasp from ase.lattice.surface import fcc110 from ase.io import write from ase.constraints import FixAtoms atoms = fcc110('Ag', size=(2, 1, 6), vacuum=10.0) constraint = FixAtoms(mask=[atom.tag > 2 for atom in atoms]) atoms.set_constraint(constraint) calc = Vasp('surfaces/Ag-110', xc='PBE', kpts=[6, 6, 1], encut=350, ibrion=2, isif=2, nsw=10, atoms=atoms) calc.update() 5.4.2.2 Missing row in Ag(110) from vasp import Vasp from ase.lattice.surface import fcc110 from ase.io import write from ase.constraints import FixAtoms atoms = fcc110('Ag', size=(2, 1, 6), vacuum=10.0) del atoms[11] # delete surface row constraint = FixAtoms(mask=[atom.tag > 2 for atom in atoms]) atoms.set_constraint(constraint) Vasp('surfaces/Ag-110-missing-row', xc='PBE', kpts=[6, 6, 1], encut=350, ibrion=2, isif=2, nsw=10, atoms=atoms).update() 5.4.2.3 Bulk Ag from vasp import Vasp from ase.visualize import view from ase.lattice.cubic import FaceCenteredCubic atoms = FaceCenteredCubic(directions=[[0, 1, 1], [1, 0, 1], [1, 1, 0]], size=(1, 1, 1), symbol='Ag') Vasp('bulk/Ag-fcc', xc='PBE', encut=350, kpts=[12, 12, 12], atoms=atoms).update() 5.4.2.4 Analysis of energies from vasp import Vasp eslab = Vasp('surfaces/Ag-110').potential_energy emissingrow = Vasp('surfaces/Ag-110-missing-row').potential_energy ebulk = Vasp('bulk/Ag-fcc').potential_energy print 'dE = {0:1.3f} eV'.format(emissingrow + ebulk - eslab) dE = -0.010 eV For Ag(110), the missing row formation energy is practically thermoneutral, i.e. not that favorable. This energy is so close to 0 eV that we cannot confidently say whether the reconstruction is favorable or not. Experimentally, the reconstruction is not seen on very clean Ag(110), although it is reported that some adsorbates may induce the reconstruction PhysRevLett.59.2307. 
5.4.3 Cu(110) missing row reconstruction 5.4.3.1 Clean Cu(110) slab from vasp import Vasp from ase.lattice.surface import fcc110 from ase.constraints import FixAtoms atoms = fcc110('Cu', size=(2, 1, 6), vacuum=10.0) constraint = FixAtoms(mask=[atom.tag > 2 for atom in atoms]) atoms.set_constraint(constraint) Vasp('surfaces/Cu-110', xc='PBE', kpts=[6, 6, 1], encut=350, ibrion=2, isif=2, nsw=10, atoms=atoms).update() 5.4.3.2 Missing row in Cu(110) from vasp import Vasp from ase.lattice.surface import fcc110 from ase.constraints import FixAtoms atoms = fcc110('Cu', size=(2, 1, 6), vacuum=10.0) del atoms[11] # delete surface row constraint = FixAtoms(mask=[atom.tag > 2 for atom in atoms]) atoms.set_constraint(constraint) Vasp('surfaces/Cu-110-missing-row', xc='PBE', kpts=[6, 6, 1], encut=350, ibrion=2, isif=2, nsw=10, atoms=atoms).update() 5.4.3.3 Bulk Cu from vasp import Vasp from ase.visualize import view from ase.lattice.cubic import FaceCenteredCubic atoms = FaceCenteredCubic(directions=[[0, 1, 1], [1, 0, 1], [1, 1, 0]], size=(1, 1, 1), symbol='Cu') Vasp('bulk/Cu-fcc', xc='PBE', encut=350, kpts=[12, 12, 12], atoms=atoms).update() 5.4.3.4 Analysis from vasp import Vasp slab = Vasp('surfaces/Cu-110').get_atoms() missingrow = Vasp('surfaces/Cu-110-missing-row').get_atoms() bulk = Vasp('bulk/Cu-fcc').get_atoms() eslab = slab.get_potential_energy() emissingrow = missingrow.get_potential_energy() ebulk = bulk.get_potential_energy() print 'natoms slab = {0}'.format(len(slab)) print 'natoms missing row = {0}'.format(len(missingrow)) print 'natoms bulk = {0}'.format(len(bulk)) print 'dE = {0:1.3f} eV'.format(emissingrow + ebulk - eslab) It is questionable whether we should consider this evidence of a missing row reconstruction because the number is small. That does not mean the reconstruction will not happen, but it could mean it is very easy to lift. 5.5 Surface energy The easiest way to calculate surface energies is from this equation: \(\sigma = \frac{1}{2}(E_{slab} - \frac{N_{slab}}{N_{bulk}} E_{bulk})\) where \(E_{slab}\) is the total energy of a symmetric slab (i.e. 
one with inversion symmetry, and where both sides of the slab have been relaxed), \(E_{bulk}\) is the total energy of a bulk unit cell, \(N_{slab}\) is the number of atoms in the slab, and \(N_{bulk}\) is the number of atoms in the bulk unit cell. One should be sure that the bulk energy is fully converged with respect to $k$-points, and that the slab energy is also converged with respect to $k$-points. The energies should be compared at the same cutoff energies. The idea is then to increase the thickness of the slab until the surface energy \(\sigma\) converges. Figure 71: Schematic figure illustrating the calculation of a surface energy. Unfortunately, this approach does not always work. The bulk system is treated subtly differently from the slab system, particularly in the $z$-direction where the vacuum is (where typically only one $k$-point is used in slabs). Consequently, the $k$-point sampling is not equivalent in the two systems, and one can in general expect some errors due to this, with the best case being cancellation of the errors due to total $k$-point convergence. In the worst case, one can get a linear divergence in the surface energy with slab thickness PhysRevB.49.16798. A variation of this method that usually results in better $k$-point error cancellation is to calculate the bulk unit cell energy using the slab unit cell with no vacuum space, with the same $k$-point mesh in the \(x\) and \(y\) directions, but with increased $k$-points in the $z$-direction. Thus, the bulk system and slab system have the same Brillouin zone in at least two dimensions. This maximizes the cancellation of $k$-point errors, but still does not guarantee convergence of the surface energy, as discussed in PhysRevB.49.16798,0953-8984-10-4-017. For quick estimates of the surface energy, one of the methods described above is likely sufficient. 
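The bookkeeping in the surface energy equation above is easy to get wrong (the factor of 1/2 for the two surfaces, and the \(N_{slab}/N_{bulk}\) normalization), so here is a minimal sketch of the arithmetic as a plain Python function. The numbers are made up for illustration; they are not VASP results.

```python
def surface_energy(e_slab, n_slab, e_bulk, n_bulk):
    """sigma = 0.5 * (E_slab - (N_slab / N_bulk) * E_bulk).

    The factor 0.5 accounts for the two surfaces of a symmetric slab.
    The result is in eV per slab unit cell; divide by the surface area
    to get eV / Angstrom^2.
    """
    return 0.5 * (e_slab - (float(n_slab) / n_bulk) * e_bulk)

# hypothetical energies for illustration only:
# a 12-atom slab at -40.0 eV and a 1-atom bulk cell at -3.5 eV
print(surface_energy(e_slab=-40.0, n_slab=12, e_bulk=-3.5, n_bulk=1))
```

The same function works for any slab/bulk pair as long as the two energies were computed at the same cutoff, consistent with the convergence cautions above.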
The advantage of these methods is the small number of calculations required to obtain the estimate: one needs only a bulk calculation (which must be done anyhow to get the bulk lattice constant to create the slab), and a slab calculation that is sufficiently thick to get the estimate. Additional calculations are only required to test the convergence of the surface energy. An alternative method for calculating surface energies that does not involve an explicit bulk calculation follows Ref. 0953-8984-10-4-017. The method follows from the surface energy equation above, where for an N-atom slab, in the limit of N → ∞, \(E_{slab} \approx 2\sigma + \frac{N_{slab}}{N_{bulk}} E_{bulk}\) Then, we can estimate \(E_{bulk}\) from the slope of the total energy of the slab as a function of the slab thickness, which leads to \(\sigma = \lim_{N \rightarrow \infty} \frac{1}{2}(E_{slab}^N - N \Delta E_N)\) where \(\Delta E_N = E_{slab}^N - E_{slab}^{N-1}\). We will examine this approach here. We will use unrelaxed slabs for computational efficiency. 
from vasp import Vasp from ase.lattice.surface import fcc111 import matplotlib.pyplot as plt Nlayers = [3, 4, 5, 6, 7, 8, 9, 10, 11] energies = [] sigmas = [] for n in Nlayers: slab = fcc111('Cu', size=(1, 1, n), vacuum=10.0) slab.center() calc = Vasp('bulk/Cu-layers/{0}'.format(n), xc='PBE', encut=350, kpts=[8, 8, 1], atoms=slab) calc.set_nbands(f=2) # the default nbands in VASP is too low for Cu energies.append(slab.get_potential_energy()) calc.stop_if(None in energies) for i in range(len(Nlayers) - 1): N = Nlayers[i] DeltaE_N = energies[i + 1] - energies[i] sigma = 0.5 * (-N * energies[i + 1] + (N + 1) * energies[i]) sigmas.append(sigma) print 'nlayers = {1:2d} sigma = {0:1.3f} eV/atom'.format(sigma, N) plt.plot(Nlayers[0:-1], sigmas, 'bo-') plt.xlabel('Number of layers') plt.ylabel('Surface energy (eV/atom)') plt.savefig('images/Cu-unrelaxed-surface-energy.png') nlayers = 3 sigma = 0.561 eV/atom nlayers = 4 sigma = 0.398 eV/atom nlayers = 5 sigma = 0.594 eV/atom nlayers = 6 sigma = 0.308 eV/atom nlayers = 7 sigma = 0.590 eV/atom nlayers = 8 sigma = 0.332 eV/atom nlayers = 9 sigma = 0.591 eV/atom nlayers = 10 sigma = 0.392 eV/atom Figure 72: Surface energy of a Cu(111) slab as a function of thickness. One reason for the oscillations may be quantum size effects Fiolhais2003209. In PhysRevB.75.115131 the surface energy of Cu(111) is reported as 0.48 eV/atom, or 1.36 J/m\(^2\). Here is an example showing a conversion between these two units. We use ase to compute the area of the unit cell from the norm of the cross-product of the vectors defining the surface unit cell. 
from ase.lattice.surface import fcc111 from ase.units import J, m import numpy as np slab = fcc111('Cu', size=(1, 1, 3), vacuum=10.0) cell = slab.get_cell() area = np.linalg.norm(np.cross(cell[0], cell[1])) # area per atom sigma = 0.48 # eV/atom print 'sigma = {0} J/m^2'.format(sigma / area / (J / m**2)) sigma = 1.36281400415 J/m^2 5.5.1 Advanced topics in surface energy The surface energies can be used to estimate the shapes of nanoparticles using a Wulff construction. See doi.10.1021/jp200950a for an example of computing Mo2C surface energies and particle shapes, and Inoglu2009188 for an example of the influence of adsorbates on surface energies and particle shapes of Cu. For a classic paper on trends in surface energies see Vitos1998186. 5.6 Work function To get the work function, we need to have the local potential. This is not written by default in VASP, and we have to tell it to do that with the LVTOT and LVHAR keywords. from vasp import Vasp import matplotlib.pyplot as plt import numpy as np calc = Vasp('surfaces/Al-slab-relaxed') atoms = calc.get_atoms() calc = Vasp('surfaces/Al-slab-locpot', xc='PBE', kpts=[6, 6, 1], encut=350, lvtot=True, # write out local potential lvhar=True, # write out only electrostatic potential, not xc pot atoms=atoms) calc.wait() ef = calc.get_fermi_level() x, y, z, lp = calc.get_local_potential() nx, ny, nz = lp.shape axy = np.array([np.average(lp[:, :, z]) for z in range(nz)]) # setup the x-axis in realspace uc = atoms.get_cell() xaxis = np.linspace(0, uc[2][2], nz) plt.plot(xaxis, axy) plt.plot([min(xaxis), max(xaxis)], [ef, ef], 'k:') plt.xlabel('Position along z-axis') plt.ylabel('x-y averaged electrostatic potential') plt.savefig('images/Al-wf.png') ind = (xaxis > 0) & (xaxis < 5) wf = np.average(axy[ind]) - ef print ' The workfunction is {0:1.2f} eV'.format(wf) The workfunction is 4.17 eV The workfunction of Al is listed as 4.08 eV. Figure 73: \(xy\) averaged local electrostatic potential of an Al(111) slab. 
5.7 Dipole correction A subtle problem can arise when an adsorbate is placed on one side of a slab with periodic boundary conditions, which is currently the common practice. The problem is that this gives the slab a dipole moment. The array of dipole moments created by the periodic boundary conditions generates an electric field that can distort the electron density of the slab and change the energy. The existence of this field in the vacuum also makes the zero-potential in the vacuum ill-defined, thus the work function is not well-defined. One solution to this problem is to use slabs with adsorbates on both sides, but then very thick (eight to ten layers) slabs must be used to ensure the adsorbates do not interact through the slab. An alternative solution, the dipole correction scheme, was developed by Neugebauer and Scheffler PhysRevB.46.16067 and later corrected by Bengtsson PhysRevB.59.12301. In this technique, an external field is imposed in the vacuum region that exactly cancels the artificial field caused by the slab dipole moment. The advantage of this approach is that thinner slabs with adsorbates on only one side can be used. There are also literature reports that the correction is small morikawa2001:c2h2-si. Nevertheless, in the literature the use of this correction is fairly standard, and it is typical to at least consider the correction. Here we will just illustrate the effect. 5.7.1 Slab with no dipole correction We simply run the calculation here, and compare the results later. 
# compute local potential of slab with no dipole from ase.lattice.surface import fcc111, add_adsorbate from vasp import Vasp import matplotlib.pyplot as plt from ase.io import write slab = fcc111('Al', size=(2, 2, 2), vacuum=10.0) add_adsorbate(slab, 'Na', height=1.2, position='fcc') slab.center() write('images/Na-Al-slab.png', slab, rotation='-90x', show_unit_cell=2) print(Vasp('surfaces/Al-Na-nodip', xc='PBE', encut=340, kpts=[2, 2, 1], lcharg=True, lvtot=True, # write out local potential lvhar=True, # write out only electrostatic potential, not xc pot atoms=slab).potential_energy) -22.55264459 Figure 74: Example slab with a Na atom on it for illustrating the effects of dipole corrections. 5.7.2 TODO Slab with a dipole correction Note this takes a considerably longer time to run than without a dipole correction! In VASP there are several levels of dipole correction to apply. You can use the IDIPOL tag to turn it on, and specify which direction to apply it in (1=\(x\), 2=\(y\), 3=\(z\), 4=\((x,y,z)\)). This simply corrects the total energy and forces. It does not change the contents of LOCPOT. For that, you have to also set the LDIPOL and DIPOL tags. It is not efficient to set all three at the same time for some reason. The VASP manual recommends you first set IDIPOL to get a converged electronic structure, and then set LDIPOL to True, and set the center of electron density in DIPOL. That makes these calculations a multistep process, because we must run a calculation, analyze the charge density to get the center of charge, and then run a second calculation. 
# compute local potential with dipole calculation on from ase.lattice.surface import fcc111, add_adsorbate from vasp import Vasp import numpy as np slab = fcc111('Al', size=(2, 2, 2), vacuum=10.0) add_adsorbate(slab, 'Na', height=1.2, position='fcc') slab.center() calc = Vasp('surfaces/Al-Na-dip', xc='PBE', encut=340, kpts=[2, 2, 1], lcharg=True, idipol=3, # only along z-axis lvtot=True, # write out local potential lvhar=True, # write out only electrostatic potential, not xc pot atoms=slab) calc.stop_if(calc.potential_energy is None) x, y, z, cd = calc.get_charge_density() n0, n1, n2 = cd.shape nelements = n0 * n1 * n2 voxel_volume = slab.get_volume() / nelements total_electron_charge = cd.sum() * voxel_volume electron_density_center = np.array([(cd * x).sum(), (cd * y).sum(), (cd * z).sum()]) electron_density_center *= voxel_volume electron_density_center /= total_electron_charge print 'electron-density center = {0}'.format(electron_density_center) uc = slab.get_cell() # get scaled electron charge density center sedc = np.dot(np.linalg.inv(uc.T), electron_density_center.T).T # we only write 4 decimal places out to the INCAR file, so we round here. sedc = np.round(sedc, 4) calc.clone('surfaces/Al-Na-dip-step2') # now run step 2 with dipole set at scaled electron charge density center calc.set(ldipol=True, dipol=sedc) print(calc.potential_energy) 5.7.3 Comparing no dipole correction with a dipole correction To see the difference in what the dipole correction does, we now plot the potentials from each calculation. 
from vasp import Vasp import matplotlib.pyplot as plt import numpy as np calc = Vasp('surfaces/Al-Na-nodip') atoms = calc.get_atoms() x, y, z, lp = calc.get_local_potential() nx, ny, nz = lp.shape axy_1 = [np.average(lp[:, :, z]) for z in range(nz)] # setup the x-axis in realspace uc = atoms.get_cell() xaxis_1 = np.linspace(0, uc[2][2], nz) e1 = atoms.get_potential_energy() calc = Vasp('surfaces/Al-Na-dip-step2') atoms = calc.get_atoms() x, y, z, lp = calc.get_local_potential() nx, ny, nz = lp.shape axy_2 = [np.average(lp[:, :, z]) for z in range(nz)] # setup the x-axis in realspace uc = atoms.get_cell() xaxis_2 = np.linspace(0, uc[2][2], nz) ef2 = calc.get_fermi_level() e2 = atoms.get_potential_energy() print 'The difference in energy is {0} eV.'.format(e2-e1) plt.plot(xaxis_1, axy_1, label='no dipole correction') plt.plot(xaxis_2, axy_2, label='dipole correction') plt.plot([min(xaxis_2), max(xaxis_2)], [ef2, ef2], 'k:', label='Fermi level') plt.xlabel('z ($\AA$)') plt.ylabel('xy-averaged electrostatic potential') plt.legend(loc='best') plt.savefig('images/dip-vs-nodip-esp.png') Figure 75: Comparison of the electrostatic potentials with a dipole correction and without it. The key points to notice in this figure are: - The two deep dips are where the atoms are. - Without a dipole correction, the electrostatic potential never flattens out. There is a nearly constant slope in the vacuum region, which means there is an electric field there. - With a dipole correction the potential is flat in the vacuum region, except for the step jump near 23 Å. - The difference between the Fermi level and the flat vacuum potential is the work function. - The difference in energy with and without the dipole correction here is small. 
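The last two bullet points are simple to turn into code. Here is a sketch of how the work function could be extracted from an xy-averaged potential profile like the ones plotted above, by averaging over a window where the vacuum potential is flat. The profile below is synthetic, for illustration only; it is not from a VASP calculation.

```python
import numpy as np

def work_function(z, v, e_fermi, zmin, zmax):
    """Work function = average of the xy-averaged electrostatic
    potential v(z) over a flat vacuum window [zmin, zmax], minus
    the Fermi level. z and v are 1D arrays of the same length."""
    z = np.asarray(z)
    v = np.asarray(v)
    mask = (z > zmin) & (z < zmax)
    return np.average(v[mask]) - e_fermi

# synthetic profile: flat 4.5 eV vacuum level, Fermi level at 0.4 eV
z = np.linspace(0.0, 10.0, 101)
v = np.full_like(z, 4.5)
print(work_function(z, v, e_fermi=0.4, zmin=1.0, zmax=5.0))
```

With a dipole correction, the two sides of the slab can have different flat vacuum levels, so this function would be applied once per vacuum window to get the two work functions.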
5.8 Adsorption energies 5.8.1 Simple estimate of the adsorption energy Calculating an adsorption energy amounts to computing the energy of the following kind of reaction: slab + gas-phase molecule \(\rightarrow\) slab_adsorbate + products Figure 76: Schematic of an adsorption process. There are many variations of this idea. The slab may already have some adsorbates on it, the slab may reconstruct on adsorption, the gas-phase molecule may or may not dissociate, and the products may or may not stick to the surface. We have to decide where to put the adsorbates, i.e. what site to put them on, and some sites will be more stable than others. We will consider the dissociative adsorption of O\(_2\) on three sites of a Pt(111) slab. We will assume the oxygen molecule has split in half, and that the atoms have moved far apart. We will model the oxygen coverage at 0.25 ML, which means we need to use a \(2 \times 2\) surface unit cell. For computational speed, we will freeze the slab, but allow the adsorbate to relax. \( \Delta H_{ads} (eV/O) = E_{slab+O} - E_{slab} - 0.5 E_{O_2} \) 5.8.1.1 Calculations 5.8.1.1.1 clean slab calculation from vasp import Vasp from ase.lattice.surface import fcc111 from ase.constraints import FixAtoms atoms = fcc111('Pt', size=(2, 2, 3), vacuum=10.0) constraint = FixAtoms(mask=[True for atom in atoms]) atoms.set_constraint(constraint) from ase.io import write write('images/Pt-fcc-ori.png', atoms, show_unit_cell=2) print(Vasp('surfaces/Pt-slab', xc='PBE', kpts=[4, 4, 1], encut=350, atoms=atoms).potential_energy) -68.23616204 Figure 77: Pt(111) fcc surface 5.8.1.1.2 O atom in the fcc site from vasp import Vasp from ase.lattice.surface import fcc111, add_adsorbate from ase.constraints import FixAtoms atoms = fcc111('Pt', size=(2, 2, 3), vacuum=10.0) add_adsorbate(atoms, 'O', height=1.2, position='fcc') constraint = FixAtoms(mask=[atom.symbol != 'O' for atom in atoms]) atoms.set_constraint(constraint) from ase.io import write write('images/Pt-fcc-site.png', atoms, show_unit_cell=2) print(Vasp('surfaces/Pt-slab-O-fcc', xc='PBE', kpts=[4, 4, 1], encut=350, ibrion=2, nsw=25, atoms=atoms).potential_energy) -74.23018764 Figure 78: FCC site. 
5.8.1.1.3 O atom on the bridge site from vasp import Vasp from ase.lattice.surface import fcc111, add_adsorbate from ase.constraints import FixAtoms atoms = fcc111('Pt', size=(2, 2, 3), vacuum=10.0) add_adsorbate(atoms, 'O', height=1.2, position='bridge') constraint = FixAtoms(mask=[atom.symbol != 'O' for atom in atoms]) atoms.set_constraint(constraint) print(Vasp('surfaces/Pt-slab-O-bridge', xc='PBE', kpts=[4, 4, 1], encut=350, ibrion=2, nsw=25, atoms=atoms).potential_energy) -74.23023073 Figure 79: Initial geometry of the bridge site. It is definitely on the bridge. 5.8.1.1.4 O atom in the hcp site from vasp import Vasp from ase.lattice.surface import fcc111, add_adsorbate from ase.constraints import FixAtoms atoms = fcc111('Pt', size=(2, 2, 3), vacuum=10.0) add_adsorbate(atoms, 'O', height=1.2, position='hcp') constraint = FixAtoms(mask=[atom.symbol != 'O' for atom in atoms]) atoms.set_constraint(constraint) from ase.io import write write('images/Pt-hcp-o-site.png', atoms, show_unit_cell=2) print(Vasp('surfaces/Pt-slab-O-hcp', xc='PBE', kpts=[4, 4, 1], encut=350, ibrion=2, nsw=25, atoms=atoms).potential_energy) -73.76942127 Figure 80: HCP site. 5.8.1.2 Analysis of adsorption energies from vasp import Vasp from ase.io import write calc = Vasp('surfaces/Pt-slab') atoms = calc.get_atoms() e_slab = atoms.get_potential_energy() write('images/pt-slab.png', atoms, show_unit_cell=2) calc = Vasp('surfaces/Pt-slab-O-fcc') atoms = calc.get_atoms() e_slab_o_fcc = atoms.get_potential_energy() write('images/pt-slab-fcc-o.png', atoms, show_unit_cell=2) calc = Vasp('surfaces/Pt-slab-O-hcp') atoms = calc.get_atoms() e_slab_o_hcp = atoms.get_potential_energy() write('images/pt-slab-hcp-o.png', atoms, show_unit_cell=2) calc = Vasp('surfaces/Pt-slab-O-bridge') atoms = calc.get_atoms() e_slab_o_bridge = atoms.get_potential_energy() write('images/pt-slab-bridge-o.png', atoms, show_unit_cell=2) calc = Vasp('molecules/O2-sp-triplet-350') atoms = calc.get_atoms() e_O2 = atoms.get_potential_energy() Hads_fcc = e_slab_o_fcc - e_slab - 0.5 * e_O2 Hads_hcp = e_slab_o_hcp - e_slab - 0.5 * e_O2 Hads_bridge = e_slab_o_bridge - e_slab - 0.5 * e_O2 print 'Hads (fcc) = {0} eV/O'.format(Hads_fcc) print 'Hads (hcp) = {0} eV/O'.format(Hads_hcp) print 'Hads (bridge) = {0} eV/O'.format(Hads_bridge) You can see the hcp site is not as energetically favorable as the fcc site. 
Interestingly, the bridge site seems to be as favorable as the fcc site. This is not correct, and to see why, we have to look at the final geometries of each calculation. First the fcc (Figure fig:fcc) and hcp (Figure fig:hcp) sites, which look like we expect. Figure 81: Final geometry of the fcc site. \label{fig:fcc} Figure 82: Final geometry of the hcp site. \label{fig:hcp} The bridge site (Figure fig:bridge), however, is clearly not at a bridge site! Figure 83: Final geometry of the bridge site. You can see that the oxygen atom ended up in the fcc site. \label{fig:bridge} Let us see what the original geometry and final geometry for the bridge site were. The POSCAR contains the initial geometry (as long as you haven't copied CONTCAR to POSCAR), and the CONTCAR contains the final geometry. from ase.io import read, write atoms = read('surfaces/Pt-slab-O-bridge/POSCAR') write('images/Pt-o-brige-ori.png', atoms, show_unit_cell=2) atoms = read('surfaces/Pt-slab-O-bridge/CONTCAR') write('images/Pt-o-brige-final.png', atoms, show_unit_cell=2) Figure 84: Initial geometry of the bridge site. It is definitely on the bridge. Figure 85: Final geometry of the bridge site. It has fallen into the fcc site. You can see the problem. We should not call the adsorption energy from this calculation a bridge site adsorption energy because the O atom is actually in an fcc site! This kind of result can happen with relaxation, and you should always check that the result you get makes sense. Next, we consider how to get a bridge site adsorption energy by using constraints. Some final notes: - We did not let the slabs relax in these examples, and allowing them to relax is likely to have a big effect on the adsorption energies. You have to decide how many layers to relax, and check for convergence with respect to the number of layers. - The slabs were pretty thin. It is typical these days to see slabs that are 4-5 or more layers thick. 
- We did not consider how well converged the calculations were with respect to $k$-points or ENCUT. - We did not consider the effect of the error in O\(_2\) dissociation energy on the adsorption energies. - We did not consider coverage effects (see Coverage dependence). 5.8.1.3 Adsorption on bridge site with constraints To prevent the oxygen atom from sliding down into the fcc site, we have to constrain it so that it only moves in the $z$-direction. This is an artificial constraint; the bridge site is only metastable. But there are lots of reasons you might want to do this anyway. One is the bridge site is a transition state for diffusion between the fcc and hcp sites. Another is to understand the role of coordination in the adsorption energies. We use a ase.constraints.FixScaled constraint in ase to constrain the O atom so it can only move in the $z$-direction (actually so it can only move in the direction of the third unit cell vector, which only has a $z$-component). from vasp import Vasp from ase.lattice.surface import fcc111, add_adsorbate from ase.constraints import FixAtoms, FixScaled from ase.io import write atoms = fcc111('Pt', size=(2, 2, 3), vacuum=10.0) # note this function only works when atoms are created by the surface module. add_adsorbate(atoms, 'O', height=1.2, position='bridge') constraint1 = FixAtoms(mask=[atom.symbol != 'O' for atom in atoms]) # fix in xy-direction, free in z. 
actually, freeze movement in surface # unit cell, and free along 3rd lattice vector constraint2 = FixScaled(atoms.get_cell(), 12, [True, True, False]) atoms.set_constraint([constraint1, constraint2]) write('images/Pt-O-bridge-constrained-initial.png', atoms, show_unit_cell=2) print 'Initial O position: {0}'.format(atoms.positions[-1]) calc = Vasp('surfaces/Pt-slab-O-bridge-xy-constrained', xc='PBE', kpts=[4, 4, 1], encut=350, ibrion=2, nsw=25, atoms=atoms) e_bridge = atoms.get_potential_energy() write('images/Pt-O-bridge-constrained-final.png', atoms, show_unit_cell=2) print 'Final O position : {0}'.format(atoms.positions[-1]) # now compute Hads calc = Vasp('surfaces/Pt-slab') atoms = calc.get_atoms() e_slab = atoms.get_potential_energy() calc = Vasp('molecules/O2-sp-triplet-350') atoms = calc.get_atoms() e_O2 = atoms.get_potential_energy() calc.stop_if(None in [e_bridge, e_slab, e_O2]) Hads_bridge = e_bridge - e_slab - 0.5*e_O2 print 'Hads (bridge) = {0:1.3f} eV/O'.format(Hads_bridge) Initial O position: [ 1.38592929 0. 15.72642611] Final O position : [ 1.38592929 0. 15.9685262 ] Hads (bridge) = -0.512 eV/O You can see that only the \(z\)-position of the O atom changed. Also, the adsorption energy of O on the bridge site is much less favorable than on the fcc or hcp sites. Figure 86: Initial state of the O atom on the bridge site. Figure 87: Final state of the constrained O atom, still on the bridge site. 5.8.2 Coverage dependence The adsorbates on the surface can interact with each other which results in coverage dependent adsorption energies PhysRevB.82.045414. Coverage dependence is not difficult to model; we simply compute adsorption energies in different size unit cells, and/or with different adsorbate configurations. Here we consider dissociative oxygen adsorption at 1ML on Pt(111) in an fcc site, which is one oxygen atom in a \(1 \times 1\) unit cell. 
For additional reading, see these references from our work: - Correlations of coverage dependence of oxygen adsorption on different metals miller:104709,Miller2009794 - Coverage effects of atomic adsorbates on Pd(111) PhysRevB.79.205412 - Simple model for estimating coverage dependence PhysRevB.82.045414 - Coverage effects on alloys PhysRevB.77.075437 5.8.2.1 clean slab calculation from vasp import Vasp from ase.io import write from ase.lattice.surface import fcc111 from ase.constraints import FixAtoms atoms = fcc111('Pt', size=(1, 1, 3), vacuum=10.0) constraint = FixAtoms(mask=[True for atom in atoms]) atoms.set_constraint(constraint) write('images/Pt-fcc-1ML.png', atoms, show_unit_cell=2) print(Vasp('surfaces/Pt-slab-1x1', xc='PBE', kpts=[8, 8, 1], encut=350, atoms=atoms).potential_energy) -17.05903301 Figure 88: 1 × 1 unit cell. 5.8.2.2 fcc site at 1 ML coverage from vasp import Vasp from ase.lattice.surface import fcc111, add_adsorbate from ase.constraints import FixAtoms from ase.io import write atoms = fcc111('Pt', size=(1, 1, 3), vacuum=10.0) # note this function only works when atoms are created by the surface module. add_adsorbate(atoms, 'O', height=1.2, position='fcc') constraint = FixAtoms(mask=[atom.symbol != 'O' for atom in atoms]) atoms.set_constraint(constraint) write('images/Pt-o-fcc-1ML.png', atoms, show_unit_cell=2) print(Vasp('surfaces/Pt-slab-1x1-O-fcc', xc='PBE', kpts=[8, 8, 1], encut=350, ibrion=2, nsw=25, atoms=atoms).potential_energy) -22.13585728 Figure 89: 1 ML oxygen in the fcc site. 5.8.2.3 Adsorption energy at 1ML from vasp import Vasp e_slab_o = Vasp('surfaces/Pt-slab-1x1-O-fcc').potential_energy # clean slab e_slab = Vasp('surfaces/Pt-slab-1x1').potential_energy e_O2 = Vasp('molecules/O2-sp-triplet-350').potential_energy hads = e_slab_o - e_slab - 0.5 * e_O2 print 'Hads (1ML) = {0:1.3f} eV'.format(hads) Hads (1ML) = -0.099 eV The adsorption energy is much less favorable at 1ML coverage than at 0.25 ML coverage! 
We will return to what this means in Atomistic thermodynamics effect on adsorption. 5.8.3 Effect of adsorption on the surface energy There is a small point to make here about what adsorption does to surface energies. Let us define a general surface formation energy scheme like this: Figure 90: Schematic of forming a surface with adsorbates. First we form two clean surfaces by cleaving the bulk, then allow adsorption to occur on the surfaces. Let us presume the surfaces are symmetric, and that each surface contributes half of the energy change. The overall change in energy is: \(\Delta E = E_{slab,ads} - E_{ads} - E_{bulk}\) where the energies are appropriately normalized for the stoichiometry. Let us rearrange the terms, and add and subtract a constant term \(E_{slab}\). \(\Delta E = E_{slab,ads} - E_{slab} - E_{ads} - E_{bulk} + E_{slab}\) We defined \(\gamma_{clean} = \frac{1}{2A}(E_{slab} - E_{bulk})\), and we defined \(H_{ads} = E_{slab,ads} - E_{slab} - E_{ads}\) for adsorption on a single side of a slab. In this case, there are adsorbates on both sides of the slab, so \(E_{slab,ads} - E_{slab} - E_{ads} = 2 \Delta H_{ads}\). If we normalize by \(2A\), the area for both sides of the slab, we get \(\frac{\Delta E}{2A} = \gamma = \gamma_{clean} + \frac{H_{ads}}{A}\) You can see here that the adsorption energy serves to stabilize, or reduce, the surface energy, provided that the adsorption energy is negative. Some final notes about the equations above: - We were not careful about stoichiometry. As written, it is assumed there are the same number of atoms (not including the adsorbates) in the slabs and bulk, and the same number of adsorbate atoms in the slab and \(E_{ads}\). Appropriate normalization factors must be included if that is not true. - It is not necessary to perform a symmetric slab calculation to determine the effect of adsorption on the surface energy! You can examine \(\gamma - \gamma_{clean}\) with knowledge of only the adsorption energies! 
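The final relation, \(\gamma = \gamma_{clean} + H_{ads}/A\), is just arithmetic, and a small sketch makes the sign convention explicit. The numbers below are hypothetical, chosen only to illustrate the stabilization.

```python
def gamma_with_adsorbates(gamma_clean, h_ads, area):
    """gamma = gamma_clean + H_ads / A.

    gamma_clean : clean surface energy (eV / Angstrom^2)
    h_ads       : adsorption energy per surface unit cell (eV);
                  negative for favorable adsorption
    area        : surface unit cell area (Angstrom^2)
    """
    return gamma_clean + h_ads / area

# hypothetical values: favorable adsorption (negative H_ads)
# lowers the surface energy from 0.10 to 0.10 - 1.0/8.0 eV/Angstrom^2
print(gamma_with_adsorbates(gamma_clean=0.10, h_ads=-1.0, area=8.0))
```

Note that a sufficiently favorable adsorption energy can even make the effective surface energy negative, which is one driver of adsorbate-induced changes in particle shape.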
5.9 Adsorbate vibrations Adsorbates also have vibrational modes. Unlike a free molecule, the translational and rotational modes of an adsorbate may actually have real frequencies. Sometimes they are called frustrated translations or rotations. For metal surfaces with adsorbates, it is common to only compute vibrational modes of the adsorbate on a frozen metal slab. The rationale is that the metal atoms are so much heavier than the adsorbate that there will be little coupling between the surface and adsorbates. You can limit the number of modes calculated with constraints (ase.constraints.FixAtoms or ase.constraints.FixScaled) if you use IBRION=5. The other IBRION settings (6, 7, 8) do not respect the selective dynamics constraints. Below we consider the vibrational modes of an oxygen atom in an fcc site on Pt(111). from vasp import Vasp calc = Vasp('surfaces/Pt-slab-O-fcc') calc.clone('surfaces/Pt-slab-O-fcc-vib') calc.set(ibrion=5, # finite differences with selective dynamics nfree=2, # central differences (default) potim=0.015, # default as well ediff=1e-8, nsw=1) atoms = calc.get_atoms() f, v = calc.get_vibrational_modes(0) print 'Elapsed time = {0} seconds'.format(calc.get_elapsed_time()) allfreq = calc.get_vibrational_modes()[0] print 'All energies = ', allfreq There are three modes for the free oxygen atom. One of them is a mode normal to the surface (the one with the highest frequency). The other two are called frustrated translations. Note that we did not include the surface Pt atoms in the calculation, and this will have an effect on the result because the O atom could be coupled to the surface modes. It is typical to neglect this coupling because of the large difference in mass between O and Pt. Next we look at the difference in results when we calculate all the modes. 
from vasp import Vasp calc = Vasp('surfaces/Pt-slab-O-fcc') calc.clone('Pt-slab-O-fcc-vib-ibrion=6') calc.set(ibrion=6, # finite differences with symmetry nfree=2, # central differences (default) potim=0.015, # default as well ediff=1e-8, nsw=1) calc.update() print 'Elapsed time = {0} seconds'.format(calc.get_elapsed_time()) f, m = calc.get_vibrational_modes(0) allfreq = calc.get_vibrational_modes()[0] from ase.units import meV c = 3e10 # cm/s h = 4.135667516e-15 # eV*s print 'For mode 0:' print 'vibrational energy = {0} eV'.format(f) print 'vibrational energy = {0} meV'.format(f / meV) print 'vibrational freq = {0} 1/s'.format(f / h) print 'vibrational freq = {0} cm^{{-1}}'.format(f / (h * c)) print print 'All energies = ', allfreq Elapsed time = 77121.015 seconds For mode 0: vibrational energy = 0.063537929 eV vibrational energy = 63.537929 meV vibrational freq = 1.53634035507e+13 1/s vibrational freq = 512.113451691 cm^{-1} All energies = [0.06353792899999999, 0.045628623, 0.045628623, 0.023701702, 0.023701702, 0.023223747, 0.022978233, 0.022978233, 0.022190167, 0.021807461, 0.02040119, 0.02040119, 0.019677135000000002, 0.015452848, 0.015302098000000002, 0.015302098000000002, 0.0148412, 0.0148412, 0.014071851000000002, 0.012602063, 0.012602063, 0.012409611999999999, 0.012300973000000002, 0.011735683, 0.011735683, 0.011714521, 0.011482183, 0.011482183, 0.010824891, 0.010414177, 0.010414177, 0.009799697, 0.00932905, 0.00932905, 0.003859079, 0.003859079, (2.9894000000000002e-05+0j), (2.9894000000000002e-05+0j), (0.00012182999999999999+0j)] Note that now there are 39 modes, which is 3*N where N=13 atoms in the unit cell. Many of the modes are low in frequency, which correspond to slab modes that are essentially phonons. The O frequencies are not that different from the previous calculation (497 vs 512 cm\(^{-1}\)). This is why it is common to keep the slab atoms frozen. Calculating these results took 39*2 finite differences. 
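One common use of a mode list like the one above is the harmonic zero-point energy, ZPE = Σ ½hν_i. Here is a small sketch, assuming imaginary modes (returned as complex numbers by the parser) should be excluded; the three energies fed in below are the highest (O-dominated) modes from the list above.

```python
def zero_point_energy(mode_energies):
    """Harmonic zero-point energy, sum of h*nu_i / 2, for mode
    energies given in eV. Complex entries (imaginary modes) are
    skipped, since they do not contribute to the ZPE of a stable
    structure."""
    zpe = 0.0
    for e in mode_energies:
        if isinstance(e, complex):  # imaginary mode
            continue
        zpe += 0.5 * e
    return zpe

# the three highest modes from the full IBRION=6 calculation above
print(zero_point_energy([0.063537929, 0.045628623, 0.045628623]))
```

For a full free energy, the low-frequency slab modes would also contribute, which is another reason frozen-slab calculations with constraints are convenient.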
It took about a day to get these results on a single CPU. It pays to use constraints to minimize the number of these calculations. 5.9.1 Vibrations of the bridge site Here we consider the vibrations of an O atom in a bridge site, which we saw earlier is a metastable saddle point. from vasp import Vasp from ase.constraints import FixAtoms # clone calculation so we do not overwrite previous results calc = Vasp('surfaces/Pt-slab-O-bridge-xy-constrained') calc.clone('surfaces/Pt-slab-O-bridge-vib') calc.set(ibrion=5, # finite differences with selective dynamics nfree=2, # central differences (default) potim=0.015, # default as well ediff=1e-8, nsw=1) atoms = calc.get_atoms() del atoms.constraints constraint = FixAtoms(mask=[atom.symbol != 'O' for atom in atoms]) atoms.set_constraint([constraint]) f, v = calc.get_vibrational_modes(2) print calc.get_vibrational_modes()[0] from ase.units import meV c = 3e10 # cm/s h = 4.135667516e-15 # eV*s print 'vibrational energy = {0} eV'.format(f) print 'vibrational energy = {0} meV'.format(f / meV) print 'vibrational freq = {0} 1/s'.format(f / h) print 'vibrational freq = {0} cm^(-1)'.format(f / (h * c)) [0.06691932, 0.047345270999999994, (0.020649715000000003+0j)] vibrational energy = (0.020649715+0j) eV vibrational energy = (20.649715+0j) meV vibrational freq = (4.99307909065e+12+0j) 1/s vibrational freq = (166.435969688+0j) cm^(-1) Note that we have one imaginary mode. This corresponds to the motion of the O atom falling into one of the neighboring 3-fold sites. It also indicates this position is not a stable minimum, but rather a saddle point. This position is a transition state for hopping between the fcc and hcp sites. 5.10 Surface diffusion barrier See this review ANIE.ANIE200602223 of diffusion on transition metal surfaces. 5.10.1 Standard nudged elastic band method Here we illustrate a standard NEB method. You need an initial and final state to start with. We will use the results from previous calculations of oxygen atoms in an fcc and hcp site. Then we will construct a band of images connecting these two sites. Finally, we let VASP optimize the band and analyze the results to get the barrier.
from vasp import Vasp from ase.neb import NEB import matplotlib.pyplot as plt calc = Vasp('surfaces/Pt-slab-O-fcc') initial_atoms = calc.get_atoms() final_atoms = Vasp('surfaces/Pt-slab-O-hcp').get_atoms() # here is our estimated transition state. we use vector geometry to # define the bridge position, and add 1.451 Ang to z based on our # previous bridge calculation. The bridge position is half way between # atoms 9 and 10. ts = initial_atoms.copy() ts.positions[-1] = 0.5 * (ts.positions[9] + ts.positions[10]) + [0, 0, 1.451] # construct the band images = [initial_atoms] images += [initial_atoms.copy()] images += [ts.copy()] # this is the TS neb = NEB(images) # Interpolate linearly the positions of these images: neb.interpolate() # now add the second half images2 = [ts.copy()] images2 += [ts.copy()] images2 += [final_atoms] neb2 = NEB(images2) neb2.interpolate() # collect final band. Note we do not repeat the TS in the second half final_images = images + images2[1:] calc = Vasp('surfaces/Pt-O-fcc-hcp-neb', ibrion=1, nsw=90, spring=-5, atoms=final_images) images, energies = calc.get_neb() p = calc.plot_neb(show=False) plt.savefig('images/pt-o-fcc-hcp-neb.png') Optimization terminated successfully. Current function value: -26.953429 Iterations: 12 Function evaluations: 24 Figure 91: Energy pathway for O diffusion from an fcc to hcp site with a spline fit to determine the barrier. We should compare this barrier to what we could estimate from the simple adsorption energies in the fcc and bridge sites. The adsorption energy in the fcc site was -1.04 eV, and in the bridge site was -0.49 eV. The difference between these two is 0.55 eV, which is very close to the calculated barrier from the NEB calculation. In cases where you can determine what the transition state is, e.g. by symmetry, or other means, it is much faster to directly compute the energy of the initial and transition states for barrier determinations. This is not usually possible though. 
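The neb.interpolate() calls above just place the intermediate images on a straight line between the two endpoint geometries. Here is a minimal sketch of that idea (our own function; the real ase implementation additionally handles constraints and periodic boundary conditions):

```python
def linear_interpolate(initial, final, nimages):
    """Linearly interpolate atomic positions between two endpoint images.

    initial, final: lists of [x, y, z] positions with the same atom ordering.
    Returns nimages lists of positions, endpoints included.
    """
    band = []
    for i in range(nimages):
        t = i / (nimages - 1.0)  # fraction of the way along the path
        image = [[a + t * (b - a) for a, b in zip(p0, p1)]
                 for p0, p1 in zip(initial, final)]
        band.append(image)
    return band

# two atoms moving from the origin to (1, 1, 1):
band = linear_interpolate([[0, 0, 0], [0, 0, 0]],
                          [[1, 1, 1], [1, 1, 1]], 5)
print(band[2])  # the middle image is halfway along the path
```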
5.10.2 Climbing image NEB One issue with the standard NEB method is there is no image that is exactly at the transition state. That means there is some uncertainty about the true energy of the transition state, and there is no way to verify the transition state by vibrational analysis. The climbing image NEB method henkelman:9901 solves that problem by making one image climb to the top. You set lclimb=True in Vasp to turn on the climbing image method. Here we use the previous calculation as a starting point and turn on the climbing image method. # perform a climbing image NEB calculation from vasp import Vasp calc = Vasp('surfaces/Pt-O-fcc-hcp-neb') calc.clone('surfaces/Pt-O-fcc-hcp-cineb') calc.set(ichain=0, lclimb=True) images, energies = calc.get_neb() calc.plot_neb(show=False) import matplotlib.pyplot as plt plt.savefig('images/pt-o-cineb.png') plt.show() Figure 92: Climbing image NEB 5.10.3 Using vibrations to confirm a transition state A transition state should have exactly one imaginary degree of freedom which corresponds to the mode that takes reactants to products. See Vibrations of the bridge site for an example. 6 Atomistic thermodynamics Let us consider how much the Gibbs free energy of an O2 molecule changes as a function of temperature, at 1 atm. We use the Shomate polynomials to approximate the temperature-dependent entropy and enthalpy, and use the parameters from the NIST Webbook for O2. from ase.units import * K = 1.0 print J, mol, K print 0.100 * kJ / mol / K print 1 * eV / (kJ / mol) 6.24150912588e+18 6.022140857e+23 1.0 0.00103642695747 96.4853328825 import numpy as np import matplotlib.pyplot as plt from ase.units import * K = 1. # Kelvin not defined in ase.units!
# Shomate parameters (enthalpy(T) and entropy(T) are Shomate fits built from the NIST parameters, omitted here) T = np.linspace(10, 700) G = enthalpy(T) - T * entropy(T) plt.plot(T, G) plt.xlabel('Temperature (K)') plt.ylabel(r'$\Delta G^\circ$ (eV)') plt.savefig('images/O2-mu.png') Figure 93: Effect of temperature on the Gibbs free energy of an O_2 molecule at standard state (1 atm). This is clearly a big effect! Between 500-600K, the energy has dropped by nearly 1 eV. Pressure also affects the free energy. In the ideal gas limit, the pressure changes the free energy by \(k T \ln P/P_0\) where \(P_0\) is the standard state pressure (1 atm or 1 bar depending on the convention chosen). Let us see how this affects the free energy at different temperatures. import matplotlib.pyplot as plt import numpy as np from ase.units import * atm = 101325 * Pascal #atm is not defined in units K = 1 # Kelvin # examine range over 10^-10 to 10^10 atm P = np.logspace(-10, 10) * atm plt.semilogx(P / atm, kB * (300 * K) * np.log(P / (1 * atm)), label='300K') plt.semilogx(P / atm, kB * (600 * K) * np.log(P / (1 * atm)), label='600K') plt.xlabel('Pressure (atm)') plt.ylabel(r'$\Delta G$ (eV)') plt.legend(loc='best') plt.savefig('images/O2-g-p.png') Figure 94: Effects of pressure on the ideal gas Gibbs free energy of O\(_2\). Similarly, you can see that simply changing the pressure has a large effect on the Gibbs free energy of an ideal gas through the term: \(kT\ln(P/P_0)\), and that this effect is also temperature dependent. This leads us to the final formula we will use for the chemical potential of oxygen: \(\mu_{O_2} = E_{O_2}^{DFT} + E_{O_2}^{ZPE} + \Delta \mu (T) + kT \ln(P/P_0)\) We can use \(\mu_{O_2}\) in place of \(E_{O_2}\) everywhere to include the effects of pressure and temperature on the gas phase energy. If T=0K, and P=1 bar, we are at standard state, and this equation reduces to the DFT energy (+ the ZPE). 6.1 Bulk phase stability of oxides We will consider the effects of oxygen pressure and temperature on the formation energy of Ag2O and Cu2O.
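Before doing so, note that the magnitude of the \(kT \ln(P/P_0)\) term is easy to check by hand. A sketch using Boltzmann's constant in eV/K (the same value ase.units.kB provides):

```python
import math

KB = 8.6173303e-5  # Boltzmann constant in eV/K, as in ase.units.kB

def pressure_correction(T, p_over_p0):
    """kT ln(P/P0) in eV: the ideal-gas pressure contribution to the free energy."""
    return KB * T * math.log(p_over_p0)

# Dropping the O2 pressure to 1e-10 atm at 300 K lowers the free energy
# by about 0.6 eV, consistent with the 300 K curve in the figure above:
print(pressure_correction(300.0, 1e-10))
```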
For now, we neglect the effect of pressure and temperature on the solid phases. Neglecting pressure is pretty reasonable, as the solids are not that compressible, and we do not expect the energy to change for small pressures. For neglecting the temperature, we assume that the temperature dependence of the oxide is similar to the temperature dependence of the metal, and that these dependencies practically cancel each other in the calculations. That is an assumption, and it may not be correct. \(2Cu + 1/2 O_2 \rightarrow Cu_2O\) In atomistic thermodynamics, we define the free energy of formation as: \(G_f = G_{Cu_2O} -2G_{Cu} - 0.5 G_{O_2}\) We will at this point assume that the solids are incompressible so that \(p\Delta V \approx 0\), and that \(S_{Cu_2O} -2S_{Cu} \approx 0\), which leads to \(G_{Cu_2O} -2G_{Cu} \approx E_{Cu_2O} -2E_{Cu}\), which we directly compute from DFT. We express \(G_{O_2} = \mu_{O_2} = E_{O_2}^{DFT} + E_{O_2}^{ZPE} + \Delta \mu (T) + kT \ln(P/P_0)\). In this example we neglect the zero-point energy of the oxygen molecule, and finally arrive at: \(G_f \approx E_{Cu_2O} -2E_{Cu} - 0.5 (E_{O_2}^{DFT} + \Delta \mu (T) + kT \ln(P/P_0))\) Which, after grouping terms is: \(G_f \approx E_{Cu_2O} -2E_{Cu} - 0.5 (E_{O_2}^{DFT}) - 0.5*\Delta \mu_{O_2}(P,T)\) with \(\Delta \mu_{O_2}(P,T) = \Delta \mu (T) + kT \ln(P/P_0)\). We get \(\Delta \mu (T)\) from the Janaf Tables, or the NIST Webbook. - we are explicitly neglecting all entropies of the solid: configurational, vibrational and electronic - we also neglect enthalpic contributions from temperature dependent electronic and vibrational states You will recognize in this equation the standard formation energy we calculated in Metal oxide oxidation energies plus a correction for the non standard state pressure and temperature (\(\Delta \mu_{O_2}(P,T) = 0\) at standard state). \(G_f \approx H_f - 0.5*\Delta \mu_{O_2}(P, T)\) The formation energy of Cu2O is -1.9521 eV/formula unit. 
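Once \(H_f\) and \(\Delta \mu_{O_2}(P,T)\) are known, the grouped expression above is simple arithmetic. A sketch (the helper name is ours):

```python
def formation_free_energy(hf, dmu_o2):
    """G_f = H_f - 0.5 * dmu_O2 for an oxide with one O per formula unit (e.g. Cu2O)."""
    return hf - 0.5 * dmu_o2

hf_cu2o = -1.95  # eV/formula unit, rounded from the DFT value above

# At standard state dmu = 0 and we recover the formation energy:
print(formation_free_energy(hf_cu2o, 0.0))          # -1.95
# The oxide becomes unstable when G_f crosses zero, i.e. at dmu = 2*H_f:
print(formation_free_energy(hf_cu2o, 2 * hf_cu2o))  # 0.0
```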
The formation energy for Ag2O is -0.99 eV/formula unit. Let us consider the temperature at which the oxides decompose at a fixed oxygen pressure of 1\(\times 10^{-10}\) atm. We need to find the temperature where: \(H_f = 0.5*\Delta \mu_{O_2}(P, T)\) which will make the formation energy be 0. import numpy as np import matplotlib.pyplot as plt from scipy.optimize import fsolve from ase.units import * K = 1. # Kelvin atm = 101325 * Pascal # enthalpy(T) and entropy(T) are the Shomate fits from the NIST parameters used above def DeltaMu(T, P): ''' returns delta chemical potential of oxygen at T and P T in K P in atm ''' return enthalpy(T) - T * entropy(T) + kB * T * np.log(P / atm) P = 1e-10*atm def func(T): 'Cu2O' return -1.95 - 0.5*DeltaMu(T, P) print 'Cu2O decomposition temperature is {0:1.0f} K'.format(fsolve(func, 900)[0]) def func(T): 'Ag2O' return -0.99 - 0.5 * DeltaMu(T, P) print 'Ag2O decomposition temperature is {0:1.0f} K'.format(fsolve(func, 470)[0]) T = np.linspace(100, 1000) # Here we plot delta mu as a function of temperature at different pressures # you have to use \\times to escape the first \ in pyplot plt.plot(T, DeltaMu(T, 1e10*atm), label=r'1$\times 10^{10}$ atm') plt.plot(T, DeltaMu(T, 1e5*atm), label=r'1$\times 10^5$ atm') plt.plot(T, DeltaMu(T, 1*atm), label='1 atm') plt.plot(T, DeltaMu(T, 1e-5*atm), label=r'1$\times 10^{-5}$ atm') plt.plot(T, DeltaMu(T, 1e-10*atm), label=r'1$\times 10^{-10}$ atm') plt.xlabel('Temperature (K)') plt.ylabel(r'$\Delta \mu_{O_2}(T,p)$ (eV)') plt.legend(loc='best') plt.savefig('images/O2-mu-diff-p.png') Cu2O decomposition temperature is 917 K Ag2O decomposition temperature is 478 K Figure 95: Δ \(\mu_{O_2}\)(T,p) at different pressures and temperatures.
\label{fig:mu-o2} Now, let us make a phase diagram that shows the boundary between silver oxide and silver metal in P and T space. import numpy as np import matplotlib.pyplot as plt from scipy.optimize import fsolve from ase.units import * K = 1. # Kelvin atm = 101325 * Pascal def DeltaMu(T, P): ''' T in K P in atm ''' return enthalpy(T) - T * entropy(T) + kB * T * np.log(P / atm) P = np.logspace(-11, 1, 10) * atm T = [] for p in P: def func(T): return -0.99 - 0.5 * DeltaMu(T, p) T.append(fsolve(func, 450)[0]) plt.semilogy(T, P / atm) plt.xlabel('Temperature (K)') plt.ylabel('Pressure (atm)') plt.text(800, 1e-7, 'Ag') plt.text(600, 1e-3, 'Ag$_2$O') plt.savefig('images/Ag2O-decomposition.png') Figure 96: Temperature-dependent decomposition pressure for Ag2O. This shows that at high temperature and low pO2 metallic silver is stable, but if the pO2 gets high enough, the oxide becomes thermodynamically favorable. Here is another way to look at it. import numpy as np import matplotlib.pyplot as plt from ase.units import * K = 1. # not defined in ase.units! atm = 101325*Pascal Hf = -0.99 P = 1 * atm Dmu = np.linspace(-4, 0) Hf = -0.99 - 0.5*Dmu plt.plot(Dmu, Hf, label='Ag$_2$O') plt.plot(Dmu, np.zeros(Hf.shape), label='Ag') plt.xlabel(r'$\Delta \mu_{O_2}$ (eV)') plt.ylabel('$H_f$ (eV)') plt.savefig('images/atomistic-thermo-hf-mu.png') Figure 97: Dependence of the formation energy on the oxygen chemical potential. This graph shows graphically the \(\Delta \mu_{O_2}\) required to make the metal more stable than the oxide. Anything less than about -2 eV will have the metal more stable. That can be achieved by any one of the following combinations (graphically estimated from Figure fig:mu-o2): About 500K at 1\(\times 10^{-10}\) atm, 600K at 1\(\times 10^{-5}\) atm, 900K at 1 atm, etc… 6.2 Effect on adsorption We now consider the question: Given a pressure and temperature, what coverage would you expect on a surface? We saw earlier that adsorption energies depend on the site and coverage. We also know the coverage depends on the pressure and temperature.
Above some temperature, desorption occurs, and below some pressure adsorption will not be favorable. We seek to develop a quantitative method to determine those conditions. We redefine the adsorption energy as: \(\Delta G_{ads} \approx E_{slab, ads} - E_{slab} - \mu_{ads}\) where again we neglect all contributions to the free energy of the slabs from vibrational energy and entropy, as well as configurational entropy if that is relevant. That leaves only the pressure and temperature dependence of the adsorbate, which we treat in the ideal gas limit. We expand \(\mu_{ads}\) as \(E_{ads}+\Delta \mu(T,p)\), and thus: \(\Delta G_{ads} \approx E_{slab, ads} - E_{slab} - E_{ads} -\Delta \mu(T,p)\) or \(\Delta G_{ads} \approx \Delta H_{ads} -\Delta \mu(T,p)\) where \(\Delta H_{ads}\) is the adsorption energy we defined earlier. Now we can examine the effect of \(\Delta \mu(T,p)\) on the adsorption energies. We will use the adsorption energies for the oxygen on Pt(111) system we computed earlier: import numpy as np import matplotlib.pyplot as plt fcc25 = -1.04 hcp25 = -0.60 bridge25 = -0.49 fcc1 = -0.10 Dmu = np.linspace(-4, 2) plt.plot(Dmu, np.zeros(Dmu.shape), label='Pt(111)') plt.plot(Dmu, 0.25 * (fcc25 - 0.5*Dmu), label='fcc - 0.25 ML') plt.plot(Dmu, 0.25 * (hcp25 - 0.5*Dmu), label='hcp - 0.25 ML') plt.plot(Dmu, 0.25 * (bridge25 - 0.5*Dmu), label='bridge - 0.25 ML') plt.plot(Dmu, 1.0 * (fcc1 - 0.5*Dmu), label='fcc - 1.0 ML') plt.xlabel(r'$\Delta \mu O_2$ (eV)') plt.ylabel(r'$\Delta G_{ads}$ (eV/O)') plt.legend(loc='best') plt.savefig('images/atomistic-thermo-adsorption.png') Figure 98: Effect of oxygen chemical potential on the adsorption energy. 6.3 Atomistic thermodynamics and multiple reactions In Inoglu2009188 we considered multiple reactions in an atomistic thermodynamic framework. Let us consider these three reactions of dissociative adsorption of hydrogen and hydrogen sulfide, and consider how to compute the reaction energy for the third reaction.
- \(H_2 + 2* \leftrightharpoons 2H*\) - \(H_2S + 2* \leftrightharpoons H* + SH*\) - \(SH* + * \leftrightharpoons S* + H*\) The reaction energy of interest is \(E_{rxn} = \mu_{S*} + \mu_{H*} - \mu_{SH*}\) The question is, what are these chemical potentials? We would like them in terms of pressures and temperature, preferably of molecules that can be approximated as ideal gases. By equilibrium arguments we can say that \(\mu_{H*} = \frac{1}{2} \mu_{H_2}\). It follows that at equilibrium: \(\mu_{H*} + \mu_{SH*} = \mu_{H_2S}\) and \(\mu_{H*} + \mu_{S*} = \mu_{SH*}\). From the first equation we have: \(\mu_{SH*} = \mu_{H_2S} - \frac{1}{2}\mu_{H_2}\) and from the second equation we have: \(\mu_{S*} = \mu_{SH*} - \mu_{H*} = \mu_{H_2S} - \mu_{H_2}\). Thus, the chemical potentials of all three of these adsorbed species depend on the chemical potentials of two gas-phase species. The chemical potentials of each of these gases can be defined as: \(\mu_{gas}(T,p) = E_{gas}(0K) + \Delta \mu + kT\ln\left (p/p^0\right )\), as we have defined before, so that only simple DFT calculations are needed to estimate them. 7 Advanced electronic structure methods 7.1 DFT+U It can be difficult to find the lowest energy solutions with DFT+U. Some strategies for improving this are discussed in PhysRevB.82.195128. 7.1.1 Metal oxide oxidation energies with DFT+U We will reconsider here the reaction (see Metal oxide oxidation energies) 2 Cu2O + O2 \(\rightleftharpoons\) 4 CuO. We need to compute the energy of each species, now with DFT+U. In PhysRevB.73.195107 they use a U parameter of 4 eV for Cu which gave the best agreement with the experimental value. We will also try that.
7.1.1.1 Cu2O calculation with U=4.0 from vasp import Vasp from ase import Atom, Atoms import logging calc = Vasp('bulk/Cu2O') calc.clone('bulk/Cu2O-U=4.0') # the DFT+U settings (U=4.0 eV on the Cu d states) are applied here; see the OUTCAR check below print calc.potential_energy -22.32504781 grep -A 3 "LDA+U is selected, type is set to LDAUTYPE" bulk/Cu2O-U=4.0/OUTCAR LDA+U is selected, type is set to LDAUTYPE = 2 angular momentum for each species LDAUL = 2 -1 U (eV) for each species LDAUU = 4.0 0.0 J (eV) for each species LDAUJ = 0.0 0.0 7.1.1.2 CuO calculation with U=4.0 from vasp import Vasp from ase import Atom, Atoms calc = Vasp('bulk/CuO') calc.clone('bulk/CuO-U=4.0') # DFT+U settings applied as above print calc.potential_energy -16.91708676 7.1.1.3 TODO Reaction energy calculation with DFT+U from vasp import Vasp calc = Vasp('bulk/Cu2O-U=4.0') atoms = calc.get_atoms() cu2o_energy = atoms.get_potential_energy() / (len(atoms) / 3) calc = Vasp('bulk/CuO-U=4.0') atoms = calc.get_atoms() cuo_energy = atoms.get_potential_energy() / (len(atoms) / 2) # make sure to use the same cutoff energy for the O2 molecule! calc = Vasp('molecules/O2-sp-triplet-400') o2_energy = calc.results['energy'] calc.stop_if(None in [cu2o_energy, cuo_energy, o2_energy]) # don't forget to normalize your total energy to a formula unit. Cu2O # has 3 atoms, so the number of formula units in an atoms is # len(atoms)/3. rxn_energy = 4.0 * cuo_energy - o2_energy - 2.0 * cu2o_energy print('Reaction energy = {0} eV'.format(rxn_energy)) print('Corrected energy = {0} eV'.format(rxn_energy - 1.36)) Reaction energy = 7.36775847 eV Corrected energy = 6.00775847 eV This is still not in quantitative agreement with the result in PhysRevB.73.195107, which at U=4 eV is about -3.14 eV (estimated from a graph). We have not applied the O\(_2\) correction here yet. In that paper, they apply a constant shift of -1.36 eV per O\(_2\). After we apply that correction, we agree within 0.12 eV, which is pretty good considering we have not checked for convergence. 7.1.1.4 How much does U affect the reaction energy? It is reasonable to consider how sensitive our results are to the U parameter.
We do that here. from vasp import Vasp for U in [2.0, 4.0, 6.0]: ## Cu2O ######################################## calc = Vasp('bulk/Cu2O') calc.clone('bulk/Cu2O-U={0}'.format(U)) # DFT+U settings applied here as before atoms1 = calc.get_atoms() cu2o_energy = atoms1.get_potential_energy() / (len(atoms1) / 3) ## CuO ######################################## calc = Vasp('bulk/CuO') calc.clone('bulk/CuO-U={0}'.format(U)) atoms2 = calc.get_atoms() cuo_energy = atoms2.get_potential_energy() / (len(atoms2) / 2) ## O2 ######################################## # make sure to use the same cutoff energy for the O2 molecule! calc = Vasp('molecules/O2-sp-triplet-400') atoms = calc.get_atoms() o2_energy = atoms.get_potential_energy() if not None in [cu2o_energy, cuo_energy, o2_energy]: rxn_energy = (4.0 * cuo_energy - o2_energy - 2.0 * cu2o_energy) print 'U = {0} reaction energy = {1}'.format(U, rxn_energy - 1.99) U = 2.0 reaction energy = 3.32752349 U = 4.0 reaction energy = 5.37775847 U = 6.0 reaction energy = 5.71849513 U = 2.0 reaction energy = -3.876906 U = 4.0 reaction energy = -3.653819 U = 6.0 reaction energy = -3.397605 In PhysRevB.73.195107, the difference in reaction energy from U=2 eV to U=4 eV was about 0.5 eV (estimated from graph). Here we see a range of 0.48 eV from U=2 eV to U=4 eV. Note that for U=0 eV, we had a corrected reaction energy of -3.96 eV. Overall, the effect of adding U decreases this reaction energy. This example highlights the challenge of using an approach like DFT+U. On one hand, U has a clear effect of changing the reaction energy. On the other hand, so does the correction factor for the O\(_2\) binding energy. In PhysRevB.73.195107 the authors tried to get the O\(_2\) binding energy correction from oxide calculations where U is not important, so that it is decoupled from the non-cancelling errors that U fixes. See PhysRevB.84.045115 for additional discussion of how to mix GGA and GGA+U results. In any case, you should be careful to use well converged results to avoid compensating for convergence errors with U.
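The per-formula-unit normalization used throughout these scripts is a common source of errors, so here it is isolated in a sketch. The energies below are made-up numbers for illustration, not DFT results:

```python
def energy_per_formula_unit(total_energy, natoms, atoms_per_fu):
    """Normalize a cell's total energy to one formula unit.

    Cu2O has 3 atoms per formula unit and CuO has 2, so a 6-atom Cu2O
    cell holds 2 formula units and an 8-atom CuO cell holds 4.
    """
    return total_energy / (natoms / float(atoms_per_fu))

# Illustrative energies only:
cu2o = energy_per_formula_unit(-22.3, 6, 3)  # eV per Cu2O formula unit
cuo = energy_per_formula_unit(-16.9, 8, 2)   # eV per CuO formula unit
o2 = -9.8                                    # eV per O2 molecule

# 2 Cu2O + O2 -> 4 CuO, with every term per formula unit:
rxn_energy = 4.0 * cuo - o2 - 2.0 * cu2o
print(rxn_energy)
```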
7.2 Hybrid functionals 7.2.1 FCC Ni DOS This example is adapted from HSE06 from vasp import Vasp from ase.lattice.cubic import FaceCenteredCubic from ase.dft import DOS atoms = FaceCenteredCubic(directions=[[0, 1, 1], [1, 0, 1], [1, 1, 0]], size=(1, 1, 1), symbol='Ni') atoms[0].magmom = 1 calc = Vasp('bulk/Ni-PBE', ismear=-5, kpts=[5, 5, 5], xc='PBE', ispin=2, lorbit=11, lwave=True, lcharg=True, # store for reuse atoms=atoms) e = atoms.get_potential_energy() print('PBE energy: ', e) calc.stop_if(e is None) dos = DOS(calc, width=0.2) e_pbe = dos.get_energies() d_pbe = dos.get_dos() calc.clone('bulk/Ni-PBE0') calc.set(xc='pbe0') atoms = calc.get_atoms() pbe0_e = atoms.get_potential_energy() if atoms.get_potential_energy() is not None: dos = DOS(calc, width=0.2) e_pbe0 = dos.get_energies() d_pbe0 = dos.get_dos() ## HSE06 calc = Vasp('bulk/Ni-PBE') calc.clone('bulk/Ni-HSE06') calc.set(xc='hse06') atoms = calc.get_atoms() hse06_e = atoms.get_potential_energy() if hse06_e is not None: dos = DOS(calc, width=0.2) e_hse06 = dos.get_energies() d_hse06 = dos.get_dos() calc.stop_if(None in [e, pbe0_e, hse06_e]) import pylab as plt plt.plot(e_pbe, d_pbe, label='PBE') plt.plot(e_pbe0, d_pbe0, label='PBE0') plt.plot(e_hse06, d_hse06, label='HSE06') plt.xlabel('energy [eV]') plt.ylabel('DOS') plt.legend() plt.savefig('images/ni-dos-pbe-pbe0-hse06.png') Figure 99: Comparison of DOS from GGA, and two hybrid GGAs (PBE0 and HSE06). 7.3 van der Waals forces Older versions (5.2.11+) implement DFT+D2 JCC-JCC20495 with the LVDW tag. The vdW-DF klimes-2011-van-waals is accessed with LUSE_VDW. See for notes on its usage. In Vasp 5.3+, the IVDW tag turns van der Waals calculations on. You should review the links below before using these. Van der Waals forces can play a considerable role in the binding of aromatic molecules to metal surfaces (ref). Here we consider the effects of these forces on the adsorption energy of benzene on an Au(111) surface. First, we consider the regular PBE functional.
7.3.1 PBE 7.3.1.1 gas-phase benzene from vasp import Vasp from ase.structure import molecule benzene = molecule('C6H6') benzene.center(vacuum=5) print(Vasp('molecules/benzene-pbe', xc='PBE', encut=350, kpts=[1, 1, 1], ibrion=1, nsw=100, atoms=benzene).potential_energy) -76.03718564 7.3.1.2 clean slab # the clean gold slab from vasp import Vasp from ase.lattice.surface import fcc111, add_adsorbate from ase.constraints import FixAtoms atoms = fcc111('Au', size=(3,3,3), vacuum=10) # now we constrain the slab c = FixAtoms(mask=[atom.symbol=='Au' for atom in atoms]) atoms.set_constraint(c) #from ase.visualize import view; view(atoms) print(Vasp('surfaces/Au-pbe', xc='PBE', encut=350, kpts=[4, 4, 1], ibrion=1, nsw=100, atoms=atoms).potential_energy) -81.22521492 7.3.1.3 benzene on Au(111) # Benzene on the slab from vasp import Vasp from ase.lattice.surface import fcc111, add_adsorbate from ase.structure import molecule from ase.constraints import FixAtoms atoms = fcc111('Au', size=(3,3,3), vacuum=10) benzene = molecule('C6H6') benzene.translate(-benzene.get_center_of_mass()) # I want the benzene centered on the position in the middle of atoms # 20, 22, 23 and 25 p = (atoms.positions[20] + atoms.positions[22] + atoms.positions[23] + atoms.positions[25])/4.0 + [0.0, 0.0, 3.05] benzene.translate(p) atoms += benzene # now we constrain the slab c = FixAtoms(mask=[atom.symbol=='Au' for atom in atoms]) atoms.set_constraint(c) #from ase.visualize import view; view(atoms) print(Vasp('surfaces/Au-benzene-pbe', xc='PBE', encut=350, kpts=[4, 4, 1], ibrion=1, nsw=100, atoms=atoms).potential_energy) /home-research/jkitchin/dft-book/surfaces/Au-benzene-pbe submitted: 1413525.gilgamesh.cheme.cmu.edu None resubmitted /home-research/jkitchin/dft-book/surfaces/Au-benzene-pbe submitted: 1399668.gilgamesh.cheme.cmu.edu None from vasp import Vasp e1, e2, e3 = [Vasp(wd).potential_energy for wd in ['surfaces/Au-benzene-pbe', 'surfaces/Au-pbe',
http://kitchingroup.cheme.cmu.edu/dft-book/dft.html
The cElementTree Module January 30, 2005 | Fredrik Lundh Download and install cElementTree is included with Python 2.5 and later, as xml.etree.cElementTree. For earlier versions, cElementTree 1.0.5 can be downloaded from the effbot.org downloads site. You also need a recent version of the standard ElementTree library. If you’re using Linux or BSD systems, check your favourite package repository for python-celementtree or py-celementtree packages. Note that some distributors have included cElementTree in their ElementTree package. Mac OS X users may want to check the Fink repository. To install binary distributions from effbot.org, download and run the installer, and follow the instructions on the screen. If the installer cannot find your Python interpreter, see this page. To install from sources, simply unpack the distribution archive, change to the distribution directory, and run the setup.py script as follows: $ python setup.py install See the README and CHANGES files for more on installation, licensing (BSD style), changes since the last version, etc. cElementTree is designed to work with Python 2.1 and newer. The iterparse mechanism is currently only supported for Python 2.2 and later. Earlier Python versions are not supported (let me know if you need support for 2.0 or 1.5.2). For best performance, use Python 2.4. Note: Mandriva Linux ships with broken Python configuration files, and cannot be used to build Python extensions that rely on distutils feature tests. For a workaround, see this thread. Usage The cElementTree module is designed to replace the ElementTree module from the standard elementtree package. In theory, you should be able to simply change: from elementtree import ElementTree to import cElementTree as ElementTree in existing code, and run your programs without any problems (note that cElementTree is a module, not a package). (Let me know if you find that something you rely on doesn’t work as expected.)
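A common way to write the replacement so that the fast C module is used when available, and the pure-Python version otherwise, is a guarded import. This sketch uses the stdlib locations from Python 2.5+ (with the standalone packages the module names are cElementTree and elementtree.ElementTree instead); xml.etree.cElementTree became an alias in Python 3.3 and was removed in 3.9, so on modern Pythons the fallback branch runs:

```python
try:
    from xml.etree import cElementTree as ElementTree  # C accelerator
except ImportError:
    from xml.etree import ElementTree  # pure-Python (accelerated anyway on 3.3+)

# either module exposes the same API:
root = ElementTree.fromstring("<doc><item>1</item></doc>")
print(root.find("item").text)  # 1
```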
cElementTree contains one new function, iterparse, which works like parse, but lets you track changes to the tree while it is being built. You can also modify and remove elements during the parse, as in this example, which processes “record” elements as they arrive, and then removes their contents from the tree. import cElementTree for event, elem in cElementTree.iterparse(file): if elem.tag == "record": ... process record element ... elem.clear() For more information about the ElementTree module, see Elements and Element Trees. For more information about the iterparse interface, see The ElementTree iterparse Function. Older versions only support a small number of standard encodings. For a workaround, see Using Non-Standard Encodings in cElementTree. Benchmarks Here are some benchmark figures, using a number of popular XML toolkits to parse a 3405k document-style XML file, from disk to memory: The figures may of course vary somewhat depending on Python version, compiler, and platform. The above was measured with Python 2.4, using prebuilt Windows installers (as published by the maintainers) for all C extensions. If you want further details about the tests, drop me a line. Several other toolkits were tested, but failed to parse the test file (which uses both non-ASCII characters and namespaces). Toolkits that parse namespaces but don’t handle them properly are included, though (see notes 2 and 5, below). For comparison, here are some benchmarks for event-based parsers (using the same file as above, and enough dummy handlers to be able to handle complete elements and their character data contents): Note 1) For these toolkits, the looping variant of my benchmark behaves very badly, resulting in unexpected memory growth and wildly varying parsing times (typically 150-300% of the values in the table). Strategic use of forced garbage collection (gc.collect()) will usually make things better. Be careful.
Note 2) Even with namespace handling enabled, PyRXPU returns namespace prefixes instead of namespace URI:s, which makes it pretty much useless for namespace-aware XML processing. I’ve included it anyway, since it’s often put forth as the fastest XML parser you can get for Python. Note 3) Tests on other platforms indicate that libxml2 is closer to cElementTree than this benchmark indicates. This is most likely a compiler-related issue (I’m using “official” Windows binaries for this benchmark, but so will most other users). Note 4) There are no Windows binaries for lxml.etree (dead link) yet, but it uses libxml2’s parser and object model, so the timings for this test should be very close to those for libxml2. Note 5) An undocumented function (config_nspace_sep) must be called to enable namespace parsing. With that in place, the library parses the file without problems, but the resulting data structure depends on the namespace prefixes used in the file, rather than the namespace URI:s (also see note 2).
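To make the iterparse pattern from the usage section concrete, here is a self-contained version that parses from an in-memory buffer and clears each record after handling it (with cElementTree the code is identical apart from the import):

```python
import io
from xml.etree import ElementTree

data = b"<log><record id='1'><x>a</x></record><record id='2'><x>b</x></record></log>"

seen = []
# iterparse yields "end" events by default, so each record is complete here
for event, elem in ElementTree.iterparse(io.BytesIO(data)):
    if elem.tag == "record":
        seen.append(elem.get("id"))
        elem.clear()  # drop the children so the in-memory tree does not grow

print(seen)  # ['1', '2']
```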
http://effbot.org/zone/celementtree.html
You can edit an XML schema using drag and drop operations or contextual menu actions. Drag and drop is the easiest way to move the existing components to other locations in an XML schema. For example, you can quickly insert an element reference in the diagram with a drag and drop from the Outline view to a compositor in the diagram. Also, the components order in an xs:sequence can be easily changed using drag and drop. If this property has not been set, you can easily set the attribute/element type by dragging over it a simple type or complex type from the diagram. If the type property for a simple type or complex type is not already set, you can set it by dragging over it a simple or complex type. You can edit some schema components directly in the diagram. For these components, you can edit the name and the additional properties presented in the diagram by double clicking the value you want to edit. If you want to edit the name of a selected component, you can also press (Enter). The list of properties which can be displayed for each component can be customized in the Preferences. When editing references, you can choose from a list of available components. Components from an imported schema for which the target namespace does not have an associated prefix is displayed in the list as componentName#targetNamespace. If the reference is from a target namespace which was not yet mapped, you are prompted to add prefix mappings for the inserted component namespace in the current edited schema. You can also change the compositor by double-clicking it and choose the compositor you want from the proposals list. There are some components that cannot be edited directly in the diagram: imports, includes, redefines. The editing action can be performed if you double-click or press (Enter) on an import/include/redefine component. An edit dialog is displayed, allowing you to customize the directives.
http://www.oxygenxml.com/doc/ug-developer/topics/xml-schema-diagram-editing-actions.html
By Gary Simon - May 31, 2018. The difference between Angular 2-6 isn't massive if we're talking about the core fundamentals, but AngularJS (1.0) certainly is! In this tutorial, I'm going to teach you by example, while discussing how things work and why they work. Our app will fetch data from a mock API service and display it in a beautiful UI. You're going to learn how to: Let's get started! Be sure to Subscribe to the Official Coursetro Youtube Channel for more videos. The quickest and easiest way of starting an Angular 6 app is through the Angular CLI (Command Line Interface). To install it, you're going to need either the yarn package manager or the node package manager. To check whether or not you have npm, in your console / command line, type: > npm -v If this goes unrecognized (instead of printing a version number), you need to install NodeJS. Once you install NodeJS, reload your console or command line and you will have access to NPM. Now, we can use NPM to install the Angular CLI. The Angular CLI has an official website located here. To install it: > npm install -g @angular/cli If you run ng -v after installing, it will provide you with the version number. Mine happens to be 6.0.7. Once the CLI is installed, we can now use it to start a brand new Angular 6 project: > ng new ng6-proj --style=scss --routing To check out all of the available commands and options, run ng at the command line. Once the CLI has generated the new project, you can hop into it: > cd ng6-proj If you use Visual Studio Code, you can type code . to launch the code editor. Then, to serve the project to the browser, you run: > ng serve -o The -o flag tells the CLI to launch your browser with the project. Now, you're able to watch your Angular 6 project as you code and it will automatically live reload. Awesome! This is a beginner's tutorial, so we're not going to do a deep dive into every file. All that's important for you to understand are the absolute basics.
When you view the folder and file structure of your Angular 6 app, it should look something similar to this:

> e2e
> node_modules
> src
  > app
  ...a bunch of files
...a bunch of files

You're going to spend most of your time working within the /src/app folder. This is where components and services are stored (we'll get to those shortly). In the /src/ folder itself, you will see an index.html (the app's entry point) and a styles.scss file, which is where any global CSS rulesets are placed. The /src/assets folder is where you place any assets such as graphics. Not present right now is a /dist/ folder, which is generated when you build the project, which we'll do later on.

Before we tackle components, it's worth looking at the /src/app/app.module.ts file. Oh, and by the way, what is that .ts extension? It stands for TypeScript, and Angular 6 uses TypeScript. In short, TypeScript provides strong type checking on JavaScript.

The app.module.ts file looks like this:

import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';

import { AppRoutingModule } from './app-routing.module';
import { AppComponent } from './app.component';

@NgModule({
  declarations: [
    AppComponent
  ],
  imports: [
    BrowserModule,
    AppRoutingModule
  ],
  providers: [],
  bootstrap: [AppComponent]
})
export class AppModule { }

Whenever you use the CLI to generate components and services, it will automatically update this file to import and add them to the @NgModule decorator. Components are added to the declarations array, and services are added as providers. You will also find yourself adding various imports to the imports array. For instance, when we want to add animations, we will add them here. If you're a little confused at this point, don't worry. Just understand that this is an important file that you will need to visit routinely.
The CLI will take care of things for the most part, especially when generating components, but when generating services and performing some other tasks, you will need to visit this file. You'll see as we proceed.

A component in Angular 6 provides you with the basic building blocks of your Angular app. When we used the Angular CLI to generate our project, it created a single component. When you use the CLI to generate components, it creates 4 files:

> src
  > app
    app.component.html
    app.component.scss (or .css)
    app.component.spec.ts
    app.component.ts

Open up the app.component.ts file:

import { Component } from '@angular/core';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.scss']
})
export class AppComponent {
  title = 'app';
}

At the top, we have our imports. You will import other components here, along with service files. We'll do that a little later.

The @Component decorator is an object with associated property / value pairs that defines important stuff associated with this component. For instance, you will see selector: 'app-root' - this provides this component with a unique identifier that is used in other areas of the app. You can also see templateUrl and styleUrls, which tell Angular where this component's HTML and CSS files are located. There are other properties that can be added here, such as animations, but we'll get to that later.

Finally, we have the logic section of the component file. This is where properties (we see that title was defined by the CLI), dependency injection and methods are defined. Now that you understand the very basics of a component, let's create a few of our own!
In the console, run:

> ng generate component sidebar

CREATE src/app/sidebar/sidebar.component.html (26 bytes)
CREATE src/app/sidebar/sidebar.component.spec.ts (635 bytes)
CREATE src/app/sidebar/sidebar.component.ts (274 bytes)
CREATE src/app/sidebar/sidebar.component.scss (0 bytes)
UPDATE src/app/app.module.ts (479 bytes)

Here, we've told the CLI to generate a component with the name of sidebar. It outputs the 4 files it created, along with the app module file it updated!

Let's generate a few more components. Run the following commands to generate 3 more components:

> ng g c posts
> ng g c users
> ng g c details

Now, you should have a total of 5 components, 4 of which we just created ourselves. Shortly, you'll see how all of these start to work with each other and in relation to the app.

Let's say for instance that we want our particular app to have a sidebar with some icons. This sidebar will always be present in the app. The sidebar component is something we already generated with the CLI.

Open the src/app/app.component.html file. You will see all of the boilerplate HTML the CLI generated, and consequently, what you see in the browser for the time being. Remove all of that and paste this (or better yet, type it!):

<div id="container">
  <app-sidebar></app-sidebar>
  <div id="content">
    <router-outlet></router-outlet>
  </div>
</div>

We've wrapped everything in a container id. Then, you will notice a custom HTML element called app-sidebar. What's that? Well, when the CLI generated the sidebar component, it made the component's selector value app-sidebar. Don't believe me? Check out /src/app/sidebar/sidebar.component.ts -- it's right there in the component decorator! That's how you embed a component inside of another component. Now, anything defined in that component's HTML will be displayed where <app-sidebar></app-sidebar> is defined.

Another very important element is the router-outlet.
This was added by the CLI when we added the --routing flag (it also generated a routing file in /src/app). This element defines where any components defined by their routes will be displayed.

Let's head over to the /src/app/sidebar/sidebar.component.html file to define the sidebar templating:

<nav>
  <ul>
    <li>
      <a routerLink="">
        <i class="material-icons">supervised_user_circle</i>
      </a>
    </li>
    <li>
      <a routerLink="posts">
        <i class="material-icons">message</i>
      </a>
    </li>
  </ul>
</nav>

You will notice routerLink= here. Instead of href, we use routerLink to direct the user to different routes. Right now, this will not work though; we'll get to that during the routing section. We're also going to use Material Icons, so we need to import that first.

Save this file and open up /src/index.html and add the following 2 lines between the <head> tags:

<link href="" rel="stylesheet">
<link href="" rel="stylesheet">

We're importing Material Icons first, and then a Google web font called Montserrat.

Let's add some CSS rulesets to make our app look better. First, open up /src/styles.scss. Any CSS/Sass defined here will apply to the HTML templating of all components, while component-specific CSS files only apply to that component's HTML template. Add the following rulesets:

/* You can add global styles to this file, and also import other style files */
body {
  margin: 0;
  background: #F2F2F2;
  font-family: 'Montserrat', sans-serif;
  height: 100vh;
}

#container {
  display: grid;
  grid-template-columns: 70px auto;
  height: 100%;

  #content {
    padding: 30px 50px;

    ul {
      list-style-type: none;
      margin: 0;
      padding: 0;

      li {
        background: #fff;
        border-radius: 8px;
        padding: 20px;
        margin-bottom: 8px;

        a {
          font-size: 1.5em;
          text-decoration: none;
          font-weight: bold;
          color: #00A8FF;
        }

        ul {
          margin-top: 20px;

          li {
            padding: 0;

            a {
              font-size: 1em;
              font-weight: 300;
            }
          }
        }
      }
    }
  }
}

This is a little lengthy because I don't want to keep revisiting this file.
The rulesets here are applying to some elements that we haven't yet defined. Nothing too exciting is happening here though, just some standard Sass/CSS.

Next, open up the sidebar CSS file /src/app/sidebar/sidebar.component.scss:

nav {
  background: #2D2E2E;
  height: 100%;

  ul {
    list-style-type: none;
    padding: 0;
    margin: 0;

    li {
      a {
        color: #fff;
        padding: 20px;
        display: block;
      }

      .activated {
        background-color: #00a8ff;
      }
    }
  }
}

Great. View your browser and the result should look like this:

Now, let's make our 2 icons work when they're clicked. In order to do that, we need to visit the /src/app/app-routing.module.ts file. This is what it looks like:

import { NgModule } from '@angular/core';
import { Routes, RouterModule } from '@angular/router';

const routes: Routes = [];

@NgModule({
  imports: [RouterModule.forRoot(routes)],
  exports: [RouterModule]
})
export class AppRoutingModule { }

We need to import our components at the top, and add them to the Routes array shown on line 4 above. To do that, add the following:

import { NgModule } from '@angular/core';
import { Routes, RouterModule } from '@angular/router';
import { UsersComponent } from './users/users.component';
import { DetailsComponent } from './details/details.component';
import { PostsComponent } from './posts/posts.component';

const routes: Routes = [
  { path: '', component: UsersComponent },
  { path: 'details/:id', component: DetailsComponent },
  { path: 'posts', component: PostsComponent },
];

We've imported our 3 components, and then defined three objects in the Routes array. The first object specifies that the UsersComponent will be the default component that loads on the root path. We leave the path value empty for this. The next route is for a user details section. We've specified a wildcard named id. We'll use this to fetch that value from the router in order to retrieve the correct user details. Then another route for a component and path called posts.

Save this file, and the browser should now show: Awesome!
For our users component, we want to fetch a list of users from a public API. To do that, we're going to use the Angular CLI to generate a service for us. An Angular 6 service is useful for placing code that's reusable throughout your app's different components. In the console, type:

> ng generate service data

Open up the new service file: /src/app/data.service.ts:

import { Injectable } from '@angular/core';

@Injectable({
  providedIn: 'root'
})
export class DataService {

  constructor() { }
}

It looks similar to a regular component, right? You define your imports at the top, and your methods and properties in the class that's exported. The purpose of our service file will be to communicate with an API via the Angular 6 HTTP client. Angular comes with a built-in HttpClient. Let's import that at the top of our data.service.ts file:

import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';

Next, in order to use the HttpClient, we need to create an instance of it through dependency injection within the constructor:

constructor(private http: HttpClient) { }

getUsers() {
  return this.http.get('')
}

We also defined a method called getUsers() which we'll call in our component shortly. It returns a list of users from a public testing API. Before we can use the HttpClient, we need to add it as an import in our app's /src/app/app.module.ts file:

// Other imports removed for brevity
import { HttpClientModule } from '@angular/common/http'; // <- Add here

@NgModule({
  declarations: [
    // Removed for brevity
  ],
  imports: [
    BrowserModule,
    AppRoutingModule,
    HttpClientModule, // <- Add here
  ],
  providers: [],
  bootstrap: [AppComponent]
})

Great! Next, let's open up the /src/app/users/users.component.ts file and import our service:

import { Component, OnInit } from '@angular/core';
import { DataService } from '../data.service';
import { Observable } from 'rxjs';

To display the results, we're going to use an Observable, so we're importing it here, too.
In the class, add:

export class UsersComponent implements OnInit {

  users$: Object;

  constructor(private data: DataService) { }

  ngOnInit() {
    this.data.getUsers().subscribe(
      data => this.users$ = data
    );
  }
}

In the constructor, we're creating an instance of our service. Then, within the lifecycle hook ngOnInit() (this runs when the component loads), we're calling our getUsers() method and subscribing to it. Inside, we're binding our users$ object to the result returned by the API.

Next, open up /src/app/users/users.component.html:

<h1>Users</h1>

<ul>
  <li *ngFor="let user of users$">
    <a routerLink="/details/{{user.id}}">{{ user.name }}</a>
    <ul>
      <li>{{ user.email }}</li>
      <li><a href="http://{{ user.website }}">{{ user.website }}</a></li>
    </ul>
  </li>
</ul>

Whenever you wish to iterate over an array or array of objects, you use the Angular directive *ngFor. We then use interpolation brackets to call upon the properties of the returned object to display them in the browser! If you save this and refresh, you should now see a list of users and their information:

Let's revisit the service file /src/app/data.service.ts and add the following methods:

getUser(userId) {
  return this.http.get('' + userId)
}

getPosts() {
  return this.http.get('')
}

The getUser() method will provide us with a single user's information, which will accept a userId as a parameter. getPosts() will fetch some fictional posts for us to get more muscle memory with this process of communicating with services.
Visit /src/app/details/details.component.ts:

import { Component, OnInit } from '@angular/core';
import { DataService } from '../data.service';
import { Observable } from 'rxjs';
import { ActivatedRoute } from "@angular/router";

@Component({
  selector: 'app-details',
  templateUrl: './details.component.html',
  styleUrls: ['./details.component.scss']
})
export class DetailsComponent implements OnInit {

  user$: Object;

  constructor(private route: ActivatedRoute, private data: DataService) {
    this.route.params.subscribe(
      params => this.user$ = params.id
    );
  }

  ngOnInit() {
    this.data.getUser(this.user$).subscribe(
      data => this.user$ = data
    );
  }
}

This, as you see, is very similar to our users component. The only difference comes when we import ActivatedRoute and call it within the constructor. This code allows us to grab the id router parameter that we defined in the app's routing file earlier. This will give us access to the user ID and then pass it to the getUser() method that we defined.

Open up the details.component.html and specify:

<h1>{{ user$.name }}</h1>

<ul>
  <li><strong>Username:</strong> {{ user$.username }}</li>
  <li><strong>Email:</strong> {{ user$.email }}</li>
  <li><strong>Phone:</strong> {{ user$.phone }}</li>
</ul>

Save it, and click on one of the user's names: Awesome!
For more muscle memory, let's repeat this process for the /src/app/posts/posts.component.ts file:

import { Component, OnInit } from '@angular/core';
import { DataService } from '../data.service';
import { Observable } from 'rxjs';

@Component({
  selector: 'app-posts',
  templateUrl: './posts.component.html',
  styleUrls: ['./posts.component.scss']
})
export class PostsComponent implements OnInit {

  posts$: Object;

  constructor(private data: DataService) { }

  ngOnInit() {
    this.data.getPosts().subscribe(
      data => this.posts$ = data
    );
  }
}

And the posts.component.html file:

<h1>Posts</h1>

<ul>
  <li *ngFor="let post of posts$">
    <a routerLink="">{{ post.title }}</a>
    <p>{{ post.body }}</p>
  </li>
</ul>

Save it, and click on the posts icon in the sidebar: Great!

It would be nice to indicate which page a user is currently on in the left sidebar, perhaps by adding a class to the icon that will make its background blue? Sure!

Visit the /src/app/sidebar/sidebar.component.ts file and add the following:

import { Component, OnInit } from '@angular/core';
import { Router, NavigationEnd } from '@angular/router';

export class SidebarComponent implements OnInit {

  currentUrl: string;

  constructor(private router: Router) {
    router.events.subscribe((_: NavigationEnd) => this.currentUrl = _.url);
  }

  ngOnInit() {}
}

We're importing the Router and NavigationEnd, then defining a string property currentUrl. Then, we create an instance of the Router in order to subscribe to router.events. It will provide us with a string, which is the current router path.

Open the sidebar.component.html file and update it to match:

<nav>
  <ul>
    <li>
      <a routerLink="" [class.activated]="currentUrl == '/'">
        <i class="material-icons">supervised_user_circle</i>
      </a>
    </li>
    <li>
      <a routerLink="posts" [class.activated]="currentUrl == '/posts'">
        <i class="material-icons">message</i>
      </a>
    </li>
  </ul>
</nav>

Class binding works by binding [class.classname] to a template expression.
It will only add the .activated CSS ruleset (defined in sidebar.component.scss) if our currentUrl is equal to either / or /posts. Save it. The result should now look like this: Try clicking on the other icon!

Let's say for instance that we want our list of users on the users page to fade in when the component loads. We can use Angular's powerful animation library to help us achieve this. In order to gain access to the animation library, we have to first install it from the console:

> npm install @angular/animations@latest --save

Then, we add it to the imports of /src/app/app.module.ts:

// Other imports removed for brevity
import { BrowserAnimationsModule } from '@angular/platform-browser/animations';

@NgModule({
  ...
  imports: [
    // other modules removed for brevity
    BrowserAnimationsModule
  ],
})

Next, open up /src/app/users/users.component.ts and add the following to the top imports:

import { trigger, style, transition, animate, keyframes, query, stagger } from '@angular/animations';

Then, in the component decorator, add the following animations property with the associated code:

@Component({
  selector: 'app-users',
  templateUrl: './users.component.html',
  styleUrls: ['./users.component.scss'],
  // Add this:
  animations: [
    trigger('listStagger', [
      transition('* <=> *', [
        query(
          ':enter',
          [
            style({ opacity: 0, transform: 'translateY(-15px)' }),
            stagger(
              '50ms',
              animate(
                '550ms ease-out',
                style({ opacity: 1, transform: 'translateY(0px)' })
              )
            )
          ],
          { optional: true }
        ),
        query(':leave', animate('50ms', style({ opacity: 0 })), { optional: true })
      ])
    ])
  ]
})

Whew, a lot happening here! To make this work, visit the /src/app/users/users.component.html file and reference the animation trigger:

<ul [@listStagger]="users$">

Save it, and click on the users icon. You will notice that the list animates in! There's a lot more to Angular animation, so this is just one potential use case.
https://coursetro.com/posts/code/154/Angular-6-Tutorial---Learn-Angular-6-in-this-Crash-Course
By Christopher A. Jones, Fred L. Drake, Jr.
Price: $39.95 USD £28.50 GBP

(The second edition differs from the first only in that some editorial corrections and clarifications have been made; the specification is stable.)

<?xml version="1.0"?>
<book>
  <title>Python and XML</title>
</book>

The book and title elements are opened and closed so that elements nest within each other in a strictly hierarchical way. You can't open a book and close a magazine.

<?xml version="1.0" encoding="UTF-8"?>
<?xml encoding="UTF-8"?>

A document may declare a general entity named myEntity and a parameter entity of the same name, and the names do not clash.

A namespace is declared by associating a prefix with the xmlns attribute and a URI. Namespaces are communicated in an XML document using the reserved colon character in an element name, prefixed with the xmlns symbol. For example:

<sumc:purchaseOrder xmlns:sumc="">
  <sumc:product>
    <sumc:qty>One Case Order</sumc:qty>
    <sumc:amount>34.56</sumc:amount>
    <sumc:shipping>Next-day</sumc:shipping>
  </sumc:product>
</sumc:purchaseOrder>

The prefix sumc has been associated with it in the xmlns:sumc attribute. Elements prefixed with sumc: are within this namespace. This purchaseOrder now has a context that can set it apart from a similarly structured purchase order intended for a different business domain.

<?xml version="1.0"?>
<webArticle category="news" subcategory="technical">
  <header title="NASA Builds Warp Drive" length="3k"
          author="Joe Reporter" distribution="all"/>
  <body>Seattle, WA - Today an anonymous individual announced that NASA
  has completed building a Warp Drive and has parked a ship that uses the
  drive in his back yard. This individual claims that although he hasn't
  been contacted by NASA concerning the parked space vessel, he assumes
  that he will be launching it later this week to mount an exhibition to
  the Andromeda Galaxy.
  </body>
</webArticle>

Add the ArticleHandler class to a new file, handlers.py; we'll keep adding new handlers to this file throughout the chapter.
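The namespace mechanics described above can be checked with a short, runnable sketch. This is modern Python 3 rather than the book's Python 2, and the http://sumc.example.com/orders URI is my own placeholder, since the original attribute value did not survive extraction. With the feature_namespaces feature enabled, SAX reports each element as a (URI, local name) pair:

```python
import io
import xml.sax
import xml.sax.handler

# The xmlns:sumc URI below is hypothetical -- any URI would demonstrate
# the same namespace mechanics.
DOC = b"""<?xml version="1.0"?>
<sumc:purchaseOrder xmlns:sumc="http://sumc.example.com/orders">
  <sumc:product>
    <sumc:qty>One Case Order</sumc:qty>
    <sumc:amount>34.56</sumc:amount>
    <sumc:shipping>Next-day</sumc:shipping>
  </sumc:product>
</sumc:purchaseOrder>"""

class NamespaceHandler(xml.sax.handler.ContentHandler):
    def __init__(self):
        super().__init__()
        self.seen = []          # collected (namespace URI, local name) pairs

    def startElementNS(self, name, qname, attrs):
        self.seen.append(name)  # name arrives as a (uri, localname) tuple

handler = NamespaceHandler()
parser = xml.sax.make_parser()
parser.setFeature(xml.sax.handler.feature_namespaces, True)
parser.setContentHandler(handler)
parser.parse(io.BytesIO(DOC))

print(handler.seen[0])  # ('http://sumc.example.com/orders', 'purchaseOrder')
```

Every element in the document resolves to the same URI, which is exactly what lets this purchaseOrder stand apart from a purchase order in another business domain.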
Keep it simple at first, just to see how SAX works:

# - ArticleHandler (add to handlers.py file)

from xml.sax.handler import ContentHandler

class ArticleHandler(ContentHandler):
    """
    A handler to deal with articles in XML
    """
    def startElement(self, name, attrs):
        print "Start element:", name

popen call to the file command for each file. (While this could be made more efficient by calling find less often and requiring it to operate on more than one file at a time, that isn't the topic of this book.) One of the key methods of this class is indexDirectoryFiles:

def indexDirectoryFiles(self, dir):
    """Index a directory structure and creates an XML output file."""
    # prepare output XML file
    self.__fd = open(self.outputFile, "w")
    self.__fd.write('<?xml version="1.0" encoding="' + XML_ENC + '"?>\n')
    self.__fd.write("<IndexedFiles>\n")
    # do actual indexing
    self.__indexDir(dir)
    # close out XML file
    self.__fd.write("</IndexedFiles>\n")
    self.__fd.close()

The output file outputFile is opened, and an XML declaration and root element are added. The indexDirectoryFiles method calls its internal __indexDir method—this is the real worker method. It is a recursive method that descends the file hierarchy, indexing files along the way.

There are file elements within the XML document that have a corresponding <imagename>.jpg file that is the entire image, and a t-<imagename>.jpg file that is a thumbnail-size image.

$> ls -l *newimage*
-rw-rw-r--  1 shm00  shm00  98197 Jan 18 11:08 newimage.jpg
-rw-rw-r--  1 shm00  shm00   5272 Jan 18 11:42 t-newimage.jpg

The thumbnails are generated with the convert command. This command is part of the ImageMagick package, and is installed by default by most modern Linux distributions. For other Unix systems, the package is available at.

$> convert image.jpg -geometry 192x128 t-image.jpg

(The directory descent itself is handled by the os.path.walk function.)

"""
genxml.py

Descends PyXML tree, indexing source files and creating
XML tags for use in navigating the source.
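To see the handler fire, here is a self-contained Python 3 sketch (print as a function, unlike the book's Python 2 listings) that drives an ArticleHandler-style handler over the webArticle document from earlier:

```python
import io
import xml.sax
from xml.sax.handler import ContentHandler

class ArticleHandler(ContentHandler):
    """A handler to deal with articles in XML."""
    def __init__(self):
        super().__init__()
        self.starts = []        # element names, in document order

    def startElement(self, name, attrs):
        self.starts.append(name)
        print("Start element:", name)

DOC = b"""<?xml version="1.0"?>
<webArticle category="news" subcategory="technical">
  <header title="NASA Builds Warp Drive" length="3k"
          author="Joe Reporter" distribution="all"/>
  <body>Seattle, WA - Today an anonymous individual announced
  that NASA has completed building a Warp Drive...</body>
</webArticle>"""

handler = ArticleHandler()
xml.sax.parse(io.BytesIO(DOC), handler)
print(handler.starts)  # ['webArticle', 'header', 'body']
```

The parser calls startElement once per opening tag, in document order, which is all the indexing code in this chapter relies on.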
""" import os import sys from xml.sax.saxutils import escape def process(filename, fp): print "* Processing:", filename, # parse the file pyFile = open(filename) fp.write("<file name=\"" + filename + "\">\n") inClass = 0 line = pyFile.readline( ) while line: line = line.strip( ) if line.startswith("class") and line[-1] == ":": if inClass: fp.write(" </class>\n") inClass = 1 fp.write(" <class name='" + line[:-1] + "'>\n") elif line.find("def") > 0 and line[:-1] == ":" and inClass: fp.write(" <method name='" + escape(line[:-1]) + "'/>\n") line = pyFile.readline( ) pyFile.close( ) if inClass: fp.write(" </class>\n") inClass = 0 fp.write("</file>\n") def finder(fp, dirname, names): """Add files in the directory dirname to a list.""" for name in names: if name.endswith(".py"): path = os.path.join(dirname, name) if os.path.isfile(path): process(path, fp) def main( ): print "[genxml.py started]" xmlFd = open("pyxml.xml", "w") xmlFd.write("<?xml version=\"1.0\"?>\n") xmlFd.write("<pyxml>\n") os.path.walk(sys.argv[1], finder, xmlFd) xmlFd.write("</pyxml>") xmlFd.close( ) print "[genxml.py finished]" if __name__ == "__main__": main( ) ParserFactoryclass is provided that supplies a SAX-ready parser guaranteed available in your runtime environment. Additionally, you can explicitly create a parser (or SAX driver) by dipping into any specific package, such as PyExpat. We illustrate an example of both, but normally you should rely on the parser factory to instantiate a parser. make_parserfunction (imported from xml.sax) returns a SAX driver for the first available parser in the list that you supply, or returns an available parser if no list is specified or if the list contains parsers that are not found or cannot be loaded. The make_parserfunction has its roots as part of the xml.sax.saxexts.ParserFactoryclass, but it is better to import the method from xml.sax(more on this in a bit). 
For example:

from xml.sax import make_parser
parser = make_parser()

Calling make_parser without an argument is sure to return either a PyExpat or xmlproc driver. If you dig into the source of the xml.sax module, you will see this list supplied to the ParserFactory class. If you instantiate a parser factory directly out of xml.sax.saxexts, you need to be sure to supply a list containing the name of at least one valid parser, or it won't be able to create a parser:

>>> from xml.sax.saxexts import ParserFactory
>>> p = ParserFactory()
>>> parser = p.make_parser()
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "/usr/local/lib/python2.0/site-packages/_xmlplus/sax/saxexts.py",
    line 77, in make_parser
    raise SAXReaderNotAvailable("No parsers found", None)
xml.sax._exceptions.SAXReaderNotAvailable: No parsers found
>>> from xml.sax.saxexts import ParserFactory
>>> p = ParserFactory(["xml.sax.drivers.drv_pyexpat"])
>>> parser = p.make_parser()

Having written a ContentHandler, you may be wondering how much of that ease comes from using SAX and how much is a matter of convenience functions in the Python libraries. While we won't delve deeply into the native interfaces of the individual parsers, this is a good question, and can lead to some interesting observations.

Expat is exposed through the xml.parsers.expat module. If we want to modify our last example to use PyExpat directly, we don't have a lot of work to do, but there are a few changes. Since the PyExpat handler methods closely match the SAX handlers, at least for the basic use we demonstrate here, we can use the same handler class we've already written.
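From the caller's side, the factory route still looks the same in current Python: ask make_parser() for whichever driver is available, attach a handler, and parse. A minimal sketch (CountingHandler is an illustrative name of my own):

```python
import io
import xml.sax

class CountingHandler(xml.sax.ContentHandler):
    def __init__(self):
        super().__init__()
        self.elements = 0    # number of start tags seen

    def startElement(self, name, attrs):
        self.elements += 1

parser = xml.sax.make_parser()   # first available driver, normally pyexpat
handler = CountingHandler()
parser.setContentHandler(handler)
parser.parse(io.BytesIO(b"<book><title>Python and XML</title></book>"))
print(handler.elements)  # 2
```

Nothing about the handler changes when the factory picks a different driver, which is the point of programming against the SAX interface rather than a specific parser.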
The imports won't need to change much:

#!/usr/bin/env python

import sys

from xml.parsers import expat

from handlers import PyXMLConversionHandler

parser = expat.ParserCreate()

>>> from xml.parsers import expat
>>> parser = expat.ParserCreate()
>>> dir(parser)
['CharacterDataHandler', 'CommentHandler', 'DefaultHandler',
 'DefaultHandlerExpand', 'EndCdataSectionHandler', 'EndElementHandler',
 'EndNamespaceDeclHandler', 'ErrorByteIndex', 'ErrorCode',
 'ErrorColumnNumber', 'ErrorLineNumber', 'ExternalEntityParserCreate',
 'ExternalEntityRefHandler', 'GetBase', 'NotStandaloneHandler',
 'NotationDeclHandler', 'Parse', 'ParseFile',
 'ProcessingInstructionHandler', 'SetBase', 'StartCdataSectionHandler',
 'StartElementHandler', 'StartNamespaceDeclHandler',
 'UnparsedEntityDeclHandler', 'ordered_attributes', 'returns_unicode',
 'specified_attributes']
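The names in that dir() listing are the whole PyExpat interface: rather than registering a ContentHandler object, you assign plain functions to handler attributes on the parser. A small runnable sketch (the function and variable names are my own):

```python
from xml.parsers import expat

events = []  # one (element name, attribute dict) entry per start tag

def start_element(name, attrs):
    # With pyexpat, attrs already arrives as a plain dict.
    events.append((name, attrs))

parser = expat.ParserCreate()
parser.StartElementHandler = start_element
parser.Parse(b'<webArticle category="news"><body/></webArticle>', True)
print(events)
# [('webArticle', {'category': 'news'}), ('body', {})]
```

The second argument to Parse marks the final chunk of input; feeding the document in pieces with isfinal left false is how expat supports incremental parsing.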
http://www.oreilly.com/catalog/pythonxml/toc.html
Timeline … Oct 17, 2008: - 4:34 PM libcaca edited by - fix links to www repo (diff) - 1:44 AM Changeset [2959] by - Minor build system cosmetic changes. - 1:21 AM Changeset [2958] by - Do not attempt to create libcaca++ symlinks if C++ support was disabled. - 1:15 AM Changeset [2957] by - Fix the gmcs detection in configure.ac that was incorrectly causing … - 1:03 AM Changeset [2956] by - Remove executable bit from files that do not need it. - 12:59 AM Changeset [2955] by - Rename csharp/ into caca-sharp/. - 12:52 AM Changeset [2954] by - Rename win32/dist into win32/gtksharp because we might very well put … - 12:48 AM Changeset [2953] by - Do not ship build-win32 with the tarballs, since they do not ship the … Oct 16, 2008: - 9:17 PM Changeset [2952] by - win32: move GTK# assemblies into the same directory as the rest of the … Oct 14, 2008: - 10:42 AM Changeset [2951] by - ThePimp?: fix copyright information. - 12:56 AM Changeset [2950] by - ThePimp?: add tooltips and ellipses here and there in the GUI. - 12:56 AM Changeset [2949] by - ThePimp?: switch back to GTK# 2.12 now that it works on Windows. Oct 13, 2008: - 11:17 PM Changeset [2948] by - ThePimp?: reorganise "New Image" dialog. - 11:17 PM Changeset [2947] by - ThePimp?: improve Visual Studio solution and cross-build script. We can … - 11:17 PM Changeset [2946] by - Fix the gdk-pixbuf loader paths in our Win32 installation. - 11:17 PM Changeset [2945] by - libpipi: do not swap bytes in the GDI loader. - 11:00 AM Changeset [2944] by - libpipi: fix a double free in the Oric codec. - 4:06 AM Changeset [2943] by - libpipi: fix a buffer underallocation. - 3:18 AM Changeset [2942] by - libpipi: get rid of all remaining large stack allocations. - 3:15 AM Changeset [2941] by - libpipi: replace large stack buffer allocations with malloc(). - 1:42 AM Changeset [2940] by - Import GTK# 2.12 for Windows. 
We can now build a Win32 The Pimp …
- 1:40 AM Changeset [2939] by - pipi-sharp: use libpipi-0.dll instead of libpipi.dll in the C# bindings.

Oct 12, 2008:
- 4:16 PM Changeset [2938] by - ThePimp?: target GTK# 2.10 instead of 2.12.
- 4:04 PM Changeset [2937] by - Improve the Win32 cross-build script. Still doesn't work well.
- 4:03 PM Changeset [2936] by - Add a bunch of .gitignore files for git-svn users.
- 4:03 PM Changeset [2935] by - Do not install example programs.
- 4:03 PM Changeset [2934] by - libpipi: sometimes imlib_load_image() succeeds but …
- 4:03 PM Changeset [2933] by - Better autotools/Monodevelop integration.
- 4:03 PM Changeset [2932] by - ThePimp?: minor GUI changes.
- 4:03 PM Changeset [2931] by - ThePimp?: do not attempt to save a file if there is no open image.
- 4:03 PM Changeset [2930] by - pipi-sharp: make .config really absolute.
- 1:20 PM Changeset [2929] by - win32: remove the executable bit on MonoPosixHelper?.dll.
- 1:04 PM libpipi/oric edited by - fix links (diff)
- 1:00 PM libpipi/devel edited by - << back to libpipi (diff)
- 12:58 PM libpipi edited by - link to thepimp and development notes (diff)
- 12:57 PM libpipi/examples edited by - << back to libpipi (diff)
- 12:57 PM libpipi/oric edited by - << back to libpipi (diff)
- 12:56 PM libpipi/devel edited by - libpipi development notes (diff)
- 12:42 PM libpipi/devel created by - starting to work on the libpipi developer notes
- 12:22 PM thepimp edited by - (diff)

Oct 11, 2008:
- 11:56 PM Changeset [2928] by - pipi-sharp: make the .dll.config file contents absolute.
- 11:56 PM Changeset [2927] by - Update the Monodevelop projects.
- 11:56 PM Changeset [2926] by - ThePimp?: make image loading more robust.
- 11:56 PM Changeset [2925] by - Tell git to ignore generated pipi/pipi_types.h.
- 5:40 PM libpipi edited by - (diff)
- 5:11 PM Changeset [2924] by - Fix paths in Trac configuration.
- 5:09 PM Changeset [2923] by - Fix paths in Trac configuration.
- 5:06 PM Changeset [2922] by - Branch web trunk for tests.
- 5:05 PM Changeset [2921] by - Create branches and tags for the website.
- 4:41 PM Changeset [2920] by - Merge www/ and trac/ into a single web/ project.
- 2:02 PM Changeset [2919] by - pipi-sharp: copy libpipi.dll before building test-csharp.
- 1:55 PM Changeset [2918] by - ThePimp?: the Visual Studio solution now builds a working Pimp.exe
- 1:54 PM Changeset [2917] by - Update the win32 contribs with Mono 2.0 libraries.
- 12:19 PM Changeset [2916] by - Tune Visual Studio files so that they work with Monodevelop.
- 12:19 PM Changeset [2915] by - Put back ● as the TextEntry? invisible character and tell gmcs our …
- 11:55 AM Changeset [2914] by - Create Visual Studio build files for libpipi, pipi-sharp and The Pimp.
- 11:10 AM Changeset [2913] by - Fix C include paths for separate directory builds.
- 11:10 AM Changeset [2912] by - Make the GTK# detection code more robust.
- 11:10 AM Changeset [2911] by - libpipi: include <stdlib.h> in files where NULL is used.
- 10:20 AM Changeset [2910] by - ThePimp?: add namespace before resource names.
- 3:30 AM Changeset [2909] by - libpipi: fix a sign bug in the GDI loader.
- 2:52 AM Changeset [2908] by - Set PACKAGE_VERSION instead of VERSION in the win32 config.h file.
- 1:41 AM Changeset [2907] by - Add Mono.Posix.dll to the shipped binary assemblies for Win32.

Oct 10, 2008:
- 12:24 AM Changeset [2906] by - * Cleanup my term after a grab

Oct 9, 2008:
- 1:53 AM Changeset [2905] by - Remove tabs in the code here and there.
- 1:50 AM Changeset [2904] by - Start writing Visual Studio projects.

Oct 8, 2008:
- 10:56 AM Changeset [2903] by - libpipi: fix file headers.
- 2:14 AM Changeset [2902] by - Support C99 types on Win32 through the same hacks as in libcaca.
- 1:43 AM Changeset [2901] by - Update the Win32 cross-build script to reflect recent reorganisation.
- 1:27 AM Changeset [2900] by - Renamed msvc into win32.
- 12:06 AM Changeset [2899] by - Reorganise win32 files and add proper svn:ignore properties everywhere.

Oct 7, 2008:
- 11:26 PM Changeset [2898] by - Reorganise MSVC files so that each project is with its source code.
- 8:10 PM Changeset [2897] by - * Added preliminary support of CoreImage? (Cocoa/Mac? OS X) Changed …
- 6:06 PM Changeset [2896] by - Move stubs.h to caca/caca_stubs.h since it's only used by the library.
- 6:06 PM Changeset [2895] by - Fix the library suffix detection.
- 1:36 AM Changeset [2894] by - Fix the library suffix detection.
- 12:40 AM Changeset [2893] by - ThePimp?: store Win32 GTK# in SVN (but not in distributed tarballs).
- 12:38 AM Changeset [2892] by - ThePimp?: add missing NewFile? source files.
- 12:14 AM Changeset [2891] by - ThePimp?: deactivate toolbox for now.
- 12:11 AM Changeset [2890] by - ThePimp?: the "New" button now works.

Oct 6, 2008:
- 11:33 PM Changeset [2889] by - ThePimp?: middle mouse drag now scrolls the image.
- 10:45 PM Changeset [2888] by - Detect shared library suffix at configure stage.
- 10:44 PM Changeset [2887] by - ThePimp?: toolbox test.
- 10:44 PM Changeset [2886] by - Detect shared library suffix at configure stage.
- 10:44 PM Changeset [2885] by - ThePimp?: creating the BEST FUCKING ABOUT BOX IN THE WORLD!
- 9:53 PM Changeset [2884] by - * Reverted dll.config.in stuff as it doesn't work as expected
- 9:41 PM Changeset [2883] by - * Added temporary autoconf support for OSX
- 9:33 PM Changeset [2882] by - caca-sharp: support systems with .dylib or .sl shared libraries.
- 9:33 PM Changeset [2881] by - .gitignore: ignore files generated by MonoDevelop?.
- 9:33 PM Changeset [2880] by - Clean up the web server directories before copying the documentation there.
- 9:33 PM Changeset [2879] by - doc: rewrite the tutorial to reflect recent API updates.
- 9:33 PM Changeset [2878] by - libcaca: fix an infinite loop in the .pc file.
- 10:55 AM thepimp edited by - link to libpipi (diff)
- 10:47 AM Changeset [2877] by - ThePimp?: fix URL in the FUCKING ABOUT BOX!
- 12:19 AM Changeset [2876] by - ThePimp?: we now have a FUCKING ABOUT BOX. That's right. Now we're a …
- 12:14 AM thepimp created by - creating The Pimp page

Oct 5, 2008:
- 11:05 PM Changeset [2875] by - ThePimp?: reorganised stuff.
- 10:31 PM Changeset [2874] by - ThePimp?: we can now save files.
- 10:31 PM Changeset [2873] by - ThePimp?: we can now open and display files.
- 10:30 PM Changeset [2872] by - Start playing with scrolling widgets in Pimp.
- 5:50 PM Changeset [2871] by - Fix detection of floating point assembly instructions. They were …
- 3:46 AM Changeset [2870] by - Remove unused pimp directory.
- 3:43 AM Changeset [2869] by - Reorganise ThePimp? and pipi-sharp, adding a test program and allowing …
- 3:37 AM Changeset [2868] by - Tidy the .NET Makefile.
- 2:56 AM Changeset [2867] by - Split the C# bindings into separate files.

Oct 4, 2008:
- 5:54 PM Changeset [2866] by - Starting the work on Pimp. It's a MonoDevelop? project but eventually …
- 5:54 PM Changeset [2865] by - pipi.c: add pipi_get_version().
- 3:05 PM Changeset [2864] by - configure.ac: use more modern autoconf syntax.

Oct 3, 2008:

Oct 1, 2008:
- 10:20 PM Changeset [2863] by - build-win32: pass script arguments to configure, to allow --disable-shared.
- 10:20 PM Changeset [2862] by - oric.c: allow to load invalid files that img2oric used to generate.
- 1:40 PM libpipi/oric edited by - remove not so interesting images (diff)

Sep 30, 2008:
- 9:02 AM libpipi/oric edited by - (diff)
- 1:03 AM Changeset [2861] by - Clean up the tree before configuring the Win32 build.
- 1:03 AM Changeset [2860] by - Fix separate directory build failure caused by caca_types.h.
- 1:03 AM Changeset [2859] by - Fix the Win32 build.
- 1:03 AM Changeset [2858] by - Properly export legacy 0.9 symbols.
- 12:05 AM Changeset [2857] by - Hide the list of available commands in pipi/context.c, so that the …
- 12:05 AM Changeset [2856] by - Add a script to cross-compile Win32 binaries.
- 12:05 AM Changeset [2855] by - Fix library name in pipi.pc.in.
- 12:02 AM Changeset [2854] by - Fix for the libcucul symlinks installation, courtesy of Ben Wiley Sittler.

Sep 29, 2008:
- 11:28 PM Changeset [2853] by - switch to weak aliases so it at least compiles on Mac OS X; note that …
- 11:26 PM Changeset [2852] by - remove reference to obsolete common.h
- 11:26 PM Changeset [2851] by - add missing CUCUL_* compatiblity constants
- 11:16 PM Changeset [2850] by - Support for platforms where shared libraries are not called *.so.
- 10:27 PM libpipi/examples edited by - better styles (diff)
- 6:38 PM libpipi/oric edited by - remove overcomplicated example (diff)
- 6:34 PM libpipi/oric edited by - rotation example (diff)
- 6:32 PM Changeset [2849] by - Add missing image to the Oric examples.
- 6:25 PM libpipi/oric edited by - remove unnecessary links (diff)
- 3:04 AM libpipi edited by - (diff)
- 3:01 AM WikiStart edited by - expire old news (diff)
- 3:00 AM libpipi edited by - remove mention of img2oric (diff)
- 2:59 AM libpipi/oric edited by - fix link (diff)
- 2:58 AM libpipi/oric edited by - update examples (diff)
- 1:26 AM Changeset [2848] by - Add a Gaussian blur sample.
- 1:26 AM Changeset [2847] by - Fix a probable bug in the .TAP output.
- 12:31 AM libpipi/oric edited by - link to sample OUTPUT.TAP (diff)
- 12:24 AM libpipi/oric edited by - (diff)

Sep 28, 2008:
- 11:28 PM libpipi/oric created by - move the relevant img2oric information to a libpipi subpage
- 10:48 PM libpipi edited by - (diff)
- 7:01 PM Changeset [2846] by - Add a --gamma command to modify the global gamma value. This is a …
- 5:55 PM Changeset [2845] by - Wrote an Oric hires file writer, based on img2oric.
- 5:54 PM Changeset [2844] by - Fix headers.
- 4:14 PM Changeset [2843] by - Add an AUTHORS file.
- 4:09 PM Changeset [2842] by - Wrote an Oric hires file parser.
- 4:09 PM Changeset [2841] by - Fix uninitialised values in most codec image writers.
- 1:58 PM libpipi/research/coolstuff edited by - (diff)
- 7:08 AM Changeset [2840] by - gdi.c: the GDI codec can now open and save BMP files.
- 7:08 AM Changeset [2839] by - Detect Windows GDI at configuration time.
- 7:08 AM Changeset [2838] by - COPYING: add a global license file.
- 7:07 AM Changeset [2837] by - Allow to use all available image loaders instead of just the first one.
- 1:02 AM PWNtcha edited by - (diff)
- 12:52 AM Ticket #4 (figfont.c needs cleanup) closed by - fixed: Fixed in [2412].
- 12:45 AM toilet edited by - libcucul->libcaca (diff)
- 12:44 AM development edited by - libcucul -> libcaca (diff)
- 12:42 AM libcaca edited by - remove mention of libcucul (diff)
- 12:41 AM libpipi edited by - (diff)
- 12:41 AM zzuf edited by - (diff)
- 12:36 AM CPUShare edited by - libcaca -> caca labs (diff)
- 12:34 AM libpipi edited by - URL fix (diff)
- 12:33 AM neercs edited by - shorten URLs (diff)
- 12:27 AM Changeset [2836] by - Change a few occurrences of libcaca into caca labs.

Sep 27, 2008:
- 11:30 PM Changeset [2835] by - Fix documentation installation. This is the real 0.99.beta15 release.
- 11:10 PM Changeset [2834] by - Change the website name to caca.zoy.org.
- 11:09 PM Changeset [2833] by - Change the website name to caca.zoy.org.
- 8:18 PM WikiStart edited by - libcaca 0.99.beta15 (diff)
- 8:16 PM libcaca edited by - 0.99.beta15 release (diff)
- 8:11 PM Changeset [2832] by - Set version to 0.99.beta14. Updated NEWS and ChangeLog?.
- 7:56 PM Changeset [2831] by - End of the libcucul merge: add symbolic links where appropriate.
- 7:56 PM Changeset [2830] by - caca, cxx: install symlinks for backwards compatibility with libcucul.
- 6:23 PM Changeset [2829] by - * Fix a warning
- 6:07 PM Changeset [2828] by - .gitignore: ignore caca_types.h.
- 5:57 PM Changeset [2827] by - Add missing svn:ignore SVN properties.
- 5:43 PM Changeset [2826] by - Continue the libcaca/libcucul merge. Source and binary compatibility …
- 4:29 PM Changeset [2825] by - * No need to require test/unit in each testfile
- 4:29 PM Changeset [2824] by - Continuing the libcucul-libcaca merge.
- 4:29 PM Changeset [2823] by - * Have local paths first in LOAD_PATH
- 4:11 PM Changeset [2822] by - Continuing the libcucul-libcaca merge.
- 3:12 PM Changeset [2821] by - Starting refactoring to get rid of libcucul. The initial reason for …
- 2:13 PM Changeset [2820] by - test: remove legacy empty directory.
- 11:52 AM Changeset [2819] by - makefont.c: change the font data encoding, the source is now 5% smaller.

Sep 26, 2008:

Sep 18, 2008:
- 12:17 AM Changeset [2818] by - * zzuf.c: use atol() instead of atoi() for the --seed flag.
- 12:17 AM Changeset [2817] by - * zzuf.c: allow the use of -r=0 in addition to -r 0, and likewise for …
Revision history for CatalystX-ExtJS

2.1.3 2011-06-20
  - fixed prereqs

2.1.2 2011-02-09
  - more prereqs
  - tutorial fix

2.1.1 2011-02-06
  - Fixed prereqs to include example and tutorial prereqs

2.1.0 2011-02-05
  - CatalystX::ExtJS has been split up in CatalystX::ExtJS::Direct and
    CatalystX::ExtJS::REST. Installing this module will pull in both of them,
    so nothing changes for you. The benefit is that you can use
    CatalystX::ExtJS::Direct without installing CatalystX::ExtJS and the
    prereqs for CatalystX::ExtJS::REST, which include DBIx::Class and
    HTML::FormFu.

2.0.0 2011-01-31
  - pass 'object' or 'list' to the default_rs_method as second parameter
  - limit of rows to fetch is now 100, set limit => 0 to disable
  - order_by specifies the default column to sort (e.g. { -desc => 'updated_on' })
  - silenced test warnings
  - forms can now be defined in the class itself (forms => { default => , get => , list => })
  - added tests and documentation
  - bump prereq versions
  - fixed Ext.Direct create with only one attribute
  - don't ship extjs, using CDN for examples and tutorial instead
  - using ExtJS 3.3.1 in examples and tutorial
  - catch exceptions in Controller::REST and send a 400 bad_request status
    to the browser including a message and success: false
  - Ext.Direct will send an exception if the response status of the
    subrequest is >= 400. The response includes the status of the subrequest
    as well as $c->error or the body
  - status_not_found also includes success => 0 in its response
  - request trait application moved to Deserializer action class
  - added namespace option to /api/src

1.124000 2010-12-13
  - object_GET allowed to override certain fields
  - fixed object_DELETE which was calling status_not_found incorrectly

1.123000 2010-11-29
  - API controllers in the API namespace lose the "API" prefix
  - Fixed naming for actions in deep controller namespaces

1.122000 2010-09-27
  - Fixed location algorithm for config files
  - Adjusted file upload via Direct

1.121000 2010-08-18
  - Fixed file uploads via the Ext.Direct API

1.120000 2010-08-17
  - Fixed #60070 (ExtJS 3.2.1 compatibility)
  - Fixed #60396 (Ext.Direct error handling)
  - Require JavaScript::Dumper (fixes bogus prereq problem)
  - The default root property is now "data". If you have set
    no_list_metadata then it will remain the old value of "rows"
  - Also you can set root_property

1.110000 2010-08-16
  - Fixed form_base_file to work with deep-hierarchy controllers

1.101700 2010-06-19
  - Fixed prereq

1.101670 2010-06-16
  - Fixed content-type for Direct API
  - Fixed prereq

1.101570 2010-06-06
  - Fixed bogus bug where C::View::JSON prepends the BOM when agent = Safari
  - Fixed meta to not include example/tutorial libs
  - Fixed #57373 (Global configuration doesn't work as documented)

1.101560 2010-06-05
  - Ext.Direct support
  - Ext.Direct API Controller
  - REST uses now path_to to find form files instead of hard-coded path
  - added example (run 'perl -Ilib example/script/myapp_server.pl')
  - added tutorial
  - added tutorial app (run 'perl -Ilib tutorial/script/myapp_server.pl')
  - caching is disabled in debug mode
  - improved performance (using Moose attributes and config file caching)
  - caching is disabled in debug mode
  - ditch Subrequest in favor of visit()
  - works with latest C::R::REST
  - Got rid of C::R::REST dispatcher in ::REST class
  - SELECT ... FOR UPDATE for update & delete
  - use transactions

0.11 2010-01-03
  - Order by me.* (fixes ambiguous errors)

0.10 2009-12-08
  - Silence debug messages
  - fixed failing test on win32 (#500002, thanks kmx)

0.09 2009-09-17
  - removed hack to ignore empty file and password fields;
    this can now be achieved via ignore_if_empty in formfu
  - the object is now stashed after creating it;
    you can access it via $c->stash->{object}
  - the form object is now on the stash and can easily be manipulated

0.08 2009-09-12
  - introduced parameter validation in list context

0.07 2009-09-10
  - yet another missing prereq

0.06 2009-09-07
  - yet another missing prereq

0.02 - 0.05
  - fixed versioning problems
  - added new prereqs

0.01 2009-08-27
  - first official cpan release
I have a new MacBook: a previous user installed it, then I created a new user (mine), granted it admin privileges and deleted the old one. I am on OS Catalina. Since the installation I've been having several permission problems. VSCode can't find Jupyter Notebook, and pip installs packages to ~/Library/Python/3.7/site-packages. When I do which python3 I get /usr/bin/python3. When I do pip3 install <package> I get:

Defaulting to user installation because normal site-packages is not writeable

And then it says the package has already been installed, even though I can't access it when I do import <package>. It seems clear that this is a permission problem: pip can't install to the "base" Python, and then Python can't find what I've installed into ~/Library/Python/3.7/site-packages. I've tried reinstalling the OS, but since I haven't done a clean install, it didn't change anything. What am I missing? How exactly can I fix the permissions? And where do I want packages to be installed (venv, sure, but some packages, like jupyter, I want global)? Thanks

Solution #1: As @TomdeGeus mentioned in the comments, this command works for me:

python3 -m pip install [package_name]

Solution #2: It's best not to use the system-provided Python directly. Leave that one alone, since the OS can change it in undesired ways, as you experienced. The best practice is to configure your own Python version(s) and manage them on a per-project basis using virtualenv (for Python 2) or venv (for Python 3). This eliminates all dependency on the system-provided Python version, and also isolates each project from other projects on the machine. Each project can have a different Python point version if needed, and gets its own site_packages directory, so pip-installed libraries can also have different versions by project. This approach is a major problem-avoider.

Solution #3: python3.7 -m pip install [package_name] solved it for me. The most voted answer, python3 -m pip install [package_name], does not help me here. In my case, this was caused by a conflict with the dominating 3.6 version that was also installed. Here is a proof by example with --upgrade pip:

pip3 install --upgrade pip
Defaulting to user installation because normal site-packages is not writeable
Requirement already satisfied: pip in /home/USERNAME/.local/lib/python3.6/site-packages (20.3.1)

python3 -m pip install --upgrade pip
Defaulting to user installation because normal site-packages is not writeable
Requirement already satisfied: pip in /home/USERNAME/.local/lib/python3.6/site-packages (20.3.1)

python3.7 -m pip install --upgrade pip
Collecting pip
Cache entry deserialization failed, entry ignored
Using cached
Installing collected packages: pip
Successfully installed pip-20.3.1

Solution #4: I had the same issue on a Jetson Nano. Using sudo worked, so try sudo with pip.

Solution #5: It occurred for me when the virtual environment folder was named venv. In that case it gives errors like "No module pip" and "Default folder is unwritable". Renaming the folder solved the problem.

Solution #6: Had this same issue on a fresh install of Debian 9.12. Rebooting my server solved the issue.

Solution #7: In my case, python3 -m pip install [package_name] did not solve it either. It was a problem related to other processes occupying the directory. I restarted PyCharm, closed any other program that might occupy this folder, and reinstalled the package into the site-packages directory successfully.
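The per-project approach from Solution #2 can be sketched in a few shell commands. This is a minimal sketch: the environment name .venv is arbitrary, and python3 is assumed to be on the PATH.

```shell
# Create an isolated environment so pip never needs to write to the
# system Python's site-packages (no sudo, no "not writeable" warning).
python3 -m venv .venv

# Activation simply puts .venv/bin first on the PATH.
source .venv/bin/activate

# This pip belongs to the venv and installs into
# .venv/lib/python*/site-packages, which is always writeable.
python -m pip --version
```

Because the environment owns its own site-packages directory, pip never falls back to a user install, and import <package> resolves against the same interpreter that installed the package.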
Many developers intrigued by Web services have started writing simple services for inter-organizational or personal use. These developers, unfamiliar with the intricacies of Web services technology and development, rely on tools for most of the service code implementation. In these situations, the development and use of the Web service is all carried out by one developer or one team. The service functions properly but, because of problems with the code generation tools, fails to achieve the primary goal of Web services: interoperability. This article will discuss the advantages of top-down Web services development. It will show you how to use IBM® WebSphere® Studio V5.1 tools to author WSDL documents, generate interoperable Web services from these documents, and unit test your Web services.

What is WSDL and how is it used?

WSDL (sometimes pronounced wiz-dul), or Web Services Description Language, is an XML meta-language for describing everything about a Web service, from the way to connect to the service to what type of information the service will return. A WSDL document is detailed enough to specify the operations a client can use to interact with a service and the access point of the service. The allowable content of a WSDL document is controlled by the WSDL specification. This description document provides a standard way for clients wanting to consume a given service to obtain all the relevant information about the service. Instead of contacting the organization that has published a Web service to find out how to use it, all of the information can be obtained from the WSDL document associated with the service. The WSDL document can even be used to generate skeleton code for the client that will consume the service.

The Web Services Description Working Group, a consortium of people from many different organizations and companies including Sun, Microsoft, Oracle and IBM, manages the WSDL specification. The goal of this group is to maintain and evolve the WSDL open standard to provide a platform-independent way to describe a Web service. The Web Services Interoperability (WS-I) organization provides best practices for creating interoperable Web services in their basic profile document. The basic profile contains guidelines for authoring WSDL documents that describe interoperable Web services. The best way to ensure that your Web service will be interoperable is to first create a description document that complies with the basic profile and then to generate your service from the compliant WSDL document.

Benefit of top-down Web services development

The quickest and easiest way to start developing Web services is the bottom-up approach. Using the bottom-up approach, the developer writes the service implementation in any high-level programming language, such as Java™ or C#, has some tool generate the WSDL document describing the service, and deploys the service to a server. This approach is commonly used by companies that have legacy systems and applications within their organization which they want to have communicate with one another. A Web service is built on top of the existing infrastructure and then exposed within the organization. In this situation the quick, bottom-up approach works fine. One company is both creating and consuming the services, and the environment can be controlled, as all of the clients of the service are known. Since both the services and the clients can be created using the same set of tools and run on the same platform, there does not need to be a lot of concern with respect to interoperability.

Interoperability problems begin to surface when Web services developed using the bottom-up approach are exposed for external use by other companies and consumers running on different platforms, or when one company acquires another and now has to merge the second company's systems with its own. In these situations all the possible clients to a service are not known when the service is created, and this variable allows for interoperability problems. The interoperability problems originate from the tools that generated the WSDL documents. The generated WSDL documents may contain platform-specific information that is allowed in the document because of ambiguities in the WSDL specification. Platform-specific WSDL documents are by definition not interoperable and defeat the real goal of Web services. For Web services to achieve their primary goal, live up to the hype and enter the mainstream of computing, they must be interoperable. In order for that to happen, developers must sacrifice the convenience of having tools generate their WSDL documents.

Using the top-down approach, a developer starts by authoring a WSDL document. Once the description of the service is written, stub code for the service can be generated by development tools. The developer then fills in the implementation for the Web service and creates connections to any existing architecture. By authoring the WSDL document yourself instead of relying on platform-biased tools, you have complete control of the WSDL document, and the document can be written in a standard way such that it can be understood by any Web services platform. As the WSDL document is platform independent, stub code for any language can then be generated, provided there is a tool to do so. WSDL2Java is one example of a tool available in Apache Axis that will create Java stub code from a WSDL document. The top-down approach eliminates the ability for the developer to have tools generate WSDL documents with interoperability problems, as the first step in this approach is to author the description document. The top-down approach is favorable because it eliminates a major source of interoperability problems from Web services. Developing Web services in this way promotes the effort to achieve true interoperability between Web service implementations. The following section will illustrate a scenario where a service created using the bottom-up approach has an interoperability problem while the same service created using the top-down approach does not.

Use case scenario comparing top-down and bottom-up Web services development

The problem with WSDL documents generated by development tools is that errors originating in the tools get propagated to the consumers of a Web service. A service's WSDL document is its contract with its clients. It defines the terms and uses of a Web service. Errors in the WSDL document mislead the consumers of a Web service as to what the expected behavior is. This use case scenario compares the interoperability of an address book Web service developed using the bottom-up approach to that of the same Web service developed using the top-down approach.

Developing the echo message Web service using the bottom-up approach

The echo message Web service consists of four Java classes: EchoMessage.java, Message.java, RequestMessage.java and ResponseMessage.java. They are shown in Figures 1, 2, 3 and 4 respectively:

Figure 1. EchoMessage.java
Figure 2. Message.java
Figure 3. RequestMessage.java
Figure 4. ResponseMessage.java

EchoMessage.java has one method, getClassName. This method takes in a Message instance as input and returns that instance's fully qualified Java class name as output. Message.java itself is an abstract class; it has two concrete implementations, RequestMessage.java and ResponseMessage.java. Therefore, the expected result for the getClassName method is either com.example.RequestMessage or com.example.ResponseMessage. Figure 5 shows the corresponding WSDL document describing the echo message Web service, generated and deployed using Apache Axis 1.0.

Figure 5. WSDL document created using bottom-up approach

In the XML schema type definition section of the WSDL document, there is an abstract complex type defined for com.example.Message. The definitions for the two derived types, com.example.RequestMessage and com.example.ResponseMessage, are not present. The lack of the two derived types is an interoperability problem because abstract types cannot appear in an XML instance document. Abstract types must be substituted by their derived types. Without knowing what the derived types are, Web service clients created using tools such as Microsoft Visual Studio .NET, or even Apache Axis itself, will have problems invoking this Web service. The echo message Web service created using the bottom-up approach is not interoperable as it has an incomplete WSDL document.

Developing the echo message Web service using the top-down approach

The echo message Web service created using the top-down approach does not suffer from the interoperability problem illustrated in the previous section. By authoring the WSDL document, all the required XML schema type definitions can be specified. Using the WSDL document shown in Figure 6, the same echo message Web service is created. Using this description document, the echo message Web service interoperates well with other Web service clients. Figure 7 shows a sample request and response message.

Figure 6. WSDL document created using top-down approach
Figure 7. Sample request and response message of echo message Web service

The next section demonstrates how to use WebSphere Studio 5.1 tools to author a WSDL document, create a Web service from a WSDL document, and test the Web service.

Authoring the WSDL document

Whenever you develop in WebSphere Studio, your first step is to create a project:
- Select File => New => Project.
- Select Web on the left and Dynamic Web Project on the right, and then click Next.
- Enter a project name of AddressBookWeb and click Finish.
If you are prompted to switch to a Web Perspective. Click OK. - The first step in the top-down approach to Web service creation is to create our WSDL document. Select the AddressBookWeb project, right click on it to bring up the context menu, and select New => Other Select Web Services on the left and WSDL on the right. Click Next. Change the name to AddressBook.wsdl and click Next. Change the namespace to and the Definition name to AddressBook. Ensure the WSDL and XSD prefixes are selected and select the soap Prefix and click Finish. - The WSDL editor is now opened with the newly created WSDL document. Select the Graph tab - We will start by defining the service. Right-click under Services and select Add child => service. Name the service GetInfoByNameService and click OK. We have to create a port where the service will be located. Right-click on the GetInfoByNameService and select Add Child => Port. Name the port SOAPPort and click OK. Figure 8. Create a service - We need to set the concrete binding information for the service. We will start by just setting the binding and will fill in the details later. Figure 9 shows the Specify Binding wizard. Right click on Port and click Set Binding. Change the binding name to GetInfoByNameBinding and click Finish. Figure 9. Create a new binding wizard - The binding has to refer to an abstract definition of the operations. The operations are contained in a portType so we need to set a portType for the binding. Select the Show Bindings button at the top, right hand corner of the WSDL editor. The bindings will now be visible in the graph view. Right click on GetInfoByNameBinding and select Set PortType. Ensure Create a new Port Type is selected and name the port type AddressBookPortType. Click Finish. - We can now specify the operation to get address book information given a name. Right click on AddressBookPortType and select Add Child=> Operation. Name the operation GetInfoByName and click OK. 
- We will specify the input for this operation. This is where the request will pass in the name for which the address information is requested. Right click on GetInfoByName and select Add Child=>Input. - Now we can specify the information that is required for the input to the operation. Right click on Input and select Set Message. Ensure that Create a new message is selected, set the message name to GetInfoByNameRequest and click Finish. Right click on GetInfoByNameRequest and select Add Child => part. Name the part Name and click OK. - Now we will add the output or return information for the operation. Right click the portType operation GetInfoByName and select Add Child => Output. - We need to set what the output will return. Right click on the Output and select Set Message. Ensure that Create a new message is selected, set the message name to GetInfoByNameResponse and click Finish. Right click on GetInfoByNameResponse and select Add Child => part. Name the part AddressBookInfo and click OK. - Now we are going to create a one-way operation for adding information to our address book. This is an operation that requires an input but does not output anything. Right click on the AddressBookPortType and select Add Child => operation. Name the operation SaveInfo and click OK. - We need to add an input. Right click on SaveInfo and select Add Child => input. - Right click on the input for SaveInfo and select Set Message. Ensure that Create a new message is selected, set the message name to SaveInfoRequest and click Finish. Right click on SaveInfoRequest and select Add Child => part. Name the part AddressBookInfo and click OK. Your WSDL document, with the bindings hidden, should now look like figure 10. Figure 10. Current WSDL document with bindings hidden - We will now create the custom elements that will be used as input and output parameters from our operations. Right click under Types and select Add Child => Add Schema. Figure 11. 
Adding a schema to the document - Double-click on the arrow next to the Types section schema. This will open the schema editor. Figure 12. To edit the schema, select the arrow next to it - First we will create a complex type to hold the address information. Right click on the schema element and select Add Complex Type. Change the name of the element to Address. Right click on Address and select Add Complex Content. Right click on the complex content and select Add Element. Change the name of the element to Street. Add elements for City, Province, PostalCode and PhoneNumber in that order. Figure 13. Adding a global complex type - Next we will create a complex type to hold the name information. Right click on Complex Types and select Add Complex Type. Change the name of the complex type to Name. Right click on Name and select Add Complex Content. Right click on the complex content and select Add Element. Change the name to FirstName. Add a second element and change the name to LastName. - We now need a concrete way to refer to our complex types. We will create global elements in our schema. The first element we need is for our name. Right click on Schema and select Add Global Element. Change the name of the element to Name. Using the drop down menu, change the type of the element to tns:Name. Figure 14. Adding a global element - The second global element we need is for our address book information which will contain a name and an address. Right click on Global Elements and select Add Global Element. Change the name of the new element to AddressBookInfo. Right click on AddressBookInfo and select Add Local Complex Type. Right click on the complex type and select Add Element. Name the element Name and select a type of tns:Name. Create a second element for the complex type. Name the element Address and select a type of tns:Address. - We now have all the elements we need. Click on the arrow in the top left-hand corner of the editor to return to the WSDL editor. 
- Now we have to set the parts of our messages to use the elements we just defined in our schema. Figure 15 shows the Specify Element wizard. Right click on the tns:Name part for GetInfoByNameRequest. Select Set Element. Select an existing element. Select tns:Name and click Finish. Set the elements for the other two parts to tns:AddressBookInfo.

Figure 15. Specifying an element for a message

- Now all we have to do is fill in the binding information. The Generate Binding Wizard is shown in figure 16. Select the Generate Binding Wizard from the WSDL Editor menu. Ensure Generate content for an existing binding is selected. Select GetInfoByNameBinding. Select tns:AddressBookPortType. Select Protocol: SOAP and set the SOAP Binding Options to document literal. Ensure that Overwrite existing binding information is selected and click Finish.

Figure 16. Setting binding information in the binding wizard

- We should check that the WSDL document is valid. Save the WSDL document, then right click on it in the Navigator view and select Validate WSDL File from the context menu. You should see a dialog like figure 17 that states that the WSDL file is valid.

Figure 17. Message Window upon successful validation

Congratulations! Your service definition is complete. You can view the completed WSDL document here.

Building and testing a Web service

- Bring up the wizard selection dialog. Click on menu File => New => Other.
- Launch the Web services creation wizard. Select Web Services from the menu on the left and Web Service from the list on the right. Click Next.
- Figure 18 shows the Web services creation wizard. Choose Skeleton Java bean Web Service as the Web service type, deselect the Start Web service in Web project checkbox and select the Overwrite files without warning checkbox. Click Next.

Figure 18. Web services creation wizard

- Accept the default settings in the service deployment configuration page.
The address book Web service will be deployed to the IBM WebSphere V5.0.2 Web services engine on a WebSphere V5.0.2 unit test environment server. Click Next.

- Figure 19 shows the Web service selection page. Click on the Browse button and navigate to the location of AddressBook.wsdl. Click Next.

Figure 19. Web service selection page

- Accept the default settings in the Web service skeleton Java bean configuration page. Click Finish. This will generate Java skeleton code based on AddressBook.wsdl.
- Before invoking the address book Web service, we must fill in the implementation for the Java skeleton. Open /AddressBookWeb/JavaSource/com/example/GetInfoByNameBindingImpl.java in an editor. Modify this Java skeleton as shown in figure 20. Save and close the editor.

Figure 20. GetInfoByNameBindingImpl.java

- Open the server view if necessary. Click on menu Window => Show View => Other... In the show view dialog, expand the Server tab, select the Servers node and click OK.
- In the Servers view, right-click on the server that the address book Web service is deployed to and select menu item Start.
- We'll now use the Web Services Explorer to test the address book Web service. To launch the Web Services Explorer, right click on /AddressBookWeb/WebContent/wsdl/com/example/AddressBook.wsdl and select menu item Web Services => Test with Web Services Explorer.

Figure 21. Web Services Explorer

- To save an address, click on the SaveInfo operation. Enter the values shown in figure 22 and click Go.

Figure 22. SaveInfo operation

- To retrieve the address saved in the previous step, click on the GetInfoByName node in the Web Services Explorer's navigator pane. Enter John as the first name and Doe as the last name. Click Go to invoke the operation. Figure 23 shows the status pane containing the result of invoking the operation.

Figure 23.
Status pane containing result of invoking GetInfoByName operation

Interoperability is the key that Web services require to open the door to mainstream computing. The interoperability of a Web service begins with its WSDL document, which describes everything about a service. The bottom-up approach to Web service development is fast and easy but may produce services with interoperability problems. The best way to maintain interoperability is by following the top-down approach to Web service development and starting your development by authoring your description document. WebSphere Studio 5.1 provides development tools to author WSDL documents, generate interoperable services from the documents and unit test your newly created services.

Resources

- Apache Axis
- Web Services Description Language (WSDL)
- Web Services Description Working Group
- Web Services Interoperability (WS-I) Organization
- Web Service Validation Tools (WSVT) eclipse open source project
- WebSphere Studio

Jeffrey Liu is a software developer on the WebSphere Studio Application Developer Web Services Tools Team at the IBM Toronto Lab.
http://www.ibm.com/developerworks/websphere/library/techarticles/0401_liu_mandel/0401_liu_mandel.html
WPF 3.5 SP1 App Model with Jennifer Lee - Posted: May 16, 2008 at 12:51 AM - 23,143

Can't help but wonder why they just didn't call it "WPF 3.6" - it would be a lot clearer for developers, because some things just don't run on 3.5; most things are not updates/bug fixes but new features. I know this seems to be the Microsoft way, but in this video almost every sentence Jennifer says begins with "so". Normally I don't notice these kinds of things, but it was really off-putting for me! Sorry to rant - interesting information, but the delivery just grated on me.

Agreed -- or even just drop the whole SP and call it .Net 3.51. As hard as it may be to convince developers that it's "a service pack with features", convincing users is even harder. BTW I like the agility of the WPF team to come out with three really solid (looking) releases in such a short time frame. I just think that there is no reason to conflate a service pack with a release.

Why can't you guys remember to post a link to the LOW-RES file on every video that you do? It's very irritating when downloading with BITS to have to wait on 600Mb files.

Beavis has spoken! I've heard it's a northwestern thing!

I'm taking this to mean you want to be able to bind an HTML string to some property on the WebBrowser and cause the WebBrowser to navigate when the data bound to the property is updated. A few things here - Yes, the WebBrowser control is a DependencyObject. However, we would have to expose navigating to an HTML string as a property (specifically, a DependencyProperty) for the databinding to work, and for it to work in XAML, which we do not. The functionality of navigating to an HTML string is achieved by calling the WebBrowser.NavigateToString() method. You will need to implement your scenario in code by calling the NavigateToString() method.
Therefore, the answer is NO, you could not do this:

<WebBrowser DocumentText="{Binding MyHtmlProperty}" />

I can see how the above would be useful and convenient. The change would mean you could 'get' the HTML string loaded as well as 'set' it (which is what the method achieves). We may look into adding it for a future release. As always, your feedback and specific affected scenarios certainly help in these matters, so please keep it coming! Thanks! Jennifer

For my purposes I would only need to set the HTML string (a "OneWay" binding) since the browser control itself wouldn't be doing any updating. Perhaps I could define an attached property or something that called NavigateToString() when it was set? I think the old example you often see of a WPF app that reads RSS feeds would be well-served by a simple property on WebBrowser that lets you bind directly to a string - that way the browser could be bound to the selected item in the feed and simply display it as the user navigates through the list of items. Matt

Yes, that sounds like the right thing to do. You might also try writing a custom control that has a child WebBrowser, if you want encapsulation. We are working on putting together some WebBrowser samples, and I think the RSS scenario would be a good one, as well. Thanks! Jennifer

need more wpf controls~~ anyway, nice video~

This works when using the browser normally (sans the embedded WPF behavior):

- Open Internet Explorer
- Go to Tools > Internet Options
- Click the Advanced tab
- Scroll down to the Security options
- Check the option that reads: Allow active content to run in files on My Computer

Can the new WebBrowser in SP1 be used inside a VisualBrush?

A poor man's workaround for this problem is to create a user control that wraps the WebBrowser and offers the needed dependency properties.
Here is a quick example:

XAML Definition (MyWebBrowser.xaml):

<UserControl x:Class="RSSTest.MyWebBrowser" xmlns="" xmlns:
    <WebBrowser Name="mWebBrowser">
    </WebBrowser>
</UserControl>

C# code (MyWebBrowser.cs):

public partial class MyWebBrowser
{
    public MyWebBrowser()
    {
        InitializeComponent();
    }

    public WebBrowser WebBrowser
    {
        get { return mWebBrowser; }
    }

    public static readonly DependencyProperty StringHtmlProperty =
        DependencyProperty.Register("StringHtml", typeof(string), typeof(MyWebBrowser),
            new PropertyMetadata(new PropertyChangedCallback(OnStringHtmlChanged)));

    public string StringHtml
    {
        get { return (string)GetValue(StringHtmlProperty); }
        set { SetValue(StringHtmlProperty, value); }
    }

    private static void OnStringHtmlChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
    {
        MyWebBrowser me = (MyWebBrowser)d;
        me.WebBrowser.NavigateToString(me.StringHtml);
    }

    public static readonly DependencyProperty UriProperty =
        DependencyProperty.Register("Uri", typeof(Uri), typeof(MyWebBrowser),
            new PropertyMetadata(new PropertyChangedCallback(OnUriChanged)));

    public Uri Uri
    {
        get { return (Uri)GetValue(UriProperty); }
        set { SetValue(UriProperty, value); }
    }

    private static void OnUriChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
    {
        MyWebBrowser me = (MyWebBrowser)d;
        me.WebBrowser.Navigate(me.Uri);
    }
}

Binding Examples:

<local:MyWebBrowser
<local:MyWebBrowser
<local:MyWebBrowser
http://channel9.msdn.com/Blogs/AdamKinney/WPF-35-SP1-App-Model-with-Jennifer-Lee?format=progressive
Jumping from Angular1 to Angular

In Angular1, you used to create components or directives to make reusable web components. They still exist in Angular, and they are awesome! Most of the things you do in Angular will be components. For example:

- Where you used AngularJS directives, you will use components
- Where you used AngularJS modules, you will almost always use components
- Where you used AngularJS components, you will use components

Syntax

In Angular1, you would declare a component like this:

theModule.component('componentName', {
  templateUrl: 'path/to/template.html',
  controller: function() { ... }
});

But hey, how does it work? How do I use this component in a template? There is some Angular1 magic where camel case <componentName> will become <component-name>, but it's kinda odd, especially since you didn't ask for it. What if I want to specify a custom tag name that is different from the name of the component? Let's look at how Angular does it. You need a class and a @Component decorator:

@Component({
  selector: 'compo',
  templateUrl: './compo.html',
  styleUrls: ['./compo.css'],
})
class Compo {
  awesome: string = 'yes';
}

- selector is the tag name
- templateUrl is the same as in Angular1; you can also use an inline template
- styleUrls is the cool new feature, as you can add a list of stylesheets to include. You can also use inline styles

The class serves as a controller for your component. Cleaner. Leaner. Let's try it out:

In order to register and use your components, declare them in your @NgModule. Add them to the list of component declarations of your module:

@NgModule({
  ...
  declarations: [
    TheComponent,
    ...
  ],
  ...
})

Advanced

There are a bunch of other properties associated with the @Component decorator, like a list of providers it will use.
You need to reference these providers if your component uses them. Refer to the official Component documentation for a list of usable properties.

What happened to...

The $onInit, $onDestroy, $onChanges hooks?

Angular components have a series of lifecycle hooks you can use. A complete list and guide are available here. To use one of these hooks, OnInit for instance, first you will need to import it from the Angular core library:

import { OnInit } from '@angular/core';

Next, you will need to have your component class implement OnInit (and any other hook you need):

@Component({
  ...
})
class Compo implements OnInit {
  ...
}

Lastly, you can add a ngOnInit function to this component class:

ngOnInit() {
  this.logIt('On Init');
}

Transclusion?

AngularJS directives used to have a transclude property, but Angular components now support transclusion by default. Just use the <ng-content> tag in your template like so:

<div class="component">
  <ng-content><!-- Transcluded data, if any, will come here --></ng-content>
</div>

Then, call your component with data that will be transcluded automatically:

<my-component>Data to be transcluded</my-component>
https://tech.io/playgrounds/252/jumping-from-angular1-to-angular/components
You are browsing the Symfony 4 documentation, which changes significantly from Symfony 3.x. If your app doesn't use Symfony 4 yet, browse the Symfony 3.4 documentation.

Coding Standards

Symfony code is contributed by thousands of developers around the world. To make every piece of code look and feel familiar, Symfony defines some coding standards that all contributions must follow. These Symfony coding standards are based on the PSR-1, PSR-2 and PSR-4 standards, so you may already know most of them.

Making your Code Follow the Coding Standards

Instead of reviewing your code manually, Symfony makes it simple to ensure that your contributed code matches the expected code syntax. First, install the PHP CS Fixer tool and then run this command to fix any problem:

If you forget to run this command and make a pull request with any syntax issue, our automated tools will warn you about that and will provide the solution.

Symfony Coding Standards in Detail

If you want to learn about the Symfony coding standards in detail:

- Add a use statement for every class that is not part of the global namespace.

Naming Conventions

- Use camelCase for PHP variables, function and method names, and arguments (e.g. $acceptableContentTypes, hasSession());
- Use snake_case for configuration parameters and Twig template variables (e.g. framework.csrf_protection, http_status_code);
- Use namespaces for all PHP classes and UpperCamelCase for their names (e.g. ConsoleLogger);
- Use UpperCamelCase for naming PHP files (e.g. EnvVarProcessor.php) and snake_case for naming Twig templates and web assets (section_layout.html.twig, index.scss);
- A service name must be the same as the fully qualified class name (FQCN) of its class (e.g. App\EventSubscriber\UserSubscriber);
- If there are multiple services for the same class, use the FQCN for the main service and use lowercased and underscored names for the rest of the services. Optionally divide them in groups separated with dots (e.g.
something.service_name, fos_user.something.service_name);
- Use lowercase letters for parameter names (except when referring to environment variables with the %env(VARIABLE_NAME)% syntax);
- Add class aliases for public services (e.g. alias Symfony\Component\Something\ClassName to something.service_name).

Documentation

- Don't inline PHPDoc blocks, even when they contain just one tag (e.g. don't put /** {@inheritdoc} */ in a single line);
- When adding a new class or when making significant changes to an existing class, an @author tag with personal contact information may be added, or expanded. Please note it is possible to have the personal contact information updated or removed per request to the core team.

This work, including the code samples, is licensed under a Creative Commons BY-SA 3.0 license.
https://symfony.com/doc/master/contributing/code/standards.html
WebReference.com - Chapter 16 of Java Servlets Developer's Guide, from Osborne/McGraw-Hill (2/4)

Java Servlets Developer's Guide

Avoid Debugging Statements

Many developers like to use System.out.println() for debugging. Sometimes it's just a lot easier to use this crude debugging method instead of firing up an IDE. Some developers will move the actual printing of debug messages to a common method so that it can be turned on and off:

public void debug(String msg) {
    if (debugEnabled) {
        System.out.println(msg);
    }
}

You can even add a timestamp to each message when you print it, or change the target of the message (to a file, for example). In your servlet, you could then simply call the debug routine:

debug("The value is " + someValue);

To get debug messages, you simply need to make sure that debugging is enabled in the debug() method. Sounds great, doesn't it? Look out! This is a nonobvious performance drain. You will end up performing String concatenation of the message to be debugged before you realize whether the message will even be used. If debugging is not enabled, you will still end up creating unused String messages. I'm not suggesting that you throw away using a debug() method, but you should provide some way for the caller to check to see if debugging is enabled first before calling the debug() method:

if (isDebugEnabled()) {
    debug("The value is " + someValue);
}

Avoid Use of StringTokenizer

A common performance hotspot that I have run across when using profiling tools is the use of java.util.StringTokenizer. While there is convenience in using the StringTokenizer, there is a tradeoff in performance. If you are using a StringTokenizer simply to scan for a single delimiter, it is very easy to use simple String functions instead.
Consider the following example, which uses StringTokenizer to parse a String:

import java.util.StringTokenizer;

public class TestStringTokenizer {
    public static void main(String[] args) {
        String s = "a,b,c,d";
        StringTokenizer st = new StringTokenizer(s, ",");
        while (st.hasMoreTokens()) {
            String token = st.nextToken();
            System.out.println(token);
        }
    }
}

Executing this application will result in the following output:

a
b
c
d

The following code replaces the StringTokenizer with a loop that uses String.indexOf() to find the next delimiter:

public class TestIndexOf {
    public static void main(String[] args) {
        String s = "a,b,c,d";
        int begin = 0;
        int end = s.indexOf(",");
        while (true) {
            String token = null;
            if (end == -1) {
                token = s.substring(begin);
            } else {
                token = s.substring(begin, end);
            }
            System.out.println(token);
            // End if there are no more delimiters
            if (end == -1) break;
            begin = end + 1;
            end = s.indexOf(",", begin);
        }
    }
}

Is it easier to read and understand? Definitely not, but it is much more efficient than using the StringTokenizer. Life is full of tradeoffs; you sometimes must choose between readability and performance.

Avoid Unnecessary Synchronization

You always need to remember that servlets operate in a multithreaded, multiuser environment, with multiple threads working with a single instance of your servlet. When you decide to synchronize a block of code, it will result in only a single thread being able to operate on your servlet at any point in time. There are certainly cases in which you need to synchronize access to a particular block of code, such as generating a unique ID or atomically updating a series of counters. Instead of synchronizing an entire method (or worse yet, an entire class), synchronize only the blocks of code that really need to be synchronized. It is easy to synchronize an entire method just to be safe; try to avoid this if at all possible.

Created: July 31, 2002
Revised: July 31, 2002
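To make the chapter's block-level synchronization advice concrete, here is a short sketch. The IdGenerator class, its private lock object, and the counter logic are illustrative inventions, not code from this chapter:

```java
// Illustrative sketch (not from the chapter): a unique-ID generator that
// guards only its critical section instead of synchronizing a whole method.
public class IdGenerator {
    private final Object lock = new Object();
    private int lastId = 0;

    public int nextId() {
        // Unshared, thread-local work could happen here without any locking.
        synchronized (lock) {
            // Only this increment-and-read is serialized across threads.
            return ++lastId;
        }
    }

    public static void main(String[] args) {
        IdGenerator gen = new IdGenerator();
        System.out.println(gen.nextId()); // prints 1
        System.out.println(gen.nextId()); // prints 2
    }
}
```

Synchronizing on a private lock object, rather than declaring nextId() itself synchronized, keeps the serialized region as small as possible, which is exactly the tradeoff the chapter describes.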
http://www.webreference.com/programming/java/servletsguide/chap16/2.html
Clustericious::RouteBuilder - Route builder for Clustericious applications

version 0.9940

package MyApp;
use Mojo::Base qw( Clustericious::App );

package MyApp::Routes;
use Clustericious::RouteBuilder;

get '/' => sub { shift->render(text => 'welcome to myapp') };

This module provides a simplified interface for creating routes for your Clustericious application. To use it, create a Routes.pm that lives directly under your application's namespace (for example, above, MyApp's route module is MyApp::Routes). The interface is reminiscent of Mojolicious::Lite, because it was forked from there some time ago.

none

- Define an HTTP route that matches any HTTP command verb.
- Define an HTTP GET route.
- Define an HTTP HEAD route.
- Define an HTTP POST route.
- Define an HTTP PUT route.
- Define an HTTP DELETE route.
- Define a Websocket route.
- Require authentication for all subsequent routes.
- Require specific authorization for all subsequent routes.

Clustericious, Mojolicious::Lite.
http://search.cpan.org/~plicease/Clustericious/lib/Clustericious/RouteBuilder.pm
Unit Testing, Agile Development, Architecture, Team System & .NET - By Roy Osherove

I've been asked quite a lot recently whether one can write unit tests and isolate logical code that runs inside a Silverlight application. Up until today my initial answer was 'no' because Silverlight runs under different versions of mscorlib.dll. However, today I actually gave it a try and realized that writing unit tests (not integration tests, as the Silverlight test framework allows) against Silverlight-based code is possible and quite easy.

Just like any other code that relies on a third party platform (like SharePoint code), Silverlight-related code might have various dependencies. I'm going to show how to use Typemock Isolator to overcome a couple of simple Silverlight dependencies (using HtmlPage) and how to set up a test project against a Silverlight project (with NUnit or MsTest).

Assumptions:

To set up a test project against Silverlight using MSTest:

To set up a test project against Silverlight using NUnit or MbUnit:

How to break the dependencies

Now, here is a simple example of code that has a Silverlight dependency we'd like to test. Let's say we have a class in the Silverlight project called ChatSession (I'm basing this on ScottGu's Chat demo), but its constructor looks like this:

What if we wanted to control during our test whether the page is enabled or disabled? Here's one way to do it using Isolator:

Using Isolate.WhenCalled() we are able to circumvent any method (static or not) to return whatever we want, or throw an exception. Here's a more interesting case.
Let’s say we have a method that modifies the current page and shows some html to the user in a span tag: here is one way to write a test that makes sure that the right text is set into the message element in the page, without needing to have a real page present: There are several things to note here: If you are developing an open source silverlight project, it is important to note that there is a free full version of isolator for open source projects with full functionality. These are just the beginning of my journey into silverlight unit testing. I’m looking for good code examples that you might want to test, with various dependencies that need breaking. your comments are appreciated. Specifically, those dependencies aren't broken - they're intercepted. It's not entirely disingenuous to say that dependencies are broken, but the "break dependencies" terminology colloquially refers to a design activity rather than such a brute force countermeasure. Wow, very timely. I have a spike to work on just this very item. Thanks. Looks like Visual Studio has a bit of a issue with the code coverage feature and Silverlight applications. The test run config dialog disappears when you click on the code coverage list item. Roy, I was trying to run NUnit test from NUnit GUI. I received invalid thread access exception. It looks like the NUnit thread is not able to access the Silverlight UI thread for some reason. How to overcome this crossthread access issue? Your help would be appreciated. Exception details: Tests.UnitTest1.TestMethod1: System.TypeInitializationException : The type initializer for 'System.Windows.DependencyObject' threw an exception. 
----> System.UnauthorizedAccessException : InvalidCrossThreadAccess
at System.Windows.DependencyObject..ctor(UInt32 nativeTypeIndex)
at System.Windows.UIElement..ctor(UInt32 nKnownTypeIndex)
at System.Windows.FrameworkElement..ctor(UInt32 nKnownTypeIndex)
at System.Windows.Controls.Control..ctor(UInt32 nKnownTypeIndex)
at System.Windows.Controls.UserControl..ctor()
at CheckFree.ConsumerBanking.AccountDetailsControl..ctor() in C:\dev\SLProject\WintellectLatest\ConsumerBanking\AccountDetailsControl.xaml.cs:line 11
at Tests.UnitTest1.TestMethod1() in C:\dev\SLProject\WintellectLatest\Tests\UnitTest1.cs:line 26)
at Tests.UnitTest1.TestMethod1() in C:\dev\SLProject\WintellectLatest\Tests\UnitTest1.cs:line 22
--UnauthorizedAccessException
at MS.Internal.XcpImports.CheckThread()
at System.Windows.DependencyObject..ctor(UInt32 nativeTypeIndex, IntPtr constructDO)
at System.Windows.DependencyObject..ctor()
at System.Windows.DependencyObject.ManagedReferencesToken..ctor()
at System.Windows.DependencyObject..cctor()

I wrote one sample test like below:

namespace Tests
{
    /// <summary>
    /// Summary description for UnitTest1
    /// </summary>
    [TestFixture]
    public class UnitTest1
    {
        [Test, Isolated]
        public void TestMethod1()
        {
            Isolate.WhenCalled(() => App.IsDesignTime()).WillReturn(true);
            AccountsControl accountsControl = new AccountsControl();
            Assert.IsNotNull(accountsControl);
        }
    }
}
Do I need to add any configuration file into my silverlight project so that Nunit tests will have access to silverlight thread? --Chandra Roy sent me the CThru that was built with TypeMock5.2.3. When I used this CThru 5.2.3 alogn with TypeMock5.2.3, it has resolved the CrossThread access issue. Now, the silverlight calls are being intercepted successfully when UnitTests were executed. Thanks to Roy. I apprecieate your help. I hope it would help somebody else too who might face this issue.
http://weblogs.asp.net/rosherove/archive/2008/12/27/unit-testing-in-silverlight-land-with-typemock-isolator.aspx
Time - less than or equal to formula

ashleyfoozi, Dec 18, 2014 12:11 AM

I have an overtime sheet that calculates the difference between the time in and the time out on each line and puts the result in a text box called "TIMERow1_0". In another text box ("TRow1_0") on the same row I want to put the time to be paid. If the time is <= 4:00 then "TRow1_0" must show 4:00, and if greater than 4:00 it must show the actual time worked.

In South Africa if you work on a Sunday or public holiday you will be paid a minimum of 4 hours at double your hourly rate. So if you are called out for an emergency repair and it takes you 2 hours then you will be paid 4 hours. If you work more than 4 hours you will be paid at the actual time worked multiplied by double your hourly rate.

In the screenshot below, "TIME ACTUAL" = "TIMERow1_0" and "TIME" = "TRow1_0". The button does the time diff calculation and puts the result in "TIMERow1_0".

1. Re: Time - less than or equal to formula
try67, Dec 18, 2014 1:57 AM (in response to ashleyfoozi)

I'm assuming you're using a script to calculate the "actual time" field's value. In the same script you can also assign the value of the "time" field. If the hours digit is 4 or more, assign the actual value. If it's less (but more than zero, presumably), assign "4:00".

2. Re: Time - less than or equal to formula
gkaiseril, Dec 18, 2014 7:36 AM (in response to ashleyfoozi)

A lot would depend upon how you are calculating the time variables and how one is storing the value of the time worked. For PDF forms the formatted display can be very different from the stored value of a field. One can use the Math.max method to compare 2 or more values and return the maximum value of the items. If you are forcing the value to the time string:
If you are forcing the value to the time string // compute the time worked as a minimum of 4 or the actual time if more than 4 hours; if(this.getField( "TIMERow1_0").value != "") event.value = Math.max(this.getField( "TIMERow1_0").value, "4:00"); 3. Re: Time - less than or equal to formula.ashleyfoozi Dec 22, 2014 5:31 AM (in response to ashleyfoozi) Thank you for the replies, they pointed me in the right direction. However my code is only able to calculate correctly if the time in column "Time Actual" is a whole hour. The grey block in each row calculates the time difference between "Time In" and "Time Out" "Paid Time" is what I am battling with because if a person works 4 hours or less they will get paid 4 hours and anything over 4 hours is paid at actual time. The problem is in the "if" statement. I know that the variation should be converting the time to minutes where I can then say <= 240 minutes. Where am I going wrong. var hrsTime = parseInt(this.getField("TIMERow.0.0").value.split(":")[0]); var minTime = parseInt(this.getField("TIMERow.0.0").value.split(":")[1]); if (hrsTime + minTime <= 4) { var minRez = 00; var hrsRez = 4; } else { var minRez = minTime; var hrsRez = hrsTime; } this.getField("TRow.0.0").value = hrsRez + ":" + minRez; 4. Re: Time - less than or equal to formula.gkaiseril Dec 22, 2014 8:53 AM (in response to ashleyfoozi) A minute is 1/60 of an hour. So if you want the hours and minutes expressed as hour, you need to add the minutes times 1/60 to the hours so the total value is in hours. Because of issues with decimals in floating point notaion, I would convert the hours to minutes. 
var hrsTime = parseInt(this.getField("TIMERow.0.0").value.split(":")[0]);
var minTime = parseInt(this.getField("TIMERow.0.0").value.split(":")[1]);
var minRez = minTime;
var hrsRez = hrsTime;
var TotalMin = (hrsTime * 60) + Number(minTime);
var TotalMin = Math.max((hrsTime * 60) + Number(minTime), 240);
hrsRez = Math.floor(TotalMin / 60); // whole hours;
minRez = TotalMin % 60; // remainder in minutes;
this.field("TRow.0.0").value = hrsRez + ":" + minRez;

You are still going to have to work out your formatting. Since you will be converting the time string from the string of hours:minutes, I would look at a function to perform this task. I would also look at setting the computed field values to the decimal hours and then create a document level function for the display format of the value. This function is then called for the custom format using the event value as a parameter.

// document level functions;
function Time2Num(sTime) {
    // convert time string of HH:MM to minutes;
    var aTime = sTime.split(":");
    var nHours = aTime[0]; // number of hours;
    var nMinutes = Number(aTime[1]); // number of minutes;
    // return time as minutes;
    return (nHours * 60) + nMinutes;
}

function Num2Time(nMinutes) {
    // get whole hours from minutes;
    var nHrs = Math.floor(nMinutes / 60);
    // get remainder of minutes when divided by 60;
    var nMins = nMinutes % 60;
    // return formatted time string;
    return util.printf("%,1 1.0f", nHrs) + ":" + util.printf("%,102.0f", nMins);
}
// end document level functions;

// custom calculation for TRow.0.0;
var sTime = this.getField("TIMERow.0.0").value; // input field value;
var nTime = Math.max(Time2Num(sTime), 240); // convert to minutes;
event.value = Num2Time(nTime); // formatted result;
// end custom calculation script;
As I said earlier we have to pay overtime if a person works 4 hours or less on a Sunday and then actual time worked for more than 4 hours. We were using photocopied excel spreadsheets and the amount of errors there were each month was horrendous. Ever see people trying to calculate time on a calculator. Now with my fillable PDF the poor pay clerk will have a much easier time. Thank you again
https://forums.adobe.com/thread/1661642
In this and the next tutorial, we will be introducing you to Cucumber – a Behavior Driven Development (BDD) framework which is used with Selenium for performing acceptance testing. What You Will Learn: Cucumber Introduction Cucumber is a tool based on the Behavior Driven Development (BDD) framework, which is used to write acceptance tests. BDD is an extension of Test Driven Development, and it is used to test the system as a whole rather than a particular piece of code. #1) Feature Files: Feature files are an essential part of cucumber; they are used to write test automation steps or acceptance tests. A feature file can be used as a live document. The steps are the application specification. All feature files end with the .feature extension. Sample feature file: Feature: Login Functionality Feature In order to ensure Login Functionality works, I want to run the cucumber test to verify it is working Scenario: Login Functionality Given user navigates to SOFTWARETETINGHELP.COM When user logs in using Username as “USER” and Password “PASSWORD” Then login should be successful Scenario: Login Functionality Given user navigates to SOFTWARETETINGHELP.COM When user logs in using Username as “USER1” and Password “PASSWORD1” Then error message should be thrown #2) Feature: This gives information about the high-level business functionality (refer to the sample feature file above). Steps which are common to all the scenarios need to be placed in a Background. For instance: if a user needs to clear the database before each scenario, then those steps can be put in a Background. - And: And is used to combine two or more actions of the same type.
Example: Feature: Login Functionality Feature Scenario: Login Functionality Given user navigates to SOFTWARETETINGHELP.COM When user logs in using Username as “USER” And password as “password” Then login should be successful And Home page should be displayed Example of Background: Background: Given user logged in as database administrator And all the junk values are cleared #4) Scenario Outline: Scenario outlines are used when the same test has to be performed with different data sets. Let’s take the same example. We have to test login functionality with multiple different sets of username and password. Note: - As shown in the above example, column names are passed as parameters to the When statement. - In place of Scenario, you have to use Scenario Outline. - Examples are used to pass different arguments in tabular format. Vertical pipes are used to separate two different columns. An Examples table can contain many different columns. #5) Tags: Cucumber by default runs all scenarios in all the feature files. In real time projects, there could be hundreds of feature files which are not required to run at all times. For instance: feature files related to smoke tests need not run all the time. So if you mention a tag such as @SmokeTest in each feature file which is related to the smoke test, and run the cucumber test with the @SmokeTest tag, Cucumber will run only those feature files specific to the given tags. Please follow the below example. You can specify multiple tags in one feature file. Example of use of a single tag: @SmokeTest Example of use of multiple tags: As shown in the below example, the same feature file can be used for smoke test scenarios as well as for the login test scenario. When you intend to run your script for a smoke test, use @SmokeTest. Similarly, when you want your script to run for the login test, use the @LoginTest tag. Any number of tags can be mentioned for a feature file as well as for a scenario.
@SmokeTest @LoginTest Feature: Login Functionality Feature In order to ensure Login Functionality works, I want to run the cucumber test to verify it is working Scenario Outline: Login Functionality Given user navigates to SOFTWARETETINGHELP.COM When user logs in using Username as <username> and Password <password> Then login should be successful Examples: |username |password | |Tom |password1 | |Harry |password2 | |Jerry |password3 | Similarly, you can specify tags to run a specific scenario in a feature file. Please check the below example to run a specific scenario. Feature: Login Functionality Feature In order to ensure Login Functionality works, I want to run the cucumber test to verify it is working @positiveScenario Scenario: Login Functionality Given user navigates to SOFTWARETETINGHELP.COM When user logs in using Username as “USER” and Password “PASSWORD” Then login should be successful @negativeScenario Scenario: Login Functionality Given user navigates to SOFTWARETETINGHELP.COM When user logs in using Username as “USER1” and Password “PASSWORD1” Then an error message should be thrown #6) JUnit Runner: To run a specific feature file, cucumber uses the standard JUnit Runner; tags are specified in @Cucumber.Options. Multiple tags can be given, separated by commas. Here you can also specify the path of the report and the type of report you want to generate. Example of JUnit Runner: import cucumber.api.junit.Cucumber; import org.junit.runner.RunWith; @RunWith(Cucumber.class) @Cucumber.Options(format={"SimpleHtmlReport:report/smokeTest.html"}, tags={"@SmokeTest"}) public class JUnitRunner { } Similarly, you can give instructions to cucumber to run multiple tags. The below example illustrates how to use multiple tags in cucumber to run different scenarios.
import cucumber.api.junit.Cucumber; import org.junit.runner.RunWith; @RunWith(Cucumber.class) @Cucumber.Options(format={"SimpleHtmlReport:report/smokeTest.html"}, tags={"@SmokeTest", "@LoginTest"}) public class JUnitRunner { } #7) Cucumber Report: Cucumber generates its own HTML format. However, better reporting can be done using Jenkins or the Bamboo tool. Details of reporting are covered in the next topic on cucumber. Cucumber Project Setup: A detailed explanation of cucumber project setup is available separately in the next tutorial. Please refer to Cucumber Tutorial Part 2 for more information about project setup. Remember there are no extra software installations required for cucumber. Step definition patterns are regular expressions anchored with “^” and “$”. The user can use regular expressions to pass different test data: regular expressions take data from the feature steps and pass it to the step definitions. The order of parameters depends on how they are passed from the feature file. Please refer to the next tutorial for project setup and mapping between feature files and Java classes. Example: The below example illustrates how feature files can be implemented. In this example, we have not used any Selenium API. This is just to show how cucumber works as a standalone framework. Please follow the next tutorial for Selenium integration with cucumber.
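Under the hood, Cucumber matches each Gherkin step against the step-definition regular expressions and passes the captured groups in as parameters. A minimal plain-Java sketch of that matching, using java.util.regex directly rather than the Cucumber API (the class and method names here are illustrative):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class StepMatchDemo {
    // Same style of pattern as used in a @When annotation:
    // the two groups capture the username and password from the step text.
    static final Pattern LOGIN_STEP = Pattern.compile(
        "^user logs in using Username as \"(.*)\" and Password \"(.*)\"$");

    // Returns {username, password} if the step text matches, or null otherwise.
    static String[] matchLoginStep(String stepText) {
        Matcher m = LOGIN_STEP.matcher(stepText);
        if (!m.matches()) return null;
        return new String[] { m.group(1), m.group(2) };
    }

    public static void main(String[] args) {
        String[] captured = matchLoginStep(
            "user logs in using Username as \"USER\" and Password \"PASSWORD\"");
        System.out.println("Username: " + captured[0]);
        System.out.println("Password: " + captured[1]);
    }
}
```

This is essentially what happens when Cucumber reads a When step from a feature file and hunts for a matching method in your step-definition classes, as shown below.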
import cucumber.api.java.en.Given;
import cucumber.api.java.en.Then;
import cucumber.api.java.en.When;

public class LoginTest {
@Given("^user navigates to SOFTWARETETINGHELP.COM$")
public void navigatePage() {
System.out.println("Cucumber executed Given statement");
}
@When("^user logs in using Username as \"(.*)\" and Password \"(.*)\"$")
public void login(String username, String password) {
System.out.println("Username is: " + username);
System.out.println("Password is: " + password);
}
@When("^click the Submit button$")
public void clickTheSubmitButton() {
System.out.println("Executing When statement");
}
@Then("^Home page should be displayed$")
public void validatePage() {
System.out.println("Executing Then statement");
}
@Then("^login should be successful$")
public void validateLoginSuccess() {
System.out.println("Executing 2nd Then statement");
}
}
When you execute the cucumber runner class, cucumber starts reading the feature file steps. For example, when you execute @SmokeTest, cucumber will read the Feature step and the Given statement of the scenario. As soon as cucumber finds a Given statement, the same statement is searched for in your Java files. If a matching step is found in a Java file, cucumber executes the function specified for that step; otherwise cucumber will skip the step. Conclusion In this tutorial, we have covered the features of the cucumber tool and its usage in real time scenarios. Cucumber is a favourite tool on many projects as its feature files are easy to understand, readable and describe business functionality. In the next chapter, we will cover how to set up a cucumber – java project and how to integrate Selenium WebDriver with Cucumber. 52 thoughts on “Automation Testing Using Cucumber Tool and Selenium – Selenium Tutorial #30” Hi, Can we use cucumber with TestNG instead of junit? Thanks in advance Yes, we can.. We have to use cucumber-testng instead of cucumber-junit Thanks for detailed information on Cucumber, it helped a lot as a beginner.
Anyone knows answer please mail me Ideally, unlimited but good approach would be too write feature file according to component division of your application What are the maximum number of scenarios and scenario steps that we can use in a single feature file. Anyone knows answer please mail me Thanks………….. we can use any number of scenarios I want to use cucumber with java so how can i install it on my machine. This was not very helpful. It talks in such vague generalities, you’d pretty much have to already know Cucumber to learn it from this page. Thanks to the author. Very nice introduction to Cucumber. Much appreciated. can we test php application with selenium and cucumber.plz explain in brief. Yes, Selenium supports 8 programming languages so thats y we can test php applications also. Thanks for the nice introduction on cucumber. Can we use cucumber with TestNG instead of junit? Thanks in advance Thanks When i am trying to run cucumber class some scenarios is working after that driver object is showing null why it ts? Extend the class where you have initialise drivers Can we use testNG instead of Junit or cucumber only uses Junit runner? Very useful information for a beginner. @William Deich : Sir, Please guide us from where we can get more basic info we don’t know cucumber at all ? how to download cucumber on the system? Need a example where of feature file where More than two scenarios are written. Hello i need to know how can i use the data from database like ms sql as data table in example of scenario outline for my selenium scripts ? Awesome blog – all the concepts are cleared. Still testng not supportable in cucumber. @mallika: yes u can write more than one scenarios in feature file. @gurpreet: pls downlaod cucumber related jar’s and install cucumber plugin to eclipse This website is very useful. i learnt many new things. I appreciate the content and precise information. Thanks, Mahesh Thanks for giving good clarity. 
could you please give the same thing with the hepl of Ruby as well. Hope sometimes someone comes with TestNG with Cucumber so we can make parallel execution possible. Has anyone done parallel execution with Cucumber before? Hii!! Thanks for explaining cucumber. But please provide the material on gherkin as well. Hi Team, First of all let me tell you one thing is you all doing a fantastic job by giving us knowledge and making our future very best. I would like to ask how can i download cucumber ? Please provide me link or files ? @Given(“^user navigates to SOFTWARETETINGHELP.COM$”) misspelled your website. Very informative, Thanks ! Hi , I am looking for certification in cucumber or something similar to that. Could you please suggest what should I go for. Thanks Pankaj Hi friends Cucumber cannot be used with TestNg because TestNg has Inbuilt Annotations as @Test,@BeforeTest,@AfterTest——– TestNg is Very Good at Understanding with simple Code with Selenium and platform java but in cucumber we have to write Each Step Hi, I want to call a method, when each test scenario is failed.Which method I can call in Cucumber.? Thanks in advance. Regards, Poornima Awesome, very helpful for the beginners. Many thanks What are the maximum number of scenarios and scenario steps that we can use in a single feature file Any number. In our project we are following so you can use no of Scenario in one feature file its depend upon you I’M PLANING TO CONVERT MY TESTNG FRAMEWORK TO CUCUMBER, THIS CONTAIN IS VERY HELPFUL FOR ME. tHANX rAM scenario outline and maintaining data in the feature file makes the feature file clumsy and difficult to edit and update. is there a something like DataProvider in testng that is available in cucumber that can be used to load data from csv, xls. Can we use excel sheet to read scenarios instead of writing scenarios manually? can we run feature file without testrunner, but through scenario name Would you help me? 
How can I use selenium webdriver + cucumber + weinre? Is it possible to do that. Thank you try catch are not required BDD? how Hooks and html report generation can be integrated with this, can we get any single project which used Base class, step Definition, content management page, page object, Hooks, Runner class, Report in HTML, generic utilities. How to debug the test cases? I cannot understand yet how the cucumber scenarios are integrated with the code runned through the application. really helpful. Thank you. Looking more like this..I am completely a beginner to automation. No coding knowledge and only came here with the willing to learn. Will move ahead with the same interest believe and perseverance in my work :) Find another job if you have to use cucumber, it’s the worst thing you can have when it comes to writing tests as a developer. No code completion, no way of reusing most of the code (you reuse only methods which are already part of any decent framework like click/type), you have to search for tests with regex, no concept of page objects, no images (an image says more than 1000 words). God save you when you have to combine it with screenshots and visual testing. Also if the project has a monorepo it will be hellish to update any feature that has tests in multiple projects. Even as bad as it would take you a day to change something and a week to make the tests pass and get the approvals. Hello James, How to contact you? If possible, can you look me up? Thank you team. I have gone through all the topics like maven, cucumber BDD, Java and Selenium Webdriver and All the study material/ note are very useful in interview process. Could you please create some more blogs on advanced level interview questions with examples And one more point I would like to add like provide java programs with detailed information or explanation so that it will easy us to understand those who are new to java. Thanks and Best Regards Bhagyashree Zaware
https://www.softwaretestinghelp.com/cucumber-bdd-tool-selenium-tutorial-30/
There are three possibilities, but I can't find examples: I want to write some unit tests to see if I can handle them, but I don't know how to write them except for the first one, which seems to be new Foo { Property = "value" } where Property = "value" is an expression of type MemberAssignment. See also this MSDN article. EDIT This replaces the previous answer in response to the first comment. The classes I'm using in these examples are as follows: public class Node { //initialise non-null, so we can use the MemberMemberBinding private NodeData _data = new NodeData(); public NodeData Data { get { return _data; } set { _data = value; } } //initialise with one element so you can see how a MemberListBind //actually adds elements to those in a list, not creating it new. //Note - can't set the element to 'new Node()' as we get a Stack Overflow! private IList<Node> _children = new List<Node>() { null }; public IList<Node> Children { get { return _children; } set { _children = value; } } } public class NodeData { private static int _counter = 0; //allows us to count the number of instances being created. public readonly int ID = ++_counter; public string Name { get; set; } } Firstly, you can get the C# compiler to generate expressions for you, so you can investigate how they work, by doing the following: Expression<Func<Node>> e = () => new Node(); This will generate an inline expression that contains a call to Expression.New, passing the ConstructorInfo of the Node type. Open the output DLL in Reflector to see what I mean. I should first mention that these three expression types you ask about are typically passed in a MemberBinding[] array to Expression.MemberInit (which wraps the Expression.New call), or embedded within each other (since member initializers are inherently recursive). On to the plot... The MemberAssignment expression represents the setting of a single member of a new instance with the return value of a given expression. It is produced in code using the Expression.Bind factory method.
This is the most common that you'll see, and in C# code this is equivalent to the following: new NodeData() { /* start */ Name = "hello" /* end */ }; or new Node() { /* start */ Data = new NodeData() /* end */ }; The MemberMemberBinding represents the inline initialisation of the members of a member that is already initialised (i.e. newed, or a struct that can't be null anyway). It is created through the Expression.MemberBind and does not represent creating a new instance. Therefore, it differs from the MemberBind method by not taking a ConstructorInfo, but a reference to a Property Get method (property accessor). As a result, an attempt to initialise a member in this way that starts off null will result in a NullReferenceException. So, to generate this in code you do this: new Node() { /* start */ Data = { Name = "hello world" } /* end */}; This might seem a bit odd, but what's happening here is that the property get method for Data is being executed to obtain a reference to the already initialised member. With that in hand, the inner MemberBindings are then executed in turn, so effectively the above code is not overwriting Data, but doing this: new Node().Data.Name = "hello world"; And this is why this expression type is required, because if you've got to set multiple property values, you can't do it in a one-liner, unless there's some special expression/syntax to do it. If NodeData had another string member ( OtherName) that you also wanted to set at the same time, without initialiser syntax/expressions, you'd have to do this: var node = new Node(); node.Data.Name = "first"; node.Data.OtherName = "second"; Which isn't a one liner - but this is: var node = new Node() { Data = { Name = "first", OtherName="second" } }; Where the Data = bit is the MemberMemberBinding. I hope that's clear! 
Created by the Expression.ListBind method (requiring also calls to Expression.ElementInit), this is similar to the MemberMemberBinding (in that an object's member is not being created anew), but this time, it's an instance of ICollection/IList that is being added to with inline elements.: new Node() { /* start */ Children = { new Node(), new Node() } /* end */ }; So, these last two expressions are kinda edge-cases, but certainly are things that you could well come across, as they are clearly very useful. Finally, I enclose a unit test that you can run that will prove the assertions I make about these expressions - and if you reflect the method body, you'll see that the relevant factory methods are being called at the points I highlight with the comment blocks: [TestMethod] public void TestMethod1() { Expression<Func<Node>> e = () => new Node() { Data = new NodeData() }; Expression<Func<Node>> e2 = () => new Node() { Data = { Name = "MemberMemberBinding" } }; Expression<Func<Node>> e3 = () => new Node() { Children = { new Node(), new Node() } }; var f = e.Compile(); var f2 = e2.Compile(); var f3 = e3.Compile(); var node = f(); //proves that this data was created anew as part of the expression. Assert.AreEqual(2, node.Data.ID); var node2 = f2(); //proves that the data node's name was merely initialised, and that the //node data itself was not created anew within the expression. Assert.AreEqual(3, node2.Data.ID); Assert.AreEqual("MemberMemberBinding", node2.Data.Name); var node3 = f3(); //count is three because the two elements in the MemberListBinding //merely added two to the existing first null item. Assert.AreEqual(3, node3.Children.Count); } There you go, I think that should cover it. Whether you should be supporting them in your code is another matter! ;)
https://expressiontree-tutorial.net/knowledge-base/2917448/what-are-some-examples-of-memberbinding-linq-expressions-
Part 2: Nvidia CUDA tutorial (with code) - how to use GPU computing power to boost speed of options pricing valuation. Black-Scholes-Merton model boosted by CUDA in C++. Note: Part 1 may be found here — where I run tests of Python vs C++ vs CUDA performance. There are millions of financial transactions each day globally. The vast majority are conducted on derivatives markets (options, futures, etc. are typical examples). This means that, every day, thousands of financial institutions (banks, stock exchanges, etc.) have to value their financial holdings. Note: here is the link to the latest very interesting case connected with the pricing of derivatives. In this tutorial, we're going to introduce CUDA as a solution to speed up calculations for the valuation of options. What is an option, and what's the formula (in very short)? Simply put, an option is a financial contract. The buyer has an option (can decide) whether to buy/sell an underlying security. The underlying security may be almost anything, but usually it's a currency, a stock or a bond. Very simply, it means that the buyer can decide whether to buy/sell the asset at an agreed (strike) price rather than the current price on the market. The formula for the valuation of options looks a little complicated at first glance. In fact the options formula, known as the holy grail of investing, caused many banks to crash. It seems that rational models do not work properly in a greedy and irrational human environment. Here is the formula for a European call option: C = S·N(d1) − X·e^(−rT)·N(d2), where d1 = [ln(S/X) + (r + σ²/2)·T] / (σ·√T) and d2 = d1 − σ·√T. As you can see, the formula takes as parameters the current price S, strike price X, time to expiry T, risk-free interest rate r and volatility σ. Options are priced on a normal distribution assumption (which in reality may not hold). OK, let's go to the code! Fortunately for us, CUDA devs prepared an implementation for pricing options here. I'm going to use this code, explain the key details and run it on my PC.
There are three files: - BlackScholes.cu - BlackScholes_gold.cpp - BlackScholes_kernel.cuh BlackScholes.cu file The file consists of the main function, that is responsible for executing the whole program for pricing of options. The key parts here are: - Find cuda device helper — it is stored in a helper_functions.h. This function simply checks whether there is an NVIDIA GPU available in the machine. findCudaDevice(argc, (const char **)argv); 2. Malloc function — allocates a block of memory and returns a void pointer to the first byte of the allocated memory block. In our case we need to allocate memory of each data we are going to use. h_CallResultCPU = (float *)malloc(OPT_SZ); And we also need to free memory in the way: free(h_CallResultCPU); 3. Malloc in cuda — allocates a block of memory in a GPU. Next, we’re going to copy data from a host to GPU. checkCudaErrors(cudaMalloc((void **)&d_CallResult, OPT_SZ)); After calculation, we need to free a memory using: checkCudaErrors(cudaFree(d_CallResult)); 4. Memory copy cuda — it copies data from host to GPU so we are able to use GPU and make calculation. checkCudaErrors(cudaMemcpy(d_StockPrice, h_StockPrice, OPT_SZ, cudaMemcpyHostToDevice)); 5. CUDA’s special <<<1, 1>>>syntax. It tells a GPU device to perform a given operation defined by __global__ function. The first parameter stands for a number of blocks. The second parameter is for a number of threads in a thread block. Here is a detailed explanation for this: BlackScholesGPU<<<DIV_UP((OPT_N/2), 128), 128>>> In our case DIV_UP is used to dynamically decide on a number of blocks. Let’s see an example numbers like: #include <iostream> using namespace std;#define DIV_UP(a, b) ( ((a) + (b) — 1) / (b) )int main() { cout<<DIV_UP(128, 128) << endl; cout<<DIV_UP(400, 128) << endl; cout<<DIV_UP(1000, 128) << endl; return 0; } code outputs: 1 4 8 Note: for quick tests you can use an online c++ compiler here. 
File BlackScholes_gold.cpp The file consists of c++ implementation of black-scholes-merton model. The code is run on a CPU to serve as a benchmark and validator of results given by GPU. The code is pretty straight forward, there are only three functions with below parameters: static double CND(double d)static void BlackScholesBodyCPU( float &callResult, float &putResult, float Sf, //Stock price float Xf, //Option strike float Tf, //Option years float Rf, //Riskless rate float Vf //Volatility rate)extern "C" void BlackScholesCPU( float *h_CallResult, float *h_PutResult, float *h_StockPrice, float *h_OptionStrike, float *h_OptionYears, float Riskfree, float Volatility, int optN) Function CND: This function approximates a cumulative distribution function: Function BlackScholesBodyCPU: This function calculates call and put price of an option. Function BlackScholesCPU: This function is actually a loop to calculate many options with different parameters for testing. File BlackScholes_kernel.cuh The file consists of the same c++ code as in above .cpp file with slight additional syntax code. - __global__ function is also called “kernel” function. It’s the function that you may call from the host side using CUDA kernel call semantics ( <<<...>>>). In our case it’s defined in the above file a BlackScholesGPU function. __global__ void BlackScholesGPU 2. __launch_bounds__ function is used to specify manually number of registers for a program. What are registers then? — registers are very fast computer memory which are used to execute programs and operations efficiently. __launch_bounds__(128) 3. __device__ function can be called only from the device (GPU), and it is executed only in the device. This is very similar to __global__, but can be called only from a device. __device__ inline void BlackScholesBodyGPU Let’s run the program and see output: Note: program is executed on Windows 10 and NVIDIA RTX2080 Super (without boost enabled). 
As for starting point I left original data configured by CUDA devs: const int OPT_N = 4000000; const int NUM_ITERATIONS = 512; const int OPT_SZ = OPT_N * sizeof(float); const float RISKFREE = 0.02f; const float VOLATILITY = 0.30f; And here is the final output: GPU Device 0: “Turing” with compute capability 7.5Initializing data… …allocating CPU memory for options. …allocating GPU memory for options. …generating input data in CPU mem. …copying input data to GPU mem. Data init done.Executing Black-Scholes GPU kernel (512 iterations)… Options count : 8000000 BlackScholesGPU() time : 2.458114 msec Effective memory bandwidth: 32.545271 GB/s Gigaoptions per second : 3.254527BlackScholes, Throughput = 3.2545 GOptions/s, Time = 0.00246 s, Size = 8000000 options, NumDevsUsed = 1, Workgroup = 128Reading back GPU results… Checking the results… …running CPU calculations.Comparing the results… L1 norm: 1.787766E-07 Max absolute error: 1.192093E-05Shutting down… …releasing GPU memory. …releasing CPU memory. Shutdown done.[BlackScholes] — Test SummaryNOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.Test passed For 4 000 000 (actually 8kk) options it took 2.458114 msec (0.00246 s) — Amazing! All the best!
https://maciejzalwert.medium.com/part-2-nvidia-cuda-boost-with-code-how-to-use-gpu-computing-power-to-boost-speed-of-options-22206d393eec?source=post_internal_links---------2----------------------------
Instead of copying source and self-written header files into every multi-file Studio project, there's a better way. Instructions apply to Studio 4.18, SP2. Create a folder for header files that you've written and a second folder for reusable C source code modules. Mine are "My_Includes" and "My_LIBs". Put all header files you've written in one and your reusable source modules in the other. For Header Files: 1) Hit the "Edit current configuration options" button ( the white gear icon to the right of the "build" button ). a) OR do: Project-> "configuration options" 2) Hit the "Include Directories" button and then the yellow folder near the upper right hand corner; a browse box will open and you'll see a "..." button. Hit that and navigate to WHEREVER your include folder is and hit "ok" ( Of course you can add subdirectory folders ). For source files: Do the same with the "Libraries" button just below the "Include Directories" button for your source files. Such source files won't show up in your project folder, but that's ok. --------------------------------------------------- Now with a new project open, you have to add whichever files are needed. For Source Files: In the project tree, right click the yellow "source files" and do "add existing source file(s)". For Header Files: Do the same steps with the "Header Files" in the project tree. You must also use #include "some_header_file" in each project C source file where it's needed. See abcminiuser's tutorial Modularizing C Code: Managing large projects if you don't understand my last sentence. You don't have to add header files to that folder in the tree, but if you don't then the header files will appear in the "external dependencies" folder automatically. I like to see them in their own folder, apart from the system file "clutter" of the external dependencies folder. ---------------------------------------------------- Now in your project files you can just do, for example: #include <spi.h> OR #include "spi.h" etc.
for all header files you've written and will use. For your C modules, just navigate to your LIBs folder in the project view and add them. Advantages: 1) You don't have to hassle with navigate -> copy -> pasting them to the current project folder any more ! 2) If you modify your modules / header files while still developing said files, then when you save the changes they apply to the master file and you're done. Much better than saving the project copy and STILL having to jump through hoops to have to save those changes to the master through a copy -> navigate to master-> paste ( "overwrite" it ). It especially saves time if you're modifying multiple module / header files ( just hit "save all" ) ! I used Studio for years to do multi-file projects the "hard" way and I thought I was stuck with it ( since I never bothered to read the entire Studio manual to learn about the 2 options mentioned above ). So for any 'freaks that stumble onto this before you find out via Studio's help file ( I got the idea that the Studio IDE MIGHT be able to do these things from a project book that used the MPLAB IDE :roll: ), this tut's for YOU ! :wink: Jerome I was puzzled as to why AS5 wouldn't allow me to include delay.h even though found in toolchain. This solved the puzzle. 8) In Studio 5, you can open the project's properties either from toolbar menu Project (screenshot 1) or from Project Explorer window by right click on your project name (screenshot 2). Once in the properties page, go to GNU C Linker -> Libraries, then click on the "+" icon in Library search path. This brings up a file browser, select the path you want the linker to search (screenshot 3), and click "OK" all the way back. You can add more paths from all drives and network places. After this, instead of writing an awkward I simply do In fact, when I start to type " Attachment(s): This isn't a tutorial - it sounds more like a bug report. 
(and why is that everyone else using toolchain/AS5 can just use just as they have always done in AS4? Sounds like the problem may be local to your own installation) I'll move this to the AS5 forum where the developers can see/comment on this. Moderator Clawson, my post is clearly a tutorial, I don't know why you moved it !? :? Espespcially when I don't reference AS5 at all ( I'm glad my tutorial :wink: helped valleyman , though. ) It seems like another bug in AS5 ( what's new... ), I agree with you on that. I need you to move it back where it belongs ( actually both, since it helped valleyman... I need ma props, dude Apologies - thread now back where it belongs - it was valleyman's post I was responding too - not sure how I missed your post previously :oops: Valleyman, I suggest posting a link in AS5 forum to your post here or some indicator in that forum of this bug in AS5 ( it could as Clawson, suggested in your overall setup, Though my unfortunate example may have revealed a bug, I intended to illustrate that the same technique demonstrated in this tutorial can apply in AS5 as well. (I used all defaults from fresh AS 5, project created with AS5 template. If relative path util/ is expected to work from ASF without explicit linker directive, it's got to be a tool chain bug.) Can you explain how this would be done in Studio 6.1? I cannot find the configurations options. I am copying the Header file with no success. Thanks Hello AVRfreaks, I modified the SD Arduino library and I want to include this to my current project I've tried to follow the steps and this is the result: The header files show up in the "External dependencies" folder During the linking process he cannot find the files apparently I'm using AVR Studio 4 because I have a cheap programmer which doesn't work with newer versions. Thanks in advance and Merry Christmas to you guys :wink: Attachment(s): It is not enough to only include the header files. 
You will also need to add the corresponding C/C++ source files to the project so that they get compiled (they will NOT get compiled merely because you've #included the header files). It is not the (.h) header files that the linker cannot find. It is the (.o) object files that come from compiling the C/C++ source files. Since you've not added the source files to the project they are not compiled, and because of that there are no such object files for the linker to consume, and so the linker must protest with an error.

By "include directories" I added the "SD" directory and the "SD/Utility" directory. By "libraries" I first added both the directories and used "add object" to add all of my files one by one. Did I make a mistake here? Thanks in advance :) Attachment(s):

I haven't checked my emails in a while... Merry CHRISTmas! Yes, you made mistakes. Remove everything you added using "add objects". Hit the "Libraries" button and then the yellow folder near the upper right hand corner; a browse box will open and you'll see 3 "...". Hit those and navigate to WHEREVER your C source files are and select that folder (do the same for the header files). Finally, to add source files to a project, in the project tree, right-click "source files" and do "add existing source file(s)". This way, you don't need to copy the file from that dir. and then paste it into a project folder. You could do the same for header files and they'll show up in the header file folder on the tree, or just leave it as is and they show under "external ..." I've added a bit more detail to the tut. for how to navigate to the source/header.

I have the same problem, how do I add .h files and .c files to WinAVR Programmer's Notepad?

Wrong place to ask that question. (a) This is a tutorial, so posts should only be feedback about post #1, and (b) your question has nothing to do with this tutorial anyway. So this is locked. (Anyone with real feedback to add, contact a moderator to unlock this.)
(oh and by the way this is 2016 not 2006 - no one uses Programmers Notepad as an IDE for WinAVR these days!)
http://www.avrfreaks.net/comment/1926401
tinytag 0.6.0

Read music meta data and length of MP3, OGG, FLAC and Wave files

tinytag is a library for reading music meta data of MP3, OGG, FLAC and Wave files with python.

Features:
- Read tags and length of music files
- supported formats
  * MP3 (ID3 v1, v1.1, v2.2, v2.3+)
  * Wave
  * OGG
  * FLAC
- pure python
- supports python 2 and 3 (without 2to3)
- is tested
- Just a few hundred lines of code (just include it in your project!)

tinytag only provides the minimum needed for _reading_ MP3, OGG, FLAC and Wave meta-data. It can determine track number, total tracks, title, artist, album, year, duration and more.

    from tinytag import TinyTag
    tag = TinyTag.get('/some/music.mp3')
    print('This track is by %s.' % tag.artist)
    print('It is %f seconds long.' % tag.duration)

List of possible attributes you can get with TinyTag:

    tag.album         # album as string
    tag.artist        # artist name as string
    tag.audio_offset  # number of bytes before audio data begins
    tag.bitrate       # bitrate in kBits/s
    tag.duration      # duration of the song in seconds
    tag.filesize      # file size in bytes
    tag.samplerate    # samples per second
    tag.title         # title of the song
    tag.track         # track number as string
    tag.track_total   # total number of tracks as string
    tag.year          # year or date as string

supported python versions:
- 2.6
- 2.7
- 3.2
- 3.3
- pypy

and possibly more.
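For a sense of the kind of parsing a tag reader like this does internally, here is a minimal, self-contained sketch of reading an ID3v1 tag (the fixed 128-byte block at the end of an MP3 file). This is not tinytag's actual implementation — `read_id3v1` and the synthetic buffer below are made up purely for illustration:

```python
def read_id3v1(data):
    """Parse an ID3v1 tag from the last 128 bytes of an MP3 file's contents.

    Returns a dict of tag fields, or None if no ID3v1 tag is present.
    """
    tag = data[-128:]
    if len(tag) < 128 or not tag.startswith(b'TAG'):
        return None

    def text(raw):
        # ID3v1 fields are fixed-width, NUL-padded, latin-1 encoded
        return raw.split(b'\x00', 1)[0].decode('latin-1').strip()

    return {
        'title':  text(tag[3:33]),
        'artist': text(tag[33:63]),
        'album':  text(tag[63:93]),
        'year':   text(tag[93:97]),
    }

# synthetic example: some fake audio bytes followed by a hand-built ID3v1 tag
fake_tag = (b'TAG'
            + b'My Song'.ljust(30, b'\x00')
            + b'Some Artist'.ljust(30, b'\x00')
            + b'Some Album'.ljust(30, b'\x00')
            + b'2014'
            + b'\x00' * 31)  # comment (30 bytes) + genre (1 byte)
mp3_bytes = b'\xff\xfb' * 100 + fake_tag
print(read_id3v1(mp3_bytes)['artist'])  # → Some Artist
```

Real-world tags (ID3v2, OGG/FLAC comments) are considerably more involved, which is what a library like tinytag handles for you.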
- Downloads (All Versions): - 0 downloads in the last day - 20 downloads in the last week - 832 downloads in the last month - Author: Tom Wallroth - License: GPLv3 - Categories - Development Status :: 4 - Beta - Environment :: Web Environment - Intended Audience :: Developers - License :: OSI Approved :: GNU General Public License v3 (GPLv3) - Operating System :: OS Independent - Programming Language :: Python - Programming Language :: Python :: 2.6 - Programming Language :: Python :: 2.7 - Programming Language :: Python :: 3 - Programming Language :: Python :: 3.2 - Programming Language :: Python :: 3.3 - Topic :: Internet :: WWW/HTTP - Topic :: Multimedia - Topic :: Multimedia :: Sound/Audio - Package Index Owner: devsnd - DOAP record: tinytag-0.6.0.xml
https://pypi.python.org/pypi/tinytag/0.6.0
#if defined(LIBC_SCCS) && !defined(lint)
/*static char *sccsid = "from: @(#)pmap_clnt.c 1.37 87/08/11 Copyr 1984 Sun Micro";*/
/*static char *sccsid = "from: @(#)pmap_clnt.c 2.2 88/08/01 4.0 RPCSRC";*/
static char *rcsid = "$Id: pmap_clnt.c,v 1.4 2002/02/19 20:36:23 epeyton Exp $";
#endif

/*
 * pmap_clnt.c
 * Client interface to pmap rpc service.
 *
 * Copyright (C) 1984, Sun Microsystems, Inc.
 */

#include <string.h>
#include <rpc/rpc.h>
#include <rpc/pmap_prot.h>
#include <rpc/pmap_clnt.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <net/if.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>

static struct timeval timeout = { 5, 0 };
static struct timeval tottimeout = { 60, 0 };

void clnt_perror();

/*
 * Set a mapping between program,version and port.
 * Calls the pmap service remotely to do the mapping.
 */
bool_t
pmap_set(program, version, protocol, port)
	u_long program;
	u_long version;
	int protocol;
	u_short port;
{
	/* The client-setup lines below were lost in extraction; they follow
	 * the canonical BSD/SunOS version of this file. */
	struct sockaddr_in myaddress;
	int socket = -1;
	CLIENT *client;
	struct pmap parms;
	bool_t rslt;

	if (get_myaddress(&myaddress) != 0)
		return (FALSE);
	client = clntudp_bufcreate(&myaddress, PMAPPROG, PMAPVERS,
	    timeout, &socket, RPCSMALLMSGSIZE, RPCSMALLMSGSIZE);
	if (client == (CLIENT *)NULL)
		return (FALSE);
	parms.pm_prog = program;
	parms.pm_vers = version;
	parms.pm_prot = protocol;
	parms.pm_port = port;
	if (CLNT_CALL(client, PMAPPROC_SET, xdr_pmap, &parms, xdr_bool, &rslt,
	    tottimeout) != RPC_SUCCESS) {
		clnt_perror(client, "Cannot register service");
		return (FALSE);
	}
	CLNT_DESTROY(client);
	(void)close(socket);
	return (rslt);
}

/*
 * Remove the mapping between program,version and port.
 * Calls the pmap service remotely to do the un-mapping.
 */
bool_t
pmap_unset(program, version)
	u_long program;
	u_long version;
{
	/* Client setup restored as above (lost in extraction). */
	struct sockaddr_in myaddress;
	int socket = -1;
	CLIENT *client;
	struct pmap parms;
	bool_t rslt;

	if (get_myaddress(&myaddress) != 0)
		return (FALSE);
	client = clntudp_bufcreate(&myaddress, PMAPPROG, PMAPVERS,
	    timeout, &socket, RPCSMALLMSGSIZE, RPCSMALLMSGSIZE);
	if (client == (CLIENT *)NULL)
		return (FALSE);
	parms.pm_prog = program;
	parms.pm_vers = version;
	parms.pm_port = parms.pm_prot = 0;
	CLNT_CALL(client, PMAPPROC_UNSET, xdr_pmap, &parms, xdr_bool, &rslt,
	    tottimeout);
	CLNT_DESTROY(client);
	(void)close(socket);
	return (rslt);
}
http://opensource.apple.com/source/Libinfo/Libinfo-129.1/rpc.subproj/pmap_clnt.c
If a function is declared with a reference parameter, e.g. afunction(int & value), then within the function, value is actually the external variable and any changes to value affect the external variable.

In example 8_2 (8_2.cpp), testfunction is called twice, on variables a and b:

    #include "stdafx.h"
    #include <iostream>

    void testfunction(int & value)
    {
        value++;
    }

    int main(int argc, char* argv[])
    {
        int a = 10;
        int b = 20;
        testfunction(a);
        testfunction(b);
        std::cout << "a = " << a << " b = " << b << std::endl;
        return 0;
    }

The result is

    a = 11 b = 21

showing that a and b were both changed by the calls to testfunction.

Const Parameters

If you put the word const before the type then any attempt to change the parameter will cause a compilation error. You've indicated to the compiler that it is a const, so the compiler will prevent it being altered. But as we've just discovered a way to alter variables passed in to functions, why try to prevent that?

The answer: for single variables like ints or even floats, there is little point in making them const and reference. Might as well just pass them by value. For larger types such as structs and arrays, though, it makes a lot of sense, because if passed by value they have to be copied. Pass by reference doesn't make a copy; it just passes in a pointer. This is far faster to execute.

On the next page: Learn about references and return values.
http://cplus.about.com/od/learning1/ss/references_2.htm
The Storage Team Blog about file services and storage features in Windows and Windows Server. If you’ve used the Distributed File System snap-in in Windows Server 2003, you might’ve seen the checkbox called “Publish this root in Active Directory.” You might’ve even checked it and wondered what benefits it provides. Does it somehow make a namespace more fault-tolerant? Does this make the namespace part of Active Directory? Just what does it do?? And if you’re an eagle-eyed fan of DFS, you might have noticed that (A) nothing seems to happen when it’s checked or unchecked and (B) the checkbox doesn’t exist in the new DFS Management snap-in in Windows Server 2003 R2. So here’s the scoop: The checkbox is broken in Windows Server 2003. I think the original intent was to make the namespace appear in Active Directory searches. But even if the checkbox worked, this wouldn’t affect the functionality or fault tolerance of the namespace in any way whatsoever. What makes a namespace fault tolerant is where and how the root is hosted. For a stand-alone namespace to be fault tolerant, the root must be created on a server cluster. For a domain-based namespace to be fault tolerant, you need at least two domain controllers in the domain (to provide referrals to the namespace) and two namespace servers hosting the root (to provide referrals to folder targets). This last point about domain-based namespaces is often misunderstood, too. Customers think that because the namespace is in Active Directory, this somehow makes the namespace fault tolerant. The AD aspect is more about consistency than redundancy. All namespace servers will poll a domain controller periodically to obtain the latest DFS metadata, helping ensure that all the namespace servers provide referrals that are consistent. You still need an operational domain controller and namespace server (which can be the same server) to provide referrals. Want to know more about how a namespace works? 
Check out the DFS Technical Reference.

--Jill
http://blogs.technet.com/b/filecab/archive/2006/08/07/444824.aspx
Posted: 07/05/09 06:03 AM / Forum: Flash

If you want a serious artist then you're going to have to show some decent programming examples.

Posted: 07/05/09 05:19 AM

At 7/5/09 04:38 AM, Jereminion wrote:

    function moveBullet(event:Event) {
        for (var u:int = bullets.length-1; u>=0; u--) {
            removeChild(bullet[u])
        }
    }

    after i declare the properties of the bullet initially with newBullet, i use bullets[u] to modify it dynamically. i try to remove bullets[u] within the array using removeChild(bullets[u]) and it says removeChild(bullet[u])

You're referring to 'bullet' here, but everywhere else you've referred to 'bullets'. Also, once that's done you'll still get an error, because you're removing the child yet leaving it in the array, so flash is trying to remove it again (seeing as it's on an ENTER_FRAME).

Posted: 07/03/09 06:04 AM

At 7/3/09 04:46 AM, Penboy wrote: I submitted a picture to the art portal a while back, but now I can't find it. I've been searching everywhere, trying all the keywords I could remember. lolwuthelp?

This?

Posted: 07/02/09 05:21 AM

16,000 is the limit for frames, layers, symbols etc.

Posted: 07/01/09 07:59 AM

At 7/1/09 07:43 AM, KaynSlamdyke wrote: Phew... Someone needs to write this up for everyone...

This is the site I found when reading up on this stuff a few months ago. The source code itself is XNA but it gives a pretty detailed explanation along with the link at the top ('Quadtree Code Design'). There is also this video tutorial. It is C++ but the first ~5 mins explain collision detection and octrees (3D quadtrees), and he does actually explain quadtrees using a 2D image. Not sure how useful other people will find this though.

Posted: 07/01/09 06:59 AM

At 6/29/09 06:12 AM, 4urentertainment wrote: It's illegal to use copyrighted music. Even if one second of it. The music in the audio portal however is free and not copyrighted.
It is copyrighted. It's just under a license such that you can use it in a flash without permission, but if you're making a profit from it, whether it be from sponsorship or ad revenue, you have to get the permission of the author; could be that they say you can use it for free, or that they want x% or $y.

Posted: 07/01/09 03:56 AM

Uncheck 'export in first frame'. Then create a blank keyframe after the preloader that will never actually play, with a MovieClip containing the sound (not attached via AS). That way the sound will be initially loaded by the preloader.

Posted: 07/01/09 03:53 AM

In AS3 it is gotoAndStop(1, "animation");. It used to be (scene, frame) or (frame). It's more logical now. If you just have ("animation") then it will treat it as a frame label, not a scene.

Posted: 07/01/09 02:37 AM

At 7/1/09 02:27 AM, l300l30 wrote:

    enterFrame event{
        if( man.hitTest(this) ){
            gotoAndPlay(2);
        }
    }

How about you actually learn? Read. And read the links at the bottom too.

Posted: 07/01/09 02:21 AM

You're missing a closing bracket.

    if ( man.hitTest(Finish Point))

And make sure it is in an enterFrame event, obviously.

Posted: 07/01/09 02:13 AM

Of course that's going to happen. You can't expect the whole community to stay the same people and develop with you. Those questions are always going to exist here. If you want more advanced problems, both to ask and to answer, then try a site like Dream.In.Code, a site dedicated to coding. They're a lot stricter on what you can ask and don't have the noobs there, as the only reason to go there is for the forums, unlike NG. You have to post some sort of code or show some attempt at solving a problem by yourself before asking. I'm sure there are plenty of other sites like it. But this means the flash forum is a lot slower than here.

Posted: 06/30/09 07:46 AM

At 6/30/09 06:52 AM, fluffkomix wrote: teasing the emos is fun. especially when you got a friend in the same room logged in with you.
emocitychat.com That place is weird.. The moment I enter I got a PM 'Hey hey. Are you a vampire?'.. Yes... Yes I am a vampire. I can't tell if these people are retarded, or just trolls. If they're not trolling they need help.. seriously. Posted: 06/29/09 11:48 AM Happy birthday :3 Posted: 06/28/09 04:29 PM At 6/28/09 03:37 PM, knugen wrote: USA is up by 2-1 (was 2-0 in half time) against Brazil. This tournament must be rigged ;) Lost 3-2 lol Posted: 06/27/09 05:35 PM At 6/26/09 09:02 PM, Zyphonee wrote: Too goddamn fucking soon. How long do you have to wait until it becomes 'acceptable'? Is there some sort of cut off point? At 6/27/09 05:29 PM, matrix5565 wrote: If only there was a web site with a bunch of Flash games and a BBS... Kongregate? Posted: 06/26/09 01:55 PM You're more likely to get attention if you actually post something convincing on here. There are a lot of these sorts of threads and unless you show what you can do in this thread I doubt you'll get many people contacting you as they'll assume you're the 'usual noob'. Posted: 06/26/09 07:38 AM At 6/26/09 12:13 AM, Toast wrote: Weird intellectucal performance tendancies I have that same thing. I didn't really notice until a few weeks ago. I also often have to miss a nights sleep to return to normal sleeping times. Posted: 06/25/09 04:04 PM Can someone please enlighten me as to why Spotify doesn't have targeted adverts? Most of their adverts are for songs/albums etc so surely they should be targeted? They know perfectly well what sort of music I listen to yet I get dumb adverts for Britney Spears and the like. Is it just that they don't get enough different adverts? Posted: 06/24/09 04:19 PM Spain beaten by USA at football ('soccer')... what's going on here? 
Posted: 06/21/09 03:04 PM At 6/21/09 02:59 PM, Yambanshee wrote: for(i in _root){ i.onEnterFame = function () { //CODE } } for (var i in _root) { if (typeof(_root[i]) == "movieclip") _root[i].swapDepths(_root[i]._y); } Posted: 06/20/09 03:52 PM At 6/20/09 03:31 PM, AlmostDead1 wrote: _root.onEnterFrame = function() { if (_root.score.text == 4) { gotoAndPlay(4); } if (_root.health.text == 0) { gotoAndPlay(1); } if (_root.displayTime == 0) { gotoAndPlay(1); } }; The text in dynamic text boxes is a string; you're comparing it to a number. if (_root.score.text == "4") { gotoAndPlay(4); } if (_root.health.text == "0") { gotoAndPlay(1); } Posted: 06/20/09 02:52 PM The steering is way to quick/tight making it too hard to control properly. Posted: 06/20/09 02:50 PM Or something like: for(var i:Number = 1; i <= 250; i++) { _root["rabbit"+i].onRollOver = function() { // stuff } } Posted: 06/19/09 02:52 PM Although; I have faith in the Art mods as quite a few of the authors of terrible things which were around earlier, looking at their profiles, have been un-scouted. Posted: 06/19/09 01:10 PM There's a lot of disappointing stuff on the art portal already, that takes away the emphasis of the amazing stuff. It seems to be a decent artist or mod scouts an average artist, who deserves it nonetheless. Then they scout someone else who's art is pretty poor and then there is a trail of noob-scouting-noob... Half this stuff isn't even 'art', IMO :'C Posted: 06/19/09 12:20 PM I'm fairly sure you can't do it like that. Try something along the lines of: import flash.display.BitmapData; import flash.geom.Matrix; var bmp:BitmapData = new BitmapData(550, 400); var m:Matrix = new Matrix(); m.tx = theMC._x; m.ty = theMC._x; bmp.draw(theMC, m); trace(bmp.getPixel(50, 50)); Posted: 06/16/09 02:16 AM AS2 AS3 You can use hoursUTC(AS3) or getUTCHours()(AS2) to get the hours in universal time. You can do a similar thing for the other values you need. 
Posted: 06/13/09 04:25 PM

My bet is that player = the movieclip you're putting this code in, in which case it's obviously going to return true. The syntax has been explained to you. Do not just reject the code and say "Nope"; if you want further help you're going to have to give more detail or upload the fla, or no one can help you. What doesn't work about it?

Posted: 06/13/09 04:21 PM

At least read the link. It is something that they, quote, "run every year". Yes, I'm pretty sure they did one for games before, but this is not that and it is definitely for now, not "years ago". "Entries must be submitted by 4pm on Tuesday 25th August 2009."

Posted: 06/13/09 04:07 PM

Sigh, read the post. {x, y} is the coordinate you're testing against. x and y are numbers. If you want to test against (the registration point of) a movieclip then you have to put its coordinates there, not 'x, y'. As mentioned, you need this._x, but you should be coding in the timeline; not in individual MCs.
http://www.newgrounds.com/bbs/search/author/unknownfury
Defines some capabilities of the KHR2 humanoid robots.

    #include <cmath>
    #include <stdlib.h>
    #include "CommonInfo.h"

Contains information about a KHR2 humanoid robot, such as number of joints, LEDs, etc.

- The order in which inputs should be stored; holds offsets to different buttons in WorldState::buttons[]
- Corresponds to entries in ERS7Info::PrimitiveName, defined at the end of this file
- the ordering of arms
- the ordering of legs

Defines some capabilities of the KHR2 humanoid robots. Definition in file KHR2Info.h.

- a flag so we undef these after we're done - do you have a cleaner solution? Definition at line 271 of file KHR2Info.h.
- Just a little macro for converting degrees to radians. Definition at line 269 of file KHR2Info.h.
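The degrees-to-radians macro mentioned at line 269 is just the standard conversion radians = degrees · π / 180. A one-line equivalent, written in Python purely for illustration (the original is a C preprocessor macro, and `deg_to_rad` is a made-up name):

```python
import math

def deg_to_rad(deg):
    # radians = degrees * pi / 180 — the same conversion the macro performs
    return deg * math.pi / 180.0

print(deg_to_rad(180.0))  # → 3.141592653589793
```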
http://tekkotsu.org/dox/KHR2Info_8h.html
5.28: The copy() Surface Method

    def slideAnimation(board, direction, message, animationSpeed):
        # Note: This function does not check if the move is valid.

        blankx, blanky = getBlankPosition(board)
        if direction == UP:
            movex = blankx
            movey = blanky + 1
        elif direction == DOWN:
            movex = blankx
            movey = blanky - 1
        elif direction == LEFT:
            movex = blankx + 1
            movey = blanky
        elif direction == RIGHT:
            movex = blankx - 1
            movey = blanky

        # prepare the base surface
        drawBoard(board, message)
        baseSurf = DISPLAYSURF.copy()
        # draw a blank space over the moving tile on the baseSurf Surface.
        moveLeft, moveTop = getLeftTopOfTile(movex, movey)
        pygame.draw.rect(baseSurf, BGCOLOR, (moveLeft, moveTop, TILESIZE, TILESIZE))

        for i in range(0, TILESIZE, animationSpeed):
            # animate the tile sliding over
            checkForQuit()
            DISPLAYSURF.blit(baseSurf, (0, 0))
            if direction == UP:
                drawTile(movex, movey, board[movex][movey], 0, -i)
            if direction == DOWN:
                drawTile(movex, movey, board[movex][movey], 0, i)
            if direction == LEFT:
                drawTile(movex, movey, board[movex][movey], -i, 0)
            if direction == RIGHT:
                drawTile(movex, movey, board[movex][movey], i, 0)

            pygame.display.update()
            FPSCLOCK.tick(FPS)

The copy() method of Surface objects will return a new Surface object that has the same image drawn to it. But they are two separate Surface objects. After calling the copy() method, if we draw on one Surface object using blit() or the Pygame drawing functions, it will not change the image on the other Surface object. We store this copy in the baseSurf variable on line 20 [273].

Next, we paint another blank space over the tile that will slide. This is because when we draw each frame of the sliding animation, we will draw the sliding tile over different parts of the baseSurf Surface object. If we didn't blank out the moving tile on the baseSurf Surface, then it would still be there as we draw the sliding tile.
In that case, here is what the baseSurf Surface would look like: And then what it would look like when we draw the "9" tile sliding upwards on top of it: You can see this for yourself by commenting out line 23 [276] and running the program. In order to draw the frames of the sliding animation, we must draw the baseSurf surface on the display Surface, then on each frame of the animation draw the sliding tile closer and closer to its final position where the original blank space was. The space between two adjacent tiles is the same size as a single tile, which we have stored in TILESIZE. The code uses a for loop to go from 0 to TILESIZE. Normally this would mean that we would draw the tile 0 pixels over, then on the next frame draw the tile 1 pixel over, then 2 pixels, then 3, and so on. Each of these frames would take \( 1/30^{th} \) of a second. If you have TILESIZE set to 80 (as the program in this book does on line 12) then sliding a tile would take over two and a half seconds, which is actually kind of slow. So instead we will have the for loop iterate from 0 to TILESIZE by several pixels each frame. The number of pixels it jumps over is stored in animationSpeed, which is passed in when slideAnimation() is called. For example, if animationSpeed was set to 8 and the constant TILESIZE was set to 80, then the for loop and range(0, TILESIZE, animationSpeed) would set the i variable to the values 0, 8, 16, 24, 32, 40, 48, 56, 64, 72. (It does not include 80 because the range() function goes up to, but not including, the second argument.) This means the entire sliding animation would be done in 10 frames, which would mean it is done in \( 10/30^{th} \) of a second (a third of a second) since the game runs at 30 FPS. Lines 29 [282] to 36 [289] makes sure that we draw the tile sliding in the correct direction (based on what value the direction variable has). After the animation is done, then the function returns. 
Notice that while the animation is happening, any events being created by the user are not being handled. Those events will be handled the next time execution reaches line 70 in the main() function or the code in the checkForQuit() function.
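The frame arithmetic described above can be checked in isolation. Here is a small sketch in plain Python using the tutorial's TILESIZE and FPS values (`slide_offsets` is a made-up helper, not part of the game's code):

```python
TILESIZE = 80  # pixel size of one tile, as set on line 12 of the tutorial
FPS = 30       # the game's frame rate

def slide_offsets(tile_size, animation_speed):
    """Pixel offsets the sliding tile is drawn at on successive frames,
    mirroring range(0, TILESIZE, animationSpeed) in slideAnimation()."""
    return list(range(0, tile_size, animation_speed))

offsets = slide_offsets(TILESIZE, 8)
print(offsets)       # → [0, 8, 16, 24, 32, 40, 48, 56, 64, 72]
print(len(offsets))  # → 10 frames, i.e. 10/30 of a second at 30 FPS
```

With animationSpeed = 1 instead, there would be 80 frames — over two and a half seconds — which is exactly why the tutorial steps by several pixels per frame.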
https://eng.libretexts.org/Bookshelves/Computer_Science/Book%3A_Making_Games_with_Python_and_Pygame_(Sweigart)/05%3A_Slide_Puzzle/5.28%3A_The_copy()_Surface_Method
2009/12/17 Filip Hanik - Dev Lists <devlists@hanik.com>: > On 12/16/2009 07:37 PM, Konstantin Kolinko wrote: >> >> I think, that in JNDI there is no such way > > ok, maybe we can add in a namespace for that, such as > InitialContext.lookup("global:"); > and then have a config attribute allowGlobalLookup="true|false" to be > backwards compatible > Why using <ResourceLink> does not satisfy you? I would like all access to the global resources to be explicit. If you need it, just add a <ResourceLink>, as documented. > and then have a config attribute allowGlobalLookup="true|false" to be > backwards compatible How is that from Security stand point? I mean, it must be allowGlobalLookup="false" by default. Best regards, Konstantin Kolinko --------------------------------------------------------------------- To unsubscribe, e-mail: dev-unsubscribe@tomcat.apache.org For additional commands, e-mail: dev-help@tomcat.apache.org
http://mail-archives.apache.org/mod_mbox/tomcat-dev/200912.mbox/%3C427155180912181038n7584e98bxae95a04c02c45c30@mail.gmail.com%3E
3.1.1 Search path example The following example program uses a library that might be installed as an additional package on a system--the GNU Database Management Library (GDBM). The GDBM Library stores key-value pairs in a DBM file, a type of data file which allows values to be stored and indexed by a key (an arbitrary sequence of characters). Here is the example program ‘dbmain.c’, which creates a DBM file containing a key ‘testkey’ with the value ‘testvalue’: #include <stdio.h> #include <gdbm.h> int main (void) { GDBM_FILE dbf; datum key = { "testkey", 7 }; /* key, length */ datum value = { "testvalue", 9 }; /* value, length */ printf ("Storing key-value pair... "); dbf = gdbm_open ("test", 0, GDBM_NEWDB, 0644, 0); gdbm_store (dbf, key, value, GDBM_INSERT); gdbm_close (dbf); printf ("done.\n"); return 0; } The program uses the header file ‘gdbm.h’ and the library ‘libgdbm.a’. If the library has been installed in the default location of ‘/usr/local/lib’, with the header file in ‘/usr/local/include’, then the program can be compiled with the following simple command: $ gcc -Wall dbmain.c -lgdbm Both these directories are part of the default gcc include and link paths. However, if GDBM has been installed in a different location, trying to compile the program will give the following error: $ gcc -Wall dbmain.c -lgdbm dbmain.c:1: gdbm.h: No such file or directory For example, if version 1.8.3 of the GDBM package is installed under the directory ‘/opt/gdbm-1.8.3’ the location of the header file would be, /opt/gdbm-1.8.3/include/gdbm.h which is not part of the default gcc include path. Adding the appropriate directory to the include path with the command-line option -I allows the program to be compiled, but not linked: $ gcc -Wall -I/opt/gdbm-1.8.3/include dbmain.c -lgdbm /usr/bin/ld: cannot find -lgdbm collect2: ld returned 1 exit status The directory containing the library is still missing from the link path. 
It can be added to the link path using the following option: -L/opt/gdbm-1.8.3/lib/ The following command line allows the program to be compiled and linked: $ gcc -Wall -I/opt/gdbm-1.8.3/include -L/opt/gdbm-1.8.3/lib dbmain.c -lgdbm This produces the final executable linked to the GDBM library. Before seeing how to run this executable we will take a brief look at the environment variables that affect the -I and -L options. Note that you should never place the absolute paths of header files in #include statements in your source code, as this will prevent the program from compiling on other systems. The -I option or the INCLUDE_PATH variable described below should always be used to set the include path for header files.
http://www.network-theory.co.uk/docs/gccintro/gccintro_22.html
Make interface using design tool and edit it by script (add some ui elements). Is it possible?

I had made an interface using the design tool in Pythonista. And I need to edit "scrollview1" from a script and add a list of buttons. How can I do it? Maybe it is impossible? If I make an interface in the design tool, can I then only edit the action script?

Do you mean take a .pyui file and "decompile" it into a .py module that produces the same effect? (Like the .designer.cs files that you get with .NET WinForms.) I was playing around with that idea a bit, but didn't get too far with it, as it seems a lot of the object property names on the UI classes differ from what's serialized into the JSON .pyui files. Feel free to play around with this and improve on it, though.

No). I am a beginner). I wanted to make a dictionary app. I am using a scroll list with a list of buttons, one for each word. I am using sqlite3 for saving words. But I can't load words, because I don't know how to edit the scroll layout. I tried to use add_subview, but it did not work.

You can add buttons from within the editor too, just click on the scrollview, then select SubViews. polymerchm created a tool that lets you reorganize pyui hierarchies, though it doesn't sound like that's what you need in this case.

I need to load the list of words from a database! Each word will be a button.

@cg-mag, I did a simple example without trying to place etc that may help you. Most of the code is just making up a view. But just done to illustrate it. From what I understand about your question, you really need to look at the function def add_buttons_to_scrollview(sender): in the example.

    # coding: utf-8
    import ui

    # your action function for your button
    # sender is the ui element. In this case the button.
    def add_buttons_to_scrollview(sender):
        # get a reference to the view.
        v = sender.superview
        # get a reference to the scrollview, using array
        # notation, using the name of the object.
        sv = v['scroll']
        # make a button in code and add it to the scrollview's subviews
        btn = ui.Button(title='test button')
        sv.add_subview(btn)

    if __name__ == '__main__':
        f = (0, 0, 540, 576)
        v = ui.View(frame=f)
        # ignore this, it's like your pyui file, just in code
        scrollview = ui.ScrollView(name='scroll')
        scrollview.background_color = 'white'
        scrollview.frame = v.frame
        scrollview.height -= 40
        scrollview.y = 40
        v.add_subview(scrollview)
        btn = ui.Button(title='Press')
        btn.border_width = .5
        btn.x = btn.y = 5
        btn.width = 100
        btn.background_color = 'white'
        v.add_subview(btn)
        # end ignore
        # in the ui designer, enter the function to be
        # called by your btn action
        # done in code here, but you can enter the function
        # name in the ui designer.
        btn.action = add_buttons_to_scrollview
        v.present('sheet')

Maybe post a snippet of what you tried that didn't work.

It sounds like you really want a TableView, so that you don't need to instantiate all buttons at once.

Thank you! I'll try.

@cg-mag, now seeing your requirements, what @JonB says about a TableView makes most sense to me.

I'll think about TableView. Thanks. But it will be later..

I touch the button, but nothing happened(. "action_butt" executes..

    # coding: utf-8
    import ui

    def action_butt(sender):
        v = sender.superview
        myScrollView = v['scrollview1']
        butt = ui.Button(title='test')
        myScrollView.add_subview(butt)

    v = ui.load_view('Untitled 5')
    v.present()

I need to understand the principle of working with views..

@cg-mag, what you did looks ok to me. But look in the ui designer and make sure you have the correct names. And also that your action field in the btn is just action_butt, nothing else.
But, you are right, you really need to understand views. Might be time to take a step back and read about them in the included documentation. It's the most fundamental thing you need to understand. Let's say you scrollview is a subview of customview on the form called mycustomview. Then when you reference it you would need to do something like v = sender.superview scrollview = v['mycustomview']['scrollview1'] Sorry, I am trying to help, for me it's not clear what your problem is No Name - it is not control. It is standart title... Scroll view I move to back. There obly 2 objects - scroll view and buuuton. All in the center with standart sizes. Action from button have executed and I can change with this action for example content_size, but I can not add button! I can change in scrollView anything, but can`t add new ui element Maybe start a new project. Only put the scrollview and the button on the form. Maybe something will show up clearly You are also right, it's a very easy example. So just start from scratch to be sure I start new script and do this. I have the same Ok, maybe I didn't see in the code version. Change the content size in the ui designer. Make the width and height the size of your screen. Thank you a lot! Problem was in that - button was created but in left upper conner - very high. I move scroll view down and now all works right!! Now I have new questions) - can i add 2 ui.label in each raw in table view? @cg-mag , yes you can create ui.Labels in your ui.TableViews. But it has default ui.Labels built in already. You can do so much with ui.TableView, but it's worth the time reading first, it will save you a lot of time. It looks more difficult than it is. Hard for me to reply fast at the moment. Floods every where and electric on and off. 3G is really bad. Maybe many people using it.
https://forum.omz-software.com/topic/2241/make-interface-using-design-tool-and-edit-it-by-script-add-some-ui-elements-is-it-possible
For this problem, if we look at the binary form of each number, we can get the idea that each '1' (except for the first '1') counts for two steps, and each '0' counts for one step. So our goal is to use +1 or -1 to reduce steps. For example, 13 = 1101. If we add one, we get 1110; if we subtract one, we get 1100. 1110 needs 2+2+1 = 5 steps, while 1100 only needs 2+1+1 = 4 steps, so we choose n-1 in this step. Use long to avoid overflow (if n is Integer.MAX_VALUE).

public class Solution {
    public int integerReplacement(int n) {
        long N = n;
        long small, big;
        int cnt = 0;
        while (N != 1) {
            small = (N & (N - 1));
            big = (N & (N + 1));
            if ((N & 1) == 0) {
                N >>= 1;
            } else if ((small & (small - 1)) <= (big & (big - 1))) {
                N = N - 1;
            } else {
                N = N + 1;
            }
            cnt++;
        }
        return cnt;
    }
}

Here is my solution with a similar idea. When N is odd, only the second bit matters. If the bit is '1', N+1 will remove at least one '1' in N: 1011 + 1 = 1100, 1111 + 1 = 10000. However, N - 1 will remove only one '1': 1011 - 1 = 1010, or 1111 - 1 = 1110. So we favor N + 1 here. If the bit is '0', N+1 will remove zero '1's: 1001 + 1 = 1010. N - 1 will remove one '1': 1001 - 1 = 1000. N = 3 is a special case.

public class Solution {
    public int integerReplacement(int n) {
        long N = n;
        int count = 0;
        while (N != 1) {
            if (N % 2 == 0) {
                N = N >> 1;
            } else {
                if (N == 3) {
                    count += 2;
                    break;
                }
                N = (N & 2) == 2 ? N + 1 : N - 1;
            }
            count++;
        }
        return count;
    }
}

I do not know why you do small = (N & (N - 1)) and (small & (small - 1)) <= (big & (big - 1)); can you explain?
Similar idea: just check the lowest two digits ...ba. If a is 0, it's an even number: divide by 2. If a is 1, it's an odd number: whether to increase or decrease depends on b.

public int integerReplacement(int n) {
    int count = 0;
    while (n > 3) {
        if ((n & 1) == 1) {
            if ((n & 2) == 2) {
                n >>= 1;
                n += 1;
            } else {
                n >>= 1;
            }
            count += 2;
        } else {
            n >>= 1;
            count++;
        }
    }
    if (n == 3) count += 2;
    else if (n == 2) count += 1;
    return count;
}
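The greedy rule from this thread (when n is odd, inspect the second-lowest bit, with 3 as a special case) can also be sketched in Python. This is a translation of the idea above, not any particular poster's solution; Python's unbounded integers also remove the overflow concern:

```python
def integer_replacement(n):
    """Minimum number of n/2 (even) or n+1 / n-1 (odd) steps to reach 1."""
    count = 0
    while n != 1:
        if n % 2 == 0:
            n //= 2                     # even: always halve
        elif n == 3 or (n & 2) == 0:
            n -= 1                      # ...01 (or exactly 3): subtract
        else:
            n += 1                      # ...11: adding collapses the run of 1s
        count += 1
    return count

print(integer_replacement(8))   # 3  (8 -> 4 -> 2 -> 1)
print(integer_replacement(7))   # 4  (7 -> 8 -> 4 -> 2 -> 1)
print(integer_replacement(3))   # 2  (3 -> 2 -> 1)
```

Note the parentheses in `(n & 2) == 0`: in Python, `==` binds tighter than `&`, so leaving them out would change the meaning.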
https://discuss.leetcode.com/topic/58839/java-3ms-bit-manipulation-solution
Ray offsetting adopted in Cycles has been reported to cause various artifacts: T43835, T54284, etc. These artifacts stand out when the scene is far from the origin or the scale of the scene is too large or too small compared to 1. In the case of instancing, the problem becomes worse because the ray offset, calculated for the world position, scale, and axis directions of the instanced object, is transformed into the object space during ray-object intersection. There was an experiment, D1212, which tried to address these problems by skipping the ray push and instead checking the distance to the triangle with an epsilon during intersection. The result was not satisfactory. This patch takes a different approach to tackle the problem. Instead of a ray offset or epsilon, rules are enforced in the BVH traversal/intersection algorithms to prevent a ray from intersecting a surface that it has just departed from. These rules are evident if the primitives are triangles. The rules are also applicable to a curve consisting of line segments if each line segment is treated as a separate primitive. In the case of a curve consisting of cardinal curve segments, the fact is utilized that a cardinal curve segment is subdivided into piecewise line segments on the fly for finding the ray-curve intersection. The subdivided line segment is treated as a separate primitive, which allows a ray to intersect with (be occluded by) the same cardinal curve it departed from while prohibiting it from intersecting at the same departure point. (It turns out that the ray-curve intersection point is refined later to one on the real cardinal curve, which already gives enough offset. However, as the subdivision level is raised, the gap between the piecewise linear approximation and the real curve is reduced. On the other hand, the shadow occlusion check is still done against the linear approx.
Therefore, I conclude that excluding a tiny curve parameter space including the line segment while searching for the intersection would do good with little harm.) What if a ray hits a boundary of two primitives? For that case, this patch does nothing, because the probability that a ray falls in the range of numerical errors between two adjacent primitives should be very, very low and would not contribute to the render result. Otherwise, the tessellation itself is problematic: the primitives are either several orders of magnitude larger than the current view frustum or so small that their sizes are already comparable to floating point precision errors. A ray skipping the departing primitive alone suffers when there is another primitive overlapping it, which causes an unpredictable back and forth of the ray between two primitives and results in visual artifacts. This patch implements a novel method for coping with the problem. A remaining task is to make this (estimation of ray start time) work for external path tracing libraries such as NVIDIA OptiX and Intel Embree. Without a ray start time estimation, however, self-intersections are still prevented by the rules above. Comments on struct Ray, struct Intersection, and struct PathState in 'kernel_types.h' explain how this patch works. All the other modifications are just bookkeeping for applying the rules. The following results demonstrate the effectiveness of this patch: T43835, T54284, and @Brecht Van Lommel (brecht)'s test set of different scales and origins (blender file). This patch is not intended just as a proof of concept. The following are benchmark results of the demo scenes ('Cosmos Laundromat Demo', 'Agent 327 Barbershop', 'Spring') on Windows 10 64-bit, GPU: NVIDIA GeForce RTX 2080 SUPER, CPU: Intel(R) Core(TM) i7-9700. 2019.12.15: All benchmark results are updated. Many bugs are fixed.
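The core idea of skipping the primitive a ray departs from (the 'primitive id checking' mentioned later in this thread) can be illustrated with a toy nearest-hit loop. This is a deliberate simplification, not the patch's actual BVH code; all names here are made up:

```python
def nearest_hit(source, candidates):
    """Return the nearest hit, never re-hitting the departing primitive.

    `source` is the (object_id, prim_id) pair the ray departed from;
    `candidates` is a list of (t, object_id, prim_id) tuples, as a real
    tracer would produce during BVH traversal. Note that object_id is
    needed too: with instancing, prim_id alone is not unique.
    """
    best = None
    for t, obj, prim in candidates:
        if (obj, prim) == source:
            continue  # rule: a ray never intersects the surface it just left
        if t > 0.0 and (best is None or t < best[0]):
            best = (t, obj, prim)
    return best

hits = [
    (1e-7, 0, 5),  # numerical-noise self-hit on the source triangle
    (2.5, 0, 9),   # a genuine hit farther along the ray
    (4.0, 1, 5),   # same prim_id but a different instance: still valid
]
print(nearest_hit((0, 5), hits))  # (2.5, 0, 9)
```

This also shows the weakness discussed below: if another primitive exactly overlaps the source one, its near-zero hit is *not* skipped, which is why overlapping meshes need separate handling.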
Remaining tests to pass (*: noticeable differences):
- hair basemesh intercept
- hair instancer uv
- hair particle random
- principled hair absorptioncoefficient
- principled hair directcoloring
- principled hair melaninconcentration
- T41143

Remaining tests to pass, with SIMULATE_RAY_OFFSET defined in 'kernel_types.h': The other test differences seem to be most likely due to 'ray offsetting', which causes various artifacts in the repository version. Now I claim that this patch passed all the Cycles regression tests. The patched version with SIMULATE_RAY_OFFSET defined in 'kernel_types.h' passed all the Cycles regression tests. The patched version without SIMULATE_RAY_OFFSET failed the following tests:
- test group 1, not using hair BSDF
- test group 2, using hair BSDF

In the case of test group 1, the only difference between with and without SIMULATE_RAY_OFFSET is that with SIMULATE_RAY_OFFSET, 'extra' ray offsets are given on top of the basic provision of this patch. Therefore, it can be said the test results only show numerical differences. I also found that the test sets for 'hair instancer uv', 'hair particle random', and 'visibility particles' suffered from overlapped particles (meshes), which cause the following differences. Although ray offsetting has a side effect of covering overlapped meshes, it should not be required that overlapped meshes be rendered 'normally'. The test sets should be revised. The situation of test group 2 is more complicated. With hair BSDFs in Cycles, both ray offsetting and allowing self-intersection are required to produce the same render results, i.e. a ray may be shot backward to make it self-intersect. So with SIMULATE_RAY_OFFSET, the logic for avoiding self-intersection is also turned off for surfaces with hair BSDFs. Let us see 'hair geom reflection' as an example: the latter is slightly brighter because 'self-intersection' is allowed, but other than that I cannot see any difference in visual quality.
Moreover, it may not sustain the hair effect if the object is translated/rotated/scaled/instanced, for the same reason that ray offsetting fails in the current repository version. Logic for preventing self-intersection within a cardinal curve is revised to be consistent with the logic for other primitives. An epsilon check is introduced between adjacent line segments to make the algorithm more numerically robust. Since this epsilon is defined in the curve parameter space [0,1], it will not cause any harm, unlike those defined in world space or object space. Some inlined comments which I've jotted down while compiling this patch. Now, for the hair_instancer_uv.blend test: I don't think the issue is only due to intersecting geometry. Here is a render result of a 2K image with 1K samples. To me this looks like real render artifacts. As for the hair shader on triangle geometry, the shading difference might actually be expected. This case is somewhat stretching the shader beyond what it was supposed to be used for. Did you compare performance (aka render time) before and after applying this patch?

#else // __SIMULATE_RAY_OFFSET__
#endif // __SIMULATE_RAY_OFFSET__

Suggest submitting such semantic fixes as a separate patch. Will be faster to review and apply, and reduce the size of this patch. Shouldn't this be float? Thanks for spotting my last-minute error! I had to re-render the benchmark scenes. Here are the results: 'The Spring': after changing the rule on cardinal curves, the hair shade of D6250 is slightly changed so that the two results become nearly indistinguishable. 'Barbershop': render times are almost the same. About 'hair_instancer_uv': the particles are indeed overlapped. 1000 particles are set to emit from the faces of a base sphere, which is polygonized at a low level. In 'hair_particle_random', you can even see that spheres with different colors overlap. The 'emit from' property of the current particle system should be revised, I believe. And that is one weakness of this patch.
Unlike ray offsetting, overlapped meshes are rendered really, really ugly. So one of my ideas is to provide 'bouncing ray offsetting' as a checkable property of an individual object, as a quick remedy for mesh overlapping; how about that? As I showed, this patch's method and ray offsetting coexist well, and that may help people adapt between the two methods. (The exception code for 'hair bsdf' and the SIMULATE_RAY_OFFSET macro will be removed, of course. Those were just for proving my argument.) Cycles object property 'offset_bouncing_ray' and light property 'offset_emitting_ray' are added. Their default values are 'False', but if given 'True' for older blender files after the blender minor version number is raised, all previous versions of blender files would be rendered as before, with the mesh overlap issue as little noticed as before. (Self-intersection avoidance is now guaranteed with this patch, but the brightness of the hair geom test results will become a little darker.) I believe this is how blender has evolved. The related UI change is not included; it seems better for the blender staff to consider the UI layout themselves. Mesh overlap occurs when the areas of triangle primitives overlap, so that the ray-triangle intersection algorithm cannot determine which one is nearer at each point of the entire overlapped area. This is a path-tracing version of z-fighting; it happens only when meshes are placed with near-identical postures, and the render result is not defined over the overlapped area. While ray offsetting is relatively lucky, because a ray usually happens to escape from the overlapped area by offsetting, this patch's method (called 'primitive id checking' in the literature) suffers from it directly, because a ray does not leave the overlapped area although it skips the departing primitive. If this patch is accepted, systems in blender that are guaranteed to generate mesh overlapping should be revised, I think.
But we buy some time to do that with 'ray offsetting' still in one hand. If this patch is accepted, it would be much easier for me to contribute, because some of my other work is based on this. Having such per-object ray offsetting options is something which will turn out unusable in a real production. A bit of a stretched example, but think of the hair_instancer_uv file. To have it rendered correctly in the proximity of the world origin, you need some ray offsetting going on. To render it in a scene in which it's animated (via armature deform, just to make things more interesting) far away from the origin, you'd need to disable ray offsetting (to address the issue this patch is solving). But then you trade one artifact for another. In practice, when a character consists of many, many objects, you cannot possibly keep all those settings fine-tuned on a per-scene basis. "systems in blender that are guaranteed to generate mesh overlapping should be revised" I am not aware of such systems. Blender doesn't restrict you from having object intersections. It's almost unavoidable in scene set construction. I would say that the solution to the original problem should be helping artists to make quality renders with fewer things to worry about and fewer knobs to be tweaked. There could be a tweak to this patch. Or it could be a completely separate approach: normalizing the scene internally in Cycles in a way which brings the area of interest to the world's origin.
Other types of intersections without overlapping faces are common in production, but not affected by this, I think. "I would say that the solution to the original problem should be helping artists to make quality renders with fewer things to worry about and fewer knobs to be tweaked." Agree that we should avoid adding a setting here if we can. "There could be a tweak to this patch. Or it could be a completely separate approach: normalizing the scene internally in Cycles in a way which brings the area of interest to the world's origin." If this is automatically determined, it would not be temporally stable in animation. And arguably, if there are precision problems, then they also exist in other areas of Blender or in the shading node evaluation, which would still work in world space. So I'm not sure if there is a good automatic solution for that problem (besides double precision). Recently I encountered an application which requires not only rendering a scene filled with instanced objects correctly with arbitrary scaling, but also dealing with objects of overlapped meshes without noticeable artifacts. For that application, my solution was to use this patch's method ('primitive id checking') while providing 'ray offsetting' as an additional option. I believe that we should put first priority on rendering a normal scene without glitches, but I admit that I am also distracted by mesh overlapping artifacts. But I think that I finally figured out how to keep the advantage of this patch's method while easing most mesh overlapping artifacts that artists would face in blender. I will report the result soon. Those fixes were for checking if I handled all the cases where object == OBJECT_NONE indicates it is not instanced, i.e., its transform is pre-applied, not a non-object. Should I submit those as a separate patch? Yes! Thanks for finding that. Artifacts caused by mesh overlapping are resolved.
Regression test results: (those hair tests still failed because of numerical differences between with and without ray offsetting.) hair_instancer_uv, 2K image with 1K samples: The merit of no ray offsetting is still strong. (Artifacts in line curves of Size1e-5 worsen since the update adding an epsilon test on curve segment boundaries, before this patch revision.) Estimate a tight ray start time for external path tracing libraries such as NVIDIA OptiX and Intel Embree. Render time is increased slightly (about 30s for CPU rendering of 'The Spring'). Benchmark results will be updated soon. Commented with some nitpicks on the OptiX side of things. Would prefer not to consume more payload registers (those come at a cost, and accessing pointers through them especially), but it may not be possible to avoid that. It seems like it is not necessary to store all of src_prim, src_object and src_type in the ray though. The primitive index (src_prim_index) should uniquely identify any primitive, so just storing and comparing that should be enough? I also don't believe it would hurt much to just look up src_type in place from __prim_type where necessary (since it's only really used for curves), but haven't benchmarked. Doing so could make it possible to avoid passing in the ray pointer through the payload and instead just pass through a primitive index that should be avoided during intersection. OBJECT_NONE: This anyhit program is never executed for anything but triangles (curves are added to the AS with OPTIX_GEOMETRY_FLAG_DISABLE_ANYHIT, and all traces that force an anyhit use a different anyhit program). This is not necessary: the intersection program can never be called for a primitive that is not a curve, since that is the only existing custom primitive type and triangles by definition cannot go through this. Some spaces sneaked in here and below =P Thanks, @Patrick Mours (pmoursnv), for your advice. I have practically zero knowledge of OptiX.
On the Cycles BVH side, src_object is needed for an instanced object, and src_prim_index alone does not identify it. Although src_prim can be looked up with src_prim_index, different prim_indices map to the same prim, which is coupled with the object id to identify a primitive. Without storing src_prim, a table lookup will be repeated to find it later. Also, src_type now stores a flag at its highest bit, which indicates whether the object is pre-transformed/instanced or not. This information is needed later to calculate a tight ray start time for a new ray direction for culling overlapping meshes. Although all this info may be found later, except src_object, my intent was to reduce repeated table lookups and recycle already-found info on the Cycles side. Now, I have a question. Does OptiX currently provide APIs for implementing ray_update_tstart()? The ray start time (or epsilon) calculated by ray_update_tstart() must reflect the math inside the OptiX BVH traversal algorithms accurately, to cull overlapping meshes and offset the ray from the departing primitive consistently, due to 'floating point determinism'. It looked like ray_update_tstart was only called in places where the ray was already set up, so the type data was available and could e.g. be passed in as another argument to the function. But really I was only searching for ways to avoid additional payload registers. And you are right, I was looking at it from the Cycles OptiX implementation, and there the primitive index is a one-to-one mapping, but that is not the case for Cycles' own BVH. It may be possible to take advantage of this for OptiX though. I can experiment a bit to see if there would be any advantage. As for a triangle_update_tstart implementation for OptiX: OptiX has no such API currently. And it is made more complicated by the fact that intersection precision may vary depending on which GPU generation is used and how the AS was built (e.g. a motion blur AS can behave differently from a non-motion one).
I also don't think it would be easy to provide such an API because of how the hardware works, but I'll raise this with the team here to check. In D6250#147614, @Patrick Mours (pmoursnv) wrote: Implementation of triangle_update_tstart() in OptiX may be easier than it looks. Currently, the object inverse transform matrices needed are all supplied from outside, not calculated inside the function. (The calculation code inside is for contingency and also serves as a specification.) On the other hand, since a ray has all the information to calculate triangle_update_tstart() now, it may also be possible to calculate it inside the OptiX BVH traversal modules, not exposing the value outside. External path tracing libraries now skip triangle_update_tstart() because the function does not work with them at all. Some OptiX code is modified not to check unnecessary conditions. Optimization for OptiX is still left to experts. For an AVX2-optimized CPU kernel, BVH traversal routines are set to use triangle_intersect() temporarily instead of triangle_intersect8() for rays with tstart updated. A numerical difference between float calculation with and without AVX2 is somewhat expected, and a proper solution would be to implement an AVX2 version of triangle_update_tstart() for primitives supposed to be handed over to triangle_intersect8(). The problem is that among the obvh intersection routines, some use triangle_intersect8() while others still use triangle_intersect(). In addition to triangle_update_tstart(), motion_triangle_update_tstart() is implemented to calculate a tight ray start time for motion triangle primitives. I had completely forgotten about triangle vertex motion. For external path tracing libraries not supporting a feature for estimation of a tight ray start time at the moment, ray_offset() is restored as a fallback. However, self-intersection is still guaranteed to be prevented by this patch for them.
Only the functions ray_offset() and ray_update_tstart() need to be modified when those libraries are ready to support the feature. Now this patch has achieved: prevention of ray-triangle self-intersection, no glitches caused by ray offsetting, and reduced artifacts of overlapping meshes on a par with ray offsetting. From this patch, artists will get only benefits with no extra burdens. For AVX2-optimized CPU kernels, BVH traversal routines are set to use triangle_intersect() temporarily instead of triangle_intersect8() for rays with a tight ray start time estimated, because the two functions return slightly different results while the BVH routines use both functions intermixed. One quick solution is to choose the greater value after estimating both tight ray start times with/without AVX2 optimization, but it increases unnecessary overhead. Besides, I also found some negative zero issues in triangle_intersect8(). I plan to deal with these as a separate patch. Update the patch to be applicable to the latest revision. I want to try including this for 2.83. I'll run some CPU and GPU benchmarks tonight. I think it should be possible to extend this to fix the problems with overlapping geometry too, by implementing the algorithms from "Robust Iterative Find-Next-Hit Ray Traversal". Instead of sorting hits only by distance t, we can sort them also based on object and primitive ID, guaranteeing a consistent order of traversal even in the overlapping case. Since the ray's payload carries the object and primitive ID of the object from which the ray originated, it shouldn't be too hard to add. This would also eliminate the need for ray_update_tstart() and could thus simplify the code. The performance impact of this for GPU rendering is not ideal; I think we'll have to spend some more time to see if we can optimize especially that. We can accept some slowdown, but would rather have it in the 0-2% range than up to 6.7%.
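The find-next-hit tie-breaking described above can be sketched in a few lines: ordering hits by (t, object id, primitive id) makes the traversal order deterministic even when two overlapping primitives report the same distance. Again a toy illustration, not Cycles code:

```python
# Hits as (t, object_id, prim_id); the two entries at t == 2.0 overlap exactly.
hits = [(2.0, 1, 7), (2.0, 0, 3), (1.5, 2, 1)]

# Sorting by distance alone leaves the order of the t == 2.0 pair up to
# floating point luck; adding the ids as tie-breakers fixes a total order.
hits.sort(key=lambda h: (h[0], h[1], h[2]))
print(hits)  # [(1.5, 2, 1), (2.0, 0, 3), (2.0, 1, 7)]
```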
I'm not sure if that's possible though, I haven't analyzed the implementation closely.
- AMD Ryzen 2990WX, Ubuntu Linux
- CUDA Quadro RTX 5000, Ubuntu Linux
- OptiX Quadro RTX 5000, Ubuntu Linux

Sorry, I left this thread for a while because there were no responses. Are reviewers still interested in this patch? Are there any technical issues other than performance? I am currently busy finishing a company project, so I may get some free time only after a couple of months.
removed.
fixed.
https://developer.blender.org/D6250
Chunk extraction is a useful preliminary step to information extraction that creates parse trees from unstructured text with a chunker. Once you have a parse tree of a sentence, you can do more specific information extraction, such as named entity recognition and relation extraction. Chunking is basically a 3 step process:
- Tag a sentence
- Chunk the tagged sentence
- Analyze the parse tree to extract information

I've already written about how to train an NLTK part of speech tagger and a chunker, so I'll assume you've already done the training, and now you want to use your pos tagger and iob chunker to do something useful.

IOB Tag Chunker

The previously trained chunker is actually a chunk tagger. It's a Tagger that assigns IOB chunk tags to part-of-speech tags. In order to use it for proper chunking, we need some extra code to convert the IOB chunk tags into a parse tree. I've created a wrapper class that complies with the nltk ChunkParserI interface and uses the trained chunk tagger to get IOB tags and convert them to a proper parse tree.

import nltk.chunk
import itertools

class TagChunker(nltk.chunk.ChunkParserI):
    def __init__(self, chunk_tagger):
        self._chunk_tagger = chunk_tagger

    def parse(self, tokens):
        # split words and part of speech tags
        (words, tags) = zip(*tokens)
        # get IOB chunk tags
        chunks = self._chunk_tagger.tag(tags)
        # join words with chunk tags
        wtc = itertools.izip(words, chunks)
        # w = word, t = part-of-speech tag, c = chunk tag
        lines = [' '.join([w, t, c]) for (w, (t, c)) in wtc if c]
        # create tree from conll formatted chunk lines
        return nltk.chunk.conllstr2tree('\n'.join(lines))

Chunk Extraction

Now that we have a proper NLTK chunker, we can use it to extract chunks. Here's a simple example that tags a sentence, chunks the tagged sentence, then prints out each noun phrase.
# sentence should be a list of words
tagged = tagger.tag(sentence)
tree = chunker.parse(tagged)

# for each noun phrase sub tree in the parse tree
for subtree in tree.subtrees(filter=lambda t: t.node == 'NP'):
    # print the noun phrase as a list of part-of-speech tagged words
    print subtree.leaves()

Each sub tree has a phrase tag, and the leaves of a sub tree are the tagged words that make up that chunk. Since we're training the chunker on IOB tags, NP stands for Noun Phrase. As noted before, the results of this natural language processing are heavily dependent on the training data. If your input text isn't similar to your training data, then you probably won't be getting many chunks.
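The IOB-to-chunk conversion that TagChunker delegates to conllstr2tree can be illustrated without NLTK at all. The helper below is illustrative (the function name and sample tags are made up for the example); it groups (word, pos, iob) triples into chunk spans the same way B-/I-/O tags are meant to be read:

```python
def iob_to_chunks(tagged_iob):
    """Group (word, pos, iob) triples into (chunk_type, phrase) pairs.

    'B-XX' begins a chunk of type XX, 'I-XX' continues the current
    chunk, and 'O' closes any open chunk. NLTK's conllstr2tree does
    the equivalent conversion when building a parse tree.
    """
    chunks = []
    current = None  # (chunk_type, [words]) for the chunk being built
    for word, pos, iob in tagged_iob:
        if iob.startswith('B-') or (iob.startswith('I-') and current is None):
            current = (iob[2:], [word])   # start a new chunk
            chunks.append(current)
        elif iob.startswith('I-') and current is not None:
            current[1].append(word)       # extend the open chunk
        else:
            current = None                # 'O' ends any open chunk
    return [(ctype, ' '.join(words)) for ctype, words in chunks]

sent = [('the', 'DT', 'B-NP'), ('little', 'JJ', 'I-NP'), ('dog', 'NN', 'I-NP'),
        ('barked', 'VBD', 'O'), ('at', 'IN', 'O'),
        ('the', 'DT', 'B-NP'), ('cat', 'NN', 'I-NP')]
print(iob_to_chunks(sent))  # [('NP', 'the little dog'), ('NP', 'the cat')]
```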
http://streamhacker.com/2009/02/23/chunk-extraction-with-nltk/comment-page-1/
Driving a Unipolar Stepper Motor With a ppDAQC Pi-Plate

Introduction

Stepper motors are versatile devices that allow precise and repeatable angular control. They are used in disk drives, translation tables, and 3D printers to name just a few applications. They typically come with two different wiring arrangements. The most common arrangement is four wires that are connected to two coils. This is called a bipolar motor and requires something called an "H" bridge to control. The other arrangement is five or six wires. These are called unipolar motors and they're much easier to drive.

Step 1: Background

In this example, we will be using four of the seven open collector outputs on a Pi-Plates (Pi-Plates.com) ppDAQC to drive a small, unipolar motor. The motor, which can be purchased from Amazon or Adafruit, has the following specifications:
- Unipolar stepper with 0.1″ spaced 5-pin cable connector
- 32 steps per revolution
- 1/16.025 geared down reduction
- 5V-12V DC suggested operation
- Weight: 37 g
- Dimensions: 28mm diameter, 20mm tall not including 9mm shaft with 5mm diameter
- 9″ / 23 cm long cable
- Holding Torque @ 12VDC: 250 gram-force*cm, 25 N*mm / 3.5 oz-force*in
- Shaft: 5mm diameter flattened

You will also need your prepared Raspberry Pi to make it all work!

Step 2: Connections

The motor is labeled to operate at 12VDC, but to make this easy, we're going to start by driving it at 5V. Use the enclosed diagram to make your connections. The connections all go to the DOUT terminal block and are as follows:
- Motor RED wire: Terminal 10
- Motor BLUE wire: Terminal 2
- Motor PINK wire: Terminal 5
- Motor ORANGE wire: Terminal 4
- Motor YELLOW wire: Terminal 3
- Connect a wire from terminal 1 to terminal 9 – this is IMPORTANT since it shunts the inductive kick generated when a coil is turned off.

Using an ohmmeter, we measured about 100 ohms between the red center tap wire and the blue wire.
If we use the 5VDC from the ppDAQC and assume that the "on" voltage of the open collector driver is 1 volt, then we can calculate that each driver will have to sink about (5-1)/100 = 40mA. So, no special power supplies should be required for this experiment.

Step 3: Code

To determine the driving sequence, we referenced this Application Note from SiLabs. Now, open NANO and enter the following lines of code:

import piplates.ppDAQC as ppDAQC
import time

delay = 0.004

try:
    while(1):
        for i in range(0,1000):
            ppDAQC.setDOUTall(0,0x0A)
            time.sleep(delay)
            ppDAQC.setDOUTall(0,0x06)
            time.sleep(delay)
            ppDAQC.setDOUTall(0,0x05)
            time.sleep(delay)
            ppDAQC.setDOUTall(0,0x09)
            time.sleep(delay)
        for i in range(0,1000):
            ppDAQC.setDOUTall(0,0x09)
            time.sleep(delay)
            ppDAQC.setDOUTall(0,0x05)
            time.sleep(delay)
            ppDAQC.setDOUTall(0,0x06)
            time.sleep(delay)
            ppDAQC.setDOUTall(0,0x0A)
            time.sleep(delay)
except KeyboardInterrupt:
    time.sleep(.1)
    ppDAQC.setDOUTall(0,0)
    print "Game over, man."

Step 4: Save and Play!

Save your program as UnipolarTest.py and then run it from the command line with the following statement:

sudo python UnipolarTest.py

If everything is correct, you should see your motor shaft rotate counterclockwise for about 16 seconds and then rotate clockwise for 16 seconds. This sequence will repeat until you press <CNTL-C>. While it's running, try grabbing the shaft, and note that while it is a geared down motor, it doesn't provide a lot of torque at 5VDC. As an experiment, we disconnected the red wires from terminals 9 and 10 and connected them to a 12VDC power supply. Needless to say, the motor torque was substantially higher. So, there you go, another example of the possible uses of a ppDAQC board!

Great tutorial, thank you for sharing this!
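Why those four hex values? Each one energizes exactly two adjacent coils (a two-phase-on, full-step sequence), and each step shares one energized coil with the next, which is what walks the rotor around. A quick way to see this — note that which DOUT bit maps to which coil wire is an assumption that depends on your wiring:

```python
# The step sequence from the program above, shown as 4-bit coil patterns.
sequence = [0x0A, 0x06, 0x05, 0x09]
patterns = [format(step, '04b') for step in sequence]
print(patterns)  # ['1010', '0110', '0101', '1001']

# Each step drives exactly two coils...
assert all(bin(step).count('1') == 2 for step in sequence)
# ...and each pair of consecutive steps keeps exactly one coil in common.
assert all(bin(a & b).count('1') == 1
           for a, b in zip(sequence, sequence[1:] + sequence[:1]))
```

Running the same four values in reverse order, as the second `for` loop does, simply walks the pattern the other way and reverses the rotation.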
http://www.instructables.com/id/Driving-a-Unipolar-Stepper-Motor-with-a-ppDAQC-Pi-/
Different data type in Web API response of the same method

Both XML and JSON in the return result of a Web API REST service method

Web API RESTful services are very flexible and they return a different data format (JSON or XML) depending on the accept type header value from your browser (or other client). However, this is not always so precise, so sometimes you have to force the return type. More on how to force the return type on the API method side itself you can find in this article. This is useful if you do not control your request, for example if you have a Web API method link exposed on your web page, but in case you initiate the request in a back-end with C# or JavaScript, read this short article. As I mentioned at the beginning, the response serialization type is based on the accept type header value of the client. Based on this, we can set this header value to get a different result. As an example, I will use the generic class and GET method which is created when you choose Web API in the new project dialog in Visual Studio. After creating a new project in Visual Studio with the MVC 4 Web API template, a Values controller will be created with the following methods:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Web.Http;

namespace MvcTests
{
    public class ValuesController : ApiController
    {
        // GET api/values
        public IEnumerable<string> Get()
        {
            return new string[] { "value1", "value2" };
        }

        // GET api/values/5
        public string Get(int id)
        {
            return "value";
        }
    }
}

For testing purposes, I created a small Windows Forms app for testing this behavior. It is attached to this document for downloading, with source included. The test app is very simple: one button is going to fetch the XML response and the other one will fetch the JSON result from the GET method of the Web API REST service.
public static string GetXmlResult()
{
    var request = (HttpWebRequest)WebRequest.Create("");
    request.Method = "GET";
    request.Accept = "text/xml";
    request.Timeout = 500000;
    var response = (HttpWebResponse)request.GetResponse();
    if (response.StatusCode == HttpStatusCode.OK)
    {
        return new StreamReader(response.GetResponseStream(), Encoding.UTF8).ReadToEnd();
    }
    return null;
}

This method will tell the service that it accepts XML, and the data will be returned as XML. By changing only one line and setting a different value for the accept header we will get a JSON result:

request.Accept = "text/json";

Another way to fetch a different data format from a Web API service is to use jQuery on the client side, with the same approach of setting the accept header value in the request. Read more about how to do it with jQuery here.

Disclaimer: The purpose of the code contained in snippets or available for download in this article is solely for learning and demo purposes. The author will not be held responsible for any failure or damages caused due to any other usage.
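The same content negotiation works from any HTTP client that lets you set the Accept header. As a cross-check, here is a hedged Python sketch of the equivalent request setup (the URL is a placeholder, not the service from the article; no request is actually sent here):

```python
import urllib.request

# Build a GET request that asks the service for XML; swapping the Accept
# value to "text/json" (or the standard "application/json") would request
# JSON instead, mirroring the C# example above.
req = urllib.request.Request(
    "http://localhost/api/values",        # placeholder URL
    headers={"Accept": "text/xml"},
    method="GET",
)
```

Sending it with `urllib.request.urlopen(req)` would then return the XML-serialized body, just as the HttpWebRequest version does.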
https://dejanstojanovic.net/aspnet/2014/june/different-data-type-in-web-api-response-of-the-same-method/
32.2. pkgutil — Package extension utility

Source code: Lib/pkgutil.py

This module provides utilities for the import system, in particular package support.

class pkgutil.ModuleInfo(module_finder, name, ispkg)

    A namedtuple that holds a brief summary of a module's info.

    New in version 3.6.

strings referring to existing directories are ignored. Unicode items on sys.path that cause errors when used as filenames may cause this function to raise an exception (in line with os.path.isdir() behavior).

class pkgutil.ImpImporter(dirname=None)

    PEP 302 Finder that wraps Python's "classic" import algorithm.

    If dirname is a string, a PEP 302 finder is created that searches that directory. If dirname is None, a PEP 302 finder is created that searches the current sys.path, plus any modules that are frozen or built-in.

    Note that ImpImporter does not currently support being used by placement on sys.meta_path.

class pkgutil.ImpLoader(fullname, file, filename, etc)

    Loader that wraps Python's "classic" import algorithm.

pkgutil.find_loader(fullname)

    Retrieve a module loader for the given fullname. This is a backwards compatibility wrapper around importlib.util.find_spec() that converts most failures to ImportError and only returns the loader rather than the full ModuleSpec.

    Changed in version 3.3: Updated to be based directly on importlib rather than relying on the package internal PEP 302 import emulation.

pkgutil.get_importer(path_item)

    Retrieve a finder for the given path_item. The returned finder is cached in sys.path_importer_cache if it was newly created by a path hook. The cache (or part of it) can be cleared manually if a rescan of sys.path_hooks is necessary.

pkgutil.get_loader(module_or_name)

    Get a loader object for module_or_name. If the module or package is accessible via the normal import mechanism, a wrapper around the relevant part of that machinery is returned. Returns None if the module cannot be found or imported.
    If the named module is not already imported, its containing package (if any) is imported, in order to establish the package __path__.

    Changed in version 3.3: Updated to be based directly on importlib rather than relying on the package internal PEP 302 import emulation.

pkgutil.iter_importers(fullname='')

    Yield finder objects for the given module name. If fullname contains a '.', the finders will be for the package containing fullname, otherwise they will be all registered top level finders (i.e. those on both sys.meta_path and sys.path_hooks). If the named module is in a package, that package is imported as a side effect of invoking this function. If no module name is specified, all top level finders are produced.

pkgutil.iter_modules(path=None, prefix='')

    Yields ModuleInfo for all submodules on path, or, if path is None, all top-level modules on sys.path. path should be either None or a list of paths to look for modules in. prefix is a string to output on the front of every module name on output.

    Note: Only works for a finder which defines an iter_modules() method. This interface is non-standard, so the module also provides implementations for importlib.machinery.FileFinder and zipimport.zipimporter.

pkgutil.walk_packages(path=None, prefix='', onerror=None)

    Yields ModuleInfo for all modules recursively on path, or, if path is None, all accessible modules. path should be either None or a list of paths to look for modules in (for example, walk_packages(ctypes.__path__, ctypes.__name__ + '.') yields all submodules of ctypes).

    Note: Only works for a finder which defines an iter_modules() method. This interface is non-standard, so the module also provides implementations for importlib.machinery.FileFinder and zipimport.zipimporter.

pkgutil.get_data(package, resource)

    Get a resource from a package. This is a wrapper for the loader get_data API. If the package's loader does not support get_data, then None is returned. In particular, the loader for namespace packages does not support get_data.
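Since iter_modules() only needs an optional path list and a prefix, a short example shows both forms. The stdlib json package is used here purely as a stable example; which other module names appear depends on the interpreter and installed packages:

```python
import pkgutil
import json

# ModuleInfo tuples for top-level modules and packages visible on sys.path.
top_level = {info.name: info.ispkg for info in pkgutil.iter_modules()}

# Submodules of one specific package, using its __path__ and a dotted prefix
# so the yielded names are fully qualified (e.g. "json.decoder").
json_submodules = [
    info.name
    for info in pkgutil.iter_modules(json.__path__, json.__name__ + '.')
]
```

walk_packages() is used the same way, but recurses into subpackages instead of stopping one level down.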
https://docs.python.org/3.8/library/pkgutil.html
Trying to get this logging issue resolved
Glenn Puckett, Apr 12, 2011 12:54 PM

I have been using jBoss for 6 years now and still have not figured out how to generate consistent separate logs from my servlets. So far the best I have been able to do is get maybe 75% of my log output to go to a unique log file but also getting 100% of the output to the jboss server log file. I would like to get this resolved completely. Unfortunately I usually research and scratch and claw at a solution until my back is up against the wall before I try to ask for help on the forums. I have Googled this and tried a number of suggestions. I just can't get anything to work as advertised.

I started out with the attempt of putting the log4j jar file in my EAR file. This doesn't work because of the way the class loader works in jBoss. It gets all screwed up when attempting to generate logging from both the EJB and the WEB containers. I tried ALL of the suggested solutions for that one without any success. I have come across what seems like the best solution in the jBoss wiki. After implementing the code as described I don't get any errors, I get a log file created, but nothing ever goes to the log file.

At the moment I am first trying to get the log output from the JPA persistence EntityManagerHelper to go to a specific log file. Once I get that working I need to implement other logging functionality. Based on my research it appears the best solution (at least for me) is using a custom repository selector. So I used the provided class that implements RepositorySelector:

package mm.logging.repository;

/**
 * This RepositorySelector is for use with web applications. It assumes that
 * your log4j.xml file is in the WEB-INF directory.
 */
public class AppRepositorySelector implements RepositorySelector
{
    private static boolean initialized = false;
    private static Object guard = LogManager.getRootLogger();
    private static Map repositories = new HashMap();
    private static LoggerRepository defaultRepository;

    public static synchronized void init(ServletConfig servletConfig) throws ServletException
    {
        init(servletConfig.getServletContext());
    }

    public static synchronized void init(ServletContext servletContext) throws ServletException
    {
        if (!initialized) // set the global RepositorySelector
        {
            defaultRepository = LogManager.getLoggerRepository();
            RepositorySelector theSelector = new AppRepositorySelector();
            LogManager.setRepositorySelector(theSelector, guard);
            initialized = true;
        }
        Hierarchy hierarchy = new Hierarchy(new RootLogger(Level.DEBUG));
        loadLog4JConfig(servletContext, hierarchy);
        ClassLoader loader = Thread.currentThread().getContextClassLoader();
        repositories.put(loader, hierarchy);
    }

    public static synchronized void removeFromRepository()
    {
        repositories.remove(Thread.currentThread().getContextClassLoader());
    }

    // load log4j.xml from WEB-INF
    private static void loadLog4JConfig(ServletContext servletContext, Hierarchy hierarchy) throws ServletException
    {
        try
        {
            String log4jFile = "/WEB-INF/log4j.xml";
            InputStream log4JConfig = servletContext.getResourceAsStream(log4jFile);
            Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder().parse(log4JConfig);
            DOMConfigurator conf = new DOMConfigurator();
            conf.doConfigure(doc.getDocumentElement(), hierarchy);
        }
        catch (Exception e)
        {
            throw new ServletException(e);
        }
    }

    private AppRepositorySelector()
    {
    }

    public LoggerRepository getLoggerRepository()
    {
        ClassLoader loader = Thread.currentThread().getContextClassLoader();
        LoggerRepository repository = (LoggerRepository) repositories.get(loader);
        if (repository == null)
        {
            return defaultRepository;
        }
        else
        {
            return repository;
        }
    }
}

Then I put in the context listener:

package mm.logging.repository;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

public class AppContextListener implements ServletContextListener
{
    public void contextDestroyed(ServletContextEvent contextEvent)
    {
    }

    public void contextInitialized(ServletContextEvent contextEvent)
    {
        try
        {
            AppRepositorySelector.init(contextEvent.getServletContext());
        }
        catch (Exception ex)
        {
            System.err.println(ex);
        }
    }
}

and then added the listener to the web.xml file:

<listener>
    <listener-class>mm.logging.repository.AppContextListener</listener-class>
</listener>

And then put the following log4j.xml file in the WEB-INF folder:

<log4j:configuration xmlns:
    <appender name="JPA" class="org.jboss.logging.appender.RollingFileAppender">
        <errorHandler class="org.jboss.logging.util.OnlyOnceErrorHandler"/>
        <param name="File" value="D:/WWW/CWDC/logs/jpa.log"/>
        <param name="Append" value="false"/>
        <param name="MaxFileSize" value="500KB"/>
        <param name="MaxBackupIndex" value="1"/>
        <layout class="org.apache.log4j.PatternLayout">
            <param name="ConversionPattern" value="%d %-5p [%c] %m%n"/>
        </layout>
    </appender>

    <category name="CWDCPU">
        <priority value="DEBUG" />
        <appender-ref
    </category>
</log4j:configuration>

The EntityManagerHelper has the following code:

logger = Logger.getLogger("CWDCPU");
logger.setLevel(Level.ALL);

And in the JPA code the logging looks something like:

EntityManagerHelper.log("saving AccountStructureTbl instance", Level.INFO, null);

When jBoss starts I don't see any sort of diagnostic indicating this did not start properly. I can't say I see anything that says it does either. I end up getting all of the JPA logging on the console and nothing in the log file itself. I will admit that I am much more familiar with the log4j properties file rather than the xml but I tried to keep it as simple as possible. Where did I go wrong?

1. Trying to get this logging issue resolved
Dieter Cailliau, Apr 13, 2011 8:56 PM (in response to Glenn Puckett)

Your question is not entirely clear.
If the problem is:
- you have 2 servlets A and B, probably both using the same underlying code (e.g. JPA)
- you want the logging from both, but in different files a.log and b.log

then I'd propose to use MDC (a class with static methods, in log4j, which is inside jboss-logging). This is a way to "stamp" each log line with a particular tag that sticks to the current thread. To fill this tag, you need to intercept your thread somehow, e.g. using a servlet filter. In there, have a try/finally where you set/clear some MDC key-value pair: MDC.put("servletname", "a" or "b"). Internally, a ThreadLocal takes care of holding on to this value.

Use a single server.log file (like in default jboss). In your logging pattern, use %X{servletname}; this will result in the "a" or the "b" on each line that gets logged. Because it is on each line, it's easy to split up your server.log into 2 files in a post-processing step.

Note: since the mechanism is based on ThreadLocal, the tag will not be printed if you spawn parts of your processing to other threads, unless you fill the MDC for these threads as well.

Note: you need to depend on log4j in order to compile your source code (to call the MDC.put method), or use reflection to reach this method, but you should not package the log4j.jar in your war: log4j is on the runtime jboss classpath.

2. Trying to get this logging issue resolved
Glenn Puckett, Apr 15, 2011 4:23 PM (in response to Dieter Cailliau)

I'm sorry it is not clearly stated. My problem is that I can't get log messages from my entire web app to appear in a log file completely separate from the JBoss log file. No matter what I do I end up with log messages showing up on the JBoss log file. I want to create distinct separate log files in separate directories for ALL of my applications. I picked the output of the JPA code simply because it is already there. It should be no different directing the output of the JPA to a distinct log file than any other application.
The significance of the JPA code is that it instantiates Logger using a constant value rather than the class name. But that should not make a difference to Logger. Instead of all the Logger output going to the console I want it to go to a log file. I made the changes as listed in my initial post and the result was no difference. JBoss still ended up overriding my Logger setup and the JPA output continues to go to the JBoss console. The significance is that if I can figure out how to get the JPA log to its own log file then I can also get my application code to generate log entries into its own distinct log file.

I'm not talking about a specific servlet. I mean I want to be able to direct all "com.myapp" to a separate log file. It sounds easy. But it has proven NOT to be. No matter what I try to do JBoss ends up overriding any setup I attempt and my application log entries go to the JBoss log file. I have a server that will be running several separate virtual web applications. I need to have separate distinct log files for each application. Right now I get everything in ONE log file. I have tried 3 or 4 distinct solutions to make this happen with only limited success. I have only been able to get part of the application to consistently route log messages to a separate log file. When I deploy the second application my log files will be a disaster if I can't figure this out.

3. Trying to get this logging issue resolved
Glenn Puckett, Apr 20, 2011 7:16 AM (in response to Glenn Puckett)

Is this not being answered because of lack of interest or lack of knowledge? Am I asking a question everyone that knows the answer considers a stupid question? I can't help but be curious.

4. Trying to get this logging issue resolved
jaikiran pai, Apr 20, 2011 7:36 AM (in response to Glenn Puckett)

I did not answer this because, to be honest, I'm really tired of seeing all these logging issues and all these numerous ways of trying to get it to work.
Anyway, let's start this from the beginning. Which version of JBoss AS are you using? What kind of an application are you deploying - .war or .ear? Let's for a moment keep aside the repository selector approach and see what really needs to be done to get this working. So you have an application which has classes belonging to the com.myapp package and you use log4j in those classes. You want those log messages to be logged in a specific blah.log file instead of the default server.log file. Is that correct? Also, one of your posts talks about "when you deploy another application" things go wrong. So you have 2 applications both having com.myapp classes and both logging using log4j, and you want each of those applications to log to a separate file? Are you willing to make changes to the log4j.xml/jboss-log4j.xml that is shipped in JBoss AS? Or do you want to ship your own version of the config file?

5. Trying to get this logging issue resolved
Peter Johnson, Apr 20, 2011 9:14 AM (in response to Glenn Puckett)

Two reasons I can think of. First, you never said which version of JBoss AS. Logging in 6.0.0 is way different than in 5.1.0 and earlier. Second, you are trying to do something fairly complicated with multiple separate logs with unique log entries, and as Jaikiran pointed out, that can get rather complex depending on exactly what you want to see.

6. Trying to get this logging issue resolved
Glenn Puckett, Apr 20, 2011 3:21 PM (in response to Glenn Puckett)

Yes, there are a lot of questions on the forums regarding logging with JBoss. That should be an indicator that there is a serious problem with that topic and it needs some attention. It should not be this difficult to make it happen. Maybe it has been fixed in more recent versions. However, if you think it is tiresome seeing all the questions about logging, think of how tiresome it is to attempt all the various suggestions for fixes just to find that none of them work. I am using version 4.2.3. Yes, I would LOVE to upgrade.
But I am a single developer maintaining an application and supporting the JBoss installation. I tried installing JBoss 5.x and implementing my application in it and it didn't work. I don't have the time at this point to stop my development efforts to figure out a new installation/configuration. v4.2.3 works great for us. When I can spare a week or two to investigate how to configure the latest and greatest of JBoss I will do so.

IMHO when you are running several virtual applications on the same instance of JBoss the logging should be separated by default. Who would want multiple applications generating a single combined log? Also, JBoss itself generates quite a bit of logging output. It just becomes more difficult to have to search through the JBoss log entries just to find specific entries from the application itself.

Actually at this point I have two applications running on one instance of JBoss. One is an enterprise application (EAR) and the other is a Web Application (WAR). The applications are for two totally separate clients. I have abandoned the repository selector. I couldn't begin to get it to work without some feedback. I didn't get any so I discarded that effort.

In my further investigations I came across a message board post suggesting you make sure there are no extra log4j jar files in the WAR file. It turned out that the Eclipse deployment was putting a log4j jar file in the WAR files since it was part of the J2EE library. I can't see there is a way to configure Eclipse to NOT include that jar so I have started taking it out each time I deploy. This has eliminated a warning I was getting at JBoss startup that I didn't realize was a problem. Essentially it was resulting in a class cast exception but was covering that up with a more vague error message. I started getting some messages out when I did this.

By far my preference would be to leave the jboss-log4j.xml file alone and ship the application with its own log4j configuration.
I would rather leave modifications to JBoss itself to the bare minimum and leave application specific stuff in the application. I tried adding a log4j.xml file in the lib directory of WEB-INF and got absolutely nothing out. I really appreciate the response.

7. Trying to get this logging issue resolved
Elias Ross, Apr 20, 2011 11:03 PM (in response to Glenn Puckett)

If you use your app-specific log4j.jar file, then there's basically two copies of each log4j class. As you have seen, the classes part of the container get the container log4j.jar and the jboss-log4j.xml configuration. Your stuff loads classes from your own log4j.jar and works, but not so for the container. Anyway, it's a mess! If I were you:

Forget putting log4j.jar in your deployment. For every entry point in your code, specifically servlet requests or new threads you create, use the log4j MDC class and mark that thread as belonging to your application. You can use something like this, for example. Basically you do:

MDC.put("app", "name of your app")

Then write a filter that filters these events. See this for inspiration:

Create a global jboss-log4j.xml file that contains appenders that filter out only the messages with the MDC set.

...

There's a lot of forum threads about the same stuff, logging in particular, because people have very particular needs.

8. Trying to get this logging issue resolved
Peter Johnson, Apr 21, 2011 11:35 AM (in response to Glenn Puckett)

"It appears at this point that I am getting log entries from the Web application on both the separate application log AND the JBoss log."

That is because all loggers inherit from the root logger and the root logger says to log to FILE and CONSOLE. One way to get your app logging not to go to FILE (server.log) is to remove FILE from the root logger. Of course, that will result in nothing going to FILE because I don't think that any of the logger categories reference the FILE logger.
So you should add a few base categories, such as org.jboss, and reference the FILE appender for them. The best thing to do is scan the current server.log and see what base categories are logged. Note that if you add org.jboss and reference FILE, then categories such as org.jboss.serial and org.jboss.management (which already appear in the jboss-log4j.xml file) inherit from org.jboss and thus will already go to FILE, thus you do not need to add the reference to FILE to those. In general, this is all just about how to properly configure Log4J and has little to do with JBoss AS itself.

"By far my preference would be to leave the jboss-log4j.xml file alone and ship the application with its own log4j configuration."

I assume you have seen this:

9. Trying to get this logging issue resolved
Glenn Puckett, Apr 22, 2011 12:46 PM (in response to Glenn Puckett)

I think I finally have this working. Honestly I'm not sure what I did to fix it. I hate it when that happens. I kept trying various options that I found on message boards and testing until something worked. I tried then abandoned the configuration as suggested in the post above. It never wrote anything to the log. So I concentrated on making it work with everything in the jboss-log4j.xml file. Analyzing the working version with what I had before, I really can't tell any difference. It may be that I never got all the pieces deployed at the same time before this. Sometimes Eclipse and JBoss get a bit out of synch from what I have experienced. But I still had everything in the jboss-log4j.xml file.

As I said, I tried setting it up as described in the thread listed above and got a blank log. So I removed the log4j.jar file and lib directory from the EAR and just left the log4j.xml file at the EAR root. I am now getting a log file generated for my app. Now I have the log configuration application specific and deployed with the application, which is what I wanted. Logging in JBoss has been a frustrating experience.
I don't remember ever seeing a post suggesting that all you need is the configuration file at the EAR level. Folks are always telling you to do some funky things with the log4j jar file. I think that complicates the situation. Having an extra log4j jar file in the EAR/WAR seems to cause an error message as JBoss starts up but it doesn't appear to cause any logging failure unless it is at the root level of the EAR itself. I really appreciate the responses. It helped me sort out what I was and was not doing wrong.

10. Trying to get this logging issue resolved
Glenn Puckett, Apr 22, 2011 1:19 PM (in response to Glenn Puckett)

One further item regarding deploying application specific log4j configurations. For WEB applications that are deployed as separate WAR files, the log4j.xml file needs to be in the classes directory of the WAR. However, in Eclipse you normally won't see the classes directory. You should place the xml file in the src directory. It will get placed in the classes directory at deployment. DO NOT put it in the META-INF directory. It won't work there.

11. Trying to get this logging issue resolved
setianusa, Apr 27, 2011 4:24 PM (in response to Glenn Puckett)

Hi Glenn, if you don't mind, maybe you could share with us your solution: which config files, what you have done, how the ear/war structure should be, etc. Thanks, Tom

12. Trying to get this logging issue resolved
Glenn Puckett, Apr 29, 2011 2:47 PM (in response to Glenn Puckett)

First, there is nothing that needs to be done with any log4j libraries. Most of the posts I found had something to say about managing a log4j.jar file within the application. That is totally unnecessary. Also, I discovered that the way I have it configured it doesn't matter if your EAR/WAR file has a log4j.jar file in it. I can't say for sure which jar file gets used, JBoss or your distribution, it just works. In my Eclipse workspace I have my J2EE application set up as 3 projects.
The base project (MyApp), the EJB project (MyApp EJB) and the WEB project (MyApp WEB). I placed a log4j.xml configuration file at the root level of the base project (MyApp). This configuration file has the Log4j setup for both the EJB and the WEB projects. It appears to be working fine and does not interfere with the JBoss log files at all.

The other application is a Web application only. I placed the log4j.xml file at the root level of the src directory. This is very different from most posts which tell you to put it in WEB-INF\lib. Since the struts.xml file is at the src root I tried the log4j.xml file there and it worked. The deployment process ends up putting log4j.jar in the WEB-INF\lib directory of the WAR file but that does not appear to be used. I have tested with it there and with it deleted with no effect on my logs.
https://developer.jboss.org/message/599936?tstart=0
Removing the Cookie

For testing purposes, it is useful to be able to get rid of the cookie (without waiting a week for it to expire). To save you the trouble of figuring out how to do this—it varies depending on the browser and operating system you are using—we have provided a Web page, at the URL, to do it for you. In case you are wondering, a cookie is removed by setting its expiration date to a date in the past. See the OnPage method in Cinema.RemoveCookie for an example of the necessary code. You can find Cinema.RemoveCookie in the SAMPLES namespace.
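The past-expiration trick is the same in any server stack. As an illustration, Python's http.cookies module can emit a Set-Cookie header of that shape (the cookie name below is made up for the example, not the one the Cinema sample uses):

```python
from http import cookies

# Expire a cookie by sending it back with an expiration date in the past;
# the browser then discards its stored copy.
c = cookies.SimpleCookie()
c["CinemaSession"] = ""  # hypothetical cookie name, value cleared
c["CinemaSession"]["expires"] = "Thu, 01 Jan 1970 00:00:00 GMT"

# The Set-Cookie header the server would send in its response.
header = c.output()
```

Any framework's "delete cookie" helper does essentially this under the hood.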
https://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=TWEB2_BUILDINGAPPTIPS10
If you don't have access to the source code of a class and the class is sealed, or you are of the opinion that inheritance is bad and dangerous, what can you do to add functionality without recreating or aggregating the class? Currently the only answer is to use an extension method.

An extension method is a static method declared within a static class that has a special "pointer" to an instance of the class it extends. The name of the static class, and indeed its namespace, have nothing much to do with the extension method's name and use. So for example, you could create a general class to be used to hold all the extension methods in your project:

public static class MyExtensions
{

If you want to add a method called MyExtension to MyClass1 the declaration needed is:

    public static void MyExtension(this MyClass1 myinstance)
    {
        myinstance.MyMethod1();
    }
}

The first parameter specification "this MyClass1" indicates the type that the method is added to, and the actual parameter "myinstance" is set to a reference to the instance of MyClass1 that the method has been invoked on. That is, myinstance can be used like "this" in a regular class method and hence it gives access to all of the members of the class. To use the extension method you simply invoke the method as if it was a regular method defined as part of the class:

MyClass1 myobj2 = new MyClass1();
myobj2.MyExtension();

At this point it looks as if the extension method is exactly the solution we have been looking for. It even allows you to add methods to classes that have been sealed and so provides a way of extending the .NET base class library and fundamental types such as int or bool. However, there are some restrictions on the way that an extension method works and behaves. The first restriction is that an extension method only has access to the public members of the class it extends. In particular an extension method cannot access private or protected members.
An extension method is no more privileged than any code external to the class. In addition you can't override a class method with an extension method. If you attempt to give an extension method the same name and signature as a class method then the class method is always used in preference. Notice that no warning is given of the name conflict. This means that extension methods really are only able to extend the behaviour of a class and not modify it. This makes extension safer than inheritance and overriding.

Extension methods are, however, inherited. If you create a MyClass2 which inherits from MyClass1 then a MyClass2 object will have all of MyClass1's extension methods. This inheritance isn't virtual and hence polymorphism isn't supported. The availability of an extension method depends on the declared type of a reference and not on the type of the instance it actually references. This means that extension method calls are early bound and type checked at compile time. For example, if MyClass2 inherits from MyClass1 and both declare an extension method MyExtension then MyClass1 will use its version and MyClass2 its version. Notice that there is no need to use new or override to redefine extension methods in derived classes. The rule is that the extension method used by a class is the one that matches its type most accurately. This matching is done at compile time. For example:

MyClass1 myobj1;
myobj1 = new MyClass2();
myobj1.MyExtension();

results in the extension method for MyClass1 being called even though the reference is to an instance of MyClass2. In other words, extension methods are not inherited virtually.

Inheritance is a powerful idea and it can be misused more easily than used to good effect. The problems usually arise when inheritance chains are long and ad hoc – but long inheritance chains are usually an indication that either the principles of object design are not being used or that the situation is truly complex!
Your choices are either to attempt to control inheritance piecemeal with access modifiers or to forbid it completely using sealed. If you choose not to use inheritance then you need to be aware that the alternatives that you are almost certain to be attracted to carry their own problems and increase your workload. Whatever you do, make sure you understand the technology you opt for.
http://www.i-programmer.info/ebooks/deep-c/559-chapter-five.html?start=4
tf_conversions / posemath / pykdl problem with fromMsg

There seems to be a bug in the python kdl wrapper, leading to unexpected behavior when working on KDL frames. I felt like posting it here since other people might run into it before it's fixed, and they would most probably look for a solution here. The simple remedy is of course to assign whatever is returned from fromMsg to a variable before usage.

import roslib; roslib.load_manifest('tf')
roslib.load_manifest('tf_conversions')
import tf_conversions.posemath as pm
roslib.load_manifest('geometry_msgs')
import geometry_msgs

pose_msg = geometry_msgs.msg.Pose()
pose_msg.position.x = 100

print pm.fromMsg(pose_msg).p
#[ 0, 0, 0]

pose = pm.fromMsg(pose_msg)
print pose.p
#[ 100, 0, 0]

We can reproduce this error using kdl directly:

import PyKDL as kdl

kdl.Frame(kdl.Rotation.Quaternion(0,0,0,1), kdl.Vector(100,200,300)).p
# Out[9]: [2.56735e-316, 6.93179e-310, 300]

kdl.Frame(kdl.Rotation.Quaternion(0,0,0,1), kdl.Vector(100,200,300)).p[0]
# Out[10]: 0.0  # which is not 100!

Turns out it's a general problem with SIP. Whenever we have a struct C { Member m; } and access C().m on a temporary instantiation, the instantiated object gets garbage collected while .m still points to the memory held by C++ directly. It seems like there is no way to do this right in SIP.
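The lifetime problem described above can be illustrated without PyKDL or SIP. In CPython, a temporary object is reclaimed as soon as its last strong reference goes away, which is exactly what happens to the Frame returned by fromMsg while .p still points into its memory. A sketch using weakref (the Frame class here is a plain-Python stand-in, not the KDL type):

```python
import weakref

class Frame:
    """Stand-in for the wrapped C++ object owning the data behind .p."""
    def __init__(self):
        self.p = [100.0, 0.0, 0.0]

# Holding only a weak reference to a temporary: in CPython the temporary
# is collected as soon as the enclosing expression finishes, so the
# weakref is already dead -- the analogue of fromMsg(...).p going stale.
dead = weakref.ref(Frame())()

# Binding the result to a name keeps the owner (and the data .p points
# at) alive -- the analogue of `pose = pm.fromMsg(pose_msg)`.
frame = Frame()
alive = weakref.ref(frame)()
```

In the SIP case the view into freed C++ memory is returned instead of None, which is why the values merely look like garbage rather than raising an error.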
http://answers.ros.org/question/27685/tf_conversions-posemath-pykdl-problem-with-frommsg/
Test Coverage Tools in VS2008

Perhaps I spend a bit too much time "working" and not enough time learning the ins and outs of my tools. I stumbled across the Code Coverage Results tab this morning and was quite pleased. The Code Coverage metrics count what lines of code are not tested in your project. I've seen 3rd party tools that do this, but never found this inside of Visual Studio. To use these tools, I believe you must use the Visual Studio testing framework (which I do). Here's how to get it working.

1. Enable Code Coverage in the local test run configuration for your test project. Test > Edit Test Run Configurations > Your Test Configuration. Click on Code Coverage and check the libraries to include in the code coverage routines.

2. Execute your Test Project. After the tests are complete, right-click one of the tests and select "Code Coverage Results".

3. Use the Code Coverage Results window to analyze the coverage of your unit tests. The Code Coverage Results window (with the default columns) tells you your:
- Not Covered Blocks
- Not Covered Blocks %
- Covered Blocks
- Covered Blocks %

In the image above, you can see that ERC.Models has VERY poor coverage. That's a LINQ library that, quite honestly, DOES have poor coverage, as all of the code is automatically generated. The implementation of the Model (in ERC.Controllers) has quite good coverage, but has room for improvement. I can further drill down into the ERC.Controllers namespace and see that I left out the ReportsController. I remember creating the tests for the controller, but I added a quick method to it and forgot to update the test. For this controller, with only a few lines, it's easy to spot the problem, but what about a namespace or class with thousands of lines of code? This is where the code highlighting comes in handy.

4. Use Code Highlighting to pick out the missed lines of code. Click on the Code Highlighting button to toggle highlighting on and off.
The code highlighting button, seen to the left, toggles red highlights on and off in your code. This only works for code that is included in your code coverage metrics, but it helps the developer find those little code blocks that may have been overlooked. In my ReportsController, I remember adding a quick method but forgetting to update the test. I can open up that controller and see that the untested code is now highlighted. From here, I can go back, add or update the appropriate tests, and rerun. It's a simple feature, but GREAT to see it built in, especially now that I know it's there! The only caveat is that I wish you could (or wish I knew how to) exclude pre-generated code, such as LINQ code.

Good Information, Thanks for sharing. DotNetGuts
https://tiredblogger.wordpress.com/2008/05/02/test-coverage-tools-in-vs2008/
TensorFlow.js Crash Course – Machine Learning For The Web – Getting Started

What is TensorFlow.js

TensorFlow.js is a JavaScript library which makes it possible to add machine learning capabilities to any web application. With TensorFlow.js you can develop machine learning scenarios from scratch. You can use the APIs to build and train models right in the browser or in your Node.js server application. Furthermore, you can use TensorFlow.js to run existing models in your JavaScript environment. You can even use TensorFlow.js to retrain pre-existing machine learning models with data which is available client-side in the browser, e.g. image data from your web cam. The project's website can be found at:

TensorFlow.js Fundamentals

Before getting started with a practical example, let's take a look at the main building blocks in TensorFlow.

Tensors

Tensors are the central unit of data in TensorFlow. A tensor contains a set of numeric values and can be of any shape: one or more dimensional. When you're creating a new tensor you need to define the shape as well. You can do that by using the tensor function and defining the shape by passing in a second argument, like you can see in the following:

const t1 = tf.tensor([1,2,3,4,2,4,6,8], [2,4]);

This is defining a tensor of a shape with two rows and four columns. The resulting tensor looks like the following:

[[1,2,3,4],
 [2,4,6,8]]

It's also possible to let TensorFlow infer the shape of the tensor:

const t2 = tf.tensor([[1,2,3,4], [2,4,6,8]]);

The result would be the same as before.
Furthermore, you can use the following functions to enhance code readability:
- tf.scalar: Tensor with just one value
- tf.tensor1d: Tensor with one dimension
- tf.tensor2d: Tensor with two dimensions
- tf.tensor3d: Tensor with three dimensions
- tf.tensor4d: Tensor with four dimensions

If you would like to create a tensor with all values set to 0, you can use the tf.zeros function, as you can see in the following:

const t_zeros = tf.zeros([2,3]);

This line of code is creating the following tensor:

[[0,0,0],
 [0,0,0]]

In TensorFlow.js all tensors are immutable. That means that a tensor, once created, cannot be changed afterwards. If you perform an operation which changes the values of a tensor, a new tensor with the resulting values is always created and returned.

Operations

By using TensorFlow operations you can manipulate the data of a tensor. Because of the immutability of tensors, operations always return a new tensor with the resulting values. TensorFlow.js offers many useful operations like square, add, sub and mul. Applying an operation is straightforward, as you can see in the following:

const t3 = tf.tensor2d([1,2,3,4], [2,2]);
const t3_squared = t3.square();

After having executed this code, the new tensor contains the following values:

[[1, 4 ],
 [9, 16]]

Models And Layers

Models and layers are the two most important building blocks when it comes to deep learning. Each model is built up of one or more layers. TensorFlow supports different types of layers, and for different machine learning tasks you need to use and combine different types of layers. For the moment it's sufficient to understand that layers are used to build up neural networks (models) which can be trained with data and then used to predict further values based on the trained information.

Setting Up The Project

Let's start by taking a look at a real world example. In the first step we need to set up the project.
Create a new empty directory:

$ mkdir tfjs01

Change into that newly created project folder:

$ cd tfjs01

In index.html let's insert the following code of a basic HTML page:

<html>
<body>
<div class="container">
<h1>Welcome to TensorFlow.js</h1>
<div id="output"></div>
</div>
<script src="./index.js"></script>
</body>
</html>

In addition, add the following code to index.js:

import 'bootstrap/dist/css/bootstrap.css';
document.getElementById('output').innerText = "Hello World";

We're writing the text Hello World to the element with ID output to see a first result on the screen and get the confirmation that the JS code is being processed correctly. Finally, let's start the build process and the development web server by using the parcel command in the following way:

$ parcel index.html

You should now be able to open the website via URL in your browser. The result should correspond to what you can see in the following screenshot:

Adding TensorFlow.js

To add TensorFlow.js to our project we again make use of NPM and execute the following command in the project directory:

$ npm install @tensorflow/tfjs

This downloads the library and installs it into the node_modules folder. Having executed this command successfully, we're now ready to import the TensorFlow.js library in index.js by adding the following import statement at the top of the file:

import * as tf from '@tensorflow/tfjs';

As we're importing TensorFlow.js as tf, we now have access to the TensorFlow.js API by using the tf object within our code.

Defining The Model

Now that TensorFlow.js is available, let's start with a first simple machine learning exercise. The machine learning scenario the following sample application should cover is based on the formula Y=2X-1, a linear regression. This function returns the value Y for a given X.
If you plot the points (X,Y) you will get a straight line, like you can see in the following:

The machine learning exercise we'd like to implement will use input data (X,Y) from this function and train a model with these value pairs. The model will not know the function itself, and we'll use the trained model to predict Y values based on X value inputs. The expectation is that the Y results which are returned from the model are close to the exact values which would be returned by the function.

Let's create a very simple neural network to perform the inference. This model needs to deal with just one input value and one output value:

// Define a machine learning model for linear regression
const model = tf.sequential();
model.add(tf.layers.dense({units: 1, inputShape: [1]}));

First we're creating a new model instance by calling the tf.sequential method, which returns a new sequential model. A sequential model is any model where the outputs of one layer are the inputs to the next layer, i.e. the model topology is a simple 'stack' of layers, with no branching or skipping. Having created that model, we're ready to add a first layer by calling model.add. A new layer is passed into the add method by calling tf.layers.dense. This creates a dense layer. In a dense layer, every node in the layer is connected to every node in the preceding layer. For our simple example it's sufficient to only add one dense layer with an input and output shape of one to the neural network.

In the next step we need to specify the loss and the optimizer function for the model.

// Specify loss and optimizer for model
model.compile({loss: 'meanSquaredError', optimizer: 'sgd'});

This is done by passing a configuration object to the call of the compile method of the model instance. The configuration object contains two properties:
- loss: Here we're using the meanSquaredError loss function.
In general, a loss function is used to map values of one or more variables onto a real number that represents some "cost" associated with the value. When the model is trained, it tries to minimize the result of the loss function. The mean squared error of an estimator measures the average of the squares of the errors, that is, the average squared difference between the estimated values and what is estimated.
- optimizer: The optimizer function to use. For our linear regression machine learning task we're using the sgd function. Sgd stands for Stochastic Gradient Descent and it is an optimizer function which is suitable for linear regression tasks like in our case.

Now the model is configured, and the next task to perform is the training of the model with values.

Training The Model

To train the model with value pairs from the function Y=2X-1 we define two tensors with shape [6,1]. The first tensor xs contains the x values and the second tensor ys contains the corresponding y values:

// Prepare training data
const xs = tf.tensor2d([-1, 0, 1, 2, 3, 4], [6, 1]);
const ys = tf.tensor2d([-3, -1, 1, 3, 5, 7], [6, 1]);

Now let's train the model by passing the two tensors to the call of the model.fit method.

// Train the model
model.fit(xs, ys, {epochs: 500}).then(() => {
});

As the third parameter we're passing over an object which contains a property named epochs, which is set to the value 500. The number which is assigned here specifies how many times TensorFlow.js is going through your training set. The result of the fit method is a promise, so we're able to register a callback function which is activated when the training is concluded.

Prediction

Now let's perform the final step inside this callback function and predict a y value based on a given x value:

// Train the model
model.fit(xs, ys, {epochs: 500}).then(() => {
    // Use model to predict values
    model.predict(tf.tensor2d([5], [1,1])).print();
});

The prediction is done using the model.predict method.
This method expects to receive the input value as a parameter in the form of a tensor. In this specific case we're creating a tensor with just one value (5) inside and pass it over to predict. By calling the print function we're making sure that the resulting value is printed to the console, as you can see in the following:

The output shows that the predicted value is 8.9962864, and that is very close to 9, which would be the Y value of function Y=2X-1 if x is set to 5.

Optimizing The User Interface

The example which has been implemented uses a fixed input value for prediction (5) and outputs the result to the browser console. Let's introduce a more sophisticated user interface which gives the user the possibility to enter the value which should be used for prediction. In index.html add the following code:

<html>
<body>
<div class="container" style="padding-top: 20px">
<div class="card">
<div class="card-header">
<strong>TensorFlow.js Demo - Linear Regression</strong>
</div>
<div class="card-body">
<label>Input Value:</label>
<input type="text" id="inputValue" class="form-control"><br>
<button type="button" class="btn btn-primary" id="predictButton" disabled>Model is being trained, please wait ...</button><br><br>
<h4>Result:</h4>
<h5><span class="badge badge-secondary" id="output"></span></h5>
</div>
</div>
</div>
<script src="./index.js"></script>
</body>
</html>

Here we're making use of various Bootstrap CSS classes, adding input and button elements to the page, and defining an area which is used for outputting the result.
We need to make a few changes in index.js too:

import * as tf from '@tensorflow/tfjs';
import 'bootstrap/dist/css/bootstrap.css';

// Define a machine learning model for linear regression
const model = tf.sequential();
model.add(tf.layers.dense({units: 1, inputShape: [1]}));

// Specify loss and optimizer for model
model.compile({loss: 'meanSquaredError', optimizer: 'sgd'});

// Prepare training data
const xs = tf.tensor2d([-1, 0, 1, 2, 3, 4], [6, 1]);
const ys = tf.tensor2d([-3, -1, 1, 3, 5, 7], [6, 1]);

// Train the model and set predict button to active
model.fit(xs, ys, {epochs: 500}).then(() => {
    document.getElementById('predictButton').disabled = false;
    document.getElementById('predictButton').innerText = "Predict";
});

// Register click event handler for predict button
document.getElementById('predictButton').addEventListener('click', (event) => {
    // Parse the text input to a number before building the tensor
    let val = parseFloat(document.getElementById('inputValue').value);
    document.getElementById('output').innerText = model.predict(tf.tensor2d([val], [1,1]));
});

An event handler for the click event of the predict button is registered. Inside this function the value of the input element is read (and parsed to a number) and the model.predict method is called. The result which is returned by this method is inserted into the element with ID output. The result should now look like the following:

The user is now able to input the value (X) for which the Y value should be predicted. The prediction is done when the Predict button is clicked:

The result is then shown directly on the website.

What's Next

In this first episode of this series you've learned the basics of TensorFlow.js, and by using that library we've implemented a first simple machine learning example based on linear regression. Now you should have a basic understanding of the main TensorFlow.js building blocks. In the next part we'll again focus on a practical machine learning example and dive deeper into JavaScript-based machine learning with TensorFlow.js.
https://codingthesmartway.com/tensorflow-js-crash-course-machine-learning-for-the-web-getting-started/
*************************************************
Originally developed by me, Tahir Ramzan
*************************************************

Friend, when I compile and debug or run it, it gives a warning that the source file is not complete.

JazakAllah. AoA, mashaAllah, but this one is also giving errors when compiling. I uploaded a screenshot of the errors; please help to resolve them. Thanks, JazakAllah.

Type "using namespace std;" at the start of the file and remove ".h" from #include<iostream.h>. Hope your problem will be solved.

Not solved yet; the same errors occur.

Thanks a lot Tahir bhai, it works really fine, and thanks for helping virtualians.

Dear friends, check it.

Assalamu alaikum. Please, is anyone studying C? Please help me, I need to study it.

Me too.

Hey, you also need to study it? Then reply if you're online.

Yes I am.
https://vustudents.ning.com/forum/topics/cs201-4-full-n-final-perfect-solution?groupUrl=cs201introductiontoprogramming&commentId=3783342%3AComment%3A1358807&groupId=3783342%3AGroup%3A58836
Advanced Data Import 3.1

File size: 12.7 Mb
Platform: Vista, Windows, Mobile
License: Shareware
Price: $195.00
Downloads: 202
Date added: 2009-03-16
Publisher: EMS Database Management Solutions, Inc

Advanced Data Import 3.1 description

Advanced Data Import 3.1 is an easy-to-use and powerful program that imports data from files of the most popular data formats to a database quickly, irrespective of the source data format.

In-purchase benefits
- FREE One Year of Maintenance already included!
- FREE software updates and upgrades during Maintenance period!
- FREE and unlimited Technical Support during Maintenance period!
- Reasonable pricing for Maintenance renewal – from 20% per Year!
- Volume discounts when buying two or more copies of one product
- Cross-selling discounts on related products

Major features
- Borland Delphi 5-7, 2005, 2006, CodeGear Delphi 2007, 2009 and Borland C++ Builder 5-6, CodeGear C++ Builder 2007, 2009 support
- BDS 2009 support added
- Unicode support

Requirements:
- Pentium 300, 64 MB RAM
- Borland Delphi 5-7, 2005, 2006
- Borland C++ Builder 5-6
http://wareseeker.com/Software-Development/advanced-data-import-3.1.zip/1f3c4e948
Talk:Proposed features/Scheduled lifecycle

I really like the concept of namespace prefixes and the systematization of tags. But mapping non-existent objects is obviously against the core principles of OSM, described here: Good practice. It applies both to historical objects with no material traces left and to future objects with no sign of construction. It doesn't matter how sure we are about a particular construction project being started - it doesn't exist. In addition, no new tags can change the situation of systematic abuse of them - some people just can't stop thinking wishfully. --BushmanK (talk) 17:59, 31 May 2016 (UTC)

Rendering need?

I'm not sure proposed/scheduled needs to be rendered (although it might be nice). It is handy to be able to map things ready for them to reach the next stage (be that construction or final tags), and so I have added some things to OSM for that. Scheduled is confusing; there is potentially only a short time between proposals being accepted and when you start to see signs of construction. Construction could be the ground being cleared, or even a site being closed so it can be demolished (slightly against the language, I use landuse=construction for demolition sites because I expect them to be constructed on shortly after). - LastGrape/Gregory 12:30, 1 June 2016 (UTC)
https://wiki.openstreetmap.org/wiki/Talk:Proposed_features/Scheduled_lifecycle
Features

Fabric implements functions which can be used to communicate with remote hosts:

fabric.operations.run()
This operation is used to run a shell command on a remote host. Examples:

run("ls /var/www/")
run("ls /home/userx", shell=False)
output = run('ls /var/www/mysites')

fabric.operations.get()
This function is used to download file(s) from a remote host. The example below shows how to download a backup from a remote server.

# Downloading a back-up
get("/backup/db.bak", "./db.bak")

fabric.operations.put()
This function uploads file(s) to a remote host. For example:

with cd('/tmp'):
    put('/path/to/local/test.txt', 'files')

fabric.operations.reboot()
As the name suggests, this function reboots a remote server.

# Reboot the remote system
reboot()

fabric.operations.sudo()
This function is used to execute commands on a remote host with superuser privileges. Additionally, you can also pass an additional user argument which allows you to run commands as a user other than root. Example:

# Create a directory
sudo("mkdir /var/www")

fabric.operations.local()
This function is used to run a command on the local system. An example is:

# Extract the contents of a tar archive
local("tar xzvf /tmp/trunk/app.tar.gz")
# Remove a file
local("rm /tmp/trunk/app.tar.gz")

fabric.operations.prompt()
This function prompts the user with text and returns the input. Examples:

# Simplest form:
environment = prompt('Please specify target environment: ')
# specify host
env_host = prompt('Please specify host:')

fabric.operations.require()
This function is used to check for given keys in a shared environment dict. If not found, the operation is aborted.

SSH Integration

One of the ways developers interact with remote servers besides FTP clients is through SSH. SSH is used to connect to remote servers and do everything from basic configuration to running Git or initiating a web server. With Fabric, you can perform SSH activities from your local computer.
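As a quick aside before the SSH examples: the effect of fabric.operations.local can be approximated with Python's standard library alone. The sketch below is an illustrative stand-in only, not Fabric's actual implementation; the helper name local is reused purely for the analogy.

```python
import subprocess

def local(command):
    # Rough stdlib stand-in for fabric.operations.local: run a command
    # on the local machine and return its captured stdout.
    result = subprocess.run(
        command, shell=True, capture_output=True, text=True, check=True
    )
    return result.stdout.strip()

print(local("echo hello"))  # hello
```

Here check=True makes a non-zero exit status raise CalledProcessError, roughly mirroring how a failing command aborts a Fabric run.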
The example below defines functions that show how to check free disk space and host type. It also defines which host will run the command:

# Import Fabric's API module
from fabric.api import run

env.hosts = '159.89.39.54'
# Set the username
env.user = "root"

def host_type():
    run('uname -s')

def diskspace():
    run('df')

def check():
    # check host type
    host_type()
    # Check diskspace
    diskspace()

In order to run this code, you will need to run the following command on the terminal:

fab check

Output:

[159.89.39.54] Executing task 'check'
[159.89.39.54] run: uname -s
[159.89.39.54] Login password for 'root':
[159.89.39.54] out: Linux
[159.89.39.54] run: df
[159.89.39.54] out: Filesystem 1K-blocks Used Available Use% Mounted on
[159.89.39.54] out: udev 242936 0 242936 0% /dev
[159.89.39.54] out: tmpfs 50004 6020 43984 13% /run
[159.89.39.54] out: /dev/vda1 20145768 4398716 15730668 22% /
[159.89.39.54] out: tmpfs 250012 1004 249008 1% /dev/shm
[159.89.39.54] out: tmpfs 5120 0 5120 0% /run/lock
[159.89.39.54] out: tmpfs 250012 0 250012 0% /sys/fs/cgroup
[159.89.39.54] out: /dev/vda15 106858 3426 103433 4% /boot/efi
[159.89.39.54] out: tmpfs 50004 0 50004 0% /run/user/0
[159.89.39.54] out: none 20145768 4398716 15730668 22% /var/lib/docker/aufs/mnt/781d1ce30963c0fa8af93b5679bf96425a0a10039d10be8e745e1a22a9909105
[159.89.39.54] out: shm 65536 0 65536 0% /var/lib/docker/containers/036b6bcd5344f13fdb1fc738752a0850219c7364b1a3386182fead0dd8b7460b/shm
[159.89.39.54] out: none 20145768 4398716 15730668 22% /var/lib/docker/aufs/mnt/17934c0fe3ba83e54291c1aebb267a2762ce9de9f70303a65b12f808444dee80
[159.89.39.54] out: shm 65536 0 65536 0% /var/lib/docker/containers/fd90146ad4bcc0407fced5e5fbcede5cdd3cff3e96ae951a88f0779ec9c2e42d/shm
[159.89.39.54] out: none 20145768 4398716 15730668 22% /var/lib/docker/aufs/mnt/ba628f525b9f959664980a73d94826907b7df31d54c69554992b3758f4ea2473
[159.89.39.54] out: shm 65536 0 65536 0% /var/lib/docker/containers/dbf34128cafb1a1ee975f56eb7637b1da0bfd3648e64973e8187ec1838e0ea44/shm
[159.89.39.54] out: Done.

Disconnecting from 159.89.39.54... done.

Automating Tasks

Fabric enables you to run commands on a remote server without needing to log in to it. Remote execution with Fabric can lead to security threats since it requires an open SSH port, especially on Linux machines. For instance, let's assume you want to update the system libraries on your remote server. You don't need to retype the commands every time: you can just write a simple fab file which you run whenever you want to execute the tasks.

In this case, you will first import the Fabric API's module:

from fabric.api import *

Define the remote host you want to update:

env.hosts = '159.89.39.54'

Set the username of the remote host:

env.user = "root"

Although it's not recommended, you might need to specify the password to the remote host. Lastly, define the function that updates the libraries on your remote host:

def update():
    """
    Update the default OS installation's basic default tools.
    """
    run("apt-get update")

Now that your fab file is ready, all you need to do is execute it as follows:

$ fab update

You should see the following result:

$ fab update
[159.89.39.54] Executing task 'update'
[159.89.39.54] run: apt-get update
[159.89.39.54] Login password for 'root':

If you didn't define the password, you will be prompted for it. After the program has finished executing the defined commands, you will get the following response if no errors occur:

$ fab update
............
Disconnecting from 159.89.39.54... done.

Conclusion

This tutorial has covered what is necessary to get started with Fabric locally and on remote hosts. You can now confidently start writing your own scripts for building, monitoring or maintaining remote servers.
https://code.tutsplus.com/tutorials/getting-started-with-the-fabric-python-library--cms-30555
I am trying to create a .csv file of UUID numbers. I see how to make a single UUID number in Python but can't get the correct syntax to make 50 numbers and save them to a .csv file. I've googled and found many ways to create .csv files and how to use a for loop, but none seem to pertain to this particular application. Thank you for any help.

Just combine a csv writer with a uuid generator:

import csv
import uuid

with open('uuids.csv', 'w') as csvfile:
    uuidwriter = csv.writer(csvfile)
    for i in range(50):
        uuidwriter.writerow([uuid.uuid1()])
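As a quick check of the approach (a standalone sketch, using uuid4 for random IDs rather than the uuid1 used in the answer), you can write the 50 rows and read them straight back, parsing each cell with uuid.UUID to confirm the file contains valid UUIDs:

```python
import csv
import uuid

# Write 50 random UUIDs, one per row.
with open('uuids.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    for _ in range(50):
        writer.writerow([uuid.uuid4()])

# Read them back and parse each cell to confirm it is a valid UUID.
with open('uuids.csv', newline='') as f:
    parsed = [uuid.UUID(row[0]) for row in csv.reader(f)]

print(len(parsed))  # 50
```

Passing newline='' to open is the documented way to use the csv module, so rows are not double-spaced on Windows.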
https://codedump.io/share/hwVfBLuAHqmx/1/saving-a-list-of-uuid-numbers-to-a-csv-file-in-python
Thread: When is the throws keyword used?

Hi, when is the throws keyword used? Who knows the best answer for that question?

Did you try an internet search? There seem to be plenty of results for that question. This is the first one I found: (Although given what you've said about your location, shouldn't you be the one explaining it to me?)
Don't serve your porridge and then go out for a walk.

Hey guys, well, I don't know much about the throws keyword, but I think that throws is used when you expect an error of some kind, and then you put it right after the introduction of the class, like: public class Class throws "NullPointerException". Thanks!!
http://www.sitepoint.com/forums/showthread.php?1180118-When-throws-keyword-is-used&mode=hybrid
ListItemSwitch QML Type

The list item with switch component of Neptune 3.

Properties
Signals

Detailed Description

The ListItemSwitch provides a type of list item with a Switch at the right side. See Neptune 3 UI Components and Interfaces to see more available components in Neptune 3 UI.

Example Usage

The following example uses ListItemSwitch:

import QtQuick 2.10
import shared.controls 1.0

Item {
    id: root
    ListView {
        model: 3
        delegate: ListItemSwitch {
            Layout.fillWidth: true
            icon.name: "ic-update"
            text: "Downloading the application"
            onSwitchClicked: console.log("switch clicked");
        }
    }
}

Property Documentation

This property holds the logical position of the thumb indicator. This property's default is 0.0.

This property holds whether the switch is on or off. This property's default is false.

Signal Documentation

This signal is emitted when the switch is clicked by the user.

This signal is emitted when the switch is toggled by the user.
https://doc.qt.io/archives/neptune3ui/qml-listitemswitch.html
hey im having trouble working out the average, it keeps showing 0 instead

#include <iostream>
using namespace std;

int main()
{
    int count;
    double grade, total, average;
    grade = 0;
    total = 0;
    count = 0;

    cout << "\nTo stop entering grades. Type in the number";
    cout << "\n 999.\n\n";
    total = total + grade;
    cout << "Enter a grade: ";
    cin >> grade;
    count++;

    while (grade != 999)
    do
    {
        cout << "\nEnter a grade: ";
        cin >> grade;
        if (grade < 0 || grade > 100 && grade != 999)
        {
            cout << "\n Invalid grade has been entered"
                 << "\nPlease check the grade and re-enter";
        }
        else
            break; // break if a valid grade was entered
    } while (1); // this expression is always true

    average = total / count;
    cout << "\nThe average of the numbers is " << average << endl;
    system("pause");
    return 0;
}
https://www.daniweb.com/programming/software-development/threads/42533/need-help-this-is-stressin-me-out
-- third edition -- November, 2002

Index

Let me start with an early reminder, for the few of us to whom these 6-monthly reports still come as such a surprise that they have problems scheduling the time they would need to contribute (perhaps even before the target deadline?-) -- because there are a few summaries I had hoped for that have failed to materialize...

Less than 6 months left until the next edition! In April 2003, you will be invited to contribute to the May 2003 edition of this report!

But now you must be eager to read it, so -- enjoy!-)

Claus Reinke, University of Kent at Canterbury, UK

What can haskell.org do for you? There are a lot of things we can do that are of use to members of the haskell community: Just what can you do for haskell.org? Here are a few ideas:

The good news is that the Haskell 98 Report is finally done. The original Haskell 98 report came out in February 1999. Soon afterwards, in 2000, I began to collect "typos". It soon became apparent that more than just typographical errors were involved. My goals became:

The Haskell committee per se no longer existed, so I have consulted the Haskell mailing list about every change. Two years on, I have accumulated over 2000 email messages, almost all of which have required careful reading on my part. They led to more than 230 individually-documented changes, plus dozens of more minor corrections. At times it seemed that each time I put out a draft, people would spot a new raft of issues, but the process does now seem to have converged. I am still making tiny changes but, from a content point of view, it's all over bar the shouting.
A lot of people have now looked hard at the Report and, for all its flaws, it's now in pretty good shape. It would be a very good thing for our community if the Report (both language and libraries) was available as a physical book. I made enquiries with several publishers, who declined politely. (They want to print high-volume undergraduate texts.) However, Cambridge University Press have agreed to publish it! It will come out both as Volume 13(1) (Jan 2003) of the Journal of Functional Programming, and as a separate book. It is being typeset now.

The question of copyright remains open at the time of writing. CUP normally take copyright of what they publish, but the Haskell report is rather different, because it belongs to all of us, not just to me. What is definitely agreed is that the Report will continue to be available online, and that it can freely be reproduced for non-commercial purposes. But is that enough?

The Haskell workshop was a good opportunity to have an open discussion about this question. There was a strong sentiment that we would much prefer the Report to be entirely unrestricted (as it has been to date). For example, would Debian distribute the Report as they do now? Probably not. Would the just-pre-publication online version be completely unrestricted? Presumably so. Would electronic distribution be OK? Unclear. And so on.

On the other hand, there was also strong feeling that publishing a book would be a Really Good Thing; that CUP will probably lose money on the exercise; and that they are being much more flexible already than their normal terms. (John Reppy tells me that the Standard ML Basis Library book is much more restricted.)

In the end, we took a straw poll, to get the sense of the meeting. Here's what happened. While many people have strongly-held views, it was a polite debate that generated at least as much light as heat.
While there isn't a consensus of opinion about A/B/C, there was a consensus that views can legitimately differ on this point, and we should go with a majority view. So the outcome was that I should present as strong a case as possible to CUP for completely open publication; failing that, for open electronic publication; but failing that, publish anyway. This I have done, and CUP are considering it as I write. I hope this is acceptable to the Haskell community. The Haskell workshop does not include everyone with a stake, by any means, but there were enough people present to be reasonably representative.

Ultimately, all these things should be linked from the Haskell bookshelf (have another look, it is not limited to books;-): Following on from our last edition, this section tries to enhance the visibility of such valuable resources, by introducing new additions.

Hal Daume mentions the "Yet Another Haskell Tutorial" project: "The goal of Yet Another Haskell Tutorial is to provide a complete introduction to the Haskell programming language. It assumes knowledge neither of the Haskell language nor of functional programming in general. However, general familiarity with programming concepts (such as algorithms) will be helpful. This is not intended to be an introduction to programming in general; rather, to programming in Haskell."

Alastair Reid is working on a tutorial for the new Foreign Function Interface: "The goal is to collect together all the useful resources, compare the various ffi tools and implementations, provide tips and tricks for dealing with common awkward cases, etc. but the reality falls far behind the dream at present."

For those always-on Haskellers who can find spare moments between following the dozens of Haskell mailing lists, Andrew Bromage points out that there is a semi-official Haskell IRC channel on the freenode network. You can get there by pointing your IRC client at irc.freenode.net and joining #haskell.
Logs are available here: In this section, we try to give pointers to and perhaps short descriptions of recent Haskell-related publications (books, conference proceedings, special issues in journals, PhD theses, etc.), with brief abstracts. For a more exhaustive overview of Haskell publications, see Jim Bender's "Online Bibliography of Haskell Research" (). Please make sure to keep him up to date about new (and not so new) Haskell-related publications!

In case you hadn't noticed, the JFP special issue on Haskell has finally appeared: Volume 12 -- Issue 05 -- July 2002 (we had its abstracts in our November 2001 edition;-). If you or your institution has a subscription, you can also get it online via the journal's home page, otherwise check out your nearest University library.

The 2002 Haskell Workshop proceedings are now online, in the ACM digital library; see the workshop home page for link and titles.

Functional and Declarative Programming in Education, FDPE02, was a workshop at PLI02, Pittsburgh. Many of its papers are relevant to Haskell based courses, and the proceedings are available at.

"Value Recursion in Monadic Computations", Levent Erkok, PhD Thesis, OGI/OHSU, October 2002 (Advisor: John Launchbury). This thesis addresses the interaction between recursive declarations and computational effects modeled by monads. More specifically, we present a framework for modeling cyclic definitions resulting from the values of monadic actions. We introduce the term "value recursion" to capture this kind of recursion. Our model of value recursion relies on the existence of particular fixed-point operators for individual monads, whose behavior is axiomatized via a number of equational properties. These properties regulate the interaction between monadic effects and recursive computations, giving rise to a characterization of the required recursion operation.
We present a collection of such operators for monads that are frequently used in functional programming, including those that model exceptions, non-determinism, input-output, and stateful computations. In the context of the programming language Haskell, practical applications of value recursion give rise to the need for a new language construct, providing support for recursive monadic bindings. We discuss the design and implementation of an extension to Haskell's do-notation which allows variables to be bound recursively, eliminating the need for programming with explicit fixed-point operators. Details (including downloadable text of the thesis) are available at: (see also section 3.7.1)

"A Formal Specification of the Haskell 98 Module System", Iavor S. Diatchki, Mark P. Jones, and Thomas Hallgren. Many programming languages provide means to split large programs into smaller modules. The module system of a language specifies what constitutes a module and how modules interact. This paper presents a formal specification of the module system for the functional programming language Haskell. Although many aspects of Haskell have been subjected to formal analysis, the module system has, to date, been described only informally as part of the Haskell language report. As a result, some aspects of it are not well understood or are under-specified; this causes difficulties in reasoning about Haskell programs, and leads to practical problems such as inconsistencies between different implementations. One significant aspect of our work is that the specification is written in Haskell, which means that it can also be used as an executable test-bed, and as a starting point for Haskell implementers. Available at

Simon Marlow and Simon Peyton Jones continue to hack away on GHC, aided by many others (see).
In the last six months we have been working on three major new areas: Template Haskell isn't in any released GHC, but it is available and working in the HEAD, for those who care to check out source code.

Push-enter looks very attractive for lazy languages, but it has many small costs scattered through the code generator and run-time system. For one thing, it seems to be practically impossible to compile it into C--, even though C-- was designed to be as flexible a code generator as possible (): we just couldn't think of a clean way to design C-- to deal with push-enter. (Except by using C-- in the unsatisfactory way we currently use C, namely ignoring the C stack and using an explicitly-managed stack instead.) So we have spent quite a bit of effort rejigging GHC's back end and run-time system to use eval-apply instead of push-enter. So far it seems that performance is pretty much unchanged; the number of lines of code in the runtime system and code generator is pretty much unchanged; but the complicated stuff is concentrated in a few places rather than being thinly distributed. And it makes the native C-- route possible. This stuff isn't even in the repository yet, but it will be.

Other excitements: On their own many of these features look a bit unnecessary, but Haskell encourages virtuoso programming with types, and it can make all the difference to be able to say what you mean in the type language.

The Hugs98 interpreter is now maintained by Sigbjorn Finne and Jeffrey Lewis, both of Galois Connections, with help from Alastair Reid of Reid Consulting and Ross Paterson of City University London and others. At the time of writing, a new major release is on the brink of being released. Feature highlights of this new release will be: Our primary goal is for Hugs to continue its move to greater compatibility with Haskell98, GHC and NHC.
This will be helped enormously by our mutual support for the FFI and hierarchical library specification and adoption of a common codebase for the libraries. This release adds a lot of new functionality whilst maintaining compatibility with previous releases; future releases will drop some of this backwards compatibility.

We released nhc98 version 1.14 in June 2002, and hmake 3.06 in August. Since both the language and the compiler are now very stable, these were mainly bugfix releases. One new feature is a `package' mechanism closely based on the ghc model, so third-party libraries can be easily added and removed. (hmake's handling of packages has also improved significantly.) Platform support now includes MacOS-X, in addition to just about every other Unix-like environment. The next release (1.16) will probably arrive towards the very end of the year. Its main new features will be support for the latest most stable version of the standard FFI, and for the new library packages already supported by ghc and Hugs, both of which we have been promising for a long time now.

The Eager Haskell compiler runs ordinary Haskell programs using resource-bounded eager evaluation. Project details are available from. The best overview of the current system can be found in my recent Haskell Workshop paper: The Eager Haskell compiler is now available as source code runnable on Linux x86. It's already in use by about 15 MIT students taking Arvind's course. Installation should only require gcc 2.95.3 or later; hacking the compiler itself will require a working Haskell 98 compiler. See the project homepage for more information and a download link. Porting should be extremely easy if your system has gcc; x86 Linux is simply the only machine we've tested on. The present compiler unifies the Eager Haskell and pH compilers under a single umbrella; the language in use can be selected by a compile-time switch. By default, phc is an ordinary Haskell compiler.
Project Status: Version 1.0 (RC7). The Haskell 98 FFI Addendum is meanwhile up to Release Candidate 7 and recently triggered an involved technical discussion on what is the right interface for finalizers for foreign objects. For details, see the archive of the FFI mailing list: After the finalizer debate has settled, there will be another release for public review. The current version of the addendum is available from

GHC supports the FFI extension as defined in the addendum, in addition to the pre-standard syntax for backward compatibility. Hugs' FFI support has recently been overhauled and mostly brought in line with the FFI Addendum by Alastair Reid. Work on bringing nhc98 closer to the proposed standard is underway. Activity has shifted towards populating and supporting the new hierarchical libraries (4.1).

GpH aims to provide low-pain parallelism, i.e. acceptable performance with minimal programming effort. It does so by introducing a single new primitive combinator: par x y that returns y but may create a thread to evaluate x depending on machine load. Evaluation strategies are higher-order polymorphic functions that abstract over par and seq to provide high level constructs to coordinate parallelism, e.g. parList s applies strategy s to every element of a list. The project has been running since 1994, initially at Glasgow and subsequently at Heriot-Watt and St Andrews Universities. Recent work covers language, system and applications aspects, and consistently emphasises the architecture independence (cf.) of our approach. A robust version of GpH (GUM-4.06) is available for RedHat-based Linux machines (binary snapshot; installation instructions). Versions for Sun shared-memory machines, Debian, and an alpha-release based on GHC 5.02 are available on request <gph@cee.hw.ac.uk>.

to avoid global garbage collection. The current (freely available) implementation is based on GHC 5.00.2 (beta release).
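The par/strategy combination described above can be sketched in a few lines of plain Haskell. This is only an illustrative sketch: GpH's real par (and the seq used by strategies) comes from its runtime system, so here we substitute a sequential stand-in for par that sparks nothing, letting the sketch run on any stock Haskell compiler. The names Strategy, rwhnf, parList and using follow the evaluation-strategies style, but this is not the GpH library itself.

```haskell
-- A minimal sketch of GpH-style evaluation strategies, using only the
-- standard Prelude. GpH's real `par` may spark a thread to evaluate
-- its first argument; this stand-in just returns its second argument.
par :: a -> b -> b
par _ y = y

-- A strategy evaluates (part of) its argument and returns ().
type Strategy a = a -> ()

-- Do no evaluation at all.
r0 :: Strategy a
r0 _ = ()

-- Reduce the argument to weak head normal form.
rwhnf :: Strategy a
rwhnf x = x `seq` ()

-- Apply a strategy to every element of a list; with the real `par`,
-- each element could be sparked as a separate parallel task.
parList :: Strategy a -> Strategy [a]
parList _     []     = ()
parList strat (x:xs) = strat x `par` parList strat xs

-- Attach a strategy to a value: evaluate as told, then return the value.
using :: a -> Strategy a -> a
using x strat = strat x `seq` x

-- Example: force every element of a list of squares.
squares :: [Int]
squares = map (^ 2) [1 .. 5] `using` parList rwhnf
```

The point of the design is the clean separation: the algorithm (map) says what to compute, while the strategy (parList rwhnf) separately says how to evaluate it.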
It comprises a library which provides predefined Eden skeletons for many parallel computation patterns like task farms, work-pools, divide-and-conquer etc. Eden has been jointly developed by two groups at Philipps-Universität Marburg, Germany and Universidad Complutense de Madrid, Spain. The project has been ongoing since 1996. Current and future topics include program analysis, skeletal programming, optimisations.

Our goal is to develop a generic constraint-based program analysis framework for Haskell. We have designed and implemented a binding-time, strictness and exception analysis for Haskell and incorporated these analyses into the GHC compiler. The analysis deals with all features of Haskell such as polymorphic programs and structured data.

DrIFT (section 5.2.1) is a preprocessor which generates instances of generic functions. It is used in Strafunski (section 4.2.2) to generate a framework for generic programming on terms.

Light-weight generic programming: Generic functions for data type traversals can (almost) be written in Haskell itself, as shown by Ralf Laemmel and Simon Peyton Jones in `Scrap your boilerplate' (). In `Strategic polymorphism requires just two combinators!' (), Ralf Laemmel further develops these ideas. Another light-weight approach, using type representations inside Haskell, was presented by Cheney and Hinze at the Haskell workshop. Generic programs can also be implemented in a language with dependent types, as shown by McBride and Altenkirch in a paper in WCGP'02, see. More about generic programming and type theory (`Generic Haskell in type theory') can be found in Ulf Norell's recent MSc thesis.

The Generic Haskell release of last summer supports type-indexed data types, dependencies between generic functions, and special cases for constructors (besides the `standard' type-indexed functions and kind-indexed types).
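To give a taste of the traversal style behind `Scrap your boilerplate', here is a hand-rolled sketch in plain Haskell 98. The real approach derives the one-step children map generically for every datatype; in this sketch it is written by hand for a tiny expression type, and all names (gmapExpr, everywhere, foldAdd) are illustrative rather than taken from any library.

```haskell
-- A hand-rolled taste of boilerplate-free traversal for a tiny
-- expression type. Only the one-step map gmapExpr is type-specific;
-- the bottom-up traversal and the rewrite rule are generic in spirit.
data Expr
  = Lit Int
  | Add Expr Expr
  | Mul Expr Expr
  deriving (Eq, Show)

-- Apply a transformation to the immediate children only.
gmapExpr :: (Expr -> Expr) -> Expr -> Expr
gmapExpr _ (Lit n)   = Lit n
gmapExpr f (Add a b) = Add (f a) (f b)
gmapExpr f (Mul a b) = Mul (f a) (f b)

-- Bottom-up traversal: transform the children first, then the node.
everywhere :: (Expr -> Expr) -> Expr -> Expr
everywhere f = f . gmapExpr (everywhere f)

-- A local rewrite: constant-fold additions of literals.
foldAdd :: Expr -> Expr
foldAdd (Add (Lit a) (Lit b)) = Lit (a + b)
foldAdd e                     = e

-- (1 + 2) * (3 + 4) rewrites bottom-up to 3 * 7.
example :: Expr
example = everywhere foldAdd
            (Mul (Add (Lit 1) (Lit 2)) (Add (Lit 3) (Lit 4)))
```

The boilerplate is exactly gmapExpr; the generic-programming approaches above exist to avoid writing that function once per datatype.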
These extensions are described in the "Type-indexed data types" paper presented at MPC'02, and the "Generic Haskell, Specifically" paper at WCGP'02. The new Generic Haskell release was used in the Summer School in Generic Programming in Oxford last August, at which Ralf Hinze and Johan Jeuring presented two tutorials: Generic Haskell - theory and practice, and Generic Haskell - applications. The former tutorial introduces Generic Haskell, and gives some small examples; the latter discusses larger applications such as an XML compressor. More XML tools are described in Paul Hagg's MSc thesis on a framework for developing generic XML tools.

Patrik Jansson adds that there is still a small group at Chalmers working under the slogan "Functional Generic Programming - where type theory meets functional programming" (), and that while PolyP is not really actively developed anymore, a new version could come out if somebody showed interest. There is a mailing list for Generic Haskell: <generic-haskell@cs.uu.nl>. See the homepage for how to join.

Template Haskell is an extension to Haskell that supports compile-time meta-programming. The idea is to make it really easy to write Haskell code that executes at compile time, and generates the Haskell program you want to compile. The ability to generate code at compile time allows the programmer to implement such features as conditional compilation, polytypic programs, macro-like expansion, and the generation of supporting data structures and functions from existing data structures and functions (e.g. the 'deriving' clause). Here's a tiny example of conditional compilation:

    assert :: String -> Expr
    assert s | wantAsserts = [| \b v -> if b then v else error s |]
             | otherwise   = [| \b v -> v |]

To use 'assert', do this:

    foo x = $(assert "Uh ho") (x>3) (..foo's rhs..)

The $(...) construct is a splice that says "evaluate this at compile time, and splice in the code returned in place of the splice". The [| ...
|] is a quotation that lets you return a chunk of code as a result. All this is based on the ideas first put forth by Tim Sheard in MetaML (), with the following big difference: Template Haskell is a compile-time-only system, so there is no execution-time cost. As a direct result TH's type system is more generous, and lets you write programs that MetaML would reject. TH is described in our Haskell Workshop paper () and is implemented in GHC. It doesn't appear in any released version yet, but it's in the CVS HEAD and will be in the next release. Our hope is that by making TH available as part of GHC, people will start to use it for purposes we haven't even dreamt of. Please tell us!

If recursive bindings are required for a monadic computation, then the underlying monad should be made an instance of the MonadFix class, whose declaration looks like:

    class Monad m => MonadFix m where
        mfix :: (a -> m a) -> m a

The operator mfix is required to satisfy several axioms; details can be found on the web-page. The following instances of MonadFix are automatically provided: Maybe, [], IO, and ST (both lazy and strict). Jeff Lewis and Sigbjorn Finne helped with the Hugs implementation. GHC implementation was mainly done by Simon Peyton Jones.

The preprocessor is being maintained, but is now quite stable. It is used by the Yampa () system from Yale (section 6.4.3). I'm still taking requests, and am very keen to hear from any users.

Development continues on the hierarchical libraries, although it has slowed somewhat over the last 6 months. GHC 5.04 shipped this year with the hierarchical libraries, and Hugs is also expected to ship a new version shortly with the hierarchical libraries. The libraries are now mostly documented with Haddock (section 5.3.5), although we're still missing documentation for many of the "standard" libraries (those that came originally from the Haskell 98 language & library reports).
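A tiny example may help make the MonadFix idea concrete. The sketch below uses the Maybe instance of MonadFix, which ships with GHC's base libraries as Control.Monad.Fix; the function name cyclicOnes is of course just illustrative.

```haskell
-- Value recursion via mfix: the result of the monadic action is fed
-- back into the action itself, and laziness ties the knot.
import Control.Monad.Fix (mfix)

-- x is defined in terms of itself: x = 1 : take 5 x.
-- Unfolding gives a list of six ones.
cyclicOnes :: Maybe [Int]
cyclicOnes = mfix (\x -> Just (1 : take 5 x))
```

Evaluating cyclicOnes yields Just [1,1,1,1,1,1]: the recursive do-notation proposed in the thesis above lets such knots be tied without writing mfix explicitly.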
Contributions of more documentation would be gratefully received; it's often just a case of cut-n-paste from the appropriate report, adding Haddock syntax as appropriate.

Work on a hierarchical replacement for the aging Posix library is underway; the work in progress can be seen in fptools/libraries/unix in the CVS repository. The general plan is to increase portability by using only the FFI (section 3.1) and hsc2hs (the old Posix library used GHC extensions), and to update the library to support functionality from the POSIX 1003.1-2001 standard.

The old hslibs libraries are now almost completely deprecated. GHC itself no longer requires any of them; if you have any code that uses libraries from hslibs (with certain exceptions for those libraries which don't as yet have a hierarchical replacement) then you're encouraged to port your code over to the new libraries - another benefit is that your code will then port to the new release of Hugs much more easily too. For GHC we'll probably do one more major release with hslibs before removing them.

The intention of the project is to produce a set of foundation libraries suitable for eventual standardisation. At the moment, most of the effort is going into producing and supporting a forked version of Chris Okasaki's Edison library, updated to use post-H98 features. It's not very complete and only works under GHC at the time of writing. We pretty much have taken over Edison. Or, more correctly, we have forked Edison and are (according to Chris) the only ones actively maintaining an Edison derivative (the interface has changed enough that it's not quite Edison any more). For further information, contact: <hfl-devel@lists.sourceforge.net>

Strafunski provides a Haskell library StrategyLib and some tools along with it, most notably a precompiler for user-supplied datatypes. The StrategyLib provides themes such as simple traversal schemes and generic algorithms for name analysis.
The precompiler is simply a customised version of DrIFT (section 5.2.1). The Strafunski-style of generic programming can be seen as a lightweight variant of generic programming (section 3.5). Pros: simplicity (no language extension, generic functionality just relies on a few original combinators), domain support (program transformation and analysis is addressed by the library and various examples). Cons: restrictions (focus on term traversal). Generic Haskell (section 3.5) provides a more powerful language for polytypic programming, think of anamorphisms, functorial maps, and type-indexed datatypes.

HTk is an encapsulation of the graphical user interface toolkit and library Tcl/Tk for the functional programming language Haskell. It allows the creation of high-quality graphical user interfaces within Haskell in a typed, abstract, portable manner. HTk is known to run under Linux, Solaris, Windows 98, Windows 2k, and will probably run under many other POSIX systems as well. HTk works with GHC, versions 5.02.3 and later.

The aim is to create a highly portable GUI library, so that programs will be translated to different platforms without rewriting. The current implementation works only with GHC-5.02 or higher compatibles and has moved from the hslibs collection to the new libraries. The library is functionally complete, but the attempt to port the library to Gtk (as the basis for a Linux version) turned out too difficult and is still unfinished. Also no attempt has been made to build the library with nhc or Hugs. Krasimir is still available for bug fixes, but apart from those, he has now moved on to other projects, building on Object I/O, but also trying some new ideas about GUI library design (more about those in the next edition;-). He would be happy to assist, if someone else would come forward to complete the Gtk port.

Support for the new GTK+ 2.0 API is the next item on the todo list.
The binding is currently at version 0.14.10, which is a full binary release published in September 2002. More details as well as source and binaries packages are at

Gtk2hs is a wrapper around the latest Gtk release (Version 2.0, or Gtk 2 for short). Although it provides a similar low-level veneer to Gtk+HS (section 4.3.3), it is completely rewritten from scratch, circumventing some of the problems that Gtk+HS has: Our current work is done in a CVS repository which can be found on. We plan to finish the library by Summer next year.

The project consisted of a 2D platform-independent game engine, whose implementation was based on HOpenGL (Haskell Open Graphics Library). It supported: We would like to thank Mike Wiering, the creator of the Clean Game Library.

FunWorlds is an ongoing experiment to investigate language design issues at the borderlines between concurrent systems, animated interactive 2D & 3D graphics, and functional programming. One of the aims is to get a suitable platform for expressing such things, preferably from Haskell. Our earlier prototypes translated scene descriptions from a Haskell-embedded DSL combining ideas from Fran and VRML into standard VRML+ECMAScript (reported at IFL'2001), but due to some cumbersome VRML restrictions, this is currently being reimplemented. The new prototypes are built on Sven Panne's HOpenGL Haskell binding to OpenGL. This means that it is no longer easy to import ready-made high-level functionality from VRML-browsers, but we've got access to functional programming concepts at runtime, not just at compile time. The focus so far, as reported at the recent IFL'2002, has been on a redesign of some fundamental Fran concepts, towards a simpler operational semantics as a basis for uncomplicated implementations with more predictable performance characteristics. At the same time, we don't want to throw out too much of Fran's high-level modeling approach.
An initial release is planned for later this year; this will not have substantially more functionality than the snapshot used for IFL'2002, so it will be pretty basic. The main obstacle on the way is to write some form of introductory tutorial on the new DSEL design and how one might use it. Once we've got some more experience with the language basics, graphics functionality will be added on demand.

Instead of developing fixed tools, it is sometimes possible to generalize the code implementing the tool functionality into a library, so that the code can be reused for a family of tools. The Medina library is a Haskell library for GHC that provides tools and abstractions with which to build software metrics for Haskell. The library includes a parser and several abstract representations of the parse trees, some visualisation systems including pretty printers and HTML generation, and now includes some integration with CVS to allow temporal operations such as measuring a metric value over time. This is linked with some simple visualisation mechanisms to allow exploring such data. Recently support for generating call graphs of programs has been added, including a visualisation system to browse such call graphs. A case study has been started to work towards some validation of metrics by looking at the change history of a program and how various metric values evolve with those changes. The Medina project collaborates with the Refactoring project (section 5.3.3), also at UKC.

Project Status: stable, maintained. The HaXml project is still alive, in stable maintenance mode, now at version 1.07b. HaXml provides many facilities for using XML from Haskell. The most user-visible change recently has been to convert the HaXml libraries to the new hierarchical namespace. We have also recently provided a validator for checking documents against a DTD.

HXML is a non-validating XML parser written in Haskell.
It is designed for space-efficiency, taking advantage of lazy evaluation to reduce memory requirements. HXML may be used as a drop-in replacement for the HaXml (section 4.6.1) parser in existing programs. HXML includes a module with functionality similar to HaXml's 'Combinator' module, but recast in an Arrow-based (section 3.7.2) framework. HXML also provides multiple representations for XML documents: a simple algebraic data type containing only the essentials (elements, attributes, and text), a tree representation which exposes most of the full XML Information Set, and a navigable tree representation supporting all of the principal XPath axes (ancestors, following-siblings, etc). HXML has been tested with GHC 5.02, GHC 5.04, NHC 1.12, and most recent versions of Hugs. NHC 1.10 requires a patch. HXML is basically in maintenance mode right now until I can find some spare time; support for XML Namespaces is next on the TODO list.

The Haskell XML Toolbox is a collection of tools for processing XML with Haskell. It is itself purely written in Haskell. The core component of the Haskell XML Toolbox is a validating XML-Parser that supports almost fully the Extensible Markup Language (XML) 1.0 (Second Edition). The Haskell XML Toolbox is based on the ideas of HaXml (section 4.6.1) and HXML (section 4.6.2). Some items are still on the to-do list.

C-->Haskell is an interface generator that simplifies the development of Haskell bindings to C libraries. The latest binary release, version 0.10.17, was published in September 2002. The tool significantly simplifies binding development by automatically translating C types and enums into Haskell data types, managing access to C structs, and semi-automatically generating marshalling code for arguments and results of C functions. It has been stress tested in the development of the Gtk+HS GUI library (section 4.3.3).
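For readers who have not seen what such binding tools automate: a minimal hand-written binding, in the FFI Addendum's foreign import syntax, looks like the sketch below. It assumes GHC with a C toolchain and the standard C math library; the wrapper name haskellSin is our own, not something a particular tool would emit.

```haskell
{-# LANGUAGE ForeignFunctionInterface #-}
-- A minimal hand-written C binding of the kind that C-->Haskell and
-- GreenCard generate automatically.
import Foreign.C.Types (CDouble)

-- Bind the pure C function  double sin(double)  directly.
foreign import ccall unsafe "math.h sin"
  c_sin :: CDouble -> CDouble

-- A marshalled, Haskell-friendly wrapper around the raw import.
haskellSin :: Double -> Double
haskellSin = realToFrac . c_sin . realToFrac
```

The tools earn their keep on real libraries, where hundreds of such imports, enum translations, and struct accessors would otherwise be written and maintained by hand.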
Source and binary packages as well as a reference manual are available from

GreenCard is a foreign function interface preprocessor for Haskell and has been used (amongst other things) for the Win32 and X11 bindings used by Hugs and GHC. Source and binary releases (Win32 and Linux) are available. The last release was 2.0.4 (August 2002). A release that provides access to and takes advantage of the new Foreign Function Interface libraries will be available soon.

Not only is this a sub-optimal use of precious developer time, it also carries the risk of tools representing substantial investments being left behind as their developers move on to other commitments. The latest example is Hatchet, a Haskell-in-Haskell frontend incorporating not only parsing and pretty-printing, but also a type system. Introduced only in our previous edition, it is now looking for a new home (please contact Bernie Pope if you are interested). In this section, we hope to document any progress being made in this area, but see also the language extensions chapter for Template Haskell (section 3.6.1).

DrIFT is very close to the big 2.0 release once I fix a couple of known bugs. See also constraint-based program analysis (section 3.4.2), and the group of tools developed in Utrecht (section 6.4.5).

Happy has seen one new release, 1.13, in June of this year, which was mainly a bugfix release. On the development front, there's nothing major to report, except that the changes for GLR parsing are still waiting in the wings and I'm talking to the author of Alex (Chris Dornan) about a possible merge of Happy & Alex at some point.

The HSX framework is a prototype for experimentation with the application of rewriting strategies using the transformation language Stratego to program optimization. Although the syntax is for complete Haskell (without layout), the transformations are done on a core-like subset only.
The framework was used to implement the Warm Fusion transformation for deforestation that turns recursive function definitions into build/cata form. This form makes deforestation, the fusion of a composition of data structure producing and consuming functions, a piece of cake. Extension of the work to full Haskell was not continued for lack of a reusable Haskell front-end. Recently we have started to work on a new version of the framework called HsOpt. This is a transformation framework for Helium, a proper (light) subset of Haskell developed at Utrecht University (section 6.4.5). We reuse the parser, prettyprinter, and typechecker from the Helium project. The first target is the specification of a GHC style simplifier in Stratego. The Haskell ATerm library is used to interface Haskell components and Stratego components.

There are a number of tools with rather different approaches to tracing Haskell programs for the purpose of debugging and program comprehension. In particular Hood and Hat seem to become increasingly popular. Both Hood, the portable library for observing data structures at given program points, and GHood, the graphical variant for animated observations, have remained unchanged for over a year.

On 14 June version 2.00 of Hat, the Haskell tracing (and debugging) system, was released. It is distributed separately from nhc98 and can be used both with nhc98 and ghc. The compiled program generates a trace file alongside its computation. With several improved tools the trace can be viewed in various ways: algorithmic debugging a la Freja; Hood-style observation of top-level functions; backwards exploration of a computation, starting from (part of) a faulty output or an error message. All tools inter-operate and use a similar command syntax. A tutorial explains how to generate traces, how to explore them, and how they help to debug Haskell programs. Hat 2.00 requires programs to strictly conform to the Haskell 98 standard.
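The key property of Hood-style observation is that annotating a value does not change the program's meaning. The sketch below is emphatically not Hood (which records the structure of a value as it is demanded, rather than forcing it); it is a crude stand-in built on Debug.Trace from GHC's base libraries, and the name observe is borrowed only to show the shape of the interface.

```haskell
-- A crude stand-in for Hood's `observe` combinator, built on
-- Debug.Trace. It prints the labelled value (to stderr) when forced,
-- but shares Hood's key property: `observe label` is semantically
-- the identity function, so annotations never change results.
import Debug.Trace (trace)

observe :: Show a => String -> a -> a
observe label x = trace (label ++ " = " ++ show x) x

-- Usage: sprinkle observations through a pipeline without changing it.
result :: Int
result = sum (observe "after map" (map (* 2) [1, 2, 3]))
```

Because observe is the identity on values, it can be left in or stripped out freely, which is what makes this style of tracing so convenient during debugging.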
A new release that supports hierarchical modules, more libraries, multi-parameter classes with functional dependencies and improved performance with ghc will appear by the end of the year. Buddha is a declarative debugger for Haskell 98. It is based on an evaluation tree with nodes that correspond to function applications. Currently buddha works with GHC version 5.04 or greater. No changes to the compiler are needed. There are no plans to port it to other Haskell implementations, though there are no significant reasons why this could not be done. Version 0.1 of buddha is freely available as source. This version supports most of Haskell 98; however, there are a few small items that are not supported. These are listed in the documentation. Future releases will include support for the missing features, and a much improved user interface. At IFL 2002, Frank Huch and Thomas Boettcher presented a debugger for Concurrent Haskell (section 3.3.1). It has a graphical user interface for visualising the state of threads and communication abstractions. It is based on replacing the Concurrent library by a library ConcurrentDebug with the same interface. A first public release will appear soon. Also at IFL 2002, Alcino Cunha, Jose Barros and Joao Saraiva presented the prototype of a tool for graphically animating the evaluation of recursive Haskell functions. hIDE is an integrated development environment for Haskell (primary author: Jonas Svensson). In the last 6 months, hIDE in its current form has been frozen. A complete rewrite is now underway which aims to provide an attractive gtk2 interface and a plugin system. For people reading this report, the plugin system is probably the most important new feature. The significance of this is that you will be able to use a Haskell IDE with Haskell as its extension language. No longer will you have to delve into Emacs' elisp or other foreign languages to get your favourite tool to work with your editor.
Ultimately, we would love to see plugins for many of the excellent tools listed in this report. Development has been a little slow recently as we work out the basic architecture. Help would be welcomed. We will announce a development release when the core is finished and the API for plugins is more stable (mailing list: haide-devel@lists.sourceforge.net). Refactoring means changing the structure of existing programs without changing their functionality, and has become popular in the object-oriented and extreme programming communities as a means to achieve continuous evolution of program designs. Like most Haskellers, we had developed a habit of refactoring our programs on a small scale long before we learned that others would call this refactoring. And in a functional language like Haskell, we are much more adventurous about the scale of program transformations we will try in order to improve our code. Without proper tool support, however, this soon gets out of hand, and only backups and undo come to the rescue of the over-zealous code-improver. We want to explore the wealth of functional program transformation research to bring refactoring to Haskell programmers. Our project finally got underway this July, and apart from the usual background work -- (re-)reading existing work on refactoring catalogues and tools, surveying Haskellers' editing habits, and doing some case studies -- we have now started to experiment with Haskell frontends (parsing, pretty-printing, type-checking) and other tools (strategic programming) that could form the foundations for actually implementing a refactoring tool for Haskell. Tools we've looked at so far include HsParser, Haddock's parser (section 5.3.5), Hatchet (section 3.6), parts of Programmatica, and Strafunski (section 4.2.2).
While all of these are helpful in different ways, they are also limited in different ways -- choosing some of them and making them work together on real-world Haskell sources to the extent required for our project has proven to be challenging so far. This is partially due to the tools being relatively new, partially due to each of them focussing only on a specific job that might not match our more general requirements. The issues relating to infrastructure for Haskell meta-programming (section 3.6) require, and would merit, more attention and further development. Because of the lack of a standard interface to Haskell frontend information (compare, e.g., Ada's Semantic Interface Specification), Haskell tool and IDE developers keep reinventing wheels and unsatisfactory hacks. We are also looking into XML/XSLT support for organising and maintaining our database of functional refactorings, and we collaborate with the Medina project (section 4.5.1). A minor revision (HUnit 1.1) is in the works and should be out soon (1-2 weeks). Its main purpose is to adapt to recent GHC (5.04) and Hugs (Oct. 2002) versions. HUnit is free software that is written in Haskell 98 and runs on Haskell 98 systems. The software and documentation can be obtained from the project website. We have been using QuickCheck to test monadic code, especially in the ST monad, and there is a new version available with combinators for defining "monadic properties". Our Haskell Workshop paper explains how to use them. We are also experimenting with combinators to make test data generators easier to write, with using Generic Haskell (section 3.5) for the same purpose, and with integration with the Hat tracer. Some of this was presented at the Advanced Functional Programming summer school this year. We've been planning for a while to combine all our experimental versions into one stable new version, but teaching has got in the way.
Haddock is now described in a paper, which appeared at the Haskell Workshop 2002. At this point I consider Haddock to have most of the main feature set I had originally intended. There is however still a long list of things to do: see the file TODO in the Haddock source tree, and comments/contributions are of course always welcome. This section lists applications developed in Haskell, be it in academia, in industry, or just for fun, which achieve some non-Haskell-related end. HScheme is a Scheme implementation written in Haskell that can be instantiated in several variants, for example a fixed-point Scheme (that allows call-with-result), etc. Current status: It's close to R5RS, but it's currently missing certain functionality such as inexact numbers and vectors. There have been no releases: you'll have to download and build from CVS if you want to use it. But you can play with the interpreter on the web. It's not particularly fast. Contact: <ashley@semantic.org> Hume is a strict functionally-based concurrent language incorporating high-level coordination constructs. The language is aimed at hard real-time, low memory (e.g. embedded systems) settings. The expression layer is purely functional, with a syntax that is similar to that of Haskell. Hume programs are short and have a small footprint (< 64K including RTS and heap is possible on RTLinux). Compared with Java KVM (which is aimed at embedded systems), Hume programs are 10x faster, and require about 50% of the memory. It is possible to statically cost memory requirements for a restricted (but usable) subset of the language. We are working on improving this using a novel type-and-effect system. We have already demonstrated hard real-time capability on RTLinux and are now working towards a complete embedded systems implementation on e.g. Lego Mindstorms robots, or semi-autonomous vehicles. The front-end tools (lexer, parser, cost modeller etc.) are all Haskell-based, using Alex and Happy (section 5.2.2) as appropriate.
There is a reference interpreter written in Haskell and an abstract machine (bytecode) compiler/interpreter. The compiler is written in Haskell, with the bytecode interpreter written in C for portability and performance. We are now working on improved code generators, using e.g. threaded code. These tools should be seen as research-quality! We have been impressed by Haskell's ease/speed of construction/modification, conciseness and robustness (this is an interesting experience for a language designer/implementor!). This project aims to provide parallel programming support for the GAP computer algebra system on a range of parallel architectures. This will be achieved by calling GpH library functions (section 3.3.2) from within GAP code. We anticipate two useful outcomes for the Haskell community. Many companies are starting to allow their programmers to develop small prototypes in Haskell but few are willing to take a chance on using Haskell on a large project. The risks to these companies include lack of support for tools, lack of tutorials and training courses, gaps in the set of available libraries, and lack of `gurus' to call on when things go wrong. Reid Consulting can meet those needs. Our background and continuing involvement in the development of Haskell tools and compilers (GreenCard, Hugs, GHC, etc.) and the Haskell language and library design (the Haskell report, the Standard libraries, the Hugs-GHC libraries, the Foreign Function Interface and the HGL Graphics Library) and our use of Haskell to develop large systems, provide the experience and the contacts needed for effective support of real projects. Where acceptable to clients, we have a policy of releasing any fixes or developed code as OpenSource for use by the wider Haskell community. Sengan Baring-Gould <Sengan.Baring-Gould@nsc.com> at National Semiconductor is developing a binary parser which, given a grammar, is able to extract fields from values.
This is used as part of an internal ICE (hardware debugger). The declarations follow the format: declaration :: supertype = { fields in order, msb first, allowing bitslicing } : { local fields if any, and their types or lengths }; For instance the definition of the eax register on a x86 is: eax = { ex, ax } : { ex[16] }; ax = { ah, al }; The grammar provides types that can be arranged into a hierarchy so that fields whose location moves depending on bits within the value can be automatically found, and referenced. For instance cs is a descriptor and there are 20 kinds of descriptors, two of which are illustrated below. cs :: Descriptor32; Small_Code_Segment :: Descriptor32 = { base[31:24], 1'b0, is_32_bit, 1'bx, available, limit[19:16], present, dpl, 2'b11, conforming, readable, a, base[23:0], limit[15:0] } : { is_32_bit[1], present[1], conforming[1], readable[1], a[1], dpl[2], available[1], base[32], LimitBytes limit }; Small_Data_Segment :: Descriptor32 = { base[31:24], 1'b0, is_32_bit, 1'bx, available, limit[19:16], present, dpl, 2'b10, expand_down, writable, a, base[23:0], limit[15:0] } : { is_32_bit[1], present[1], expand_down[1], writable[1], a[1], dpl[2], available[1], base[32], LimitBytes limit }; In all cases cs.dpl will access the relevant field even if it were in different places. The binary parser provides the ability to reference by name values which may be composed of other values. It goes one step further in that the client program does not need to know where a particular value is buried, only what its value is. Binary parser grammars are intended to enable non-programmers to access fields of their registers, without requiring the ICE-developer to write explicit code to do so. For instance a technical writer could write a binary parser grammar for a device of which the ICE developer has never heard. Stress has been put on generality and simplicity, rather than efficiency. For instance the binary parser allows multiple definitions, cyclic definitions, etc.
Binary Parser is implemented in Haskell whereas the current ICE is not (C++) -- but the next generation will be. Currently communication is achieved using pipes so as to be compatible with both Windows and Unix (binary parser is used by 2 internal tools, one on Unix, one on Windows). Binary parser simplifies the porting of the ICE from chip to chip, where the location of register-fields may change but their functionality does not. Lava is a set of Haskell modules that define a domain specific hardware description language for producing circuits for implementations on Field Programmable Gate Arrays (FPGAs). Previous work has focused on designing and implementing a robust and practical system for realizing structural (graph based) circuit descriptions using combinators (higher order functions or connection patterns) to compose circuits in interesting ways. We have now turned our attention to capturing system level information in Lava. In particular we would like to describe the architecture of bus-based systems where components communicate not through directly connected wires but instead use a protocol for communication over a bus. Ideally we would like to define some kind of embedded type system that captures not only what information is communicated but how it is communicated (direct connection, bus, FIFO, shared memory, interrupt etc.). This could allow much higher level descriptions of "System on Chip" (SoC) systems and allows for the possibility of automatically generating interfacing circuitry. As our focus changes from computation (what the gates do) to communication (how to compose large modules) we expect to produce interesting requirements for a type system for hardware description languages intended for designing SoCs.
At the recent workshop on functional and declarative languages in education (FDPE 2002), there seemed to be a consensus that functional programming in first year computer science courses works best if it focusses on general programming and computer science concepts, i.e. uses functional programming as a tool, not as a goal or main topic. While preparing the ground for a more ambitious initiative (creating a formally based software engineering program at his University), Rex Page has been collecting empirical evidence on using Haskell as a tool for familiarizing students with the idea of reasoning about software artifacts. Jose Labra introduces a new project to develop a generic web based programming environment to teach Haskell and other programming languages. See also the EDSL project at Yale (section 6.4.3), developing domain-specific languages for use in high-school education. Studying connections between programming effectiveness and practice in reasoning about software. The test and debug cycle accounts for the entire defect prevention strategy in most software projects. What difference might it make if software developers had some experience in reasoning about software artifacts using methods from mathematical logic? The Beseme Project (three syllables, all rhyming with "eh") is gathering some evidence bearing on this question by introducing students in a discrete mathematics course to logic through examples based entirely on reasoning about software, most of which is expressed as Haskell equations. The subsequent performance of these students in a programming-intensive course is compared to that of students who have gone through a traditional discrete mathematics course. 
Progress reports and course materials including over 350 lecture slides, homework, exams, solutions, and software tools are available through the Beseme website. I have started a project called IDEFIX where we want to develop a generic web based programming environment to teach Haskell and other programming languages. The first steps of the system were presented in the conference "Implementation of Functional Programming languages", 2002 (Madrid). Jose Emilio Labra Gayo. Here is some input on Haskell-related research at CS.Chalmers.se. This is by no means complete - we do other Haskell-related stuff as well, including the work on generic programming (3.5) and QuickCheck (5.3.4) ... A bigger project is just about to start (in addition to the professors listed below, Marcin Benke, Koen Claessen, Patrik Jansson and Ulf Norell will also be involved): Thierry Coquand, Peter Dybjer, John Hughes, Mary Sheeran. The goal of this programme is to develop methods for improving software quality. The approach is to integrate a variety of verification methods into a framework which permits a smooth progression from hacking code to fully formal proofs of correctness. By a pragmatic integration of different techniques in a next-generation program development tool, we hope to handle systems on a much larger scale than hitherto. This proposal builds on and combines our extensive and internationally well-known research in interactive theorem provers, formal methods, program analysis and transformation, and automatic testing. Our long experience with functional languages, which we use both as implementation tools and a test-bed, improves our chances of success in tackling these difficult problems. The activities of our group are centered on the UniForM workbench and the Common Algebraic Specification Language (CASL). The workbench is actively used in the MMiSS project.
It currently contains about 70k lines of Haskell code (plus a few hundred lines of C), and compiles with the Glasgow Haskell Compiler, making use of many of its extensions, in particular concurrency. We are also using GHC to develop tools, like parsers and static analysers, for languages from the CASL family. Several parsers have been written using the combinator library Parsec. (Annotated) terms (from the ATerm Library) are used as a data exchange format and we use DrIFT (section 5.2.1) to derive instances for conversions. For various graph data structures we use the Functional Graph Library (FGL). Documentation will be generated using Haddock (section 5.3.5). (Parsec, the ATerm Library, and FGL each have their own project pages.) One extension of CASL, namely HasCASL, strives to combine CASL and Haskell. The HasCASL development paradigm (from requirements to functional programs) has been presented at the recent IFL'02 Workshop in Madrid. The language HetCASL is a combination of several specification languages used in formal methods, such as CSP, CASL, HasCASL, and Modal and Temporal Logic. We exploit Glasgow Haskell's multiparameter type classes and functional dependencies to provide a type-safe interface to analysis tools for particular languages. Specifications involving several languages can be processed using existential and dynamic types. The members of our group are Paul Hudak, John Peterson, Henrik Nilsson, Antony Courtney, and Liwen Huang. Yampa is the culmination of our efforts to provide domain-specific embedded languages for the programming of hybrid systems. Yampa differs from previous FRP-based systems in that it is structured using the arrow combinators (section 3.7.2). This greatly reduces the chance of introducing space and time leaks into reactive, time-varying systems.
We have released a preliminary version of Yampa. Antony Courtney is also working on yet another graphics library for Haskell to provide capabilities similar to the Java 2-D graphics library. The goal of this project is to use computers to assist students in understanding fundamental concepts within the core of the high school (ages 12 - 18) curriculum. In our approach, students must be able to describe objects in a learning domain in a formal manner that looks suspiciously like Haskell code. These abstract objects may be mathematical functions, physical laws, computational processes, or other intangible entities. Once the computer knows what the student is expressing, it can then realize the object in ways that help the student to explore and understand its properties. We are currently working on two languages: one for mathematical visualization (Pan) and another for algorithmic music composition (Haskore). We don't have an educational version of Haskore yet but have used it (and Hugs) on high school students with good results. We are about to release a new version of Conal Elliott's Pan system, Pan#, in a version that no longer requires the user to use a Haskell compiler. We use a subset of Haskell ("friendly Haskell") supplemented with primitives for color manipulation to describe images. This system depends on C# and Microsoft's .NET at the moment. A "hackers" version of the software is available now on the web and a formal release should occur around December 1. One prong of the Metis Project at Brooklyn College, City University of New York, is research on and with Parallel Haskell (section 3.3.2) in a Mosix-cluster environment. In fact, although we are just starting up, we want to both use and work on Haskell--we are gathering a number of smaller research efforts under a single umbrella.
http://www.haskell.org/communities/11-2002/html/report.html
Enable jinja2 and i18n translations on Google AppEngine My initial goal was to make our new application (based on Python/AppEngine) translatable. This means the following requirements: - All strings in the application must be translatable - Translations should preferably be stored in separate files - It should be easy to use the translations both in .py files and html templates The solution that I came to after a couple of hours includes the following components: Babel (string file generation), i18n.gettext (getting strings in code) and the jinja2 {% trans %} tag (getting strings in templates). The setup of all this is not obvious, so I'll put the steps in this blog post. Let's start! Install Babel: You need to install it, not just reference it from the application, as you'll need its command 'pybabel' to generate locale-specific files. I use Windows, so I just downloaded the installation package. Make sure that the Python folders are in your PATH variable. I use Python 2.7, so to make Babel work I'll need the following values in PATH: "C:\Python27;C:\Python27\Scripts". The Scripts folder contains the pybabel executable. Install jinja2: Once again, you need to install it, as Babel will need it to parse strings in templates. Just run easy_install Jinja2 Put the Babel and gaepytz libraries inside your GAE application. They are required for the i18n module. Configure jinja2 to be used in your application. You'll need the following entry in app.yaml: libraries: - name: jinja2 version: "2.6" and your webhandler.py will look something like this: import webapp2 from webapp2_extras import jinja2 from webapp2_extras import i18n from google.appengine.ext.webapp.util import run_wsgi_app class MainHandler(webapp2.RequestHandler): def jinja2(self): return jinja2.get_jinja2(app=self.app) def get(self): i18n.get_i18n().set_locale('ru_RU') # sample locale assigned ...
# your web site functionality goes here # jinja2 config with i18n enabled config = {'webapp2_extras.jinja2': { 'template_path': 'templates', 'environment_args': { 'extensions': ['jinja2.ext.i18n'] } } } application = webapp2.WSGIApplication([('.*', MainHandler)], config=config) def main(): run_wsgi_app(application) if __name__ == "__main__": main() This code will work if you put your jinja2 templates into the "templates" folder. Create the translations markup. This means you define the translatable strings in Python code with the commonly used '_' alias: from webapp2_extras.i18n import gettext as _ def do_some_text(): return _('some text') or in a jinja2 template with a {% trans %} block: {% block buttons %} <div> <div onclick="window.print()">{% trans %}Print{% endtrans %}</div> </div> {% endblock %} Create a Babel configuration file babel.cfg (put it into the application folder for now): [jinja2: **/templates/**.html] encoding = utf-8 [python: source/*.py] [extractors] jinja2 = jinja2.ext:babel_extract This file instructs Babel to extract translatable strings from html jinja2 templates in the "templates" folder and Python files in the "source" folder. Now it's time to create translations. First, add a "locale" folder in the application root.
Still being in the root folder, run the following pybabel command to extract the translatable strings from the code pybabel extract -F ./babel.cfg -o ./locale/messages.pot ./ then initialize the locales with pybabel init -l en_US -d ./locale -i ./locale/messages.pot pybabel init -l ru_RU -d ./locale -i ./locale/messages.pot Now open the locale\ru_RU\LC_MESSAGES\messages.po file in your favorite text editor, and produce the translations (you have to change 'msgstr' only): #: templates/sample.html:10 msgid "Print" msgstr "Печать" #: source/test.py:13 msgid "some text" msgstr "немного текста" And finally compile the texts with pybabel compile -f -d ./locale Every time you need to add more strings, you should do the same steps as in step 6, but use "update" instead of "init": pybabel update -l en_US -d ./locale -i ./locale/messages.pot pybabel update -l ru_RU -d ./locale -i ./locale/messages.pot Done! You should be able to run the application and see the strings translated.
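One refinement: the handler earlier hard-codes set_locale('ru_RU'). In practice you would usually derive the locale from the request's Accept-Language header. Here is a minimal, framework-free sketch of that selection step — the AVAILABLE_LOCALES list and the simplified header parsing are my assumptions, not part of the original setup:

```python
AVAILABLE_LOCALES = ['en_US', 'ru_RU']  # assumed: the locales you ran pybabel init for

def pick_locale(accept_language, default='en_US'):
    """Return the first available locale matching an Accept-Language header.

    This is a deliberately simplified parser: it ignores q-weights and
    just takes header entries in order of appearance.
    """
    if not accept_language:
        return default
    for part in accept_language.split(','):
        code = part.split(';')[0].strip().replace('-', '_')
        for loc in AVAILABLE_LOCALES:
            if loc.lower().startswith(code.lower()):
                return loc
    return default
```

In the handler this could then be wired up as something like `i18n.get_i18n().set_locale(pick_locale(self.request.headers.get('Accept-Language')))` — hypothetical wiring, adapt it to your own setup.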
https://mikhail.io/2012/07/26/enable-jinja2-and-i18n-translations-on-google-appengine/
ulimit - get and set process limits #include <ulimit.h> long int ulimit(int cmd, ...); The ulimit() function provides for control over process limits. The cmd values, defined in <ulimit.h>, include: - UL_GETFSIZE - Return the file size limit of the process. The limit is reported in units of 512-byte blocks and is inherited by child processes. Files of any size can be read. - UL_SETFSIZE - Set the file size limit of the process to the value of the second argument, taken as a long int, and return the new limit. If the result would overflow rlim_t, the actual value set is unspecified. The ulimit() function will not change the setting of errno if successful. Upon successful completion, ulimit() returns the value of the requested limit. Otherwise -1 is returned and errno is set to indicate the error. The ulimit() function will fail and the limit will be unchanged if: - [EINVAL] - The cmd argument is not valid. - [EPERM] - A process not having appropriate privileges attempts to increase its file size limit. None. As all return values are permissible in a successful situation, an application wishing to check for error situations should set errno to 0, then call ulimit(), and, if it returns -1, check to see if errno is non-zero. None. getrlimit(), setrlimit(), write(), <ulimit.h>. Derived from Issue 1 of the SVID.
http://pubs.opengroup.org/onlinepubs/007908775/xsh/ulimit.html
Hi, I'm trying to use the ImageMagick libraries in a project and I'm getting lots of Unresolved External Symbol problems I'm writing it in C, in MSVC and I'm pretty sure the code's all ok because at the moment I'm only trying to work one of their samples. The includes look like this: #include <stdio.h> #include <stdlib.h> #include <string.h> #include <time.h> #include <magick/api.h> and those are all in the right places, and the list of external dependencies contains a long list of .h files which contain all the functions I'm using. This is probably just because I don't really know what I'm doing where MSVC is concerned, but I couldn't find anything on search so I'm wondering if there's anything really obvious I'm doing wrong? thanks for any help
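For what it's worth, this class of "unresolved external symbol" error usually means the headers were found (so the code compiles) but the ImageMagick import libraries (.lib files) were never handed to the linker. In MSVC you can add them under the project's Link settings, or pull them in from source like this build-configuration sketch — the library names below are assumptions, as they vary between ImageMagick versions and build configurations:

```c
/* MSVC-only build configuration: ask the linker to pull in the
   ImageMagick import libraries.  Adjust the names/paths to match the
   .lib files shipped with your ImageMagick installation, and make sure
   the directory containing them is in the linker's library search path. */
#pragma comment(lib, "CORE_RL_magick_.lib")   /* assumed name */
#pragma comment(lib, "CORE_RL_wand_.lib")     /* assumed name */
```

The header files only declare the functions; without the matching import libraries the linker has no definitions to resolve them against.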
https://cboard.cprogramming.com/c-programming/56707-linker-problems.html
I have a string "I love stack overflow very much". How can I remove the spaces and split the result into groups of 8 characters? And if the number of characters is not divisible by 8, how can I add dummy data to pad the last group to 8 characters? public class Run { public static void main(String[] args) { String string = "I love stack overflow very much"; // remove all whitespace, then split into 8-character tokens String[] words = string.replaceAll("\\s", "").split("(?<=\\G.{8})"); for (String st : words) { if (st.length() == 8) { // if the length of the token is 8, just print it System.out.println(st); } else { System.out.print(st); // pad the final token with dummy characters for (int i = 0; i < 8 - st.length(); i++) { System.out.print("X"); // assuming dummy character = X } System.out.println(); } } } } Output Ilovesta ckoverfl owverymu chXXXXXX
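An alternative sketch that returns the groups instead of printing them, which makes the behaviour easy to test; the pad character 'X' is the same assumption as in the answer above:

```java
import java.util.ArrayList;
import java.util.List;

public class Grouper {
    // Strip all whitespace, pad the result up to a multiple of `size`
    // with the given pad character, and cut it into fixed-size groups.
    public static List<String> groups(String s, int size, char pad) {
        StringBuilder sb = new StringBuilder(s.replaceAll("\\s", ""));
        while (sb.length() % size != 0) {
            sb.append(pad);
        }
        List<String> out = new ArrayList<>();
        for (int i = 0; i < sb.length(); i += size) {
            out.add(sb.substring(i, i + size));
        }
        return out;
    }
}
```

For the sample sentence this produces the same four groups as the printed output above.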
https://codedump.io/share/uFRR3O7hxgzM/1/i-have-a-string-quoti-love-stack-overflow-very-muchquot-how-to-remove-spaces-between-character-and-make-groups-of-8-character
This tutorial shows how to configure UrbanCode® Deploy (UCD) and Jenkins to build a pipeline that streamlines the development of applications up to the deployment phase. This tutorial also covers how to include automated unit testing, code coverage, and IBM Developer for z/OS (IDz) code review in the pipeline by using the zUnit feature of Developer for z/OS®. The sample application for this tutorial is GenApp, the IBM® standard CICS® application. After the configuration steps, this tutorial shows how to quickly test the configuration by starting a Jenkins build that runs a Dependency Based Build (DBB) build. This build will create a version in the client UCD, upload the binary files to Artifactory, and deploy the application via UCD, using the binary files in Artifactory. Then, for unit testing and code coverage, this tutorial shows how to configure and run zUnit test cases from the pipeline. Note: This tutorial uses only one pipeline for all of the steps (build/test/code coverage/IDz code review/package/deploy). However, in a real environment, this should be split into several pipelines depending on the step (development, staging, production, etc.). Overall prerequisites - You need access to an Artifactory repository. - You must have a Jenkins server installed and configured. - The DBB toolkit and server must be installed. For more information, see Installing and configuring DBB in the IBM Dependency Based Build Knowledge Center. - UCD version 7.0.4 or later must be installed and configured. This tutorial is based on UCD version 7.0.5. For more information, see the IBM UrbanCode Deploy V7.0.5 documentation. - The GenApp application must be installed in your CICS environment and CMCI must be configured. You can download GenApp from the IBM Support website. Estimated time It should take you about 60-90 minutes to complete this tutorial. Steps Here are the steps required to complete this tutorial: Step 1. 
Configure UrbanCode Deploy - UCD prerequisites - Install two required plugins - Create a UCD component for the GenApp application - Create a deployment process in the GenAppComponent component - Describe the deployment process of the component - Create a resource to deploy on the target - Create a deployment configuration Step 2. Configure Jenkins Step 3. Configure the pipeline for zUnit Step 4. Configure the pipeline for code coverage - Configure code coverage headless daemon - Configure the pipeline for code coverage - Trigger a build for code coverage Step 5. Configure the pipeline for IDz code review Step 1. Configure UrbanCode Deploy 1. UCD prerequisites The UrbanCode Deploy agent must be installed on z/OS®. The name of the agent used in this tutorial is “e2e-pipeline”. Your UCD instance must be ready to use. Then you need to open it. Follow the tutorial steps to configure your UCD instance and deploy the application in the pipeline context. The application, named GenApp, is a CICS application. 2. Install two required plugins Install the following two plugins, which are required to deploy a CICS application and to push the binary files to Artifactory: Click on the first link. Then, click Download and select I Agree to start downloading. Do the same with the second link. In UCD, click the Settings tab. In the Automation panel, click Automation Plugins. Then click Load Plugin. Select the downloaded plugins ( .zip), one at a time, and click Submit. You can see that the two plugins (CICS TS and the z/OS Utility to download external artifacts) are now displayed in the Automation plugins list. 3. Create a UCD component for the GenApp application Now you need to declare the GenApp application as a component in UCD. In UCD, click the Components tab and click Create Component. In the wizard that opens, enter “GenAppComponent” in the Name field and select “z/OS” for the field Component Type. Then click Save. 4. 
Create a deployment process in the GenAppComponent component Once the UCD component is created, you must create a process in this component. A process describes a set of tasks that must be run to deploy the application. A typical set of tasks for a CICS application is to deploy the load modules to the target environment, to bind, and to make a new copy to refresh the updated load modules. The following steps show you how to insert the tasks into the graphical interface of the process. At the end, the process will look like this. You can refer to it each time you insert a task. You must start by creating the process. So, in the GenAppComponent component, click the Processes tab and click Create Process. In the wizard that opens, enter “Deploy” in the Name field. Leave the default options. Make sure that the Process Type is Deployment. Then click Save. The graphical interface of the Deploy process opens. 5. Describe the deployment process of the component In the graphical interface of the Deploy process, you must now describe all the tasks that will be run to deploy the GenApp application. This deployment process is not specific to a target — it can be run in any z/OS environment. When you insert a task, a rectangle is added to the graph. Here’s some basic information about the graphical interface: - You can click Save in the toolbar when you modify the graph. You can then click Revert to go back to the last saved version of the graph. - To rearrange the display of the rectangles in the graph, you can slide the Autolayout cursor to the right in the toolbar. - You can manually set a link between two tasks. To do so, hover over the source task, click the arrow in the blue circle and, with the mouse button still pressed, go to the target task and release the button. This inserts a link between the two tasks. 5.1. 
Download the artifacts from Artifactory The deploy phase comes after the build phase, which created a version in UCD and pushed the binary files to Artifactory. So the first task of this process is to retrieve these binary files. In the Type to filter field on the left side of the graphical interface, copy and paste the words “Download Artifacts for zOS External Repo”. Then, drag the zOS ExternalArtifactsDownload line found under Artifact and drop it between the Start and Finish tasks. Edit the properties of the task by clicking the edit icon on the top right corner of the rectangle and proceed as follows: - Keep “${p:extRepoURL}” in Repository URL. It represents the Artifactory repository that contains the binary files of the application. The value of this property will be assigned during the build and associated with the UCD version. - Enter “${p:artifactory.user}” in Repository User Name. It represents the user ID of the Artifact repository. - Enter “${p:artifactory.password}” in Repository Password. It represents the password of the user in the Artifact repository. Even if the value is hidden, it will be recognized. As a UCD component can be deployed on any z/OS environment, the actual values of these last two properties will be assigned when a specific target is specified for the deployment. Click OK. 5.2. Deploy Data Sets This task deploys the binary files from Artifactory to the target. It deploys the load modules and the DBRMs (data set members created by Db2® for the z/OS precompiler) to PDS on z/OS. The DBRMs will be used in the bind process. In the Type to filter field on the left side of the graphical interface, copy and paste the words “Deploy Data Sets”. Then drag the Deploy Data Sets line found under zOS Utility and drop it between the Download Artifacts for zOS External Repo and Finish tasks. You can keep all the default values of the properties. 
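The ${p:...} tokens used in these task properties are placeholders that UCD resolves against environment and component properties at deployment time. A minimal sketch of that kind of token substitution (the property names come from this tutorial, but the values and the script itself are illustrative, not part of the product):

```python
import re

def resolve_tokens(template, properties):
    """Replace ${p:name} placeholders with values from a property map."""
    def lookup(match):
        name = match.group(1)
        if name not in properties:
            raise KeyError("unresolved property: " + name)
        return properties[name]
    return re.sub(r"\$\{p:([^}]+)\}", lookup, template)

# Hypothetical values; the real ones come from the UCD environment configuration.
props = {
    "extRepoURL": "https://artifactory.example.com/repo/genapp",
    "artifactory.user": "builder",
}
print(resolve_tokens("${p:extRepoURL}", props))
```

A property with no value surfaces only at deploy time, which is why the Artifactory credentials are assigned once a specific target environment is chosen.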
The Data Set Mapping property, with its default deploy.env.pds.mapping value, conveys that the load modules and the DBRMs will be deployed to PDS in the target environment. The next tasks in the process will constitute two branches that originate from the Deploy Data Sets task: - The branch for the CICS load module process - The branch for the bind process The processing for these two branches is run in parallel. 5.3. CICS load module tasks The CICS application must recognize the deployed load modules. So, add the following two tasks: List the load modules to refresh This CICS task generates a string that lists the load modules to be refreshed. This list will be used in the next task. In the Type to filter field on the left side of the graphical interface, copy and paste the words “Generate Artifact Information”. Then, drag the Generate Artifact Information line found under zOS Utility and drop it between the Deploy Data Sets and Finish tasks. Edit the properties and proceed as follows: - Enter “Generate Load Module List” in the Name field. This name will be reused later. - Enter “ /.*LOAD*/” in the Deploy Type Filter field. Only the binary files with a LOAD type (load module) in the package deployed in Artifactory will be considered. - Enter “${member},” (with the comma at the end) in the Template field. Clicking Template opens an editor where you can copy and paste the value. This value means that a line with the name of the member will be generated for each member present in Artifactory. Click Save to go back to the properties. Then click OK. New copy This CICS task reloads the load modules generated in the preceding step. In the Type to filter field on the left side of the graphical interface, copy and paste the words “New copy resources”. Next, drag the New copy resources line found under CICS TS and drop it between the Generate Load Module List and Finish tasks. Edit the properties and enter “${Generate Load Module List/text}” in Resource Name List. 
Clicking this field opens an editor where you can copy and paste the value. This value means that the list produced in the preceding step will be output as text. Click Save to go back to the properties. Make sure that the Resource Type is “Program.” Clicking Precondition opens an editor where you can copy and paste the following lines:

properties.get("Generate Load Module List/text") !== undefined &&
properties.get("Generate Load Module List/text") !== null &&
properties.get("Generate Load Module List/text") != ""

Click Save to return to the properties. Leave the default values of the other properties. If you click Show Hidden Properties, you can see that this task will require information to log into CICS. The values will be assigned in the deployment environment. Click OK. 5.4. Bind tasks When Db2 load modules are deployed with new DBRMs, they must be bound again for the Db2 database in use. So, add the following two tasks: Generate artifact information If a COBOL program that contains Db2 statements is modified, the load module must be bound again with the configured Db2 package. In the Type to filter field on the left side of the graphical interface, copy and paste the words “Generate Artifact Information”. Next, drag the Generate Artifact Information line found under zOS Utility and drop it at the same level as Generate Load Module List. Then, connect it to the Deploy Data Sets task. To do this, hover over the Deploy Data Sets task, click the arrow in the blue circle and, with the mouse button still pressed, go to the Generate Artifact Information task and release the button. This will insert a link between the two tasks. You should then see the second branch starting from the Deploy Data Sets task. Edit the properties and proceed as follows: - Enter “Generate Bind Package” in the Name field. - Enter “ /.*DBRM*/” in the Deploy Type Filter field. Only the binary files with a DBRM type in the package deployed in Artifactory will be considered.
- Enter a SQL text fragment that the JCL will use to bind each member. It binds the package (information from the Db2 database configured on z/OS), the DBRM member, the qualifier (Db2 information), and the owner (the user who will run the bind). The other lines of the fragment are required for the Db2 bind. Click Template to open an editor where you can copy and paste the following lines to overwrite the default line:

BIND PACKAGE(${p:db2.target.collId}) +
MEMBER(${member}) +
LIBRARY('${dataset}') +
QUALIFIER(${p:db2.target.qualifier}) +
OWNER(${p:jes.user}) +
ACTION(REPLACE) +
ISOLATION(CS) +
RELEASE(COMMIT) +
ENCODING(EBCDIC)

Click Save to go back to the properties. Then click OK. Submit the job on z/OS In the Type to filter field on the left side of the graphical interface, copy and paste the words “Submit Job”. Then drag the Submit Job line, found under zOS Utility, and drop it after the Generate Bind Package task. Connect it to the Generate Bind Package task and, since this is the last task, connect it to New copy resources. Enter the JCL lines in the JCL field. This will bind the new DBRMs.
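The fragment produced by the Generate Bind Package task expands once per DBRM member, filling in ${member} and ${dataset} for each artifact and the ${p:...} tokens from the environment. A rough sketch of that expansion (the sample member names, data set, and property values below are made up for illustration):

```python
# Mirrors the shape of the UCD bind template; values are illustrative only.
TEMPLATE = (
    "BIND PACKAGE({collId}) +\n"
    " MEMBER({member}) +\n"
    " LIBRARY('{dataset}') +\n"
    " QUALIFIER({qualifier}) +\n"
    " OWNER({owner}) +\n"
    " ACTION(REPLACE) + ISOLATION(CS) + RELEASE(COMMIT) + ENCODING(EBCDIC)\n"
)

def render_bind_fragments(members, dataset, props):
    """Expand the bind template once per DBRM member, as the UCD task does."""
    return "".join(
        TEMPLATE.format(
            collId=props["db2.target.collId"],
            member=member,
            dataset=dataset,
            qualifier=props["db2.target.qualifier"],
            owner=props["jes.user"],
        )
        for member in members
    )

# Hypothetical values; the real ones come from the environment properties.
props = {"db2.target.collId": "GENAPPCOLL",
         "db2.target.qualifier": "GENAPPQ",
         "jes.user": "IBMUSER"}
print(render_bind_fragments(["LGICDB01", "LGICUS01"], "USER.GENAPP.DBRM", props))
```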
Values will be assigned to specific properties later:
- ${p:db2.hlq} — where the Db2 HLQ is installed
- ${p:db2.target.sqlid} — the Db2 on z/OS
- ${Generate Bind Package/text} — the text generated in the preceding step

Clicking JCL opens an editor where you can copy and paste the following lines:

//BINDPKG JOB 'DBB-PKGBIND',MSGLEVEL=(1,1),MSGCLASS=R,NOTIFY=&SYSUID
//*ROUTE PRINT @JCLPRINT@
//JOBLIB   DD DISP=SHR,
//            DSN=${p:db2.hlq}.SDSNEXIT
//          DD DISP=SHR,
//            DSN=${p:db2.hlq}.SDSNLOAD
//*******************************************
//* PKGBIND
//* Step bind packages
//*******************************************
//**BEGIN
//PKGBIND  EXEC PGM=IKJEFT01,DYNAMNBR=20,COND=(4,LT)
//SYSTSPRT DD SYSOUT=*
//SYSPRINT DD SYSOUT=*
//SYSUDUMP DD SYSOUT=*
//SYSIN    DD DUMMY
//SYSTSIN  DD *
 DSN SYSTEM(${p:db2.target.sqlid})
 ${Generate Bind Package/text}
 END
/*
//*END

Leave the default values for the other properties. If you click Show Hidden Properties, you can see the host where the job will run, the JES job monitor port, the user name, and password. Click Save to return to the properties. Then click OK. If you haven’t saved the graph yet, you should do so now. 6. Create a resource to deploy on the target This task configures the target of the deployment. Note that a z/OS agent has been created and configured in the target environment. UCD communicates with it when a deployment is requested on this z/OS. The resource consists of the resource itself, an agent, and a component. Click the UCD Resources tab and then Create Top-level Group. In the wizard that opens, enter the resource name “GenAppResource” and click Save. Then add an agent to this resource: On the GenAppResource line, click on the ellipsis (…) and select Add Agent. In the wizard that opens, expand the Agent list and select the agent where you want to deploy GenApp. In our example, the agent is e2e-pipeline. Click Save. The agent is now displayed in the Resource Tree, nested under GenAppResource.
Next, you need to associate a component with the agent. To do so, on the e2e-pipeline line, click on the ellipsis (…) and select Add Component. In the wizard that opens, expand the Component list and select the GenAppComponent. Click Save. The component is now displayed in the Resource Tree, nested under the e2e-pipeline. This is what you should see: 7. Create a deployment configuration 7.1. Create an application Click the UCD Applications tab. Click Create Application, expand the list, and select New Application. In the wizard that opens, enter the application name “GenApp-Deploy” and click Save. You should see the message “No environments found.” Don’t worry, you’ll create an environment shortly. 7.2. Associate a component with the application In the GenApp-Deploy application, click the Components tab. Then click Add Component. In the wizard that opens, expand the Select a Component list and select GenAppComponent. Then click Save. 7.3. Define a process to install this component In the GenApp-Deploy application, click the Processes tab. Then click Create Process. In the wizard that opens, enter the name of the process, “Deploy”, then click Save. The graphical interface of the process should open. Drag Install Component (under Application Steps) and drop it between the Start and Finish tasks. Edit the properties and make sure that GenAppComponent is selected in the Component property. This means that the process will deploy this component. Click OK in the properties. Then click Save on the process. 7.4. Create an environment for the application In this task, you will make the final link with the deployment target — so you need to create a development environment. In the GenApp-Deploy application, click the Environment tab. Then click Create Environment. In the wizard that opens, enter a name for the environment: “Development”, then click Save. 7.5. 
Assign resources to the environment In the list of the environments for the GenApp-Deploy application, click Development. Then click Add Base Resources. In the Add Resource to Environment dialog box, select the top-level GenAppResource item and then click Save. Return to the list of the environments by going back up one level. To do this, click Environments in the breadcrumb trail under the UCD tabs. 7.6. Assign values to the environment properties In the list of the environments for the GenApp-Deploy application, click Development. In the Development environment notebook, click the Configuration tab. Then click Environment Properties on the left part of the tab. Click Batch Edit and copy-paste the following lines to the text area, then click Save:

cics.cicspex=YOUR_CICS_CICSPEX
cics.cmciport=YOUR_CICS_CMCIPORT
cics.host=YOUR_CICS_HOST
cics.username=YOUR_CICS_USER
deploy.env.pds.mapping=*.LOAD, "YOUR_PDS_FOR_LOAD_MODULES" \n\
*.DBRM, "YOUR_PDS_FOR_DBRM"
db2.hlq=YOUR_DB2_HLQ
db2.target.sqlid=YOUR_DB2_SQLID
jes.user=YOUR_ZOS_USERID
jes.host=YOUR_ZOS_HOST
jes.monitor.port=YOUR_ZOS_JOBMONITOR_PORT
db2.target.collId=YOUR_GENAPP_DB2_COLLECTION_ID
db2.target.qualifier=YOUR_GENAPP_DB2_QUALIFIER
artifactory.user=YOUR_ARTI_USER

For jes.host, as the access is local, you can put "\:\:1" for IPv6 or "localhost" for IPv4. For jes.monitor.port, the default is 6715 if it has not been customized. Close the Alert box. Click the Back to table mode button above the text area. Next, you need to add three properties whose values will be hidden because they correspond to passwords. - Click Add Property and enter “cics.password” in the Name field. Select the Secure checkbox to hide the value and enter “YOUR_CICS_PASSWORD” in the Value field. Then click Save. - Click Add Property again. Enter “artifactory.password” in the Name field. Select the Secure checkbox to hide the value and enter “YOUR_ARTI_PASSWORD” in the Value field. Then click Save. - Click Add Property again. Enter “jes.password” in the Name field.
Select the Secure checkbox to hide the value and enter “YOUR_ZOS_PASSWORD” in the Value field. Then click Save. Congratulations, you’ve configured UCD! Step 2. Configure Jenkins For this tutorial, Jenkins already contains the plugins required to run the pipeline. Now you need to configure Jenkins to create the pipeline that will deploy the application by using the UCD instance that you’ve configured. The tutorial is based on Jenkins version 2.222.1. 1. Jenkins prerequisites You must have the following Jenkins plugins in your Jenkins instance: - Durable Task (version >= 1.29) - Git - Git Client - Pipeline - Pipeline Utility Steps - SSH Build Agents - GitHub (optional — only if you want to test webhooks) 2. Set up your Git source repository At the time this tutorial is being written, the source code of GenApp is only available as MVS content through IBM Support. You need to extract the source code on MVS™ (see previous link) and add it back to your Git. - Fork this repository:. Add the GenApp source code in the “cics-genapp” folder of your forked repository with the following structure: - cics-genapp - base - src - BMS - Duplicate the contents of the PDS 'HLQ'.BMS extracted on MVS from SOURCE.XMIT - COBOL - Duplicate the contents of the PDS 'HLQ'.COBOL extracted on MVS from SOURCE.XMIT - COPY - Duplicate the contents of the PDS 'HLQ'.COPY extracted on MVS from SOURCE.XMIT Note: The transferred files must have their corresponding extensions in the Git repository. The extension is .cbl for the COBOL files, .bms for the BMS files, and .cpy for the COPY files. Edit the “cics-genapp/application-conf/datasets.properties” file according to your targeted z/OS. - Commit and push your changes into your forked repository. 3. Set up Jenkins Log into your Jenkins server. The Jenkins dashboard should open: 3.1. Install the required UCD plugin for the Jenkins pipeline The UCD Jenkins Pipeline plugin must be installed manually.
So, go to the Jenkins Pipeline page on UrbanCode and click Download. Select I Agree to start downloading. Once the plugin is downloaded, go back to Jenkins. In the Jenkins dashboard, click Manage Jenkins and then select Manage Plugins. On the page that opens, select the Advanced tab. In Upload Plugin, browse to the UCD plugin that you’ve just downloaded and click Upload. The plugin is then uploaded and installed. 3.2. Create the credentials for the pipeline Go back to the Jenkins dashboard and click Credentials. In the table, click Jenkins. In the table on the System page that opens, click Global credentials (unrestricted). In the left part of the Global credentials (unrestricted) page that opens, select Add Credentials for each credential that you create: - Add the credential to connect the Jenkins agent to z/OS. Make sure that Username with password is selected in the Kind dropdown list. Enter the z/OS SSH user name that will own the Jenkins agent process in Username. Enter the z/OS password of the SSH user in Password. Enter zosuser_id in ID and z/OS User in Description. Then click OK. - Add the credential to upload the binary files to an Artifactory repository. Make sure that Username with password is selected in the Kind dropdown list. Enter artifactory_id in ID, “Artifactory” in Description, and your own credentials in Username and Password. Then click OK. You should now see the following two credentials: 3.3. Configure Jenkins for UCD Go back to the Jenkins dashboard and click Manage Jenkins. Then click Configure System. In the configuration page that opens, scroll down to IBM UrbanCode Deploy Pipeline Plugin Configuration and click Add to add a UCD server. Enter UrbanCodeE2EPipeline in Profile Name, the URL of your UCD server in IBM UrbanCode Deploy URL, and the user and password of your UCD instance in User and Password. Then click Save. Note: In a test configuration, you may need to select Trust All Certificates.
However, for security purposes, all the certificates must be authentic in a production environment. 3.4. Create a Jenkins agent on z/OS Go back to the Jenkins dashboard and click Build Executor Status. In the Nodes page that opens, you can see a default agent named master. However, you will need to create a z/OS agent. To do so, click New Node. Enter “e2e-pipeline” in the Name field and select Permanent Agent. Then click OK. In the configuration page that opens, enter the following information: - Enter the remote directory in Remote root directory. It can be “/var/jenkins/agent/e2e-pipeline,” but it varies according to the environment. It is the working directory of the agent. - Enter “e2e-pipeline” in Label. - Select Launch agents via SSH in Launch method. - Enter the hostname of your z/OS in Host. - Select z/OS User, the credential you created in a previous step, in Credentials. - Select Non verifying Verification Strategy in Host Key Verification Strategy because this is a test environment. However, bear in mind that in a production environment you will have to select Known hosts file Verification Strategy for security reasons. - Click Advanced to specify the following advanced parameters: - Keep the default value, 22, in Port. However, if your z/OS SSH port is different, you must change this value. - Enter “/usr/lpp/java/J8.0_64/bin/java” in JavaPath. This is where the Java™ binary files are installed. - Enter “-Xquickstart -Dfile.encoding=UTF-8 -Xnoargsconversion” in JVM Options. - Enter the following in Prefix Start Agent Command:

. /usr/lpp/IBM/dbb/conf/gitenv.sh && export JAVA_HOME=/usr/lpp/java/J8.0_64 && export IBM_JAVA_ENABLE_ASCII_FILETAG=ON && export GIT_CONFIG_NOSYSTEM=0 && env &&

Note that the gitenv.sh script is shipped with Dependency Based Build. - Enter ” -text” (with a whitespace at the beginning) in Suffix Start Agent Command.
- Select Environment properties and add the following properties: - Enter “DBB_HOME” in Name and “/usr/lpp/IBM/dbb” in Value. This specifies where Dependency Based Build is installed on z/OS. - Enter “DBB_URL” in Name and the URL of the DBB server in Value. This specifies the URL of the Dependency Based Build server. - Enter “DBB_HLQ” in Name and the z/OS DBB HLQ in Value. This specifies the HLQ of the PDS that the DBB build will create on z/OS. - (Optional) Enter “DBB_CREDENTIAL_OPTIONS” in Name and the DBB Web server zAppBuild credentials options in Value. The default value is “-id ADMIN -pw ADMIN”. - Enter “ARTIFACTORY_URL” in Name and the URL of the Artifactory server in Value. This specifies the URL of the Artifactory server. - Enter “ARTIFACTORY_REPO_PATH” in Name and the Artifactory repository path in Value. This specifies the Artifactory repository path where binary files will be uploaded. - Enter “UCD_BUZTOOL_PATH” in Name and the location of buztool.sh in Value. It specifies the absolute location of buztool.sh on z/OS. Select Tool Locations and declare a wrapper shipped with DBB, /usr/lpp/IBM/dbb/bin/git-jenkins.sh, in Home. Note: If your Jenkins Git plugins are recent, you might need to use the git-jenkins2.sh wrapper script available on GitHub. Click Save to save the agent configuration. The agent is then launched automatically. 3.5. Create a Jenkins project This project will constitute the pipeline. Go back to the Jenkins dashboard and click New Item to create a project. Enter “Development” as the name, select Pipeline, and click OK. The configuration notebook of the pipeline opens. Enter the following information in the Pipeline tab: - Select Pipeline script from SCM in Definition. To disable Git SSL verification, a .gitconfig file can be placed in the home location of the user who owns the Jenkins agent process with the following contents:

[http]
sslVerify = false

- Put “cics-genapp/Jenkinsfile” in Script Path. This Jenkins script describes a pipeline.
It makes initializations, clones the repositories, starts the DBB build, creates the version that will be pushed to UCD and Artifactory, and calls UCD to deploy the application. Click Save. 4. Run the pipeline in Jenkins Go back to the Jenkins dashboard. You should see that the e2e-pipeline node is ready to run. You can click the Development pipeline and click Build now. The Stage View part of the Jenkins development pipeline displays the duration of each constituting stage of the pipeline: Note: This pipeline is a demonstration flow. In a real environment, some stages should be part of another pipeline. If you want your pipeline to be triggered by a webhook, you can configure the Git webhook for your organization and your Git provider. Step 3. Configure the pipeline for zUnit With the June/July 2020 releases of IBM Dependency Based Build (DBB), IBM Developer for z/OS (IDz), and the DBB reference implementation zAppBuild, it is now possible to easily integrate the execution of IDz unit tests into an open and modern CI/CD pipeline. The purpose of this section is to outline the steps to configure and run zUnit test cases. A detailed technical paper titled Integrating Unit Tests into an open and modern CI/CD pipeline is available on the IBM Support website to help you start with zUnit testing in the context of a pipeline. 1. Configure GenApp for zUnit The GitHub repository you forked during the Jenkins configuration step already contains all the configuration needed to run a zUnit test for the COBOL program cics-genapp/base/src/COBOL/LGICDB01.cbl. Here are the details of the different configuration sections related to zUnit.
A new property file was added for zUnit: cics-genapp/application-conf/zUnitConfig.properties:

# Application properties used by zApp-Build/language/ZunitConfig.groovy
# default zUnit maximum RCs allowed
# can be overridden by file properties
zunit_maxPassRC=4
zunit_maxWarnRC=8
#
# file extension of zunit playback files
zunit_playbackFileExtension=plbck
#
# zUnit dependency resolution rules
# Rules defined in application.properties
zunit_resolutionRules=[${testcaseRule}]

In cics-genapp/application-conf/application.properties, the following elements were added:
- zUnitConfig.properties was added to the applicationPropFiles property.
- cics-genapp/zUnit/testcfg,cics-genapp/zUnit/testcase was added to applicationSrcDirs.
- ${testcaseRule},${testconfigRule} was added to impactResolutionRule.

The following zUnit section was added:

# Run zUnit Tests
# Defaults to false, to enable, set to true
runzTests=false
#
# Comma separated list of the test script processing order
testOrder=ZunitConfig.groovy
# Example: jobCard=//RUNZUNIT JOB ,MSGCLASS=H,CLASS=A,NOTIFY=&SYSUID,REGION=0M
jobCard=//RUNZUNIT JOB ,MSGCLASS=H,CLASS=A,NOTIFY=&SYSUID,REGION=0M
# zUnit Rules
testconfigRule = {"category": "ZUNITINC", \
    "searchPath": [ \
        {"sourceDir": "${workspace}", "directory": "${application}/zUnit/testcfg"}, \
        {"sourceDir": "${workspace}", "directory": "${application}/zUnit/testcase"} \
    ] \
}
testcaseRule = {"library": "SYSPLAY", \
    "searchPath": [ \
        {"sourceDir": "${workspace}", "directory": "${application}/zUnit/testplayfiles"} \
    ] \
}

In cics-genapp/application-conf/file.properties, the following elements were added:
- cics-genapp/zUnit/testcase/*.cbl was added to dbb.scriptMapping for COBOL.
The following zUnit section was added:

# zUNIT
dbb.scriptMapping = ZunitConfig.groovy :: cics-genapp/zUnit/testcfg/*.bzucfg
dbb.scannerMapping = ZUnitConfigScanner :: cics-genapp/zUnit/testcfg/*.bzucfg
cobol_testcase = true :: cics-genapp/zUnit/testcase/*.cbl

In cics-genapp/application-conf/datasets.properties, the following lines were added:

# Optional IDZ zUnit / WAZI VTP library containing necessary copybooks.
# Example : RDZ.V14R2.SBZUSAMP
SBZUSAMP=TOOLS.BZU.SBZUSAMP

In cics-genapp/.gitattributes, the following lines were added:

*.bzucfg zos-working-tree-encoding=ibm-1047 git-encoding=utf-8
*.plbck binary

A folder called cics-genapp/zUnit was added with zUnit input files for LGICDB01.cbl (source, config, and playback). The Jenkins file cics-genapp/Jenkinsfile was modified to be able to record test results in the Jenkins build result. xsltproc is used on the distributed Jenkins server agent. This binary file is required to perform the XSLT transformation of the zUnit test result to a format that the Jenkins server can understand. The following sections were added just after the DBB build:

def zUnitFiles = findFiles(glob: "**BUILD-${BUILD_NUMBER}/**/*.zunit.report.log")
zUnitFiles.each { zUnit ->
    println "Process zUnit: $zUnit.path"
    def zUnitContent = readFile file: zUnit.path
    zUnitContents << zUnitContent
}
...
zUnitContents.each { zUnitContent ->
    writeFile file: '/tmp/zUnit.zunit', text: zUnitContent
    def rc = sh (returnStatus: true, script: '''#!/bin/sh
curl --silent -o /tmp/AZUZ2J30.xsl
xsltproc /tmp/AZUZ2J30.xsl /tmp/zUnit.zunit > ${WORKSPACE}/zUnit.xml
''')
    junit "zUnit.xml"
}

Note: You can browse all these additions in the following Git commit: 2. Trigger a zUnit for a COBOL source zUnit is not enabled by default for the pipeline.
In the cics-genapp/application-conf/application.properties property file, you need to enable zUnit by entering:

runzTests=true

With the IDE of your choice, perform a modification on cics-genapp/base/src/COBOL/LGICDB01.cbl, commit your change to Git, and trigger a new Jenkins build. Note: DBB manages dependencies between zUnit files. Any modification on a file that is related to LGICDB01.cbl will trigger a zUnit test case (test source, zUnit configuration file, and playback file). In the Jenkins build console, you should see the zUnit runner output: In the Jenkins build result, you should see a new zUnit test: In the Jenkins build result, you can view the zUnit log files: Step 4. Configure the pipeline for code coverage With the recent release of IBM Developer for z/OS (IDz) and the DBB reference implementation zAppBuild, it is now possible to easily integrate the execution of code coverage during zUnit tests in an open and modern CI/CD pipeline. This section outlines the steps to configure and run zUnit test cases with code coverage enabled. Prerequisite: IBM z/OS Debugger (v15.0.x or later). See Overview of IBM z/OS Debugger. 1. Configure code coverage headless daemon The GitHub repository you forked during the Jenkins configuration step already contains all the configurations to run code coverage during zUnit tests. For the code coverage collection step, the pipeline will communicate with the code coverage headless daemon and will archive the code coverage PDF export into the pipeline build. In the context of this tutorial, the code coverage headless daemon must be up and running in the environment where your Jenkins master node is running. To install and start the daemon, you can follow the instructions in Generating code coverage in headless mode using a daemon.
Start the code coverage headless daemon by entering the following command:

./codecov -d -port=8006 -exportertype=CCPDF -o=/opt/cc/output

The daemon port and the daemon output folder are used to configure the code coverage in the pipeline. The server hostname where you start your code coverage headless daemon must be accessible from the z/OS where code coverage will be executed during zUnit execution. 2. Configure the pipeline for code coverage Log into your Jenkins server, then click Build Executor Status. In the Nodes page that opens, you can see the e2e-pipeline z/OS agent you created before. Click it and click Configure. In the Configuration page that opens, select Environment properties and add the following properties: - Enter CCC_HOST in Name and the host name of your code coverage headless daemon in Value. - Enter CCC_PORT in Name and the port of your code coverage headless daemon in Value. - Enter CCC_FOLDER in Name and the output folder of your code coverage headless daemon in Value. Note: These values will be used during the DBB build to populate the following command line options: --ccczUnit, --cccHost, --cccPort. Click Save to save the agent configuration. 3. Trigger a build for code coverage With the IDE of your choice, perform a modification on cics-genapp/base/src/COBOL/LGICDB01.cbl, commit your change to Git, and trigger a new Jenkins build. In the Jenkins build result, you can view the code coverage PDF export: Here is a fragment of the PDF contents: Step 5. Configure the pipeline for IDz code review This section outlines the steps to configure and run IDz code review from inside a pipeline. We assume that the following tools have already been installed and configured: IBM DBB Toolkit (v1.0.9.ifix1 or later). See Installing and configuring the DBB toolkit on z/OS. We highly recommend installing the latest PTF for the removal of the blank character in the first column of IDz code review reports. IBM z/OS Source Code Analysis (v15.0.0 or later).
See Program Directory for IBM z/OS Source Code Analysis. 1. Configure the Jenkins agent for IDz code review Log into your Jenkins server, then click Build Executor Status. In the Nodes page that opens, you can see the e2e-pipeline z/OS agent that you created earlier. Click it and then click Configure. In the Configuration page that opens, select Environment properties and add the following environment property: - Enter RUN_IDZ_CODE_REVIEW in Name and true in Value. This will activate the IDz code review in the pipeline. The IDz code review rules files used are in the cics-genapp/cr-rules folder of your forked Git repository. For more information about the IDz code review integration into the pipeline, refer to Run IBM IDZ Code Review in Batch based on the DBB Build Report. Click Save to save the agent configuration. 2. Trigger a build for IDz code review With the IDE of your choice, perform a modification on cics-genapp/base/src/COBOL/LGICDB01.cbl, commit your change to Git, and trigger a new Jenkins build. In the Jenkins build result, you can view the IDz code review reports: At the end of the Jenkins build result page, you can click Test Result: And here’s how the test result should be displayed: Summary This tutorial has helped you understand and set up Jenkins CI facilities to integrate z/OS platforms into a unique solution. With the integration with UCD, you are now able to deploy a CICS application that uses Artifactory as the binary repository. You can now extend your Jenkins CI pipeline with z/OS actions to include unit testing, code review, and code coverage.
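As a final preflight tip: the z/OS agent and build scripts configured above depend on several environment properties (DBB_HOME, DBB_URL, DBB_HLQ, ARTIFACTORY_URL, ARTIFACTORY_REPO_PATH, UCD_BUZTOOL_PATH). A short sketch of a check that an early pipeline stage could run to fail fast when one is missing; the variable list mirrors the agent setup in step 2, but the check itself is not part of the tutorial:

```python
# Agent environment properties named in the tutorial's agent configuration.
REQUIRED = [
    "DBB_HOME", "DBB_URL", "DBB_HLQ",
    "ARTIFACTORY_URL", "ARTIFACTORY_REPO_PATH", "UCD_BUZTOOL_PATH",
]

def missing_variables(environ):
    """Return the required agent variables that are unset or empty."""
    return [name for name in REQUIRED if not environ.get(name)]

# Demonstrate against a sample mapping; a real check would pass os.environ.
print(missing_variables({"DBB_HOME": "/usr/lpp/IBM/dbb"}))
```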
https://developer.ibm.com/tutorials/build-a-pipeline-with-jenkins-dependency-based-build-and-urbancode-deploy/
A couple of days ago I was lucky enough to receive an invitation to the SAP HANA Developer Access Beta program. For those of you not aware what that is, there is an excellent FAQ on SDN which you can find here: The following paragraphs will describe 7 steps towards your first HANA reporting by using SAP HANA Sandbox. Step 1 – Cloudshare In your invitation mail there is a link to cloudshare.com. Create a new account, log on to the desktop and follow this YouTube video on how to set up your own HANA system: A great tip is given on how to use remote desktop instead of using the web version which comes with Cloudshare. This is highly recommended as navigation is much easier in the remote desktop. Logon details for your RDP session are given on the Cloudshare page each time you activate the environment: Step 2 – Getting big data So you managed to set up a system and created an RDP session. Good job! Now to find some “big data”. In this article I’m using data which I downloaded from the guys over at Infochimps. I’ve used the following data sources, which in total will give me about 20 million records. - AMEX Exchange Daily 1970-2010 Open, Close, High, Low and Volume - NASDAQ Exchange Daily 1970-2010 Open, Close, High, Low and Volume - NYSE Exchange Daily 1970-2010 Open, Close, High, Low and Volume The files contain stock exchange data. Great recipe for finding something interesting. You will be getting a bunch of separate csv files. Use the daily prices ones. For simplicity's sake I have merged all files into three separate csv files. Good old DOS can help you with that by using the following command:

copy *.csv importfile.csv

Make sure to execute the command in the same directory where your files are placed. Replace importfile.csv with something recognisable (like AMEX.csv, NASDAQ.csv, NYSE.csv).
The files have the following structure: That means I need to replicate that structure into my HANA table. You can create your table in the HANA studio using the modeler or by using SQL.

Modeler: Please note that you should create the table in your own schema and use Column Store (to witness awesome speed later on).

SQL: I prefer SQL because it's faster. The following command will create your table:

create column table "S0001432066"."NASDAQ"(
  "EXCHANGE" VARCHAR (10) not null,
  "STOCK" VARCHAR (10) not null,
  "DATE" DATE not null,
  "PRICEOPEN" DECIMAL (15,2),
  "PRICEHIGH" DECIMAL (15,2),
  "PRICELOW" DECIMAL (15,2),
  "PRICECLOSED" DECIMAL (15,2),
  "STOCKVOLUME" DECIMAL (15),
  "PRICECLOSEDADJ" DECIMAL (15,2),
  primary key ("EXCHANGE","STOCK","DATE"))

Step 4 – FTP and import your files into the HANA system

The guys over at SAP will make you do a little treasure hunt in order to find the user id and password for the FTP server. Go into your HANA system and execute the following SQL statement:

Select * from SYSTEM.FTP_SERVER

Et voila, a username and password (masked for obvious reasons): Take note of what is mentioned on where to store the files. More specifically, you should create a folder on the server equal to your SCN number (in my case S0001432066). Fire off your favourite FTP client (mine is FileZilla): Create a directory and store your files:

Take note that next to my files containing the data there is a so-called "ctl" file. These files are required in order to be able to load data into your created (NASDAQ) table. The files have the following content:

Import data into table S0001432066."NASDAQ"
from 'AMEX.csv'
record delimited by '\n'
fields delimited by ','
optionally enclosed by '"'
error log 'Text_Tables.err'

NASDAQ is the name of my created table, AMEX.csv the file I will load. If required, additional information can be found in this post: How to load CSV files into HANA.

Time to import your 20 million something records into HANA!
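Before uploading, it can save an import round-trip to sanity-check that your merged file matches the table layout. A minimal sketch in Python — the column order mirrors the CREATE TABLE above, and the sample row is made up:

```python
import csv
from io import StringIO

# Columns in the same order as the HANA table defined above
COLUMNS = ["EXCHANGE", "STOCK", "DATE", "PRICEOPEN", "PRICEHIGH",
           "PRICELOW", "PRICECLOSED", "STOCKVOLUME", "PRICECLOSEDADJ"]

def check_row(row):
    """Return True if a csv row has 9 fields and numeric price/volume columns."""
    if len(row) != len(COLUMNS):
        return False
    try:
        for value in row[3:]:
            float(value)          # prices and volume must parse as numbers
    except ValueError:
        return False
    return True

sample = 'NASDAQ,AAPL,2010-01-04,30.49,30.64,30.34,30.57,123432400,30.57\n'
rows = list(csv.reader(StringIO(sample)))
print(all(check_row(r) for r in rows))
```

Rows that fail the check would otherwise end up in the Text_Tables.err file during the IMPORT, so filtering them out first makes the load cleaner.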
Execute the following SQL statement:

IMPORT FROM '/dropbox/S0001432066/AMEX.ctl'

Note the name of the folder I created in step 4 (folder S0001432066); /dropbox/ is a prefix. After a while you will get the following result back:

Statement 'IMPORT FROM '/dropbox/S0001432066/AMEX.ctl'' successfully executed in 7:22.046 minutes – Rows Affected: 0

Hang on, I hear you thinking. 0 rows? No it's not, actually. You can check by firing off the following SQL statement:

select count(*) from NASDAQ

That looks promising! Let's check some more: We have data! Look at the log file: Fetched 30 row(s) in 15 ms. Wow!

Step 5 – Create an Analytic view for reporting

First step: create a package which will hold your view: Create a so-called Analytic View: Give it a name: Select your table: Drag your objects into the attributes and measures sections: Validate, save and activate your view. Well done! We can use this view in Explorer and Excel. Note that you can preview your data and even auto-document it by using these buttons:

Important! In case preview fails, it is likely you have to "grant" your schema by executing this statement:

grant select on schema s0001432066 to _SYS_REPO with grant option

Replace s0001432066 with your own namespace of course. As an extra step you could create an extra table which holds the stock names. If you follow the same procedure as for creating the table with the stock records, you can join your two tables and have texts together with the stock names. The sequence for this would be:

- Create and load your text table
- Create an attribute view
- Link your analytic view together with your attribute view

Result would be: Preview:

Step 6 – Using Explorer to report on your data

On your desktop a shortcut to Explorer can be found: Fire it off and be sure to enter the correct server, which can be found in your invitation mail: Go to manage spaces: Select your view and press "New": Give it a name: Put in your objects: And press OK! Don't forget to index your data.
Refresh your page and you have a new information space: Press it and start analysing! This one is for you, Steve. Apple stock prices in the year he had to leave Apple: Be sure to select enough data to get a nice trend.

Step 7 – Using Excel to report on your data

There is also a possibility to use Excel pivot tables for your reporting needs. Fire off Excel and connect to your view: Choose Other/advanced: Select the MDX connection: Enter server credentials (check your invitation mail if not known): Select your view: You now have your data in a pivot table format. Set some filters and analyse your data at great speed! Note that data retrieval is at great speed, but Excel makes it a bit sluggish in an RDP session. Many thanks for bearing with me till the end and good luck with this great opportunity to test drive HANA!

Thanks a lot for this awesome getting started blog. We'll make sure every new user of the Developer Center sees this 🙂 And you just earned yourself another free month on the Developer Center – your access is extended until Feb 28. I hope you'll share more of your findings with the community! cheers –Juergen

Congrats, I don't have HANA Developer Access but seeing your blog I understood how it would look if I had access. Please do contribute more of these blogs, so that we can learn and experiment the same when we get a chance for HANA hands-on. Regards, Krishna Tangudu

If you want to try the steps yourself, go to and request access to the sandbox systems. Without an invitation code (aka access code) it might take a few days, but basically every SCN user can get access. –Juergen

Thanks for your blog; it is a lot of help. Please keep continuing your sequence. Best regards, Ramesh Choragudi

This blog is awesome and it helped me. Please keep continuing your sequence on this. Best regards, Ramesh Choragudi

Hi Ronald, No doubt it is a very great blog and I got lots of learning by reading your blog.
But CloudShare currently has SPS05 and the environment is completely different — especially from step 5, as the wizard for creating an analytic view has changed. It will be really great if you can update the current blog to work for SPS05. Thanks, Kaustubh

Your blog is amazing for a jump start on HANA. Appreciate your time and efforts. -Naveen

Hi, I am on hana server 7: hanasvr-07 00. I have been following the NASDAQ guide from the wiki. I retrieved the data from Infochimps and uploaded it to the folders /S0004092614 and /dropbox/S0004092614. I tried following the various guides for importing the data; I continuously keep getting the error. I followed the guide posted here:–get-the-best-out-of-your-test-drive I then followed the guide for the table for USERS in this guide: It still gives an error: SAP DBTech JDBC: [2]: general error: Cannot open Control file, /dropbox/S00040926142/USERS.CTL. No error log is generated either. Am I missing something?

Hi Zeeshan, I have stumbled upon the same problem faced by you. Could you let me know how you resolved your problem of SAP DBTech JDBC: [2]: general error: Cannot open Control file?

Hi Ravi, I tried different things with the file; I do not recall the entire list of things which I did. I believe I touched upon the folder settings and the permissions; I also checked the file format and removed a couple of parameters from the load file syntax.

Hi Ronald: Can you please share the link for downloading the data files? I have got HANA box access. Thank you,

Ronald, Raghu, I just got access to the sandbox. Can anyone post details on implementing Step 2 – to get big data from Infochimps? I am assuming that you can move further only after completing Step 2 to create Step 3, 4, etc. Thanks

Certainly helpful. 🙂 I was amazed with the clear path of explanation and instructions provided. I'm glad to take it from here and reach the next level.
➕ Thank you much Ronald Konijnenburg

Hi Ronald, Can we publish Calculated Measures or a calculated view in BO Explorer? If yes, please give us input, as I am not able to see a Calculated Measure in BO Explorer. Thanks in advance. Phani Rayasam

Hi Ronald, can you please re-upload the pics to this blog space (the Dropbox pics disappeared)? Thank you. Uwe Fetzer

Hi experts, could you please explain how to generate data for calmonth and calquarter using calday in SAP HANA? I have a table containing day-wise data. I have to transform it into month and quarter. Regards, sivaumar

Sorry, I am new to this. How can I ask for an access code from someone? Guide me with details.

Some errors when I go to manage spaces. The error message is "Failed to retrieve the data source list, Request timed out". Who knows why?

Problem with ftp. I followed the guidelines for exchanging files using ftp. I figured out the ftp user using the SQL command as indicated above. But as soon as I try to open an ftp session I get the following message: The host '' is unreachable. I tried several times. It seems to have fallen asleep. Is there anything you can do about this?

Problem is solved. It was due to the ftp program (which always worked so far). Nonetheless I tried another one and it worked.

Hi Ronald, thank you for the great blog which helped me to better understand HANA. Unfortunately my BusinessObjects Explorer stated "Failed to index the information space. Request timed out". 😯

Hello community! I have got a very big and strange problem at Step 4 – FTP and import your files into the HANA system. – I am using FileZilla on Windows 7 as my FTP client. – I logged in with the username & password; everything works fine. – BUT: I can not load my *.csv files into the folder which I created; there is always the following error code: 226 Transfer done (but failed to open directory). => The strange thing is that I CAN load my *.csv files into any other existing folder, but not into my own!??!
The name of my folder is: S0002866913. Thank you very much for helping.

Problem solved. There you can see the solution. Regards

Hi Experts, While loading the data I am getting the below error: Could not execute 'IMPORT FROM '/dropbox/DINESH/AMEX.ctl'' SAP DBTech JDBC: [139]: current operation cancelled by request and transaction rolled back: CSV file has some bad lines and cannot create bad file. Some records are loaded. My control file is:

Import data into table P1527334943."NASDAQ"
from 'AMEX.csv'
record delimited by '\n'
fields delimited by ','
optionally enclosed by '"'
error log 'Text_Tables.err'

Thanks, Dinesh

Hi Ronald, What a powerful article from you. Really it will be a good start for all who want to taste HANA, and it will be delicious as you added the main masala to it. Thank you and keep posting. Regards Raj

Tx a lot, made me smile!

Hi Ronald, A very good getting started guide, thanks. One issue – I am seeing an error with file permissions, in that I do not have permission to view the .err file generated during the CSV upload into HANA. I also do not have sufficient authorisations to change the .err file permissions via FTP, so I am blind as to what the errors in the .err file actually are. In case anyone else has the same issue (or knows how to fix it), the problem is here

Hi, I uploaded the data into the Linux box under S0005971056 and used the import command. It is not giving any error, but it is producing the error file, the same size as the CSV file; unfortunately I am not able to open the error file. I tried to change the file permissions to 777, but the chmod command is failing all the time. I have access to hanaserver-03; you can look at amexnew.csv and amexnew.ctl. Thanks in advance for your help.

Hey, thanks for this great post. But I cannot see any pictures… Is your Dropbox closed or so? Can you please attach the screenshots again to this post. Thanks Regards /edit: Sorry..
it was the proxy… the pictures will show on my personal notebook. Thanks

What would be the procedure once you have, say, NYSE data loaded and AMEX data, and now you want to look at both of those combined? I realize I could just merge the two data files, but what if I wanted HANA to essentially UNION these two tables — would I do that as an Analytic view? Can someone give me some high-level steps on what I would want to do there?

Hey Guys, I am new to SAP and trying to self-educate on SAP HANA. I have followed the procedure step by step as listed in this document and tried every possible trick people have shared, but I am still not able to import the .csv file into HANA. My folder name is: P1716453917. I downloaded the .csv files from Infochimps and merged them together via cat *.csv > AMEX.csv (using a Mac, where the copy command doesn't work; the cat command returns the same result). Created the .ctl file; this is what it looks like:

Import data into table P1716453917."NASDAQ"
from 'AMEX.csv'
record delimited by '\n'
fields delimited by ','
optionally enclosed by '"'
error log 'Text_Tables.err'

Uploaded both the files to my user folder via FileZilla. Path: /P1716453917. The step I am having an issue with is importing data into HANA. I am using the command IMPORT FROM '/dropbox/P1716453917/AMEX.ctl' and it throws the following error: "Could not execute 'IMPORT FROM '/dropbox/P1716453917/AMEX.ctl'' in 73 ms 542 µs . SAP DBTech JDBC: [2]: general error: Error processing a statement at "{\rtf1\ansi\ansicpg1252\cocoartf1187\cocoasubrtf340" " File permissions for both my files are set to 777. I have tried modifying the import part to include another /dropbox, but doing so throws a "Cannot open Control file" error. I have already spent a few hours on it and am getting nowhere. I am hoping someone from this thread could triage my issue. Appreciate any advice or resolution in advance. Thanks

Hello Everyone, Can someone send me those files please? I can't download them……what are infochimps??
Best Regards

Hi Everyone, As HANA studio SP5 shows query execution and processing time once you execute the query, has anyone experienced comparatively longer server query execution time for a column table than for a row table?

Hi, The YouTube link given at the very beginning is not available. Kindly check. reg, avinash M
https://blogs.sap.com/2011/12/26/sap-hana-developer-access-beta-program-get-the-best-out-of-your-test-drive/
Hello everyone, I don't have much experience with C++ and it has been a while since I last used it. Right now I am writing my own class for quaternions. My problem, which I cannot figure out, is what is wrong with the operator overload shown below... No matter what I try and change, I more or less receive multiples of the same error:

invalid operands of types `double*' and `double*' to binary `operator*'

I've looked around and don't really understand what my problem is... Any help is much appreciated. Thanks in advance.

class CQuaternion{
public:
    // Constructors
    CQuaternion(double,double,double,double);
    CQuaternion();
    // Destructors
    ~CQuaternion();
    CQuaternion* operator*(CQuaternion&);
    void conjugate ();
    void print () ;
private:
    double *w, *x, *y, *z;
};

CQuaternion::CQuaternion (double a, double b, double c, double d)
{
    *w = a;
    *x = b;
    *y = c;
    *z = d;
}

CQuaternion::CQuaternion ()
{
    w = new double;
    x = new double;
    y = new double;
    z = new double;
    *w = 0;
    *x = 0;
    *y = 0;
    *z = 0;
}

CQuaternion::~CQuaternion ()
{
    delete w;
    delete x;
    delete y;
    delete z;
}

void CQuaternion::conjugate()
{
    *x = -(*x);
    *y = -(*y);
    *z = -(*z);
}

CQuaternion* CQuaternion::operator* (CQuaternion& param)
{
    CQuaternion* temp;
    temp->w = (this->w)*param.w - (this->x)*param.x - (this->y)*param.y - (this->z)*param.z;
    temp->x = (this->w)*param.x + (this->x)*param.w + (this->y)*param.z - (this->z)*param.y;
    temp->y = (this->w)*param.y - (this->x)*param.z + (this->y)*param.w + (this->z)*param.x;
    temp->z = (this->w)*param.z + (this->x)*param.y - (this->y)*param.x + (this->z)*param.w;
    return (*temp);
}
https://www.daniweb.com/programming/software-development/threads/305176/c-class-problem
Hello, I have a Pi 3 and I got the temp/humidity sensor for it. I don't know much coding, especially Python, but I've managed to edit some script I found online to get to where I'd like. The original script used a different method which I didn't like (it doesn't read well to a non-coder), but it seems I've lost 2 significant digits on the humidity variable. I added a conversion from C to F degrees, and changed the output to a string variable. Other than that it's the same. The original code had decimals for temp and humidity; now I only get them from temperature. I also added a round to 2 decimal places for the output, but even if that's removed, humidity still doesn't output any decimals. Can someone review the code and tell me why?

I took Texas Instruments BASIC and C++ decades ago for one high school class, so that's my limit of coding in the late 90's, but I hope I'm not too useless. I don't know what the default variable size/type is for Python; in C I would have manually set it to float or something to make sure it wasn't truncated, but I don't see where the variable types are assigned in this... and I don't know why the regular code's output humidity HAS decimals, but it doesn't in my edited one. Thanks!!

My edits:

import Adafruit_DHT

DHT_SENSOR = Adafruit_DHT.DHT22
DHT_PIN = 4

while True:
    humidity, temperature = Adafruit_DHT.read_retry(DHT_SENSOR, DHT_PIN)
    tempf = ((temperature * 9.0) / 5.0) + 32  # converts C to F
    output = "Temperature %s, Humidity %d" % (round(tempf, 2), round(humidity, 2))  # rounds to 2 decimals
    if humidity is not None and temperature is not None:
        print(output)
    else:
        print("no data")

The original google'd code (the start of the snippet was cut off in the post):

print("Temperature {0:0.1f}c Humidity {1:0.1f}%".format(temperature, humidity))
else:
    print("no data")
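The likely culprit is the format string rather than any variable type: in Python, %d formats a number as an integer, so the decimals of humidity are dropped regardless of the round() call. A small illustration — the sensor values here are made up, since a real reading would come from Adafruit_DHT:

```python
humidity = 45.67   # example reading; a real one comes from Adafruit_DHT.read_retry
tempf = 78.83

# %d converts the value to an integer, so the decimals are lost
print("Humidity %d" % humidity)

# %s (or %.2f) keeps the decimals
print("Humidity %s" % round(humidity, 2))
print("Temperature %.2f, Humidity %.2f" % (tempf, humidity))
```

Changing the %d in the original script's format string to %s or %.2f should restore the two digits after the decimal point.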
https://forum.freecodecamp.org/t/hello-output-question-on-phython-script-for-rasp-pi-temp-humidity/475422
Published: 16 Oct 2007 By: Brad Vincent

Download Sample Code

Start your Silverlight development journey today with this step-by-step article.

You are reading this article because you are either interested in what Silverlight has to offer, or maybe you want to start developing in Silverlight. Perhaps you are just curious to see how Silverlight may help you improve your web applications; or maybe you were always too lazy to learn Flash (like me) and now that there's a .NET equivalent, you've decided it's time to take the plunge!

I think Microsoft sums it up quite nicely: ." Wikipedia says "Silverlight is a proprietary runtime for browser-based Rich Internet Applications, providing a subset of the animation, vector graphics, and video playback capabilities of Windows Presentation Foundation." I suggest you read both the Wikipedia definition as well as visit the Silverlight Homepage itself.

There have been quite a few discussions on this topic. Flash developers don't seem to like what Microsoft has "attempted to offer in terms of Rich Internet Applications," but let's remember one thing: it's a new release! What were the first versions of Flash like? Of course there will be limitations and maybe flaws in the earlier releases. One advantage Microsoft has on its side is this: it can learn from all the shortcomings of the earlier Flash versions over the years. I am not going to be biased and say which I think is better, because Flash might be better for some developers and Silverlight better for others. I am interested in Silverlight for one reason and one reason only: I am a .NET developer and I can create Silverlight applications in Visual Studio using C#. So it just seems to be the obvious choice for me. I suggest you go to the Silverlight Homepage and read why you should use Silverlight. A list of discussions and sites written by developers on both sides of the fence is available here.
It is also important to know what the new features are in the latest 1.1 Alpha release compared to the initial release. This can be found on MSDN here. I say this because a lot of articles that comment on Silverlight have only used or seen version 1. Here's a shortlist of what Silverlight has to offer: More can be read on the Silverlight Architecture at MSDN.

OK, so enough reading, let's start! We are going to develop our first Silverlight application using Visual Studio 2008 Beta 2 and the Silverlight 1.1 Alpha runtime. First, we need to set up our environment before we can start coding. Go to and complete the following steps: Optional downloads are also available: After you have downloaded and installed everything, we are ready to code. So what are we waiting for?

Open Visual Studio 2008 Beta 2 and click File > New > Project; then choose Silverlight Project; enter the project name "HelloSilverlight" and click OK. A stock standard Silverlight project is created. Let's go through the files we are looking at:

Page.xaml - our first XAML (pronounced zammel) file! The XML in this file defines the UI for our Silverlight content and behaviors. It currently contains the following code (note that I have reordered the attributes): As you can see there is only a root element (Canvas). The Canvas element has 2 namespace attributes, but the important attribute is x:Class. This attribute is used to define the code-behind class in your code (in the Page.xaml.cs file). It also references where it will build the assembly\DLL. The rest of the attributes are pretty straightforward: x:Name lets you define the name of the canvas. Loaded allows you to define an event handler in your code behind for the Loaded event of the canvas. Width/Height/Background are the canvas dimensions and color.
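The XAML listing itself was lost from this page; reconstructed from the attribute walkthrough above, the default Page.xaml of a Silverlight 1.1 Alpha project looks roughly like this (namespace URIs and exact values may differ from the original article):

```xml
<Canvas x:Name="parentCanvas"
        xmlns="http://schemas.microsoft.com/client/2007"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Loaded="Page_Loaded"
        x:Class="HelloSilverlight.Page;assembly=ClientBin/HelloSilverlight.dll"
        Width="640"
        Height="480"
        Background="White">
</Canvas>
```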
Page.xaml.cs - the code-behind file for our XAML - it currently only contains our Page_Loaded event.

Silverlight.js - the Silverlight JavaScript helper file. It contains the methods to provide version checking and generation of the OBJECT or EMBED markup for plug-in instantiation in the HTML file that hosts the Silverlight plug-in.

TestPage.html - the default HTML page that contains the Silverlight content. An important thing to notice is the code: Here we reference the Silverlight JavaScript file as well as the page's JavaScript file, which contains the important createSilverlight method. The other important code is: This is how we embed the Silverlight content into our page. You'll notice we called the createSilverlight method.

TestPage.html.js - this contains the createSilverlight method, which hooks up an HTML div element to the XAML content and then gives it focus. Here is the code:

OK, let's run the project and see what happens! Wow - a blank page. Don't stop the project yet. Open the Page.xaml file and change the Background attribute to Background="Gray". Save the file, then do a refresh in the browser. You will now be able to see the blank gray canvas in your browser.

Now let's add the famous "Hello World!" text. Open the Page.xaml file again and, inside the root Canvas node, add a TextBlock with the following XAML: Save and refresh the page in the browser. Now you will see the Hello World text. Well done - you have created your very first Silverlight application and are well on your way to becoming a seasoned Silverlight developer.

Let's extend our previous example by adding some clickable text that responds to a mouse click event. Stop the project if it's already running; then open the Page.xaml file and add a child canvas which contains a TextBlock element as follows: Save and run. You will see another TextBlock that displays the text "Click here". Now we need to hook up a mouse click event.
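The two XAML snippets were also stripped from the page; based on the surrounding text, they would look something like this (element names, positions and sizes are illustrative, not the article's originals):

```xml
<!-- the "Hello World!" text added inside the root Canvas -->
<TextBlock Canvas.Left="30" Canvas.Top="30" FontSize="24" Text="Hello World!" />

<!-- the clickable child canvas from the follow-up example -->
<Canvas Canvas.Left="30" Canvas.Top="80" Width="100" Height="30" Background="LightGray">
    <TextBlock FontSize="16" Text="Click here" />
</Canvas>
```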
Edit the child canvas by adding the attribute Then go into the code-behind file and add an event handler: Run the code and click on the child canvas. The canvas will change to a red color. If we wanted to change the TextBlock color too, we could change our code-behind event handler like so: Some of you might look at this code and think "I don't like how you reference the TextBlock with c.Children[0]. What if you add another object before the TextBlock — won't the code break?" I feel your pain, but don't worry because it's easy to get around. Simply add an x:Name attribute to the TextBlock XAML: Then, in your code-behind handler, change the code to: Now that's more like it!

But how could I reference this object in the code behind? What physically happened behind the scenes when I added the x:Name attribute? If you look at the Page.xaml.cs file you will notice it's a partial class. Then if you go to the definition of the InitializeComponent method called in the page load event, you'll be taken to a file named page.g.cs. It is located in the obj/Debug folder as shown below:
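The attribute being added here is a mouse event hookup on the child canvas; a reconstruction of what the stripped snippet likely showed (the handler and element names are assumptions, not the article's originals):

```xml
<Canvas x:Name="clickCanvas" MouseLeftButtonUp="ClickCanvas_MouseLeftButtonUp"
        Canvas.Left="30" Canvas.Top="80" Width="100" Height="30" Background="LightGray">
    <TextBlock x:Name="clickText" FontSize="16" Text="Click here" />
</Canvas>
```

The matching code-behind handler would then set the sender canvas's `Background` (and, per the text, the named TextBlock's `Foreground`) to a red brush.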
We have gone through what Silverlight is and how to setup your environment to create your own Silverlight projects. We have gone through a simple project to see what makes up a Silverlight application and how all the pieces fit together. We have only covered the basics and this is only the tip of the iceberg. In my next article I will carry on from where we left off, figure out how to debug our Silverlight applications and cover some more advanced features. More importantly, we will look at some prettier examples by playing with Expression Blend. Now it's up to you. Experiment with XAML, download the samples and play. Good Luck! View complete profile WPF Tutorial. Simone Busoli Feel free to ask me any .NET question
http://dotnetslackers.com/articles/silverlight/HelloSilverlightStartYourSilverlightJourneyToday.aspx
Hi experts, newbie here. I am having trouble in Processing reading data from the serial monitor sent by my Arduino. I'm not sure how to read multiple values. Here is my Arduino data being sent to the serial monitor (9600 baud):

Serial.print(LeftSensor);
Serial.print(',');
Serial.print(FrontSensor);
Serial.print(',');
Serial.print(RightSensor);

Here is my Processing serial reading function:

import processing.serial.*;

Serial port;
float center = 0;
float left = 0;
float right = 0;

void setup(){
  size(480, 300); // AqT arduino quad test
  port = new Serial(this, "/dev/cu.usbmodem1421", 9600);
  port.bufferUntil('\n');
}

void serialEvent() {
  int newLine = 13; // new line character in ASCII
  String message;
  do {
    message = port.readStringUntil(newLine); // read from port until new line
    if (message != null) {
      String[] list = split(trim(message), " ");
      if (list.length >= 4 && list[0].equals("Stuff: ")) {
        //yaw = float(list[1]); // convert to float yaw
        //pitch = float(list[2]); // convert to float pitch
        left = float(list[1]);
        center = float(list[2]);
        right = float(list[3]);
        roll = float(list[4]);
      } else {
        rect(20, 20, 20, 20);
      }
    }
  } while (message != null);
}

void draw(){
  serialEvent();
}

When I try the code out, Processing reads nothing, but the Arduino works fine in terms of output. Please help, thanks.

Answers

Processing does not receive the input from the Arduino (the Arduino code works fine).

Hi ash8, What if somehow your Processing program is out of step with the Arduino program? Your incoming message will only have some of the values. (Is the Arduino sending repeatedly, or only occasionally?) Try this idea: In Processing, make a global String called message. Each time a character arrives, append it to the end of message (2 advantages already: you don't hang up your program waiting for something that's not coming, and you can print out what you have so far). Then check the message.
If it has an end of line, and the stuff in front of the eol has 'stuff' and 4 values, then use it. After using it (or not), delete it from the front of message.
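The buffer-and-parse idea from the answer can be sketched independently of the serial port. One thing worth noticing first: the Arduino code above sends comma-separated values with no prefix, while the posted serialEvent splits on a space and expects a "Stuff: " token the Arduino never sends, so nothing ever matches. A plain-Java sketch of the approach (the class and method names are made up; in Processing you would call feed from serialEvent):

```java
// Accumulate incoming characters and parse complete comma-separated lines,
// mirroring the "global String message" idea from the answer above.
class SerialParser {
    private final StringBuilder message = new StringBuilder();

    // Call this for every character that arrives from the port.
    // Returns the parsed values once a full line is available, else null.
    public float[] feed(char c) {
        if (c != '\n' && c != '\r') {
            message.append(c);
            return null;
        }
        String line = message.toString().trim();
        message.setLength(0);               // delete the used part of the buffer
        if (line.isEmpty()) return null;
        String[] parts = line.split(",");   // Arduino sends "left,front,right"
        if (parts.length != 3) return null; // out-of-step or partial line: skip it
        float[] values = new float[3];
        for (int i = 0; i < 3; i++) {
            values[i] = Float.parseFloat(parts[i].trim());
        }
        return values;
    }
}
```

In the Processing sketch, serialEvent would feed each incoming character and copy the returned values into left, center and right whenever the result is non-null; partial or garbled lines simply return null instead of hanging the sketch.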
https://forum.processing.org/two/discussion/24307/how-do-i-receive-multiple-values-through-processing-from-the-arduino-serial-monitor