text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringclasses
91 values
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
7.19: Game Over Screens - Page ID - 14571 def showGameOverScreen(): gameOverFont = pygame.font.Font('freesansbold.ttf', 150) gameSurf = gameOverFont.render('Game', True, WHITE) overSurf = gameOverFont.render('Over', True, WHITE) gameRect = gameSurf.get_rect() overRect = overSurf.get_rect() gameRect.midtop = (WINDOWWIDTH / 2, 10) overRect.midtop = (WINDOWWIDTH / 2, gameRect.height + 10 + 25) DISPLAYSURF.blit(gameSurf, gameRect) DISPLAYSURF.blit(overSurf, overRect) drawPressKeyMsg() pygame.display.update() pygame.time.wait(500) checkForKeyPress() # clear out any key presses in the event queue while True: if checkForKeyPress(): pygame.event.get() # clear event queue return The game over screen is similar to the start screen, except it isn’t animated. The words "Game" and "Over" are rendered to two Surface objects which are then drawn on the screen. The Game Over text will stay on the screen until the player pushes a key. Just to make sure the player doesn’t accidentally press a key too soon, we will put a half second pause with the call to pygame.time.wait() on line 14 [180]. (The 500 argument stands for a 500 millisecond pause, which is half of one second.) Then, checkForKeyPress() is called so that any key events that were made since the showGameOverScreen() function started are ignored. This pause and dropping of the key events is to prevent the following situation: Say the player was trying to turn away from the edge of the screen at the last minute, but pressed the key too late and crashed into the edge of the board. If this happens, then the key press would have happened after the showGameOverScreen() was called, and that key press would cause the game over screen to disappear almost instantly. The next game would start immediately after that, and might take the player by surprise. Adding this pause helps the make the game more "user friendly".
https://eng.libretexts.org/Bookshelves/Computer_Science/Programming_Languages/Book%3A_Making_Games_with_Python_and_Pygame_(Sweigart)/07%3A_Wormy/7.19%3A_Game_Over_Screens
CC-MAIN-2022-21
refinedweb
300
75.91
Using native OPC API under Windows XP - 01 Maret 2011 19:22 Is there any way I can use the Native Open Packaging API's under windows XP? MusicBundle example (from Windows 7 SDK) returns "class not registered". What is the dll I need to register? I couldn't find any information. I have C# application ( using "System.IO.Packaging" ) that works. But I also need native access to the custom file I have... Thanks. Orhun Birsoy Semua Balasan - 01 Maret 2011 22:23 Hi Orhun, the Open Specifications group does not support the Windows 7 SDK samples. Please try posting your question in a more relevant forum such as the .NET Base Class Library forum since the Packaging class resides in the System.IO namespace. You might also find the following MSDN documentation helpful. Differences between the Native and Managed APIs Josh Curry (jcurry) | Escalation Engineer | US-CSS DSC Protocols Team - 09 Maret 2011 3:42 Hi Orhun, OpcServices.dll (located in Windows 7 \Windows\System32\) contains the Win7 native-code OPC API's. The native-code OPC APIs were not designed or tested to run down-level - it's doubtful that this DLL will work on Windows XP. Jack Davis - Disarankan sebagai Jawaban oleh Jack Davis [MSFT]Microsoft Employee 09 Maret 2011 3:45 - Ditandai sebagai Jawaban oleh Jack Davis - MSFTOwner 08 April 2011 6:40 -
http://social.msdn.microsoft.com/Forums/id-ID/os_opc/thread/3ab75f39-e008-4662-a2a6-96c6aff43080
CC-MAIN-2013-20
refinedweb
228
65.83
A blog of the technical and only sometimes uneventful side of programming in .NET and life within Microsoft System.Net.Mail namespace. This contains the classes used to send email using SMTP. The MailMessage class is used to represent the content of an email message. The SmtpClient transmits email to the SMTP host that you designate for mail delivery. For example you can use the following code within a button of a Windows form to send an HTML email. Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click Dim client As New SmtpClient("smtp.server") Dim toAddr As New MailAddress(trobbins@microsoft.com) Dim fromAddr As New MailAddress(trobbins@microsoft.com) Dim message As New MailMessage(fromAddr, toAddr) message.IsBodyHtml = True message.Subject = "This is a Test" message.Body = "<html><body><b>This is an </b> <font color=red>HTML Message</font></body></html> " client.Send(message) End Sub The email that is sent looks like the following
http://blogs.msdn.com/trobbins/archive/2006/06/28/649859.aspx
crawl-002
refinedweb
165
54.79
Microsoft have released a major piece of the JUMP initiative: Visual J# .NET. This is a set of new tools for those wishing to develop using the java-language syntax. "Java Language Syntax" is the important point here. It is not Java - it's J#. Visual J# .NET beta 1 can be downloaded from here. Visual J# .NET beta 1 is fully integrated into Visual Studio .NET beta 2. It can be used to develop applications, class libraries and web services for .NET, and like other .NET languages targets the .NET CLR and uses the base class libraries. As well, J# includes features of other first class languages such as cross language integration, security, versioning and deployment, and debugging and profile support. The J# documentation notes that even if you do not have Visual J# .NET installed on your machine you can still debug your java-language applications. The new tools include: Microsoft is making a very clear distinction between Java - the trademarked technology from Sun - and the java language syntax. Files containing the J# language will have the extension .jsl. This distinction is important because Visual J# .NET does not contain the following functionality: To upgrade Visual J++ 6.0 projects to Visual J# .NET simply open the VJ++ project in the Visual Studio .NET IDE and you will be prompted to have the project upgraded. A new solution file (.jshproj) will be created for your J# project. There are a couple of issues involved in upgrading such as no support for Pre and post build steps, and version information now being stored in an assembly, and the CLASSPATH feature of VJ++ 6.0 is not supported. Java has been compared extensively to C#, but once you get past the familar syntax there are significant differences (see A Comparative Overview of C#). Visual J# does not support the following features: Visual J# supports the .NET CodeDOM. CodeDOM, or Code Document Object Model, provides a way to describe the structure of a piece of source code that can be rendered in multiple languages. It's used in ASP.NET to render HTML pages, XML Web Service proxies, code wizards, designers etc, or for dynamic compilation. COM Interop is supported in much the same was as was done in Visual J++ (ie. using the JActiveX tool) and using J# components from unmanaged clients is achieved by using the RegAsm tool that ships with the .NET SDK. See the online docs for more information. Obviously no introductory article on J# would be complete without HelloWorld. The code is very similar to what you would write using either C# or VB.NET, though there enough differences to make it clear that this is neither. package WindowsApplication1; import System.Drawing.*; import System.Collections.*; import System.ComponentModel.*; import System.Windows.Forms.*; import System.Data.*; // Summary description for Form1. public class Form1 extends void Dispose(boolean disposing) { if (disposing) { if (components != null) { components.Dispose(); } } super.Dispose(disposing); } #region Windows Form Designer generated code // Required method for Designer support - do not modify // the contents of this method with the code editor. private void InitializeComponent() { this.button1 = new System.Windows.Forms.Button(); this.SuspendLayout(); // // button1 // this.button1.set_Location(new System.Drawing.Point(((int)96), ((int)32))); this.button1.set_Name("button1"); this.button1.set_TabIndex(((int)0)); this.button1.set_Text("Click Me!"); this.button1.add_Click( new System.EventHandler(this.button1_Click) ); // // Form1 // this.set_AutoScaleBaseSize(new System.Drawing.Size(((int)5), ((int)13))); this.set_ClientSize(new System.Drawing.Size(((int)272), ((int)93))); this.get_Controls().AddRange(new System.Windows.Forms.Control[] {this.button1}); this.set_Name("Form1"); this.set_Text("HelloWorld"); this.ResumeLayout(false); } #endregion // The main entry point for the application. /** @attribute System.STAThreadAttribute() */ public static void main(String[] args) { Application.Run(new Form1()); } private void button1_Click (System.Object sender, System.EventArgs e) { System.Windows.Forms.MessageBox.Show("Hello, World!"); } } Visual J# .NET has been designed to allow developers to move from Visual J++ to J# as painlessly as possible. The documentation states "The only new syntax extensions are the keyword ubyte for consuming unsigned bytes and the @attribute directive, which can be used to attach custom attributes to the generated metadata." Visual J# .NET allows developers to write fully managed .NET applications using the java language syntax, and to move their existing java language applications over to .NET, but is not, one would imagine, the language of choice when developing .NET applications. J# is RAD, is fully managed, and includes CodeDOM support making it suitable for ASP.NET and the designer (unlike the current incarnation of Managed C++) but it lacks support for important .NET features such as properties, value types, delegates and events (unlike MC++ and C#). Even so, with .NET allowing inter-language operability and COM Interop, J# will allow Java developers wishing to move to .NET the ability to retain a good portion of their legacy java language code while moving forward with .NET development. General News Question Answer Joke Rant Admin
http://www.codeproject.com/KB/net-languages/intro_vjsharp.aspx
crawl-002
refinedweb
823
51.55
For a farewell mail to my colleagues at Microsoft (I’ve decided to move on, but will continue contributing to this blog), I was worried I might miss someone. Using the arguable presumption that the “important people” were those who I’d sent mail to, I figured I would write an Outlook macro to go through my Sent Items and pull out all the email addresses. That turned out to be pretty straightforward; here is the code (I don't claim this is the best way to do it, but it does work): Sub GetSentItems() Dim eml As MailItem Dim nsMyNameSpace As NameSpace Dim colSentItems As Items Dim objItem As Object Dim fnum As Long fnum = FreeFile() Open "c:\users\xxxx\documents\recipients.txt" For Output As #fnum Set nsMyNameSpace = Application.GetNamespace("MAPI") Set colSentItems = nsMyNameSpace.GetDefaultFolder(olFolderSentMail).Items For Each objItem In colSentItems For Each objRecip In objItem.Recipients Write #fnum, objRecip.Name & “;” Close #fnum End Sub This creates a text file in my documents folder called recipients.txt (note: substitute your user name for “xxxx” in the path). It then navigates the Outlook namespace to find the Sent Items folder and gets all the items in there (in colSentItems – col for collection). It then iterates that collection, pulling out from each item (which represents a sent mail message) the recipients collection – remember an email may have been sent to more than one person or group. It iterates the recipients and pulls out for each the name. Note that to run this you may have to tweak the Outlook macro security in Tools / Macro / Security… to allow unsigned macros to run (either “warnings for all macros” or the “No security check” (be sure to set this back)). You’ll have to exit Outlook to get the new security setting to take effect. Then run it using Tools / Macro / Macros and it will generate the file. The problem is that you probably have a lot of duplicates since there are some people to whom you send mail frequently. Excel can help out here. Open the recipients.txt file in Excel and use Data / Remove Duplicates to eliminate those. You can then sort the list and go through and pick out any names you don’t want. I used Column B in Excel to put an “X” in the ones I wanted to send mail to, then when I was done sorted by column B, selected all those items and copied to the clipboard. Finally, back in Outlook, create an Outlook mail message and just paste all the names you copied from Excel into the BCC field. Note that there is a “;” at the end of each name to provide a separator for Outlook. It did work as I got your goodbye mail. Take care Mike and best of luck in your new endeavors.
https://blogs.msdn.microsoft.com/mikekelly/2009/12/20/everyone-ive-sent-mail-to/
CC-MAIN-2017-47
refinedweb
473
69.92
I just downloaded PyCharm community edition: PyCharm Community Edition 2016.1.2 Build #PC-145.844, built on April 8, 2016 JRE: 1.8.0_60-b27 x86 JVM: Java HotSpot(TM) Server VM by Oracle Corporation I'm getting started with learning the IDE. I noticed that this simple one-line program: raise Exception causes the word "Exception" to be underlined in a red squiggly. If I hover over it I see the message: Unresolved reference 'Exception' But, this is a built-in exception. What should I do so PyCharm will not flag built-in exceptions as unresolved? Also, I tried to copy the text of the error message to paste here, but I cannot just highlight it and copy the error message. Is that a limitation? P.S. FWIW There seems to be a problem with Python built-ins in general. e.g. import math triggers an Unresolved Reference error.
https://intellij-support.jetbrains.com/hc/en-us/community/posts/207375155-Built-in-exceptions-causing-Unresolved-Reference-errors
CC-MAIN-2020-05
refinedweb
151
67.35
!ATTLIST div activerev CDATA #IMPLIED> <!ATTLIST div nodeid CDATA #IMPLIED> <!ATTLIST a command CDATA #IMPLIED> In a predicament. Ol gameObject 1 and gameObject 2 have been getting together on the weekends and giving me spawns. They dont stop! Now i have too many. What can i do. The buggers are everywhere. In a way i guess i want to abort all future children from being instantiated under an object once i reach a nice number like 24. What script functions should i use to achieve this? Thanks for the help. asked Jun 16 '12 at 07:35 AM alternativee30 0 ● 2 edited Jun 16 '12 at 05:24 AM Instead of using a fairly vague metaphor, could you show us the actual code? It'll be much easier to understand the root cause. What is the intended behaviour? 0 Well thats the reason i gave a vague metaphor. To be a smartass and there is no Code yet. I am wondering if anyone has specific Unity recommended code references to use to achieve this type of effect or perhaps has had experience with this type of setup before. I would be trying to limit the children via a hard cap under my parent so that i could limit the amount of instantiated player prefabs I also was more vague then i meant to be. My bad on that. I am simply trying to set a limit to gameObjects Instantiated under a parent. Thanks for the responses. I will get cracking and post what i used when i get it working. Not sure what language you want the code in, but you basically need to count the number of items that are contained in the parent's transform. import System.Linq ... parentObject.transform.Cast.<Transform>().Count(); Would do it in UnityScript answered Jun 16 '12 at 09:45 AM whydoidoit 32.9k ● 11 ● 23 ● 98 edited Jun 16 '12 at 09:45 AM But you'd probably be better of just having a count variable on the parent and increment it each time you Instantiate a spawn x438 parent x418 child x407 asked: Jun 16 '12 at 07:35 AM Seen: 273 times Last Updated: Jun 16 '12 at 11:17 AM Parenting Transforms, Must (still) Set Parent at 0,0,0 ? how would I "child" an un-rendered object? Give prefab a parent spawn prefabs as children of another game object Problem with child and parent. Hierarchy question How can I get a parent GameObject of gameObject using Javascript? How to use Parent/Child Position of parent-Object How can I find a unique sibling/parent/child in a specific hierarchy and not from all the scene? EnterpriseSocial Q&A
http://answers.unity3d.com/questions/268120/children-overproducing.html?sort=oldest
CC-MAIN-2013-20
refinedweb
450
73.47
Usage Placing the code Place your code on the page and surround it with <pre> tag. Set name attribute to code and class attribute to one of the language aliases you wish to use. <pre name="code" class="c-sharp"> ... some code here ... </pre> NOTE: One important thing to watch out for is opening triangular bracket <. It must be replaced with an HTML equivalent of < in all cases. Failure to do won't break the page, but might break the source code displayed. An alternative to <pre> is to use <textarea> tag. There are no problems with < character in that case. The main problem is that it doesn't look as good as <pre> tag if for some reason JavaScript didn't work (in RSS feed for example). <textarea name="code" class="c#" cols="60" rows="10"> ... some code here ... </textarea> Extended configuration There's a way to pass a few configuration options to the code block. It's done via colon separated arguments. <pre name="code" class="html:collapse"> ... some code here ... </pre> Making it work Finally, to get the whole thing to render properly on the page, you have to add JavaScript to the page. > For optimal result, place this code at the very end of your page. Check HighlightAll for more details about the function. Can't get it to work in any way, shape or form :( Can you get included samples working? Please give more details. This code : dp.SyntaxHighlighter?.ClipboardSwf? = '/flash/clipboard.swf'; dp.SyntaxHighlighter?.HighlightAll('code'); must been in onload function like this : <script language="javascript"> window.onload = function () {} </script> This code : dp.SyntaxHighlighter?.ClipboardSwf? = '/flash/clipboard.swf'; dp.SyntaxHighlighter?.HighlightAll('code'); must been in onload function like this : <script language="javascript"> window.onload = function () {} </script> This code : must been in onload function like this : Awesome piece of code. Worked like a charm the first time. Thanks Alex mark... Awesome piece of code. Worked like a charm the first time. Thanks Alex mark... Very good, but I have some problem to make XHTML 1.1 valid pages. The problem is in <pre> tag: name is not allowed anymore for this tag. And <textarea> needs to be in a form... Any clue? .BoB. Excelent! Works flawlessly! 5 stars! :D My 1st try (took less than 5 minutes): <html> <body> <textarea name="code" class="html" cols="60" rows="10"> BEGIN --- This is the example code: > END --- This is the example code: </textarea> > </body> </html> Nice, some problems with <br /> but nice. i soooooo.... like it! i sooooo... like it i sooooo... like it i sooooo... like it A really great piece of work! Marvelous! Just added it to my blog. I found the last 2 lines to be problematic, especially if the Scripts take too long to download. It kept throwing this 'dp is undefined' error. It looked like the last script fragment namely - used to get initiated even before the other scripts were downloaded. So I moved it to another file called shInit.js and it looks like this - So, instead of the inline JavaScript?? I have this - <script language="javascript" src="js/shInit.js"></script> Well robccsilva's example posted on Aug 29, 2007 worked well, but the following and all other attempts I've tried just fail. Firebug reveals no errors, so I'm not sure what is wrong. Any ideas: <textarea name="code" class="java" cols="45" rows="10"> public class HelloWorld? { } </textarea> Is anyone able to use this on blogger? it looks to me like i need to upload a bunch of (primarily javascript) files but i dont think i can do that with blogger? anyone have any ideas? Where should I write this piece of code in wordpress? > Is it in the header.php in the theme or somewhere else? Please ... It doesn't work,man;-( I swear i did nothing wrong.I think i need a sample or more details. I found that syntax highlighting only worked if these two lines <script class="javascript"> dp.SyntaxHighlighter?.ClipboardSwf? = '/flash/clipboard.swf'; dp.SyntaxHighlighter?.HighlightAll('code'); </script> are placed AFTER the closing </pre> tag How to get syntax highlighting in Blogger with SyntaxHighlighter? <pre name="code" class="php"> $test = "syntax highlighter"; echo $test; </pre> When language is php,"empty" of the function is displayed double. example: emptyempty($example) Hi, why don;t you use <code> element instead? which is better .... your or from ,,, Worked first time like a charm! thanks for this! I have my own language file, and am trying to get the following to work: (this relates to my drumputer scripting language at) SET KEYWORD1=VALUE1;SET KEYWORD2=VALUE2; LET VARx=VALUE LET and SET are keywords (as is KEYWORDx) and get hilighted okay. I want to hilight the VARx part. "VARx" can be any string without spaces or an =. I think I figured it out: This has been added in my "Brush" file's RegexList?: { regex: new RegExp?('^LET\\s+([a-zA-Z\\-\\d\\]+)=.+', 'gmi'), css: 'vars', submatchIndex: 1, disallow: keywords } //Note the "submatchIndex" property and "disallow" //And I redefined the following two items from shCore.js (but left shCore alone): //--------- dp.sh.Highlighter.prototype.ProcessRegexList? = function() {} //--------- dp.sh.Highlighter.prototype.GetMatches? = function(regex, css, submatchIndex, disallowList) { } // I have two special cases that this takes care of; SET INTRO=intropatternname SET TRANSITION=transitionpatternname RAW INTRO=4/4,15,15,15,25,25 RAW TRANSITION=4/4,15,15,15,25,25 In which case the SET INTRO/TRANSITION would have been handled already Anyway, it's been an interesting day working through this. And I didn't have to alter any of the main code (but did have to co-opt it, I guess, and adapt a copy) All the above changes are in the customized Brush file David! This is killer! Worked perfect. Thanks, Steve let f1 and f2 be two input files of unknown length whose records are integers if the records in both files are in ascending order, the files can be merged to produce a third file f3 whose records are likewise in ascending order. the following algorithm shows how this can be done without using arrys or other costly internal data structures funtion merge (file f1, file f2, file f3)while(not eof(f1) or not eof(f2)) end-while end-funtion //it's merge program #include<iostream> using namespace std; const int MAX = 100; int main() {} took me a while to figure this out. In order to get the script working, you have to put the js declaration right before the closing </body> tag. Cool script but the instructions leave much to be desired! one two three This is WICKED! I have one problem though, my escape chars get lost (\). ie. the following code: <?php echo "Carlos said \"loving this script\""; ?> renders as: <?php echo "Carlos said "loving this script""; ?> which will cause a parse error. I suspect the solution is to use a special char or string in place of the \ and dynamically replace it from within the js. I'll post when I find the solution. regarding on Comment by joomlers, Dec 19, 2007 about function empty() does anyone knows how to fix this? coz I also experience it. thanks that worked great...thank you! :) This is unbelievably wonderful, awesome and works perfectly for me, thank you very much. :) in ff - cool, but doesn't work in opera not working. If you're one to pay attention to standards & DOCTYPEs, you're probably cringing at the instruction to add a "name" attribute to your <pre/> elements. Actually, if you look at the source "shCore.js" there's a comment about this which is kinda funny. "// for some reason IE doesn't find <pre/> by name, however it does see them just fine by tag name..." Ironicly enough that's probably got somthing to do with "name" not being a valid attribute of the <pre/> element. IE is actually behaving as it should in this case. "dp.sh.HighlightAll" really should be rewritten so invalid attributes aren't required. I plan to do just that for my implementation of this otherwise awesome script, just wanted to leave this note for any of the scripts developers who happen to read these comments. :) Hello, Excellent work, does it work with tinymce? If so how to configure that? Thanks Great Piece ......!!! I used in my site..... @mr.joebert I agree - I was trying to use this with SharePoint?, which strips the name attribute from the pre tag. I modified HighlightAll to look for a compound class name instead (like "xml code", or "css code"). That said, it still failed to work... To get it works on tinymce Hello, I have written a plugin for tinymce you can find here how to use in mediawiki Hello again, I have upgraded the plug-in to work on tinymce 3.x, you can find it here Hope this helps. klein.stephane said: This code : must been in onload function like this : maybe not necessary, put the code: after your pre or textarea will be fine. I admit that I'm eccentric upfront and I don't like to place script tags outside of the head element when I don't have to. Rather than place the scripts at the bottom of the page, how about placing the defer attribute on the script element? in the <head> I put And at the botom of the pageAnd its not working: I can't get the following to work. I know all the files are referred to correctly, but I did mess w/ the structure to make it simpler. (the CSS file is called syntax-highlighter.css and is in the same folder as this code... all the js's and the swf are in syntax/ which is in the same folder as this code) What am I doing wrong? {{{<html> <head></head> <body></body> </html>}}} Sorry... I meant the js's and swf are in the folder 'scripts'. Javascript should be linked that way: <script type="text/javascript" src="/synhighlighter/shBrushXml.js"></script> In other words the "language" attribute is deprecated and does not replace the "type" attribute. Using the "language" attribute only may lead to un expected behaviour (non-working JS): <script language="javascript" src="/synhighlighter/shBrushXml.js"></script> To use SyntaxHighligter? with SPIP. You can try the plugin : I think this needs a bit of a re-think in order to work with XHTML pages, you're not allowed to use name as that's been replaced by ID, and you can only have one ID which would limit this to one use per page (little use for tutorials). I think it needs to use the class attribute if possible... However, visually it looks great!
http://code.google.com/p/syntaxhighlighter/wiki/Usage
crawl-001
refinedweb
1,788
75.3
. Posted by: emilian on May 24, 2007 at 09:12 AM Posted by: augusto on May 24, 2007 at 09:12 AM Posted by: tompalmer on May 24, 2007 at 09:12 AM Posted by: enicholas on May 24, 2007 at 09:24 AM You seem to have analyzed the problems and then solved them in the best possible way. :) Cheers, Mikael Grev Posted by: mikaelgrev on May 24, 2007 at 09:34 AM public class TrivialClass { public static void main (String[] arrrrImAPirate) { Class.forName ("javax.swing.JButton"); } } Posted by: invalidname on May 24, 2007 at 10:26 AM Posted by: enicholas on May 24, 2007 at 10:35 AM Posted by: tompalmer on May 24, 2007 at 11:10 AM Posted by: tompalmer on May 24, 2007 at 11:13 AM Posted by: jdevp2 on May 24, 2007 at 01:20 PM It's a shame about the lack of web start support as I don't really see asking the users to mess around on the command prompt very consumer oriented. I don't see why web start couldn't download the needed bundles while downloading its usual resources listed in the JNLP - except I guess they'll be loaded by the wrong classloader and so might create some interesting security implications. How would installations work in the meantime? do you envisage developers creating native .exe installers? batch files?. Also wondered if the bundle creator by example would in practice be sufficient as not all clients are created equal. Some might be using XP l&f, Classic or be using Vista, some will have 3d cards some won't, some might not even be using Windows (shock horror ;). It looks like you'd have to add a fair bit of expert guesswork to the results to create a bunch of targeted platform bundles or else run that tool with every conceivable platform / theme / laf (or avoid giving users any choices)? Or would Swing just come as one or two big lumps all locales, themes, plafs, resources and accessibility apis etc.. included regardless of actual end usage? I take it the kernel is showing the progress dialogs as most requests for classes that might lead to downloads will happen on the EDT which'll be too late.. apps frozen. Are the sudden unexpected delays something developers might be expected to code for? what happens if I've just exceeded my monthly broadband cap etc..? Sorry you've lost all your data? (well not even that). I also take it the jre and kernel versions won't co-exist on the same machine fully? I mean who gets called when I double click a jar or run a JNLP file? might be a pain for developers testing deployments having to uninstall/reinstall the jre and/or purge kernel caches all the time just to check their apps out (as the end users might experience them). Some dev utils might be nice if I'm guessing correctly. Posted by: osbald on May 24, 2007 at 03:07 PM I'm a little skeptical about this downloading-in-the-background idea. It sounds like little more than a warmed over, more granular version of the current network install, without a way to predict exactly when the network activity is going to occur and when the whole process is going to be finished. Technically interesting, but it sounds fragile. Instead of treating the initial stages when you only have a partial jre installed as a transient state that you'll grow out of once everything is downloaded, I think it would be more interesting to have permanent partial or subset jres. What I mean is, say I'm distributing a commercial Java program. I have to distribute a 'private' jre with it, to make sure the user gets the right one that I tested with. I'd like to distribute only the bare minimum bits that my application will ever need (e.g., just the 'Foundation' level), and I don't want it to download anything else or pop up any progress bars automatically without my control. And are those going to be Swing-based progress dialogs? What if my app doesn't use Swing, say it's a SWT program or it's purely a console/command line program or daemon? Posted by: eburnette on May 24, 2007 at 07:35 PM Posted by: lelliot on May 24, 2007 at 07:36 PM Posted by: augusto on May 24, 2007 at 08:15 PM Posted by: vprise on May 25, 2007 at 01:53 AM Posted by: bharathch on May 25, 2007 at 02:36 AM Posted by: rabbe on May 25, 2007 at 05:10 AM Posted by: lowecg on May 25, 2007 at 06:17 AM Posted by: nopjn on May 25, 2007 at 09:51 AM Posted by: fullung on May 25, 2007 at 10:34 AM Posted by: atehrani on May 25, 2007 at 11:46 AM Posted by: cowwoc on May 25, 2007 at 05:40 PM There is quite a lot of effort in the Java community these days to resolve long standing issues but I am often unsure whether they are coordinated all together (e.g. bean-bindings and new bean-property syntax). Would these 'download bundles' be properly aligned with module system (JSR 277) and superpackages (JSR 294 Posted by: vtec on May 26, 2007 at 01:27 AM Posted by: emilmont on May 26, 2007 at 03:43 AM Posted by: andrewcross on May 27, 2007 at 02:23 AM Posted by: ivanooi on May 27, 2007 at 06:11 PM Posted by: coffeejolts on May 31, 2007 at 06:58 AM Posted by: evanx on July 02, 2007 at 09:53 AM You said, "All of the disparate bundles will be repackaged into a unified rt.jar file." Is the rt.jar per application or per kernel JRE? The former would mean a smaller footprint per application, whereas the latter may grow larger as more programs are run. Personally, I'd be thrilled if someone could extend the concepts of "bundles" to the running JVM (rather than just the environment). So that a commandline program that needs java.util, java.io, and java.net can load those parts, and only those parts of the rt.jar. (Those plus java.lang and whatever com.sun.* classes needed to make it run only come to about 8-10MB.) Right now, running anything with 1.6 loads a whopping 44MB rt.jar. Most of which (34-38MB in my example) goes unused. With each release rt.jar has grown by about 9MB. (1.4.2 to 1.5.0: 11MB, 1.5.0 to 1.60: 7MB) Can you imagine trying to running linux/windows where support for almost everything you might possibly ever need is stuffed into a single sharedobject/dll? Posted by: darkling on July 12, 2007 at 09:58 AM Posted by: fredruopp on September 04, 2007 at 05:31 PM Posted by: tmilard on October 02, 2007 at 08:10 AM Posted by: mgiacomi on January 06, 2008 at 01:29 AM > java -verbose -jar MyProgram.jar > class_list.txt > jkernel -create custom_bundle.zip -classes class_list.txt > jkernel -install custom_bundle.zip Posted by: wannachan on April 21, 2008 at 05:44 AM Posted by: sonichui on May 08, 2008 at 06:39 AM
http://weblogs.java.net/blog/enicholas/archive/2007/05/java_kernel_unm.html
crawl-001
refinedweb
1,231
68.2
Distributed: Shard Key¶ The collection uses the { datacenter : 1, userid : 1 } compound index as the shard key. The datacenter field in each document allows for creating a tag range on each distinct datacenter value. Without the datacenter field, it would not be possible to associate a document with a specific datacenter. The userid field provides a high cardinality and low frequency component to the shard key relative to datacenter. See Choosing a Shard Key for more general instructions on selecting a shard key.. This application requires one tag per datacenter. Each shard has one tag assigned to it based on the datacenter containing the majority of its replica set members. There are two tag ranges, one for each datacenter. alfaDatacenter Tag shards with a majority of members on this datacenter as alfa. Create a tag range with: - a lower bound of { "datacenter" : "alfa", "userid" : MinKey }, - an upper bound of { "datacenter" : "alfa", "userid" : MaxKey }, and - the tag alfa bravoDatacenter Tag shards with a majority of members on this datacenter as bravo. Create a tag range with: - a lower bound of { "datacenter" : "bravo", "userid" : MinKey }, - an upper bound of { "datacenter" : "bravo", "userid" : MaxKey }, and - the tag bravo Based on the configured tags and tag ranges, mongos routes documents with datacenter : alfa to the alfa datacenter, and documents with datacenter : bravo to the bravo datacenter. Configure Shard Tags¶ You must be connected to a mongos associated with the target sharded cluster in order to proceed. You cannot create tags by connecting directly to a shard replica set member. Tag each shard.¶ Tag each shard in the alfa data center with the alfa tag. Tag each shard in the bravo data center with the bravo tag. You can review the tags assigned to any given shard by running sh.status(). Define ranges for each tag.¶ Define the range for the alfa database and associate it to the alfa tag using the sh.addTagRange() method. This method requires: - The full namespace of the target collection. - The inclusive lower bound of the range. - The exclusive upper bound of the range. - The name of the tag. Define the range for the bravo database and associate it to the bravo tag using the sh.addTagRange() method. This method requires: - The full namespace of the target collection. - The inclusive lower bound of the range. - The exclusive upper bound of the range. - The name of the tag. The MinKey and MaxKey values are reserved special values for comparisons. MinKey always compares as less than every other possible value, while MaxKey always compares as greater than every other possible value. The configured ranges capture every user for each datacenter. Review the changes.¶ The next time the balancer runs, it splits and migrates chunks across the shards respecting the tag ranges and tags. Once balancing finishes, the shards tagged as alfa should only contain documents with datacenter : alfa, while shards tagged as bravo should only contain documents with datacenter : bravo. You can review the chunk distribution by running sh.status(). Resolve Write Failure¶ When the application's default datacenter is down or inaccessible, the application changes the datacenter field to the other datacenter. For example, the application attempts to write the following document to the alfa datacenter by default: If the application receives an error on attempted write, or if the write acknowledgement takes too long, the application logs the datacenter as unavailable and alters the datacenter field to point to the bravo datacenter. The application periodically checks the alfa datacenter for connectivity. If the datacenter is reachable again, the application can resume normal writes.. The results show that the document with message_id of 329620 has been inserted into MongoDB twice, probably as a result of a delayed write acknowledgement.: Using getTimestamp() on the document with ObjectId("56f08c457fe58b2e96f595fb") returns:
https://docs.mongodb.com/v5.0/tutorial/sharding-high-availability-writes/
CC-MAIN-2021-49
refinedweb
627
56.15
Allows placement of code within a XAML page, which is to be compiled by any XAML processor implementation that compiles XAML as opposed to interpreting it. <object> <x:Code> // code instructions, usually enclosed by CDATA... </x:Code> </object> x:Class Attribute must also be provided on the parent element shown as object in the syntax, and that element must be the root element in a page. The x:Code directive element must be an immediate child element of the object root element. The code within the x:Code XAML directive element is still interpreted within the XML namespaces provided. Therefore, it is usually necessary to also enclose the code within x:Code inside a CDATA segment. x:Code is not permitted for all possible deployment mechanisms of a XAML file. Code for WPF must still be compiled, it is not interpreted or used just-in-time. For instance, x:Code is not permitted within any XML Paper Specification (XPS) document, or loose XAML. The correct language compiler to use for x:Code content is determined by settings and targets of the containing project that is used to compile the application. Code declared within x:Code has several notable limitations. (that is legal, but uncommon, because nested classes cannot be referenced in XAML). Other CLR namespaces beyond the namespace being used for the existing partial class cannot be defined or added to. References to code entities outside of the partial class CLR namespace must all be fully qualified. If members being declared are overrides to the partial class overridable members, this must be specified with the language-specific override keyword. If members conflict with members of the partial class created out of the XAML page, in such a way that the compiler reports it, the XAML file will fail to be loaded or compiled.
http://msdn.microsoft.com/en-us/library/ms750494.aspx
crawl-002
refinedweb
303
53.1
Python impressionsJune 6th, 2008 at 11:42 am Introduction ‘>. Related posts: June 6th, 2008 at 15:29 I prefer “if not foo:” instead of “unless foo:”. Less cluttered grammar. June 6th, 2008 at 15:46 K: I love how LESS grammer seems MORE cluttered to you Plus, it has uses outside of your simple example: if foo != 1 and blah != True: can become: unless foo == 1 and blah: much sexier. June 6th, 2008 at 15:56 I agree about the indent. I am a ruby guy, I never understood why indent was used against python. I agree that the indent should be in space rather than tabs, and I also agree that I think a REAL programming language should not care about indent BUT on the other hand I think the whole indent issue was blown out of proportion. What I however dont like about python is the implicit self. I can’t stand it. The “foo:” part is also annoying, I dont like to use : Using : there feels like using ; in perl and I think both python and ruby have got away with perl’s legacy of becoming unmaintanable. The CPAN thing is overrated too because I have came to realize that a lot of the cpan modules are hugely outdated. But since cpan keeps on coming again and again, i think it would be cool to use something that ALL THE THREE could use. And maybe include php. So that the scripting languages could all use a library that solves a specific issue, without the needed reimplementation in each of these language. I feel a lot of the repetitive task is assigned to this, and that wastes man hours… June 6th, 2008 at 16:04 For the single item tuple syntax complaint, are you suggesting that parens shouldn’t be used for tuples and only be used as a way to group expressions? June 6th, 2008 at 16:27 These are cool: [x for x in list] I don’t know what they are called, but they rock. Sets are also useful in all sorts of ways - it’s like having SQL inside the language that works on your data monk.e.boy June 6th, 2008 at 16:40 K: I find unless to be more readable than if not Damien: perhaps, yes. Tuples are a special syntax anyway, so why reuse the poor parens that are used for function calls and expression grouping ? Dict initialization got its syntax, so why not tuples ? Maybe tuples could also use braces. monk.e.boy: These are list comprehensions. The best thing in them is they’re very fast unlike high-order features in other languages. June 6th, 2008 at 16:40 Nice overview. I ended up writing a very similar post after a few weeks using Python. I did have some observations though: I believe that Python threads are a little bit broken.. I’m not sure how much of problem it really is in real world programs. The syntax of join makes sense when you consider that the list you are joining shouldn’t be making assumptions about return type. There are a few different string types, and it could get messy casting your return value to the one you want. Also, private functions are not really private. The double underscore just a hint to the compiler to munge the name, e.g. MyClass.__func to _MyClass__func. I find that a little bit annoying, but in the scheme of things it probably doesn’t matter. June 6th, 2008 at 17:03 the bit about parens is that a single-item tuple has to have a trailing comma so that there’s not confusion with unnecessary grouping parens: (1) == the number 1, but with extraneous parens (1,) == A tuple containing one element, which is the number 1 Luckily, most places where you pass a tuple, all you need is something iterable, and [1] is a list containing only the element one. Oh, and if you like list comps, you gotsta love generator comps: x = min(item.count() for item in longlist if item.relevantTo(whatever)) June 6th, 2008 at 17:16 I totally agree with you on the deprecated libraries. There are so many modules that often you search for what you need in google and start using what you find…only to find later that “library” is old and there is a “library2″ out. #python on freenode is a great resource though to find this stuff out. June 6th, 2008 at 17:30 @Markus: “The CPAN thing is overrated too because I have came to realize that a lot of the cpan modules are hugely outdated.” I saw a presentation on “is perl dead?” or something to that effect, and they showed a chart of CPAN contributions. The rate at which CPAN contributions are made has risen steadily every year, meaning more modules are from 2007 than any other year except maybe 2008 right now. So your argument is wrong. June 6th, 2008 at 18:06 Which languages have you mainly been using before? June 6th, 2008 at 18:09 “len() is a function and not a method of string. In general, Python is less consistent in this respect than, say, Ruby. Some operations are functions (len, dir), some built-in statements (del) and some object methods (reverse, append, etc.)” As a general rule, functions like len() will not modify their arguments, while methods may. Also, len(obj) is simply a wrapper around obj.__len__(). del is a statement because it can’t be implemented as a function, without using horrible stack analysis hacks. As it is special in this way, it should not look like just another function. ———————— “string’s join method is OK, but why not also add a join method to a list. It is more elegant to write [1, 2, 3].join(’,') than to write ‘, ‘.join[1, 2, 3]” The original rationale given is that there are only three string classes, but there are dozens or hundreds of iterable classes (not to mention user-defined). It would be annoying to force all iterable types to implement a relatively special-purpose method like join(). However, I have hope that this will be solved in Python 3 with the use of abstract base classes. ———————— “The syntax for a single-item tuple is a parsing-imposed ugliness, akin to the need to separate the closing ‘>’s in C++ nested templates.” The tuple operator is not parens, but comma. 1, is a tuple. So is 1, 2,. The ugly special case is the empty tuple, not the 1-tuple. ———————— “The unless keyword is sorely missing” The last thing we need is extra special cases in “if” statements. There are already three or four forms of the damn things. June 6th, 2008 at 18:21 Josh: What do you refer to when you say threads are broken ? GIL and multiprocessors, or something additional ? troels: I’ve added an update with a short background paragraph to provide the context. Name: how do you write a 1-tuple ? isn’t that ugly ? Whatever works for if, can work for unless too. It will make code more readable. Can you elaborate on abstract classes that will solve the ‘join’ issue ? June 6th, 2008 at 19:40 Some of the legacy and code strewn about is addressed in Python 3000 and the standard library cleanup. June 6th, 2008 at 19:55 If you use MS Windows (and let 30% of your RAM is wasted in antivirus cycles) obviously you don’t have any authority to write about good technologies. June 6th, 2008 at 20:34 I think having ‘join’ in the str class actually makes sense. This way it works on all sequences (everything which has a __iter__ method) and you don’t have to reimplement in in lists, dicts, sets, custom collections, … June 6th, 2008 at 21:05 Interesting that roughly half of what you like is the language itself, and half the community and whatnot, whereas all of what you don’t like is the language. Not bad or good, just interesting. June 6th, 2008 at 21:25 מי מכריח אותך לעבוד בוינדוס ? תעבוד בלינוקס. פייתון מותקן בברירת מחדל ברוב הפצות הלינוקס. June 6th, 2008 at 21:51 Private methods are like curly brackets, getters setters and other OO purist cruft. You soon realize they add little but more typing. Other than __special__ methods I’ve not used private methods in a very very long time. I almost never see them in mature Python code. Don’t use them. June 6th, 2008 at 22:27 “len is a function and not a method”… Sorry to introduce on this Python discussion, but I would just like to point out that C# 3.0 have this attractive feature - it allows one to add methods to existing classes. Not happy with the the List class doesn’t have join()? Write string Join(List this, string seperator) { return seperator.Join(this); } Back to watching Babylon 5 now, and again sorry if it’s too off-topic. June 6th, 2008 at 22:37 “Name: how do you write a 1-tuple ? isn’t that ugly ? Whatever works for if, can work for unless too. It will make code more readable. Can you elaborate on abstract classes that will solve the ‘join’ issue ?” I write all tuples wrapped in parens, because it makes it easier for the human reader to distinguish them from function parameters. I also use trailing commas, because in multi-line tuples trailing commas reduce noise in diffs. However, the computer does not care; it would be equally as happy with any style: a = 1, a = (1,) b = 1, 2, 3 b = 1, 2, 3, b = (1, 2, 3,) ———————— Abstract base classes can provide default implementations of methods. For example, the sequence ABC can simply define a method join(): def join (self, sep): return sep.join (self) and let [1, 2, 3].join (’,') work the same as ‘,’.join ([1, 2, 3]). June 6th, 2008 at 22:49 Yoav: are you working ? are they using Linux at work ? If so, lucky you. ripper: yeah, actually Ruby has this feature too. I think it’s a bit confusing, because when you see code you thought you know its workings, it may surprise you because someone redefined the way some built-in class works. njharman: private methods have their uses, in Python too. And I see a lot of Python code (stdlib included) with them. June 7th, 2008 at 23:05 This does not work: >>> ‘, ‘.join([1, 2, 3]) Traceback (most recent call last): File “”, line 1, in TypeError: sequence item 0: expected string, int found This works: >>> ‘, ‘.join( ['%d' % num for num in [1, 2, 3] ] ) ‘1, 2, 3′ In tha latter case you’re joining a list of string elements that have been created using a list comprehension thus the join works. Regards, Antonio Lima - Peru June 16th, 2008 at 03:39 ‘, ‘.join(map(str, [1, 2, 3])) is cuter still December 13th, 2008 at 15:39
http://eli.thegreenplace.net/2008/06/06/python-impressions/
crawl-002
refinedweb
1,841
81.53
Scatter plots with a legend¶ To create a scatter plot with a legend one may use a loop and create one scatter plot per item to appear in the legend and set the label accordingly. The following also demonstrates how transparency of the markers can be adjusted by giving alpha a value between 0 and 1. import numpy as np np.random.seed(19680801) import matplotlib.pyplot as plt fig, ax = plt.subplots() for color in ['tab:blue', 'tab:orange', 'tab:green']: n = 750 x, y = np.random.rand(2, n) scale = 200.0 * np.random.rand(n) ax.scatter(x, y, c=color, s=scale, label=color, alpha=0.3, edgecolors='none') ax.legend() ax.grid(True) plt.show() Automated legend creation¶ Another option for creating a legend for a scatter is to use the PathCollection.legend_elements method. It will automatically try to determine a useful number of legend entries to be shown and return a tuple of handles and labels. Those can be passed to the call to legend. N = 45 x, y = np.random.rand(2, N) c = np.random.randint(1, 5, size=N) s = np.random.randint(10, 220, size=N) fig, ax = plt.subplots() scatter = ax.scatter(x, y, c=c, s=s) # produce a legend with the unique colors from the scatter legend1 = ax.legend(*scatter.legend_elements(), loc="lower left", title="Classes") ax.add_artist(legend1) # produce a legend with a cross section of sizes from the scatter handles, labels = scatter.legend_elements(prop="sizes", alpha=0.6) legend2 = ax.legend(handles, labels, loc="upper right", title="Sizes") plt.show() Further arguments to the PathCollection.legend_elements method can be used to steer how many legend entries are to be created and how they should be labeled. The following shows how to use some of them. volume = np.random.rayleigh(27, size=40) amount = np.random.poisson(10, size=40) ranking = np.random.normal(size=40) price = np.random.uniform(1, 10, size=40) fig, ax = plt.subplots() # Because the price is much too small when being provided as size for ``s``, # we normalize it to some useful point sizes, s=0.3*(price*3)**2 scatter = ax.scatter(volume, amount, c=ranking, s=0.3*(price*3)**2, vmin=-3, vmax=3, cmap="Spectral") # Produce a legend for the ranking (colors). Even though there are 40 different # rankings, we only want to show 5 of them in the legend. legend1 = ax.legend(*scatter.legend_elements(num=5), loc="upper left", title="Ranking") ax.add_artist(legend1) # Produce a legend for the price (sizes). Because we want to show the prices # in dollars, we use the *func* argument to supply the inverse of the function # used to calculate the sizes from above. The *fmt* ensures to show the price # in dollars. Note how we target at 5 elements here, but obtain only 4 in the # created legend due to the automatic round prices that are chosen for us. kw = dict(prop="sizes", num=5, color=scatter.cmap(0.7), fmt="$ {x:.2f}", func=lambda s: np.sqrt(s/.3)/3) legend2 = ax.legend(*scatter.legend_elements(**kw), loc="lower right", title="Price") plt.show() References¶ The usage of the following functions and methods is shown in this example: Out: <function PathCollection.legend_elements at 0x7fcc1690b3a0> Total running time of the script: ( 0 minutes 1.584 seconds) Keywords: matplotlib code example, codex, python plot, pyplot Gallery generated by Sphinx-Gallery
https://matplotlib.org/3.3.4/gallery/lines_bars_and_markers/scatter_with_legend.html
CC-MAIN-2022-21
refinedweb
569
53.47
. I know there was work on new Perl bindings, has anything happened with those? I'll second that - I know quite a few perl developers who aren't going to suddenly take up C#, python etc just because perl isn't flavour of the month at the moment. Come on kde folks - kde is so cool, its a crime not to have bindings for as many languages as possible :) OK I'll reply to myself. Yeah, I know its Trolltech I should be ranting at since these are QT bindings :) In fact, the KDE bindings are built with a modified kdoc, as kdoc is already able to parse the C++ code and build a tree of it, it would have been stupid to rewrite this functionnality from scratch. This modified version of kdoc is called Kalyptus. With this tools, KDE developers (such as David Faure) were able to generate Python, Perl, Ruby, C#, C, Java, ... bindings for KDE and Qt. Just go on... , to see that I am right. As you've read it above, there are already Perl bindings for KDE and Qt, just go on or. I hope I solved your problems. Have a nice day, all ! -- "As a computing professional, I believe it would be unethical for me to advise, recommend, or support the use (save possibly for personal amusement) of any product that is or depends on any Microsoft product." Good to see that something is there, but the most recent revision I can find on CPAN is PerlQt-2.105.tar.gz dated 7th March 2000. KDE/QT3 had been out for some time now but the KDE developer page states: "Getting your perl scripts to work with KDE. This page is not done yet, but there are Qt-2 and KDE-2 perl bindings on CPAN. There is work going on to update those to Qt3." Looks to me like perl 6 will be here before up to date QT bindings - and we all know how long that's going to take :) Anyway, thanks for the reply. QtPerl is ready. It's fully working and has a complete ui compiler for Designer files. We are now in the process of writing documentation, furbishing astounding screenshots, etc... It should be released next week. Cheers, Germain Wonderful! I retract all my posts on this subject - you never know, might even be able to give something to KDE (perl based, of course) in the future instead of just take :) Thanks for the good news, Rich >you never know, might even be able to give something to KDE (perl based, of >course) in the future instead of just take :) Well, the initial release will only provide bindings for Qt... KDE is the next step ! (not too far, hopefully) Also -I'm really nuts- it is not QtPerl (though I like this name better) but indeed PerlQt, as it follows the naming of previous bindings for Qt 2. G. > Well, the initial release will only provide bindings for Qt... KDE is the next step ! (not too far, hopefully) Damn - never mind though, something's way better than nothing! I played around with perl wxWindows/GTK a bit but never really got into it - definitely prefer KDE/QT, though I appreciate the cool stuff the Gnome folks are doing with Gnome2. >Also -I'm really nuts- it is not QtPerl Can't say I noticed ;-> It would be nice if they put a link to the Qt# website as they have done with gtk# don't you think? Yes, they really should do this. Nobody talks about Qt over there though. To be fair, Go-Mono.org does have a link to Qt# on there resources page. Gtk# is there prefered GUI toolkit which should be of no surprise :-) I've found Miguel and the Ximian/Mono team to be nothing but entirely gracious and helpful. Adam Not even a news item on their site? Go-Mono.org does have a Qt#-comment on the first page as well. Of course, it's not a surprise but must feel a little bit strange for the Qt#-team. Ximian is cool. No... no... nononono. Ximian and Miguel are evil. They are not followers of the holy cause of TrollTech everywhere (tm). Ximian must die - they just use VC money to copy TheKompany and smash the benevolent Godhead of TrollTech. Shawn Gordon tells me so. You must not post nice comments about Ximian/Miguel here, it is heresy and a banning offence. Consider yourself warned. What does it mean? Will next MS Office releases (based on C#) integrate with KDE???? That would be great! Anhybody could explain a little bit what does Qt C# bindongs mean? There is no way the next MS Office releases will be based on C#. Just not gonna happen. The next version of Office does not run under the CLR, it does however uses .NET Web Services. So in other words, it woulnd't run on Mono. It would still be a Win32 application. etc. etc. On e should be able to popup a KDevelop QtDesigner window and drag and drop a Python <---> DCOP GUI clicky controller thingy application of some sort inside 5 minutes eg: Boss: Need custom app with cool widgets for querying 4 different databases dumping into spreadsheet and doing fancy printing/faxing/PDF conversion of the resulting charts. Need it tomorrow, slave!! (boss goes off on other rant ...) Developer: [flips open laptop in boss's office opens KDevelop/QtDesigner] [Developer drag and drops some Qt<-->Python<-->DCOP querying app together with DB access widgets (built-in to Qt no??), another 3 buttons to export to KSpread via DCOP, another button to use DCOP and scripted KSpread and CUPS (which rules to world) with choices [] Create PDF Print ===== [] HQ Printer (vai IPP) [] Mail room printer [] Legal Dept. printer [] View streaming video from gym change rooms: M:[] F:[] ....] Boss: ... [finishing rant] and furthermore we will convert all DBA's KDE workstations to XP!!! Developer: Oh ... I just finished developing the application you asked for. Boss: Wah??!! Lemme see that you genius!! When they give it up and support Python. I'll second that. My dream is that someday I'll be able to write QT apps in PHP. I'm learning C++, but PHP is soooooooo much nicer and easier than C++ that it would be more than welcome to being able to use QT with it. Currently Gtk works with PHP/CGI, but I'm a QT fan, so I hope it will someday work with it :) PHP is not easier. PHP seems easier while you learn it. PHP will be considerably harder once you start doing real world stuff. Hummm, I tought managing databases was a real world (tm) thing. PHP is really easier than C++ and can do things that C++ code do as well as it (but losses on speed, sure). Besides, I don't said I was going to build windows or games with it. PHP is good for writting some small apps as moo/mud clients, d20 player generators, etc. And all those things are real world man :) PHP is simple. Simple != easy. Now, python, on the other hand, is simple AND easy ;-) PS: if it is not to build "windows", hat is the point of PHP in a qt related thread? I metan m$ windows ;) Well, in my case I think PHP is simple and easy as you think about Phyton that I think isn't easy....so each person have it's prefereed language and let's not start a language war here :) Python *is* actually easier to learn that PHP. The syntax is easier, data handling (lists, maps etc) are easier to use etc. Nothing wrong with PHP, but it is harder to learn, especially for a newbie. Just think of the weird semicolons ";" you have to sprinkle all over, but not on every line. PHP means "PHP Hypertext Preprocessor". Since when is a hypertext preprocessor considered a full programming language? Can't a language evolves besides it's original name? Did you already see what python and perl means? Tehir names means nothing acctually, where made just to create a regular word with each letter. This means they are bad? Sure not! C means just C, the languace after A and B. C is crappy because it's name? Should it be: GLTMEOA (Great Language That Makes Everthing, Or Almost)? SURE NOT! No need for it. Now, please stop just talking bad of PHP. Seems like your guys think that if someone says PHP is good, he means "all other languages are crappy". :( No, but that preprocessor part in a name tells you what was the purpose when it was designed and how it could likely effect the design of the language. Personally, I think PHP is (just) C with better string handling. If that's good or bad depends on person. I'm not talking about it's name, I'm talking about it's function! It's function is exactly what the name says: to preprocess things. I have a hard time believing that a preprocessor can be used to create GUIs. I mean: believing that a preprocessor is an effective tool to create GUIs. You have no idea what PHP is. Look into the Zend VM and the PHP compiler. Yes, PHP 1.0 started out as a perl script and a small C program. Since 4, it's a powerful development environment that allows the creation of command line, ncurses and gtk applications, and has a wide library of loadable binary modules that can also be accessed via a CPAN like mechanism (called PEAR). And if you doubt that it could have turned into a nice language from such simple beginnings, consider that C++ started out as a header file with a bunch of C Preprocessor directives. -- Evan C++ did NOT start like that. Yes, some C++ compilers are "preprocessors" that generate C code. No, it is not the usual C preprocessor. Funny, back in 1983 when Bjarne Stroustrup created CFRONT, it started out as a simple set of header files. By the time he was writing about it in Dr. Dobbs, Byte or whatever I first heard about it back when, it had a simple namespace mangler, but in introducing the new variant of C in his articles, he would say that it started out as some neat CPP hacks (I think he also mentioned its CPP heritage in the CUJ article when they were looking to finalize the language spec, or right afterwards). It wasn't until 1990 or thereabouts that "real" C++ compilers appeared. I know - I was an avid C user (to the ludicrious level of writing my own tiny C in asm) throughout the 80s, and followed all the variants and compilers very very closely. Be that as it may, C++ is not the issue here. PHP is. PHP would make a fine candidate for Qt and KDE bindings. My MP3 organizer is written in PHP, and currently uses curses. While I personally prefer Ruby or C, PHP is not a terrible language in terms of being a primarily procedural language with a slight smattering of OOP concepts. It also happens to be the universal data juncture tool, even better than perl, and with a syntax saltier than perl's syntax (a Good Thing, IMO). PHP has a very detailed and flexible library system and a CPAN like system called PEAR. No reason not to do it, and it may encourage KDE development, which is a good thing, last I checked. No KDE bindings for PHP, hence it is EVIL. Once bindings are written it becomes blessed and holy. Roberta knows the truth. Whoa. Mispelling a name so it has a different gender. What a pinnacle of wit. I am php programmer and I would thank for qt and kde bindings of php, I have a idea (might be silly) there is wrapper SMOKE on qt and kde where classes and functions and other stuff can be called from it I think a perl bindings is based on it and it is working. I hope I helped in this discussion. troby. Python's name is not an acronym. Python is called python. Not P.Y.T.H.O.N. Python means python. It is a hommage to monty python. Ok, I tought I said, not let's start a war. :( I think Python is trash, junk, sh*t. Does this removes any merits of it or will stops you liking it? NOOOOOOOOOO! So why instead of saying PHP is bad MY language is better don't you people just keep quiet? If someone makes PHP bindings for QT will you people being forced to use it instead of Perl or Python? SURE NOT! C'mon! Relax. When someone tells you python is good, if you ask him, he CAN provide examples of software developed using it, and explain why the language has helped him being effective in the project. Python advocates can provide backup for their claims. PHP advocates, on the other hand, write "sh*t" in their responses. Go, take some linden, relax. >PHP advocates, on the other hand, write "sh*t" in their responses. Whereas you write "sh*t" instead. Big difference zealot boi. Funny that you call me zealot, KDE Zealot. Does that mean we are relatives? All I meant by what you quote is that the previous poster simply called python "sh*t" without giving any argument whatsoever. On python´s behalf, I can point out coherent syntax, simple to use object orientation, rich class library, readability, a simple extension and embedding mechanism, good Qt bindings, and JYTHON (python in the JVM). In all those aspects, I think Python is a better language than PHP for real world usage. Now, anyone can call this "sh*t". But it is going to take a whole lot more than a silly dot post to make that charge stick. That is "wiggle" again. Actually, it's not. Is that why you banned all Freeserve users from posting again, beause someone posted three sarcastic messages lampooning the zealot nutcases who inhabit your site? Just how stupid are you, anyway? Clue: If I wanted to play this game, I could rotate through a couple of hundreds proxies, and a dozen or so free ISPs. In fact, with a dumbass like you in charge, I could probably get most of the UK internet population banned from this site with a little effort. There are just some people who haven't the smarts or the temper to run a forum site, and you are the perfect example. Think yourself lucky that this is little more than a five minute time-wasting exercise for me. Ah, well, while we're at pluggin languages : Ruby. Ruby is cool. I am too entrenched in what I know to switch, but should I ever feel a need to learn a new language, Ruby would be a big candidate :-) If you know C++, Perl or Python, learning Ruby is a matter of a few hours. I don't think I've ever used a language which was such a delight to program with. Well, python does take 2 hours to learn, too. It takes a little loner to learn to use it effectively, of course. But ok, you convinced me, there goes the third sunday of August! Then there's Rebol ;-)
http://dot.kde.org/comment/48727
CC-MAIN-2014-10
refinedweb
2,577
82.04
Closed Bug 421611 Opened 14 years ago Closed 13 years ago Need to be able to run tests on arbitrary build Categories (Firefox Build System :: General, defect, P3) Tracking (Not tracked) mozilla1.9.2a1 People (Reporter: shaver, Assigned: ted) References Details (Keywords: fixed1.9.1) Attachments (4 files, 8 obsolete files) Case 1: We have builds that take a long time, like our PGO ones, and we want to test them fully. This means that we should be doing those builds once, and testing them for correctness in parallel and on other machines, as we do for perf. Case 2: We want to use the trunk test suite to help track down regressions on branch, or compare to other builds. Being able to do just a test-export pass on a source tree pulled with the same timestamp would let the unit boxes set up their bits and then drive a centrally-produced build through its paces. This probably has some deps for specific enable-tests cases decouplings, if they affect perf in some way we're not happy with for our perf testing. Hope not, though! Flags: blocking1.9? I'm not sure if I'm the right person to set blocking on this, and for now will refrain from potentially overstepping my bounds, but I think this is pretty important in terms of being able to properly test the builds that we're shipping to users. I was a little surprised to learn that we don't actually do that with the RCs we spin up for various shipping milestones. I would really hope we could get this for Fx3b5, and almost insist that we have it for Fx3rc1. Priority: -- → P2 This came up from discussing why we don't do unittests and perf tests for releases. What? We're not testing builds we're shipping to users? (Comment #1). Please explain. :-/ At minimum, we should send release builds through Talos, and trigger unittest builders from the release tag. These can be done in parallel to the existing QA activities. OS: Windows Vista → All Hardware: PC → All We have five kinds of testcases: * mochitest/mochichrome: we should definitely be able to run these on an arbitrary build; ted had an extension to do this, IIRC * reftest: we already have the capability to run these on a packaged build, but should automate the capability * crashtests: I don't know much about these... Jesse, are these something we can run on a packaged build? * xpcshell tests: this is a mixed bag. You can't run these on a packaged build right now because the packages don't contain xpcshell. With that fixed, you might be able to get some useful results... more of a long-pole, though * random custom 'make check' tests: most of these are run on test-binaries that aren't in a packaged build or rely on data which you would have to rebuild to be useful... not worth trying to create a new type of package to test these, IMO Crashtests use the same code as reftests, so if reftests work, crashtests should work. I have a reftest extension, that's easy enough (and that gets you crashtest too). Mochitest/mochichrome/browser tests all are easy to run on an arbitrary build. The xpcshell tests are in fact the hard part, but just packaging xpcshell would probably fix that. Yea - if we can get this done it will save us a ton of hassle and time as the unit test machines will not have to be kept up to date with build config changes on the build machines. Not to mention cycle time benefits. I think the talos arrangement has generally worked great. So b+.. Flags: blocking1.9? → blocking1.9+ Schrep, we are not going to be able to do this for the arbitrary "make check" tests, nor IMO should we try. So I think we're still going to need build+makecheck machines somewhere. Perhaps we can have the current build+debug+leakcheck machines do that work? We should consider getting the unit test machines to check out the same mozconfig's as the nightly/dep builders do. There's really no reason for them to get out of sync. yeah, that's a no-brainer, though again make check causes problems there with the addition of --enable-tests. I'll sync up with ted sometime this week and get his extension for reftests/crashtests and see if we can setup some unittest machines that run these and the mochitests+variants on generated builds. I'm reluctant to out-right replace the current unittest machines with these as it's nice to have some "quick" (I realize this is a relative term) feedback on checkins. Status: NEW → ASSIGNED Assignee: nobody → rcampbell Status: ASSIGNED → NEW Status: NEW → ASSIGNED Assignee: rcampbell → nobody Status: ASSIGNED → NEW Component: Testing → Release Engineering: Projects Flags: blocking1.9+ Product: Core → mozilla.org QA Contact: testing → release Version: unspecified → other Damon, schrep: for trunk/1.9, - we're not doing PGO linux or mac builds - for PGO win32 builds, we have bug#420073 to setup unittest run on PGO - for PGO win32 builds, we already have Talos builds running with PGO builds. (from discussion in triage meeting... Is there anything else left to do here for 1.9/trunk? Or is this bug a more general bug about anyone being able send any build to test machines... in which case maybe rename to "integrate try server with unittest & talos" and remove blocking1.9 flag?) Priority: P2 → P3 I've been talking about this with various people lately, so I thought I'd just commit to a comment what's been floating around in my head. I think there are a few possible approaches here: 1) Do a normal build, upload it as usual. On a separate test machine, download that build, checkout the source, build just the test harnesses, and run the tests on the build. This would not allow us to run the TUnit tests, as some of those are standalone C++ programs, and the rest rely on xpcshell, which we don't package. 2) Do a normal build with --enable-tests, upload the build as usual, but also package and upload the test harnesses/test files. On a separate test machine, download the build and the test package, run the tests on the packaged build. We could feasibly package the standalone tests and the xpcshell binary, so we could run the TUnit tests. It might be difficult to package all the test files currently, as the xpcshell tests often rely on files from the srcdir, and reftest leaves all its tests in the srcdir. 3) Some sort of hybrid of the above. Do a normal build with --enable-tests, upload the build, package and upload just the test harnesses + necessary binaries (standalone tests, xpcshell). On a separate test machine, download the build, test package, and also checkout the same source the build was made from. Run the tests using the test harnesses from the test package, with the actual test files from the srcdir. This would probably be easier if we modified the xpcshell and Mochitest harnesses to run the tests from the srcdir instead of copying to the objdir. #2, but package the source files as well (or a subset of them, if we can use the metadata to identify the sets of srcdir things we need)? Why package the source at all? We know exactly what changeset we're building from these days, so it's trivial to pull the matching code from hg. Just as a data point, I did some cursory investigation of how long we spend building/testing on the unittest machines, and for Windows and Linux unittest machines, the total cycle took almost an hour for each (59 and 52 minutes, respectively), compilation took 12 and 3.5 minutes, respectively, and running TUnit took 3.5 and 4 minutes, respectively. I'm going to gather some more data, but it sounds like making TUnit portable to another machine isn't going to be worth the effort. One possibility here would be flipping our build machines to --enable-tests, then also have them run TUnit locally. The unit test machines could then run Mochitest and Reftest against the packaged builds. Is this an issue with cross-compiled builds? I'm under the impression that an ARMEL build will not run on an x86 scratchbox machine, and compiling on the N8x0's seems like the wrong solution. (Please correct me if I'm wrong.) Ideally we'd be able to grab a build and a test suite and go. I had a chat with Joel, he and Aki are both working on running tests on mobile devices. Joel has some great wiki pages documenting his process: Mochitest sounds like the lowest-hanging fruit here. Right now, it would entail packaging xpcshell and ssltunnel (either with the browser or separately) and the _tests/testing/mochitest dir from the objdir, which contains the entire test harness + tests. The one caveat is that per bug 445611 comment 27, we can't use --enable-logrefcnt on our build machines, as it will slow down reference counting, which is undesirable for builds we're going to ship. This means that Mochitest leak tracking will not work. (Waldo will probably kill someone if we break this.) xpcshell tests will be a little trickier. Technically the test files all wind up in _tests/xpcshell-simple, but tests can reference arbitrary files from the srcdir, so that makes life exciting. Reftest is trickier still, as all the tests are run out of the srcdir. We can easily package the reftest chrome bits. Per the links above, Joel has written a script () that parses the reftest.list manifests and figures out what directories to package. This seems suboptimal. Either we should provide an easier way to find and package necessary reftest files, or we should just require a source directory pull that matches the build you're testing. This might be tough on mobile, where space is limited. Currently Joel says that the test files alone take up ~30Mb. A plain mozilla-central source tree (without hg repo, generated via `hg archive`) takes up ~260Mb on my Windows machine, and a .tar.bz2 of it is not quite 40Mb. I think that covers everything I currently know. I will start working on Mochitest first. I welcome any suggestions for how to overcome issues mentioned above. Assignee: nobody → ted.mielczarek Oh, there's one other tricky bit with reftest, in that it currently won't build in a Mac OSX universal build, because it copies autoconf.mk to autoconf.js, which winds up with differences between the ppc and x86 builds, so the build fails in unify. We could fix this with bug 420084 or something similar. As long as we're building with tracerefcnt on at least some machines that run Mochitests (any debug machine would have it, all the current opt ones on tinderbox have it enabled), that should catch most leaks. Mochitests primarily test correctness, and the leak checking is just a bonus; if release builds get the correctness tests and basically can't get leak checking, that seems good enough. Refcounting is impossible to do without a perf hit, and we can't get around that. There's also a small number of compiled-code tests that execute in |make check| that would need to be run on the build machine. Note that xpcshell has do_get_file to make most file access just be from the source directory. A number of tests do mess with `pwd`, tho, but we {c,sh}ould probably fix that by adding something like do_get_tmpdir() and making them use that, possibly even cd to it to be absolutely safe. Reftest's manifest format could probably be extended to record file dependencies without too much trouble. Component: Release Engineering: Future → Build Config Product: mozilla.org → Core QA Contact: release → build-config Version: other → unspecified Here's a rough draft of what this might look like for Mochitest. This adds a make target so that you can do "make -C objdir/testing/mochitest package", and you will wind up with $(DIST)/test-package.tar.bz2, which contains a harness/ directory, containing all of objdir/_tests/testing/mochitest, and a bin/ directory containing xpcshell, ssltunnel, certutil. Of course, you can't do much with it yet, since runtests.py expects to have a working objdir at the moment, and xpcshell doesn't want to run from anywhere but dist/bin right now. It's a start, though.! Attachment #353228 - Attachment is obsolete: true Oops, that was completely the wrong patch. Attachment #354328 - Attachment is obsolete: true (In reply to comment #25) > Created an attachment (id=354328) [details] > rough draft, take two > >! heh, nice!. I started with Mochitest because it seemed like the lowest-hanging fruit to me.. I ran this in fennec (maemo) and came close to success out of the box. two main issues: Makefile has nsinstall instead of $(INSTALL) runtests.py had LD_LIBRARY_PATH=self._appPath instead of self._utilityPath Here are the instructions: 1) install 3 patches (421611, 460515, 470914) 2) make -C client.mk build 3) make -C ($fennec_objdir) package 4) sed -i "s/nsinstall/\$\(INSTALL\)/g" $(xr_objdir)/testing/mochitest/Makefile 5) make -C ($xr_objdir)/testing/mochitest package 6) bunzip $(fennec_objdir)/dist/fennec*.bz2 7) scp $(fennec_objdir)/dist/fennec*.tar <device>:~/ 8) bunzip $(xr_objdir)/dist/test-package*.bz2 9) scp $(xr_objdir)/dist/test-package*.tar <device>:~/ 10)ssh <device> 11)tar -xvf *.tar 12)cp bin/* fennec/xulrunner/ 13)cp bin/components/test_necko.xpt fennec/components 14)sed -i 's/= self._appDir/= self._utilityPath/g' harness/runtests.py 15)python harness/runtests.py --appname=/root/fennec/fennec --utility-path=/root/fennec/xulrunner --certificate-path=/root/certs --test-path=MochiKit_Unit_Tests --autorun I wanted to include all the steps I take to ensure that we all understand what it takes to run on maemo. The one quirky issue is where we copy the bin/* to and the test_necko.xpt to. It would be nice if this step was not there as it is very confusing. (In reply to comment #29) > The one quirky issue is where we copy the bin/* to and the test_necko.xpt to. > It would be nice if this step was not there as it is very confusing. That should be fixed by bug 470971 (and a little harness tweaking to coincide). Thanks for testing this! Oops, replace nsinstall with $(NSINSTALL). Attachment #354411 - Attachment is obsolete: true Josh's test plugin in bug 386676 will need to be handled somehow in this work, as it doesn't ship with the packaged build. I tested this latest patch (along with the updated dependent patches), and from my previous list in comment #29, steps 4 and 14 are not necessary anymore. Keep in mind that you need to add a --xre-path to the cli such as: python harness/runtests.py --appname=/root/fennec/fennec --utility-path=/root/fennec/xulrunner --certificate-path=/root/certs --test-path=MochiKit_Unit_Tests --xre-path=/root/fennec/xulrunner --autorun I have also tested this with the --chrome flag and it appears to work. I have yet to do a full end to end test with it, but a few smaller tests have been successful. We should look at browser-chrome and a11y tests as well. I've got a cleaned up patch for this, but I need to give it a once-over on Mac/Linux to make sure I didn't screw things up. Should have it up today. Ok, this changes the way things work a little bit. Now, the make target to invoke the packaging is "make test-package" in the root of the objdir. The test package will wind up as dist/$packagename.tests.tar.bz2, so on windows, like: dist/firefox-3.2a1pre.en-US.win32.tests.tar.bz2 Unpacking this somewhere along with the packaged build from the same objdir, you can run Mochitest like: python mochitest/runtests.py --appname=/path/to/firefox/firefox --xre-path=/path/to/firefox --utility-path=`pwd`/bin/ --certificate-path=`pwd`/certs/ If you want to run the chrome Mochitests, you'll also need to add: --extra-profile-file=`pwd`/plugins to get the testing plugin copied to the testing profile. Attachment #357144 - Attachment is obsolete: true Attachment #359076 - Flags: review?(benjamin) There are test builds/test packages produced with this patch (+ all dependencies) available here for Windows and OS X: Comment on attachment 359076 [details] [diff] [review] add package target and packaging bits for mochitest [Checkin: Comment 45 & 64] >diff --git a/testing/mochitest/Makefile.in b/testing/mochitest/Makefile.in >+# We need the test plugin as some tests rely on it >+ifeq (Darwin,$(OS_TARGET)) >+TEST_HARNESS_PLUGINS := \ >+ Test.plugin/ >+else >+TEST_HARNESS_PLUGINS := \ >+ $(DLL_PREFIX)nptest$(DLL_SUFFIX) >+endif Is that a stray slash at the end of Test.plugin, or is that a directory? >+test-package: stage-mochitest >+ @(cd $(PKG_STAGE) && tar $(TAR_CREATE_FLAGS) - *) | bzip2 -f > $(DIST)/$(PKG_PATH)$(TEST_PACKAGE) Can I bikeshed this and have you call it "package-tests" (verb-object) Attachment #359076 - Flags: review?(benjamin) → review+ (In reply to comment #38) > Is that a stray slash at the end of Test.plugin, or is that a directory? It's a bundle, so yeah, it's a directory. Just figured I'd make that explicit. > Can I bikeshed this and have you call it "package-tests" (verb-object) That's fine with me, I was thinking of "test-package" as a noun, in parallel with "make package" and "make installer". I retested this end to end on a maemo device. I did this in comment #29 but have updated the process here. Here are the instructions: 1) install 4 patches (421611, 460515, 470971, 475383) 2) make -C client.mk build 3) make -C ($fennec_objdir) package 4) make -C ($xr_objdir) test-package 5) bunzip $(fennec_objdir)/dist/fennec*.bz2 6) scp $(fennec_objdir)/dist/fennec*.tar <device>:~/ 7) bunzip $(xr_objdir)/dist/xulrunner*.bz2 8) scp $(xr_objdir)/dist/xulrunner*.tar <device>:~/ 9)<device>:tar -xvf *.tar 10)<device>:python mochitest/runtests.py --appname=/root/fennec/fennec --utility-path=/root/bin --certificate-path=/root/bin --xre-path=/root/fennec/xulrunner --test-path=MochiKit_Unit_Tests --autorun This is awesome. We removed a lot of the hacky steps. Next up reftests? (In reply to comment #28) > . What should we do about these C++ tests, once all of the rest of this bug is completed? We talked about a few ideas yesterday, but who would know if we still even need them, and if so, whats best way to run them? > I started with Mochitest because it seemed like the > lowest-hanging fruit to me. Cool. >? Most of the C++ tests cover very low-level behaviors, and are very quick. I think we should continue to run them from the build machines for the forseeable future, and not worry about that we can't run them on mobile or the other situations where we want to run tests on arbitrary builds. (In reply to comment #41) >? Even with this (and dependencies) landed, we'll still have at least one blocker standing in our way of actually using this in production. Notably, we'll have to make the build machines enable tests, and I would not do that without having fixed bug 463605, since otherwise we'd start shipping gobs of test junk in our Mac nightlies. If we fix that, then we should be able to make the build machines enable tests and upload this test package. Let's do it for non-Mac systems first, then, while bug 463605 is being fixed. Comment on attachment 359076 [details] [diff] [review] add package target and packaging bits for mochitest [Checkin: Comment 45 & 64] Pushed: Attachment #359076 - Attachment description: with some cleanup → add package target and packaging bits for mochitest [checked in] Verified this with a fresh hg pull and build for fennec on both desktop and maemo device. No patches and using above steps from comment #40. Great work! This adds a "stage-package" target to layout/tools/reftest, and calls it from the "package-tests" target. The implementation is a little crazy, but I think it's the best way to go. I load reftest.js in xpcshell, then use ReadTopManifest() to parse reftest.list and get the full list of tests. Then the script just prints out a list of directories containing tests (as well as the manifest files themselves), and I feed that to tar (via xargs). The nice thing about this approach is that we don't have to maintain a separate reftest manifest parsing script. This doesn't quite work on Windows yet, due to path issues (of course). what about assuming reftest manifests are called reftest.list and using something like "find -name reftest.list | sed 's@/reftest.list@@'" instead? That's relying on some conventions, but I don't see a good reason people would use another name for manifests (except by error, there's one manifest named reftests.list that should be renamed). Yeah, that's possible. The nice thing about this patch is that it only has to parse the actual manifests, starting from the main manifest, as opposed to crawling the entire source directory looking for them, which is slow. It's not terribly complicated anyway, since it reuses the reftest parsing code. yeah, fair enough. And it could be slower if the objdir is inside the srcdir and gets crawled. Ok, so this approach isn't going to work. Joel tested my patch and didn't get any tests packaged. I realize now that we can't actually run xpcshell in a cross-compile. Oops. Guess I'll have to duplicate the manifest parser in Python. :-/ correct. Look at my code that I use to extract and run reftest as this is written in python and might be of some use for your solution. there are links to the code here: Yeah, I started with that, and then realized that reftest.js already had all this parsing code, and failed to consider the cross-compile case. :-/ I have made this point before, perhaps in other bugs, but it seems to be assumed that the xpcshell that is used to run tests in a build needs to come from that build. At least, that assumption seems to be implied in the comments in this bug. It would probably be more reliable if one could use an xpcshell that was previously built or downloaded separate from the build that is being tested. It makes no sense to use an xpcshell from another build... xpcshell is tied pretty tightly to the JS and mozilla versions, and you shouldn't feel free to mix and match. Ok, similar in concept to the above, except I wrote a Python reftest manifest parser. It doesn't have to know quite as much as the real parser, since it just has to be able to get the test filenames and process includes. I'm not super happy about having to go that way, but I also don't think it's that bad. Attachment #364366 - Attachment is obsolete: true Attachment #365725 - Flags: review?(benjamin) looks like you attached the wrong patch (it's the same as the previous version). Comment on attachment 365725 [details] [diff] [review] add packaging bits for reftest, take two Apparently so. Not sure how that happened. Thanks! Waldo pointed out on IRC that this isn't quite sufficient anyway, as the test manifest can specify things like HTTP(..), which means that the reftest httpd will make ../ available, so we need to package that directory as well. Attachment #365725 - Attachment is obsolete: true Attachment #365725 - Flags: review?(benjamin) I think the only current user of HTTP(..) is reaching layout/reftests/fonts/ from other subdirectories of layout/reftests/, although we might at some point want to reach it from something not inside layout... in which case this approach wouldn't work too well anymore. Maybe we should just have a directive in the reftest.list for what directories need to be packaged? (Then we could just package layout/reftests/ as a whole, and the other directories as needed, and likewise for crashtests, although that's a tad more involved.) I'm certainly open to suggestions, and ways we can change the reftest manifest to make this easier, but I'm also aiming for a quick solution at the moment, so I think I'll just handle HTTP(..) by packaging .., and we can follow up with a cleaner solution in another bug. Ok yeah, this is the right patch, and I've added handling for HTTP(path). Attachment #365903 - Flags: review?(benjamin) Comment on attachment 365903 [details] [diff] [review] add packaging bits for reftest, take three [checked in] >+commentRE = re.compile("\s+#") >+conditionsRE = re.compile("^(fails|random|skip|asserts)") >+httpRE = re.compile("HTTP\((\.\.(\/\.\.)*)\)") These all need to be r''... I don't think commentRE or httpRE do what you want at all at the moment. Despite my blind copying of regexes from JS, Python legitimately doesn't care in these cases. Comment on attachment 359076 [details] [diff] [review] add package target and packaging bits for mochitest [Checkin: Comment 45 & 64] This is need for the diff context of bug 476163... Attachment #359076 - Attachment description: add package target and packaging bits for mochitest [checked in] → add package target and packaging bits for mochitest [Checkin: Comment 45 & 64] Whiteboard: [fixed1.9.1b4] Target Milestone: --- → mozilla1.9.2a1 Version: unspecified → Trunk Status: NEW → ASSIGNED I'll probably land it on branch eventually, but it's not a big deal right now. I'll merge the other patch to branch myself. One thing that is problematic is the requirement for python2.5. In scratchbox (what we use to cross compile fennec to the maemo platform) there is not support for 2.5 (only 2.3). We can continue to use the scripts I have written for extracting the reftests from the source tree. I would like to see a uniform approach if possible. My few attempts at installing python2.5 into scratchbox were not successful. Joel: I was able to install python2.5 on our scratchboxen. There's probably a lot in there you don't have to do. Updating the apt sources and fixing scratchbox dns are probably the big ones. Ping me if you still have issues. Just to clarify, we only require 2.4, mostly for the subprocess module. I have a patch to make that explicit in configure, it was blocked on getting the tinderbox scratchbox Python updated (which Aki did, as mentioned in the previous comment). (In reply to comment #64) > (From update of attachment 359076 [details] [diff] [review]) > > > > This is need for the diff context of bug 476163... I guess I didn't read well enough. Serge, in the future I'd prefer if you didn't land my patches to branch without asking first. Also, the fixed1.9.1 keyword is misleading here, as this is going to encompass more than one patch. Whiteboard: [fixed1.9.1b4] This morning I did a fresh clone of m-c and m-b to build fennec. I verified the makefile is in the source tree with the target package-tests. I do a build and everything comes out just fine, but the problem is when I cd $(xr_objdir);make package-tests, I get a no target found. This was working on Saturday. I don't know if something changed with the makefiles, or if there is a problem with my build. my bad, this works just fine if you have --enable-tests in your mozconfig file. In running this on maemo, I ran into an issue for the reftests as I did for the mochitests, we are assuming the LD_LIBRARY_PATH is the same as the directory where fennec is. For mochitest, we added the --xre-path to resolve this and I have verified that changing the runreftest.py to use this path works. Thanks for catching that Joel! I think I considered adding that to runreftests.py but couldn't remember why it was there (aside from being necessary to run xpcshell). I'll update the patch here to include it. Er, I forgot that I already checked in runreftest.py. Joel, can you try out this patch along with the packaging one and see if it works for you? It adds --xre-path to runreftest.py just like runtests.py. Comment on attachment 365903 [details] [diff] [review] add packaging bits for reftest, take three [checked in] Pushed to m-c: Attachment #365903 - Attachment description: add packaging bits for reftest, take three → add packaging bits for reftest, take three [checked in] Comment on attachment 365903 [details] [diff] [review] add packaging bits for reftest, take three [checked in] >diff --git a/testing/testsuite-targets.mk b/testing/testsuite-targets.mk >+stage-reftest: make-stage-dir >+ $(MAKE) -C $(DEPTH)/layout/tools/reftest stage-package >+ > .PHONY: mochitest mochitest-plain mochitest-chrome mochitest-a11y \ > package-tests make-stage-dir stage-mochitest Looks like 'stage-reftest' should be added to '.PHONY'. (In reply to comment #76) > Looks like 'stage-reftest' should be added to '.PHONY'. Feel free to add this if you're ever in this file in another patch. If not, I'll try to remember to put it there when I do xpcshell packaging. I have a patch that packages xpcshell tests. I'm currently failing one of the necko tests when run from the package, but most others seem to run fine. I have realized that once packaged, the test directory has no obvious structure and no manifest, so the harness doesn't know what directories to run. I'm going to have to fix this (I think I'll write out an ad-hoc manifest into _tests/xpcshell during the build process, then teach runxpcshelltests.py how to read it.) Is that maybe another vote in favor of making test-file dependencies in xpcshell explicit? After thinking about this more, I would like to see a --log-file added to the new harness rewrites. The main reason here is on mobile devices we run the tests in much smaller chunks. This is done because we need to conserve in total memory usage. So we have wrappers that run one test/directory at a time. The advantage of a --log-file is we can output the results of a single test chunk into a known file and not have to redirect the output of the master script. This also saves us from creating a huge file which we might not have the luxury of enough space. The smaller files can be ftp'd off the device if we are limited in space between chunks. this has already proven very useful in the mochitests when running on Fennec. (In reply to comment #79) > Is that maybe another vote in favor of making test-file dependencies in > xpcshell explicit? Yeah, we need a bug on that if we don't already have one. Like I said, I think I'll do something ad-hoc here just to make it work, then we can hash out a better system. (In reply to comment #80) > After thinking about this more, I would like to see a --log-file added to the > new harness rewrites. Can you file a new bug on this? I can see how it could be useful, but it doesn't need to block this bug particularly. This works, but I'm failing a bunch of tests. I'm going to split that out into another bug. Could be bugs in tests, just assumptions, or files that the tests wanted that I am failing to package. Fixed some issues I encountered. I still need to add a way for runxpcshelltests.py to read the ad-hoc manifest it's writing. For my testing I've just been using cat | xargs. I'm currently failing the bits of the test that were added in bug 435687, not sure what's up with that, but I'll worry about that separately. Attachment #367205 - Attachment is obsolete: true With this patch + the patch from bug 482085 I'm running tests like so: 1) do a build, then "make package package-tests", unpack the build + the tests into some other dir 2) In the new dir, copy some files into the app dir: (workaround for bug 483202) cp bin/xpcshell firefox/ cp bin/components/* firefox/components/* cp bin/plugins/* firefox/plugins/* 3) cat xpcshell/tests/all-test-dirs.list | sed "s|^|./xpcshell/tests/|" | xargs python -u xpcshell/runxpcshelltests.py ./firefox/xpcshell Also, a clobber seems to have fixed the failure I was seeing in comment 83 there, so I'm passing all tests now. Ok, good enough. I added a --manifest=/path/to/manifest option to runxpcshelltests.py, so you can now run from a test package like: python -u xpcshell/runxpcshelltests.py --manifest=./xpcshell/tests/all-test-dirs.list ./firefox/xpcshell Also I updated the patch to merge a few test changes. Attachment #367864 - Attachment is obsolete: true Attachment #368007 - Flags: review?(benjamin) Comment on attachment 368007 [details] [diff] [review] xpcshell packaging bits, rev 3 [checked in] >diff --git a/config/rules.mk b/config/rules.mk > define _INSTALL_TESTS > $(TEST_INSTALLER) $(wildcard $(srcdir)/$(dir)/*) $(testxpcobjdir)/$(MODULE)/$(dir) >+@echo "$(MODULE)/$(dir)" >> $(testxpcobjdir)/all-test-dirs.list > > endef # do not remove the blank line! I'd prefer build-list.pl here, ugly as it may be. >--- a/testing/mochitest/Makefile.in > stage-package: >- $(NSINSTALL) -D $(PKG_STAGE)/mochitest && $(NSINSTALL) -D $(PKG_STAGE)/plugins >+ $(NSINSTALL) -D $(PKG_STAGE)/mochitest && $(NSINSTALL) -D $(PKG_STAGE)/bin/plugins While you're here, cut out the extraneous invocation and just use -D dir1 dir2 Attachment #368007 - Flags: review?(benjamin) → review+ I'm taking bug 460282 and bug 463605 off of the dep list here, and moving them to block bug 383136, which is the RelEng side of this. They don't block the ability to use this code, they just block the ability to use it in in our hourly/nightly builds on tinderbox. Comment on attachment 366808 [details] [diff] [review] add --xre-path to runreftest.py [checked in] Pushed to m-c: Attachment #366808 - Attachment description: add --xre-path to runreftest.py → add --xre-path to runreftest.py [checked in] Comment on attachment 368007 [details] [diff] [review] xpcshell packaging bits, rev 3 [checked in] Pushed to m-c: Attachment #368007 - Attachment description: xpcshell packaging bits, rev 3 → xpcshell packaging bits, rev 3 [checked in] That's a wrap. I'm not going to actually block on bug 483202, since it's possible to work around it, but I'd like to get that in since it would make things easier. Status: ASSIGNED → RESOLVED Closed: 13 years ago Resolution: --- → FIXED Comment on attachment 365903 [details] [diff] [review] add packaging bits for reftest, take three [checked in] Pushed to 1.9.1: Comment on attachment 366808 [details] [diff] [review] add --xre-path to runreftest.py [checked in] Pushed to 1.9.1: Comment on attachment 368007 [details] [diff] [review] xpcshell packaging bits, rev 3 [checked in] Pushed to 1.9.1: Product: Core → Firefox Build System
https://bugzilla.mozilla.org/show_bug.cgi?id=421611
CC-MAIN-2021-39
refinedweb
5,863
72.66
User:Eoconnor/ISSUE-41 Zero-edit Change Proposal for ISSUE-41 Summary The basic question of ISSUE-41 is (as asked on public-html) "should HTML 5 provide an explicit means for others to define custom elements and attributes within HTML markup?" In a word, no. HTML5's existing extension points provide all the features needed to solve the use cases that give rise in some to the desire for DE. ." (from the WHATWG FAQ). Contents - 1 Zero-edit Change Proposal for ISSUE-41 - 1.1 Summary - 1.2 Rationale - 1.2.1 HTML's exisiting extension points - 1.2.2 Use Case 1 - 1.2.3 Use Case 2 - 1.2.4 Use Case 3 - 1.2.5 Use Case 4 - 1.2.6 Use Case 5 - 1.2.7 Use Case 6 - 1.2.8 Use Case 7 - 1.3 Details - 1.4 Impact - 1.5 References - 1.6 Contributors Rationale I've gathered together many of the use cases for DE I could find posted to public-html, each attributed to the original email, blog post, or such which defined it. I've also tried to consolidate similar or identical use cases together so as to avoid redundancy. All but one of these use cases can be addressed with the existing HTML extension points. The remaining use case is best left unaddressed, as discussed later on in this CP. HTML's exisiting extension points HTML has many existing extension points for authors to use. As listed in section 2.2.2 Extensibility: - an inline or server-side scripts. - Authors can create plugins and invoke them using the <embed> element. This is how Flash works. - Authors can extend APIs using the JavaScript prototyping mechanism. This is widely used by script libraries, for instance. -. Vendors unwilling to add additional extension points at this time Representatives of browser vendors have expressed reluctance to add additional extension points to HTML, including Microsoft, who think DE "isn't important enough to justify changes [to the spec] at this time" (source). Use Case 1 - Annotate structured data that HTML has no semantics for, and which nobody has annotated before, and may never again, for private use or use in a small self-contained community. (source) Structured data can be published in HTML by using class="" and rel="" as in Microformats, with the Microdata feature, with HTML5+RDFa, or several of the other existing extension points, both separately and together. Use Case 2 - Site owners want a way to provide enhanced search results to the engines, so that an entry in the search results page is more than just a bare link and snippet of text, and provides additional resources for users straight on the search page without them having to click into the page and discover those resources themselves. (source) A search engine could define a Microdata or RDF vocabulary for publishers to use. Use Case 3 - Remove the need for feeds to restate the content of HTML pages (i.e. replace Atom with HTML). (source) The hAtom microformat solves this use case, and it is built on top of the existing extension points of HTML. Use Case 4 - Remove the need for RDF users to restate information in online encyclopedias (i.e. replace DBpedia). (source) The HTML5+RDFa spec being worked on by this WG can address this use case, as can the Microdata feature. Use Case 5 -. (source 1, source 2) As with use case 1, such extensions can be published in HTML by using class="" and rel="" as in Microformats, with the Microdata feature, with HTML5+RDFa, or several of the other existing extension points, both separately and together. Name collisions can be avoided in several different ways, and authors do not need to wait for browser vendors to implement anything new before they can start using their extension. Use Case 6 - Round-trip metadata across sessions, maintaining a strong metadata association that is resilient to subsequent editing operations by other user agents. Both whole HTML files and smaller document fragments need to round-trip. Such metadata may include information about a WYSIWYG editor's state, author information, relationships between this document and others, or a reference to the document's original source. (source) This use case can be addressed with the existing extension points of HTML: - Editor state informaiton can be placed in data-*=""attributes. - Author information can be represented by <meta name=author>; authoris one of the standard metadata names. - Relationships between this document and others can be expressed using the rel=""attribute. - References to the document's original source can be expressed using rel=alternateor rel=bookmark, both standard link relations, or a custom link relation could be used. Use Case 7 - An existing software product currently outputs XHTML documents with other, non-SVG and non-MathML Namespaces-in-XML content mixed in. Users of this product would like to publish such content as text/html, and to have content published as such pass HTML5 conformance testing. This use case cannot be addressed by use of HTML's existing extension points. This is a feature, not a bug. As stated in section 2.2.2 Extensibility: "Vendor-specific proprietary user agent extensions to this specification are strongly discouraged. Documents must not use such extensions, as doing so reduces interoperability and fragments the user base, allowing only users of specific user agents to access the content in question." Of course, such software can continue to use XHTML. One of the other DE Change Proposals describes three classes of such extensions. Platform Extensions "Platform Extensions" such as SVG and MathML that define new types of content that can be rendered in a browser. These extensions are expected to be vendor-neutral and have a specification. They may become part of HTML in the future. Such extensions should be coordinated among browser vendors within this working group. Language Extensions "Language Extensions" such as RDFa that define new attributes on top of nodes from other namespaces. As argued on public-html, ..." Vendor-specific Experimental Extensions "Vendor-specific Experimental Extensions" such as the experimental features that Webkit and Mozilla have created. The spec already provides for this with the vendor--feature="" pattern for vendor-specific attributes. Just as with -vendor-foo CSS properties, use of such attributes should not be considered conforming. Not providing such a feature for element names is intentional; for an excellent argument against such a feature, see this email to public-html. Details No change to the spec. Impact Positive Effects We avoid adding complex new features without concrete use-cases to the already complex web platform. Negative Effects If a particular use-case isn't addressed, users may end up attempting to extend HTML themselves in a non-conformant manner. This has been a potential problem for decades in HTML, however, and we haven't seen very much actual damage. As well, the majority of extensibility use-cases have already been addressed in HTML, so that further limits such potential damage. Conformance Classes Changes No change. Risks None. After all, we can always add further extension mechanisms later should the need arise. References References are linked inline. Contributors - Initial draft of this Change Proposal by Ian Hickson - Edward O'Connor - Tab Atkins Other collaborators welcome!
https://www.w3.org/html/wg/wiki/User:Eoconnor/ISSUE-41
CC-MAIN-2020-34
refinedweb
1,208
56.66
view raw I have an RDD containing the count for some objects and then I apply reduceByKey() on it, summing up all the elements (like in the word count example). I've saved the output of the reduceByKey transformation to a text file and I have the sum for each of the workers: (work at LEFT null,9741) (work at LEFT null,10073) (work at LEFT null,10348) (work at LEFT null,10483) (work at LEFT null,10754) public class Pattern { string pattern; PatternType type; Relation r; } In Spark, PairRDDFunctions.reduceByKey takes the RDD[(K, V)] and partitions the data (causing a shuffle) using the defined partitioner. If no such partitioner is provided, it uses the default HashPartitioner to decide which key value pair gets passed to which worker. If you're using a Java class as your key which doesn't override it's hashCode method, reduceByKey will decide how to partition the data based on Java's Object.hashCode. This means that identical keys will be offloaded to different workers where they will be partially reduced together. Ideally, that isn't what you want. What you want is that all objects with the same key will get reduced via the same worker. Then, when they will be shuffled after each worker reduces it's own, the combiner for all keys won't be able to match the keys based on their hash code, which explains why you're seeing only partially reduced data instead of the summed up data on a single key. What you need to do is provide a proper hashCode and equals implementation. This is stated in the Spark documentation (thanks @VitaliyKotlyarenko): Note: when using custom objects as the key in key-value pair operations, you must be sure that a custom equals() method is accompanied with a matching hashCode() method. For full details, see the contract outlined in the Object.hashCode() documentation For example: public class Pattern { string pattern; PatternType type; Relation r; @Override public int hashCode() { return 371 * pattern.hashCode(); } @Override public boolean equals(Object other) { if (this == other) return true; if (other == null || this.getClass() != other.getClass()) return false; Pattern pattern = (Pattern) other; return this.pattern.equals(pattern.pattern); } }
https://codedump.io/share/2t344vVztpkC/1/spark-reducebykey-not-shuffling-for-the-final-sum
CC-MAIN-2017-22
refinedweb
367
56.89
jyoung79 at kc.rr.com wrote: ^^^^^^^^^^^^^^^^^^ Something is missing there. > I'm currently working on a project where I'm looping through xml elements, > pulling the 'id' attribute (which will be coerced to a number) No, usually it won't. > as well as the element tag. That's element _type name_. > I'm needing these elements in numerical order (from the id). Attribute values of type ID MUST NOT start with a decimal digit in XML [1]. > Example xml might look like: > > <price id="5"> > <copyright id="1"> > <address id="3"> That is not even well-formed, as the end tags of the `address', `copyright', and `price' elements (in that order) are missing. Well-formed XML would be either <foo> <price id="5"/> <copyright id="1"/> <address id="3"/> </foo> or <foo> <price id="5"> <copyright id="1"/> </price> <address id="3"/> </foo> or <foo> <price id="5"/> <copyright id="1"> <address id="3"/> </copyright> </foo> or <price id="5"> <copyright id="1"/> <address id="3"/> </price> or <price id="5"> <copyright id="1"> <address id="3"/> </copyright> </price> but neither might be Valid (or make sense). Check your DTD or XML Schema. > There will be cases where elements might be excluded, but I'd still need > to put what I find in id numerical order. In the above example I would > need the order of 1, 3, 5 (or copyright, address, price). In javascript I > can easily index an array, and any preceding elements that don't exist > will be set to 'undefined': > > ----- > var a = []; > > a[parseInt('5')] = 'price'; > a[parseInt('1')] = 'copyright'; > a[parseInt('3')] = 'address'; > > // a is now [undefined, copyright, undefined, address, undefined, > price] ----- This is nonsense even in "javascript" (there really is no such language [1]). In ECMAScript implementations like JavaScript you would write var a = []; a[5] = "price"; a[1] = "copyright"; a[3] = "address"; as array indexes are only special object properties, and properties are stored as strings anyway. However, the highest index you can store this way, in the sense that it increases the `length' of the array, would be 2³²−2 (as the value of the `length' property ranges from 0 to 2³²–1). Python's `list' type is roughly equivalent to ECMAScript's `Array' type. Important differences include that apparently you cannot store as much items in a Python list as in an ECMAScript Array – >>> for p in range(0, pow(2, 31)-1): a.append(p) ... Traceback (most recent call last): File "<stdin>", line 1, in <module> MemoryError [Kids, don't try this at home!] >>> for p in range(0, pow(2, 31)): a.append(p) ... Traceback (most recent call last): File "<stdin>", line 1, in <module> OverflowError: range() result has too many items –, and that you need to add enough items in order to access one (so there are no sparse lists): >>> a[23] = 42 Traceback (most recent call last): File "<stdin>", line 1, in <module> IndexError: list assignment index out of range (I was not aware of that.) Also, the access parameter must be integer: >>> a["23"] = 42 Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: list indices must be integers, not str >>> a["foo"] = "bar" Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: list indices must be integers, not str [Using a non-numeric or out-of-range parameter (see above) for bracket property access on an ECMAScript Array means that the number of elements in the array does not increase, but that the Array instance is augmented with a non-element property, or an existing non-element property is overwritten; this cannot happen with Python lists.] > Next, I can loop through the array and remove every 'undefined' in order > to get the ordered array I need: Or you could be using an ECMAScript Object instance in the first place, and iterate over its enumerable properties. This would even work with proper IDs, but if you are not careful – chances of which your statements about the language indicate – you might need further precautions to prevent showing up of user-defined enumerable properties inherited from Object.prototype: var o = { 5: "price", 1: "copyright", 3: "address" }; or programmatically: var o = {}; o[5] = "price"; o[1] = "copyright"; o[3] = "address"; Then: for (var prop in o) { /* get prop or o[prop] */ } > ----- >> var newA = []; > >> for (var x = 0; x < a.length; x++) { Unless a.length changes: for (var x = 0, len = a.length; x < len; ++x) { The variable name `x' should also be reserved for non-counters, e. g. object references. Use i, j, k, and so forth in good programming tradition here instead. > if (a[x] != undefined) { if (typeof a[x] != "undefined") { as your variant would also evaluate to `false' if a[x] was `null', and would throw an exception in older implementations at no advantage (however, you might want to consider using `a[x] !== undefined' for recent implementations only). > newA.push(a[x]); > } > } > > // newA is now [copyright, address, price] Or you would be using Array.prototype.push() (or a[a.length] = …) in the first place instead of this, as contrary to what you stated above you appear to be only interested in the element type names: var a = []; a.push("price"); a.push("copyright"); a.push("address"); > ----- > > My question is, does python have a similar way to do something like this? > I'm assuming the best way is to create a dictionary and then sort it by > the keys? As are ECMAScript objects, Python's dictionaries are an *unordered* collection of name-value pairs. You would be using Python's `list' type, and its append() method (foo.append(bar)) or concatenate two lists instead (foo += [bar]). Then you would sort the list (see below). (You could also use a dictionary object, and use its keys() method and then sort its return value. Depends on your use-case.) I am getting the idea here that you intend to apply string parsing on XML. However, when working with XML you should instead be using an XML parser to get a document object, then XPath on the document object to retrieve the `id' attribute values of the elements that have an `id' attribute ('//*[@id]/@id') or the elements themselves ('//*[@id]'), in which case you would use XPathResult::*ORDERED_NODE_SNAPSHOT_TYPE to apply the snapshotItem() method to generate a list, and then you would probably simply say mylist.sort() [but see below]. Different XPath APIs for Python might also present the result as a list already without you having to call the snapshotItem() method. You should look into the libxml2 module and lxml. If you are instead interested in finding out, e.g., the element type for a specific ID, without using XPath again, then you should build and sort a list of dictionaries – a = [{"id": "5", "type": "price"}, {"id": "1", "type": "copyright"}, {"id": "3", "type": "address"}] or, programmatically a = [] a.append({"id": "5", "type": "price"}) a.append({"id": "1", "type": "copyright"}) a.append({"id": "3", "type": "address"}) – which is BTW syntactically exactly the approach that you would use in an ECMAScript implementation (except for the trailing semicolon that should not be missing in ECMAScript). The Python solution [3] – a.sort(cmp=lambda x,y: cmp(x['id'], y['id'])) or (since Python 2.4) a.sort(key=lambda x: x['id']) or (since Python 2.4) sorted(a, cmp=lambda x, y: cmp(x['id'], y['id'])) or (since Python 2.4) sorted(a, key=lambda x: x['id']) – only differs from the ECMAScript-based one in the way that the lambda expression for the comparator is written [4]: a.sort(function(x, y) { var x_id = x.id, y_id = y.id; return ((x_id < y_id) ? -1 : ((x_id == y_id) ? 0 : 1)); }); (The local variables should improve the efficiency in the worst case. You may omit some parentheses there at your discretion.) A difference between sorted() and the other Python ways is that the former returns a sorted list but leaves the original list as it is. (ECMAScript does not provide a built-in method to sort an array of objects by the object's property values, nor does it provide a built-in one that sorts an array or array-like object not-in-place. But such is easily implemented.) HTH __________ [1] <> [2] <> [3] <> [4] <> -- PointedEars Bitte keine Kopien per E-Mail. / Please do not Cc: me.
https://mail.python.org/pipermail/python-list/2011-July/608526.html
CC-MAIN-2019-30
refinedweb
1,389
69.72
I am employed in C#, and I wish to Publish to some website which has multiple checkboxes and returns a datafile based on which checkboxes are checked. To begin with, how do you publish an application with checked checkboxes ? And when that's done, how do you obtain the datafile the website transmits me ? You will want to have WebRequests while using System.Internet.HttpWebRequest namespace. You may create a GET or a Publish reference to HttpWebRequest. There is a great article onto it here and you will also take a look at System.Internet.HttpWebRequest on MSDN. With this particular simple ASP code, we are able to observe how checkboxes values are sent via a Publish request: tst.asp <% Dim chks chks = Request.Form("chks") %> <html> <head> <title>Test page</title> </mind> <body> <form title="someForm" action="" method="Publish"> <input type="checkbox" id="chk01" title="chks" value="v1" /> <input type="checkbox" id="chk02" title="chks" value="v2" /> <input type="checkbox" id="chk03" title="chks" value="v3" /> <input type="submit" value="Submit!" /> </form> <h3>Last "chks" = <%= chks %></h3> </body> </html> The H3 line show us this, as we check all of the checkboxes: Last "chks" = v1, v2, v3 Now we all know the way the data ought to be published. Using the sample code below, you need to have the ability to get it done. C# method sample using System.Text using System.Internet using System.IO using System ... void DoIt() world wide web-form-urlencoded" webrequest.Method = "Publish" webrequest.ContentLength = buffer.Length using (Stream data = webrequest.GetRequestStream()) using (HttpWebResponse webresponse = (HttpWebResponse)webrequest.GetResponse()) publish ok */ Hope I have assisted. Helpful links:
http://codeblow.com/questions/c-how-do-you-publish-an-application-to-some-website-with-a/
CC-MAIN-2017-47
refinedweb
272
51.75
Few days ago, I was working on a project where I have to lookup the duration of video files and compare the length duration on local disk with the corresponding duration of video online. So, instead of going between video on local disk and online repeated, I thought what if I could use python to extract all the duration lengths into a table (CSV/Excel) and compare the columns. Here is how I went about doing it. I made use of 'moviepy' which is a Python module for video editing, which can be used for basic operations (like cuts, concatenations, title insertions). Installing moviepy Use: pip install moviepy This will some other dependent packages such as: decorator, tqdm, urllib3, idna, chardet, requests, proglog, numpy, pillow, imageio, imageio-ffmpeg To be sure you installation was successful, try to import the module as seen above. If you got not error then your installation was successful and you are good to move on. The poblem Here I have some set of videos I downloaded from an online course and I listed their durations in a spreadsheet file as seen below. Now I want to compare each video duration/length to be sure it was downloaded correctly. The code to extract the video duration to be compared withwhat was obtained online is as below:- import glob import datetime from moviepy.editor import VideoFileClip folder_path = r'C:\Users\Yusuf_08039508010\Desktop\videos_tut' videoFiles = glob.glob(folder_path + str('\\*ts')) # Converts Seconds to: hours, mins, seconds using 'datetime' module def convert_sec(vid_seconds): return str(datetime.timedelta(seconds=vid_seconds)) for v in videoFiles: clip = VideoFileClip(v) # duration in seconds (create custom function to convert to: H:M:S) video_duration = clip.duration print (v.split('\\')[-1], convert_sec(int(video_duration))) Basically, the code uses the 'glob' module to access all the videos in the folder and used the 'datetime' module to convert the video duration in seconds provided by the 'moviepy' module. The result is a seen below;- Thank you for reading.
https://umar-yusuf.blogspot.com/2020/07/extracting-duration-of-video-file.html
CC-MAIN-2021-39
refinedweb
331
59.23
Description Class for a generic finite element node in 3D space, with scalar field P. This can be used for typical Poisson-type problems (ex. thermal, if the scalar field is temperature T, or electrostatics if the scalar field is electric potential V) #include <ChNodeFEAxyzP.h> Member Function Documentation ◆ ArchiveIN() Method to allow de-serialization of transient data from archives. Method to allow de serialization of transient data from archives. Reimplemented from chrono::ChNodeBase. ◆ GetFixed() Get the 'fixed' state of the node. If true, its current field value is not changed by solver. Implements chrono::fea::ChNodeFEAbase. ◆ GetMass() Get mass of the node. Not meaningful except for transients. Meaning of 'mass' changes depending on the problem type. ◆ SetFixed() Set the 'fixed' state of the node. If true, its current field value is not changed by solver. Implements chrono::fea::ChNodeFEAbase. ◆ SetMass() Set mass of the node. Not meaningful except for transients. Meaning of 'mass' changes depending on the problem type.::ChNodeBase.::ChNodeBase.::ChNodeBase. The documentation for this class was generated from the following files: - /builds/uwsbel/chrono/src/chrono/fea/ChNodeFEAxyzP.h - /builds/uwsbel/chrono/src/chrono/fea/ChNodeFEAxyzP.cpp
https://api.projectchrono.org/development/classchrono_1_1fea_1_1_ch_node_f_e_axyz_p.html
CC-MAIN-2021-39
refinedweb
190
53.17
Hi, I have many whole exome BAM files (aligned to reference). As an R admirer, I used Rstudio to do initial analysis on one of them (~2.7 GB) using Rsamtools. 1) Whether single or paired-end: > testPairedEndBam("1.bam") [1] TRUE > quickBamFlagSummary("1.bam") # Got detailed information 2) Read bam file > bam <- BamFile("1.bam", asMates = TRUE) > bam class: BamFile path: 1.bam index:1.bam.bai isOpen: FALSE yieldSize: NA obeyQname: FALSE asMates: TRUE qnamePrefixEnd: NA qnameSuffixStart: NA 3) Some high level information > seqinfo(bam) Seqinfo object with 1133 sequences from an unspecified genome: # and other information 4) Read all the reads in the file using scanBam() details <- scanBam(bam) But at this step, it goes on running and running and I am stuck at this step. Any thoughts please? How much RAM is required to process a 3GB BAM file in Rstudio? I have windows 8.1, 64-bit computer with 16 GB RAM. Thanks! > sessionInfo() R version 3.3.2 (2016-10-31) Platform: x86_64-w64-mingw32/x64 (64-bit) Running under: Windows >= 8 x64 (build 9200) loaded via a namespace (and not attached): [1] zlibbioc_1.16.0 IRanges_2.4.8 XVector_0.10.0 futile.logger_1.4.3 parallel_3.3.2 [6] tools_3.3.2 GenomicRanges_1.22.4 lambda.r_1.1.9 futile.options_1.0.0 Biostrings_2.38.4 [11] S4Vectors_0.8.11 BiocGenerics_0.16.1 BiocParallel_1.4.3 Rsamtools_1.26.1 GenomeInfoDb_1.6.3 [16] stats4_3.3.2 bitops_1.0-6 Check in R how much memory you can use: But if I wanted to check for a specific file, then? I think memory is not the problem then. Do you get an error? How long do you let it run? Maybe traceback can give you any clues? It was running for 31 minutes and showed following error: Since you are using windows, maybe this answer on stackoverflow can help: Thanks! but did not find it useful.
https://www.biostars.org/p/239725/
CC-MAIN-2020-16
refinedweb
322
68.87
PureScript by Example This repository contains a community fork of PureScript by Example by Phil Freeman, also known as "the PureScript book". This version differs from the original in that it has been updated so that the code and exercises work with up-to-date versions of the compiler, libraries, and tools. Some chapters have also been rewritten to showcase the latest features of the PureScript ecosystem. If you enjoyed the book or found it useful, please consider buying a copy of the original on Leanpub. Status This book is being continuously updated as the language evolves, so please report any issues you discover with the material. We appreciate any feedback you have to share, even if it's as simple as pointing out a confusing section that we could make more beginner-friendly. Unit tests are also being added to each chapter so you can check if your answers to the exercises are correct. See #79 for the latest status on tests. - Parallel asynchronous execution License The text of this book is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License:. Some text is derived from the PureScript Documentation Repo, which uses the same license, and is copyright various contributors. The exercises are licensed under the MIT license. Introduction Functional JavaScript Functional programming techniques have been making appearances in JavaScript for some time now: Libraries such as UnderscoreJS allow the developer to leverage tried-and-trusted functions such as map, filterand reduceto create larger programs from smaller programs by composition: var sumOfPrimes = _.chain(_.range(1000)) .filter(isPrime) .reduce(function(x, y) { return x + y; }) .value(); Asynchronous programming in NodeJS leans heavily on functions as first-class values to define callbacks. import { readFile, writeFile } from 'fs' readFile(sourceFile, function (error, data) { if (!error) { writeFile(destFile, data, function (error) { if (!error) { console.log("File copied"); } }); } }); Libraries such as React and virtual-dom model views as pure functions of application state. Functions enable a simple form of abstraction which can yield great productivity gains. However, functional programming in JavaScript has its own disadvantages: JavaScript is verbose, untyped, and lacks powerful forms of abstraction. Unrestricted JavaScript code also makes equational reasoning very difficult. PureScript is a programming language which aims to address these issues. It features lightweight syntax, which allows us to write very expressive code which is still clear and readable. It uses a rich type system to support powerful abstractions. It also generates fast, understandable code, which is important when interoperating with JavaScript, or other languages which compile to JavaScript. All in all, I hope to convince you that PureScript strikes a very practical balance between the theoretical power of purely functional programming, and the fast-and-loose programming style of JavaScript. Types and Type Inference The debate over statically typed languages versus dynamically typed languages is well-documented. PureScript is a statically typed language, meaning that a correct program can be given a type by the compiler which indicates its behavior. Conversely, programs which cannot be given a type are incorrect programs, and will be rejected by the compiler. In PureScript, unlike in dynamically typed languages, types exist only at compile-time, and have no representation at runtime. It is important to note that in many ways, the types in PureScript are unlike the types that you might have seen in other languages like Java or C#. While they serve the same purpose at a high level, the types in PureScript are inspired by languages like ML and Haskell. PureScript's types are expressive, allowing the developer to assert strong claims about their programs. Most importantly, PureScript's type system supports type inference - it requires far fewer explicit type annotations than other languages, making the type system a tool rather than a hindrance. As a simple example, the following code defines a number, but there is no mention of the Number type anywhere in the code: iAmANumber = let square x = x * x in square 42.0 A more involved example shows that type-correctness can be confirmed without type annotations, even when there exist types which are unknown to the compiler: iterate f 0 x = x iterate f n x = iterate f (n - 1) (f x) Here, the type of x is unknown, but the compiler can still verify that iterate obeys the rules of the type system, no matter what type x might have. In this book, I will try to convince you (or reaffirm your belief) that static types are not only a means of gaining confidence in the correctness of your programs, but also an aid to development in their own right. Refactoring a large body of code in JavaScript can be difficult when using any but the simplest of abstractions, but an expressive type system together with a type checker can even make refactoring into an enjoyable, interactive experience. In addition, the safety net provided by a type system enables more advanced forms of abstraction. In fact, PureScript provides a powerful form of abstraction which is fundamentally type-driven: type classes, made popular in the functional programming language Haskell. Polyglot Web Programming Functional programming has its success stories - applications where it has been particularly successful: data analysis, parsing, compiler implementation, generic programming, parallelism, to name a few. It would be possible to practice end-to-end application development in a functional language like PureScript. PureScript provides the ability to import existing JavaScript code, by providing types for its values and functions, and then to use those functions in regular PureScript code. We'll see this approach later in the book. However, one of PureScript's strengths is its interoperability with other languages which target JavaScript. Another approach would be to use PureScript for a subset of your application's development, and to use one or more other languages to write the rest of the JavaScript. Here are some examples: - Core logic written in PureScript, with the user interface written in JavaScript. - Application written in JavaScript or another compile-to-JS language, with tests written in PureScript. - PureScript used to automate user interface tests for an existing application. In this book, we'll focus on solving small problems with PureScript. The solutions could be integrated into a larger application, but we will also look at how to call PureScript code from JavaScript, and vice versa. Prerequisites The software requirements for this book are minimal: the first chapter will guide you through setting up a development environment from scratch, and the tools we will use are available in the standard repositories of most modern operating systems. The PureScript compiler itself can be downloaded as a binary distribution, or built from source on any system running an up-to-date installation of the GHC Haskell compiler, and we will walk through this process in the next chapter. The code in this version of the book is compatible with versions 0.15.* of the PureScript compiler. About You I will assume that you are familiar with the basics of JavaScript. Any prior familiarity with common tools from the JavaScript ecosystem, such as NPM and Gulp, will be beneficial if you wish to customize the standard setup to your own needs, but such knowledge is not necessary. No prior knowledge of functional programming is required, but it certainly won't hurt. New ideas will be accompanied by practical examples, so you should be able to form an intuition for the concepts from functional programming that we will use. Readers who are familiar with the Haskell programming language will recognize a lot of the ideas and syntax presented in this book, because PureScript is heavily influenced by Haskell. However, those readers should understand that there are a number of important differences between PureScript and Haskell. It is not necessarily always appropriate to try to apply ideas from one language in the other, although many of the concepts presented here will have some interpretation in Haskell. How to Read This Book The chapters in this book are largely self contained. A beginner with little functional programming experience would be well-advised, however, to work through the chapters in order. The first few chapters lay the groundwork required to understand the material later on in the book. A reader who is comfortable with the ideas of functional programming (especially one with experience in a strongly-typed language like ML or Haskell) will probably be able to gain a general understanding of the code in the later chapters of the book without reading the preceding chapters. Each chapter will focus on a single practical example, providing the motivation for any new ideas introduced. Code for each chapter are available from the book's GitHub repository. Some chapters will include code snippets taken from the chapter's source code, but for a full understanding, you should read the source code from the repository alongside the material from the book. Longer sections will contain shorter snippets which you can execute in the interactive mode PSCi to test your understanding. Code samples will appear in a monospaced font, as follows: module Example where import Effect.Console (log) main = log "Hello, World!" Commands which should be typed at the command line will be preceded by a dollar symbol: $ spago build Usually, these commands will be tailored to Linux/Mac OS users, so Windows users may need to make small changes such as modifying the file separator, or replacing shell built-ins with their Windows equivalents. Commands which should be typed at the PSCi interactive mode prompt will be preceded by an angle bracket: > 1 + 2 3 Each chapter will contain exercises, labelled with their difficulty level. It is strongly recommended that you attempt the exercises in each chapter to fully understand the material. This book aims to provide an introduction to the PureScript language for beginners, but it is not the sort of book that provides a list of template solutions to problems. For beginners, this book should be a fun challenge, and you will get the most benefit if you read the material, attempt the exercises, and most importantly of all, try to write some code of your own. Getting Help If you get stuck at any point, there are a number of resources available online for learning PureScript: - The PureScript Discord server is a great place to chat about issues you may be having. The server is dedicated to chat about PureScript - The Purescript Discourse Forum is another good place to search for solutions to common problems. Questions you ask here will be available to help future readers, whereas on Slack, message history is only kept for approximately 2 weeks. - PureScript: Jordan's Reference is an alternative learning resource that goes into great depth. If a concept in this book is difficult to understand, consider reading the corresponding section in that reference. - Pursuit is a searchable database of PureScript types and functions. Read Pursuit's help page to learn what kinds of searches you can do. - The unofficial PureScript Cookbook provides answers via code to "How do I do X?"-type questions. - The PureScript documentation repository collects articles and examples on a wide variety of topics, written by PureScript developers and users. - The PureScript website contains links to several learning resources, including code samples, videos and other resources for beginners. - Try PureScript! is a website which allows users to compile PureScript code in the web browser, and contains several simple examples of code. If you prefer to learn by reading examples, the purescript, purescript-node and purescript-contrib GitHub organizations contain plenty of examples of PureScript code. About the Author I am the original developer of the PureScript compiler. I'm based in Los Angeles, California, and started programming at an early age in BASIC on an 8-bit personal computer, the Amstrad CPC. Since then I have worked professionally in a variety of programming languages (including Java, Scala, C#, F#, Haskell and PureScript). Not long into my professional career, I began to appreciate functional programming and its connections with mathematics, and enjoyed learning functional concepts using the Haskell programming language. I started working on the PureScript compiler in response to my experience with JavaScript. I found myself using functional programming techniques that I had picked up in languages like Haskell, but wanted a more principled environment in which to apply them.. I maintain a blog, and can be reached on Twitter. Acknowledgements I would like to thank the many contributors who helped PureScript to reach its current state. Without the huge collective effort which has been made on the compiler, tools, libraries, documentation and tests, the project would certainly have failed. The PureScript logo which appears on the cover of this book was created by Gareth Hughes, and is gratefully reused here under the terms of the Creative Commons Attribution 4.0 license. Finally, I would like to thank everyone who has given me feedback and corrections on the contents of this book. Getting Started Chapter Goals In this chapter, we'll set up a working PureScript development environment, solve some exercises, and use the tests provided with this book to check our answers. You may also find a video walkthrough of this chapter helpful if that better suits your learning style. Environment Setup First, work through this Getting Started Guide in the Documentation Repo to setup your environment and learn a few basics about the language. Don't worry if the code in the example solution to the Project Euler problem is confusing or contains unfamiliar syntax. We'll cover all of this in great detail in the upcoming chapters. Solving Exercises Now that you've installed the necessary development tools, clone this book's repo. git clone The book repo contains PureScript example code and unit tests for the exercises that accompany each chapter. There's some initial setup required to reset the exercise solutions so they are ready to be solved by you. Use the resetSolutions.sh script to simplify this process. While you're at it, you should also strip out all the anchor comments with the removeAnchors.sh script (these anchors are used for copying code snippets into the book's rendered markdown, and you probably don't need this clutter in your local repo): cd purescript-book ./scripts/resetSolutions.sh ./scripts/removeAnchors.sh git add . git commit --all --message "Exercises ready to be solved" Now run the tests for this chapter: cd exercises/chapter2 spago test You should see the following successful test output: → Suite: Euler - Sum of Multiples ✓ Passed: below 10 ✓ Passed: below 1000 All 2 tests passed! 🎉 Note that the answer function (found in src/Euler.purs) has been modified to find the multiples of 3 and 5 below any integer. The test suite (found in test/Main.purs) for this answer function is more comprehensive than the test in the earlier getting-started guide. Don't worry about understanding how this test framework code works while reading these early chapters. The remainder of the book contains lots of exercises. If you write your solutions in the Test.MySolutions module ( test/MySolutions.purs), you can check your work against the provided test suite. Let's work through this next exercise together in test-driven-development style. Exercise - (Medium) Write a diagonalfunction to compute the length of the diagonal (or hypotenuse) of a right-angled triangle when given the lengths of the two other sides. Solution We'll start by enabling the tests for this exercise. Move the start of the block-comment down a few lines as shown below. Block comments start with {- and end with -}: suite "diagonal" do test "3 4 5" do Assert.equal 5.0 (diagonal 3.0 4.0) test "5 12 13" do Assert.equal 13.0 (diagonal 5.0 12.0) {- Move this block comment starting point to enable more tests If we attempt to run the test now, we'll encounter a compilation error because we have not yet implemented our diagonal function. $ spago test Error found: in module Test.Main at test/Main.purs:21:27 - 21:35 (line 21, column 27 - line 21, column 35) Unknown value diagonal Let's first take a look at what happens with a faulty version of this function. Add the following code to test/MySolutions.purs: import Data.Number (sqrt) diagonal w h = sqrt (w * w + h) And check our work by running spago test: → Suite: diagonal ☠ Failed: 3 4 5 because expected 5.0, got 3.605551275463989 ☠ Failed: 5 12 13 because expected 13.0, got 6.082762530298219 2 tests failed: Uh-oh, that's not quite right. Let's fix this with the correct application of the Pythagorean formula by changing the function to: diagonal w h = sqrt (w * w + h * h) Trying spago test again now shows all tests are passing: → Suite: Euler - Sum of Multiples ✓ Passed: below 10 ✓ Passed: below 1000 → Suite: diagonal ✓ Passed: 3 4 5 ✓ Passed: 5 12 13 All 4 tests passed! 🎉 Success! Now you're ready to try these next exercises on your own. Exercises - (Easy) Write a function circleAreawhich computes the area of a circle with a given radius. Use the piconstant, which is defined in the Numbersmodule. Hint: don't forget to import piby modifying the import Data.Numberstatement. - (Medium) Write a function leftoverCentswhich takes an Intand returns what's leftover after dividing by 100. Use the remfunction. Search Pursuit for this function to learn about usage and which module to import it from. Note: Your IDE may support auto-importing of this function if you accept the auto-completion suggestion. Conclusion In this chapter, we installed the PureScript compiler and the Spago tool. We also learned how to write solutions to exercises and check these for correctness. There will be many more exercises in the chapters ahead, and working through those really helps with learning the material. If you're stumped by any of the exercises, please reach out to any of the community resources listed in the Getting Help section of this book, or even file an issue in this book's repo. This reader feedback on which exercises could be made more approachable helps us improve the book. Once you solve all the exercises in a chapter, you may compare your answers against those in the no-peeking/Solutions.purs. No peeking please without putting in an honest effort to solve these yourself though. And even if you are stuck, try asking a community member for help first, as we would prefer to give you a small hint rather than spoil the exercise. If you found a more elegant solution (that still only requires knowledge of covered content), please send us a PR. The repo is continuously being revised, so be sure to check for updates before starting each new chapter. Functions and Records Chapter Goals This chapter will introduce two building blocks of PureScript programs: functions and records. In addition, we'll see how to structure PureScript programs, and how to use types as an aid to program development. We will build a simple address book application to manage a list of contacts. This code will introduce some new ideas from the syntax of PureScript. The front-end of our application will be the interactive mode PSCi, but it would be possible to build on this code to write a front-end in JavaScript. In fact, we will do exactly that in later chapters, adding form validation and save/restore functionality. Project Setup The source code for this chapter is contained in the file src/Data/AddressBook.purs. This file starts with a module declaration and its import list: module Data.AddressBook where import Prelude import Control.Plus (empty) import Data.List (List(..), filter, head) import Data.Maybe (Maybe) Here, we import several modules: - The Control.Plusmodule, which defines the emptyvalue. - The Data.Listmodule, which is provided by the listspackage which can be installed using Spago. It contains a few functions which we will need for working with linked lists. - The Data.Maybemodule, which defines data types and functions for working with optional values. Notice that the imports for these modules are listed explicitly in parentheses. This is generally a good practice, as it helps to avoid conflicting imports. Assuming you have cloned the book's source code repository, the project for this chapter can be built using Spago, with the following commands: $ cd chapter3 $ spago build Simple Types PureScript defines three built-in types which correspond to JavaScript's primitive types: numbers, strings and booleans. These are defined in the Prim module, which is implicitly imported by every module. They are called Number, String, and Boolean, respectively, and you can see them in PSCi by using the :type command to print the types of some simple values: $ spago repl > :type 1.0 Number > :type "test" String > :type true Boolean PureScript defines some other built-in types: integers, characters, arrays, records, and functions. Integers are differentiated from floating point values of type Number by the lack of a decimal point: > :type 1 Int Character literals are wrapped in single quotes, unlike string literals which use double quotes: > :type 'a' Char Arrays correspond to JavaScript arrays, but unlike in JavaScript, all elements of a PureScript array must have the same type: > :type [1, 2, 3] Array Int > :type [true, false] Array Boolean > :type [1, false] Could not match type Int with type Boolean. The error in the last example is an error from the type checker, which unsuccessfully attempted to unify (i.e. make equal) the types of the two elements. Records correspond to JavaScript's objects, and record literals have the same syntax as JavaScript's object literals: > author = { name: "Phil", interests: ["Functional Programming", "JavaScript"] } > :type author { name :: String , interests :: Array String } This type indicates that the specified object has two fields, a name field which has type String, and an interests field, which has type Array String, i.e. an array of Strings. Fields of records can be accessed using a dot, followed by the label of the field to access: > author.name "Phil" > author.interests ["Functional Programming","JavaScript"] PureScript's functions correspond to JavaScript's functions. The PureScript standard libraries provide plenty of examples of functions, and we will see more in this chapter: > import Prelude > :type flip forall a b c. (a -> b -> c) -> b -> a -> c > :type const forall a b. a -> b -> a Functions can be defined at the top-level of a file by specifying arguments before the equals sign: add :: Int -> Int -> Int add x y = x + y Alternatively, functions can be defined inline, by using a backslash character followed by a space-delimited list of argument names. To enter a multi-line declaration in PSCi, we can enter "paste mode" by using the :paste command. In this mode, declarations are terminated using the Control-D key sequence: > :paste … add :: Int -> Int -> Int … add = \x y -> x + y … ^D Having defined this function in PSCi, we can apply it to its arguments by separating the two arguments from the function name by whitespace: > add 10 20 30 Quantified Types In the previous section, we saw the types of some functions defined in the Prelude. For example, the flip function had the following type: > :type flip forall a b c. (a -> b -> c) -> b -> a -> c The keyword forall here indicates that flip has a universally quantified type. It means that we can substitute any types for a, b and c, and flip will work with those types. For example, we might choose the type a to be Int, b to be String and c to be String. In that case we could specialize the type of flip to (Int -> String -> String) -> String -> Int -> String We don't have to indicate in code that we want to specialize a quantified type - it happens automatically. For example, we can just use flip as if it had this type already: > flip (\n s -> show n <> s) "Ten" 10 "10Ten" While we can choose any types for a, b and c, we have to be consistent. The type of the function we passed to flip had to be consistent with the types of the other arguments. That is why we passed the string "Ten" as the second argument, and the number 10 as the third. It would not work if the arguments were reversed: > flip (\n s -> show n <> s) 10 "Ten" Could not match type Int with type String Notes On Indentation PureScript code is indentation-sensitive, just like Haskell, but unlike JavaScript. This means that the whitespace in your code is not meaningless, but rather is used to group regions of code, just like curly braces in C-like languages. If a declaration spans multiple lines, then any lines except the first must be indented past the indentation level of the first line. Therefore, the following is valid PureScript code: add x y z = x + y + z But this is not valid code: add x y z = x + y + z In the second case, the PureScript compiler will try to parse two declarations, one for each line. Generally, any declarations defined in the same block should be indented at the same level. For example, in PSCi, declarations in a let statement must be indented equally. This is valid: > :paste … x = 1 … y = 2 … ^D but this is not: > :paste … x = 1 … y = 2 … ^D Certain PureScript keywords (such as where, of and let) introduce a new block of code, in which declarations must be further-indented: example x y z = foo + bar where foo = x * y bar = y * z Note how the declarations for foo and bar are indented past the declaration of example. The only exception to this rule is the where keyword in the initial module declaration at the top of a source file. Defining Our Types A good first step when tackling a new problem in PureScript is to write out type definitions for any values you will be working with. First, let's define a type for records in our address book: type Entry = { firstName :: String , lastName :: String , address :: Address } This defines a type synonym called Entry - the type Entry is equivalent to the type on the right of the equals symbol: a record type with three fields - lastName and address. The two name fields will have type String, and the address field will have type Address, defined as follows: type Address = { street :: String , city :: String , state :: String } Note that records can contain other records. Now let's define a third type synonym, for our address book data structure, which will be represented simply as a linked list of entries: type AddressBook = List Entry Note that List Entry is not the same as Array Entry, which represents an array of entries. Type Constructors and Kinds List is an example of a type constructor. Values do not have the type List directly, but rather List a for some type a. That is, List takes a type argument a and constructs a new type List a. Note that just like function application, type constructors are applied to other types simply by juxtaposition: the type List Entry is in fact the type constructor List applied to the type Entry - it represents a list of entries. If we try to incorrectly define a value of type List (by using the type annotation operator ::), we will see a new type of error: > import Data.List > Nil :: List In a type-annotated expression x :: t, the type t must have kind Type This is a kind error. Just like values are distinguished by their types, types are distinguished by their kinds, and just like ill-typed values result in type errors, ill-kinded types result in kind errors. There is a special kind called Type which represents the kind of all types which have values, like Number and String. There are also kinds for type constructors. For example, the kind Type -> Type represents a function from types to types, just like List. So the error here occurred because values are expected to have types with kind Type, but List has kind Type -> Type. To find out the kind of a type, use the :kind command in PSCi. For example: > :kind Number Type > import Data.List > :kind List Type -> Type > :kind List String Type PureScript's kind system supports other interesting kinds, which we will see later in the book. Displaying Address Book Entries Let's write our first function, which will render an address book entry as a string. We start by giving the function a type. This is optional, but good practice, since it acts as a form of documentation. In fact, the PureScript compiler will give a warning if a top-level declaration does not contain a type annotation. A type declaration separates the name of a function from its type with the :: symbol: showEntry :: Entry -> String This type signature says that showEntry is a function, which takes an Entry as an argument and returns a String. Here is the code for showEntry: showEntry entry = entry.lastName <> ", " <> entry.firstName <> ": " <> showAddress entry.address This function concatenates the three fields of the Entry record into a single string, using the showAddress function to turn the record inside the address field into a String. showAddress is defined similarly: showAddress :: Address -> String showAddress addr = addr.street <> ", " <> addr.city <> ", " <> addr.state A function definition begins with the name of the function, followed by a list of argument names. The result of the function is specified after the equals sign. Fields are accessed with a dot, followed by the field name. In PureScript, string concatenation uses the diamond operator ( <>), instead of the plus operator like in JavaScript. Test Early, Test Often The PSCi interactive mode allows for rapid prototyping with immediate feedback, so let's use it to verify that our first few functions behave as expected. First, build the code you've written: $ spago build Next, load PSCi, and use the import command to import your new module: $ spago repl > import Data.AddressBook We can create an entry by using a record literal, which looks just like an anonymous object in JavaScript. > address = { street: "123 Fake St.", city: "Faketown", state: "CA" } Now, try applying our function to the example: > showAddress address "123 Fake St., Faketown, CA" Let's also test showEntry by creating an address book entry record containing our example address: > entry = { firstName: "John", lastName: "Smith", address: address } > showEntry entry "Smith, John: 123 Fake St., Faketown, CA" Creating Address Books Now let's write some utility functions for working with address books. We will need a value which represents an empty address book: an empty list. emptyBook :: AddressBook emptyBook = empty We will also need a function for inserting a value into an existing address book. We will call this function insertEntry. Start by giving its type: insertEntry :: Entry -> AddressBook -> AddressBook This type signature says that insertEntry takes an Entry as its first argument, and an AddressBook as a second argument, and returns a new AddressBook. We don't modify the existing AddressBook directly. Instead, we return a new AddressBook which contains the same data. As such, AddressBook is an example of an immutable data structure. This is an important idea in PureScript - mutation is a side-effect of code, and inhibits our ability to reason effectively about its behavior, so we prefer pure functions and immutable data where possible. To implement insertEntry, we can use the Cons function from Data.List. To see its type, open PSCi and use the :type command: $ spago repl > import Data.List > :type Cons forall a. a -> List a -> List a This type signature says that Cons takes a value of some type a, and a list of elements of type a, and returns a new list with entries of the same type. Let's specialize this with a as our Entry type: Entry -> List Entry -> List Entry But List Entry is the same as AddressBook, so this is equivalent to Entry -> AddressBook -> AddressBook In our case, we already have the appropriate inputs: an Entry, and an AddressBook, so can apply Cons and get a new AddressBook, which is exactly what we wanted! Here is our implementation of insertEntry: insertEntry entry book = Cons entry book This brings the two arguments entry and book into scope, on the left hand side of the equals symbol, and then applies the Cons function to create the result. Curried Functions Functions in PureScript take exactly one argument. While it looks like the insertEntry function takes two arguments, it is in fact an example of a curried function. The -> operator in the type of insertEntry associates to the right, which means that the compiler parses the type as Entry -> (AddressBook -> AddressBook) That is, insertEntry is a function which returns a function! It takes a single argument, an Entry, and returns a new function, which in turn takes a single AddressBook argument and returns a new AddressBook. This means that we can partially apply insertEntry by specifying only its first argument, for example. In PSCi, we can see the result type: > :type insertEntry entry AddressBook -> AddressBook As expected, the return type was a function. We can apply the resulting function to a second argument: > :type (insertEntry entry) emptyBook AddressBook Note though that the parentheses here are unnecessary - the following is equivalent: > :type insertEntry entry emptyBook AddressBook This is because function application associates to the left, and this explains why we can just specify function arguments one after the other, separated by whitespace. The -> operator in function types is a type constructor for functions. It takes two type arguments, the function's argument type and the return type. The left and right operands respectively. Note that in the rest of the book, I will talk about things like "functions of two arguments". However, it is to be understood that this means a curried function, taking a first argument and returning a function that takes the second. Now consider the definition of insertEntry: insertEntry :: Entry -> AddressBook -> AddressBook insertEntry entry book = Cons entry book If we explicitly parenthesize the right-hand side, we get (Cons entry) book. That is, insertEntry entry is a function whose argument is just passed along to the (Cons entry) function. But if two functions have the same result for every input, then they are the same function! So we can remove the argument book from both sides: insertEntry :: Entry -> AddressBook -> AddressBook insertEntry entry = Cons entry But now, by the same argument, we can remove entry from both sides: insertEntry :: Entry -> AddressBook -> AddressBook insertEntry = Cons This process is called eta conversion, and can be used (along with some other techniques) to rewrite functions in point-free form, which means functions defined without reference to their arguments. In the case of insertEntry, eta conversion has resulted in a very clear definition of our function - " insertEntry is just cons on lists". However, it is arguable whether point-free form is better in general. Property Accessors One common pattern is to use a function to access individual fields (or "properties") of a record. An inline function to extract an Address from an Entry could be written as: \entry -> entry.address PureScript also allows property accessor shorthand, where an underscore acts as the anonymous function argument, so the inline function above is equivalent to: _.address This works with any number of levels or properties, so a function to extract the city associated with an Entry could be written as: _.address.city For example: > address = { street: "123 Fake St.", city: "Faketown", state: "CA" } > entry = { firstName: "John", lastName: "Smith", address: address } > _.lastName entry "Smith" > _.address.city entry "Faketown" Querying the Address Book The last function we need to implement for our minimal address book application will look up a person by name and return the correct Entry. This will be a nice application of building programs by composing small functions - a key idea from functional programming. We can first filter the address book, keeping only those entries with the correct first and last names. Then we can simply return the head (i.e. first) element of the resulting list. With this high-level specification of our approach, we can calculate the type of our function. First open PSCi, and find the types of the filter and head functions: $ spago repl > import Data.List > :type filter forall a. (a -> Boolean) -> List a -> List a > :type head forall a. List a -> Maybe a Let's pick apart these two types to understand their meaning. filter is a curried function of two arguments. Its first argument is a function, which takes an element of the list and returns a Boolean value as a result. Its second argument is a list of elements, and the return value is another list. head takes a list as its argument, and returns a type we haven't seen before: Maybe a. Maybe a represents an optional value of type a, and provides a type-safe alternative to using null to indicate a missing value in languages like JavaScript. We will see it again in more detail in later chapters. The universally quantified types of filter and head can be specialized by the PureScript compiler, to the following types: filter :: (Entry -> Boolean) -> AddressBook -> AddressBook head :: AddressBook -> Maybe Entry We know that we will need to pass the first and last names that we want to search for, as arguments to our function. We also know that we will need a function to pass to filter. Let's call this function filterEntry. filterEntry will have type Entry -> Boolean. The application filter filterEntry will then have type AddressBook -> AddressBook. If we pass the result of this function to the head function, we get our result of type Maybe Entry. Putting these facts together, a reasonable type signature for our function, which we will call findEntry, is: findEntry :: String -> String -> AddressBook -> Maybe Entry This type signature says that findEntry takes two strings, the first and last names, and a AddressBook, and returns an optional Entry. The optional result will contain a value only if the name is found in the address book. And here is the definition of findEntry: findEntry firstName lastName book = head (filter filterEntry book) where filterEntry :: Entry -> Boolean filterEntry entry = entry.firstName == firstName && entry.lastName == lastName Let's go over this code step by step. findEntry brings three names into scope: firstName and lastName, both representing strings, and book, an AddressBook. The right hand side of the definition combines the filter and head functions: first, the list of entries is filtered, and the head function is applied to the result. The predicate function filterEntry is defined as an auxiliary declaration inside a where clause. This way, the filterEntry function is available inside the definition of our function, but not outside it. Also, it can depend on the arguments to the enclosing function, which is essential here because filterEntry uses the firstName and lastName arguments to filter the specified Entry. Note that, just like for top-level declarations, it was not necessary to specify a type signature for filterEntry. However, doing so is recommended as a form of documentation. Infix Function Application Most of the functions discussed so far used prefix function application, where the function name was put before the arguments. For example, when using the insertEntry function to add an Entry ( john) to an empty AddressBook, we might write: > book1 = insertEntry john emptyBook However, this chapter has also included examples of infix binary operators, such as the == operator in the definition of filterEntry, where the operator is put between the two arguments. These infix operators are actually defined in the PureScript source as infix aliases for their underlying prefix implementations. For example, == is defined as an infix alias for the prefix eq function with the line: infix 4 eq as == and therefore entry.firstName == firstName in filterEntry could be replaced with the eq entry.firstName firstName. We'll cover a few more examples of defining infix operators later in this section. There are situations where putting a prefix function in an infix position as an operator leads to more readable code. One example is the mod function: > mod 8 3 2 The above usage works fine, but is awkward to read. A more familiar phrasing is "eight mod three", which you can achieve by wrapping a prefix function in backticks (`): > 8 `mod` 3 2 In the same way, wrapping insertEntry in backticks turns it into an infix operator, such that book1 and book2 below are equivalent: book1 = insertEntry john emptyBook book2 = john `insertEntry` emptyBook We can make an AddressBook with multiple entries by using multiple applications of insertEntry as a prefix function ( book3) or as an infix operator ( book4) as shown below: book3 = insertEntry john (insertEntry peggy (insertEntry ned emptyBook)) book4 = john `insertEntry` (peggy `insertEntry` (ned `insertEntry` emptyBook)) We can also define an infix operator alias (or synonym) for insertEntry. We'll arbitrarily choose ++ for this operator, give it a precedence of 5, and make it right associative using infixr: infixr 5 insertEntry as ++ This new operator lets us rewrite the above book4 example as: book5 = john ++ (peggy ++ (ned ++ emptyBook)) and the right associativity of our new ++ operator lets us get rid of the parentheses without changing the meaning: book6 = john ++ peggy ++ ned ++ emptyBook Another common technique for eliminating parens is to use apply's infix operator $, along with your standard prefix functions. For example, the earlier book3 example could be rewritten as: book7 = insertEntry john $ insertEntry peggy $ insertEntry ned emptyBook Substituting $ for parens is usually easier to type and (arguably) easier to read. A mnemonic to remember the meaning of this symbol is to think of the dollar sign as being drawn from two parens that are also being crossed-out, suggesting the parens are now unnecessary. Note that $ isn't special syntax that's hardcoded into the language. It's simply the infix operator for a regular function called apply, which is defined in Data.Function as follows: apply :: forall a b. (a -> b) -> a -> b apply f x = f x infixr 0 apply as $ The apply function takes another function (of type (a -> b)) as its first argument and a value (of type a) as its second argument, then calls that function with that value. If it seems like this function doesn't contribute anything meaningful, you are absolutely correct! Your program is logically identical without it (see referential transparency). The syntactic utility of this function comes from the special properties assigned to its infix operator. $ is a right-associative ( infixr), low precedence ( 0) operator, which lets us remove sets of parentheses for deeply-nested applications. Another parens-busting opportunity for the $ operator is in our earlier findEntry function: findEntry firstName lastName book = head $ filter filterEntry book We'll see an even more elegant way to rewrite this line with "function composition" in the next section. If you'd like to use a concise infix operator alias as a prefix function, you can surround it in parentheses: > 8 + 3 11 > (+) 8 3 11 Alternatively, operators can be partially applied by surrounding the expression with parentheses and using _ as an operand in an operator section. You can think of this as a more convenient way to create simple anonymous functions (although in the below example, we're then binding that anonymous function to a name, so it's not so anonymous anymore): > add3 = (3 + _) > add3 2 5 To summarize, the following are equivalent definitions of a function that adds 5 to its argument: add5 x = 5 + x add5 x = add 5 x add5 x = (+) 5 x add5 x = 5 `add` x add5 = add 5 add5 = \x -> 5 + x add5 = (5 + _) add5 x = 5 `(+)` x -- Yo Dawg, I herd you like infix, so we put infix in your infix! Function Composition Just like we were able to simplify the insertEntry function by using eta conversion, we can simplify the definition of findEntry by reasoning about its arguments. Note that the book argument is passed to the filter filterEntry function, and the result of this application is passed to head. In other words, book is passed to the composition of the functions filter filterEntry and head. In PureScript, the function composition operators are <<< and >>>. The first is "backwards composition", and the second is "forwards composition". We can rewrite the right-hand side of findEntry using either operator. Using backwards-composition, the right-hand side would be (head <<< filter filterEntry) book In this form, we can apply the eta conversion trick from earlier, to arrive at the final form of findEntry: findEntry firstName lastName = head <<< filter filterEntry where ... An equally valid right-hand side would be: filter filterEntry >>> head Either way, this gives a clear definition of the findEntry function: " findEntry is the composition of a filtering function and the head function". I will let you make your own decision which definition is easier to understand, but it is often useful to think of functions as building blocks in this way - each function executing a single task, and solutions assembled using function composition. Exercises - (Easy) Test your understanding of the findEntryfunction by writing down the types of each of its major subexpressions. For example, the type of the headfunction as used is specialized to AddressBook -> Maybe Entry. Note: There is no test for this exercise. - (Medium) Write a function findEntryByStreet :: String -> AddressBook -> Maybe Entrywhich looks up an Entrygiven a street address. Hint reusing the existing code in findEntry. Test your function in PSCi and by running spago test. - (Medium) Rewrite findEntryByStreetto replace filterEntrywith the composition (using <<<or >>>) of: a property accessor (using the _.notation); and a function that tests whether its given string argument is equal to the given street address. - (Medium) Write a function isInBookwhich tests whether a name appears in a AddressBook, returning a Boolean value. Hint: Use PSCi to find the type of the Data.List.nullfunction, which tests whether a list is empty or not. - (Difficult) Write a function removeDuplicateswhich removes "duplicate" address book entries. We'll consider entries duplicated if they share the same first and last names, while ignoring addressfields. Hint: Use PSCi to find the type of the Data.List.nubByEqfunction, which removes duplicate elements from a list based on an equality predicate. Note that the first element in each set of duplicates (closest to list head) is the one that is kept. Conclusion In this chapter, we covered several new functional programming concepts: - How to use the interactive mode PSCi to experiment with functions and test ideas. - The role of types as both a correctness tool, and an implementation tool. - The use of curried functions to represent functions of multiple arguments. - Creating programs from smaller components by composition. - Structuring code neatly using whereexpressions. - How to avoid null values by using the Maybetype. - Using techniques like eta conversion and function composition to refactor code into a clear specification. In the following chapters, we'll build on these ideas. Rec. Pattern Matching Chapter Goals This chapter will introduce two new concepts: algebraic data types, and pattern matching. We will also briefly cover an interesting feature of the PureScript type system: row polymorphism. Pattern matching is a common technique in functional programming and allows the developer to write compact functions which express potentially complex ideas, by breaking their implementation down into multiple cases. Algebraic data types are a feature of the PureScript type system which enable a similar level of expressiveness in the language of types - they are closely related to pattern matching. The goal of the chapter will be to write a library to describe and manipulate simple vector graphics using algebraic types and pattern matching. Project Setup The source code for this chapter is defined in the file src/Data/Picture.purs. The Data.Picture module defines a data type Shape for simple shapes, and a type Picture for collections of shapes, along with functions for working with those types. The module imports the Data.Foldable module, which provides functions for folding data structures: module Data.Picture where import Prelude import Data.Foldable (foldl) import Data.Number (infinity) The Data.Picture module also imports the Number module, but this time using the as keyword: import Data.Number as Number This makes the types and functions in that module available for use, but only by using the qualified name, like Number.max. This can be useful to avoid overlapping imports, or just to make it clearer which modules certain things are imported from. Note: it is not necessary to use the same module name as the original module for a qualified import. Shorter qualified names like import Data.Number as N are possible, and quite common. Simple Pattern Matching Let's begin by looking at an example. Here is a function which computes the greatest common divisor of two integers using pattern matching: gcd :: Int -> Int -> Int gcd n 0 = n gcd 0 m = m gcd n m = if n > m then gcd (n - m) m else gcd n (m - n) This algorithm is called the Euclidean Algorithm. If you search for its definition online, you will likely find a set of mathematical equations which look a lot like the code above. This is one benefit of pattern matching: it allows you to define code by cases, writing simple, declarative code which looks like a specification of a mathematical function. A function written using pattern matching works by pairing sets of conditions with their results. Each line is called an alternative or a case. The expressions on the left of the equals sign are called patterns, and each case consists of one or more patterns, separated by spaces. Cases describe which conditions the arguments must satisfy before the expression on the right of the equals sign should be evaluated and returned. Each case is tried in order, and the first case whose patterns match their inputs determines the return value. For example, the gcd function is evaluated using the following steps: - The first case is tried: if the second argument is zero, the function returns n(the first argument). - If not, the second case is tried: if the first argument is zero, the function returns m(the second argument). - Otherwise, the function evaluates and returns the expression in the last line. Note that patterns can bind values to names - each line in the example binds one or both of the names n and m to the input values. As we learn about different kinds of patterns, we will see that different types of patterns correspond to different ways to choose names from the input arguments. Simple Patterns The example code above demonstrates two types of patterns: - Integer literals patterns, which match something of type Int, only if the value matches exactly. - Variable patterns, which bind their argument to a name There are other types of simple patterns: Number, String, Charand Booleanliterals - Wildcard patterns, indicated with an underscore ( _), which match any argument, and which do not bind any names. Here are two more examples which demonstrate using these simple patterns: fromString :: String -> Boolean fromString "true" = true fromString _ = false toString :: Boolean -> String toString true = "true" toString false = "false" Try these functions in PSCi. Guards In the Euclidean algorithm example, we used an if .. then .. else expression to switch between the two alternatives when m > n and m <= n. Another option in this case would be to use a guard. A guard is a boolean-valued expression which must be satisfied in addition to the constraints imposed by the patterns. Here is the Euclidean algorithm rewritten to use a guard: gcdV2 :: Int -> Int -> Int gcdV2 n 0 = n gcdV2 0 n = n gcdV2 n m | n > m = gcdV2 (n - m) m | otherwise = gcdV2 n (m - n) In this case, the third line uses a guard to impose the extra condition that the first argument is strictly larger than the second. The guard in the final line uses the expression otherwise, which might seem like a keyword, but is in fact just a regular binding in Prelude: > :type otherwise Boolean > otherwise true As this example demonstrates, guards appear on the left of the equals symbol, separated from the list of patterns by a pipe character ( |). Exercises - (Easy) Write the factorialfunction using pattern matching. Hint: Consider the two corner cases of zero and non-zero inputs. Note: This is a repeat of an example from the previous chapter, but see if you can rewrite it here on your own. - (Medium) Write a function binomialwhich finds the coefficient of the x^ kth term in the polynomial expansion of (1 + x)^ n. This is the same as the number of ways to choose a subset of kelements from a set of nelements. Use the formula n! / k! (n - k)!, where !is the factorial function written earlier. Hint: Use pattern matching to handle corner cases. If it takes a long time to complete or crashes with an error about the call stack, try adding more corner cases. - (Medium) Write a function pascalwhich uses Pascal`s Rule for computing the same binomial coefficients as the previous exercise. Array Patterns Array literal patterns provide a way to match arrays of a fixed length. For example, suppose we want to write a function isEmpty which identifies empty arrays. We could do this by using an empty array pattern ( []) in the first alternative: isEmpty :: forall a. Array a -> Boolean isEmpty [] = true isEmpty _ = false Here is another function which matches arrays of length five, binding each of its five elements in a different way: takeFive :: Array Int -> Int takeFive [0, 1, a, b, _] = a * b takeFive _ = 0 The first pattern only matches arrays with five elements, whose first and second elements are 0 and 1 respectively. In that case, the function returns the product of the third and fourth elements. In every other case, the function returns zero. For example, in PSCi: > :paste … takeFive [0, 1, a, b, _] = a * b … takeFive _ = 0 … ^D > takeFive [0, 1, 2, 3, 4] 6 > takeFive [1, 2, 3, 4, 5] 0 > takeFive [] 0 Array literal patterns allow us to match arrays of a fixed length, but PureScript does not provide any means of matching arrays of an unspecified length, since destructuring immutable arrays in these sorts of ways can lead to poor performance. If you need a data structure which supports this sort of matching, the recommended approach is to use Data.List. Other data structures exist which provide improved asymptotic performance for different operations. Record Patterns and Row Polymorphism Record patterns are used to match - you guessed it - records. Record patterns look just like record literals, but instead of values on the right of the colon, we specify a binder for each field. For example: this pattern matches any record which contains fields called first and last, and binds their values to the names x and y respectively: showPerson :: { first :: String, last :: String } -> String showPerson { first: x, last: y } = y <> ", " <> x Record patterns provide a good example of an interesting feature of the PureScript type system: row polymorphism. Suppose we had defined showPerson without a type signature above. What would its inferred type have been? Interestingly, it is not the same as the type we gave: > showPerson { first: x, last: y } = y <> ", " <> x > :type showPerson forall r. { first :: String, last :: String | r } -> String What is the type variable r here? Well, if we try showPerson in PSCi, we see something interesting: > showPerson { first: "Phil", last: "Freeman" } "Freeman, Phil" > showPerson { first: "Phil", last: "Freeman", location: "Los Angeles" } "Freeman, Phil" We are able to append additional fields to the record, and the showPerson function will still work. As long as the record contains the first and last fields of type String, the function application is well-typed. However, it is not valid to call showPerson with too few fields: > showPerson { first: "Phil" } Type of expression lacks required label "last" We can read the new type signature of showPerson as "takes any record with first and last fields which are Strings and any other fields, and returns a String". This function is polymorphic in the row r of record fields, hence the name row polymorphism. Note that this behavior is different than that of the original showPerson. Without the row variable r, showPerson only accepts records with exactly a first and last field and no others. Note that we could have also written > showPerson p = p.last <> ", " <> p.first and PSCi would have inferred the same type. Record Puns Recall that the showPerson function matches a record inside its argument, binding the first and last fields to values named x and y. We could alternatively just reuse the field names themselves, and simplify this sort of pattern match as follows: showPersonV2 :: { first :: String, last :: String } -> String showPersonV2 { first, last } = last <> ", " <> first Here, we only specify the names of the fields, and we do not need to specify the names of the values we want to introduce. This is called a record pun. It is also possible to use record puns to construct records. For example, if we have values named first and last in scope, we can construct a person record using { first, last }: unknownPerson :: { first :: String, last :: String } unknownPerson = { first, last } where first = "Jane" last = "Doe" This may improve readability of code in some circumstances. Nested Patterns Array patterns and record patterns both combine smaller patterns to build larger patterns. For the most part, the examples above have only used simple patterns inside array patterns and record patterns, but it is important to note that patterns can be arbitrarily nested, which allows functions to be defined using conditions on potentially complex data types. For example, this code combines two record patterns: type Address = { street :: String, city :: String } type Person = { name :: String, address :: Address } livesInLA :: Person -> Boolean livesInLA { address: { city: "Los Angeles" } } = true livesInLA _ = false Named Patterns Patterns can be named to bring additional names into scope when using nested patterns. Any pattern can be named by using the @ symbol. For example, this function sorts two-element arrays, naming the two elements, but also naming the array itself: sortPair :: Array Int -> Array Int sortPair arr@[x, y] | x <= y = arr | otherwise = [y, x] sortPair arr = arr This way, we save ourselves from allocating a new array if the pair is already sorted. Note that if the input array does not contain exactly two elements, then this function simply returns it unchanged, even if it's unsorted. Exercises - (Easy) Write a function sameCitywhich uses record patterns to test whether two Personrecords belong to the same city. - (Medium) What is the most general type of the sameCityfunction, taking into account row polymorphism? What about the livesInLAfunction defined above? Note: There is no test for this exercise. - (Medium) Write a function fromSingletonwhich uses an array literal pattern to extract the sole member of a singleton array. If the array is not a singleton, your function should return a provided default value. Your function should have type forall a. a -> Array a -> a Case Expressions Patterns do not only appear in top-level function declarations. It is possible to use patterns to match on an intermediate value in a computation, using a case expression. Case expressions provide a similar type of utility to anonymous functions: it is not always desirable to give a name to a function, and a case expression allows us to avoid naming a function just because we want to use a pattern. Here is an example. This function computes "longest zero suffix" of an array (the longest suffix which sums to zero): import Data.Array (tail) import Data.Foldable (sum) import Data.Maybe (fromMaybe) lzs :: Array Int -> Array Int lzs [] = [] lzs xs = case sum xs of 0 -> xs _ -> lzs (fromMaybe [] $ tail xs) For example: > lzs [1, 2, 3, 4] [] > lzs [1, -1, -2, 3] [-1, -2, 3] This function works by case analysis. If the array is empty, our only option is to return an empty array. If the array is non-empty, we first use a case expression to split into two cases. If the sum of the array is zero, we return the whole array. If not, we recurse on the tail of the array. Pattern Match Failures and Partial Functions If patterns in a case expression are tried in order, then what happens in the case when none of the patterns in a case alternatives match their inputs? In this case, the case expression will fail at runtime with a pattern match failure. We can see this behavior with a simple example: import Partial.Unsafe (unsafePartial) partialFunction :: Boolean -> Boolean partialFunction = unsafePartial \true -> true This function contains only a single case, which only matches a single input, true. If we compile this file, and test in PSCi with any other argument, we will see an error at runtime: > partialFunction false Failed pattern match Functions which return a value for any combination of inputs are called total functions, and functions which do not are called partial. It is generally considered better to define total functions where possible. If it is known that a function does not return a result for some valid set of inputs, it is usually better to return a value capable of indicating failure, such as type Maybe a for some a, using Nothing when it cannot return a valid result. This way, the presence or absence of a value can be indicated in a type-safe way. The PureScript compiler will generate an error if it can detect that your function is not total due to an incomplete pattern match. The unsafePartial function can be used to silence these errors (if you are sure that your partial function is safe!) If we removed the call to the unsafePartial function above, then the compiler would generate the following error: A case expression could not be determined to cover all inputs. The following additional cases are required to cover all inputs: false This tells us that the value false is not matched by any pattern. In general, these warnings might include multiple unmatched cases. If we also omit the type signature above: partialFunction true = true then PSCi infers a curious type: > :type partialFunction Partial => Boolean -> Boolean We will see more types which involve the => symbol later on in the book (they are related to type classes), but for now, it suffices to observe that PureScript keeps track of partial functions using the type system, and that we must explicitly tell the type checker when they are safe. The compiler will also generate a warning in certain cases when it can detect that cases are redundant (that is, a case only matches values which would have been matched by a prior case): redundantCase :: Boolean -> Boolean redundantCase true = true redundantCase false = false redundantCase false = false In this case, the last case is correctly identified as redundant: A case expression contains unreachable cases: false Note: PSCi does not show warnings, so to reproduce this example, you will need to save this function as a file and compile it using spago build. Algebraic Data Types This section will introduce a feature of the PureScript type system called Algebraic Data Types (or ADTs), which are fundamentally related to pattern matching. However, we'll first consider a motivating example, which will provide the basis of a solution to this chapter's problem of implementing a simple vector graphics library. Suppose we wanted to define a type to represent some simple shapes: lines, rectangles, circles, text, etc. In an object oriented language, we would probably define an interface or abstract class Shape, and one concrete subclass for each type of shape that we wanted to be able to work with. However, this approach has one major drawback: to work with Shapes abstractly, it is necessary to identify all of the operations one might wish to perform, and to define them on the Shape interface. It becomes difficult to add new operations without breaking modularity. Algebraic data types provide a type-safe way to solve this sort of problem, if the set of shapes is known in advance. It is possible to define new operations on Shape in a modular way, and still maintain type-safety. Here is how Shape might be represented as an algebraic data type: data Shape = Circle Point Number | Rectangle Point Number Number | Line Point Point | Text Point String type Point = { x :: Number , y :: Number } This declaration defines Shape as a sum of different constructors, and for each constructor identifies the data that is included. A Shape is either a Circle which contains a center Point and a radius (a number), or a Rectangle, or a Line, or Text. There are no other ways to construct a value of type Shape. An algebraic data type is introduced using the data keyword, followed by the name of the new type and any type arguments. The type's constructors (i.e. its data constructors) are defined after the equals symbol, and are separated by pipe characters ( |). The data carried by an ADT's constructors doesn't have to be restricted to primitive types: constructors can include records, arrays, or even other ADTs. Let's see another example from PureScript's standard libraries. We saw the Maybe type, which is used to define optional values, earlier in the book. Here is its definition from the maybe package: data Maybe a = Nothing | Just a This example demonstrates the use of a type parameter a. Reading the pipe character as the word "or", its definition almost reads like English: "a value of type Maybe a is either Nothing, or Just a value of type a". Note that we don't use the syntax forall a. anywhere in our data definition. forall syntax is necessary for functions, but is not used when defining ADTs with data or type aliases with type. Data constructors can also be used to define recursive data structures. Here is one more example, defining a data type of singly-linked lists of elements of type a: data List a = Nil | Cons a (List a) This example is taken from the lists package. Here, the Nil constructor represents an empty list, and Cons is used to create non-empty lists from a head element and a tail. Notice how the tail is defined using the data type List a, making this a recursive data type. Using ADTs It is simple enough to use the constructors of an algebraic data type to construct a value: simply apply them like functions, providing arguments corresponding to the data included with the appropriate constructor. For example, the Line constructor defined above required two Points, so to construct a Shape using the Line constructor, we have to provide two arguments of type Point: exampleLine :: Shape exampleLine = Line p1 p2 where p1 :: Point p1 = { x: 0.0, y: 0.0 } p2 :: Point p2 = { x: 100.0, y: 50.0 } So, constructing values of algebraic data types is simple, but how do we use them? This is where the important connection with pattern matching appears: the only way to consume a value of an algebraic data type is to use a pattern to match its constructor. Let's see an example. Suppose we want to convert a Shape into a String. We have to use pattern matching to discover which constructor was used to construct the Shape. We can do this as follows: showShape :: Shape -> String showShape (Circle c r) = "Circle [center: " <> showPoint c <> ", radius: " <> show r <> "]" showShape (Rectangle c w h) = "Rectangle [center: " <> showPoint c <> ", width: " <> show w <> ", height: " <> show h <> "]" showShape (Line start end) = "Line [start: " <> showPoint start <> ", end: " <> showPoint end <> "]" showShape (Text loc text) = "Text [location: " <> showPoint loc <> ", text: " <> show text <> "]" showPoint :: Point -> String showPoint { x, y } = "(" <> show x <> ", " <> show y <> ")" Each constructor can be used as a pattern, and the arguments to the constructor can themselves be bound using patterns of their own. Consider the first case of showShape: if the Shape matches the Circle constructor, then we bring the arguments of Circle (center and radius) into scope using two variable patterns, c and r. The other cases are similar. Exercises - (Easy) Write a function circleAtOriginwhich constructs a Circle(of type Shape) centered at the origin with radius 10.0. - (Medium) Write a function doubleScaleAndCenterwhich scales the size of a Shapeby a factor of 2.0and centers it at the origin. - (Medium) Write a function shapeTextwhich extracts the text from a Shape. It should return Maybe String, and use the Nothingconstructor if the input is not constructed using Text. Newtypes There is a special case of algebraic data types, called newtypes. Newtypes are introduced using the newtype keyword instead of the data keyword. Newtypes must define exactly one constructor, and that constructor must take exactly one argument. That is, a newtype gives a new name to an existing type. In fact, the values of a newtype have the same runtime representation as the underlying type, so there is no runtime performance overhead. They are, however, distinct from the point of view of the type system. This gives an extra layer of type safety. As an example, we might want to define newtypes as type-level aliases for Number, to ascribe units like volts, amps, and ohms: newtype Volt = Volt Number newtype Ohm = Ohm Number newtype Amp = Amp Number Then we define functions and values using these types: calculateCurrent :: Volt -> Ohm -> Amp calculateCurrent (Volt v) (Ohm r) = Amp (v / r) battery :: Volt battery = Volt 1.5 lightbulb :: Ohm lightbulb = Ohm 500.0 current :: Amp current = calculateCurrent battery lightbulb This prevents us from making silly mistakes, such as attempting to calculate the current produced by two lightbulbs without a voltage source. current :: Amp current = calculateCurrent lightbulb lightbulb {- TypesDoNotUnify: current = calculateCurrent lightbulb lightbulb ^^^^^^^^^ Could not match type Ohm with type Volt -} If we instead just used Number without newtype, then the compiler can't help us catch this mistake: -- This also compiles, but is not as type safe. calculateCurrent :: Number -> Number -> Number calculateCurrent v r = v / r battery :: Number battery = 1.5 lightbulb :: Number lightbulb = 500.0 current :: Number current = calculateCurrent lightbulb lightbulb -- uncaught mistake Note that while a newtype can only have a single constructor, and the constructor must be of a single value, a newtype can take any number of type variables. For example, the following newtype would be a valid definition ( err and a are the type variables, and the CouldError constructor expects a single value of type Either err a): newtype CouldError err a = CouldError (Either err a) Also note that the constructor of a newtype often has the same name as the newtype itself, but this is not a requirement. For example, unique names are also valid: newtype Coulomb = MakeCoulomb Number In this case, Coulomb is the type constructor (of zero arguments) and MakeCoulomb is the data constructor. These constructors live in different namespaces, even when the names are identical, such as with the Volt example. This is true for all ADTs. Note that although the type constructor and data constructor can have different names, in practice it is idiomatic for them to share the same name. This is the case with Amp and Volt types above. Another application of newtypes is to attach different behavior to an existing type without changing its representation at runtime. We cover that use case in the next chapter when we discuss type classes. Exercises - (Easy) Define Wattas a newtypeof Number. Then define a calculateWattagefunction using this new Watttype and the above definitions Ampand Volt: calculateWattage :: Amp -> Volt -> Watt A wattage in Watts can be calculated as the product of a given current in Amps and a given voltage in Volts. A Library for Vector Graphics Let's use the data types we have defined above to create a simple library for using vector graphics. Define a type synonym for a Picture - just an array of Shapes: type Picture = Array Shape For debugging purposes, we'll want to be able to turn a Picture into something readable. The showPicture function lets us do that: showPicture :: Picture -> Array String showPicture = map showShape Let's try it out. Compile your module with spago build and open PSCi with spago repl: $ spago build $ spago repl > import Data.Picture > showPicture [ Line { x: 0.0, y: 0.0 } { x: 1.0, y: 1.0 } ] ["Line [start: (0.0, 0.0), end: (1.0, 1.0)]"] Computing Bounding Rectangles The example code for this module contains a function bounds which computes the smallest bounding rectangle for a Picture. The Bounds type defines a bounding rectangle. type Bounds = { top :: Number , left :: Number , bottom :: Number , right :: Number } bounds uses the foldl function from Data.Foldable to traverse the array of Shapes in a Picture, and accumulate the smallest bounding rectangle: bounds :: Picture -> Bounds bounds = foldl combine emptyBounds where combine :: Bounds -> Shape -> Bounds combine b shape = union (shapeBounds shape) b In the base case, we need to find the smallest bounding rectangle of an empty Picture, and the empty bounding rectangle defined by emptyBounds suffices. The accumulating function combine is defined in a where block. combine takes a bounding rectangle computed from foldl's recursive call, and the next Shape in the array, and uses the union function to compute the union of the two bounding rectangles. The shapeBounds function computes the bounds of a single shape using pattern matching. Exercises - (Medium) Extend the vector graphics library with a new operation areawhich computes the area of a Shape. For the purpose of this exercise, the area of a line or a piece of text is assumed to be zero. - (Difficult) Extend the Shapetype with a new data constructor Clipped, which clips another Pictureto a rectangle. Extend the shapeBoundsfunction to compute the bounds of a clipped picture. Note that this makes Shapeinto a recursive data type. Conclusion In this chapter, we covered pattern matching, a basic but powerful technique from functional programming. We saw how to use simple patterns as well as array and record patterns to match parts of deep data structures. This chapter also introduced algebraic data types, which are closely related to pattern matching. We saw how algebraic data types allow concise descriptions of data structures, and provide a modular way to extend data types with new operations. Finally, we covered row polymorphism, a powerful type of abstraction which allows many idiomatic JavaScript functions to be given a type. In the rest of the book, we will use ADTs and pattern matching extensively, so it will pay dividends to become familiar with them now. Try creating your own algebraic data types and writing functions to consume them using pattern matching. Type Classes Chapter Goals This chapter will introduce a powerful form of abstraction which is enabled by PureScript's type system - type classes. This motivating example for this chapter will be a library for hashing data structures. We will see how the machinery of type classes allow us to hash complex data structures without having to think directly about the structure of the data itself. We will also see a collection of standard type classes from PureScript's Prelude and standard libraries. PureScript code leans heavily on the power of type classes to express ideas concisely, so it will be beneficial to familiarize yourself with these classes. If you come from an Object Oriented background, please note that the word "class" means something very different in this context than what you're used to. A type class serves a purpose more similar to an OO interface. Project Setup The source code for this chapter is defined in the file src/Data/Hashable.purs. The project has the following dependencies: maybe, which defines the Maybedata type, which represents optional values. tuples, which defines the Tupledata type, which represents pairs of values. either, which defines the Eitherdata type, which represents disjoint unions. strings, which defines functions which operate on strings. functions, which defines some helper functions for defining PureScript functions. The module Data.Hashable imports several modules provided by these packages. Show Me! Our first simple example of a type class is provided by a function we've seen several times already: the show function, which takes a value and displays it as a string. show is defined by a type class in the Prelude module called Show, which is defined as follows: class Show a where show :: a -> String This code declares a new type class called Show, which is parameterized by the type variable a. A type class instance contains implementations of the functions defined in a type class, specialized to a particular type. For example, here is the definition of the Show type class instance for Boolean values, taken from the Prelude: instance showBoolean :: Show Boolean where show true = "true" show false = "false" This code declares a type class instance called showBoolean - in PureScript, type class instances can be named to aid the readability of the generated JavaScript. We say that the Boolean type belongs to the Show type class. We can try out the Show type class in PSCi, by showing a few values with different types: > import Prelude > show true "true" > show 1.0 "1.0" > show "Hello World" "\"Hello World\"" These examples demonstrate how to show values of various primitive types, but we can also show values with more complicated types: > import Data.Tuple > show (Tuple 1 true) "(Tuple 1 true)" > import Data.Maybe > show (Just "testing") "(Just \"testing\")" The output of show should be a string that you can paste back into the repl (or .purs file) to recreate the item being shown. Here we'll use logShow, which just calls show then log, to render the string without quotes. Ignore the unit print - that will covered in Chapter 8 when we examine Effects, like log. > import Effect.Console > logShow (Tuple 1 true) (Tuple 1 true) unit > logShow (Just "testing") (Just "testing") unit If we try to show a value of type Data.Either, we get an interesting error message: > import Data.Either > show (Left 10) The inferred type forall a. Show a => String has type variables which are not mentioned in the body of the type. Consider adding a type annotation. The problem here is not that there is no Show instance for the type we intended to show, but rather that PSCi was unable to infer the type. This is indicated by the unknown type a in the inferred type. We can annotate the expression with a type, using the :: operator, so that PSCi can choose the correct type class instance: > show (Left 10 :: Either Int String) "(Left 10)" Some types do not have a Show instance defined at all. One example of this is the function type ->. If we try to show a function from Int to Int, we get an appropriate error message from the type checker: > import Prelude > show $ \n -> n + 1 No type class instance was found for Data.Show.Show (Int -> Int) Type class instances can be defined in one of two places: in the same module that the type class is defined, or in the same module that the type "belonging to" the type class is defined. An instance defined in any other spot is called an "orphan instance" and is not allowed by the PureScript compiler. Some of the exercises in this chapter will require you to copy the definition of a type into your MySolutions module so that you can define type class instances for that type. Exercises (Easy) Define a Showinstance for Point. Match the same output as the showPointfunction from the previous chapter. Note: Point is now a newtype(instead of a typesynonym), which allows us to customize how to showit. Otherwise, we'd be stuck with the default Showinstance for records. newtype Point = Point { x :: Number , y :: Number } Common Type Classes In this section, we'll look at some standard type classes defined in the Prelude and standard libraries. These type classes form the basis of many common patterns of abstraction in idiomatic PureScript code, so a basic understanding of their functions is highly recommended. Eq The Eq type class defines the eq function, which tests two values for equality. The == operator is actually just an alias for eq. class Eq a where eq :: a -> a -> Boolean Note that in either case, the two arguments must have the same type: it does not make sense to compare two values of different types for equality. Try out the Eq type class in PSCi: > 1 == 2 false > "Test" == "Test" true Ord The Ord type class defines the compare function, which can be used to compare two values, for types which support ordering. The comparison operators < and > along with their non-strict companions <= and >=, can be defined in terms of compare. data Ordering = LT | EQ | GT class Eq a <= Ord a where compare :: a -> a -> Ordering The compare function compares two values, and returns an Ordering, which has three alternatives: LT- if the first argument is less than the second. EQ- if the first argument is equal to the second. GT- if the first argument is greater than the second. Again, we can try out the compare function in PSCi: > compare 1 2 LT > compare "A" "Z" LT Field The Field type class identifies those types which support numeric operators such as addition, subtraction, multiplication and division. It is provided to abstract over those operators, so that they can be reused where appropriate. Note: Just like the Eq and Ord type classes, the Field type class has special support in the PureScript compiler, so that simple expressions such as 1 + 2 * 3 get translated into simple JavaScript, as opposed to function calls which dispatch based on a type class implementation. class EuclideanRing a <= Field a The Field type class is composed from several more general superclasses. This allows us to talk abstractly about types which support some but not all of the Field operations. For example, a type of natural numbers would be closed under addition and multiplication, but not necessarily under subtraction, so that type might have an instance of the Semiring class (which is a superclass of Num), but not an instance of Ring or Field. Superclasses will be explained later in this chapter, but the full numeric type class hierarchy (cheatsheet) is beyond the scope of this chapter. The interested reader is encouraged to read the documentation for the superclasses of Field in prelude. Semigroups and Monoids The Semigroup type class identifies those types which support an append operation to combine two values: class Semigroup a where append :: a -> a -> a Strings form a semigroup under regular string concatenation, and so do arrays. Several other standard instances are provided by the prelude package. The <> concatenation operator, which we have already seen, is provided as an alias for append. The Monoid type class (provided by the prelude package) extends the Semigroup type class with the concept of an empty value, called mempty: class Semigroup m <= Monoid m where mempty :: m Again, strings and arrays are simple examples of monoids. A Monoid type class instance for a type describes how to accumulate a result with that type, by starting with an "empty" value, and combining new results. For example, we can write a function which concatenates an array of values in some monoid by using a fold. In PSCi: > import Prelude > import Data.Monoid > import Data.Foldable > foldl append mempty ["Hello", " ", "World"] "Hello World" > foldl append mempty [[1, 2, 3], [4, 5], [6]] [1,2,3,4,5,6] The prelude package provides many examples of monoids and semigroups, which we will use in the rest of the book. Foldable If the Monoid type class identifies those types which act as the result of a fold, then the Foldable type class identifies those type constructors which can be used as the source of a fold. The Foldable type class is provided in the foldable-traversable package, which also contains instances for some standard containers such as arrays and Maybe. The type signatures for the functions belonging to the Foldable class are a little more complicated than the ones we've seen so far: class Foldable f where foldr :: forall a b. (a -> b -> b) -> b -> f a -> b foldl :: forall a b. (b -> a -> b) -> b -> f a -> b foldMap :: forall a m. Monoid m => (a -> m) -> f a -> m It is instructive to specialize to the case where f is the array type constructor. In this case, we can replace f a with Array a for any a, and we notice that the types of foldl and foldr become the types that we saw when we first encountered folds over arrays. What about foldMap? Well, that becomes forall a m. Monoid m => (a -> m) -> Array a -> m. This type signature says that we can choose any type m for our result type, as long as that type is an instance of the Monoid type class. If we can provide a function which turns our array elements into values in that monoid, then we can accumulate over our array using the structure of the monoid, and return a single value. Let's try out foldMap in PSCi: > import Data.Foldable > foldMap show [1, 2, 3, 4, 5] "12345" Here, we choose the monoid for strings, which concatenates strings together, and the show function which renders an Int as a String. Then, passing in an array of integers, we see that the results of showing each integer have been concatenated into a single String. But arrays are not the only types which are foldable. foldable-traversable also defines Foldable instances for types like Maybe and Tuple, and other libraries like lists define Foldable instances for their own data types. Foldable captures the notion of an ordered container. Functor, and Type Class Laws The Prelude also defines a collection of type classes which enable a functional style of programming with side-effects in PureScript: Functor, Applicative and Monad. We will cover these abstractions later in the book, but for now, let's look at the definition of the Functor type class, which we have seen already in the form of the map function: class Functor f where map :: forall a b. (a -> b) -> f a -> f b The map function (and its alias <$>) allows a function to be "lifted" over a data structure. The precise definition of the word "lifted" here depends on the data structure in question, but we have already seen its behavior for some simple types: > import Prelude > map (\n -> n < 3) [1, 2, 3, 4, 5] [true, true, false, false, false] > import Data.Maybe > import Data.String (length) > map length (Just "testing") (Just 7) How can we understand the meaning of the map function, when it acts on many different structures, each in a different way? Well, we can build an intuition that the map function applies the function it is given to each element of a container, and builds a new container from the results, with the same shape as the original. But how do we make this concept precise? Type class instances for Functor are expected to adhere to a set of laws, called the functor laws: map identity xs = xs map g (map f xs) = map (g <<< f) xs The first law is the identity law. It states that lifting the identity function (the function which returns its argument unchanged) over a structure just returns the original structure. This makes sense since the identity function does not modify its input. The second law is the composition law. It states that mapping one function over a structure, and then mapping a second, is the same thing as mapping the composition of the two functions over the structure. Whatever "lifting" means in the general sense, it should be true that any reasonable definition of lifting a function over a data structure should obey these rules. Many standard type classes come with their own set of similar laws. The laws given to a type class give structure to the functions of that type class and allow us to study its instances in generality. The interested reader can research the laws ascribed to the standard type classes that we have seen already. Deriving Instances Rather than writing instances manually, you can let the compiler do most of the work for you. Take a look at this Type Class Deriving guide. That information will help you solve the following exercises. Exercises The following newtype represents a complex number: newtype Complex = Complex { real :: Number , imaginary :: Number } (Easy) Define a Showinstance for Complex. Match the output format expected by the tests (e.g. 1.2+3.4i, 5.6-7.8i, etc.). (Easy) Derive an Eqinstance for Complex. Note: You may instead write this instance manually, but why do more work if you don't have to? (Medium) Define a Semiringinstance for Complex. Note: You can use wrapand over2from Data.Newtypeto create a more concise solution. If you do so, you will also need to import class Newtypefrom Data.Newtypeand derive a Newtypeinstance for Complex. (Easy) Derive (via newtype) a Ringinstance for Complex. Note: You may instead write this instance manually, but that's not as convenient. Here's the ShapeADT from the previous chapter: data Shape = Circle Point Number | Rectangle Point Number Number | Line Point Point | Text Point String (Medium) Derive (via Generic) a Showinstance for Shape. How does the amount of code written and Stringoutput compare to showShapefrom the previous chapter? Hint: See the Deriving from Genericsection of the Type Class Deriving guide. Type Class Constraints Types of functions can be constrained by using type classes. Here is an example: suppose we want to write a function which tests if three values are equal, by using equality defined using an Eq type class instance. threeAreEqual :: forall a. Eq a => a -> a -> a -> Boolean threeAreEqual a1 a2 a3 = a1 == a2 && a2 == a3 The type declaration looks like an ordinary polymorphic type defined using forall. However, there is a type class constraint Eq a, separated from the rest of the type by a double arrow =>. This type says that we can call threeAreEqual with any choice of type a, as long as there is an Eq instance available for a in one of the imported modules. Constrained types can contain several type class instances, and the types of the instances are not restricted to simple type variables. Here is another example which uses Ord and Show instances to compare two values: showCompare :: forall a. Ord a => Show a => a -> a -> String showCompare a1 a2 | a1 < a2 = show a1 <> " is less than " <> show a2 showCompare a1 a2 | a1 > a2 = show a1 <> " is greater than " <> show a2 showCompare a1 a2 = show a1 <> " is equal to " <> show a2 Note that multiple constraints can be specified by using the => symbol multiple times, just like we specify curried functions of multiple arguments. But remember not to confuse the two symbols: a -> bdenotes the type of functions from type ato type b, whereas a => bapplies the constraint ato the type b. The PureScript compiler will try to infer constrained types when a type annotation is not provided. This can be useful if we want to use the most general type possible for a function. To see this, try using one of the standard type classes like Semiring in PSCi: > import Prelude > :type \x -> x + x forall a. Semiring a => a -> a Here, we might have annotated this function as Int -> Int, or Number -> Number, but PSCi shows us that the most general type works for any Semiring, allowing us to use our function with both Ints and Numbers. Instance Dependencies Just as the implementation of functions can depend on type class instances using constrained types, so can the implementation of type class instances depend on other type class instances. This provides a powerful form of program inference, in which the implementation of a program can be inferred using its types. For example, consider the Show type class. We can write a type class instance to show arrays of elements, as long as we have a way to show the elements themselves: instance showArray :: Show a => Show (Array a) where ... If a type class instance depends on multiple other instances, those instances should be grouped in parentheses and separated by commas on the left hand side of the => symbol: instance showEither :: (Show a, Show b) => Show (Either a b) where ... These two type class instances are provided in the prelude library. When the program is compiled, the correct type class instance for Show is chosen based on the inferred type of the argument to show. The selected instance might depend on many such instance relationships, but this complexity is not exposed to the developer. Exercises (Easy) The following declaration defines a type of non-empty arrays of elements of type a: data NonEmpty a = NonEmpty a (Array a) Write an Eqinstance for the type NonEmpty awhich reuses the instances for Eq aand Eq (Array a). Note: you may instead derive the Eqinstance. (Medium) Write a Semigroupinstance for NonEmpty aby reusing the Semigroupinstance for Array. (Medium) Write a Functorinstance for NonEmpty. (Medium) Given any type awith an instance of Ord, we can add a new "infinite" value which is greater than any other value: data Extended a = Infinite | Finite a Write an Ordinstance for Extended awhich reuses the Ordinstance for a. (Difficult) Write a Foldableinstance for NonEmpty. Hint: reuse the Foldableinstance for arrays. (Difficult) Given a type constructor fwhich defines an ordered container (and so has a Foldableinstance), we can create a new container type which includes an extra element at the front: data OneMore f a = OneMore a (f a) The container OneMore falso has an ordering, where the new element comes before any element of f. Write a Foldableinstance for OneMore f: instance foldableOneMore :: Foldable f => Foldable (OneMore f) where ... (Medium) Write a dedupShapes :: Array Shape -> Array Shapefunction which removes duplicate Shapes from an array using the nubEqfunction. (Medium) Write a dedupShapesFastfunction which is the same as dedupShapes, but uses the more efficient nubfunction. Multi Parameter Type Classes It's not the case that a type class can only take a single type as an argument. This is the most common case, but in fact, a type class can be parameterized by zero or more type arguments. Let's see an example of a type class with two type arguments. module Stream where import Data.Array as Array import Data.Maybe (Maybe) import Data.String.CodeUnits as String class Stream stream element where uncons :: stream -> Maybe { head :: element, tail :: stream } instance streamArray :: Stream (Array a) a where uncons = Array.uncons instance streamString :: Stream String Char where uncons = String.uncons The Stream module defines a class Stream which identifies types which look like streams of elements, where elements can be pulled from the front of the stream using the uncons function. Note that the Stream type class is parameterized not only by the type of the stream itself, but also by its elements. This allows us to define type class instances for the same stream type but different element types. The module defines two type class instances: an instance for arrays, where uncons removes the head element of the array using pattern matching, and an instance for String, which removes the first character from a String. We can write functions which work over arbitrary streams. For example, here is a function which accumulates a result in some Monoid based on the elements of a stream: import Prelude import Data.Monoid (class Monoid, mempty) foldStream :: forall l e m. Stream l e => Monoid m => (e -> m) -> l -> m foldStream f list = case uncons list of Nothing -> mempty Just cons -> f cons.head <> foldStream f cons.tail Try using foldStream in PSCi for different types of Stream and different types of Monoid. Functional Dependencies Multi-parameter type classes can be very useful, but can easily lead to confusing types and even issues with type inference. As a simple example, consider writing a generic tail function on streams using the Stream class given above: genericTail xs = map _.tail (uncons xs) This gives a somewhat confusing error message: The inferred type forall stream a. Stream stream a => stream -> Maybe stream has type variables which are not mentioned in the body of the type. Consider adding a type annotation. The problem is that the genericTail function does not use the element type mentioned in the definition of the Stream type class, so that type is left unsolved. Worse still, we cannot even use genericTail by applying it to a specific type of stream: > map _.tail (uncons "testing") The inferred type forall a. Stream String a => Maybe String has type variables which are not mentioned in the body of the type. Consider adding a type annotation. Here, we might expect the compiler to choose the streamString instance. After all, a String is a stream of Chars, and cannot be a stream of any other type of elements. The compiler is unable to make that deduction automatically, and cannot commit to the streamString instance. However, we can help the compiler by adding a hint to the type class definition: class Stream stream element | stream -> element where uncons :: stream -> Maybe { head :: element, tail :: stream } Here, stream -> element is called a functional dependency. A functional dependency asserts a functional relationship between the type arguments of a multi-parameter type class. This functional dependency tells the compiler that there is a function from stream types to (unique) element types, so if the compiler knows the stream type, then it can commit to the element type. This hint is enough for the compiler to infer the correct type for our generic tail function above: > :type genericTail forall stream element. Stream stream element => stream -> Maybe stream > genericTail "testing" (Just "esting") Functional dependencies can be quite useful when using multi-parameter type classes to design certain APIs. Nullary Type Classes We can even define type classes with zero type arguments! These correspond to compile-time assertions about our functions, allowing us to track global properties of our code in the type system. An important example is the Partial class which we saw earlier when discussing partial functions. Take for example the functions head and tail defined in Data.Array.Partial that allow us to get the head or tail of an array without wrapping them in a Maybe, so they can fail if the array is empty: head :: forall a. Partial => Array a -> a tail :: forall a. Partial => Array a -> Array a Note that there is no instance defined for the Partial type class! Doing so would defeat its purpose: attempting to use the head function directly will result in a type error: > head [1, 2, 3] No type class instance was found for Prim.Partial Instead, we can republish the Partial constraint for any functions making use of partial functions: secondElement :: forall a. Partial => Array a -> a secondElement xs = head (tail xs) We've already seen the unsafePartial function, which allows us to treat a partial function as a regular function (unsafely). This function is defined in the Partial.Unsafe module: unsafePartial :: forall a. (Partial => a) -> a Note that the Partial constraint appears inside the parentheses on the left of the function arrow, but not in the outer forall. That is, unsafePartial is a function from partial values to regular values: > unsafePartial head [1, 2, 3] 1 > unsafePartial secondElement [1, 2, 3] 2 Superclasses Just as we can express relationships between type class instances by making an instance dependent on another instance, we can express relationships between type classes themselves using so-called superclasses. We say that one type class is a superclass of another if every instance of the second class is required to be an instance of the first, and we indicate a superclass relationship in the class definition by using a backwards facing double arrow. We've already seen some examples of superclass relationships: the Eq class is a superclass of Ord, and the Semigroup class is a superclass of Monoid. For every type class instance of the Ord class, there must be a corresponding Eq instance for the same type. This makes sense, since in many cases, when the compare function reports that two values are incomparable, we often want to use the Eq class to determine if they are in fact equal. In general, it makes sense to define a superclass relationship when the laws for the subclass mention the members of the superclass. For example, it is reasonable to assume, for any pair of Ord and Eq instances, that if two values are equal under the Eq instance, then the compare function should return EQ. In other words, a == b should be true exactly when compare a b evaluates to EQ. This relationship on the level of laws justifies the superclass relationship between Eq and Ord. Another reason to define a superclass relationship is in the case where there is a clear "is-a" relationship between the two classes. That is, every member of the subclass is a member of the superclass as well. Exercises (Medium) Define a partial function unsafeMaximum :: Partial => Array Int -> Intwhich finds the maximum of a non-empty array of integers. Test out your function in PSCi using unsafePartial. Hint: Use the maximumfunction from Data.Foldable. (Medium) The Actionclass is a multi-parameter type class which defines an action of one type on another: class Monoid m <= Action m a where act :: m -> a -> a An action is a function which describes how monoidal values are used to determine how to modify a value of another type. There are two laws for the Actiontype class: act mempty a = a act (m1 <> m2) a = act m1 (act m2 a) Applying an empty action is a no-op. And applying two actions in sequence is the same as applying the actions combined. That is, actions respect the operations defined by the Monoidclass. For example, the natural numbers form a monoid under multiplication: newtype Multiply = Multiply Int instance semigroupMultiply :: Semigroup Multiply where append (Multiply n) (Multiply m) = Multiply (n * m) instance monoidMultiply :: Monoid Multiply where mempty = Multiply 1 Write an instance which implements this action: instance actionMultiplyInt :: Action Multiply Int where ... Remember, your instance must satisfy the laws listed above. (Difficult) There are actually multiple ways to implement an instance of Action Multiply Int. How many can you think of? Purescript does not allow multiple implementations of a same instance, so you will have to replace your original implementation. Note: the tests cover 4 implementations. (Medium) Write an Actioninstance which repeats an input string some number of times: instance actionMultiplyString :: Action Multiply String where ... Hint: Search Pursuit for a helper-function with the signature String -> Int -> String. Note that Stringmight appear as a more generic type (such as Monoid). Does this instance satisfy the laws listed above? (Medium) Write an instance Action m a => Action m (Array a), where the action on arrays is defined by acting on each array element independently. (Difficult) Given the following newtype, write an instance for Action m (Self m), where the monoid macts on itself using append: newtype Self m = Self m Note: The testing framework requires Showand Eqinstances for the Selfand Multiplytypes. You may either write these instances manually, or let the compiler handle this for you with derive newtype instanceshorthand. (Difficult) Should the arguments of the multi-parameter type class Actionbe related by some functional dependency? Why or why not? Note: There is no test for this exercise. A Type Class for Hashes In the last section of this chapter, we will use the lessons from the rest of the chapter to create a library for hashing data structures. Note that this library is for demonstration purposes only, and is not intended to provide a robust hashing mechanism. What properties might we expect of a hash function? - A hash function should be deterministic, and map equal values to equal hash codes. - A hash function should distribute its results approximately uniformly over some set of hash codes. The first property looks a lot like a law for a type class, whereas the second property is more along the lines of an informal contract, and certainly would not be enforceable by PureScript's type system. However, this should provide the intuition for the following type class: newtype HashCode = HashCode Int instance hashCodeEq :: Eq HashCode where eq (HashCode a) (HashCode b) = a == b hashCode :: Int -> HashCode hashCode h = HashCode (h `mod` 65535) class Eq a <= Hashable a where hash :: a -> HashCode with the associated law that a == b implies hash a == hash b. We'll spend the rest of this section building a library of instances and functions associated with the Hashable type class. We will need a way to combine hash codes in a deterministic way: combineHashes :: HashCode -> HashCode -> HashCode combineHashes (HashCode h1) (HashCode h2) = hashCode (73 * h1 + 51 * h2) The combineHashes function will mix two hash codes and redistribute the result over the interval 0-65535. Let's write a function which uses the Hashable constraint to restrict the types of its inputs. One common task which requires a hashing function is to determine if two values hash to the same hash code. The hashEqual relation provides such a capability: hashEqual :: forall a. Hashable a => a -> a -> Boolean hashEqual = eq `on` hash This function uses the on function from Data.Function to define hash-equality in terms of equality of hash codes, and should read like a declarative definition of hash-equality: two values are "hash-equal" if they are equal after each value has been passed through the hash function. Let's write some Hashable instances for some primitive types. Let's start with an instance for integers. Since a HashCode is really just a wrapped integer, this is simple - we can use the hashCode helper function: instance hashInt :: Hashable Int where hash = hashCode We can also define a simple instance for Boolean values using pattern matching: instance hashBoolean :: Hashable Boolean where hash false = hashCode 0 hash true = hashCode 1 With an instance for hashing integers, we can create an instance for hashing Chars by using the toCharCode function from Data.Char: instance hashChar :: Hashable Char where hash = hash <<< toCharCode To define an instance for arrays, we can map the hash function over the elements of the array (if the element type is also an instance of Hashable) and then perform a left fold over the resulting hashes using the combineHashes function: instance hashArray :: Hashable a => Hashable (Array a) where hash = foldl combineHashes (hashCode 0) <<< map hash Notice how we build up instances using the simpler instances we have already written. Let's use our new Array instance to define an instance for Strings, by turning a String into an array of Chars: instance hashString :: Hashable String where hash = hash <<< toCharArray How can we prove that these Hashable instances satisfy the type class law that we stated above? We need to make sure that equal values have equal hash codes. In cases like Int, Char, String and Boolean, this is simple because there are no values of those types which are equal in the sense of Eq but not equal identically. What about some more interesting types? To prove the type class law for the Array instance, we can use induction on the length of the array. The only array with length zero is []. Any two non-empty arrays are equal only if they have equal head elements and equal tails, by the definition of Eq on arrays. By the inductive hypothesis, the tails have equal hashes, and we know that the head elements have equal hashes if the Hashable a instance must satisfy the law. Therefore, the two arrays have equal hashes, and so the Hashable (Array a) obeys the type class law as well. The source code for this chapter includes several other examples of Hashable instances, such as instances for the Maybe and Tuple type. Exercises (Easy) Use PSCi to test the hash functions for each of the defined instances. Note: There is no provided unit test for this exercise. (Medium) Write a function arrayHasDuplicateswhich tests if an array has any duplicate elements based on both hash and value equality. First check for hash equality with the hashEqualfunction, then check for value equality with ==if a duplicate pair of hashes is found. Hint: the nubByEqfunction in Data.Arrayshould make this task much simpler. (Medium) Write a Hashableinstance for the following newtype which satisfies the type class law: newtype Hour = Hour Int instance eqHour :: Eq Hour where eq (Hour n) (Hour m) = mod n 12 == mod m 12 The newtype Hourand its Eqinstance represent the type of integers modulo 12, so that 1 and 13 are identified as equal, for example. Prove that the type class law holds for your instance. (Difficult) Prove the type class laws for the Hashableinstances for Maybe, Eitherand Tuple. Note: There is no test for this exercise. Conclusion In this chapter, we've been introduced to type classes, a type-oriented form of abstraction which enables powerful forms of code reuse. We've seen a collection of standard type classes from the PureScript standard libraries, and defined our own library based on a type class for computing hash codes. This chapter also gave an introduction to the notion of type class laws, a technique for proving properties about code which uses type classes for abstraction. Type class laws are part of a larger subject called equational reasoning, in which the properties of a programming language and its type system are used to enable logical reasoning about its programs. This is an important idea, and will be a theme which we will return to throughout the rest of the book. Applicative Validation Chapter Goals In this chapter, we will meet an important new abstraction - the applicative functor, described by the Applicative type class. Don't worry if the name sounds confusing - we will motivate the concept with a practical example - validating form data. This technique allows us to convert code which usually involves a lot of boilerplate checking into a simple, declarative description of our form. We will also meet another type class, Traversable, which describes traversable functors, and see how this concept also arises very naturally from solutions to real-world problems. The example code for this chapter will be a continuation of the address book example from chapter 3. This time, we will extend our address book data types, and write functions to validate values for those types. The understanding is that these functions could be used, for example in a web user interface, to display errors to the user as part of a data entry form. Project Setup The source code for this chapter is defined in the files src/Data/AddressBook.purs and src/Data/AddressBook/Validation.purs. The project has a number of dependencies, many of which we have seen before. There are two new dependencies: control, which defines functions for abstracting control flow using type classes like Applicative. validation, which defines a functor for applicative validation, the subject of this chapter. The Data.AddressBook module defines data types and Show instances for the types in our project, and the Data.AddressBook.Validation module contains validation rules for those types. Generalizing Function Application To explain the concept of an applicative functor, let's consider the type constructor Maybe that we met earlier. The source code for this module defines a function address which has the following type: address :: String -> String -> String -> Address This function is used to construct a value of type Address from three strings: a street name, a city, and a state. We can apply this function easily and see the result in PSCi: > import Data.AddressBook > address "123 Fake St." "Faketown" "CA" { street: "123 Fake St.", city: "Faketown", state: "CA" } However, suppose we did not necessarily have a street, city, or state, and wanted to use the Maybe type to indicate a missing value in each of the three cases. In one case, we might have a missing city. If we try to apply our function directly, we will receive an error from the type checker: > import Data.Maybe > address (Just "123 Fake St.") Nothing (Just "CA") Could not match type Maybe String with type String Of course, this is an expected type error - address takes strings as arguments, not values of type Maybe String. However, it is reasonable to expect that we should be able to "lift" the address function to work with optional values described by the Maybe type. In fact, we can, and the Control.Apply provides the function lift3 function which does exactly what we need: > import Control.Apply > lift3 address (Just "123 Fake St.") Nothing (Just "CA") Nothing In this case, the result is Nothing, because one of the arguments (the city) was missing. If we provide all three arguments using the Just constructor, then the result will contain a value as well: > lift3 address (Just "123 Fake St.") (Just "Faketown") (Just "CA") Just ({ street: "123 Fake St.", city: "Faketown", state: "CA" }) The name of the function lift3 indicates that it can be used to lift functions of 3 arguments. There are similar functions defined in Control.Apply for functions of other numbers of arguments. Lifting Arbitrary Functions So, we can lift functions with small numbers of arguments by using lift2, lift3, etc. But how can we generalize this to arbitrary functions? It is instructive to look at the type of lift3: > :type lift3 forall a b c d f. Apply f => (a -> b -> c -> d) -> f a -> f b -> f c -> f d In the Maybe example above, the type constructor f is Maybe, so that lift3 is specialized to the following type: forall a b c d. (a -> b -> c -> d) -> Maybe a -> Maybe b -> Maybe c -> Maybe d This type says that we can take any function with three arguments, and lift it to give a new function whose argument and result types are wrapped with Maybe. Certainly, this is not possible for every type constructor f, so what is it about the Maybe type which allowed us to do this? Well, in specializing the type above, we removed a type class constraint on f from the Apply type class. Apply is defined in the Prelude as follows: class Functor f where map :: forall a b. (a -> b) -> f a -> f b class Functor f <= Apply f where apply :: forall a b. f (a -> b) -> f a -> f b The Apply type class is a subclass of Functor, and defines an additional function apply. As <$> was defined as an alias for map, the Prelude module defines <*> as an alias for apply. As we'll see, these two operators are often used together. Note that this apply is different than the apply from Data.Function (infixed as $). Luckily, infix notation is almost always used for the latter, so you don't need to worry about name collisions. The type of apply looks a lot like the type of map. The difference between map and apply is that map takes a function as an argument, whereas the first argument to apply is wrapped in the type constructor f. We'll see how this is used soon, but first, let's see how to implement the Apply type class for the Maybe type: instance functorMaybe :: Functor Maybe where map f (Just a) = Just (f a) map f Nothing = Nothing instance applyMaybe :: Apply Maybe where apply (Just f) (Just x) = Just (f x) apply _ _ = Nothing This type class instance says that we can apply an optional function to an optional value, and the result is defined only if both are defined. Now we'll see how map and apply can be used together to lift functions of arbitrary number of arguments. For functions of one argument, we can just use map directly. For functions of two arguments, we have a curried function g with type a -> b -> c, say. This is equivalent to the type a -> (b -> c), so we can apply map to g to get a new function of type f a -> f (b -> c) for any type constructor f with a Functor instance. Partially applying this function to the first lifted argument (of type f a), we get a new wrapped function of type f (b -> c). If we also have an Apply instance for f, we can then use apply to apply the second lifted argument (of type f b) to get our final value of type f c. Putting this all together, we see that if we have values x :: f a and y :: f b, then the expression (g <$> x) <*> y has type f c (remember, this expression is equivalent to apply (map g x) y). The precedence rules defined in the Prelude allow us to remove the parentheses: g <$> x <*> y. In general, we can use <$> on the first argument, and <*> for the remaining arguments, as illustrated here for lift3: lift3 :: forall a b c d f . Apply f => (a -> b -> c -> d) -> f a -> f b -> f c -> f d lift3 f x y z = f <$> x <*> y <*> z It is left as an exercise for the reader to verify the types involved in this expression. As an example, we can try lifting the address function over Maybe, directly using the <$> and <*> functions: > address <$> Just "123 Fake St." <*> Just "Faketown" <*> Just "CA" Just ({ street: "123 Fake St.", city: "Faketown", state: "CA" }) > address <$> Just "123 Fake St." <*> Nothing <*> Just "CA" Nothing Try lifting some other functions of various numbers of arguments over Maybe in this way. Alternatively applicative do notation can be used for the same purpose in a way that looks similar to the familiar do notation. Here is lift3 using applicative do notation. Note ado is used instead of do, and in is used on the final line to denote the yielded value: lift3 :: forall a b c d f . Apply f => (a -> b -> c -> d) -> f a -> f b -> f c -> f d lift3 f x y z = ado a <- x b <- y c <- z in f a b c The Applicative Type Class There is a related type class called Applicative, defined as follows: class Apply f <= Applicative f where pure :: forall a. a -> f a Applicative is a subclass of Apply and defines the pure function. pure takes a value and returns a value whose type has been wrapped with the type constructor f. Here is the Applicative instance for Maybe: instance applicativeMaybe :: Applicative Maybe where pure x = Just x If we think of applicative functors as functors which allow lifting of functions, then pure can be thought of as lifting functions of zero arguments. Intuition for Applicative Functions in PureScript are pure and do not support side-effects. Applicative functors allow us to work in larger "programming languages" which support some sort of side-effect encoded by the functor f. As an example, the functor Maybe represents the side effect of possibly-missing values. Some other examples include Either err, which represents the side effect of possible errors of type err, and the arrow functor r -> which represents the side-effect of reading from a global configuration. For now, we'll only consider the Maybe functor. If the functor f represents this larger programming language with effects, then the Apply and Applicative instances allow us to lift values and function applications from our smaller programming language (PureScript) into the new language. pure lifts pure (side-effect free) values into the larger language, and for functions, we can use map and apply as described above. This raises a question: if we can use Applicative to embed PureScript functions and values into this new language, then how is the new language any larger? The answer depends on the functor f. If we can find expressions of type f a which cannot be expressed as pure x for some x, then that expression represents a term which only exists in the larger language. When f is Maybe, an example is the expression Nothing: we cannot write Nothing as pure x for any x. Therefore, we can think of PureScript as having been enlarged to include the new term Nothing, which represents a missing value. More Effects Let's see some more examples of lifting functions over different Applicative functors. Here is a simple example function defined in PSCi, which joins three names to form a full name: > import Prelude > fullName first middle last = last <> ", " <> first <> " " <> middle > fullName "Phillip" "A" "Freeman" Freeman, Phillip A Now suppose that this function forms the implementation of a (very simple!) web service with the three arguments provided as query parameters. We want to make sure that the user provided each of the three parameters, so we might use the Maybe type to indicate the presence or otherwise absence of a parameter. We can lift fullName over Maybe to create an implementation of the web service which checks for missing parameters: > import Data.Maybe > fullName <$> Just "Phillip" <*> Just "A" <*> Just "Freeman" Just ("Freeman, Phillip A") > fullName <$> Just "Phillip" <*> Nothing <*> Just "Freeman" Nothing or with applicative do > import Data.Maybe > :paste… … ado … f <- Just "Phillip" … m <- Just "A" … l <- Just "Freeman" … in fullName f m l … ^D (Just "Freeman, Phillip A") … ado … f <- Just "Phillip" … m <- Nothing … l <- Just "Freeman" … in fullName f m l … ^D Nothing Note that the lifted function returns Nothing if any of the arguments was Nothing. This is good, because now we can send an error response back from our web service if the parameters are invalid. However, it would be better if we could indicate which field was incorrect in the response. Instead of lifting over Maybe, we can lift over Either String, which allows us to return an error message. First, let's write an operator to convert optional inputs into computations which can signal an error using Either String: > import Data.Either > :paste … withError Nothing err = Left err … withError (Just a) _ = Right a … ^D Note: In the Either err applicative functor, the Left constructor indicates an error, and the Right constructor indicates success. Now we can lift over Either String, providing an appropriate error message for each parameter: > :paste … fullNameEither first middle last = … fullName <$> (first `withError` "First name was missing") … <*> (middle `withError` "Middle name was missing") … <*> (last `withError` "Last name was missing") … ^D or with applicative do > :paste … fullNameEither first middle last = ado … f <- first `withError` "First name was missing" … m <- middle `withError` "Middle name was missing" … l <- last `withError` "Last name was missing" … in fullName f m l … ^D > :type fullNameEither Maybe String -> Maybe String -> Maybe String -> Either String String Now our function takes three optional arguments using Maybe, and returns either a String error message or a String result. We can try out the function with different inputs: > fullNameEither (Just "Phillip") (Just "A") (Just "Freeman") (Right "Freeman, Phillip A") > fullNameEither (Just "Phillip") Nothing (Just "Freeman") (Left "Middle name was missing") > fullNameEither (Just "Phillip") (Just "A") Nothing (Left "Last name was missing") In this case, we see the error message corresponding to the first missing field, or a successful result if every field was provided. However, if we are missing multiple inputs, we still only see the first error: > fullNameEither Nothing Nothing Nothing (Left "First name was missing") This might be good enough, but if we want to see a list of all missing fields in the error, then we need something more powerful than Either String. We will see a solution later in this chapter. Combining Effects As an example of working with applicative functors abstractly, this section will show how to write a function which will generically combine side-effects encoded by an applicative functor f. What does this mean? Well, suppose we have a list of wrapped arguments of type f a for some a. That is, suppose we have a list of type List (f a). Intuitively, this represents a list of computations with side-effects tracked by f, each with return type a. If we could run all of these computations in order, we would obtain a list of results of type List a. However, we would still have side-effects tracked by f. That is, we expect to be able to turn something of type List (f a) into something of type f (List a) by "combining" the effects inside the original list. For any fixed list size n, there is a function of n arguments which builds a list of size n out of those arguments. For example, if n is 3, the function is \x y z -> x : y : z : Nil. This function has type a -> a -> a -> List a. We can use the Applicative instance for List to lift this function over f, to get a function of type f a -> f a -> f a -> f (List a). But, since we can do this for any n, it makes sense that we should be able to perform the same lifting for any list of arguments. That means that we should be able to write a function combineList :: forall f a. Applicative f => List (f a) -> f (List a) This function will take a list of arguments, which possibly have side-effects, and return a single wrapped list, applying the side-effects of each. To write this function, we'll consider the length of the list of arguments. If the list is empty, then we do not need to perform any effects, and we can use pure to simply return an empty list: combineList Nil = pure Nil In fact, this is the only thing we can do! If the list is non-empty, then we have a head element, which is a wrapped argument of type f a, and a tail of type List (f a). We can recursively combine the effects in the tail, giving a result of type f (List a). We can then use <$> and <*> to lift the Cons constructor over the head and new tail: combineList (Cons x xs) = Cons <$> x <*> combineList xs Again, this was the only sensible implementation, based on the types we were given. We can test this function in PSCi, using the Maybe type constructor as an example: > import Data.List > import Data.Maybe > combineList (fromFoldable [Just 1, Just 2, Just 3]) (Just (Cons 1 (Cons 2 (Cons 3 Nil)))) > combineList (fromFoldable [Just 1, Nothing, Just 2]) Nothing When specialized to Maybe, our function returns a Just only if every list element was Just, otherwise it returns Nothing. This is consistent with our intuition of working in a larger language supporting optional values - a list of computations which return optional results only has a result itself if every computation contained a result. But the combineList function works for any Applicative! We can use it to combine computations which possibly signal an error using Either err, or which read from a global configuration using r ->. We will see the combineList function again later, when we consider Traversable functors. Exercises - (Medium) Write versions of the numeric operators +, -, *and /which work with optional arguments (i.e. arguments wrapped in Maybe) and return a value wrapped in Maybe. Name these functions addMaybe, subMaybe, mulMaybe, and divMaybe. Hint: Use lift2. - (Medium) Extend the above exercise to work with all Applytypes (not just Maybe). Name these new functions addApply, subApply, mulApply, and divApply. - (Difficult) Write a function combineMaybewhich has type forall a f. Applicative f => Maybe (f a) -> f (Maybe a). This function takes an optional computation with side-effects, and returns a side-effecting computation which has an optional result. Applicative Validation The source code for this chapter defines several data types which might be used in an address book application. The details are omitted here, but the key functions which are exported by the Data.AddressBook module have the following types: address :: String -> String -> String -> Address phoneNumber :: PhoneType -> String -> PhoneNumber person :: String -> String -> Address -> Array PhoneNumber -> Person where PhoneType is defined as an algebraic data type: data PhoneType = HomePhone | WorkPhone | CellPhone | OtherPhone These functions can be used to construct a Person representing an address book entry. For example, the following value is defined in Data.AddressBook: examplePerson :: Person examplePerson = person "John" "Smith" (address "123 Fake St." "FakeTown" "CA") [ phoneNumber HomePhone "555-555-5555" , phoneNumber CellPhone "555-555-0000" ] Test this value in PSCi (this result has been formatted): > import Data.AddressBook > examplePerson { firstName: "John" , lastName: "Smith" , homeAddress: { street: "123 Fake St." , city: "FakeTown" , state: "CA" } , phones: [ { type: HomePhone , number: "555-555-5555" } , { type: CellPhone , number: "555-555-0000" } ] } We saw in a previous section how we could use the Either String functor to validate a data structure of type Person. For example, provided functions to validate the two names in the structure, we might validate the entire data structure as follows: nonEmpty1 :: String -> Either String String nonEmpty1 "" = Left "Field cannot be empty" nonEmpty1 value = Right value validatePerson1 :: Person -> Either String Person validatePerson1 p = person <$> nonEmpty1 p.firstName <*> nonEmpty1 p.lastName <*> pure p.homeAddress <*> pure p.phones or with applicative do validatePerson1Ado :: Person -> Either String Person validatePerson1Ado p = ado f <- nonEmpty1 p.firstName l <- nonEmpty1 p.lastName in person f l p.homeAddress p.phones In the first two lines, we use the nonEmpty1 function to validate a non-empty string. nonEmpty1 returns an error indicated with the Left constructor if its input is empty, otherwise it returns the value wrapped with the Right constructor. The final lines do not perform any validation but simply provide the address and phones fields to the person function as the remaining arguments. This function can be seen to work in PSCi, but has a limitation which we have seen before: > validatePerson $ person "" "" (address "" "" "") [] (Left "Field cannot be empty") The Either String applicative functor only provides the first error encountered. Given the input here, we would prefer to see two errors - one for the missing first name, and a second for the missing last name. There is another applicative functor which is provided by the validation library. This functor is called V, and it provides the ability to return errors in any semigroup. For example, we can use V (Array String) to return an array of Strings as errors, concatenating new errors onto the end of the array. The Data.AddressBook.Validation module uses the V (Array String) applicative functor to validate the data structures in the Data.AddressBook module. Here is an example of a validator taken from the Data.AddressBook.Validation module: type Errors = Array String nonEmpty :: String -> String -> V Errors String nonEmpty field "" = invalid [ "Field '" <> field <> "' cannot be empty" ] nonEmpty _ value = pure value lengthIs :: String -> Int -> String -> V Errors String lengthIs field len value | length value /= len = invalid [ "Field '" <> field <> "' must have length " <> show len ] lengthIs _ _ value = pure value validateAddress :: Address -> V Errors Address validateAddress a = address <$> nonEmpty "Street" a.street <*> nonEmpty "City" a.city <*> lengthIs "State" 2 a.state or with applicative do validateAddressAdo :: Address -> V Errors Address validateAddressAdo a = ado street <- nonEmpty "Street" a.street city <- nonEmpty "City" a.city state <- lengthIs "State" 2 a.state in address street city state validateAddress validates an Address structure. It checks that the street and city fields are non-empty, and checks that the string in the state field has length 2. Notice how the nonEmpty and lengthIs validator functions both use the invalid function provided by the Data.Validation module to indicate an error. Since we are working in the Array String semigroup, invalid takes an array of strings as its argument. We can try this function in PSCi: > import Data.AddressBook > import Data.AddressBook.Validation > validateAddress $ address "" "" "" (invalid [ "Field 'Street' cannot be empty" , "Field 'City' cannot be empty" , "Field 'State' must have length 2" ]) > validateAddress $ address "" "" "CA" (invalid [ "Field 'Street' cannot be empty" , "Field 'City' cannot be empty" ]) This time, we receive an array of all validation errors. Regular Expression Validators The validatePhoneNumber function uses a regular expression to validate the form of its argument. The key is a matches validation function, which uses a Regex from the Data.String.Regex module to validate its input: matches :: String -> Regex -> String -> V Errors String matches _ regex value | test regex value = pure value matches field _ _ = invalid [ "Field '" <> field <> "' did not match the required format" ] Again, notice how pure is used to indicate successful validation, and invalid is used to signal an array of errors. validatePhoneNumber is built from the matches function in the same way as before: validatePhoneNumber :: PhoneNumber -> V Errors PhoneNumber validatePhoneNumber pn = phoneNumber <$> pure pn."type" <*> matches "Number" phoneNumberRegex pn.number or with applicative do validatePhoneNumberAdo :: PhoneNumber -> V Errors PhoneNumber validatePhoneNumberAdo pn = ado tpe <- pure pn."type" number <- matches "Number" phoneNumberRegex pn.number in phoneNumber tpe number Again, try running this validator against some valid and invalid inputs in PSCi: > validatePhoneNumber $ phoneNumber HomePhone "555-555-5555" pure ({ type: HomePhone, number: "555-555-5555" }) > validatePhoneNumber $ phoneNumber HomePhone "555.555.5555" invalid (["Field 'Number' did not match the required format"]) Exercises - (Easy) Write a regular expression stateRegex :: Regexto check that a string only contains two alphabetic characters. Hint: see the source code for phoneNumberRegex. - (Medium) Write a regular expression nonEmptyRegex :: Regexto check that a string is not entirely whitespace. Hint: If you need help developing this regex expression, check out RegExr which has a great cheatsheet and interactive test environment. - (Medium) Write a function validateAddressImprovedthat is similar to validateAddress, but uses the above stateRegexto validate the statefield and nonEmptyRegexto validate the streetand cityfields. Hint: see the source for validatePhoneNumberfor an example of how to use matches. Traversable Functors The remaining validator is validatePerson, which combines the validators we have seen so far to validate an entire Person structure, including the following new validatePhoneNumbers function: validatePhoneNumbers :: String -> Array PhoneNumber -> V Errors (Array PhoneNumber) validatePhoneNumbers field [] = invalid [ "Field '" <> field <> "' must contain at least one value" ] validatePhoneNumbers _ phones = traverse validatePhoneNumber phones validatePerson :: Person -> V Errors Person validatePerson p = person <$> nonEmpty "First Name" p.firstName <*> nonEmpty "Last Name" p.lastName <*> validateAddress p.homeAddress <*> validatePhoneNumbers "Phone Numbers" p.phones or with applicative do validatePersonAdo :: Person -> V Errors Person validatePersonAdo p = ado firstName <- nonEmpty "First Name" p.firstName lastName <- nonEmpty "Last Name" p.lastName address <- validateAddress p.homeAddress numbers <- validatePhoneNumbers "Phone Numbers" p.phones in person firstName lastName address numbers validatePhoneNumbers uses a new function we haven't seen before - traverse. traverse is defined in the Data.Traversable module, in the Traversable type class: class (Functor t, Foldable t) <= Traversable t where traverse :: forall a b m. Applicative m => (a -> m b) -> t a -> m (t b) sequence :: forall a m. Applicative m => t (m a) -> m (t a) Traversable defines the class of traversable functors. The types of its functions might look a little intimidating, but validatePerson provides a good motivating example. Every traversable functor is both a Functor and Foldable (recall that a foldable functor was a type constructor which supported a fold operation, reducing a structure to a single value). In addition, a traversable functor provides the ability to combine a collection of side-effects which depend on its structure. This may sound complicated, but let's simplify things by specializing to the case of arrays. The array type constructor is traversable, which means that there is a function: traverse :: forall a b m. Applicative m => (a -> m b) -> Array a -> m (Array b) Intuitively, given any applicative functor m, and a function which takes a value of type a and returns a value of type b (with side-effects tracked by m), we can apply the function to each element of an array of type Array a to obtain a result of type Array b (with side-effects tracked by m). Still not clear? Let's specialize further to the case where m is the V Errors applicative functor above. Now, we have a function of type traverse :: forall a b. (a -> V Errors b) -> Array a -> V Errors (Array b) This type signature says that if we have a validation function m for a type a, then traverse m is a validation function for arrays of type Array a. But that's exactly what we need to be able to validate the phones field of the Person data structure! We pass validatePhoneNumber to traverse to create a validation function which validates each element successively. In general, traverse walks over the elements of a data structure, performing computations with side-effects and accumulating a result. The type signature for Traversable's other function sequence might look more familiar: sequence :: forall a m. Applicative m => t (m a) -> m (t a) In fact, the combineList function that we wrote earlier is just a special case of the sequence function from the Traversable type class. Setting t to be the type constructor List, we recover the type of the combineList function: combineList :: forall f a. Applicative f => List (f a) -> f (List a) Traversable functors capture the idea of traversing a data structure, collecting a set of effectful computations, and combining their effects. In fact, sequence and traverse are equally important to the definition of Traversable - each can be implemented in terms of each other. This is left as an exercise for the interested reader. The Traversable instance for lists given in the Data.List module is: instance traversableList :: Traversable List where -- traverse :: forall a b m. Applicative m => (a -> m b) -> List a -> m (List b) traverse _ Nil = pure Nil traverse f (Cons x xs) = Cons <$> f x <*> traverse f xs (The actual definition was later modified to improve stack safety. You can read more about that change here.) In the case of an empty list, we can simply return an empty list using pure. If the list is non-empty, we can use the function f to create a computation of type f b from the head element. We can also call traverse recursively on the tail. Finally, we can lift the Cons constructor over the applicative functor m to combine the two results. But there are more examples of traversable functors than just arrays and lists. The Maybe type constructor we saw earlier also has an instance for Traversable. We can try it in PSCi: > import Data.Maybe > import Data.Traversable > import Data.AddressBook.Validation > traverse (nonEmpty "Example") Nothing pure (Nothing) > traverse (nonEmpty "Example") (Just "") invalid (["Field 'Example' cannot be empty"]) > traverse (nonEmpty "Example") (Just "Testing") pure ((Just "Testing")) These examples show that traversing the Nothing value returns Nothing with no validation, and traversing Just x uses the validation function to validate x. That is, traverse takes a validation function for type a and returns a validation function for Maybe a, i.e. a validation function for optional values of type a. Other traversable functors include Array, and Tuple a and Either a for any type a. Generally, most "container" data type constructors have Traversable instances. As an example, the exercises will include writing a Traversable instance for a type of binary trees. Exercises (Easy) Write Eqand Showinstances for the following binary tree data structure: data Tree a = Leaf | Branch (Tree a) a (Tree a) Recall from the previous chapter that you may either write these instances manually or let the compiler derive them for you. There are many "correct" formatting options for Showoutput. The test for this exercise expects the following whitespace style. This happens to match the default formatting of generic show, so you only need to make note of this if you're planning on writing this instance manually. (Branch (Branch Leaf 8 Leaf) 42 Leaf) (Medium) Write a Traversableinstance for Tree a, which combines side-effects from left-to-right. Hint: There are some additional instance dependencies that need to be defined for Traversable. (Medium) Write a function traversePreOrder :: forall a m b. Applicative m => (a -> m b) -> Tree a -> m (Tree b)that performs a pre-order traversal of the tree. This means the order of effect execution is root-left-right, instead of left-root-right as was done for the previous in-order traverse exercise. Hint: No additional instances need to be defined, and you don't need to call any of the the functions defined earlier. Applicative do notation ( ado) is the easiest way to write this function. (Medium) Write a function traversePostOrderthat performs a post-order traversal of the tree where effects are executed left-right-root. (Medium) Create a new version of the Persontype where the homeAddressfield is optional (using Maybe). Then write a new version of validatePerson(renamed as validatePersonOptionalAddress) to validate this new Person. Hint: Use traverseto validate a field of type Maybe a. (Difficult) Write a function sequenceUsingTraversewhich behaves like sequence, but is written in terms of traverse. (Difficult) Write a function traverseUsingSequencewhich behaves like traverse, but is written in terms of sequence. Applicative Functors for Parallelism In the discussion above, I chose the word "combine" to describe how applicative functors "combine side-effects". However, in all the examples given, it would be equally valid to say that applicative functors allow us to "sequence" effects. This would be consistent with the intuition that traversable functors provide a sequence function to combine effects in sequence based on a data structure. However, in general, applicative functors are more general than this. The applicative functor laws do not impose any ordering on the side-effects that their computations perform. In fact, it would be valid for an applicative functor to perform its side-effects in parallel. For example, the V validation functor returned an array of errors, but it would work just as well if we picked the Set semigroup, in which case it would not matter what order we ran the various validators. We could even run them in parallel over the data structure! As a second example, the parallel package provides a type class Parallel which supports parallel computations. Parallel provides a function parallel which uses some Applicative functor to compute the result of its input computation in parallel: f <$> parallel computation1 <*> parallel computation2 This computation would start computing values asynchronously using computation1 and computation2. When both results have been computed, they would be combined into a single result using the function f. We will see this idea in more detail when we apply applicative functors to the problem of callback hell later in the book. Applicative functors are a natural way to capture side-effects which can be combined in parallel. Conclusion In this chapter, we covered a lot of new ideas: - We introduced the concept of an applicative functor which generalizes the idea of function application to type constructors which capture some notion of side-effect. - We saw how applicative functors gave a solution to the problem of validating data structures, and how by switching the applicative functor we could change from reporting a single error to reporting all errors across a data structure. - We met the Traversabletype class, which encapsulates the idea of a traversable functor, or a container whose elements can be used to combine values with side-effects. Applicative functors are an interesting abstraction which provide neat solutions to a number of problems. We will see them a few more times throughout the book. In this case, the validation applicative functor provided a way to write validators in a declarative style, allowing us to define what our validators should validate and not how they should perform that validation. In general, we will see that applicative functors are a useful tool for the design of domain specific languages. In the next chapter, we will see a related idea, the class of monads, and extend our address book example to run in the browser! The Effect Monad Chapter Goals In the last chapter, we introduced applicative functors, an abstraction which we used to deal with side-effects: optional values, error messages and validation. This chapter will introduce another abstraction for dealing with side-effects in a more expressive way: monads. The goal of this chapter is to explain why monads are a useful abstraction, and their connection with do notation. Project Setup The project adds the following dependencies: effect- defines the Effectmonad, the subject of the second half of the chapter. This dependency is often listed in every starter project (it's been a dependency of every chapter so far), so you'll rarely have to explicitly install it. react-basic-hooks- a web framework that we will use for our Address Book app. Monads and Do Notation Do notation was first introduced when we covered array comprehensions. Array comprehensions provide syntactic sugar for the concatMap function from the Data.Array module. Consider the following example. Suppose we throw two dice and want to count the number of ways in which we can score a total of n. We could do this using the following non-deterministic algorithm: - Choose the value xof the first throw. - Choose the value yof the second throw. - If the sum of xand yis nthen return the pair [x, y], else fail. Array comprehensions allow us to write this non-deterministic algorithm in a natural way: import Prelude import Control.Plus (empty) import Data.Array ((..)) countThrows :: Int -> Array (Array Int) countThrows n = do x <- 1 .. 6 y <- 1 .. 6 if x + y == n then pure [ x, y ] else empty We can see that this function works in PSCi: > import Test.Examples > countThrows 10 [[4,6],[5,5],[6,4]] > countThrows 12 [[6,6]] In the last chapter, we formed an intuition for the Maybe applicative functor, embedding PureScript functions into a larger programming language supporting optional values. In the same way, we can form an intuition for the array monad, embedding PureScript functions into a larger programming language supporting non-deterministic choice. In general, a monad for some type constructor m provides a way to use do notation with values of type m a. Note that in the array comprehension above, every line contains a computation of type Array a for some type a. In general, every line of a do notation block will contain a computation of type m a for some type a and our monad m. The monad m must be the same on every line (i.e. we fix the side-effect), but the types a can differ (i.e. individual computations can have different result types). Here is another example of do notation, this type applied to the type constructor Maybe. Suppose we have some type XML representing XML nodes, and a function child :: XML -> String -> Maybe XML which looks for a child element of a node, and returns Nothing if no such element exists. In this case, we can look for a deeply-nested element by using do notation. Suppose we wanted to read a user's city from a user profile which had been encoded as an XML document: userCity :: XML -> Maybe XML userCity root = do prof <- child root "profile" addr <- child prof "address" city <- child addr "city" pure city The userCity function looks for a child element profile, an element address inside the profile element, and finally an element city inside the address element. If any of these elements are missing, the return value will be Nothing. Otherwise, the return value is constructed using Just from the city node. Remember, the pure function in the last line is defined for every Applicative functor. Since pure is defined as Just for the Maybe applicative functor, it would be equally valid to change the last line to Just city. The Monad Type Class The Monad type class is defined as follows: class Apply m <= Bind m where bind :: forall a b. m a -> (a -> m b) -> m b class (Applicative m, Bind m) <= Monad m The key function here is bind, defined in the Bind type class. Just like for the <$> and <*> operators in the Functor and Apply type classes, the Prelude defines an infix alias >>= for the bind function. The Monad type class extends Bind with the operations of the Applicative type class that we have already seen. It will be useful to see some examples of the Bind type class. A sensible definition for Bind on arrays can be given as follows: instance bindArray :: Bind Array where bind xs f = concatMap f xs This explains the connection between array comprehensions and the concatMap function that has been alluded to before. Here is an implementation of Bind for the Maybe type constructor: instance bindMaybe :: Bind Maybe where bind Nothing _ = Nothing bind (Just a) f = f a This definition confirms the intuition that missing values are propagated through a do notation block. Let's see how the Bind type class is related to do notation. Consider a simple do notation block which starts by binding a value from the result of some computation: do value <- someComputation whatToDoNext Every time the PureScript compiler sees this pattern, it replaces the code with this: bind someComputation \value -> whatToDoNext or, written infix: someComputation >>= \value -> whatToDoNext The computation whatToDoNext is allowed to depend on value. If there are multiple binds involved, this rule is applied multiple times, starting from the top. For example, the userCity example that we saw earlier gets desugared as follows: userCity :: XML -> Maybe XML userCity root = child root "profile" >>= \prof -> child prof "address" >>= \addr -> child addr "city" >>= \city -> pure city It is worth noting that code expressed using do notation is often much clearer than the equivalent code using the >>= operator. However, writing binds explicitly using >>= can often lead to opportunities to write code in point-free form - but the usual warnings about readability apply. Monad Laws The Monad type class comes equipped with three laws, called the monad laws. These tell us what we can expect from sensible implementations of the Monad type class. It is simplest to explain these laws using do notation. Identity Laws The right-identity law is the simplest of the three laws. It tells us that we can eliminate a call to pure if it is the last expression in a do notation block: do x <- expr pure x The right-identity law says that this is equivalent to just expr. The left-identity law states that we can eliminate a call to pure if it is the first expression in a do notation block: do x <- pure y next This code is equivalent to next, after the name x has been replaced with the expression y. The last law is the associativity law. It tells us how to deal with nested do notation blocks. It states that the following piece of code: c1 = do y <- do x <- m1 m2 m3 is equivalent to this code: c2 = do x <- m1 y <- m2 m3 Each of these computations involves three monadic expression m1, m2 and m3. In each case, the result of m1 is eventually bound to the name x, and the result of m2 is bound to the name y. In c1, the two expressions m1 and m2 are grouped into their own do notation block. In c2, all three expressions m1, m2 and m3 appear in the same do notation block. The associativity law tells us that it is safe to simplify nested do notation blocks in this way. Note that by the definition of how do notation gets desugared into calls to bind, both of c1 and c2 are also equivalent to this code: c3 = do x <- m1 do y <- m2 m3 Folding With Monads As an example of working with monads abstractly, this section will present a function which works with any type constructor in the Monad type class. This should serve to solidify the intuition that monadic code corresponds to programming "in a larger language" with side-effects, and also illustrate the generality which programming with monads brings. The function we will write is called foldM. It generalizes the foldl function that we met earlier to a monadic context. Here is its type signature: foldM :: forall m a b. Monad m => (a -> b -> m a) -> a -> List b -> m a foldl :: forall a b. (a -> b -> a) -> a -> List b -> a Notice that this is the same as the type of foldl, except for the appearance of the monad m. Intuitively, foldM performs a fold over a list in some context supporting some set of side-effects. For example, if we picked m to be Maybe, then our fold would be allowed to fail by returning Nothing at any stage - every step returns an optional result, and the result of the fold is therefore also optional. If we picked m to be the Array type constructor, then every step of the fold would be allowed to return zero or more results, and the fold would proceed to the next step independently for each result. At the end, the set of results would consist of all folds over all possible paths. This corresponds to a traversal of a graph! To write foldM, we can simply break the input list into cases. If the list is empty, then to produce the result of type a, we only have one option: we have to return the second argument: foldM _ a Nil = pure a Note that we have to use pure to lift a into the monad m. What if the list is non-empty? In that case, we have a value of type a, a value of type b, and a function of type a -> b -> m a. If we apply the function, we obtain a monadic result of type m a. We can bind the result of this computation with a backwards arrow <-. It only remains to recurse on the tail of the list. The implementation is simple: foldM f a (b : bs) = do a' <- f a b foldM f a' bs Note that this implementation is almost identical to that of foldl on lists, with the exception of do notation. We can define and test this function in PSCi. Here is an example - suppose we defined a "safe division" function on integers, which tested for division by zero and used the Maybe type constructor to indicate failure: safeDivide :: Int -> Int -> Maybe Int safeDivide _ 0 = Nothing safeDivide a b = Just (a / b) Then we can use foldM to express iterated safe division: > import Test.Examples > import Data.List (fromFoldable) > foldM safeDivide 100 (fromFoldable [5, 2, 2]) (Just 5) > foldM safeDivide 100 (fromFoldable [2, 0, 4]) Nothing The foldM safeDivide function returns Nothing if a division by zero was attempted at any point. Otherwise it returns the result of repeatedly dividing the accumulator, wrapped in the Just constructor. Monads and Applicatives Every instance of the Monad type class is also an instance of the Apply type class, by virtue of the superclass relationship between the two classes. However, there is also an implementation of the Apply type class which comes "for free" for any instance of Monad, given by the ap function: ap :: forall m a b. Monad m => m (a -> b) -> m a -> m b ap mf ma = do f <- mf a <- ma pure (f a) If m is a law-abiding member of the Monad type class, then there is a valid Apply instance for m given by ap. The interested reader can check that ap agrees with apply for the monads we have already encountered: Array, Maybe and Either e. If every monad is also an applicative functor, then we should be able to apply our intuition for applicative functors to every monad. In particular, we can reasonably expect a monad to correspond, in some sense, to programming "in a larger language" augmented with some set of additional side-effects. We should be able to lift functions of arbitrary arities, using map and apply, into this new language. But monads allow us to do more than we could do with just applicative functors, and the key difference is highlighted by the syntax of do notation. Consider the userCity example again, in which we looked for a user's city in an XML document which encoded their user profile: userCity :: XML -> Maybe XML userCity root = do prof <- child root "profile" addr <- child prof "address" city <- child addr "city" pure city Do notation allows the second computation to depend on the result prof of the first, and the third computation to depend on the result addr of the second, and so on. This dependence on previous values is not possible using only the interface of the Applicative type class. Try writing userCity using only pure and apply: you will see that it is impossible. Applicative functors only allow us to lift function arguments which are independent of each other, but monads allow us to write computations which involve more interesting data dependencies. In the last chapter, we saw that the Applicative type class can be used to express parallelism. This was precisely because the function arguments being lifted were independent of one another. Since the Monad type class allows computations to depend on the results of previous computations, the same does not apply - a monad has to combine its side-effects in sequence. Exercises (Easy) Write a function thirdwhich returns the third element of an array with three or more elements. Your function should return an appropriate Maybetype. Hint: Look up the types of the headand tailfunctions from the Data.Arraymodule in the arrayspackage. Use do notation with the Maybemonad to combine these functions. (Medium) Write a function possibleSumswhich uses foldMto determine all possible totals that could be made using a set of coins. The coins will be specified as an array which contains the value of each coin. Your function should have the following result: > possibleSums [] [0] > possibleSums [1, 2, 10] [0,1,2,3,10,11,12,13] Hint: This function can be written as a one-liner using foldM. You might want to use the nuband sortfunctions to remove duplicates and sort the result respectively. (Medium) Confirm that the apfunction and the applyoperator agree for the Maybemonad. Note: There are no tests for this exercise. (Medium) Verify that the monad laws hold for the Monadinstance for the Maybetype, as defined in the maybepackage. Note: There are no tests for this exercise. (Medium) Write a function filterMwhich generalizes the filterfunction on lists. Your function should have the following type signature: filterM :: forall m a. Monad m => (a -> m Boolean) -> List a -> m (List a) (Difficult) Every monad has a default Functorinstance given by: map f a = do x <- a pure (f x) Use the monad laws to prove that for any monad, the following holds: lift2 f (pure a) (pure b) = pure (f a b) where the Applyinstance uses the apfunction defined above. Recall that lift2was defined as follows: lift2 :: forall f a b c. Apply f => (a -> b -> c) -> f a -> f b -> f c lift2 f a b = f <$> a <*> b Note: There are no tests for this exercise. Native Effects We will now look at one particular monad which is of central importance in PureScript - the Effect monad. The Effect monad is defined in the Effect module. It is used to manage so-called native side-effects. If you are familiar with Haskell, it is the equivalent of the IO monad. What are native side-effects? They are the side-effects which distinguish JavaScript expressions from idiomatic PureScript expressions, which typically are free from side-effects. Some examples of native effects are: - Console IO - Random number generation - Exceptions - Reading/writing mutable state And in the browser: - DOM manipulation - XMLHttpRequest / AJAX calls - Interacting with a websocket - Writing/reading to/from local storage We have already seen plenty of examples of "non-native" side-effects: - Optional values, as represented by the Maybedata type - Errors, as represented by the Eitherdata type - Multi-functions, as represented by arrays or lists Note that the distinction is subtle. It is true, for example, that an error message is a possible side-effect of a JavaScript expression, in the form of an exception. In that sense, exceptions do represent native side-effects, and it is possible to represent them using Effect. However, error messages implemented using Either are not a side-effect of the JavaScript runtime, and so it is not appropriate to implement error messages in that style using Effect. So it is not the effect itself which is native, but rather how it is implemented at runtime. Side-Effects and Purity In a pure language like PureScript, one question which presents itself is: without side-effects, how can one write useful real-world code? The answer is that PureScript does not aim to eliminate side-effects. It aims to represent side-effects in such a way that pure computations can be distinguished from computations with side-effects in the type system. In this sense, the language is still pure. Values with side-effects have different types from pure values. As such, it is not possible to pass a side-effecting argument to a function, for example, and have side-effects performed unexpectedly. The only way in which side-effects managed by the Effect monad will be presented is to run a computation of type Effect a from JavaScript. The Spago build tool (and other tools) provide a shortcut, by generating additional JavaScript to invoke the main computation when the application starts. main is required to be a computation in the Effect monad. The Effect Monad The Effect monad provides a well-typed API for computations with side-effects, while at the same time generating efficient JavaScript. Let's take a closer look at the return type of the familiar log function. Effect indicates that this function produces a native effect, console IO in this case. Unit indicates that no meaningful data is returned. You can think of Unit as being analogous to the void keyword in other languages, such as C, Java, etc. log :: String -> Effect Unit Aside: You may encounter IDE suggestions for the more general (and more elaborately typed) logfunction from Effect.Class.Console. This is interchangeable with the one from Effect.Consolewhen dealing with the basic Effectmonad. Reasons for the more general version will become clearer after reading about "Monad Transformers" in the "Monadic Adventures" chapter. For the curious (and impatient), this works because there's a MonadEffectinstance for Effect. log :: forall m. MonadEffect m => String -> m Unit Now let's consider an Effect that returns meaningful data. The random function from Effect.Random produces a random Number. random :: Effect Number Here's a full example program (found in test/Random.purs of this chapter's exercises folder). module Test.Random where import Prelude import Effect (Effect) import Effect.Random (random) import Effect.Console (logShow) main :: Effect Unit main = do n <- random logShow n Because Effect is a monad, we use do notation to unwrap the data it contains before passing this data on to the effectful logShow function. As a refresher, here's the equivalent code written using the bind operator: main :: Effect Unit main = random >>= logShow Try running this yourself with: spago run --main Test.Random You should see a randomly chosen number between 0.0 and 1.0 printed to the console. Aside: spago rundefaults to searching in the Mainmodule for a mainfunction. You may also specify an alternate module as an entry point with the --mainflag, as is done in the above example. Just be sure that this alternate module also contains a mainfunction. Note that it's also possible to generate "random" (technically pseudorandom) data without resorting to impure effectful code. We'll cover these techniques in the "Generative Testing" chapter. As mentioned previously, the Effect monad is of central importance to PureScript. The reason why it's central is because it is the conventional way to interoperate with PureScript's Foreign Function Interface, which provides the mechanism to execute a program and perform side effects. While it's desireable to avoid using the Foreign Function Interface, it's fairly critical to understand how it works and how to use it, so I recommend reading that chapter before doing any serious PureScript work. That said, the Effect monad is fairly simple. It has a few helper functions, but aside from that it doesn't do much except encapsulate side effects. Exceptions Let's examine a function from the node-fs package that involves two native side effects: reading mutable state, and exceptions: readTextFile :: Encoding -> String -> Effect String If we attempt to read a file that does not exist: import Node.Encoding (Encoding(..)) import Node.FS.Sync (readTextFile) main :: Effect Unit main = do lines <- readTextFile UTF8 "iDoNotExist.md" log lines We encounter the following exception: throw err; ^ Error: ENOENT: no such file or directory, open 'iDoNotExist.md' ... errno: -2, syscall: 'open', code: 'ENOENT', path: 'iDoNotExist.md' To manage this exception gracefully, we can wrap the potentially problematic code in try to handle either outcome: main :: Effect Unit main = do result <- try $ readTextFile UTF8 "iDoNotExist.md" case result of Right lines -> log $ "Contents: \n" <> lines Left error -> log $ "Couldn't open file. Error was: " <> message error try runs an Effect and returns eventual exceptions as a Left value. If the computation succeeds, the result gets wrapped in a Right: try :: forall a. Effect a -> Effect (Either Error a) We can also generate our own exceptions. Here is an alternative implementation of Data.List.head which throws an exception if the list is empty, rather than returning a Maybe value of Nothing. exceptionHead :: List Int -> Effect Int exceptionHead l = case l of x : _ -> pure x Nil -> throwException $ error "empty list" Note that the exceptionHead function is a somewhat impractical example, as it is best to avoid generating exceptions in PureScript code and instead use non-native effects such as Either and Maybe to manage errors and missing values. Mutable State There is another effect defined in the core libraries: the ST effect. The ST effect is used to manipulate mutable state. As pure functional programmers, we know that shared mutable state can be problematic. However, the ST effect uses the type system to restrict sharing in such a way that only safe local mutation is allowed. The ST effect is defined in the Control.Monad.ST module. To see how it works, we need to look at the types of its actions: new :: forall a r. a -> ST r (STRef r a) read :: forall a r. STRef r a -> ST r a write :: forall a r. a -> STRef r a -> ST r a modify :: forall r a. (a -> a) -> STRef r a -> ST r a new is used to create a new mutable reference cell of type STRef r a, which can be read using the read action, and modified using the write and modify actions. The type a is the type of the value stored in the cell, and the type r is used to indicate a memory region (or heap) in the type system. Here is an example. Suppose we want to simulate the movement of a particle falling under gravity by iterating a simple update function over a large number of small time steps. We can do this by creating a mutable reference cell to hold the position and velocity of the particle, and then using a for loop to update the value stored in that cell: import Prelude import Control.Monad.ST.Ref (modify, new, read) import Control.Monad.ST (ST, for, run) simulate :: forall r. Number -> Number -> Int -> ST r Number simulate x0 v0 time = do ref <- new { x: x0, v: v0 } for 0 (time * 1000) \_ -> modify ( \o -> { v: o.v - 9.81 * 0.001 , x: o.x + o.v * 0.001 } ) ref final <- read ref pure final.x At the end of the computation, we read the final value of the reference cell, and return the position of the particle. Note that even though this function uses mutable state, it is still a pure function, so long as the reference cell ref is not allowed to be used by other parts of the program. We will see that this is exactly what the ST effect disallows. To run a computation with the ST effect, we have to use the run function: run :: forall a. (forall r. ST r a) -> a The thing to notice here is that the region type r is quantified inside the parentheses on the left of the function arrow. That means that whatever action we pass to run has to work with any region r whatsoever. However, once a reference cell has been created by new, its region type is already fixed, so it would be a type error to try to use the reference cell outside the code delimited by run. This is what allows run to safely remove the ST effect, and turn simulate into a pure function! simulate' :: Number -> Number -> Int -> Number simulate' x0 v0 time = run (simulate x0 v0 time) You can even try running this function in PSCi: > import Main > simulate' 100.0 0.0 0 100.00 > simulate' 100.0 0.0 1 95.10 > simulate' 100.0 0.0 2 80.39 > simulate' 100.0 0.0 3 55.87 > simulate' 100.0 0.0 4 21.54 In fact, if we inline the definition of simulate at the call to run, as follows: simulate :: Number -> Number -> Int -> Number simulate x0 v0 time = run do ref <- new { x: x0, v: v0 } for 0 (time * 1000) \_ -> modify ( \o -> { v: o.v - 9.81 * 0.001 , x: o.x + o.v * 0.001 } ) ref final <- read ref pure final.x then the compiler will notice that the reference cell is not allowed to escape its scope, and can safely turn ref into a var. Here is the generated JavaScript for simulate inlined with run: var simulate = function (x0) { return function (v0) { return function (time) { return (function __do() { var ref = { value: { x: x0, v: v0 } }; Control_Monad_ST_Internal["for"](0)(time * 1000 | 0)(function (v) { return Control_Monad_ST_Internal.modify(function (o) { return { v: o.v - 9.81 * 1.0e-3, x: o.x + o.v * 1.0e-3 }; })(ref); })(); return ref.value.x; })(); }; }; }; Note that this resulting JavaScript is not as optimal as it could be. See this issue for more details. The above snippet should be updated once that issue is resolved. For comparison, this is the generated JavaScript of the non-inlined form: var simulate = function (x0) { return function (v0) { return function (time) { return function __do() { var ref = Control_Monad_ST_Internal["new"]({ x: x0, v: v0 })(); Control_Monad_ST_Internal["for"](0)(time * 1000 | 0)(function (v) { return Control_Monad_ST_Internal.modify(function (o) { return { v: o.v - 9.81 * 1.0e-3, x: o.x + o.v * 1.0e-3 }; })(ref); })(); var $$final = Control_Monad_ST_Internal.read(ref)(); return $$final.x; }; }; }; }; The ST effect is a good way to generate short JavaScript when working with locally-scoped mutable state, especially when used together with actions like for, foreach, and while which generate efficient loops. Exercises - (Medium) Rewrite the safeDividefunction as exceptionDivideand throw an exception using throwExceptionwith the message "div zero"if the denominator is zero. - (Medium) Write a function estimatePi :: Int -> Numberthat uses nterms of the Gregory Series to calculate an approximation of pi. Hints: You can pattern your answer like the definition of simulateabove. You might need to convert an Intinto a Numberusing toNumber :: Int -> Numberfrom Data.Int. - (Medium) Write a function fibonacci :: Int -> Intto compute the nth Fibonacci number, using STto track the values of the previous two Fibonacci numbers. Using PSCi, compare the speed of your new ST-based implementation against the recursive implementation ( fib) from Chapter 4. DOM Effects In the final sections of this chapter, we will apply what we have learned about effects in the Effect monad to the problem of working with the DOM. There are a number of PureScript packages for working directly with the DOM, or with open-source DOM libraries. For example: web-domprovides type definitions and low level interface implementations for the W3C DOM spec. web-htmlprovides type definitions and low level interface implementations for the W3C HTML5 spec. jqueryis a set of bindings to the jQuery library. There are also PureScript libraries which build abstractions on top of these libraries, such as thermite, which builds on react react-basic-hooks, which builds on react-basic halogenwhich provides a type-safe set of abstractions on top of a custom virtual DOM library. In this chapter, we will use the react-basic-hooks library to add a user interface to our address book application, but the interested reader is encouraged to explore alternative approaches. An Address Book User Interface Using the react-basic-hooks library, we will define our application as a React component. React components describe HTML elements in code as pure data structures, which are then efficiently rendered to the DOM. In addition, components can respond to events like button clicks. The react-basic-hooks library uses the Effect monad to describe how to handle these events. A full tutorial for the React library is well beyond the scope of this chapter, but the reader is encouraged to consult its documentation where needed. For our purposes, React will provide a practical example of the Effect monad. We are going to build a form which will allow a user to add a new entry into our address book. The form will contain text boxes for the various fields (first name, last name, city, state, etc.), and an area in which validation errors will be displayed. As the user types text into the text boxes, the validation errors will be updated. To keep things simple, the form will have a fixed shape: the different phone number types (home, cell, work, other) will be expanded into separate text boxes. You can launch the web app from the exercises/chapter8 directory with the following commands: $ npm install $ npx spago build $ npx parcel src/index.html --open If development tools such as spago and parcel are installed globally, then the npx prefix may be omitted. You have likely already installed spago globally with npm i -g spago, and the same can be done for parcel. parcel should launch a browser window with our "Address Book" app. If you keep the parcel terminal open, and rebuild with spago in another terminal, the page should automatically refresh with your latest edits. You can also configure automatic rebuilds (and therefore automatic page refresh) on file-save if you're using an editor that supports purs ide or are running pscid. In this Address Book app, you should be able to enter some values into the form fields and see the validation errors printed onto the page. Let's explore how it works. The src/index.html file is minimal: <!DOCTYPE html> <html> <head> <meta charset="UTF-8"> <title>Address Book</title> <link rel="stylesheet" href="" crossorigin="anonymous"> </head> <body> <div id="container"></div> <script type="module" src="./index.js"></script> </body> </html> The <script line includes the JavaScript entry point, index.js, which contains this single line: import { main } from "../output/Main/index.js"; main(); It calls our generated JavaScript equivalent of the main function of module Main ( src/main.purs). Recall that spago build puts all generated JavaScript in the output directory. The main function uses the DOM and HTML APIs to render our address book component within the container element we defined in index.html: main :: Effect Unit main = do log "Rendering address book component" -- Get window object w <- window -- Get window's HTML document doc <- document w -- Get "container" element in HTML ctr <- getElementById "container" $ toNonElementParentNode doc case ctr of Nothing -> throw "Container element not found." Just c -> do -- Create AddressBook react component addressBookApp <- mkAddressBookApp let -- Create JSX node from react component. Pass-in empty props app = element addressBookApp {} -- Render AddressBook JSX node in DOM "container" element D.render app c Note that these three lines: w <- window doc <- document w ctr <- getElementById "container" $ toNonElementParentNode doc Can be consolidated to: doc <- document =<< window ctr <- getElementById "container" $ toNonElementParentNode doc Or consolidated even further to: ctr <- getElementById "container" <<< toNonElementParentNode =<< document =<< window -- or, equivalently: ctr <- window >>= document >>= toNonElementParentNode >>> getElementById "container" It is a matter of personal preference whether the intermediate w and doc variables aid in readability. Let's dig into our AddressBook reactComponent. We'll start with a simplified component, and then build up to the actual code in Main.purs. Take a look at this minimal component. Feel free to substitute the full component with this one to see it run: mkAddressBookApp :: Effect (ReactComponent {}) mkAddressBookApp = reactComponent "AddressBookApp" (\props -> pure $ D.text "Hi! I'm an address book") reactComponent has this intimidating signature: reactComponent :: forall hooks props. Lacks "children" props => Lacks "key" props => Lacks "ref" props => String -> ({ | props } -> Render Unit hooks JSX) -> Effect (ReactComponent { | props }) The important points to note are the arguments after all the type class constraints. It takes a String (an arbitrary component name), a function that describes how to convert props into rendered JSX, and returns our ReactComponent wrapped in an Effect. The props-to-JSX function is simply: \props -> pure $ D.text "Hi! I'm an address book" props are ignored, D.text returns JSX, and pure lifts to rendered JSX. Now component has everything it needs to produce the ReactComponent. Next we'll examine some of the additional complexities of the full Address Book component. These are the first few lines of our full component: mkAddressBookApp :: Effect (ReactComponent {}) mkAddressBookApp = do reactComponent "AddressBookApp" \props -> R.do Tuple person setPerson <- useState examplePerson We track person as a piece of state with the useState hook. Tuple person setPerson <- useState examplePerson Note that you are free to break-up component state into multiple pieces of state with multiple calls to useState. For example, we could rewrite this app to use a separate piece of state for each record field of Person, but that happens to result in a slightly less convenient architecture in this case. In other examples, you may encounter the /\ infix operator for Tuple. This is equivalent to the above line: firstName /\ setFirstName <- useState p.firstName useState takes a default initial value and returns the current value and a way to update the value. We can check the type of useState to gain more insight of the types person and setPerson: useState :: forall state. state -> Hook (UseState state) (Tuple state ((state -> state) -> Effect Unit)) We can strip the Hook (UseState state) wrapper off of the return value because useState is called within an R.do block. We'll elaborate on R.do later. So now we can observe the following signatures: person :: state setPerson :: (state -> state) -> Effect Unit The specific type of state is determined by our initial default value. Person Record in this case because that is the type of examplePerson. person is how we access the current state at each rerender. setPerson is how we update the state. We simply provide a function that describes how to transform the current state to the new state. The record update syntax is perfect for this when the type of state happens to be a Record, for example: setPerson (\currentPerson -> currentPerson {firstName = "NewName"}) or as shorthand: setPerson _ {firstName = "NewName"} Non- Record states can also follow this update pattern. See this guide for more details on best practices. Recall that useState is used within an R.do block. R.do is a special react hooks variant of do. The R. prefix "qualifies" this as coming from React.Basic.Hooks, and means we use their hooks-compatible version of bind in the R.do block. This is known as a "qualified do". It lets us ignore the Hook (UseState state) wrapping and bind the inner Tuple of values to variables. Another possible state management strategy is with useReducer, but that is outside the scope of this chapter. Rendering JSX occurs here: pure $ D.div { className: "container" , children: renderValidationErrors errors <> [ D.div { className: "row" , children: [ D.form_ $ [ D.h3_ [ D.text "Basic Information" ] , formField "First Name" "First Name" person.firstName \s -> setPerson _ { firstName = s } , formField "Last Name" "Last Name" person.lastName \s -> setPerson _ { lastName = s } , D.h3_ [ D.text "Address" ] , formField "Street" "Street" person.homeAddress.street \s -> setPerson _ { homeAddress { street = s } } , formField "City" "City" person.homeAddress.city \s -> setPerson _ { homeAddress { city = s } } , formField "State" "State" person.homeAddress.state \s -> setPerson _ { homeAddress { state = s } } , D.h3_ [ D.text "Contact Information" ] ] <> renderPhoneNumbers ] } ] } Here we produce JSX which represents the intended state of the DOM. This JSX is typically created by applying functions corresponding to HTML tags (e.g. div, form, h3, li, ul, label, input) which create single HTML elements. These HTML elements are actually React components themselves, converted to JSX. There are usually three variants of each of these functions: div_: Accepts an array of child elements. Uses default attributes. div: Accepts a Recordof attributes. An array of child elements may be passed to the childrenfield of this record. div': Same as div, but returns the ReactComponentbefore conversion to JSX. To display validation errors (if any) at the top of our form, we create a renderValidationErrors helper function that turns the Errors structure into an array of JSX. This array is prepended to the rest of our form. renderValidationErrors :: Errors -> Array R.JSX renderValidationErrors [] = [] renderValidationErrors xs = let renderError :: String -> R.JSX renderError err = D.li_ [ D.text err ] in [ D.div { className: "alert alert-danger row" , children: [ D.ul_ (map renderError xs) ] } ] Note that since we are simply manipulating regular data structures here, we can use functions like map to build up more interesting elements: children: [ D.ul_ (map renderError xs)] We use the className property to define classes for CSS styling. We're using the Bootstrap stylesheet for this project, which is imported in index.html. For example, we want items in our form arranged as rows, and validation errors to be emphasized with alert-danger styling: className: "alert alert-danger row" A second helper function is formField, which creates a text input for a single form field: formField :: String -> String -> String -> (String -> Effect Unit) -> R.JSX formField name placeholder value setValue = D.label { className: "form-group row" , children: [ D.div { className: "col-sm col-form-label" , children: [ D.text name ] } , D.div { className: "col-sm" , children: [ D.input { className: "form-control" , placeholder , value , onChange: let handleValue :: Maybe String -> Effect Unit handleValue (Just v) = setValue v handleValue Nothing = pure unit in handler targetValue handleValue } ] } ] } Putting the input and display text in a label aids in accessibility for screen readers. The onChange attribute allows us to describe how to respond to user input. We use the handler function, which has the following type: handler :: forall a. EventFn SyntheticEvent a -> (a -> Effect Unit) -> EventHandler For the first argument to handler we use targetValue, which provides the value of the text within the HTML input element. It matches the signature expected by handler where the type variable a in this case is Maybe String: targetValue :: EventFn SyntheticEvent (Maybe String) In JavaScript, the input element's onChange event is actually accompanied by a String value, but since strings in JavaScript can be null, Maybe is used for safety. The second argument to handler, (a -> Effect Unit), must therefore have this signature: Maybe String -> Effect Unit It is a function that describes how to convert this Maybe String value into our desired effect. We define a custom handleValue function for this purpose and pass it to handler as follows: onChange: let handleValue :: Maybe String -> Effect Unit handleValue (Just v) = setValue v handleValue Nothing = pure unit in handler targetValue handleValue setValue is the function we provided to each formField call that takes a string and makes the appropriate record-update call to the setPerson hook. Note that handleValue can be substituted as: onChange: handler targetValue $ traverse_ setValue Feel free to investigate the definition of traverse_ to see how both forms are indeed equivalent. That covers the basics of our component implementation. However, you should read the source accompanying this chapter in order to get a full understanding of the way the component works. Obviously, this user interface can be improved in a number of ways. The exercises will explore some ways in which we can make the application more usable. Exercises Modify src/Main.purs in the following exercises. There are no unit tests for these exercises. (Easy) Modify the application to include a work phone number text box. (Medium) Right now the application shows validation errors collected in a single "pink-alert" background. Modify to give each validation error its own pink-alert background by separating them with blank lines. Hint: Instead of using a ulelement to show the validation errors in a list, modify the code to create one divwith the alertand alert-dangerstyles for each error. (Difficult, Extended) One problem with this user interface is that the validation errors are not displayed next to the form fields they originated from. Modify the code to fix this problem. Hint: the error type returned by the validator should be extended to indicate which field caused the error. You might want to use the following modified Errorstype: data Field = FirstNameField | LastNameField | StreetField | CityField | StateField | PhoneField PhoneType data ValidationError = ValidationError String Field type Errors = Array ValidationError You will need to write a function which extracts the validation error for a particular Fieldfrom the Errorsstructure. Conclusion This chapter has covered a lot of ideas about handling side-effects in PureScript: - We met the Monadtype class, and its connection to do notation. - We introduced the monad laws, and saw how they allow us to transform code written using do notation. - We saw how monads can be used abstractly, to write code which works with different side-effects. - We saw how monads are examples of applicative functors, how both allow us to compute with side-effects, and the differences between the two approaches. - The concept of native effects was defined, and we met the Effectmonad, which is used to handle native side-effects. - We used the Effectmonad to handle a variety of effects: random number generation, exceptions, console IO, mutable state, and DOM manipulation using React. The Effect monad is a fundamental tool in real-world PureScript code. It will be used in the rest of the book to handle side-effects in a number of other use-cases. Asynchronous Effects Chapter Goals This chapter focuses on the Aff monad, which is similar to the Effect monad, but represents asynchronous side-effects. We'll demonstrate examples of asynchronously interacting with the filesystem and making HTTP requests. We'll also cover how to manage sequential and parallel execution of asynchronous effects. Project Setup New PureScript libraries introduced in this chapter are: aff- defines the Affmonad. node-fs-aff- asynchronous filesystem operations with Aff. affjax- HTTP requests with AJAX and Aff. parallel- parallel execution of Aff. When running outside of the browser (such as in our Node.js environment), the affjax library requires the xhr2 NPM module. Install that by running: $ npm install Asynchronous JavaScript A convenient way to work with asynchronous code in JavaScript is with async and await. See this article on asynchronous JavaScript for more background information. Here is an example of using this technique to copy the contents of one file to another file: import { promises as fsPromises } from 'fs' async function copyFile(file1, file2) { let data = await fsPromises.readFile(file1, { encoding: 'utf-8' }); fsPromises.writeFile(file2, data, { encoding: 'utf-8' }); } copyFile('file1.txt', 'file2.txt') .catch(e => { console.log('There was a problem with copyFile: ' + e.message); }); It is also possible to use callbacks or synchronous functions, but those are less desireable because: - Callbacks lead to excessive nesting, known as "Callback Hell" or the "Pyramid of Doom". - Synchronous functions block execution of the other code in your app. Asynchronous PureScript The Aff monad in PureScript offers similar ergonomics of JavaScript's async/ await syntax. Here is the same copyFile example from before, but rewritten in PureScript using Aff: import Prelude import Data.Either (Either(..)) import Effect.Aff (Aff, attempt, message) import Effect.Class.Console (log) import Node.Encoding (Encoding(..)) import Node.FS.Aff (readTextFile, writeTextFile) import Node.Path (FilePath) copyFile :: FilePath -> FilePath -> Aff Unit copyFile file1 file2 = do my_data <- readTextFile UTF8 file1 writeTextFile UTF8 file2 my_data main :: Aff Unit main = do result <- attempt $ copyFile "file1.txt" "file2.txt" case result of Left e -> log $ "There was a problem with copyFile: " <> message e _ -> pure unit It is also possible to re-write the above snippet using callbacks or synchronous functions (for example with Node.FS.Async and Node.FS.Sync respectively), but those share the same downsides as discussed earlier with JavaScript, and so that coding style is not recommended. The syntax for working with Aff is very similar to working with Effect. They are both monads, and can therefore be written with do notation. For example, if we look at the signature of readTextFile, we see that it returns the file contents as a String wrapped in Aff: readTextFile :: Encoding -> FilePath -> Aff String We can "unwrap" the returned string with a bind arrow ( <-) in do notation: my_data <- readTextFile UTF8 file1 Then pass it as the string argument to writeTextFile: writeTextFile :: Encoding -> FilePath -> String -> Aff Unit The only other notable feature unique to Aff in the above example is attempt, which captures errors or exceptions encountered while running Aff code and stores them in an Either: attempt :: forall a. Aff a -> Aff (Either Error a) You should hopefully be able to draw on your knowledge of concepts from previous chapters and combine this with the new Aff patterns learned in the above copyFile example to tackle the following exercises: Exercises (Easy) Write a concatenateFilesfunction which concatenates two text files. (Medium) Write a function concatenateManyto concatenate multiple text files, given an array of input file names and an output file name. Hint: use traverse. (Medium) Write a function countCharacters :: FilePath -> Aff (Either Error Int)that returns the number of characters in a file, or an error if one is encountered. Additional Aff Resources If you haven't already taken a look at the official Aff guide, skim through that now. It's not a direct prerequisite for completing the remaining exercises in this chapter, but you may find it helpful to lookup some functions on Pursuit. You're also welcome to consult these supplemental resources too, but again, the exercises in this chapter don't depend on them: A HTTP Client The affjax library offers a convenient way to make asynchronous AJAX HTTP requests with Aff. Depending on what environment you are targeting you need to use either the purescript-affjax-web or the purescript-affjax-node library. In the rest of this chapter we will be targeting node and thus using purescript-affjax-node. Consult the Affjax docs for more usage information. Here is an example that makes HTTP GET requests at a provided URL and returns the response body or an error message: import Prelude import Affjax.Node as AN import Affjax.ResponseFormat as ResponseFormat import Data.Either (Either(..)) import Effect.Aff (Aff) getUrl :: String -> Aff String getUrl url = do result <- AN.get ResponseFormat.string url pure case result of Left err -> "GET /api response failed to decode: " <> AN.printError err Right response -> response.body When calling this in the repl, launchAff_ is required to convert the Aff to a repl-compatible Effect: $ spago repl > :pa … import Prelude … import Effect.Aff (launchAff_) … import Effect.Class.Console (log) … import Test.HTTP (getUrl) … … launchAff_ do … str <- getUrl "" … log str … unit {"data":{"id":1,"email":"george.bluth@reqres.in","first_name":"George","last_name":"Bluth", ...}} Exercises - (Easy) Write a function writeGetwhich makes an HTTP GETrequest to a provided url, and writes the response body to a file. Parallel Computations We've seen how to use the Aff monad and do notation to compose asynchronous computations in sequence. It would also be useful to be able to compose asynchronous computations in parallel. With Aff, we can compute in parallel simply by initiating our two computations one after the other. The parallel package defines a type class Parallel for monads like Aff which support parallel execution. When we met applicative functors earlier in the book, we observed how applicative functors can be useful for combining parallel computations. In fact, an instance for Parallel defines a correspondence between a monad m (such as Aff) and an applicative functor f which can be used to combine computations in parallel: class (Monad m, Applicative f) <= Parallel f m | m -> f, f -> m where sequential :: forall a. f a -> m a parallel :: forall a. m a -> f a The class defines two functions: parallel, which takes computations in the monad mand turns them into computations in the applicative functor f, and sequential, which performs a conversion in the opposite direction. The aff library provides a Parallel instance for the Aff monad. It uses mutable references to combine Aff actions in parallel, by keeping track of which of the two continuations has been called. When both results have been returned, we can compute the final result and pass it to the main continuation. Because applicative functors support lifting of functions of arbitrary arity, we can perform more computations in parallel by using the applicative combinators. We can also benefit from all of the standard library functions which work with applicative functors, such as traverse and sequence! We can also combine parallel computations with sequential portions of code, by using applicative combinators in a do notation block, or vice versa, using parallel and sequential to change type constructors where appropriate. To demonstrate the difference between sequential and parallel execution, we'll create an array of 100 10-millisecond delays, then execute those delays with both techniques. You'll notice in the repl that seqDelay is much slower than parDelay. Note that parallel execution is enabled by simply replacing sequence_ with parSequence_. import Prelude import Control.Parallel (parSequence_) import Data.Array (replicate) import Data.Foldable (sequence_) import Effect (Effect) import Effect.Aff (Aff, Milliseconds(..), delay, launchAff_) delayArray :: Array (Aff Unit) delayArray = replicate 100 $ delay $ Milliseconds 10.0 seqDelay :: Effect Unit seqDelay = launchAff_ $ sequence_ delayArray parDelay :: Effect Unit parDelay = launchAff_ $ parSequence_ delayArray $ spago repl > import Test.ParallelDelay > seqDelay -- This is slow unit > parDelay -- This is fast unit Here's a more real-world example of making multiple HTTP requests in parallel. We're reusing our getUrl function to fetch information from two users in parallel. Note that parTraverse (the parallel version of traverse) is used in this case. This example would also work fine with traverse instead, but it will be slower. import Prelude import Control.Parallel (parTraverse) import Effect (Effect) import Effect.Aff (launchAff_) import Effect.Class.Console (logShow) import Test.HTTP (getUrl) fetchPar :: Effect Unit fetchPar = launchAff_ do let urls = map (\n -> "" <> show n) [ 1, 2 ] res <- parTraverse getUrl urls logShow res $ spago repl > import Test.ParallelFetch > fetchPar unit ["{\"data\":{\"id\":1,\"email\":\"george.bluth@reqres.in\", ... }" ,"{\"data\":{\"id\":2,\"email\":\"janet.weaver@reqres.in\", ... }" ] A full listing of available parallel functions can be found in the parallel docs on Pursuit. The aff docs section on parallel also contains more examples. Exercises (Easy) Write a concatenateManyParallelfunction which has the same signature as the earlier concatenateManyfunction, but reads all input files in parallel. (Medium) Write a getWithTimeout :: Number -> String -> Aff (Maybe String)function which makes an HTTP GETrequest at the provided URL and returns either: Nothing: if the request takes longer than the provided timeout (in milliseconds). - The string response: if the request succeeds before the timeout elapses. (Difficult) Write a recurseFilesfunction which takes a "root" file and returns an array of all paths listed in that file (and listed in the listed files too). Read listed files in parallel. Paths are relative to the directory of the file they appear in. Hint: The node-pathmodule has some helpful functions for negotiating directories. For example, if starting from the following root.txt file: $ cat root.txt a.txt b/a.txt c/a/a.txt $ cat a.txt b/b.txt $ cat b/b.txt c/a.txt $ cat b/c/a.txt $ cat b/a.txt $ cat c/a/a.txt The expected output is: ["root.txt","a.txt","b/a.txt","b/b.txt","b/c/a.txt","c/a/a.txt"] Conclusion In this chapter we covered asynchronous effects and learned how to: - Run asynchronous code in the Affmonad with the afflibrary. - Make HTTP requests asynchronously with the affjaxlibrary. - Run asynchronous code in parallel with the parallellibrary. The Foreign Function Interface Chapter Goals This chapter will introduce PureScript's foreign function interface (or FFI), which enables communication from PureScript code to JavaScript code, and vice versa. We will cover how to: - Call pure, effectful, and asynchronous JavaScript functions from PureScript. - Work with untyped data. - Encode and parse JSON using the argonautpackage. Towards the end of this chapter, we will revisit our recurring address book example. The goal of the chapter will be to add the following new functionality to our application using the FFI: - Alert the user with a popup notification. - Store the serialized form data in the browser's local storage, and reload it when the application restarts. There is also an addendum which covers some additional topics which are not as commonly sought-after. Feel free to read these sections, but don't let them stand in the way of progressing through the remainder of the book if they're less relevant to your learning objectives: - Understand the representation of PureScript values at runtime. - Call PureScript functions from JavaScript. Project Setup The source code for this module is a continuation of the source code from chapters 3, 7 and 8. As such, the source tree includes the appropriate source files from those chapters. This chapter introduces the argonaut library as a dependency. This library is used for encoding and decoding JSON. The exercises for this chapter should be written in test/MySolutions.purs and can be checked against the unit tests in test/Main.purs by running spago test. The Address Book app can be launched with parcel src/index.html --open. It uses the same workflow from Chapter 8, so refer to that chapter for more detailed instructions. A Disclaimer PureScript provides a straightforward foreign function interface to make working with JavaScript as simple as possible. However, it should be noted that the FFI is an advanced feature of the language. To use it safely and effectively, you should have an understanding of the runtime representation of the data you plan to work with. This chapter aims to impart such an understanding as pertains to code in PureScript's standard libraries. PureScript's FFI is designed to be very flexible. In practice, this means that developers have a choice, between giving their foreign functions very simple types, or using the type system to protect against accidental misuses of foreign code. Code in the standard libraries tends to favor the latter approach. As a simple example, a JavaScript function makes no guarantees that its return value will not be null. Indeed, idiomatic JavaScript code returns null quite frequently! However, PureScript's types are usually not inhabited by a null value. Therefore, it is the responsibility of the developer to handle these corner cases appropriately when designing their interfaces to JavaScript code using the FFI. Calling JavaScript From PureScript The simplest way to use JavaScript code from PureScript is to give a type to an existing JavaScript value using a foreign import declaration. Foreign import declarations must have a corresponding JavaScript declaration exported from a foreign JavaScript module. For example, consider the encodeURIComponent function, which can be used in JavaScript to encode a component of a URI by escaping special characters: $ node node> encodeURIComponent('Hello World') 'Hello%20World' This function has the correct runtime representation for the function type String -> String, since it takes non-null strings to non-null strings, and has no other side-effects. We can assign this type to the function with the following foreign import declaration: module Test.URI where foreign import _encodeURIComponent :: String -> String We also need to write a foreign JavaScript module to import it from. A corresponding foreign JavaScript module is one of the same name but extension changed from .purs to .js. If the Purescript module above is saved as URI.purs, then the foreign JavaScript module is saved as URI.js. Since encodeURIComponent is already defined, we have to export it as _encodeURIComponent: "use strict"; export const _encodeURIComponent = encodeURIComponent; Since version 0.15, Purescript uses the ES module system when interoperating with JavaScript. In ES modules, functions and values are exported from a module by providing the export keyword on an object. With these two pieces in place, we can now use the _encodeURIComponent function from PureScript like any function written in PureScript. For example, in PSCi, we can reproduce the calculation above: $ spago repl > import Test.URI > _encodeURIComponent "Hello World" "Hello%20World" We can also define our own functions in foreign modules. Here's an example of how to create and call a custom JavaScript function that squares a Number: test/Examples.js: "use strict"; export const square = function (n) { return n * n; }; test/Examples.purs: module Test.Examples where foreign import square :: Number -> Number $ spago repl > import Test.Examples > square 5.0 25.0 Functions of Multiple Arguments Let's rewrite our diagonal function from Chapter 2 in a foreign module. This function calculates the diagonal of a right-angled triangle. foreign import diagonal :: Number -> Number -> Number Recall that functions in PureScript are curried. diagonal is a function that takes a Number and returns a function, that takes a Number and returns a Number. export const diagonal = function (w) { return function (h) { return Math.sqrt(w * w + h * h); }; }; Or with ES6 arrow syntax (see ES6 note below). export const diagonalArrow = w => h => Math.sqrt(w * w + h * h); foreign import diagonalArrow :: Number -> Number -> Number $ spago repl > import Test.Examples > diagonal 3.0 4.0 5.0 > diagonalArrow 3.0 4.0 5.0 Uncurried Functions Writing curried functions in JavaScript isn't always feasible, despite being scarcely idiomatic. A typical multi-argument JavaScript function would be of the uncurried form: export const diagonalUncurried = function (w, h) { return Math.sqrt(w * w + h * h); }; The module Data.Function.Uncurried exports wrapper types and utility functions to work with uncurried functions. foreign import diagonalUncurried :: Fn2 Number Number Number Inspecting the type constructor Fn2: $ spago repl > import Data.Function.Uncurried > :kind Fn2 Type -> Type -> Type -> Type Fn2 takes three type arguments. Fn2 a b c is a type representing an uncurried function of two arguments of types a and b, that returns a value of type c. We used it to import diagonalUncurried from the foreign module. We can then call it with runFn2 which takes the uncurried function then the arguments. $ spago repl > import Test.Examples > import Data.Function.Uncurried > runFn2 diagonalUncurried 3.0 4.0 5.0 The functions package defines similar type constructors for function arities from 0 to 10. A Note About Uncurried Functions PureScript's curried functions has certain advantages. It allows us to partially apply functions, and to give type class instances for function types - but it comes with a performance penalty. For performance critical code, it is sometimes necessary to define uncurried JavaScript functions which accept multiple arguments. We can also create uncurried functions from PureScript. For a function of two arguments, we can use the mkFn2 function. uncurriedAdd :: Fn2 Int Int Int uncurriedAdd = mkFn2 \n m -> m + n We can apply the uncurried function of two arguments by using runFn2 as before: uncurriedSum :: Int uncurriedSum = runFn2 uncurriedAdd 3 10 The key here is that the compiler inlines the mkFn2 and runFn2 functions whenever they are fully applied. The result is that the generated code is very compact: var uncurriedAdd = function (n, m) { return m + n | 0; }; var uncurriedSum = uncurriedAdd(3, 10); For contrast, here is a traditional curried function: curriedAdd :: Int -> Int -> Int curriedAdd n m = m + n curriedSum :: Int curriedSum = curriedAdd 3 10 and the resulting generated code, which is less compact due to the nested functions: var curriedAdd = function (n) { return function (m) { return m + n | 0; }; }; var curriedSum = curriedAdd(3)(10); A Note About Modern JavaScript Syntax The arrow function syntax we saw earlier is an ES6 feature, and so it is incompatible with some older browsers (namely IE11). As of writing, it is estimated that arrow functions are unavailable for the 6% of users who have not yet updated their web browser. In order to be compatible with the most users, the JavaScript code generated by the PureScript compiler does not use arrow functions. It is also recommended to avoid arrow functions in public libraries for the same reason. You may still use arrow functions in your own FFI code, but then should include a tool such as Babel in your deployment workflow to convert these back to ES5 compatible functions. If you find arrow functions in ES6 more readable, you may transform JavaScript code in the compiler's output directory with a tool like Lebab: npm i -g lebab lebab --replace output/ --transform arrow,arrow-return This operation would convert the above curriedAdd function to: var curriedAdd = n => m => m + n | 0; The remaining examples in this book will use arrow functions instead of nested functions. Exercises - (Medium) Write a JavaScript function volumeFnin the Test.MySolutionsmodule that finds the volume of a box. Use an Fnwrapper from Data.Function.Uncurried. - (Medium) Rewrite volumeFnwith arrow functions as volumeArrow. Passing Simple Types The following data types may be passed between PureScript and JavaScript as-is: We've already seen examples with the primitive types String and Number. We'll now take a look at the structural types Array and Record ( Object in JavaScript). To demonstrate passing Arrays, here's how to call a JavaScript function which takes an Array of Int and returns the cumulative sum as another array. Recall that, since JavaScript does not have a separate type for Int, both Int and Number in PureScript translate to Number in JavaScript. foreign import cumulativeSums :: Array Int -> Array Int export const cumulativeSums = arr => { let sum = 0 let sums = [] arr.forEach(x => { sum += x; sums.push(sum); }); return sums; }; $ spago repl > import Test.Examples > cumulativeSums [1, 2, 3] [1,3,6] To demonstrate passing Records, here's how to call a JavaScript function which takes two Complex numbers as records, and returns their sum as another record. Note that a Record in PureScript is represented as an Object in JavaScript: type Complex = { real :: Number, imag :: Number } foreign import addComplex :: Complex -> Complex -> Complex export const addComplex = a => b => { return { real: a.real + b.real, imag: a.imag + b.imag } }; $ spago repl > import Test.Examples > addComplex { real: 1.0, imag: 2.0 } { real: 3.0, imag: 4.0 } { imag: 6.0, real: 4.0 } Note that the above techniques require trusting that JavaScript will return the expected types, as PureScript is not able to apply type checking to JavaScript code. We will describe this type safety concern in more detail later on in the JSON section, as well as cover techniques to protect against type mismatches. Exercises - (Medium) Write a JavaScript function cumulativeSumsComplex(and corresponding PureScript foreign import) that takes an Arrayof Complexnumbers and returns the cumulative sum as another array of complex numbers. Beyond Simple Types We have seen examples of how to send and receive types with a native JavaScript representation, such as String, Number, Array, and Record, over FFI. Now we'll cover how to use some of the other types available in PureScript, like Maybe. Suppose we wanted to recreate the head function on arrays by using a foreign declaration. In JavaScript, we might write the function as follows: export const head = arr => arr[0]; How would we type this function? We might try to give it the type forall a. Array a -> a, but for empty arrays, this function returns undefined. Therefore, the type forall a. Array a -> a does not correctly represent this implementation. We instead want to return a Maybe value to handle this corner case: foreign import maybeHead :: forall a. Array a -> Maybe a But how do we return a Maybe? It is tempting to write the following: // Don't do this import Data_Maybe from '../Data.Maybe' export const maybeHead = arr => { if (arr.length) { return Data_Maybe.Just.create(arr[0]); } else { return Data_Maybe.Nothing.value; } } Importing and using the Data.Maybe module directly in the foreign module isn't recommended as it makes our code brittle to changes in the code generator — create and value are not public APIs. Additionally, doing this can cause problems when using purs bundle for dead code elimination. The recommended approach is to add extra parameters to our FFI-defined function to accept the functions we need. export const maybeHeadImpl = just => nothing => arr => { if (arr.length) { return just(arr[0]); } else { return nothing; } }; foreign import maybeHeadImpl :: forall a. (forall x. x -> Maybe x) -> (forall x. Maybe x) -> Array a -> Maybe a maybeHead :: forall a. Array a -> Maybe a maybeHead arr = maybeHeadImpl Just Nothing arr Note that we wrote: forall a. (forall x. x -> Maybe x) -> (forall x. Maybe x) -> Array a -> Maybe a and not: forall a. ( a -> Maybe a) -> Maybe a -> Array a -> Maybe a While both forms work, the latter is more vulnerable to unwanted inputs in place of Just and Nothing. For example, in the more vulnerable case we could call it as follows: maybeHeadImpl (\_ -> Just 1000) (Just 1000) [1,2,3] which returns Just 1000 for any array input. This vulnerability is allowed because (\_ -> Just 1000) and Just 1000 match the signatures of (a -> Maybe a) and Maybe a respectively when a is Int (based on input array). In the more secure type signature, even when a is determined to be Int based on the input array, we still need to provide valid functions matching the signatures involving forall x. The only option for (forall x. Maybe x) is Nothing, since a Just value would assume a type for x and will no longer be valid for all x. The only options for (forall x. x -> Maybe x) are Just (our desired argument) and (\_ -> Nothing), which is the only remaining vulnerability. Defining Foreign Types Suppose instead of returning a Maybe a, we want to actually return arr[0]. We want a type that represents a value either of type a or the undefined value (but not null). We'll call this type Undefined a. We can define a foreign type using a foreign type declaration. The syntax is similar to defining a foreign function: foreign import data Undefined :: Type -> Type The data keyword here indicates that we are defining a type, not a value. Instead of a type signature, we give the kind of the new type. In this case, we declare the kind of Undefined to be Type -> Type. In other words, Undefined is a type constructor. We can now simply reuse our original definition for head: export const undefinedHead = arr => arr[0]; And in the PureScript module: foreign import undefinedHead :: forall a. Array a -> Undefined a The body of the undefinedHead function returns arr[0] which may be undefined, and the type signature correctly reflects that fact. This function has the correct runtime representation for its type, but is quite useless since we have no way to use a value of type Undefined a. Well, not exactly. We can use this type in another FFI! We can write a function that will tell us whether a value is undefined or not: foreign import isUndefined :: forall a. Undefined a -> Boolean This is defined in our foreign JavaScript module as follows: export const isUndefined = value => value === undefined; We can now use isUndefined and undefinedHead together from PureScript to define a useful function: isEmpty :: forall a. Array a -> Boolean isEmpty = isUndefined <<< undefinedHead Here, the foreign function we defined is very simple, which means we can benefit from the use of PureScript's typechecker as much as possible. This is good practice in general: foreign functions should be kept as small as possible, and application logic moved into PureScript code wherever possible. Exceptions Another option is to simply throw an exception in the case of an empty array. Strictly speaking, pure functions should not throw exceptions, but we have the flexibility to do so. We indicate the lack of safety in the function name: foreign import unsafeHead :: forall a. Array a -> a In our foreign JavaScript module, we can define unsafeHead as follows: export const unsafeHead = arr => { if (arr.length) { return arr[0]; } else { throw new Error('unsafeHead: empty array'); } }; Exercises (Medium) Given a record that represents a quadratic polynomial a*x^2 + b*x + c = 0: type Quadratic = { a :: Number, b :: Number, c :: Number } Write a JavaScript function quadraticRootsImpland a wrapper quadraticRoots :: Quadratic -> Pair Complexthat uses the quadratic formula to find the roots of this polynomial. Return the two roots as a Pairof Complexnumbers. Hint: Use the quadraticRootswrapper to pass a constructor for Pairto quadraticRootsImpl. (Medium) Write the function toMaybe :: forall a. Undefined a -> Maybe a. This function converts undefinedto Nothingand avalues to Justs. (Difficult) With toMaybein place, we can rewrite maybeHeadas maybeHead :: forall a. Array a -> Maybe a maybeHead = toMaybe <<< undefinedHead Is this a better approach than our previous implementation? Note: There is no unit test for this exercise. Using Type Class Member Functions Just like our earlier guide on passing the Maybe constructor over FFI, this is another case of writing PureScript that calls JavaScript, which in turn calls PureScript functions again. Here we will explore how to pass type class member functions over the FFI. We start with writing a foreign JavaScript function which expects the appropriate instance of show to match the type of x. export const boldImpl = show => x => show(x).toUpperCase() + "!!!"; Then we write the matching signature: foreign import boldImpl :: forall a. (a -> String) -> a -> String and a wrapper function that passes the correct instance of show: bold :: forall a. Show a => a -> String bold x = boldImpl show x Alternatively in point-free form: bold :: forall a. Show a => a -> String bold = boldImpl show We can then call the wrapper: $ spago repl > import Test.Examples > import Data.Tuple > bold (Tuple 1 "Hat") "(TUPLE 1 \"HAT\")!!!" Here's another example demonstrating passing multiple functions, including a function of multiple arguments ( eq): export const showEqualityImpl = eq => show => a => b => { if (eq(a)(b)) { return "Equivalent"; } else { return show(a) + " is not equal to " + show(b); } } foreign import showEqualityImpl :: forall a. (a -> a -> Boolean) -> (a -> String) -> a -> a -> String showEquality :: forall a. Eq a => Show a => a -> a -> String showEquality = showEqualityImpl eq show $ spago repl > import Test.Examples > import Data.Maybe > showEquality Nothing (Just 5) "Nothing is not equal to (Just 5)" Effectful Functions Let's extend our bold function to log to the console. Logging is an Effect, and Effects are represented in JavaScript as a function of zero arguments, () with arrow notation: export const yellImpl = show => x => () => console.log(show(x).toUpperCase() + "!!!"); The new foreign import is the same as before, except that the return type changed from String to Effect Unit. foreign import yellImpl :: forall a. (a -> String) -> a -> Effect Unit yell :: forall a. Show a => a -> Effect Unit yell = yellImpl show When testing this in the repl, notice that the string is printed directly to the console (instead of being quoted) and a unit value is returned. $ spago repl > import Test.Examples > import Data.Tuple > yell (Tuple 1 "Hat") (TUPLE 1 "HAT")!!! unit There are also EffectFn wrappers from Effect.Uncurried. These are similar to the Fn wrappers from Data.Function.Uncurried that we've already seen. These wrappers let you call uncurried effectful functions in PureScript. You'd generally only use these if you want to call existing JavaScript library APIs directly, rather than wrapping those APIs in curried functions. So it doesn't make much sense to present an example of uncurried yell, where the JavaScript relies on PureScript type class members, since you wouldn't find that in the existing JavaScript ecosystem. Instead, we'll modify our previous diagonal example to include logging in addition to returning the result: export const diagonalLog = function(w, h) { let result = Math.sqrt(w * w + h * h); console.log("Diagonal is " + result); return result; }; foreign import diagonalLog :: EffectFn2 Number Number Number $ spago repl > import Test.Examples > import Effect.Uncurried > runEffectFn2 diagonalLog 3.0 4.0 Diagonal is 5 5.0 Asynchronous Functions Promises in JavaScript translate directly to asynchronous effects in PureScript with the help of the aff-promise library. See that library's documentation for more information. We'll just go through a few examples. Suppose we want to use this JavaScript wait promise (or asynchronous function) in our PureScript project. It may be used to delay execution for ms milliseconds. const wait = ms => new Promise(resolve => setTimeout(resolve, ms)); We just need to export it wrapped as an Effect (function of zero arguments): export const sleepImpl = ms => () => wait(ms); Then import it as follows: foreign import sleepImpl :: Int -> Effect (Promise Unit) sleep :: Int -> Aff Unit sleep = sleepImpl >>> toAffE We can then run this Promise in an Aff block like so: $ spago repl > import Prelude > import Test.Examples > import Effect.Class.Console > import Effect.Aff > :pa … launchAff_ do … log "waiting" … sleep 300 … log "done waiting" … waiting unit done waiting Note that asynchronous logging in the repl just waits to print until the entire block has finished executing. This code behaves more predictably when run with spago test where there is a slight delay between prints. Let's look at another example where we return a value from a promise. This function is written with async and await, which is just syntactic sugar for promises. async function diagonalWait(delay, w, h) { await wait(delay); return Math.sqrt(w * w + h * h); } export const diagonalAsyncImpl = delay => w => h => () => diagonalWait(delay, w, h); Since we're returning a Number, we represent this type in the Promise and Aff wrappers: foreign import diagonalAsyncImpl :: Int -> Number -> Number -> Effect (Promise Number) diagonalAsync :: Int -> Number -> Number -> Aff Number diagonalAsync i x y = toAffE $ diagonalAsyncImpl i x y $ spago repl import Prelude import Test.Examples import Effect.Class.Console import Effect.Aff > :pa … launchAff_ do … res <- diagonalAsync 300 3.0 4.0 … logShow res … unit 5.0 Exercises Exercises for the above sections are still on the ToDo list. If you have any ideas for good exercises, please make a suggestion. JSON There are many reasons to use JSON in an application, for example, it's a common means of communicating with web APIs. This section will discuss other use-cases too, beginning with a technique to improve type safety when passing structural data over the FFI. Let's revisit our earlier FFI functions cumulativeSums and addComplex and introduce a bug to each: export const cumulativeSumsBroken = arr => { let sum = 0 let sums = [] arr.forEach(x => { sum += x; sums.push(sum); }); sums.push("Broken"); // Bug return sums; }; export const addComplexBroken = a => b => { return { real: a.real + b.real, broken: a.imag + b.imag // Bug } }; We can use the original type signatures, and the code will still compile, despite the fact that the return types are incorrect. foreign import cumulativeSumsBroken :: Array Int -> Array Int foreign import addComplexBroken :: Complex -> Complex -> Complex We can even execute the code, which might either produce unexpected results or a runtime error: $ spago repl > import Test.Examples > import Data.Foldable (sum) > sums = cumulativeSumsBroken [1, 2, 3] > sums [1,3,6,Broken] > sum sums 0 > complex = addComplexBroken { real: 1.0, imag: 2.0 } { real: 3.0, imag: 4.0 } > complex.real 4.0 > complex.imag + 1.0 NaN > complex.imag var str = n.toString(); ^ TypeError: Cannot read property 'toString' of undefined For example, our resulting sums is no-longer a valid Array Int, now that a String is included in the Array. And further operations produce unexpected behavior, rather than an outright error, as the sum of these sums is 0 rather than 10. This could be a difficult bug to track down! Likewise, there are no errors when calling addComplexBroken; however, accessing the imag field of our Complex result will either produce unexpected behavior (returning NaN instead of 7.0), or a non-obvious runtime error. Let's use JSON to make our PureScript code more impervious to bugs in JavaScript code. The argonaut library contains the JSON decoding and encoding capabilities we need. That library has excellent documentation, so we will only cover basic usage in this book. If we create an alternate foreign import that defines the return type as Json: foreign import cumulativeSumsJson :: Array Int -> Json foreign import addComplexJson :: Complex -> Complex -> Json Note that we're simply pointing to our existing broken functions: export const cumulativeSumsJson = cumulativeSumsBroken export const addComplexJson = addComplexBroken And then write a wrapper to decode the returned foreign Json value: cumulativeSumsDecoded :: Array Int -> Either JsonDecodeError (Array Int) cumulativeSumsDecoded arr = decodeJson $ cumulativeSumsJson arr addComplexDecoded :: Complex -> Complex -> Either JsonDecodeError Complex addComplexDecoded a b = decodeJson $ addComplexJson a b Then any values that can't be successfully decoded to our return type appear as a Left error String: $ spago repl > import Test.Examples > cumulativeSumsDecoded [1, 2, 3] (Left "Couldn't decode Array (Failed at index 3): Value is not a Number") > addComplexDecoded { real: 1.0, imag: 2.0 } { real: 3.0, imag: 4.0 } (Left "JSON was missing expected field: imag") If we call the working versions, a Right value is returned. Try this yourself by modifying test/Examples.js with the following change to point to the working versions before running the next repl block. export const cumulativeSumsJson = cumulativeSums export const addComplexJson = addComplex $ spago repl > import Test.Examples > cumulativeSumsDecoded [1, 2, 3] (Right [1,3,6]) > addComplexDecoded { real: 1.0, imag: 2.0 } { real: 3.0, imag: 4.0 } (Right { imag: 6.0, real: 4.0 }) Using JSON is also the easiest way to pass other structural types, such as Map and Set through the FFI. Note that since JSON only consists of booleans, numbers, strings, arrays, and objects of other JSON values, we can't write a Map and Set directly in JSON. But we can represent these structures as arrays (assuming the keys and values can also be represented in JSON), and then decode them back to Map or Set. Here's an example of a foreign function signature that modifies a Map of String keys and Int values, along with the wrapper function that handles JSON encoding and decoding. foreign import mapSetFooJson :: Json -> Json mapSetFoo :: Map String Int -> Either JsonDecodeError (Map String Int) mapSetFoo = encodeJson >>> mapSetFooJson >>> decodeJson Note that this is a prime use case for function composition. Both of these alternatives are equivalent to the above: mapSetFoo :: Map String Int -> Either JsonDecodeError (Map String Int) mapSetFoo = decodeJson <<< mapSetFooJson <<< encodeJson mapSetFoo :: Map String Int -> Either JsonDecodeError (Map String Int) mapSetFoo = encodeJson >>> mapSetFooJson >>> decodeJson Here is the JavaScript implementation. Note the Array.from step which is necessary to convert the JavaScript Map into a JSON-friendly format before decoding converts it back to a PureScript Map. export const mapSetFooJson = j => { let m = new Map(j); m.set("Foo", 42); return Array.from(m); }; Now we can send and receive a Map over the FFI: $ spago repl > import Test.Examples > import Data.Map > import Data.Tuple > myMap = fromFoldable [ Tuple "hat" 1, Tuple "cat" 2 ] > :type myMap Map String Int > myMap (fromFoldable [(Tuple "cat" 2),(Tuple "hat" 1)]) > mapSetFoo myMap (Right (fromFoldable [(Tuple "Foo" 42),(Tuple "cat" 2),(Tuple "hat" 1)])) Exercises (Medium) Write a JavaScript function and PureScript wrapper valuesOfMap :: Map String Int -> Either JsonDecodeError (Set Int)that returns a Setof all the values in a Map. Hint: The .values()instance method for Map may be useful in your JavaScript code. (Easy) Write a new wrapper for the previous JavaScript function with the signature valuesOfMapGeneric :: forall k v. Map k v -> Either JsonDecodeError (Set v)so it works with a wider variety of maps. Note that you'll need to add some type class constraints for kand v. The compiler will guide you. (Medium) Rewrite the earlier quadraticRootsfunction as quadraticRootsSetwhich returns the Complexroots as a Setvia JSON (instead of as a Pair). (Difficult) Rewrite the earlier quadraticRootsfunction as quadraticRootsSafewhich uses JSON to pass the Pairof Complexroots over FFI. Don't use the Pairconstructor in JavaScript, but instead, just return the pair in a decoder-compatible format. Hint: You'll need to write a DecodeJsoninstance for Pair. Consult the argonaut docs for instruction on writing your own decode instance. Their decodeJsonTuple instance may also be a helpful reference. Note that you'll need a newtypewrapper for Pairto avoid creating an "orphan instance". (Medium) Write a parseAndDecodeArray2D :: String -> Either String (Array (Array Int))function to parse and decode a JSON string containing a 2D array, such as "[[1, 2, 3], [4, 5], [6]]". Hint: You'll need to use jsonParserto convert the Stringinto Jsonbefore decoding. (Medium) The following data type represents a binary tree with values at the leaves: data Tree a = Leaf a | Branch (Tree a) (Tree a) Derive generic EncodeJsonand DecodeJsoninstances for the Treetype. Consult the argonaut docs for instructions on how to do this. Note that you'll also need generic instances of Showand Eqto enable unit testing for this exercise, but those should be straightforward to implement after tackling the JSON instances. (Difficult) The following datatype should be represented directly in JSON as either an integer or a string: data IntOrString = IntOrString_Int Int | IntOrString_String String Write instances of EncodeJsonand DecodeJsonfor the IntOrStringdata type which implement this behavior. Hint: The altoperator from Control.Altmay be helpful. Address book In this section we will apply our newly-acquired FFI and JSON knowledge to build on our address book example from chapter 8. We will add the following features: - A Save button at the bottom of the form that, when clicked, serializes the state of the form to JSON and saves it in local storage. - Automatic retrieval of the JSON document from local storage upon page reload. The form fields are populated with the contents of this document. - A pop-up alert if there is an issue saving or loading the form state. We'll start by creating FFI wrappers for the following Web Storage APIs in our Effect.Storage module: setItemtakes a key and a value (both strings), and returns a computation which stores (or updates) the value in local storage at the specified key. getItemtakes a key, and attempts to retrieve the associated value from local storage. However, since the getItemmethod on window.localStoragecan return null, the return type is not String, but Json. foreign import setItem :: String -> String -> Effect Unit foreign import getItem :: String -> Effect Json Here is the corresponding JavaScript implementation of these functions in Effect/Storage.js: export const setItem = key => value => () => window.localStorage.setItem(key, value); export const getItem = key => () => window.localStorage.getItem(key); We'll create a save button like so: saveButton :: R.JSX saveButton = D.label { className: "form-group row col-form-label" , children: [ D.button { className: "btn-primary btn" , onClick: handler_ validateAndSave , children: [ D.text "Save" ] } ] } And write our validated person as a JSON string with setItem in the validateAndSave function: validateAndSave :: Effect Unit validateAndSave = do log "Running validators" case validatePerson' person of Left errs -> log $ "There are " <> show (length errs) <> " validation errors." Right validPerson -> do setItem "person" $ stringify $ encodeJson validPerson log "Saved" Note that if we attempt to compile at this stage, we'll encounter the following error: No type class instance was found for Data.Argonaut.Encode.Class.EncodeJson PhoneType This is because PhoneType in the Person record needs an EncodeJson instance. We'll just derive a generic encode instance, and a decode instance too while we're at it. More information how this works is available in the argonaut docs: import Data.Argonaut (class DecodeJson, class EncodeJson) import Data.Argonaut.Decode.Generic (genericDecodeJson) import Data.Argonaut.Encode.Generic (genericEncodeJson) import Data.Generic.Rep (class Generic) derive instance genericPhoneType :: Generic PhoneType _ instance encodeJsonPhoneType :: EncodeJson PhoneType where encodeJson = genericEncodeJson instance decodeJsonPhoneType :: DecodeJson PhoneType where decodeJson = genericDecodeJson Now we can save our person to local storage, but this isn't very useful unless we can retrieve the data. We'll tackle that next. We'll start with retrieving the "person" string from local storage: item <- getItem "person" Then we'll create a helper function to handle converting the string from local storage to our Person record. Note that this string in storage may be null, so we represent it as a foreign Json until it is successfully decoded as a String. There are a number of other conversion steps along the way - each of which return an Either value, so it makes sense to organize these together in a do block. processItem :: Json -> Either String Person processItem item = do jsonString <- decodeJson item j <- jsonParser jsonString decodeJson j Then we inspect this result to see if it succeeded. If it failed, we'll log the errors and use our default examplePerson, otherwise we'll use the person retrieved from local storage. initialPerson <- case processItem item of Left err -> do log $ "Error: " <> err <> ". Loading examplePerson" pure examplePerson Right p -> pure p Finally, we'll pass this initialPerson to our component via the props record: -- Create JSX node from react component. app = element addressBookApp { initialPerson } And pick it up on the other side to use in our state hook: mkAddressBookApp :: Effect (ReactComponent { initialPerson :: Person }) mkAddressBookApp = reactComponent "AddressBookApp" \props -> R.do Tuple person setPerson <- useState props.initialPerson As a finishing touch, we'll improve the quality of our error messages by appending to the String of each Left value with lmap. processItem :: Json -> Either String Person processItem item = do jsonString <- lmap ("No string in local storage: " <> _) $ decodeJson item j <- lmap ("Cannot parse JSON string: " <> _) $ jsonParser jsonString lmap ("Cannot decode Person: " <> _) $ decodeJson j Only the first error should ever occur during normal operation of this app. You can trigger the other errors by opening your web browser's dev tools, editing the saved "person" string in local storage, and refreshing the page. How you modify the JSON string determines which error is triggered. See if you can trigger each of them. That covers local storage. Next we'll implement the alert action, which is very similar to the log action from the Effect.Console module. The only difference is that the alert action uses the window.alert method, whereas the log action uses the console.log method. As such, alert can only be used in environments where window.alert is defined, such as a web browser. foreign import alert :: String -> Effect Unit export const alert = msg => () => window.alert(msg); We want this alert to appear when either: - A user attempts to save a form with validation errors. - The state cannot be retrieved from local storage. That is accomplished by simply replacing log with alert on these lines: Left errs -> alert $ "There are " <> show (length errs) <> " validation errors." alert $ "Error: " <> err <> ". Loading examplePerson" Exercises - (Easy) Write a wrapper for the removeItemmethod on the localStorageobject, and add your foreign function to the Effect.Storagemodule. - (Medium) Add a "Reset" button that, when clicked, calls the newly-created removeItemfunction to delete the "person" entry from local storage. - (Easy) Write a wrapper for the confirmmethod on the JavaScript Windowobject, and add your foreign function to the Effect.Alertmodule. - (Medium) Call this confirmfunction when a users clicks the "Reset" button to ask if they're sure they want to reset their address book. Conclusion In this chapter, we've learned how to work with foreign JavaScript code from PureScript and we've seen the issues involved with writing trustworthy code using the FFI: - We've seen the importance of ensuring that foreign functions have correct representations. - We learned how to deal with corner cases like null values and other types of JavaScript data, by using foreign types, or the Jsondata type. - We saw how to safely serialize and deserialize JSON data. For more examples, the purescript, purescript-contrib and purescript-node GitHub organizations provide plenty of examples of libraries which use the FFI. In the remaining chapters, we will see some of these libraries put to use to solve real-world problems in a type-safe way. Addendum Calling PureScript from JavaScript Calling a PureScript function from JavaScript is very simple, at least for functions with simple types. Let's take the following simple module as an example: module Test where gcd :: Int -> Int -> Int gcd 0 m = m gcd n 0 = n gcd n m | n > m = gcd (n - m) m | otherwise = gcd (m - n) n This function finds the greatest common divisor of two numbers by repeated subtraction. It is a nice example of a case where you might like to use PureScript to define the function, but have a requirement to call it from JavaScript: it is simple to define this function in PureScript using pattern matching and recursion, and the implementor can benefit from the use of the type checker. To understand how this function can be called from JavaScript, it is important to realize that PureScript functions always get turned into JavaScript functions of a single argument, so we need to apply its arguments one-by-one: import Test from 'Test.js'; Test.gcd(15)(20); Here, I am assuming that the code was compiled with spago build, which compiles PureScript modules to ES modules. For that reason, I was able to reference the gcd function on the Test object, after importing the Test module using import. You might also like to bundle JavaScript code for the browser, using spago bundle-app --to file.js. In that case, you would access the Test module from the global PureScript namespace, which defaults to PS: var Test = PS.Test; Test.gcd(15)(20); Understanding Name Generation PureScript aims to preserve names during code generation as much as possible. In particular, most identifiers which are neither PureScript nor JavaScript keywords can be expected to be preserved, at least for names of top-level declarations. If you decide to use a JavaScript keyword as an identifier, the name will be escaped with a double dollar symbol. For example, null = [] generates the following JavaScript: var $$null = []; In addition, if you would like to use special characters in your identifier names, they will be escaped using a single dollar symbol. For example, example' = 100 generates the following JavaScript: var example$prime = 100; Where compiled PureScript code is intended to be called from JavaScript, it is recommended that identifiers only use alphanumeric characters, and avoid JavaScript keywords. If user-defined operators are provided for use in PureScript code, it is good practice to provide an alternative function with an alphanumeric name for use in JavaScript. Runtime Data Representation Types allow us to reason at compile-time that our programs are "correct" in some sense - that is, they will not break at runtime. But what does that mean? In PureScript, it means that the type of an expression should be compatible with its representation at runtime. For that reason, it is important to understand the representation of data at runtime to be able to use PureScript and JavaScript code together effectively. This means that for any given PureScript expression, we should be able to understand the behavior of the value it will evaluate to at runtime. The good news is that PureScript expressions have particularly simple representations at runtime. It should always be possible to understand the runtime data representation of an expression by considering its type. For simple types, the correspondence is almost trivial. For example, if an expression has the type Boolean, then its value v at runtime should satisfy typeof v === 'boolean'. That is, expressions of type Boolean evaluate to one of the (JavaScript) values true or false. In particular, there is no PureScript expression of type Boolean which evaluates to null or undefined. A similar law holds for expressions of type Int Number and String - expressions of type Int or Number evaluate to non-null JavaScript numbers, and expressions of type String evaluate to non-null JavaScript strings. Expressions of type Int will evaluate to integers at runtime, even though they cannot not be distinguished from values of type Number by using typeof. What about Unit? Well, since Unit has only one inhabitant ( unit) and its value is not observable, it doesn't actually matter what it's represented with at runtime. Old code tends to represent it using {}. Newer code, however, tends to use undefined. So, although it doesn't really matter what you use to represent Unit, it is recommended to use undefined (not returning anything from a function also returns undefined). What about some more complex types? As we have already seen, PureScript functions correspond to JavaScript functions of a single argument. More precisely, if an expression f has type a -> b for some types a and b, and an expression x evaluates to a value with the correct runtime representation for type a, then f evaluates to a JavaScript function, which when applied to the result of evaluating x, has the correct runtime representation for type b. As a simple example, an expression of type String -> String evaluates to a function which takes non-null JavaScript strings to non-null JavaScript strings. As you might expect, PureScript's arrays correspond to JavaScript arrays. But remember - PureScript arrays are homogeneous, so every element has the same type. Concretely, if a PureScript expression e has type Array a for some type a, then e evaluates to a (non-null) JavaScript array, all of whose elements have the correct runtime representation for type a. We've already seen that PureScript's records evaluate to JavaScript objects. Just as for functions and arrays, we can reason about the runtime representation of data in a record's fields by considering the types associated with its labels. Of course, the fields of a record are not required to be of the same type. Representing ADTs For every constructor of an algebraic data type, the PureScript compiler creates a new JavaScript object type by defining a function. Its constructors correspond to functions which create new JavaScript objects based on those prototypes. For example, consider the following simple ADT: data ZeroOrOne a = Zero | One a The PureScript compiler generates the following code: function One(value0) { this.value0 = value0; }; One.create = function (value0) { return new One(value0); }; function Zero() { }; Zero.value = new Zero(); Here, we see two JavaScript object types: Zero and One. It is possible to create values of each type by using JavaScript's new keyword. For constructors with arguments, the compiler stores the associated data in fields called value0, value1, etc. The PureScript compiler also generates helper functions. For constructors with no arguments, the compiler generates a value property, which can be reused instead of using the new operator repeatedly. For constructors with one or more arguments, the compiler generates a create function, which takes arguments with the appropriate representation and applies the appropriate constructor. What about constructors with more than one argument? In that case, the PureScript compiler also creates a new object type, and a helper function. This time, however, the helper function is curried function of two arguments. For example, this algebraic data type: data Two a b = Two a b generates this JavaScript code: function Two(value0, value1) { this.value0 = value0; this.value1 = value1; }; Two.create = function (value0) { return function (value1) { return new Two(value0, value1); }; }; Here, values of the object type Two can be created using the new keyword, or by using the Two.create function. The case of newtypes is slightly different. Recall that a newtype is like an algebraic data type, restricted to having a single constructor taking a single argument. In this case, the runtime representation of the newtype is actually the same as the type of its argument. For example, this newtype representing telephone numbers: newtype PhoneNumber = PhoneNumber String is actually represented as a JavaScript string at runtime. This is useful for designing libraries, since newtypes provide an additional layer of type safety, but without the runtime overhead of another function call. Representing Quantified Types Expressions with quantified (polymorphic) types have restrictive representations at runtime. In practice, this means that there are relatively few expressions with a given quantified type, but that we can reason about them quite effectively. Consider this polymorphic type, for example: forall a. a -> a What sort of functions have this type? Well, there is certainly one function with this type - namely, the identity function, defined in the Prelude: id :: forall a. a -> a id a = a In fact, the identity function is the only (total) function with this type! This certainly seems to be the case (try writing an expression with this type which is not observably equivalent to identity), but how can we be sure? We can be sure by considering the runtime representation of the type. What is the runtime representation of a quantified type forall a. t? Well, any expression with the runtime representation for this type must have the correct runtime representation for the type t for any choice of type a. In our example above, a function of type forall a. a -> a must have the correct runtime representation for the types String -> String, Number -> Number, Array Boolean -> Array Boolean, and so on. It must take strings to strings, numbers to numbers, etc. But that is not enough - the runtime representation of a quantified type is more strict than this. We require any expression to be parametrically polymorphic - that is, it cannot use any information about the type of its argument in its implementation. This additional condition prevents problematic implementations such as the following JavaScript function from inhabiting a polymorphic type: function invalid(a) { if (typeof a === 'string') { return "Argument was a string."; } else { return a; } } Certainly, this function takes strings to strings, numbers to numbers, etc. but it does not meet the additional condition, since it inspects the (runtime) type of its argument, so this function would not be a valid inhabitant of the type forall a. a -> a. Without being able to inspect the runtime type of our function argument, our only option is to return the argument unchanged, and so identity is indeed the only inhabitant of the type forall a. a -> a. A full discussion of parametric polymorphism and parametricity is beyond the scope of this book. Note however, that since PureScript's types are erased at runtime, a polymorphic function in PureScript cannot inspect the runtime representation of its arguments (without using the FFI), and so this representation of polymorphic data is appropriate. Representing Constrained Types Functions with a type class constraint have an interesting representation at runtime. Because the behavior of the function might depend on the type class instance chosen by the compiler, the function is given an additional argument, called a type class dictionary, which contains the implementation of the type class functions provided by the chosen instance. For example, here is a simple PureScript function with a constrained type which uses the Show type class: shout :: forall a. Show a => a -> String shout a = show a <> "!!!" The generated JavaScript looks like this: var shout = function (dict) { return function (a) { return show(dict)(a) + "!!!"; }; }; Notice that shout is compiled to a (curried) function of two arguments, not one. The first argument dict is the type class dictionary for the Show constraint. dict contains the implementation of the show function for the type a. We can call this function from JavaScript by passing an explicit type class dictionary from Data.Show as the first parameter: import { showNumber } from 'Data.Show' shout(showNumber)(42); Exercises (Easy) What are the runtime representations of these types? forall a. a forall a. a -> a -> a forall a. Ord a => Array a -> Boolean What can you say about the expressions which have these types? (Medium) Try using the functions defined in the arrayspackage, calling them from JavaScript, by compiling the library using spago buildand importing modules using the importfunction in NodeJS. Hint: you may need to configure the output path so that the generated ES modules are available on the NodeJS module path. Representing Side Effects The Effect monad is also defined as a foreign type. Its runtime representation is quite simple - an expression of type Effect a should evaluate to a JavaScript function of no arguments, which performs any side-effects and returns a value with the correct runtime representation for type a. The definition of the Effect type constructor is given in the Effect module as follows: foreign import data Effect :: Type -> Type As a simple example, consider the random function defined in the random package. Recall that its type was: foreign import random :: Effect Number The definition of the random function is given here: export const random = Math.random; Notice that the random function is represented at runtime as a function of no arguments. It performs the side effect of generating a random number, and returns it, and the return value matches the runtime representation of the Number type: it is a non-null JavaScript number. As a slightly more interesting example, consider the log function defined by the Effect.Console module in the console package. The log function has the following type: foreign import log :: String -> Effect Unit And here is its definition: export const log = function (s) { return function () { console.log(s); }; }; The representation of log at runtime is a JavaScript function of a single argument, returning a function of no arguments. The inner function performs the side-effect of writing a message to the console. Expressions of type Effect a can be invoked from JavaScript like regular JavaScript methods. For example, since the main function is required to have type Effect a for some type a, it can be invoked as follows: import { main } from 'Main' main(); When using spago bundle-app --to or spago run, this call to main is generated automatically, whenever the Main module is defined. Monadic Adventures Chapter Goals The goal of this chapter will be to learn about monad transformers, which provide a way to combine side-effects provided by different monads. The motivating example will be a text adventure game which can be played on the console in NodeJS. The various side-effects of the game (logging, state, and configuration) will all be provided by a monad transformer stack. Project Setup This module's project introduces the following new dependencies: ordered-collections, which provides data types for immutable maps and sets transformers, which provides implementations of standard monad transformers node-readline, which provides FFI bindings to the readlineinterface provided by NodeJS optparse, which provides applicative parsers for processing command line arguments How To Play The Game To run the project, use spago run By default you will see a usage message: Monadic Adventures! A game to learn monad transformers Usage: run.js (-p|--player <player name>) [-d|--debug] Play the game as <player name> Available options: -p,--player <player name> The player's name <String> -d,--debug Use debug mode -h,--help Show this help text To provide command line arguments, you can either call spago run with the -a option to pass additional arguments directly to your application, or you can call spago bundle-app, which will create an index.js file that can be run directly with node. For example, to provide the player name using the -p option: $ spago run -a "-p Phil" > $ spago bundle-app $ node index.js -p Phil > From the prompt, you can enter commands like look, inventory, take, use, north, south, east, and west. There is also a debug command, which can be used to print the game state when the --debug command line option is provided. The game is played on a two-dimensional grid, and the player moves by issuing commands north, south, east, and west. The game contains a collection of items which can either be in the player's possession (in the user's inventory), or on the game grid at some location. Items can be picked up by the player, using the take command. For reference, here is a complete walkthrough of the game: $ spago run -a "-p Phil" > look You are at (0, 0) You are in a dark forest. You see a path to the north. You can see the Matches. > take Matches You now have the Matches > north > look You are at (0, 1) You are in a clearing. You can see the Candle. > take Candle You now have the Candle > inventory You have the Candle. You have the Matches. > use Matches You light the candle. Congratulations, Phil! You win! The game is very simple, but the aim of the chapter is to use the transformers package to build a library which will enable rapid development of this type of game. The State Monad We will start by looking at some of the monads provided by the transformers package. The first example is the State monad, which provides a way to model mutable state in pure code. We have already seen an approach to mutable state provided by the Effect monad. State provides an alternative. The State type constructor takes two type parameters: the type s of the state, and the return type a. Even though we speak of the " State monad", the instance of the Monad type class is actually provided for the State s type constructor, for any type s. The Control.Monad.State module provides the following API: get :: forall s. State s s gets :: forall s. (s -> a) -> State s a put :: forall s. s -> State s Unit modify :: forall s. (s -> s) -> State s s modify_ :: forall s. (s -> s) -> State s Unit Note that these API signatures are presented in a simplified form using the State type constructor for now. The actual API involves MonadState which we'll cover in the later "Type Classes" section of this chapter, so don't worry if you see different signatures in your IDE tooltips or on Pursuit. Let's see an example. One use of the State monad might be to add the values in an array of integers to the current state. We could do that by choosing Int as the state type s, and using traverse_ to traverse the array, with a call to modify for each array element: import Data.Foldable (traverse_) import Control.Monad.State import Control.Monad.State.Class sumArray :: Array Int -> State Int Unit sumArray = traverse_ \n -> modify \sum -> sum + n The Control.Monad.State module provides three functions for running a computation in the State monad: evalState :: forall s a. State s a -> s -> a execState :: forall s a. State s a -> s -> s runState :: forall s a. State s a -> s -> Tuple a s Each of these functions takes an initial state of type s and a computation of type State s a. evalState only returns the return value, execState only returns the final state, and runState returns both, expressed as a value of type Tuple a s. Given the sumArray function above, we could use execState in PSCi to sum the numbers in several arrays as follows: > :paste … execState (do … sumArray [1, 2, 3] … sumArray [4, 5] … sumArray [6]) 0 … ^D 21 Exercises (Easy) What is the result of replacing execStatewith runStateor evalStatein our example above? (Medium) A string of parentheses is balanced if it is obtained by either concatenating zero-or-more shorter balanced strings, or by wrapping a shorter balanced string in a pair of parentheses. Use the Statemonad and the traverse_function to write a function testParens :: String -> Boolean which tests whether or not a Stringof parentheses is balanced, by keeping track of the number of opening parentheses which have not been closed. Your function should work as follows: > testParens "" true > testParens "(()(())())" true > testParens ")" false > testParens "(()()" false Hint: you may like to use the toCharArrayfunction from the Data.String.CodeUnitsmodule to turn the input string into an array of characters. The Reader Monad Another monad provided by the transformers package is the Reader monad. This monad provides the ability to read from a global configuration. Whereas the State monad provides the ability to read and write a single piece of mutable state, the Reader monad only provides the ability to read a single piece of data. The Reader type constructor takes two type arguments: a type r which represents the configuration type, and the return type a. The Control.Monad.Reader module provides the following API: ask :: forall r. Reader r r local :: forall r a. (r -> r) -> Reader r a -> Reader r a The ask action can be used to read the current configuration, and the local action can be used to run a computation with a modified configuration. For example, suppose we were developing an application controlled by permissions, and we wanted to use the Reader monad to hold the current user's permissions object. We might choose the type r to be some type Permissions with the following API: hasPermission :: String -> Permissions -> Boolean addPermission :: String -> Permissions -> Permissions Whenever we wanted to check if the user had a particular permission, we could use ask to retrieve the current permissions object. For example, only administrators might be allowed to create new users: createUser :: Reader Permissions (Maybe User) createUser = do permissions <- ask if hasPermission "admin" permissions then map Just newUser else pure Nothing To elevate the user's permissions, we might use the local action to modify the Permissions object during the execution of some computation: runAsAdmin :: forall a. Reader Permissions a -> Reader Permissions a runAsAdmin = local (addPermission "admin") Then we could write a function to create a new user, even if the user did not have the admin permission: createUserAsAdmin :: Reader Permissions (Maybe User) createUserAsAdmin = runAsAdmin createUser To run a computation in the Reader monad, the runReader function can be used to provide the global configuration: runReader :: forall r a. Reader r a -> r -> a Exercises In these exercises, we will use the Reader monad to build a small library for rendering documents with indentation. The "global configuration" will be a number indicating the current indentation level: type Level = Int type Doc = Reader Level String (Easy) Write a function linewhich renders a function at the current indentation level. Your function should have the following type: line :: String -> Doc Hint: use the askfunction to read the current indentation level. The powerfunction from Data.Monoidmay be helpful too. (Easy) Use the localfunction to write a function indent :: Doc -> Doc which increases the indentation level for a block of code. (Medium) Use the sequencefunction defined in Data.Traversableto write a function cat :: Array Doc -> Doc which concatenates a collection of documents, separating them with new lines. (Medium) Use the runReaderfunction to write a function render :: Doc -> String which renders a document as a String. You should now be able to use your library to write simple documents, as follows: render $ cat [ line "Here is some indented text:" , indent $ cat [ line "I am indented" , line "So am I" , indent $ line "I am even more indented" ] ] The Writer Monad The Writer monad provides the ability to accumulate a secondary value in addition to the return value of a computation. A common use case is to accumulate a log of type String or Array String, but the Writer monad is more general than this. It can actually be used to accumulate a value in any monoid, so it might be used to keep track of an integer total using the Additive Int monoid, or to track whether any of several intermediate Boolean values were true, using the Disj Boolean monoid. The Writer type constructor takes two type arguments: a type w which should be an instance of the Monoid type class, and the return type a. The key element of the Writer API is the tell function: tell :: forall w a. Monoid w => w -> Writer w Unit The tell action appends the provided value to the current accumulated result. As an example, let's add a log to an existing function by using the Array String monoid. Consider our previous implementation of the greatest common divisor function: gcd :: Int -> Int -> Int gcd n 0 = n gcd 0 m = m gcd n m = if n > m then gcd (n - m) m else gcd n (m - n) We could add a log to this function by changing the return type to Writer (Array String) Int: import Control.Monad.Writer import Control.Monad.Writer.Class gcdLog :: Int -> Int -> Writer (Array String) Int We only have to change our function slightly to log the two inputs at each step: gcdLog n 0 = pure n gcdLog 0 m = pure m gcdLog n m = do tell ["gcdLog " <> show n <> " " <> show m] if n > m then gcdLog (n - m) m else gcdLog n (m - n) We can run a computation in the Writer monad by using either of the execWriter or runWriter functions: execWriter :: forall w a. Writer w a -> w runWriter :: forall w a. Writer w a -> Tuple a w Just like in the case of the State monad, execWriter only returns the accumulated log, whereas runWriter returns both the log and the result. We can test our modified function in PSCi: > import Control.Monad.Writer > import Control.Monad.Writer.Class > runWriter (gcdLog 21 15) Tuple 3 ["gcdLog 21 15","gcdLog 6 15","gcdLog 6 9","gcdLog 6 3","gcdLog 3 3"] Exercises (Medium) Rewrite the sumArrayfunction above using the Writermonad and the Additive Intmonoid from the monoidpackage. (Medium) The Collatz function is defined on natural numbers nas n / 2when nis even, and 3 * n + 1when nis odd. For example, the iterated Collatz sequence starting at 10is as follows: 10, 5, 16, 8, 4, 2, 1, ... It is conjectured that the iterated Collatz sequence always reaches 1after some finite number of applications of the Collatz function. Write a function which uses recursion to calculate how many iterations of the Collatz function are required before the sequence reaches 1. Modify your function to use the Writermonad to log each application of the Collatz function. Monad Transformers Each of the three monads above: State, Reader and Writer, are also examples of so-called monad transformers. The equivalent monad transformers are called StateT, ReaderT, and WriterT respectively. What is a monad transformer? Well, as we have seen, a monad augments PureScript code with some type of side effect, which can be interpreted in PureScript by using the appropriate handler ( runState, runReader, runWriter, etc.) This is fine if we only need to use one side-effect. However, it is often useful to use more than one side-effect at once. For example, we might want to use Reader together with Maybe to express optional results in the context of some global configuration. Or we might want the mutable state provided by the State monad together with the pure error tracking capability of the Either monad. This is the problem solved by monad transformers. Note that we have already seen that the Effect monad provides a partial solution to this problem. Monad transformers provide another solution, and each approach has its own benefits and limitations. A monad transformer is a type constructor which is parameterized not only by a type, but by another type constructor. It takes one monad and turns it into another monad, adding its own variety of side-effects. Let's see an example. The monad transformer version of the State monad is StateT, defined in the Control.Monad.State.Trans module. We can find the kind of StateT using PSCi: > import Control.Monad.State.Trans > :kind StateT Type -> (Type -> Type) -> Type -> Type This looks quite confusing, but we can apply StateT one argument at a time to understand how to use it. The first type argument is the type of the state we wish to use, as was the case for State. Let's use a state of type String: > :kind StateT String (Type -> Type) -> Type -> Type The next argument is a type constructor of kind Type -> Type. It represents the underlying monad, which we want to add the effects of StateT to. For the sake of an example, let's choose the Either String monad: > :kind StateT String (Either String) Type -> Type We are left with a type constructor. The final argument represents the return type, and we might instantiate it to Number for example: > :kind StateT String (Either String) Number Type Finally we are left with something of kind Type, which means we can try to find values of this type. The monad we have constructed - StateT String (Either String) - represents computations which can fail with an error, and which can use mutable state. We can use the actions of the outer StateT String monad ( get, put, and modify) directly, but in order to use the effects of the wrapped monad ( Either String), we need to "lift" them over the monad transformer. The Control.Monad.Trans module defines the MonadTrans type class, which captures those type constructors which are monad transformers, as follows: class MonadTrans t where lift :: forall m a. Monad m => m a -> t m a This class contains a single member, lift, which takes computations in any underlying monad m and lifts them into the wrapped monad t m. In our case, the type constructor t is StateT String, and m is the Either String monad, so lift provides a way to lift computations of type Either String a to computations of type StateT String (Either String) a. This means that we can use the effects of StateT String and Either String side-by-side, as long as we use lift every time we use a computation of type Either String a. For example, the following computation reads the underlying state, and then throws an error if the state is the empty string: import Data.String (drop, take) split :: StateT String (Either String) String split = do s <- get case s of "" -> lift $ Left "Empty string" _ -> do put (drop 1 s) pure (take 1 s) If the state is not empty, the computation uses put to update the state to drop 1 s (that is, s with the first character removed), and returns take 1 s (that is, the first character of s). Let's try this in PSCi: > runStateT split "test" Right (Tuple "t" "est") > runStateT split "" Left "Empty string" This is not very remarkable, since we could have implemented this without StateT. However, since we are working in a monad, we can use do notation or applicative combinators to build larger computations from smaller ones. For example, we can apply split twice to read the first two characters from a string: > runStateT ((<>) <$> split <*> split) "test" (Right (Tuple "te" "st")) We can use the split function with a handful of other actions to build a basic parsing library. In fact, this is the approach taken by the parsing library. This is the power of monad transformers - we can create custom-built monads for a variety of problems, choosing the side-effects that we need, and keeping the expressiveness of do notation and applicative combinators. The ExceptT Monad Transformer The transformers package also defines the ExceptT e monad transformer, which is the transformer corresponding to the Either e monad. It provides the following API: class MonadError e m where throwError :: forall a. e -> m a catchError :: forall a. m a -> (e -> m a) -> m a instance monadErrorExceptT :: Monad m => MonadError e (ExceptT e m) runExceptT :: forall e m a. ExceptT e m a -> m (Either e a) The MonadError class captures those monads which support throwing and catching of errors of some type e, and an instance is provided for the ExceptT e monad transformer. The throwError action can be used to indicate failure, just like Left in the Either e monad. The catchError action allows us to continue after an error is thrown using throwError. The runExceptT handler is used to run a computation of type ExceptT e m a. This API is similar to that provided by the exceptions package and the Exception effect. However, there are some important differences: Exceptionuses actual JavaScript exceptions, whereas ExceptTmodels errors as a pure data structure. - The Exceptioneffect only supports exceptions of one type, namely JavaScript's Errortype, whereas ExceptTsupports errors of any type. In particular, we are free to define new error types. Let's try out ExceptT by using it to wrap the Writer monad. Again, we are free to use actions from the monad transformer ExceptT e directly, but computations in the Writer monad should be lifted using lift: import Control.Monad.Except import Control.Monad.Writer writerAndExceptT :: ExceptT String (Writer (Array String)) String writerAndExceptT = do lift $ tell ["Before the error"] _ <- throwError "Error!" lift $ tell ["After the error"] pure "Return value" If we test this function in PSCi, we can see how the two effects of accumulating a log and throwing an error interact. First, we can run the outer ExceptT computation of type by using runExceptT, leaving a result of type Writer (Array String) (Either String String). We can then use runWriter to run the inner Writer computation: > runWriter $ runExceptT writerAndExceptT Tuple (Left "Error!") ["Before the error"] Note that only those log messages which were written before the error was thrown actually get appended to the log. Monad Transformer Stacks As we have seen, monad transformers can be used to build new monads on top of existing monads. For some monad transformer t1 and some monad m, the application t1 m is also a monad. That means that we can apply a second monad transformer t2 to the result t1 m to construct a third monad t2 (t1 m). In this way, we can construct a stack of monad transformers, which combine the side-effects provided by their constituent monads. In practice, the underlying monad m is either the Effect monad, if native side-effects are required, or the Identity monad, defined in the Data.Identity module. The Identity monad adds no new side-effects, so transforming the Identity monad only provides the effects of the monad transformer. In fact, the State, Reader and Writer monads are implemented by transforming the Identity monad with StateT, ReaderT and WriterT respectively. Let's see an example in which three side effects are combined. We will use the StateT, WriterT and ExceptT effects, with the Identity monad on the bottom of the stack. This monad transformer stack will provide the side effects of mutable state, accumulating a log, and pure errors. We can use this monad transformer stack to reproduce our split action with the added feature of logging. type Errors = Array String type Log = Array String type Parser = StateT String (WriterT Log (ExceptT Errors Identity)) split :: Parser String split = do s <- get lift $ tell ["The state is " <> s] case s of "" -> lift $ lift $ throwError ["Empty string"] _ -> do put (drop 1 s) pure (take 1 s) If we test this computation in PSCi, we see that the state is appended to the log for every invocation of split. Note that we have to remove the side-effects in the order in which they appear in the monad transformer stack: first we use runStateT to remove the StateT type constructor, then runWriterT, then runExceptT. Finally, we run the computation in the Identity monad by using unwrap. > runParser p s = unwrap $ runExceptT $ runWriterT $ runStateT p s > runParser split "test" (Right (Tuple (Tuple "t" "est") ["The state is test"])) > runParser ((<>) <$> split <*> split) "test" (Right (Tuple (Tuple "te" "st") ["The state is test", "The state is est"])) However, if the parse is unsuccessful because the state is empty, then no log is printed at all: > runParser split "" (Left ["Empty string"]) This is because of the way in which the side-effects provided by the ExceptT monad transformer interact with the side-effects provided by the WriterT monad transformer. We can address this by changing the order in which the monad transformer stack is composed. If we move the ExceptT transformer to the top of the stack, then the log will contain all messages written up until the first error, as we saw earlier when we transformed Writer with ExceptT. One problem with this code is that we have to use the lift function multiple times to lift computations over multiple monad transformers: for example, the call to throwError has to be lifted twice, once over WriterT and a second time over StateT. This is fine for small monad transformer stacks, but quickly becomes inconvenient. Fortunately, as we will see, we can use the automatic code generation provided by type class inference to do most of this "heavy lifting" for us. Exercises (Easy) Use the ExceptTmonad transformer over the Identityfunctor to write a function safeDividewhich divides two numbers, throwing an error (as the String "Divide by zero!") if the denominator is zero. (Medium) Write a parser string :: String -> Parser String which matches a string as a prefix of the current state, or fails with an error message. Your parser should work as follows: > runParser (string "abc") "abcdef" (Right (Tuple (Tuple "abc" "def") ["The state is abcdef"])) Hint: you can use the implementation of splitas a starting point. You might find the stripPrefixfunction useful. (Difficult) Use the ReaderTand WriterTmonad transformers to reimplement the document printing library which we wrote earlier using the Readermonad. Instead of using lineto emit strings and catto concatenate strings, use the Array Stringmonoid with the WriterTmonad transformer, and tellto append a line to the result. Use the same names as in the original implementation but ending with an apostrophe ( '). Type Classes to the Rescue! When we looked at the State monad at the start of this chapter, I gave the following types for the actions of the State monad: get :: forall s. State s s put :: forall s. s -> State s Unit modify :: forall s. (s -> s) -> State s Unit In reality, the types given in the Control.Monad.State.Class module are more general than this: get :: forall m s. MonadState s m => m s put :: forall m s. MonadState s m => s -> m Unit modify :: forall m s. MonadState s m => (s -> s) -> m Unit The Control.Monad.State.Class module defines the MonadState (multi-parameter) type class, which allows us to abstract over "monads which support pure mutable state". As one would expect, the State s type constructor is an instance of the MonadState s type class, but there are many more interesting instances of this class. In particular, there are instances of MonadState for the WriterT, ReaderT and ExceptT monad transformers, provided in the transformers package. Each of these monad transformers has an instance for MonadState whenever the underlying Monad does. In practice, this means that as long as StateT appears somewhere in the monad transformer stack, and everything above StateT is an instance of MonadState, then we are free to use get, put and modify directly, without the need to use lift. Indeed, the same is true of the actions we covered for the ReaderT, WriterT, and ExceptT transformers. transformers defines a type class for each of the major transformers, allowing us to abstract over monads which support their operations. In the case of the split function above, the monad stack we constructed is an instance of each of the MonadState, MonadWriter and MonadError type classes. This means that we don't need to call lift at all! We can just use the actions get, put, tell and throwError as if they were defined on the monad stack itself: split :: Parser String split = do s <- get tell ["The state is " <> show s] case s of "" -> throwError ["Empty string"] _ -> do put (drop 1 s) pure (take 1 s) This computation really looks like we have extended our programming language to support the three new side-effects of mutable state, logging and error handling. However, everything is still implemented using pure functions and immutable data under the hood. Alternatives The control package defines a number of abstractions for working with computations which can fail. One of these is the Alternative type class: class Functor f <= Alt f where alt :: forall a. f a -> f a -> f a class Alt f <= Plus f where empty :: forall a. f a class (Applicative f, Plus f) <= Alternative f Alternative provides two new combinators: the empty value, which provides a prototype for a failing computation, and the alt function (and its alias, <|>) which provides the ability to fall back to an alternative computation in the case of an error. The Data.Array module provides two useful functions for working with type constructors in the Alternative type class: many :: forall f a. Alternative f => Lazy (f (Array a)) => f a -> f (Array a) some :: forall f a. Alternative f => Lazy (f (Array a)) => f a -> f (Array a) There is also an equivalent many and some for Data.List The many combinator uses the Alternative type class to repeatedly run a computation zero-or-more times. The some combinator is similar, but requires at least the first computation to succeed. In the case of our Parser monad transformer stack, there is an instance of Alternative induced by the ExceptT component, which supports failure by composing errors in different branches using a Monoid instance (this is why we chose Array String for our Errors type). This means that we can use the many and some functions to run a parser multiple times: > import Data.Array (many) > runParser (many split) "test" (Right (Tuple (Tuple ["t", "e", "s", "t"] "") [ "The state is \"test\"" , "The state is \"est\"" , "The state is \"st\"" , "The state is \"t\"" ])) Here, the input string "test" has been repeatedly split to return an array of four single-character strings, the leftover state is empty, and the log shows that we applied the split combinator four times. Monad Comprehensions The Control.MonadPlus module defines a subclass of the Alternative type class, called MonadPlus. MonadPlus captures those type constructors which are both monads and instances of Alternative: class (Monad m, Alternative m) <= MonadPlus m In particular, our Parser monad is an instance of MonadPlus. When we covered array comprehensions earlier in the book, we introduced the guard function, which could be used to filter out unwanted results. In fact, the guard function is more general, and can be used for any monad which is an instance of MonadPlus: guard :: forall m. Alternative m => Boolean -> m Unit The <|> operator allows us to backtrack in case of failure. To see how this is useful, let's define a variant of the split combinator which only matches upper case characters: upper :: Parser String upper = do s <- split guard $ toUpper s == s pure s Here, we use a guard to fail if the string is not upper case. Note that this code looks very similar to the array comprehensions we saw earlier - using MonadPlus in this way, we sometimes refer to constructing monad comprehensions. Backtracking We can use the <|> operator to backtrack to another alternative in case of failure. To demonstrate this, let's define one more parser, which matches lower case characters: lower :: Parser String lower = do s <- split guard $ toLower s == s pure s With this, we can define a parser which eagerly matches many upper case characters if the first character is upper case, or many lower case character if the first character is lower case: > upperOrLower = some upper <|> some lower This parser will match characters until the case changes: > runParser upperOrLower "abcDEF" (Right (Tuple (Tuple ["a","b","c"] ("DEF")) [ "The state is \"abcDEF\"" , "The state is \"bcDEF\"" , "The state is \"cDEF\"" ])) We can even use many to fully split a string into its lower and upper case components: > components = many upperOrLower > runParser components "abCDeFgh" (Right (Tuple (Tuple [["a","b"],["C","D"],["e"],["F"],["g","h"]] "") [ "The state is \"abCDeFgh\"" , "The state is \"bCDeFgh\"" , "The state is \"CDeFgh\"" , "The state is \"DeFgh\"" , "The state is \"eFgh\"" , "The state is \"Fgh\"" , "The state is \"gh\"" , "The state is \"h\"" ])) Again, this illustrates the power of reusability that monad transformers bring - we were able to write a backtracking parser in a declarative style with only a few lines of code, by reusing standard abstractions! Exercises (Easy) Remove the calls to the liftfunction from your implementation of the stringparser. Verify that the new implementation type checks, and convince yourself that it should. (Medium) Use your stringparser with the somecombinator to write a parser asFollowedByBswhich recognizes strings consisting of several copies of the string "a"followed by several copies of the string "b". (Medium) Use the <|>operator to write a parser asOrBswhich recognizes strings of the letters aor bin any order. (Difficult) The Parsermonad might also be defined as follows: type Parser = ExceptT Errors (StateT String (WriterT Log Identity)) What effect does this change have on our parsing functions? The RWS Monad One particular combination of monad transformers is so common that it is provided as a single monad transformer in the transformers package. The Reader, Writer and State monads are combined into the reader-writer-state monad, or more simply the RWS monad. This monad has a corresponding monad transformer called the RWST monad transformer. We will use the RWS monad to model the game logic for our text adventure game. The RWS monad is defined in terms of three type parameters (in addition to its return type): type RWS r w s = RWST r w s Identity Notice that the RWS monad is defined in terms of its own monad transformer, by setting the base monad to Identity which provides no side-effects. The first type parameter, r, represents the global configuration type. The second, w, represents the monoid which we will use to accumulate a log, and the third, s is the type of our mutable state. In the case of our game, our global configuration is defined in a type called GameEnvironment in the Data.GameEnvironment module: type PlayerName = String newtype GameEnvironment = GameEnvironment { playerName :: PlayerName , debugMode :: Boolean } It defines the player name, and a flag which indicates whether or not the game is running in debug mode. These options will be set from the command line when we come to run our monad transformer. The mutable state is defined in a type called GameState in the Data.GameState module: import Data.Map as M import Data.Set as S newtype GameState = GameState { items :: M.Map Coords (S.Set GameItem) , player :: Coords , inventory :: S.Set GameItem } The Coords data type represents points on a two-dimensional grid, and the GameItem data type is an enumeration of the items in the game: data GameItem = Candle | Matches The GameState type uses two new data structures: Map and Set, which represent sorted maps and sorted sets respectively. The items property is a mapping from coordinates of the game grid to sets of game items at that location. The player property stores the current coordinates of the player, and the inventory property stores a set of game items currently held by the player. The Map and Set data structures are sorted by their keys, can be used with any key type in the Ord type class. This means that the keys in our data structures should be totally ordered. We will see how the Map and Set structures are used as we write the actions for our game. For our log, we will use the List String monoid. We can define a type synonym for our Game monad, implemented using RWS: type Log = L.List String type Game = RWS GameEnvironment Log GameState Implementing Game Logic Our game is going to be built from simple actions defined in the Game monad, by reusing the actions from the Reader, Writer and State monads. At the top level of our application, we will run the pure computations in the Game monad, and use the Effect monad to turn the results into observable side-effects, such as printing text to the console. One of the simplest actions in our game is the has action. This action tests whether the player's inventory contains a particular game item. It is defined as follows: has :: GameItem -> Game Boolean has item = do GameState state <- get pure $ item `S.member` state.inventory This function uses the get action defined in the MonadState type class to read the current game state, and then uses the member function defined in Data.Set to test whether the specified GameItem appears in the Set of inventory items. Another action is the pickUp action. It adds a game item to the player's inventory if it appears in the current room. It uses actions from the MonadWriter and MonadState type classes. First of all, it reads the current game state: pickUp :: GameItem -> Game Unit pickUp item = do GameState state <- get pickUp looks up the set of items in the current room. It does this by using the lookup function defined in Data.Map: case state.player `M.lookup` state.items of The lookup function returns an optional result indicated by the Maybe type constructor. If the key does not appear in the map, the lookup function returns Nothing, otherwise it returns the corresponding value in the Just constructor. We are interested in the case where the corresponding item set contains the specified game item. Again we can test this using the member function: Just items | item `S.member` items -> do In this case, we can use put to update the game state, and tell to add a message to the log: let newItems = M.update (Just <<< S.delete item) state.player state.items newInventory = S.insert item state.inventory put $ GameState state { items = newItems , inventory = newInventory } tell (L.singleton ("You now have the " <> show item)) Note that there is no need to lift either of the two computations here, because there are appropriate instances for both MonadState and MonadWriter for our Game monad transformer stack. The argument to put uses a record update to modify the game state's items and inventory fields. We use the update function from Data.Map which modifies a value at a particular key. In this case, we modify the set of items at the player's current location, using the delete function to remove the specified item from the set. inventory is also updated, using insert to add the new item to the player's inventory set. Finally, the pickUp function handles the remaining cases, by notifying the user using tell: _ -> tell (L.singleton "I don't see that item here.") As an example of using the Reader monad, we can look at the code for the debug command. This command allows the user to inspect the game state at runtime if the game is running in debug mode: GameEnvironment env <- ask if env.debugMode then do state :: GameState <- get tell (L.singleton (show state)) else tell (L.singleton "Not running in debug mode.") Here, we use the ask action to read the game configuration. Again, note that we don't need to lift any computation, and we can use actions defined in the MonadState, MonadReader and MonadWriter type classes in the same do notation block. If the debugMode flag is set, then the tell action is used to write the state to the log. Otherwise, an error message is added. The remainder of the Game module defines a set of similar actions, each using only the actions defined by the MonadState, MonadReader and MonadWriter type classes. Running the Computation Since our game logic runs in the RWS monad, it is necessary to run the computation in order to respond to the user's commands. The front-end of our game is built using two packages: optparse, which provides applicative command line parsing, and node-readline, which wraps NodeJS' readline module, allowing us to write interactive console-based applications. The interface to our game logic is provided by the function game in the Game module: game :: Array String -> Game Unit To run this computation, we pass a list of words entered by the user as an array of strings, and run the resulting RWS computation using runRWS: data RWSResult state result writer = RWSResult state result writer runRWS :: forall r w s a. RWS r w s a -> r -> s -> RWSResult s a w runRWS looks like a combination of runReader, runWriter and runState. It takes a global configuration and an initial state as an argument, and returns a data structure containing the log, the result and the final state. The front-end of our application is defined by a function runGame, with the following type signature: runGame :: GameEnvironment -> Effect Unit This function interacts with the user via the console (using the node-readline and console packages). runGame takes the game configuration as a function argument. The node-readline package provides the LineHandler type, which represents actions in the Effect monad which handle user input from the terminal. Here is the corresponding API: type LineHandler a = String -> Effect a foreign import setLineHandler :: forall a . Interface -> LineHandler a -> Effect Unit The Interface type represents a handle for the console, and is passed as an argument to the functions which interact with it. An Interface can be created using the createConsoleInterface function: import Node.ReadLine as RL runGame env = do interface <- RL.createConsoleInterface RL.noCompletion The first step is to set the prompt at the console. We pass the interface handle, and provide the prompt string and indentation level: RL.setPrompt "> " interface In our case, we are interested in implementing the line handler function. Our line handler is defined using a helper function in a let declaration, as follows: lineHandler :: GameState -> String -> Effect Unit lineHandler currentState input = do case runRWS (game (split (wrap " ") input)) env currentState of RWSResult state _ written -> do for_ written log RL.setLineHandler (lineHandler state) $ interface RL.prompt interface pure unit The let binding is closed over both the game configuration, named env, and the console handle, named interface. Our handler takes an additional first argument, the game state. This is required since we need to pass the game state to runRWS to run the game's logic. The first thing this action does is to break the user input into words using the split function from the Data.String module. It then uses runRWS to run the game action (in the RWS monad), passing the game environment and current game state. Having run the game logic, which is a pure computation, we need to print any log messages to the screen and show the user a prompt for the next command. The for_ action is used to traverse the log (of type List String) and print its entries to the console. Finally, setLineHandler is used to update the line handler function to use the updated game state, and the prompt is displayed again using the prompt action. The runGame function finally attaches the initial line handler to the console interface, and displays the initial prompt: RL.setLineHandler (lineHandler initialGameState) interface RL.prompt interface Exercises (Medium) Implement a new command cheat, which moves all game items from the game grid into the user's inventory. Create a function cheat :: Game Unitin the Gamemodule, and use this function from game. (Difficult) The Writercomponent of the RWSmonad is currently used for two types of messages: error messages and informational messages. Because of this, several parts of the code use case statements to handle error cases. Refactor the code to use the ExceptTmonad transformer to handle the error messages, and RWSto handle informational messages. Note: There are no tests for this exercise. Handling Command Line Options The final piece of the application is responsible for parsing command line options and creating the GameEnvironment configuration record. For this, we use the optparse package. optparse is an example of applicative command line option parsing. Recall that an applicative functor allows us to lift functions of arbitrary arity over a type constructor representing some type of side-effect. In the case of the optparse package, the functor we are interested in is the Parser functor (imported from the optparse module Options.Applicative, not to be confused with our Parser that we defined in the Split module), which adds the side-effect of reading from command line options. It provides the following handler: customExecParser :: forall a. ParserPrefs → ParserInfo a → Effect a This is best illustrated by example. The application's main function is defined using customExecParser as follows: main = OP.customExecParser prefs argParser >>= runGame The first argument is used to configure the optparse library. In our case, we simply configure it to show the help message when the application is run without any arguments (instead of showing a "missing argument" error) by using OP.prefs OP.showHelpOnEmpty, but the Options.Applicative.Builder module provides several other options. The second argument is the complete description of our parser program: argParser :: OP.ParserInfo GameEnvironment argParser = OP.info (env <**> OP.helper) parserOptions parserOptions = fold [ OP.fullDesc , OP.progDesc "Play the game as <player name>" , OP.header "Monadic Adventures! A game to learn monad transformers" ] Here OP.info combines a Parser with a set of options for how the help message is formatted. env <**> OP.helper takes any command line argument Parser named env and adds a --help option to it automatically. Options for the help message are of type InfoMod, which is a monoid, so we can use the fold function to add several options together. The interesting part of our parser is constructing the GameEnvironment: env :: OP.Parser GameEnvironment env = gameEnvironment <$> player <*> debug player :: OP.Parser String player = OP.strOption $ fold [ OP.long "player" , OP.short 'p' , OP.metavar "<player name>" , OP.help "The player's name <String>" ] debug :: OP.Parser Boolean debug = OP.switch $ fold [ OP.long "debug" , OP.short 'd' , OP.help "Use debug mode" ] player and debug are both Parsers, so we can use our applicative operators <$> and <*> to lift our gameEnvironment function, which has the type PlayerName -> Boolean -> GameEnvironment over Parser. OP.strOption constructs a command line option that expects a string value, and is configured via a collection of Mods folded together. OP.flag works similarly, but doesn't expect an associated value. optparse offers extensive documentation on different modifiers available to build various command line parsers. Notice how we were able to use the notation afforded by the applicative operators to give a compact, declarative specification of our command line interface. In addition, it is simple to add new command line arguments, simply by adding a new function argument to runGame, and then using <*> to lift runGame over an additional argument in the definition of env. Exercises - (Medium) Add a new Boolean-valued property cheatModeto the GameEnvironmentrecord. Add a new command line flag -cto the optparseconfiguration which enables cheat mode. The cheatcommand from the previous exercise should be disallowed if cheat mode is not enabled. Conclusion This chapter was a practical demonstration of the techniques we've learned so far, using monad transformers to build a pure specification of our game, and the Effect monad to build a front-end using the console. Because we separated our implementation from the user interface, it would be possible to create other front-ends for our game. For example, we could use the Effect monad to render the game in the browser using the Canvas API or the DOM. We have seen how monad transformers allow us to write safe code in an imperative style, where effects are tracked by the type system. In addition, type classes provide a powerful way to abstract over the actions provided by a monad, enabling code reuse. We were able to use standard abstractions like Alternative and MonadPlus to build useful monads by combining standard monad transformers. Monad transformers are an excellent demonstration of the sort of expressive code that can be written by relying on advanced type system features such as higher-kinded polymorphism and multi-parameter type classes. Canvas Graphics Chapter Goals This chapter will be an extended example focussing on the canvas package, which provides a way to generate 2D graphics from PureScript using the HTML5 Canvas API. Project Setup This module's project introduces the following new dependencies: canvas, which gives types to methods from the HTML5 Canvas API refs, which provides a side-effect for using global mutable references The source code for the chapter is broken up into a set of modules, each of which defines a main method. Different sections of this chapter are implemented in different files, and the Main module can be changed by modifying the Spago build command to run the appropriate file's main method at each point. The HTML file html/index.html contains a single canvas element which will be used in each example, and a script element to load the compiled PureScript code. To test the code for each section, open the HTML file in your browser. Because most exercises target the browser, there are no unit tests for this chapter. Simple Shapes The Example/Rectangle.purs file contains a simple introductory example, which draws a single blue rectangle at the center of the canvas. The module imports the Effect type from the Effect module, and also the Graphics.Canvas module, which contains actions in the Effect monad for working with the Canvas API. The main action starts, like in the other modules, by using the getCanvasElementById action to get a reference to the canvas object, and the getContext2D action to access the 2D rendering context for the canvas: The void function takes a functor and replaces its value with Unit. In the example it is used to make main conform with its signature. main :: Effect Unit. The types of these actions can be found using PSCi or by looking at the documentation: getCanvasElementById :: String -> Effect (Maybe CanvasElement) getContext2D :: CanvasElement -> Effect Context2D CanvasElement and Context2D are types defined in the Graphics.Canvas module. The same module also defines the Canvas effect, which is used by all of the actions in the module. The graphics context ctx manages the state of the canvas, and provides methods to render primitive shapes, set styles and colors, and apply transformations. We continue by setting the fill style to solid blue using the setFillStyle action. The longer hex notation of #0000FF may also be used for blue, but shorthand notation is easier for simple colors: setFillStyle ctx "#00F" Note that the setFillStyle action takes the graphics context as an argument. This is a common pattern in the Graphics.Canvas module. Finally, we use the fillPath action to fill the rectangle. fillPath has the following type: fillPath :: forall a. Context2D -> Effect a -> Effect a fillPath takes a graphics context and another action which builds the path to render. To build a path, we can use the rect action. rect takes a graphics context, and a record which provides the position and size of the rectangle: fillPath ctx $ rect ctx { x: 250.0 , y: 250.0 , width: 100.0 , height: 100.0 } Build the rectangle example, providing Example.Rectangle as the name of the main module: $ spago bundle-app --main Example.Rectangle --to dist/Main.js Now, open the html/index.html file and verify that this code renders a blue rectangle in the center of the canvas. Putting Row Polymorphism to Work There are other ways to render paths. The arc function renders an arc segment, and the moveTo, lineTo and closePath functions can be used to render piecewise-linear paths. The Shapes.purs file renders three shapes: a rectangle, an arc segment and a triangle. We have seen that the rect function takes a record as its argument. In fact, the properties of the rectangle are defined in a type synonym: type Rectangle = { x :: Number , y :: Number , width :: Number , height :: Number } The x and y properties represent the location of the top-left corner, while the w and h properties represent the width and height respectively. To render an arc segment, we can use the arc function, passing a record with the following type: type Arc = { x :: Number , y :: Number , radius :: Number , start :: Number , end :: Number } Here, the x and y properties represent the center point, r is the radius, and start and end represent the endpoints of the arc in radians. For example, this code fills an arc segment centered at (300, 300) with radius 50. The arc completes 2/3rds of a rotation. Note that the unit circle is flipped vertically, since the y-axis increases towards the bottom of the canvas: fillPath ctx $ arc ctx { x : 300.0 , y : 300.0 , radius : 50.0 , start : 0.0 , end : Math.tau * 2.0 / 3.0 } Notice that both the Rectangle and Arc record types contain x and y properties of type Number. In both cases, this pair represents a point. This means that we can write row-polymorphic functions which can act on either type of record. For example, the Shapes module defines a translate function which translates a shape by modifying its x and y properties: translate :: forall r . Number -> Number -> { x :: Number, y :: Number | r } -> { x :: Number, y :: Number | r } translate dx dy shape = shape { x = shape.x + dx , y = shape.y + dy } Notice the row-polymorphic type. It says that translate accepts any record with x and y properties and any other properties, and returns the same type of record. The x and y fields are updated, but the rest of the fields remain unchanged. This is an example of record update syntax. The expression shape { ... } creates a new record based on the shape record, with the fields inside the braces updated to the specified values. Note that the expressions inside the braces are separated from their labels by equals symbols, not colons like in record literals. The translate function can be used with both the Rectangle and Arc records, as can be seen in the Shapes example. The third type of path rendered in the Shapes example is a piecewise-linear path. Here is the corresponding code: setFillStyle ctx "#F00" fillPath ctx $ do moveTo ctx 300.0 260.0 lineTo ctx 260.0 340.0 lineTo ctx 340.0 340.0 closePath ctx There are three functions in use here: moveTomoves the current location of the path to the specified coordinates, lineTorenders a line segment between the current location and the specified coordinates, and updates the current location, closePathcompletes the path by rendering a line segment joining the current location to the start position. The result of this code snippet is to fill an isosceles triangle. Build the example by specifying Example.Shapes as the main module: $ spago bundle-app --main Example.Shapes --to dist/Main.js and open html/index.html again to see the result. You should see the three different types of shapes rendered to the canvas. Exercises (Easy) Experiment with the strokePathand setStrokeStylefunctions in each of the examples so far. (Easy) The fillPathand strokePathfunctions can be used to render complex paths with a common style by using a do notation block inside the function argument. Try changing the Rectangleexample to render two rectangles side-by-side using the same call to fillPath. Try rendering a sector of a circle by using a combination of a piecewise-linear path and an arc segment. (Medium) Given the following record type: type Point = { x :: Number, y :: Number } which represents a 2D point, write a function renderPathwhich strokes a closed path constructed from a number of points: renderPath :: Context2D -> Array Point -> Effect Unit Given a function f :: Number -> Point which takes a Numberbetween 0and 1as its argument and returns a Point, write an action which plots fby using your renderPathfunction. Your action should approximate the path by sampling fat a finite set of points. Experiment by rendering different paths by varying the function f. Drawing Random Circles The Example/Random.purs file contains an example which uses the Effect monad to interleave two different types of side-effect: random number generation, and canvas manipulation. The example renders one hundred randomly generated circles onto the canvas. The main action obtains a reference to the graphics context as before, and then sets the stroke and fill styles: setFillStyle ctx "#F00" setStrokeStyle ctx "#000" Next, the code uses the for_ function to loop over the integers between 0 and 100: for_ (1 .. 100) \_ -> do On each iteration, the do notation block starts by generating three random numbers distributed between 0 and 1. These numbers represent the x and y coordinates, and the radius of a circle: x <- random y <- random r <- random Next, for each circle, the code creates an Arc based on these parameters and finally fills and strokes the arc with the current styles: let path = arc ctx { x : x * 600.0 , y : y * 600.0 , radius : r * 50.0 , start : 0.0 , end : Number.tau , useCounterClockwise: false } fillPath ctx path strokePath ctx path Build this example by specifying the Example.Random module as the main module: $ spago bundle-app --main Example.Random --to dist/Main.js and view the result by opening html/index.html. Transformations There is more to the canvas than just rendering simple shapes. Every canvas maintains a transformation which is used to transform shapes before rendering. Shapes can be translated, rotated, scaled, and skewed. The canvas library supports these transformations using the following functions: translate :: Context2D -> TranslateTransform -> Effect Context2D rotate :: Context2D -> Number -> Effect Context2D scale :: Context2D -> ScaleTransform -> Effect Context2D transform :: Context2D -> Transform -> Effect Context2D The translate action performs a translation whose components are specified by the properties of the TranslateTransform record. The rotate action performs a rotation around the origin, through some number of radians specified by the first argument. The scale action performs a scaling, with the origin as the center. The ScaleTransform record specifies the scale factors along the x and y axes. Finally, transform is the most general action of the four here. It performs an affine transformation specified by a matrix. Any shapes rendered after these actions have been invoked will automatically have the appropriate transformation applied. In fact, the effect of each of these functions is to post-multiply the transformation with the context's current transformation. The result is that if multiple transformations applied after one another, then their effects are actually applied in reverse: transformations ctx = do translate ctx { translateX: 10.0, translateY: 10.0 } scale ctx { scaleX: 2.0, scaleY: 2.0 } rotate ctx (Math.tau / 4.0) renderScene The effect of this sequence of actions is that the scene is rotated, then scaled, and finally translated. Preserving the Context A common use case is to render some subset of the scene using a transformation, and then to reset the transformation afterwards. The Canvas API provides the save and restore methods, which manipulate a stack of states associated with the canvas. canvas wraps this functionality into the following functions: save :: Context2D -> Effect Context2D restore :: Context2D -> Effect Context2D The save action pushes the current state of the context (including the current transformation and any styles) onto the stack, and the restore action pops the top state from the stack and restores it. This allows us to save the current state, apply some styles and transformations, render some primitives, and finally restore the original transformation and state. For example, the following function performs some canvas action, but applies a rotation before doing so, and restores the transformation afterwards: rotated ctx render = do save ctx rotate (Math.tau / 3.0) ctx render restore ctx In the interest of abstracting over common use cases using higher-order functions, the canvas library provides the withContext function, which performs some canvas action while preserving the original context state: withContext :: Context2D -> Effect a -> Effect a We could rewrite the rotated function above using withContext as follows: rotated ctx render = withContext ctx do rotate (Math.tau / 3.0) ctx render Global Mutable State In this section, we'll use the refs package to demonstrate another effect in the Effect monad. The Effect.Ref module provides a type constructor for global mutable references, and an associated effect: > import Effect.Ref > :kind Ref Type -> Type A value of type Ref a is a mutable reference cell containing a value of type a, used to track global mutation. As such, it should be used sparingly. The Example/Refs.purs file contains an example which uses a Ref to track mouse clicks on the canvas element. The code starts by creating a new reference containing the value 0, by using the new action: clickCount <- Ref.new 0 Inside the click event handler, the modify action is used to update the click count, and the updated value is returned. count <- Ref.modify (\count -> count + 1) clickCount In the render function, the click count is used to determine the transformation applied to a rectangle: withContext ctx do let scaleX = Number.sin (toNumber count * Number.tau / 8.0) + 1.5 let scaleY = Number.sin (toNumber count * Number.tau / 12.0) + 1.5 translate ctx { translateX: 300.0, translateY: 300.0 } rotate ctx (toNumber count * Number.tau / 36.0) scale ctx { scaleX: scaleX, scaleY: scaleY } translate ctx { translateX: -100.0, translateY: -100.0 } fillPath ctx $ rect ctx { x: 0.0 , y: 0.0 , width: 200.0 , height: 200.0 } This action uses withContext to preserve the original transformation, and then applies the following sequence of transformations (remember that transformations are applied bottom-to-top): - The rectangle is translated through (-100, -100)so that its center lies at the origin. - The rectangle is scaled around the origin. - The rectangle is rotated through some multiple of 10degrees around the origin. - The rectangle is translated through (300, 300)so that it center lies at the center of the canvas. Build the example: $ spago bundle-app --main Example.Refs --to dist/Main.js and open the html/index.html file. If you click the canvas repeatedly, you should see a green rectangle rotating around the center of the canvas. Exercises - (Easy) Write a higher-order function which strokes and fills a path simultaneously. Rewrite the Random.pursexample using your function. - (Medium) Use Randomand Domto create an application which renders a circle with random position, color and radius to the canvas when the mouse is clicked. - (Medium) Write a function which transforms the scene by rotating it around a point with specified coordinates. Hint: use a translation to first translate the scene to the origin. L-Systems In this final example, we will use the canvas package to write a function for rendering L-systems (or Lindenmayer systems). An L-system is defined by an alphabet, an initial sequence of letters from the alphabet, and a set of production rules. Each production rule takes a letter of the alphabet and returns a sequence of replacement letters. This process is iterated some number of times starting with the initial sequence of letters. If each letter of the alphabet is associated with some instruction to perform on the canvas, the L-system can be rendered by following the instructions in order. For example, suppose the alphabet consists of the letters L (turn left), R (turn right) and F (move forward). We might define the following production rules: L -> L R -> R F -> FLFRRFLF If we start with the initial sequence "FRRFRRFRR" and iterate, we obtain the following sequence: FRRFRRFRR FLFRRFLFRRFLFRRFLFRRFLFRRFLFRR FLFRRFLFLFLFRRFLFRRFLFRRFLFLFLFRRFLFRRFLFRRFLF... and so on. Plotting a piecewise-linear path corresponding to this set of instruction approximates a curve called the Koch curve. Increasing the number of iterations increases the resolution of the curve. Let's translate this into the language of types and functions. We can represent our alphabet of letters with the following ADT: data Letter = L | R | F This data type defines one data constructor for each letter in our alphabet. How can we represent the initial sequence of letters? Well, that's just an array of letters from our alphabet, which we will call a Sentence: type Sentence = Array Letter initial :: Sentence initial = [F, R, R, F, R, R, F, R, R] Our production rules can be represented as a function from Letter to Sentence as follows: productions :: Letter -> Sentence productions L = [L] productions R = [R] productions F = [F, L, F, R, R, F, L, F] This is just copied straight from the specification above. Now we can implement a function lsystem which will take a specification in this form, and render it to the canvas. What type should lsystem have? Well, it needs to take values like initial and productions as arguments, as well as a function which can render a letter of the alphabet to the canvas. Here is a first approximation to the type of lsystem: Sentence -> (Letter -> Sentence) -> (Letter -> Effect Unit) -> Int -> Effect Unit The first two argument types correspond to the values initial and productions. The third argument represents a function which takes a letter of the alphabet and interprets it by performing some actions on the canvas. In our example, this would mean turning left in the case of the letter L, turning right in the case of the letter R, and moving forward in the case of a letter F. The final argument is a number representing the number of iterations of the production rules we would like to perform. The first observation is that the lsystem function should work for only one type of Letter, but for any type, so we should generalize our type accordingly. Let's replace Letter and Sentence with a and Array a for some quantified type variable a: forall a. Array a -> (a -> Array a) -> (a -> Effect Unit) -> Int -> Effect Unit The second observation is that, in order to implement instructions like "turn left" and "turn right", we will need to maintain some state, namely the direction in which the path is moving at any time. We need to modify our function to pass the state through the computation. Again, the lsystem function should work for any type of state, so we will represent it using the type variable s. We need to add the type s in three places: forall a s. Array a -> (a -> Array a) -> (s -> a -> Effect s) -> Int -> s -> Effect s Firstly, the type s was added as the type of an additional argument to lsystem. This argument will represent the initial state of the L-system. The type s also appears as an argument to, and as the return type of the interpretation function (the third argument to lsystem). The interpretation function will now receive the current state of the L-system as an argument, and will return a new, updated state as its return value. In the case of our example, we can define use following type to represent the state: type State = { x :: Number , y :: Number , theta :: Number } The properties x and y represent the current position of the path, and the theta property represents the current direction of the path, specified as the angle between the path direction and the horizontal axis, in radians. The initial state of the system might be specified as follows: initialState :: State initialState = { x: 120.0, y: 200.0, theta: 0.0 } Now let's try to implement the lsystem function. We will find that its definition is remarkably simple. It seems reasonable that lsystem should recurse on its fourth argument (of type Int). On each step of the recursion, the current sentence will change, having been updated by using the production rules. With that in mind, let's begin by introducing names for the function arguments, and delegating to a helper function: lsystem :: forall a s . Array a -> (a -> Array a) -> (s -> a -> Effect s) -> Int -> s -> Effect s lsystem init prod interpret n state = go init n where The go function works by recursion on its second argument. There are two cases: when n is zero, and when n is non-zero. In the first case, the recursion is complete, and we simply need to interpret the current sentence according to the interpretation function. We have a sentence of type Array a, a state of type s, and a function of type s -> a -> Effect s. This sounds like a job for the foldM function which we defined earlier, and which is available from the control package: go s 0 = foldM interpret state s What about in the non-zero case? In that case, we can simply apply the production rules to each letter of the current sentence, concatenate the results, and repeat by calling go recursively: go s i = go (concatMap prod s) (i - 1) That's it! Note how the use of higher order functions like foldM and concatMap allowed us to communicate our ideas concisely. However, we're not quite done. The type we have given is actually still too specific. Note that we don't use any canvas operations anywhere in our implementation. Nor do we make use of the structure of the Effect monad at all. In fact, our function works for any monad m! Here is the more general type of lsystem, as specified in the accompanying source code for this chapter: lsystem :: forall a m s . Monad m => Array a -> (a -> Array a) -> (s -> a -> m s) -> Int -> s -> m s We can understand this type as saying that our interpretation function is free to have any side-effects at all, captured by the monad m. It might render to the canvas, or print information to the console, or support failure or multiple return values. The reader is encouraged to try writing L-systems which use these various types of side-effect. This function is a good example of the power of separating data from implementation. The advantage of this approach is that we gain the freedom to interpret our data in multiple different ways. We might even factor lsystem into two smaller functions: the first would build the sentence using repeated application of concatMap, and the second would interpret the sentence using foldM. This is also left as an exercise for the reader. Let's complete our example by implementing its interpretation function. The type of lsystem tells us that its type signature must be s -> a -> m s for some types a and s and a type constructor m. We know that we want a to be Letter and s to be State, and for the monad m we can choose Effect. This gives us the following type: interpret :: State -> Letter -> Effect State To implement this function, we need to handle the three data constructors of the Letter type. To interpret the letters L (move left) and R (move right), we simply have to update the state to change the angle theta appropriately: interpret state L = pure $ state { theta = state.theta - Number.tau / 6.0 } interpret state R = pure $ state { theta = state.theta + Number.tau / 6.0 } To interpret the letter F (move forward), we can calculate the new position of the path, render a line segment, and update the state, as follows: interpret state F = do let x = state.x + Number.cos state.theta * 1.5 y = state.y + Number.sin state.theta * 1.5 moveTo ctx state.x state.y lineTo ctx x y pure { x, y, theta: state.theta } Note that in the source code for this chapter, the interpret function is defined using a let binding inside the main function, so that the name ctx is in scope. It would also be possible to move the context into the State type, but this would be inappropriate because it is not a changing part of the state of the system. To render this L-system, we can simply use the strokePath action: strokePath ctx $ lsystem initial productions interpret 5 initialState Compile the L-system example using $ spago bundle-app --main Example.LSystem --to dist/Main.js and open html/index.html. You should see the Koch curve rendered to the canvas. Exercises (Easy) Modify the L-system example above to use fillPathinstead of strokePath. Hint: you will need to include a call to closePath, and move the call to moveTooutside of the interpretfunction. (Easy) Try changing the various numerical constants in the code, to understand their effect on the rendered system. (Medium) Break the lsystemfunction into two smaller functions. The first should build the final sentence using repeated application of concatMap, and the second should use foldMto interpret the result. (Medium) Add a drop shadow to the filled shape, by using the setShadowOffsetX, setShadowOffsetY, setShadowBlurand setShadowColoractions. Hint: use PSCi to find the types of these functions. (Medium) The angle of the corners is currently a constant ( tau/6). Instead, it can be moved into the Letterdata type, which allows it to be changed by the production rules: type Angle = Number data Letter = L Angle | R Angle | F How can this new information be used in the production rules to create interesting shapes? (Difficult) An L-system is given by an alphabet with four letters: L(turn left through 60 degrees), R(turn right through 60 degrees), F(move forward) and M(also move forward). The initial sentence of the system is the single letter M. The production rules are specified as follows: L -> L R -> R F -> FLMLFRMRFRMRFLMLF M -> MRFRMLFLMLFLMRFRM Render this L-system. Note: you will need to decrease the number of iterations of the production rules, since the size of the final sentence grows exponentially with the number of iterations. Now, notice the symmetry between Land Min the production rules. The two "move forward" instructions can be differentiated using a Booleanvalue using the following alphabet type: data Letter = L | R | F Boolean Implement this L-system again using this representation of the alphabet. (Difficult) Use a different monad min the interpretation function. You might try using Effect.Consoleto write the L-system onto the console, or using Effect.Randomto apply random "mutations" to the state type. Conclusion In this chapter, we learned how to use the HTML5 Canvas API from PureScript by using the canvas library. We also saw a practical demonstration of many of the techniques we have learned already: maps and folds, records and row polymorphism, and the Effect monad for handling side-effects. The examples also demonstrated the power of higher-order functions and separating data from implementation. It would be possible to extend these ideas to completely separate the representation of a scene from its rendering function, using an algebraic data type, for example: data Scene = Rect Rectangle | Arc Arc | PiecewiseLinear (Array Point) | Transformed Transform Scene | Clipped Rectangle Scene | ... This approach is taken in the drawing package, and it brings the flexibility of being able to manipulate the scene as data in various ways before rendering. For examples of games rendered to the canvas, see the "Behavior" and "Signal" recipes in the cookbook. Generative Testing Chapter Goals In this chapter, we will see a particularly elegant application of type classes to the problem of testing. Instead of testing our code by telling the compiler how to test, we simply assert what properties our code should have. Test cases can be generated randomly from this specification, using type classes to hide the boilerplate code of random data generation. This is called generative testing (or property-based testing), a technique made popular by the QuickCheck library in Haskell. The quickcheck package is a port of Haskell's QuickCheck library to PureScript, and for the most part, it preserves the types and syntax of the original library. We will see how to use quickcheck to test a simple library, using Spago to integrate our test suite into our development process. Project Setup This chapter's project adds quickcheck as a dependency. In a Spago project, test sources should be placed in the test directory, and the main module for the test suite should be named Test.Main. The test suite can be run using the spago test command. Writing Properties The Merge module implements a simple function merge, which we will use to demonstrate the features of the quickcheck library. merge :: Array Int -> Array Int -> Array Int merge takes two sorted arrays of integers, and merges their elements so that the result is also sorted. For example: > import Merge > merge [1, 3, 5] [2, 4, 5] [1, 2, 3, 4, 5, 5] In a typical test suite, we might test merge by generating a few small test cases like this by hand, and asserting that the results were equal to the appropriate values. However, everything we need to know about the merge function can be summarized by this property: - If xsand ysare sorted, then merge xs ysis the sorted result of both arrays appended together. quickcheck allows us to test this property directly, by generating random test cases. We simply state the properties that we want our code to have, as functions. In this case, we have a single property: main = do quickCheck \xs ys -> eq (merge (sort xs) (sort ys)) (sort $ xs <> ys) When we run this code, quickcheck will attempt to disprove the properties we claimed, by generating random inputs xs and ys, and passing them to our functions. If our function returns false for any inputs, the property will be incorrect, and the library will raise an error. Fortunately, the library is unable to disprove our properties after generating 100 random test cases: $ spago test Installation complete. Build succeeded. 100/100 test(s) passed. ... Tests succeeded. If we deliberately introduce a bug into the merge function (for example, by changing the less-than check for a greater-than check), then an exception is thrown at runtime after the first failed test case: Error: Test 1 failed: Test returned false As we can see, this error message is not very helpful, but it can be improved with a little work. Improving Error Messages To provide error messages along with our failed test cases, quickcheck provides the <?> operator. Simply separate the property definition from the error message using <?>, as follows: quickCheck \xs ys -> let result = merge (sort xs) (sort ys) expected = sort $ xs <> ys in eq result expected <?> "Result:\n" <> show result <> "\nnot equal to expected:\n" <> show expected This time, if we modify the code to introduce a bug, we see our improved error message after the first failed test case: Error: Test 1 (seed 534161891) failed: Result: [-822215,-196136,-116841,618343,887447,-888285] not equal to expected: [-888285,-822215,-196136,-116841,618343,887447] Notice how the input xs and ys were generated as arrays of randomly-selected integers. Exercises - (Easy) Write a property which asserts that merging an array with the empty array does not modify the original array. Note: This new property is redundant, since this situation is already covered by our existing property. We're just trying to give you readers a simple way to practice using quickCheck. - (Easy) Add an appropriate error message to the remaining property for merge. Testing Polymorphic Code The Merge module defines a generalization of the merge function, called mergePoly, which works not only with arrays of numbers, but also arrays of any type belonging to the Ord type class: mergePoly :: forall a. Ord a => Array a -> Array a -> Array a If we modify our original test to use mergePoly in place of merge, we see the following error message: No type class instance was found for Test.QuickCheck.Arbitrary.Arbitrary t0 The instance head contains unknown type variables. Consider adding a type annotation. This error message indicates that the compiler could not generate random test cases, because it did not know what type of elements we wanted our arrays to have. In these sorts of cases, we can use type annotations to force the compiler to infer a particular type, such as Array Int: quickCheck \xs ys -> eq (mergePoly (sort xs) (sort ys) :: Array Int) (sort $ xs <> ys) We can alternatively use a helper function to specify type, which may result in cleaner code. For example, if we define a function ints as a synonym for the identity function: ints :: Array Int -> Array Int ints = id then we can modify our test so that the compiler infers the type Array Int for our two array arguments: quickCheck \xs ys -> eq (ints $ mergePoly (sort xs) (sort ys)) (sort $ xs <> ys) Here, xs and ys both have type Array Int, since the ints function has been used to disambiguate the unknown type. Exercises - (Easy) Write a function boolswhich forces the types of xsand ysto be Array Boolean, and add additional properties which test mergePolyat that type. - (Medium) Choose a pure function from the core libraries (for example, from the arrayspackage), and write a QuickCheck property for it, including an appropriate error message. Your property should use a helper function to fix any polymorphic type arguments to either Intor Boolean. Generating Arbitrary Data Now we will see how the quickcheck library is able to randomly generate test cases for our properties. Those types whose values can be randomly generated are captured by the Arbitrary type class: class Arbitrary t where arbitrary :: Gen t The Gen type constructor represents the side-effects of deterministic random data generation. It uses a pseudo-random number generator to generate deterministic random function arguments from a seed value. The Test.QuickCheck.Gen module defines several useful combinators for building generators. Gen is also a monad and an applicative functor, so we have the usual collection of combinators at our disposal for creating new instances of the Arbitrary type class. For example, we can use the Arbitrary instance for the Int type, provided in the quickcheck library, to create a distribution on the 256 byte values, using the Functor instance for Gen to map a function from integers to bytes over arbitrary integer values: newtype Byte = Byte Int instance arbitraryByte :: Arbitrary Byte where arbitrary = map intToByte arbitrary where intToByte n | n >= 0 = Byte (n `mod` 256) | otherwise = intToByte (-n) Here, we define a type Byte of integral values between 0 and 255. The Arbitrary instance uses the map function to lift the intToByte function over the arbitrary action. The type of the inner arbitrary action is inferred as Gen Int. We can also use this idea to improve our test for merge: quickCheck \xs ys -> eq (numbers $ mergePoly (sort xs) (sort ys)) (sort $ xs <> ys) In this test, we generated arbitrary arrays xs and ys, but had to sort them, since merge expects sorted input. On the other hand, we could create a newtype representing sorted arrays, and write an Arbitrary instance which generates sorted data: newtype Sorted a = Sorted (Array a) sorted :: forall a. Sorted a -> Array a sorted (Sorted xs) = xs instance arbSorted :: (Arbitrary a, Ord a) => Arbitrary (Sorted a) where arbitrary = map (Sorted <<< sort) arbitrary With this type constructor, we can modify our test as follows: quickCheck \xs ys -> eq (ints $ mergePoly (sorted xs) (sorted ys)) (sort $ sorted xs <> sorted ys) This may look like a small change, but the types of xs and ys have changed to Sorted Int, instead of just Array Int. This communicates our intent in a clearer way - the mergePoly function takes sorted input. Ideally, the type of the mergePoly function itself would be updated to use the Sorted type constructor. As a more interesting example, the Tree module defines a type of sorted binary trees with values at the branches: data Tree a = Leaf | Branch (Tree a) a (Tree a) The Tree module defines the following API: insert :: forall a. Ord a => a -> Tree a -> Tree a member :: forall a. Ord a => a -> Tree a -> Boolean fromArray :: forall a. Ord a => Array a -> Tree a toArray :: forall a. Tree a -> Array a The insert function is used to insert a new element into a sorted tree, and the member function can be used to query a tree for a particular value. For example: > import Tree > member 2 $ insert 1 $ insert 2 Leaf true > member 1 Leaf false The toArray and fromArray functions can be used to convert sorted trees to and from arrays. We can use fromArray to write an Arbitrary instance for trees: instance arbTree :: (Arbitrary a, Ord a) => Arbitrary (Tree a) where arbitrary = map fromArray arbitrary We can now use Tree a as the type of an argument to our test properties, whenever there is an Arbitrary instance available for the type a. For example, we can test that the member test always returns true after inserting a value: quickCheck \t a -> member a $ insert a $ treeOfInt t Here, the argument t is a randomly-generated tree of type Tree Int, where the type argument disambiguated by the identity function treeOfInt. Exercises - (Medium) Create a newtype for Stringwith an associated Arbitraryinstance which generates collections of randomly-selected characters in the range a-z. Hint: use the elementsand arrayOffunctions from the Test.QuickCheck.Genmodule. - (Difficult) Write a property which asserts that a value inserted into a tree is still a member of that tree after arbitrarily many more insertions. Testing Higher-Order Functions The Merge module defines another generalization of the merge function - the mergeWith function takes an additional function as an argument which is used to determine the order in which elements should be merged. That is, mergeWith is a higher-order function. For example, we can pass the length function as the first argument, to merge two arrays which are already in length-increasing order. The result should also be in length-increasing order: > import Data.String > mergeWith length ["", "ab", "abcd"] ["x", "xyz"] ["","x","ab","xyz","abcd"] How might we test such a function? Ideally, we would like to generate values for all three arguments, including the first argument which is a function. There is a second type class which allows us to create randomly-generated functions. It is called Coarbitrary, and it is defined as follows: class Coarbitrary t where coarbitrary :: forall r. t -> Gen r -> Gen r The coarbitrary function takes a function argument of type t, and a random generator for a function result of type r, and uses the function argument to perturb the random generator. That is, it uses the function argument to modify the random output of the random generator for the result. In addition, there is a type class instance which gives us Arbitrary functions if the function domain is Coarbitrary and the function codomain is Arbitrary: instance arbFunction :: (Coarbitrary a, Arbitrary b) => Arbitrary (a -> b) In practice, this means that we can write properties which take functions as arguments. In the case of the mergeWith function, we can generate the first argument randomly, modifying our tests to take account of the new argument. We cannot guarantee that the result will be sorted - we do not even necessarily have an Ord instance - but we can expect that the result be sorted with respect to the function f that we pass in as an argument. In addition, we need the two input arrays to be sorted with respect to f, so we use the sortBy function to sort xs and ys based on comparison after the function f has been applied: quickCheck \xs ys f -> let result = map f $ mergeWith (intToBool f) (sortBy (compare `on` f) xs) (sortBy (compare `on` f) ys) expected = map f $ sortBy (compare `on` f) $ xs <> ys in eq result expected Here, we use a function intToBool to disambiguate the type of the function f: intToBool :: (Int -> Boolean) -> Int -> Boolean intToBool = id In addition to being Arbitrary, functions are also Coarbitrary: instance coarbFunction :: (Arbitrary a, Coarbitrary b) => Coarbitrary (a -> b) This means that we are not limited to just values and functions - we can also randomly generate higher-order functions, or functions whose arguments are higher-order functions, and so on. Writing Coarbitrary Instances Just as we can write Arbitrary instances for our data types by using the Monad and Applicative instances of Gen, we can write our own Coarbitrary instances as well. This allows us to use our own data types as the domain of randomly-generated functions. Let's write a Coarbitrary instance for our Tree type. We will need a Coarbitrary instance for the type of the elements stored in the branches: instance coarbTree :: Coarbitrary a => Coarbitrary (Tree a) where We have to write a function which perturbs a random generator given a value of type Tree a. If the input value is a Leaf, then we will just return the generator unchanged: coarbitrary Leaf = id If the tree is a Branch, then we will perturb the generator using the left subtree, the value, and the right subtree. We use function composition to create our perturbing function: coarbitrary (Branch l a r) = coarbitrary l <<< coarbitrary a <<< coarbitrary r Now we are free to write properties whose arguments include functions taking trees as arguments. For example, the Tree module defines a function anywhere, which tests if a predicate holds on any subtree of its argument: anywhere :: forall a. (Tree a -> Boolean) -> Tree a -> Boolean Now we are able to generate the predicate function randomly. For example, we expect the anywhere function to respect disjunction: quickCheck \f g t -> anywhere (\s -> f s || g s) t == anywhere f (treeOfInt t) || anywhere g t Here, the treeOfInt function is used to fix the type of values contained in the tree to the type Int: treeOfInt :: Tree Int -> Tree Int treeOfInt = id Testing Without Side-Effects For the purposes of testing, we usually include calls to the quickCheck function in the main action of our test suite. However, there is a variant of the quickCheck function, called quickCheckPure which does not use side-effects. Instead, it is a pure function which takes a random seed as an input, and returns an array of test results. We can test quickCheckPure using PSCi. Here, we test that the merge operation is associative: > import Prelude > import Merge > import Test.QuickCheck > import Test.QuickCheck.LCG (mkSeed) > :paste … quickCheckPure (mkSeed 12345) 10 \xs ys zs -> … ((xs `merge` ys) `merge` zs) == … (xs `merge` (ys `merge` zs)) … ^D Success : Success : ... quickCheckPure takes three arguments: the random seed, the number of test cases to generate, and the property to test. If all tests pass, you should see an array of Success data constructors printed to the console. quickCheckPure might be useful in other situations, such as generating random input data for performance benchmarks, or generating sample form data for web applications. Exercises (Easy) Write Coarbitraryinstances for the Byteand Sortedtype constructors. (Medium) Write a (higher-order) property which asserts associativity of the mergeWith ffunction for any function f. Test your property in PSCi using quickCheckPure. (Medium) Write Arbitraryand Coarbitraryinstances for the following data type: data OneTwoThree a = One a | Two a a | Three a a a Hint: Use the oneOffunction defined in Test.QuickCheck.Gento define your Arbitraryinstance. (Medium) Use allto simplify the result of the quickCheckPurefunction - your new function should have type List Result -> Booleanand should return trueif every test passes and falseotherwise. (Medium) As another approach to simplifying the result of quickCheckPure, try writing a function squashResults :: List Result -> Result. Consider using the Firstmonoid from Data.Maybe.Firstwith the foldMapfunction to preserve the first error in case of failure. Conclusion In this chapter, we met the quickcheck package, which can be used to write tests in a declarative way using the paradigm of generative testing. In particular: - We saw how to automate QuickCheck tests using spago test. - We saw how to write properties as functions, and how to use the <?>operator to improve error messages. - We saw how the Arbitraryand Coarbitrarytype classes enable generation of boilerplate testing code, and how they allow us to test higher-order properties. - We saw how to implement custom Arbitraryand Coarbitraryinstances for our own data types. Domain-Specific Languages Chapter Goals In this chapter, we will explore the implementation of domain-specific languages (or DSLs) in PureScript, using a number of standard techniques. A domain-specific language is a language which is well-suited to development in a particular problem domain. Its syntax and functions are chosen to maximize readability of code used to express ideas in that domain. We have already seen a number of examples of domain-specific languages in this book: - The Gamemonad and its associated actions, developed in chapter 11, constitute a domain-specific language for the domain of text adventure game development. - The quickcheckpackage, covered in chapter 13, is a domain-specific language for the domain of generative testing. Its combinators enable a particularly expressive notation for test properties. This chapter will take a more structured approach to some of standard techniques in the implementation of domain-specific languages. It is by no means a complete exposition of the subject, but should provide you with enough knowledge to build some practical DSLs for your own tasks. Our running example will be a domain-specific language for creating HTML documents. Our aim will be to develop a type-safe language for describing correct HTML documents, and we will work by improving a naive implementation in small steps. Project Setup The project accompanying this chapter adds one new dependency - the free library, which defines the free monad, one of the tools which we will be using. We will test this chapter's project in PSCi. A HTML Data Type The most basic version of our HTML library is defined in the Data.DOM.Simple module. The module contains the following type definitions: newtype Element = Element { name :: String , attribs :: Array Attribute , content :: Maybe (Array Content) } data Content = TextContent String | ElementContent Element newtype Attribute = Attribute { key :: String , value :: String } The Element type represents HTML elements. Each element consists of an element name, an array of attribute pairs and some content. The content property uses the Maybe type to indicate that an element might be open (containing other elements and text) or closed. The key function of our library is a function render :: Element -> String which renders HTML elements as HTML strings. We can try out this version of the library by constructing values of the appropriate types explicitly in PSCi: $ spago repl > import Prelude > import Data.DOM.Simple > import Data.Maybe > import Effect.Console > :paste … log $ render $ Element … { name: "p" … , attribs: [ … Attribute … { key: "class" … , value: "main" … } … ] … , content: Just [ … TextContent "Hello World!" … ] … } … ^D <p class="main">Hello World!</p> unit As it stands, there are several problems with this library: - Creating HTML documents is difficult - every new element requires at least one record and one data constructor. - It is possible to represent invalid documents: - The developer might mistype the element name - The developer can associate an attribute with the wrong type of element - The developer can use a closed element when an open element is correct In the remainder of the chapter, we will apply certain techniques to solve these problems and turn our library into a usable domain-specific language for creating HTML documents. Smart Constructors The first technique we will apply is simple but can be very effective. Instead of exposing the representation of the data to the module's users, we can use the module exports list to hide the Element, Content and Attribute data constructors, and only export so-called smart constructors, which construct data which is known to be correct. Here is an example. First, we provide a convenience function for creating HTML elements: element :: String -> Array Attribute -> Maybe (Array Content) -> Element element name attribs content = Element { name: name , attribs: attribs , content: content } Next, we create smart constructors for those HTML elements we want our users to be able to create, by applying the element function: a :: Array Attribute -> Array Content -> Element a attribs content = element "a" attribs (Just content) p :: Array Attribute -> Array Content -> Element p attribs content = element "p" attribs (Just content) img :: Array Attribute -> Element img attribs = element "img" attribs Nothing Finally, we update the module exports list to only export those functions which are known to construct correct data structures: module Data.DOM.Smart ( Element , Attribute(..) , Content(..) , a , p , img , render ) where The module exports list is provided immediately after the module name inside parentheses. Each module export can be one of three types: - A value (or function), indicated by the name of the value, - A type class, indicated by the name of the class, - A type constructor and any associated data constructors, indicated by the name of the type followed by a parenthesized list of exported data constructors. Here, we export the Element type, but we do not export its data constructors. If we did, the user would be able to construct invalid HTML elements. In the case of the Attribute and Content types, we still export all of the data constructors (indicated by the symbol .. in the exports list). We will apply the technique of smart constructors to these types shortly. Notice that we have already made some big improvements to our library: - It is impossible to represent HTML elements with invalid names (of course, we are restricted to the set of element names provided by the library). - Closed elements cannot contain content by construction. We can apply this technique to the Content type very easily. We simply remove the data constructors for the Content type from the exports list, and provide the following smart constructors: text :: String -> Content text = TextContent elem :: Element -> Content elem = ElementContent Let's apply the same technique to the Attribute type. First, we provide a general-purpose smart constructor for attributes. Here is a first attempt: attribute :: String -> String -> Attribute attribute key value = Attribute { key: key , value: value } infix 4 attribute as := This representation suffers from the same problem as the original Element type - it is possible to represent attributes which do not exist or whose names were entered incorrectly. To solve this problem, we can create a newtype which represents attribute names: newtype AttributeKey = AttributeKey String With that, we can modify our operator as follows: attribute :: AttributeKey -> String -> Attribute attribute (AttributeKey key) value = Attribute { key: key , value: value } If we do not export the AttributeKey data constructor, then the user has no way to construct values of type AttributeKey other than by using functions we explicitly export. Here are some examples: href :: AttributeKey href = AttributeKey "href" _class :: AttributeKey _class = AttributeKey "class" src :: AttributeKey src = AttributeKey "src" width :: AttributeKey width = AttributeKey "width" height :: AttributeKey height = AttributeKey "height" Here is the final exports list for our new module. Note that we no longer export any data constructors directly: module Data.DOM.Smart ( Element , Attribute , Content , AttributeKey , a , p , img , href , _class , src , width , height , attribute, (:=) , text , elem , render ) where If we try this new module in PSCi, we can already see massive improvements in the conciseness of the user code: $ spago repl > import Prelude > import Data.DOM.Smart > import Effect.Console > log $ render $ p [ _class := "main" ] [ text "Hello World!" ] <p class="main">Hello World!</p> unit Note, however, that no changes had to be made to the render function, because the underlying data representation never changed. This is one of the benefits of the smart constructors approach - it allows us to separate the internal data representation for a module from the representation which is perceived by users of its external API. Exercises (Easy) Use the Data.DOM.Smartmodule to experiment by creating new HTML documents using render. (Medium) Some HTML attributes such as checkedand disableddo not require values, and may be rendered as empty attributes: <input disabled> Modify the representation of an Attributeto take empty attributes into account. Write a function which can be used in place of attributeor :=to add an empty attribute to an element. Phantom Types To motivate the next technique, consider the following code: > log $ render $ img [ src := "cat.jpg" , width := "foo" , height := "bar" ] <img src="cat.jpg" width="foo" height="bar" /> unit The problem here is that we have provided string values for the width and height attributes, where we should only be allowed to provide numeric values in units of pixels or percentage points. To solve this problem, we can introduce a so-called phantom type argument to our AttributeKey type: newtype AttributeKey a = AttributeKey String The type variable a is called a phantom type because there are no values of type a involved in the right-hand side of the definition. The type a only exists to provide more information at compile-time. Any value of type AttributeKey a is simply a string at runtime, but at compile-time, the type of the value tells us the desired type of the values associated with this key. We can modify the type of our attribute function to take the new form of AttributeKey into account: attribute :: forall a. IsValue a => AttributeKey a -> a -> Attribute attribute (AttributeKey key) value = Attribute { key: key , value: toValue value } Here, the phantom type argument a is used to ensure that the attribute key and attribute value have compatible types. Since the user cannot create values of type AttributeKey a directly (only via the constants we provide in the library), every attribute will be correct by construction. Note that the IsValue constraint ensures that whatever value type we associate to a key, its values can be converted to strings and displayed in the generated HTML. The IsValue type class is defined as follows: class IsValue a where toValue :: a -> String We also provide type class instances for the String and Int types: instance stringIsValue :: IsValue String where toValue = id instance intIsValue :: IsValue Int where toValue = show We also have to update our AttributeKey constants so that their types reflect the new type parameter: href :: AttributeKey String href = AttributeKey "href" _class :: AttributeKey String _class = AttributeKey "class" src :: AttributeKey String src = AttributeKey "src" width :: AttributeKey Int width = AttributeKey "width" height :: AttributeKey Int height = AttributeKey "height" Now we find it is impossible to represent these invalid HTML documents, and we are forced to use numbers to represent the width and height attributes instead: > import Prelude > import Data.DOM.Phantom > import Effect.Console > :paste … log $ render $ img … [ src := "cat.jpg" … , width := 100 … , height := 200 … ] … ^D <img src="cat.jpg" width="100" height="200" /> unit Exercises (Easy) Create a data type which represents either pixel or percentage lengths. Write an instance of IsValuefor your type. Modify the widthand heightattributes to use your new type. (Difficult) By defining type-level representatives for the Boolean values trueand false, we can use a phantom type to encode whether an AttributeKeyrepresents an empty attribute such as disabledor checked. data True data False Modify your solution to the previous exercise to use a phantom type to prevent the user from using the attributeoperator with an empty attribute. The Free Monad In our final set of modifications to our API, we will use a construction called the free monad to turn our Content type into a monad, enabling do notation. This will allow us to structure our HTML documents in a form in which the nesting of elements becomes clearer - instead of this: p [ _class := "main" ] [ elem $ img [ src := "cat.jpg" , width := 100 , height := 200 ] , text "A cat" ] we will be able to write this: p [ _class := "main" ] $ do elem $ img [ src := "cat.jpg" , width := 100 , height := 200 ] text "A cat" However, do notation is not the only benefit of a free monad. The free monad allows us to separate the representation of our monadic actions from their interpretation, and even support multiple interpretations of the same actions. The Free monad is defined in the free library, in the Control.Monad.Free module. We can find out some basic information about it using PSCi, as follows: > import Control.Monad.Free > :kind Free (Type -> Type) -> Type -> Type The kind of Free indicates that it takes a type constructor as an argument, and returns another type constructor. In fact, the Free monad can be used to turn any Functor into a Monad! We begin by defining the representation of our monadic actions. To do this, we need to create a Functor with one data constructor for each monadic action we wish to support. In our case, our two monadic actions will be elem and text. In fact, we can simply modify our Content type as follows: data ContentF a = TextContent String a | ElementContent Element a instance functorContentF :: Functor ContentF where map f (TextContent s x) = TextContent s (f x) map f (ElementContent e x) = ElementContent e (f x) Here, the ContentF type constructor looks just like our old Content data type - however, it now takes a type argument a, and each data constructor has been modified to take a value of type a as an additional argument. The Functor instance simply applies the function f to the value of type a in each data constructor. With that, we can define our new Content monad as a type synonym for the Free monad, which we construct by using our ContentF type constructor as the first type argument: type Content = Free ContentF Instead of a type synonym, we might use a newtype to avoid exposing the internal representation of our library to our users - by hiding the Content data constructor, we restrict our users to only using the monadic actions we provide. Because ContentF is a Functor, we automatically get a Monad instance for Free ContentF. We have to modify our Element data type slightly to take account of the new type argument on Content. We will simply require that the return type of our monadic computations be Unit: newtype Element = Element { name :: String , attribs :: Array Attribute , content :: Maybe (Content Unit) } In addition, we have to modify our elem and text functions, which become our new monadic actions for the Content monad. To do this, we can use the liftF function, provided by the Control.Monad.Free module. Here is its type: liftF :: forall f a. f a -> Free f a liftF allows us to construct an action in our free monad from a value of type f a for some type a. In our case, we can simply use the data constructors of our ContentF type constructor directly: text :: String -> Content Unit text s = liftF $ TextContent s unit elem :: Element -> Content Unit elem e = liftF $ ElementContent e unit Some other routine modifications have to be made, but the interesting changes are in the render function, where we have to interpret our free monad. Interpreting the Monad The Control.Monad.Free module provides a number of functions for interpreting a computation in a free monad: runFree :: forall f a . Functor f => (f (Free f a) -> Free f a) -> Free f a -> a runFreeM :: forall f m a . (Functor f, MonadRec m) => (f (Free f a) -> m (Free f a)) -> Free f a -> m a The runFree function is used to compute a pure result. The runFreeM function allows us to use a monad to interpret the actions of our free monad. Note: Technically, we are restricted to using monads m which satisfy the stronger MonadRec constraint. In practice, this means that we don't need to worry about stack overflow, since m supports safe monadic tail recursion. First, we have to choose a monad in which we can interpret our actions. We will use the Writer String monad to accumulate a HTML string as our result. Our new render method starts by delegating to a helper function, renderElement, and using execWriter to run our computation in the Writer monad: render :: Element -> String render = execWriter <<< renderElement renderElement is defined in a where block: where renderElement :: Element -> Writer String Unit renderElement (Element e) = do The definition of renderElement is straightforward, using the tell action from the Writer monad to accumulate several small strings: tell "<" tell e.name for_ e.attribs $ \x -> do tell " " renderAttribute x renderContent e.content Next, we define the renderAttribute function, which is equally simple: where renderAttribute :: Attribute -> Writer String Unit renderAttribute (Attribute x) = do tell x.key tell "=\"" tell x.value tell "\"" The renderContent function is more interesting. Here, we use the runFreeM function to interpret the computation inside the free monad, delegating to a helper function, renderContentItem: renderContent :: Maybe (Content Unit) -> Writer String Unit renderContent Nothing = tell " />" renderContent (Just content) = do tell ">" runFreeM renderContentItem content tell "</" tell e.name tell ">" The type of renderContentItem can be deduced from the type signature of runFreeM. The functor f is our type constructor ContentF, and the monad m is the monad in which we are interpreting the computation, namely Writer String. This gives the following type signature for renderContentItem: renderContentItem :: ContentF (Content Unit) -> Writer String (Content Unit) We can implement this function by simply pattern matching on the two data constructors of ContentF: renderContentItem (TextContent s rest) = do tell s pure rest renderContentItem (ElementContent e rest) = do renderElement e pure rest In each case, the expression rest has the type Content Unit, and represents the remainder of the interpreted computation. We can complete each case by returning the rest action. That's it! We can test our new monadic API in PSCi, as follows: > import Prelude > import Data.DOM.Free > import Effect.Console > :paste … log $ render $ p [] $ do … elem $ img [ src := "cat.jpg" ] … text "A cat" … ^D <p><img src="cat.jpg" />A cat</p> unit Exercises - (Medium) Add a new data constructor to the ContentFtype to support a new action comment, which renders a comment in the generated HTML. Implement the new action using liftF. Update the interpretation renderContentItemto interpret your new constructor appropriately. Extending the Language A monad in which every action returns something of type Unit is not particularly interesting. In fact, aside from an arguably nicer syntax, our monad adds no extra functionality over a Monoid. Let's illustrate the power of the free monad construction by extending our language with a new monadic action which returns a non-trivial result. Suppose we want to generate HTML documents which contain hyperlinks to different sections of the document using anchors. We can accomplish this already, by generating anchor names by hand and including them at least twice in the document: once at the definition of the anchor itself, and once in each hyperlink. However, this approach has some basic issues: - The developer might fail to generate unique anchor names. - The developer might mistype one or more instances of the anchor name. In the interest of protecting the developer from their own mistakes, we can introduce a new type which represents anchor names, and provide a monadic action for generating new unique names. The first step is to add a new type for names: newtype Name = Name String runName :: Name -> String runName (Name n) = n Again, we define this as a newtype around String, but we must be careful not to export the data constructor in the module's export lists. Next, we define an instance for the IsValue type class for our new type, so that we are able to use names in attribute values: instance nameIsValue :: IsValue Name where toValue (Name n) = n We also define a new data type for hyperlinks which can appear in a elements, as follows: data Href = URLHref String | AnchorHref Name instance hrefIsValue :: IsValue Href where toValue (URLHref url) = url toValue (AnchorHref (Name nm)) = "#" <> nm With this new type, we can modify the value type of the href attribute, forcing our users to use our new Href type. We can also create a new name attribute, which can be used to turn an element into an anchor: href :: AttributeKey Href href = AttributeKey "href" name :: AttributeKey Name name = AttributeKey "name" The remaining problem is that our users currently have no way to generate new names. We can provide this functionality in our Content monad. First, we need to add a new data constructor to our ContentF type constructor: data ContentF a = TextContent String a | ElementContent Element a | NewName (Name -> a) The NewName data constructor corresponds to an action which returns a value of type Name. Notice that instead of requiring a Name as a data constructor argument, we require the user to provide a function of type Name -> a. Remembering that the type a represents the rest of the computation, we can see that this function provides a way to continue computation after a value of type Name has been returned. We also need to update the Functor instance for ContentF, taking into account the new data constructor, as follows: instance functorContentF :: Functor ContentF where map f (TextContent s x) = TextContent s (f x) map f (ElementContent e x) = ElementContent e (f x) map f (NewName k) = NewName (f <<< k) Now we can build our new action by using the liftF function, as before: newName :: Content Name newName = liftF $ NewName id Notice that we provide the id function as our continuation, meaning that we return the result of type Name unchanged. Finally, we need to update our interpretation function, to interpret the new action. We previously used the Writer String monad to interpret our computations, but that monad does not have the ability to generate new names, so we must switch to something else. The WriterT monad transformer can be used with the State monad to combine the effects we need. We can define our interpretation monad as a type synonym to keep our type signatures short: type Interp = WriterT String (State Int) Here, the state of type Int will act as an incrementing counter, used to generate unique names. Because the Writer and WriterT monads use the same type class members to abstract their actions, we do not need to change any actions - we only need to replace every reference to Writer String with Interp. We do, however, need to modify the handler used to run our computation. Instead of just execWriter, we now need to use evalState as well: render :: Element -> String render e = evalState (execWriterT (renderElement e)) 0 We also need to add a new case to renderContentItem, to interpret the new NewName data constructor: renderContentItem (NewName k) = do n <- get let fresh = Name $ "name" <> show n put $ n + 1 pure (k fresh) Here, we are given a continuation k of type Name -> Content a, and we need to construct an interpretation of type Content a. Our interpretation is simple: we use get to read the state, use that state to generate a unique name, then use put to increment the state. Finally, we pass our new name to the continuation to complete the computation. With that, we can try out our new functionality in PSCi, by generating a unique name inside the Content monad, and using it as both the name of an element and the target of a hyperlink: > import Prelude > import Data.DOM.Name > import Effect.Console > :paste … render $ p [ ] $ do … top <- newName … elem $ a [ name := top ] $ … text "Top" … elem $ a [ href := AnchorHref top ] $ … text "Back to top" … ^D <p><a name="name0">Top</a><a href="#name0">Back to top</a></p> unit You can verify that multiple calls to newName do in fact result in unique names. Exercises (Medium) We can simplify the API further by hiding the Elementtype from its users. Make these changes in the following steps: - Combine functions like pand img(with return type Element) with the elemaction to create new actions with return type Content Unit. - Change the renderfunction to accept an argument of type Content Unitinstead of Element. (Medium) Hide the implementation of the Contentmonad by using a newtypeinstead of a type synonym. You should not export the data constructor for your newtype. (Difficult) Modify the ContentFtype to support a new action isMobile :: Content Boolean which returns a boolean value indicating whether or not the document is being rendered for display on a mobile device. Hint: use the askaction and the ReaderTmonad transformer to interpret this action. Alternatively, you might prefer to use the RWSmonad. Conclusion In this chapter, we developed a domain-specific language for creating HTML documents, by incrementally improving a naive implementation using some standard techniques: - We used smart constructors to hide the details of our data representation, only permitting the user to create documents which were correct-by-construction. - We used an user-defined infix binary operator to improve the syntax of the language. - We used phantom types to encode additional information in the types of our data, preventing the user from providing attribute values of the wrong type. - We used the free monad to turn our array representation of a collection of content into a monadic representation supporting do notation. We then extended this representation to support a new monadic action, and interpreted the monadic computations using standard monad transformers. These techniques all leverage PureScript's module and type systems, either to prevent the user from making mistakes or to improve the syntax of the domain-specific language. The implementation of domain-specific languages in functional programming languages is an area of active research, but hopefully this provides a useful introduction some simple techniques, and illustrates the power of working in a language with expressive types.
https://book.purescript.org/print.html
CC-MAIN-2022-40
refinedweb
59,953
51.89
Problem Write a function to compute (x) to the power of (n) where (x) and (n) are positive integers. Your algorithm should run in O(logn) time complexity Solution Use divide and conquer technique. For even values of (n) calculate x^(n/2) and return the square of that as the final result because x^n = x^(n/2) * x^(n/2). For odd values of (n) calculate x^(n-1) and return it multiplied by (x) because x^n = x * x^(n-1). The first base case for recursion is n = 0 in this case you return 1. The second base case for recursion is n = 1 in this case you return (x). This algorithm runs in O(logn) because the problem size (n) is divided by (2) every time we call the recursive function. For odd values of (n) one extra call is executed before the number becomes even again which does not impact the overall performance of the algorithm for large values of (n). Please take a look at the code below for more details. Code Here is the code in C# //Imports using System; //Test class class Test { //Constructor public Test() { //Nothing } public int RecursiveExp(int x, int n) { //First base case if (n == 0) { return 1; } //Second base case if (n == 1) { return x; } //Even values of (n) if (n % 2 == 0) { int y = RecursiveExp(x, n / 2); return y * y; } //Odd values of (n) else { int y = RecursiveExp(x, n - 1); return x * y; } } } //Main class class Program { //Main static void Main(string[] args) { //Create a test object Test tst = new Test(); //Examples Console.Out.WriteLine(tst.RecursiveExp(2, 0)); Console.Out.WriteLine(tst.RecursiveExp(2, 1)); Console.Out.WriteLine(tst.RecursiveExp(2, 3)); Console.Out.WriteLine(tst.RecursiveExp(2, 4)); } }Search Terms... - exponential recursion - recursive exponential function - recursive exponential - program to solve exponential functions with recursion in c - limit of recursion with exponentials
http://www.8bitavenue.com/2010/09/recursive-exponential-function/
CC-MAIN-2016-30
refinedweb
320
52.39
My goal: search webapps folder to find all .war files and make sure a folder exists with the same name(indicating .war expansion) My Code: import os import stat file = open("hosts.txt","r") for servername in file: path = '\\\\'+servername.strip()+'\\d$\\tomcat\\Servers\\' server_name = servername.strip() for r,d,f in os.walk(path, topdown=False): for files in f: print os.path.join(r,files) if files.endswith(".war"): statinfo=os.stat(os.path.join(r,files)) print "FILE LOCATION: " + os.path.join(r,files)+ " FILE SIZE: " +str(statinfo.st_size) print os.path.join(r,files) folder= os.path.join(r,files).replace('.war','') if os.path.isdir(folder): None else: print "folder missing:",folder My problem: on each server i'm searching the directory structure is such: \server\d$\tomcat\servers\servername(random)\webapps my search is taking forever because i can't put a wildcard in at \servername\ to skip down to the webapps folder where the .war files exist. If i do this with 200 vm's then the search will be quite expensive. the folder missing print indicates to me the .war file did not expand because there's no folder matching. Would someone know of a way to search the specific directory or now to skip a directory? Thank you
https://www.daniweb.com/programming/software-development/threads/491135/script-to-search-for-war-files
CC-MAIN-2017-43
refinedweb
217
62.95
> IMHO we should use the namespace support that is built-in to expat. > Anything else is bound to slow us down. Unfortunately expat's namespaces support is broken from the point of view of SAX and DOM. > Whoops! parseFile() no longer exists! We now use the InputSource class > instead. InputSource seemed like overkill to me. More of a Java-ish type safety thing. I'd appreciate your opinion. In my opinion, parse() should accept a string or a stream. If a string, it should be treated as a URL or filename and opened. We will also provide a convenience method parseString() that parses an XML string (probably by wrapping it in a cStringIO. Also Fred was talking about convenience functions we devised of the form: __init__.py: parse( file, handler=None ): import pyexpat parser=CreateParser() parser.setContentHandler( handler ) parser.parse( file ) parse( string, handler=None ): import pyexpat parser=CreateParser() parser.setContentHandler( handler ) parser.parse( file ) These convenience functions are not in the package we sent up yesterday. > | saxutils # pretty much the same as now > Probably not. There's a lot of SAX 1.0 legacy there now. That would > need to be removed. It has been removed. --
https://mail.python.org/pipermail/xml-sig/2000-June/002852.html
CC-MAIN-2016-40
refinedweb
197
71.51
Hamburger menu icons for React, with CSS-driven transitions. Created to be as elegant and performant as possible. This means no JavaScript animations, no transitions on non-cheap properties and a small size. npm install hamburger-react When using one hamburger, ~1.5 KB will be added to your bundle (min + gzip). Visit the website for full documentation, API and examples. A basic implementation looks as follows: import Hamburger from 'hamburger-react' const [isOpen, setOpen] = useState(false) <Hamburger toggled={isOpen} toggle={setOpen} /> Or without providing your own state: <Hamburger onToggle={toggled => ...} /> Yes. Since the creation of these burgers in 2015 a lot of similar ones have appeared, with one or more of the following downsides:). You can use the label property to supply an ARIA label for the icon. The icons are hooks-based, and will work with React 16.8.0 ('the one with hooks') or higher.
https://awesomeopensource.com/project/luukdv/hamburger-react
CC-MAIN-2021-21
refinedweb
148
58.89
Provided by: libpcp3-dev_3.10.8build1_amd64 NAME pmParseUnitsStr - parse time point specification C SYNOPSIS #include <pcp/pmapi.h> int pmParseUnitsStr(const char *string, struct pmUnits *out, double *outMult, char **errMsg); cc ... -lpcp DESCRIPTION pmParseUnitsStr is designed to encapsulate the interpretation of a unit/scale specification in command line switches for use by the PCP client tools.. EXAMPLES ┌──────────────────────────────────┬────────────────┬─────────┐ │ string │ out │ outMult │ ├──────────────────────────────────┼────────────────┼─────────┤ │2 count │ {0,1,0,0,0,0} │ 0.5 │ │count / 7.5 nanosecond │ {0,1,-1,0,0,0} │ 7.5 │ │10 kilobytes / 2.5e2 count x 10^3 │ {1,-1,0,1,3,0} │ 25 │ │millisecond / second^2 │ {0,0,-1,0,0,3} │ 1000 │ │mb/s │ {1,0,-1,2,0,3} │ 1 │ └──────────────────────────────────┴────────────────┴─────────┘ RETURN VALUE A zero status indicates success. A negative status indicates an error, in which case the errMsg pointer will receive a textual error message, which the caller should later free(). SEE ALSO PMAPI(3), pmUnitsStr(3), pmConvScale(3), and pmLookupDesc(3).
http://manpages.ubuntu.com/manpages/xenial/man3/pmParseUnitsStr.3.html
CC-MAIN-2019-30
refinedweb
158
52.36
Device and Network Interfaces - Solaris VISUAL I/O control operations #include <sys/visual_io.h> The Solaris VISUAL environment defines a small set of ioctls for controlling graphics and imaging devices. The VIS_GETIDENTIFIER ioctl is mandatory and must be implemented in device drivers for graphics devices using the Solaris VISUAL environment. The VIS_GETIDENTIFIER ioctl is defined to return a device identifier from the device driver. This identifier must be a uniquely-defined string. There are two additional sets of ioctls. One supports mouse tracking via hardware cursor operations. Use of this set is optional, however, if a graphics device has hardware cursor support and implements these ioctls, the mouse tracking performance is improved. The remaining set supports the device acting as the system console device. Use of this set is optional, but if a graphics device is to be used as the system. This ioctl() returns an identifier string to uniquely identify a device used in the Solaris VISUAL environment. This is a mandatory ioctl and must return a unique string. We suggest that the name be formed as <companysymbol><devicetype>. For example, the cgsix driver returns SUNWcg6. VIS_GETIDENTIFIER takes a vis_identifier structure as its parameter. This structure has the form: #define VIS_MAXNAMELEN 128 struct vis_identifier { char name[VIS_MAXNAMELEN]; }; These ioctls fetch and set various cursor attributes, using the vis_cursor structure. struct vis_cursorpos { short x; /* cursor x coordinate */ short y; /* cursor y coordinate */ }; struct vis_cursorcmap { int version; /* version */ int reserved; unsigned char *red; /* red color map elements */ unsigned char *green;/* green color map elements */ unsigned char *blue; /* blue color map elements */ }; #define VIS_CURSOR_SETCURSOR 0x01 /* set cursor */ #define VIS_CURSOR_SETPOSITION 0x02 /* set cursor position */ #define VIS_CURSOR_SETHOTSPOT 0x04 /* set cursor hot spot */ #define VIS_CURSOR_SETCOLORMAP 0x08 /* set cursor colormap */ #define VIS_CURSOR_SETSHAPE 0x10 /* set cursor shape */ #define VIS_CURSOR_SETALL \ (VIS_CURSOR_SETCURSOR | VIS_CURSOR_SETPOSITION | \ VIS_CURSOR_SETHOTSPOT | VIS_CURSOR_SETCOLORMAP | \ VIS_CURSOR_SETSHAPE) struct vis_cursor { short set; /* what to set */ short enable; /* cursor on/off */ struct vis_cursorpos pos; /* cursor position */ struct vis_cursorpos hot; /* cursor hot spot */ struct vis_cursorcmap cmap; /* color map info */ struct vis_cursorpos size; /* cursor bitmap size */ char *image; /* cursor image bits */ char *mask; /* cursor mask bits */ }; The vis_cursorcmap structure should contain pointers to two elements, specifying the red, green, and blue values for foreground and background. These ioctls fetch and move the current cursor position, using the vis_cursorpos structure. The following ioctl sets are used by graphics drivers that are part of the system console device. All of the ioctls must be implemented to be a console device. In addition, if the system does not have a prom or the prom goes away during boot, the special standalone ioctls (listed below) must also be implemented. The coordinate system for the console device places 0,0 at the upper left corner of the device, with rows increasing toward the bottom of the device and columns increasing from left to right. Set or get color map entries. The argument is a pointer to a vis_cmap structure, which contains the following fields: struct vis_cmap { int index; int count; uchar_t *red; uchar_t *green; uchar_t *blue; } index is the starting index in the color map where you want to start setting or getting color map entries. count is the number of color map entries to set or get. It also is the size of the red, green, and blue color arrays. *red, *green, and *blue are pointers to unsigned character arrays which contain the color map info to set or where the color map info is placed on a get. Initializes the graphics driver as a console device. The argument is a pointer to a vis_devinit structure. The graphics driver is expected to allocate any local state information needed to be a console device and fill in this structure. struct vis_devinit { int version; screen_size_t width; screen_size_t height; screen_size_t linebytes; unit_t size; int depth; short mode; struct vis_polledio *polledio; vis_modechg_cb_t modechg_cb; struct vis_modechg_arg *modechg_arg; }; version is the version of this structure and should be set to VIS_CONS_REV. width and height are the width and height of the device. If mode (see below) is VIS_TEXT then width and height are the number of characters wide and high of the device. If mode is VIS_PIXEL then width and height are the number of pixels wide and high of the device. linebytes is the number of bytes per line of the device. size is the total size of the device in pixels. depth is the pixel depth in device bits. Currently supported depths are: 1, 4, 8 and 24. mode is the mode of the device. Either VIS_PIXEL (data to be displayed is in bitmap format) or VIS_TEXT (data to be displayed is in ascii format). polledio is used to pass the address of the structure containing the standalone mode polled I/O entry points to the device driver back to the terminal emulator. The vis_polledio interfaces are described in the Console Standalone Entry Points section of this manpage. These entry points are where the operating system enters the driver when the system is running in standalone mode. These functions perform identically to the VIS_CONSDISPLAY, VIS_CONSCURSOR and VIS_CONSCOPY ioctls, but are called directly by the Solaris operating environment and must operate under a very strict set of assumptions. modechg_cb is a callback function passed from the terminal emulator to the framebuffer driver which the frame-buffer driver must call whenever a video mode change event occurs that changes the screen height, width or depth. The callback takes two arguments, an opaque handle, modechg_arg, and the address of a vis_devinit struct containing the new video mode information. modechg_arg is an opaque handle passed from the terminal emulator to the driver, which the driver must pass back to the terminal emulator as an argument to the modechg_cb function when the driver notifies the terminal emulator of a video mode change. Tells the graphics driver that it is no longer the system console device. There is no argument to this ioctl. The driver is expected to free any locally kept state information related to the console. Describes the size and placement of the cursor on the screen. The graphics driver is expected to display or hide the cursor at the indicated position. The argument is a pointer to a vis_conscursor structure which contains the following fields: struct vis_conscursor { screen_pos_t row; screen_pos_t col; screen_size_t width; screen_size_t height color_t fg_color; color_t bg_color; short action; }; row and col are the first row and column (upper left corner of the cursor). width and height are the width and height of the cursor. If mode in the VIS_DEVINIT ioctl is set to VIS_PIXEL, then col, row, width and height are in pixels. If mode in the VIS_DEVINIT ioctl was set to VIS_TEXT, then col, row, width and height are in characters. fg_color and bg_color are the foreground and background color map indexes to use when the action (see below) is set to VIS_DISPLAY_CURSOR. action indicates whether to display or hide the cursor. It is set to either VIS_HIDE_CURSOR or VIS_DISPLAY_CURSOR. Display data on the graphics device. The graphics driver is expected to display the data contained in the vis_display structure at the specified position on the console. The vis_display structure contains the following fields: struct vis_display { screen_pos_t row; screen_pos_t col; screen_size_t width; screen_size_t height; uchar_t *data; color_t fg_color; color_t bg_color; }; row and col specify at which starting row and column the date is to be displayed. If mode in the VIS_DEVINIT ioctl was set to VIS_TEXT, row and col are defined to be a character offset from the starting position of the console device. If mode in the VIS_DEVINIT ioctl was set to VIS_PIXEL, row and col are defined to be a pixel offset from the starting position of the console device. width and height specify the size of the data to be displayed. If mode in the VIS_DEVINIT ioctl was set to VIS_TEXT, width and height define the size of data as a rectangle that is width characters wide and height characters high. If mode in the VIS_DEVINIT ioctl was set to VIS_PIXEL, width and height define the size of data as a rectangle that is width pixels wide and height pixels high. *data is a pointer to the data to be displayed on the console device. If mode in the VIS_DEVINIT ioctl was set to VIS_TEXT, data is an array of ASCII characters to be displayed on the console device. The driver must break these characters up appropriately and display it in the retangle defined by row, col, width, and height. If mode in the VIS_DEVINIT ioctl was set to VIS_PIXEL, data is an array of bitmap data to be displayed on the console device. The driver must break this data up appropriately and display it in the retangle defined by row, col, width, and height. The fg_color and bg_color fields define the foreground and background color map indexes to use when displaying the data. fb_color is used for "on" pixels and bg_color is used for "off" pixels. Copy data from one location on the device to another. The driver is expected to copy the specified data. The source data should not be modified. Any modifications to the source data should be as a side effect of the copy destination overlapping the copy source. The argument is a pointer to a vis_copy structure which contains the following fields: struct vis_copy { screen_pos_t s_row; screen_pos_t s_col; screen_pos_t e_row; screen_pos_t e_col; screen_pos_t t_row; screen_pos_t t_col; short direction; }; s_row, s_col, e_row, and e_col define the source rectangle of the copy. s_row and s_col are the upper left corner of the source rectangle. e_row and e_col are the lower right corner of the source rectangle. If mode in the VIS_DEVINIT ioctl() was set to VIS_TEXT, s_row, s_col, e_row, and e_col are defined to be character offsets from the starting position of the console device. If mode in the VIS_DEVINIT ioctl was set to VIS_PIXEL, s_row, s_col, e_row, and e_col are defined to be pixel offsets from the starting position of the console device. t_row and t_col define the upper left corner of the destination rectangle of the copy. The entire rectangle is copied to this location. If mode in the VIS_DEVINIT ioctl was set to VIS_TEXT, t_row, and t_col are defined to be character offsets from the starting position of the console device. If mode in the VIS_DEVINIT ioctl was set to VIS_PIXEL, t_row, and t_col are defined to be pixel offsets from the starting position of the console device. direction specifies which way to do the copy. If direction is VIS_COPY_FORWARD the graphics driver should copy data from position (s_row, s_col) in the source rectangle to position (t_row, t_col) in the destination rectangle. If direction is VIS_COPY_BACKWARDS the graphics driver should copy data from position (e_row, e_col) in the source rectangle to position (t_row+(e_row-s_row), t_col+(e_col-s_col)) in the destination rectangle. Console standalone entry points are necessary only if the driver is implementing console-compatible extensions. All console vectored standalone entry points must be implemented along with all console-related ioctls if the console extension is implemented. struct vis_polledio { struct vis_polledio_arg *arg; void (*display)(vis_polledio_arg *, struct vis_consdisplay *); void (*copy)(vis_polledio_arg *, struct vis_conscopy *); void (*cursor)(vis_polledio_arg *, struct vis_conscursor *); }; The vis_polledio structure is passed from the driver to the Solaris operating environment, conveying the entry point addresses of three functions which perform the same operations of their similarly named ioctl counterparts. The rendering parameters for each entry point are derived from the same structure passed as the respective ioctl. See the Console Optional Ioctls section of this manpage for an explanation of the specific function each of the entry points, display(), copy() and cursor() are required to implement. In addition to performing the prescribed function of their ioctl counterparts, the standalone vectors operate in a special context and must adhere to a strict set of rules. The polled I/O vectors are called directly whenever the system is quisced (running in a limited context) and must send output to the display. Standalone mode describes the state in which the system is running in single-threaded mode and only one processor is active. Solaris operating environment services are stopped, along with all other threads on the system, prior to entering any of the polled I/O interfaces. The polled I/O vectors are called when the system is running in a standalone debugger, when executing the PROM monitor (OBP) or when panicking. The following restrictions must be observed in the polled I/O functions: The driver must not allocate memory. The driver must not wait on mutexes. The driver must not wait for interrupts. The driver must not call any DDI or LDI services. The driver must not call any system services. The system is single-threaded when calling these functions, meaning that all other threads are effectively halted. Single-threading makes mutexes (which cannot be held) easier to deal with, so long as the driver does not disturb any shared state. See Writing Device Drivers for more information about implementing polled I/O entry points. Writing Device Drivers On SPARC systems, compatible drivers supporting the kernel terminal emulator should export the tem-support DDI property.tem-support indicates that the driver supports the kernel terminal emulator. By exporting tem-support it's possible to avoid premature handling of an incompatible driver. This DDI property, set to 1, means driver is compatible with the console kernel framebuffer interface.
http://docs.oracle.com/cd/E18752_01/html/816-5177/visual-io-7i.html
CC-MAIN-2017-30
refinedweb
2,214
51.68
Hi, in these series of articles I will talk about React concepts, try to explain what exactly means that concepts, why you could need it and how use it, in this post we're going to talk about the High Order Components (HOC). In simple words is a pattern to create logic that could be easily reused for other components, and you will see why for the HOCs you need to learn it to understand that you always needed it. What is a High Order Component? If we go to the React documentation, there says something like this: is a function that takes a component and returns a new component.. With that definition you maybe can think. why don’t create a class and just extended it? we can have core logic that can be reused in a parent class and extended for all his children yes, but the advantage of use a HOC is that the object of this pattern is return a Component, a simple transaction, I give you my component and the HOC returns a improved new Component with the logic that I need. So, we can say that a HOC is function, that receive a series of data, properties and a component, include his logic, a context or something else and return a new Component with that logic included, with this pattern you can also be sure that what you need to provide to your component is centralized in one place, and will be consumed always in the same way, like this example: import React, { Component } from ’react’; //Create your child component const ChildComponent = (props) => (<div> Hello Folks<div>); // Create your HOC const higherOrderComponent = (ChildComponent) => { return class extends Component { render() { return (<ChildComponent props={}/>); } } } // Then You send your ChildComponent and receive a new one with some new props provided for the HOC const newEnhancedComponent = higherOrderComponent(ChildComponent); As you can see, the sky's the limit for what you can send or provide in your HOC. Why should I use this? When you’re building your components, you should always try to create the most simple components as you can with with the least possible responsibility, but sometimes you find yourself with a big component, with a lot of things, and worst, with a lot of logic that you see is redundant. When you see that you need apply some patterns that will make your code more scalable and reusable, so the first reason is a big component doing a lot of stuffs. The second and more important reason is when you see that a lot of components (more than one can be a lot sometimes) will use some base logic. These 2 will the perfect reasons for you to try to apply this pattern in your proyect. How should I use? In the HOC you can add, edit or even remove some props that you will use in your Child or Enhance Component. You can include a context, or even make a call, subscribe to a service, resolve a promise and handle the response in the HOC, instead of make a dispatch in each componendDidMount and have a lot of repeated code. I will give you a list of the most common examples on where and how we can use this pattern with problems and real-life scenarios. - You already use one, when you use the “connect” of react-redux. If you use redux to hanlde the state and dispatch actions in your code, you're alredy using a HOC, the connect is a HOC that receive your childComponent and your state mappers, and return you a ConnectedComponent. The connect not only gives you the dispatch but also gives you the props and notifies you if these change. export const mapStateToProps = (state) => ({ information: state.information }); export default connect(mapStateToProps)(ChildComponent); - When you need to include an UI component or behaviour to your Child Component. Let's say that you have a component, and you need to include an alert (This can be a modal, change a color, open a hidden text or whatever). And you need that all your components include this extra UI thing. You can just have a HOC that keep the two things together, but each one will be independent with his own responsibilities, like this: import React, { Component, Fragment } from ’react’; //Create your child components const HelloComponent = (props) => (<div> Hello Folks<div>); const GoodBayComponent = (props) => (<div> And Goodbay<div>); const AlertComponent = (props) => (<div> I’m an alert <div>); // Create your HOC const componentWithAlert = (ChildComponent) => { return class extends Component { render() { return ( <Fragment> <AlertComponent /> <ChildComponent props={}/> </Fragment> ); } } } const HelloWithAlert = componentWithAlert(<HelloComponent />); const GoodbayWithAlert = componentWithAlert(<GoodbayComponent />); As you can see here, we have 2 independent components in one, you also can see that I use Fragment instead of a normal div, but Fragment don’t include you any extra markup or element, and let you group without problems and I prefer that. - When you have a context. Let's say that we have an important information, like the theme with all the brandings, the i18n resources or any kind of information, that you need to provide to all your components. Is very important always try to keep your information in only one source, each component should not be the one of charge of determinate which color or translation based on the language or theme should be used, to handle this situations you need a Context The Context in React is an API that allow you pass data through the component tree without having to pass props down manually at every level, this’s something very good and useful when we need to handle this kind of problems. The Context needs a Provider and a Consumer, The Provider will have the relevant information and you will need all your childs components wrapped inside the Consumer, so, therefore is a perfect example of where do you need a HOC, you need one to include the theme consumer context logic in the component regardless of which component is, so you don't need to call the ThemeContext everytime you use the component. import React, { Component } from ’react’; const ThemeContext = React.createContext({}); class ThemeProvider extends Component { render() { const {theme } = this.props; return ( <ThemeContext.Provider value={theme} > {Children.only(this.props.children)} </ThemeContext.Provider> ); } } const withTheme = (ChildComponent) => { return class extends Component { render() { return ( <ThemeContext.Consumer> { theme => <ChildComponent theme={theme} {props} /> } </ThemeContext.Consumer> ); } } } I will talk more about context in a next post because what is really importanttoday is the HOC. Now you have an example of how a HOC can help you with different problems. And I hope that this blog will help you to better understand this concept and that your code will be better and better. Hope you enjoy. See you in the next post! InTheCodeWeTrust Next: The What, Why and How of React (Routers) Discussion (0)
https://dev.to/mangel0111/the-what-why-and-how-of-react-high-order-components-3ko1
CC-MAIN-2022-05
refinedweb
1,138
52.63
The definition of a function consists of a function head (or the declarator), and a function block . The function head specifies the name of the function, the type of its return value, and the types and names of its parameters, if any. The statements in the function block specify what the function does. The general form of a function definition is as follows: In the function head, name is the function's name, while type consists of at least one type specifier, which defines the type of the function's return value. The return type may be void or any object type, except array types. Furthermore, type may include the function specifier inline, and/or one of the storage class specifiers extern and static. A function cannot return a function or an array. However, you can define a function that returns a pointer to a function or a pointer to an array. The parameter declarations are contained in a comma-separated list of declarations of the function's parameters. If the function has no parameters, this list is either empty or contains merely the word void. The type of a function specifies not only its return type, but also the types of all its parameters. Example 7-1 is a simple function to calculate the volume of a cylinder. // The cylinderVolume( ) function calculates the volume of a cylinder. // Arguments: Radius of the base circle; height of the cylinder. // Return value: Volume of the cylinder. extern double cylinderVolume( double r, double h ) { const double pi = 3.1415926536; // Pi is constant return pi * r * r * h; } This function has the name cylinderVolume, and has two parameters, r and h, both with type double. It returns a value with the type double. The function in Example 7-1 is declared with the storage class specifier extern. This is not strictly necessary, since extern is the default storage class for functions. An ordinary function definition that does not contain a static or inline specifier can be placed in any source file of a program. Such a function is available in all of the program's source files, because its name is an external identifier (or in strict terms, an identifier with external linkage: see "Linkage of Identifiers" in Chapter 11). You merely have to declare the function before its first use in a given translation unit (see the section "Function Declarations," later in this chapter). Furthermore, you can arrange functions in any order you wish within a source file. The only restriction is that you cannot define one function within another. C does not allow you to define "local functions" in this way. You Example 7-2. if ( ); /* ... */ } In the early Kernighan-Ritchie standard, the names of function parameters were separated from their type declarations. Function declarators contained only the names of the parameters, which were then declared by type between the function declarator and the function block. For example, the cylinderVolume( ) function from Example 7-1 would have been written as follows: double cylinderVolume( r, h ) double r, h; // Parameter declarations. { const double pi = 3.1415926536; // Pi is constant. return pi * r * r * h; } This notation, called a "K&R-style " or "old-style" function definition , is deprecated, although compilers still support it. In new C source code, use only the prototype notation for function definitions, as shown in Example 7-1. The parameters of a function are ordinary local variables. The program creates them, and initializes them with the values of the corresponding arguments, when a function call occurs. Their scope is the function block. A function can change the value of a parameter without affecting the value of the argument in the context of the function call. In Example 7-3, the factorial( ) function, which computes the factorial of a whole number, modifies its parameter n in the process. // factorial( ) calculates n!, the factorial of a non-negative number n. // For n > 0, n! is the product of all integers from 1 to n inclusive. // 0! equals 1. // Argument: A whole number, with type unsigned int. // Return value: The factorial of the argument, with type long double. long double factorial( register unsigned int n ) { long double f = 1; while ( n > 1 ) f *= n--; return f; } Although the factorial of an integer is always an integer, the function uses the type long double in order to accommodate very large results. As Example 7-3 illustrates, you can use the storage class specifier register in declaring function parameters. The register specifier is a request to the compiler to make a variable as quickly accessible as possible. No other storage class specifiers are permitted on function parameters. If you need to pass an array as an argument to a function, you would generally declare the corresponding parameter in the following form: type name[ ] Because array names are automatically converted to pointers when you use them as function arguments, this statement is equivalent to the declaration: type *name When you use the array notation in declaring function parameters, any constant expression between the brackets ([ ]) is ignored. In the function block, the parameter name is a pointer variable, and can be modified. Thus the function addArray( ) in Example 7-4 modifies its first two parameters as it adds pairs of elements in two arrays. // addArray( ) adds each element of the second array to the // corresponding element of the first (i.e., "array1 += array2", so to speak). // Arguments: Two arrays of float and their common length. // Return value: None. void addArray( register float a1[ ], register const float a2[ ], int len ) { register float *end = a1 + len; for ( ; a1 < end; ++a1, ++a2 ) *a1 += *a2; } An equivalent definition of the addArray( ) function, using a different notation for the array parameters, would be: void addArray( register float *a1, register const float *a2, int len ) { /* Function body as earlier. */ } An advantage of declaring the parameters with brackets ([ ]) is that human readers immediately recognize that the function treats the arguments as pointers to an array, and not just to an individual float variable. But the array-style notation also has two peculiarities in parameter declarations : In a parameter declarationand only thereC99 allows you to place any of the type qualifiers const, volatile, and restrict inside the square brackets. This ability allows you to declare the parameter as a qualified pointer type. Furthermore, in C99 you can also place the storage class specifier static, together with a integer constant expression, inside the square brackets. This approach indicates that the number of elements in the array at the time of the function call must be at least equal to the value of the constant expression. Here is an example that combines both of these possibilities: int func( long array[const static 5] ) { /* ... */ } In the function defined here, the parameter array is a constant pointer to long, and so cannot be modified. It points to the first of at least five array elements. C99 also lets you declare array parameters as variable-length arrays (see Chapter 8). To do so, place a nonconstant integer expression with a positive value between the square brackets. In this case, the array parameter is still a pointer to the first array element. The difference is that the array elements themselves can also have a variable length. In Example 7-5, the maximum( ) function's third parameter is a two-dimensional array of variable dimensions. // The function maximum( ) obtains the greatest value in a // two-dimensional matrix of double values. // Arguments: The number of rows, the number of columns, and the matrix. // Return value: The value of the greatest element. double maximum( int nrows, int ncols, double matrix[nrows][ncols] ) { double max = matrix[0][0]; for ( int r = 0; r < nrows; ++r ) for ( int c = 0; c < ncols; ++c ) if ( max < matrix[r][c] ) max = matrix[r][c]; return max; } The parameter matrix is a pointer to an array with ncols elements. C makes a distinction between two possible execution environments: Freestanding A program in a freestanding environment runs without the support of an operating system, and therefore only has minimal capabilities of the standard library available to it (see Part II). Hosted In a hosted environment, a C program runs under the control, and with the support, of an operating system. The full capabilities of the standard library are available. In a freestanding environment , the name and type of the first function invoked when the program starts is determined by the given implementation. Unless you program embedded systems, your C programs generally run in a hosted environment . A program compiled for a hosted environment must define a function with the name main, which is the first function invoked on program start. You can define the main( ) function in one of the following two forms: int main( void ) { /* ... */ } A function with no parameters, returning int int main( int argc, char *argv[ ] ) { /* ... */ } A function with two parameters whose types are int and char **, returning int These two approaches conform to the 1989 and 1999 C standards. In addition, many C implementations support a third, nonstandard syntax as well: int main( int argc, char *argv[ ], char *envp[ ] ) { /* ... */ } A function returning int, with three parameters, the first of which has the type int, while the other two have the type char ** In all cases, the main( ) function returns its final status to the operating system as an integer. A return value of 0 or EXIT_SUCCESS indicates that the program was successful; any nonzero return value, and in particular the value of EXIT_FAILURE, indicates that the program failed in some way. The constants EXIT_SUCCESS and EXIT_FAILURE are defined in the header file stdlib.h. The function block of main( ) need not contain a return statement. If the program flow reaches the closing brace } of main( )'s function block, the status value returned to the execution environment is 0. Ending the main( ) function is equivalent to calling the standard library function exit( ), whose argument becomes the return value of main( ). The parameters argc and argv (which you may give other names if you wish) represent your program's command-line arguments. This is how they work: argc (short for "argument count") is either 0 or the number of string tokens in the command line that started the program. The name of the program itself is included in this count. argv (short for "arguments vector") is an array of pointers to char that point to the individual string tokens received on the command line: The number of elements in this array is one more than the value of argc; the last element, argv[argc], is always a null pointer. If argc is greater than 0, then the first string, argv[0], contains the name by which the program was invoked. If the execution environment does not supply the program name, the string is empty. If argc is greater than 1, then the strings argv[1] through argv[argc - 1] contain the program's command line arguments. envp (short for "environment pointer") in the nonstandard, three-parameter version of main( ) is an array of pointers to the strings that make up the program's environment. Typically, these strings have the form name=value. In standard C, you can access the environment variables using the getenv( ) function. The sample program in Example 7-6, args.c, prints its own name and command-line arguments as received from the operating system. #include <stdio.h> int main( int argc, char *argv[ ] ) { if ( argc == 0 ) puts( "No command line available." ); else { // Print the name of the program. printf( "The program now running: %s\n", argv[0] ); if ( argc == 1 ) puts( "No arguments received on the command line." ); else { puts( "The command line arguments:" ); for ( int i = 1; i < argc; ++i ) // Print each argument on // a separate line. puts( argv[i] ); } } } Suppose we run the program on a Unix system by entering the following command line: $ ./args one two "and three" The output is then as follows: The program now running: ./args The command line arguments: one two and three
http://books.gigatux.nl/mirror/cinanutshell/0596006977/cinanut-CHP-7-SECT-1.html
CC-MAIN-2018-43
refinedweb
1,997
53.41
Question How to unify IMAP folders between different mail clients and automatically create them? For example, how to alias “Sent” folder used by Outlook with “Sent Messages” used by Apple Mail.app, so both mail clients will have the same content? Answer One of two workarounds might be implemented to enable this behavior: Workaround I: Change settings on mail client-side: set deleted messages to be stored only in Trash and sent messages to be stored only in Sent. For more information, refer to the documentation of the mail client used. Workaround II: Warning: This workaround is not officially supported. All changes, committed to the configuration might be overwritten by future Plesk updates, upgrades, or repairs. Modify IMAP server configuration to enable SPECIAL-USE RFC 6154 tags: Connect to the server via SSH Create and open /etc/dovecot/conf.d/30-plesk-specialuse.conffile using a text editor: # vi /etc/dovecot/conf.d/30-plesk-specialuse.conf Add the namespace inbox block to include mailboxes (folders) with special_use tags: namespace inbox { separator = . prefix = INBOX. inbox = yes mailbox Sent { auto = subscribe special_use = Sent } mailbox "Sent Messages" { auto = no special_use = Sent } mailbox Spam { auto = create special_use = Junk } } Here, folder Sent Messages will be aliased to the folder Sent, and folders Sent and Spam will be created automatically. Note: This feature is available in Dovecot 2.1 and newer and is not provided by Courier-IMAP. For more detailed information refer to the official Dovecot Wiki. Reload Dovecot IMAP server to apply new configuration: # service dovecot reload Note: Newly created folders will not replace the specific existing ones made via the mail client by default like Sent folder for example.
https://www.plesk.com/kb/support/how-to-unify-imap-folders-between-different-mail-clients-and-automatically-create-them/
CC-MAIN-2021-49
refinedweb
277
54.02
NAME SYNOPSIS DESCRIPTION RETURN VALUE SEE ALSO pmem2_config_set_length() - set length in the pmem2_config structure #include <libpmem2.h> struct pmem2_config; int pmem2_config_set_length(struct pmem2_config *config, size_t length); The pmem2_config_set_length() function configures the length which will be used for mapping. *config should be already initialized, please see pmem2_config_new(3) for details. The \length must be a multiple of the alignment required for the data source which will be used for mapping alongside the config. To retrieve the alignment required for specific instance of pmem2_source* use pmem2_source_alignment(3). By default, the length is equal to the size of the file that is being mapped. The pmem2_config_set_length() function always returns 0. libpmem2(7), pmem2_map_new(3), pmem2_source_alignment(3), pmem2_config_new(3), sysconf(3) and The contents of this web site and the associated GitHub repositories are BSD-licensed open source.
https://pmem.io/pmdk/manpages/linux/v1.10/libpmem2/pmem2_config_set_length.3/
CC-MAIN-2022-05
refinedweb
134
55.74
Design IIR Filters Using Cascaded Biquads This article shows how to implement a Butterworth IIR lowpass filter as a cascade of second-order IIR filters, or biquads. We’ll derive how to calculate the coefficients of the biquads and do some examples using a Matlab function biquad_synth provided in the Appendix. Although we’ll be designing Butterworth filters, the approach applies to any all-pole lowpass filter (Chebyshev, Bessel, etc). As we’ll see, the cascaded-biquad design is less sensitive to coefficient quantization than a single high-order IIR, particularly for lower cut-off frequencies [1, 2] In an earlier post on IIR Butterworth lowpass filters [3], I presented the pole-zero form of the lowpass response H(z) as follows: $$H(z)=K\frac{(z+1)^N}{(z-p_1)(z-p_2)...(z-p_N)}\qquad(1)$$ The N zeros at z = 1 (ω= π or f = fs/2) occur when we transform the lowpass analog zeros from the s-domain to z-domain using the bilinear transform. Our goal is to convert H(z) into a cascade of second-order sections. If we stipulate that N is even, then we can write H(z) as: $H(z)=K_1\frac{(z+1)^2}{(z-p_1)(z-p_2)}\cdot K_2\frac{(z+1)^2}{(z-p_3)(z-p_4)}\cdot...\cdot K_{N/2}\frac{(z+1)^2}{(z-p_{N-1})(z-p_N)} \qquad(2)$ Each term in equation 2 is biquadratic – it has quadratic numerator and denominator. It is not necessary to use a separate gain K for each term; we could also use just a single gain for the whole cascade. The filter is even order, so all poles occur in complex-conjugate pairs. We’ll assign a complex-conjugate pole pair to the denominator of each term of equation 2. We can then write each term as: $$H_k(z)=K_k\frac{(z+1)^2}{(z-p_k)(z-p^*_k)},\quad k=1:N/2$$ where $p^*_k$ is the complex conjugate of $p_k$. Expanding the numerator and denominator, we get: $$H_k(z)=K_k\frac{z^2+2z+1}{z^2+a_1z+a_2}$$ where a1= -2*real(pk) and a2= |pk|2. Dividing numerator and denominator by z2, we get: $$H_k(z)=K_k\frac{1+2z^{-1}+z^{-2}}{1+a_1z^{-1}+a_2z^{-2}}\qquad(3)$$ We want the gain of each biquad section to equal 1 at ω=0. Letting $z= e^{jω}$, we have z= 1. Then: $$H_k(z)=1=K_k\frac{\sum{b}}{\sum{a}}$$ so $$K_k=\frac{\sum{a}}{4}\qquad(4)$$ where a = [1 a1 a2] are the denominator coefficients of the biquad section. Summarizing the coefficient values, we have: b = [1 2 1] a= [1 -2*real(pk) |pk|2] $K= \sum{a}/4$ A biquad lowpass block diagram using the Direct form II structure [4,5] is shown in Figure 1. We will cascade N/2 biquads to implement an N th order filter (N even). Note that the feed-forward coefficients b have the same value for all N/2 biquads in a filter. This is evident from Equation 3. Figure 1 Biquad (second-order) lowpass all-pole filter Direct form II Example In this example, we’ll use biquad_synth to design a 6th order Butterworth lowpass filter with -3 dB frequency of 15 Hz and fs= 100 Hz. Note biquad_synth uses the bilinear transform with prewarping [3] to transform H(s) to H(z). The filter will consist of three biquads, as shown in Figure 2. biquad_synth computes the denominator (feedback) coefficients a of each biquad. The gains K are computed separately. Note biquad_synth contains code developed in an earlier post on IIR Butterworth filter synthesis [3]. Here is the function call and the function output: N= 6; % filter order fc= 15; % Hz -3 dB frequency fs= 100; % Hz sample frequency a= biquad_synth(N,fc,fs) a = 1.0000 -0.6599 0.1227 1.0000 -0.7478 0.2722 1.0000 -0.9720 0.6537 Each row of the matrix a contains the denominator coefficients of a biquad. As we already determined, the numerator coefficients b are the same for all three biquads: b= [1 2 1];The gains for each biquad are, from equation 4: K1= sum(a(1,:)/4; K2= sum(a(2,:)/4; K3= sum(a(3,:)/4; Now we can compute the frequency response of each biquad. The overall response is their product. [h1,f] = freqz(K1*b,a(1,:),512,fs); [h2,f] = freqz(K2*b,a(2,:),512,fs); [h3,f] = freqz(K3*b,a(3,:),512,fs); h= abs(h1.*h2.*h3); H= 20*log10(abs(h)); The magnitude response of each biquad and the overall response are plotted in Figure 3. The sequence of the biquads doesn’t matter in theory; however, placing the biquad with the peaking response (h3) last minimizes the chance of clipping. Figure 2. 6th order lowpass filter using three biquads Figure 3. 6th order lowpass Butterworth cascaded-biquad response. fc= 15 Hz, fs= 100 Hz. Top: response of each biquad section (blue= h1, green= h2, red= h3). Bottom: overall response Coefficient Quantization As I stated at the beginning, the cascaded-biquad design is less sensitive to coefficient quantization than a single high-order IIR, particularly for lower cut-off frequencies. To illustrate this, we’ll first look at how quantizing coefficients effects z-plane pole locations of a 6th order IIR filter. The following code finds the unquantized poles of the 6th order Butterworth filter with -3 dB frequency fc = 5 Hz. (Note butter [6] is a function in the Matlab signal processing toolbox that synthesizes IIR Butterworth filters). fc= 5; fs= 100; [b,a]= butter(6,2*fc/fs); % Matlab function for Butterworth LP IIR p= roots(a); % poles in z-plane The poles are plotted as the red x’s on the left side of Figure 4. We have also plotted the poles for fc= 12 Hz (blue-ish x’s). Each set contains 6 poles. If we plot the poles of filters having fc from 1 Hz to 25 Hz in 1 Hz increments, we get the plot on the right, where only the right side of the unit circle is shown. The lower values of fc are on the right, near z = 1. Figure 4. Unquantized poles of 6th-order Butterworth IIR filter. Left: fc = 5 Hz (red) and 12 Hz (blue). Right: fc = 1 Hz to 25 Hz. Now let’s quantize the denominator coefficients and see how this effects the pole locations of Figure 4. Let nbits = the number of bits per unit of coefficient amplitude: nbits= 16; Here is the code to find the quantized poles for a single value of fc: fs= 100; [b,a]= butter(6,2*fc/fs); a_quant= round(a*2^nbits)/2^nbits; p_quant = roots(a_quant); Letting fc vary in 0.5 Hz increments from 0.5 to 25 Hz, we get the poles shown on the left of Figure 5. As you can see, as fc decreases, quantization causes the poles to depart from the desired locations. The right side of Figure 5 shows the effect of 10-bit quantization. Figure 5. Effect of Quantization on poles of 6th-order Butterworth IIR filter. Left: nbits = 16 Right: nbits = 10 We can do the same calculation for the biquads that make up the 6th order cascaded implementation. For example, here is the code to find the quantized poles of the second biquad for a single value of fc (recall that the matrix a has three rows containing the coefficients of 3 biquads). nbits= 10; a= biquad_synth(6,fc,fs); a2= a(2,:); % 2nd biquad a_quant= round(a2*2^nbits)/2^nbits; p_quant = roots(a_quant); This time, letting fc vary in 0.25 Hz increments from 0.25 to 25 Hz, we get the poles shown in Figure 6, which includes only quadrant 1 of the unit circle. The biquad performs much better than the 6 th order filter, only departing dramatically from the unquantized curve for fc = 0.25 Hz. So we expect better performance from cascading three biquads vs. using a single 6 th-order filter. Figure 6. Effect of Quantization on poles of one biquad, nbits = 10. Now we’re finally ready to compare frequency response of a biquad-cascade filter vs. a conventional IIR filter when the denominator coefficients are quantized. The cutoff frequency and quantization level are chosen to stress the conventional filter. We’ll leave the numerator coefficients of the conventional filter as floating-point. Interestingly, when implementing the biquad filter we get exact numerator coefficient values “for free”: since b = [1 2 1], we can implement b0 and b2 as no-ops and b1 as a bit shift. For the biquad filter, we use biquad_synth to find the coefficients for fc = 6.7 Hz: fc=6.7; % Hz -3 dB frequency fs= 100; % Hz sample frequency a = biquad_synth(6,fc,fs) % a has 3 rows, one for each biquad a = 1.0000 -1.3088 0.4340 1.0000 -1.4162 0.5516 1.0000 -1.6508 0.8087 For the conventional filter, we again use the Matlab function butter: [b,a]= butter(6,2*fc/fs); a = 1.0000 -4.3757 8.1461 -8.2269 4.7417 -1.4761 0.1936 In each case, we quantize coefficients to 10 bits per unit of coefficient amplitude: nbits= 10; a_quant= round(a*2^nbits)/2^nbits; % quantize denom coeffs First, we’ll look at the quantized pole locations. For the conventional filter, the quantized poles are: p_quant= roots(a_quant); For the biquad implementation, the quantized poles are: p1= roots(a_quant(1,:))’; p2= roots(a_quant(2,:)’; p3= roots(a_quant(3,:)’; p_quant= [p1 p2 p3]; Figure 7 shows the z-plane poles for the floating-point and quantized coefficients. Quantization has little effect on the biquad version, but has a large effect on the conventional filter. Now let’s compare the magnitude responses for quantized coefficients. We compute the response of the biquad version in the same way used to obtain Figure 3. Figure 8 shows the magnitude responses. As you would expect from the pole plots, the conventional implementation has poor performance, while the biquad implementation shows no noticeable effect due to quantization. So how low can we go with this N= 6 Butterworth cascaded-biquad filter? As we reduce fc, the quantization of 1024 steps per unit of coefficient amplitude eventually takes a toll. Figure 9 shows the z-plane poles and magnitude response for fc= 1.6 Hz. As you can see, the magnitude response is sagging. If we stay above fc of 2.5 Hz = fs/40, the response error is less than 0.1 dB. Besides adding coefficient bits, there are other ways to improve performance of narrow-band IIR filters. See for example the post in reference [7]. Figure 7. Z-plane pole locations with quantized denom. coeffs. N= 6, fc= 6.7 Hz, fs= 100 Hz. Blue = floating point, Red = 10 bit quantization. Left: biquad implementation Right: conventional implementation. Figure 8. Magnitude response with quantized denom coeffs. N= 6, fc= 6.7 Hz, fs= 100 Hz, nbits = 10. Blue = biquad implementation, green= conventional implementation. Figure 9. Z-plane poles and magnitude response of biquad-cascade filter with quantized denominator coefficients. Blue x’s = floating-point and red x’s = quantized. N= 6, fc= 1.6 Hz, fs= 100 Hz, nbits= 10. References 1. Oppenheim, Alan V. and Shafer, Ronald W., Discrete-Time Signal Processing, Prentice Hall, 1989, sections 6.3.2 and 6.8.1. 2. Lyons, Richard G., Understanding Digital Signal Processing, 2nd Ed., Pearson, 2004, section 6.8.2 3. Robertson, Neil , “Design IIR Butterworth Filters Using 12 Lines of Code”, Dec 2017 4. Sanjit K. Mitra, Digital Signal Processing, 2nd Ed., McGraw-Hill, 2001, section 6.4.1 5. “Digital Biquad Filter”, 6. Mathworks website, ”butter”, 7. Lyons, Rick, “Improved Narrowband Lowpass IIR Filters”, Neil Robertson February 11, 2018 revised 2/20/18, 4/18/19 Appendix Matlab Function biquad_synth This program is provided as-is without any guarantees or warranty. The author is not responsible for any damage or losses of any kind caused by the use or misuse of the program. % biquad_synth.m 2/10/18 Neil Robertson % Synthesize even-order IIR Butterworth lowpass filter as cascaded biquads. % This function computes the denominator coefficients a of the biquads. % N= filter order (must be even) % fc= -3 dB frequency in Hz % fs= sample frequency in Hz % a = matrix of denominator coefficients of biquads. Size = (N/2,3) % each row of a contains the denominator coeffs of a biquad. % There are N/2 rows. % Note numerator coeffs of each biquad= K*[1 2 1], where K = (1 + a1 + a2)/4. % function a = biquad_synth(N,fc,fs); if fc>=fs/2; error('fc must be less than fs/2') end if mod(N,2)~=0 error('N must be even') end %I. Find analog filter poles above the real axis (half of total poles) k= 1:N/2; theta= (2*k -1)*pi/(2*N); pa= -sin(theta) + j*cos(theta); % poles of filter with cutoff = 1 rad/s pa= fliplr(pa); %reverse sequence of poles – put high Q last % II. scale poles in frequency Fc= fs/pi * tan(pi*fc/fs); % continuous pre-warped frequency pa= pa*2*pi*Fc; % scale poles by 2*pi*Fc % III. Find coeffs of biquads % poles in the z plane p= (1 + pa/(2*fs))./(1 - pa/(2*fs)); % poles by bilinear transform % denominator coeffs for k= 1:N/2; a1= -2*real(p(k)); a2= abs(p(k))^2; a(k,:)= [1 a1 a2]; %coeffs of biquad k end .. - Write a CommentSelect to add a comment Wow. This a great article, Sir. Thank you. You are welcome! Thanks Neil! This was very helpful! Here's a Python port of your biquad_synth function: import numpy as np def biquad_synth(N, fc, fs): fc = np.float64(fc) fs = np.float64(fs) if fc >= fs/2: raise Exception('fc must be less than fs/2') if N % 2 != 0: raise Exception('N must be even') # I. Find analog filter poles above the real axis (half of total poles) k = np.arange(N/2) + 1.0 theta = (2 * k - 1) * np.pi / (2 * N) pa = -np.sin(theta) + 1j * np.cos(theta) # poles of filter with cutoff = 1 rad/s pa = np.flipud(pa) # reverse sequence of poles - put high Q last # II. scale poles in frequency Fc = fs/np.pi * np.tan(np.pi * fc / fs) pa = pa * 2 * np.pi * Fc # scale poles by 2*pi*Fc # III. Find coeffs of digital filter # poles and zeros in the z plane p = (1 + pa / (2 * fs)) / (1 - pa / (2 * fs)) # poles by bilinear transform # denominator coeffs return np.stack((np.ones(N/2), -2 * np.real(p), np.abs(p)**2)).transpose() Hi 5plic3r, Thanks for the code. Neil Could a similar technique be used for a high-pass Butterworth? Yes, I think the technique is general. Neil Based on your Design IIR Highpass Filters article, I tried to get a high-pass version by changing this pa = pa * 2 * np.pi * Fc to this pa = 2 * np.pi * Fc / pa But I'm see the same results. Am I heading in the right direction? Thanks! The LP to HP transformation of the poles is: pa= 2*pi*Fc./p_lp; Then the change to the numerator coeffs and scaling is: b= [1 -2 1]; % hp biquad numerator coeffs for i= 1:N/2; K(i)= sum(a(i,1)- a(i,2)+ a(i,3))/4; end Ah, that was it! Thanks :D Dear Mr. Robertson, thanks a lot for all the great articles - they are really helpful and I appreciate them! I was just wondering about a comment in your code: "put high Q last". Why is this? Is there some general advice on how to order biquad stages (regarding any filter, i.e. lp, hp or bp) - I guessed the gain would be the most interesting figure to look at - but maybe I am wrong and it is indeed Q. Any explanations on this topic would be greatly appreciated. Sincerely, Daniel Hi Daniel, Here is a quote from the post's text on that topic: "The magnitude response of each biquad and the overall response are plotted in Figure 3. The sequence of the biquads doesn’t matter in theory; however, placing the biquad with the peaking response (h3) last minimizes the chance of clipping." If you look at figure 3, the h3 section has max gain greater than 1.0 near 14 Hz. So if you had a signal near that frequency with level near 1.0, you would have clipping or rollover in that section (assuming a max signal range of +/-1). The h1 and h2 sections have loss at 14 Hz. So if you place them before the h3 section, it will make clipping less likely. If you look at the poles of the overall filter, the h3 section's poles are closest to the unit circle ("highest Q"). So it makes sense to order the sections of a LPF according to their distance from the unit circle: farthest first to closest last. For a few words on ordering/scaling of the sections of a cascaded-biquad BPF, see my post regards, Neil Hello Neil, thanks for your reply. Me (and me colleague) still are interested to understand the whole story. I was stumbling about your quotes ^^: ("highest Q") ... and I guess your advice is a pragmatic approach towards using the center of the z-plane / the absolute value of the poles as a measure for sorting the poles when using cascaded biquads. I found a nice plot on Stackexchange that illustrates equipotential levels of Q in the z-plane: They tend to be similar, but not exactly the same thing as the distance to the center / the unit sphere bound. I guess the difference is most crucial when we are dealing with very narrow banded filters at lower frequencies (relative to the sampling rate) - which is, unfortunately, exactly what we are heading for at the moment. So, the "by-the-bool" solution would be to use Q indeed and not |z|, or is there still something I am missing? Sincerely, Daniel Daniel, Maybe there is no need to bring the concept of Q into the discussion. Maybe it's simpler to just say that the biquad sections are ordered starting with the pole-pairs closest to the unit circle. For a Butterworth LPF, the poles fall on a nice circle in the z-plane, so there should not normally be any ambiguity about ordering. Of course, it is not required to use any particular ordering if you allow extra headroom in your adders to account for the biquad gain exceeding 1 over a portion of the frequency range. Finally, I don't claim to be an expert on IIR filter design -- I'm just trying to illustrate some basics! regards, Neil Very good article. Clarify my DSP concept. Thanks for the encouragement! Neil I have a question about design Bessel filter. To find the pole of Bessel filter, the frequency normalization process required the division of Thomson’s values by a factor(Bessel Normalizing Factors), as follows: But I using these factor to calculate the pole of the Bessel filter, the pole are not same as MATLAB besselap.m, as follows: This problem has troubled me for a long time. Herbert Herbert, Sorry, I don't know the answer to your question. Neil Neil, You are welcome. Herbert Herbert, this question belongs in the forum I think - feel free to ask it there. OK, thanks. Herbert Thank you sir. Realy good article. I want to as a question with your permission. MATLAB always setting b=[1,2,1]. But when I search different web sites, this matrix can changable. What's the reason of this situation. I want to implement this filter in my embedded project via C code. How can i implement this filter. Thank you and have a nice day. Hi, I'm not sure I understand your question. In my article, each biquad has 2 zeros at z = -1, so the numerator of each biquad is (z + 1)(z + 1) = z^2 + 2z + 1, or equivalently 1 + 2z^-1 + z^-2. Thus each biquad has b = [1 2 1]. As I discussed, these are the zeros resulting from the bilinear transform of the analog lowpass zeros. This choice of zeroes means that the frequency response has a null at fs/2, which is a sensible choice for lowpass filters..
https://www.electronics-related.com/showarticle/1137/design-iir-filters-using-cascaded-biquads
CC-MAIN-2022-21
refinedweb
3,449
65.32
KDECore KLibrary Class ReferenceRepresents a dynamically loaded library. More... #include <klibloader.h> Detailed DescriptionRepresents a dynamically loaded library. KLibrary allows you to look up symbols of the shared library. Use KLibLoader to create a new instance of KLibrary. - See also: - KLibLoader Definition at line 50 of file klibloader.h. Constructor & Destructor Documentation Don't create KLibrary objects on your own. Instead use KLibLoader. Definition at line 98 of file klibloader.cpp. Member Function Documentation Returns the factory of the library. - Returns: - The factory of the library if there is any, otherwise 0 Definition at line 146 of file klibloader.cpp. Returns the file name of the library. - Returns: - The filename of the library, for example "/opt/kde2&/lib/libkspread.la" Definition at line 141 of file klibloader.cpp. Looks up a symbol from the library. This is a very low level function that you usually don't want to use. Unlike symbol(), this method doesn't warn if the symbol doesn't exist, so if the symbol might or might not exist, better use hasSymbol() before symbol(). - Parameters: - - Returns: - true if the symbol exists - Since: - 3.1 Definition at line 192 of file klibloader.cpp. Returns the name of the library. - Returns: - The name of the library like "libkspread". Reimplemented from QObject. Definition at line 136 of file klibloader.cpp. Looks up a symbol from the library. This is a very low level function that you usually don't want to use. Usually you should check using hasSymbol() whether the symbol actually exists, otherwise a warning will be printed. - Parameters: - - Returns: - the address of the symbol, or 0 if it does not exist Definition at line 179 of file klibloader.cpp. Unloads the library. This typically results in the deletion of this object. You should not reference its pointer after calling this function. Definition at line 198 of file klibloader.cpp. The documentation for this class was generated from the following files:
http://api.kde.org/3.5-api/kdelibs-apidocs/kdecore/html/classKLibrary.html
CC-MAIN-2014-15
refinedweb
322
53.07
pyui render - view .pyui files on PC Hey guys. I was annoyed of not having a tool which allows you to view .pyuifiles on PC so I thought about creating one :). It is a python script, which will parse .pyuifile (which is basically jsonfile) and then generate HTML file which can be viewed in the browser such as Firefox, Safari, etc. It is still dirty and surely not complete yet. Program supports View, Labeland TextFieldelements (hopefully). Gonna try to add more elements tomorrow after school. Please post your reviews and ideas here or open issues on the github page :]. Repository: @ShadowSlayer Very cool, can't wait to check it out. Pprint actually works pretty well. I tend to think of pyuis as just a repr of a dict. You can read pyuis and eval them, and pprint shows the structure well. This can be useful,for, for instance moving a bunch of components into some container, or flattening a view that had subviews. from pprint import pprint def showpyui(filename): with open(filename) as f: s=f.read() # safe eval. Pyuis have literals true and false instead of True and False, so we have to fix those by defining appropriate locals pyuidict=eval(s,{'__builtins__':None},{'true':True,'false':False}) pprint(pyuidict)
https://forum.omz-software.com/topic/1413/pyui-render-view-pyui-files-on-pc
CC-MAIN-2021-31
refinedweb
212
77.33
Lesson 1 - E-shop in PHP - Directory structure Welcome, intermediate and advanced programmers, to the continuation of the most popular course on the network - Simple object-oriented CMS in PHP (MVC). This one is for all of you who read through the course and now want to create a real-world, commercial application to start your business out, and/or to gain a more impressive addition to your portfolio (and get a really good job in IT). I will reveal techniques that I've learned over the years while developing the ICT.social information system. Based on the project from the previously mentioned course, we'll program a professional and modular information system with the as much task automation as possible. We'll build a fully-functional e-shop on a re-usable system. We'll parse PHPDoc annotations using a reflection, render forms using a framework, generate PDF invoices, provide a JSON API, render a tree menu using recursion, use foreign keys, fulltext search, transaction and lots, and lots more. To put it simply, you'll learn everything you'll need to get a job as a PHP programmer. First and foremost, we'll show you a few screenshots of the final product (the project is so much more complex than what is seen here, but we don't have all that much space to preview all its features): The second step in the registration process. A generated PDF invoice. Product filtering. Product details. Accounting settings. Today's tutorial will be about improving the project structure and migrating the database. Project structure Real commercial applications have many classes, so splitting the structure up into models, views and controllers won't directory structure based on the PSR-4 standard. We'll use namespaces to create modules and modify the database. You gain points by supporting our network. This is done by sending a helpful amount of money to support the site, or by creating content for the network.
https://www.ictdemy.com/php/e-shop/course-e-shop-in-php-directory-structure
CC-MAIN-2021-31
refinedweb
328
50.67
Fabian socialism will be destroyed in the US and Europe - From: taxationistheft2003@xxxxxxxxx - Date: Mon, 16 Jul 2007 08:33:45 -0700 Western Political Liberation: 'Made in China' by Gary North As consumers in the West, we love China. We are not so fond of India, since we seldom see "Made in India" stickers on the goods we buy. But "Made in China" stickers are everywhere. Wal-Mart in 1998 posted banners saying the company bought American. Apparently, someone in management thought shoppers cared. They didn't. Wal-Mart took down those banners years ago. Nobody protested. The parking lots did not empty. Companies in China began stocking the shelves at Wal-Mart, K-Mart, and Target. Call it the Bob Barker phenomenon. The price is right. Bob has retired, but the show goes on. So does its message. So, if we buy "Made in China" items, we don't hate China. As customers, we love China. But if a person is employed in manufacturing, he fears China. If he has just been laid off, he may hate China. If the plant where he worked is closed for good, he blames China. We all wear two hats: a consumer's hat and a producer's hat. As consumers, we want producers to compete. As producers, we want protection from unfair competition. What is unfair competition? Successful competition. People vote with their money as consumers. They often vote in a polling booth as producers. As consumers, they want liberty. As voters, they want controls, or more to the point, control. Control over them. You know. Competitors. Spending is about liberation from controls on us. Voting is about the imposition of controls by us. We are schizophrenic. It shows. LIBERATION FROM TRADE UNIONS When I was in my teens, I was interested in national politics. I no longer am. I used to think we could make things better through national politics. I no longer do. St. Paul commented on this change of mind. When I was a child, I spake as a child, I understood as a child, I thought as a child: but when I became a man, I put away childish things (1 Corinthians 13:11). Back in the 1950's, there was a political movement called the right-to- work movement. "The right to work" meant the right to work without joining a union. It was a great slogan, although it was bad economics and worse jurisprudence. There is no right to work. There ought to be a right to make an offer to work. There is no right to a job. There should be a right to bid on a job. But "the right to work" was a great political slogan. When explained, it made an important point: trade unions were limiting the right for an employer to accept bids from people who wanted to work at wages lower than what trade union members wanted to match. Trade unionism in America in 1955 was limited to about 25% of the work force, but the manufacturing sector was heavily unionized, especially in the industrial North. If you did not belong to a union, you had to seek employment elsewhere. It was illegal for unionized businesses to hire you as an independent. As government-protected producers, union members could demand and receive above-market wages. They could not legally be fired if they went on strike. People who crossed a picket line were taunted. They were called scabs. There are still scabs today. They are called Asians. There are over three billion of them. They don't cross picket lines. There are no picket lines in Asia. There are few unions. Asian workers produce goods, which are offered for sale to Westerners. There is no phrase more beloved by American consumers than "Let's make a deal." If American manufacturers cannot beat Asian prices, they go out of business. Result: no more picket lines. No more jobs. Union members should have seen this coming. The process began in the 1950's. But "Asia" back then was located south of the Mason-Dixon line. In the South in the 1950's, right-to-work bills became laws. Employment steadily moved south, reversing half a century of emigration out of the South. The South benefited from two things: right-to-work laws and air-conditioning. The North could not compete, and still can't. Right-to-work laws were passed in the deep Midwest. From Texas to North Dakota, and from Texas to Virginia, the country is liberated from unions. Twenty-two states have passed these laws. If states in the North abolished state income taxes, and if the Federal government revoked all "labor legislation" - laws forcing businesses to bargain with unions that get a 51% vote of employees, just once - the North would rise again. The North's economy would start booming, the way that South Dakota - no state income tax - is booming. But this is unlikely to take place. The North's economy is bloated with bureaucracy, hostile to economic liberty, and aging fast. When you think "North," think "Senator Kennedy." Trade unions fought back in the 1950's. They could not keep businesses from building plants in the South, where wages were lower, regulation was weaker, and politicians could be bribed cheaper. So, they got Congress to pass a series of minimum wage laws. The first result of these laws was to keep black teenage males unemployed, thereby reducing competition in the labor markets. The other result was to keep Southern workers from being able to bid their labor below the minimum wage. This made it less profitable to build plants in the South. But then price inflation under Johnson, Nixon, Ford, and especially Carter forced down real wages. The dollar bought less. The legal minimum wage did not keep pace with price inflation. The Federal Reserve System did the bidding of four Presidents - debauched the dollar - and that gutted the minimum wage law. At the same time, Japan started exporting low-cost, high-value products. Americans started buying them. The 1970's was the decade of the Japanese invasion. The 1980's solidified the market share of products manufactured by Japan, Hong Kong, and South Korea. Then, in 1978, Deng Xiao Ping liberated China's agricultural sector. Output soared within a year. This made more food available in cities. More rural Chinese could then move to cities. The joint phenomenon of agricultural revolution and urbanization, which had begun in the West in the sixteenth century and has kept accelerating, had at last reached China. Agricultural output always increases under liberty. Efficient farmers out-compete inefficient farmers, who then move to cities or into local trades. Today, China must build the equivalent of Houston every month to accommodate the flood of immigrants from the farms. Initially, Japanese goods undermined American manufacturers. Here was the basic strategy: price competition (1950's), product improvement (1960's), marketing (1970's), credit (1980's). Then Chinese manufacturers got the liberty they needed to compete, when the Communist Party freed up the economy. China began where all newcomers must begin: price competition. It caught up with Japan in less than two decades. Capital flowed in. Technology was either stolen or purchased. Output soared. Wal-Mart took down its "We buy American" banners. That visibly marked the end of American labor union movement. It exists mainly in the auto industry, which is approaching bankruptcy, and government, which is also approaching bankruptcy. In 1958, I dreamed of the end of the labor union movement. Because of the Federal Reserve and Asian manufacturing, my dream has pretty much come true. EUROPE FALLS BEHIND The trade union movement in Europe is still strong. China is now undermining Europe's unions. They are still politically powerful, but consumers like good deals, and China offers good deals. The French socialists failed to win last month. China is in the World Trade Organization. The internationalists' goal has always been to cut trade barriers and extend bureaucratic control over the international economy. But the plan has backfired. The Chinese, with India close behind, have used the WTO's managed trade system to breach the Great Wall of Europe - trade unionism - without surrendering an inch on the question of adopting European rules regarding pollution, wage negotiating, and safety. The internationalists worked for almost a century to get their system accepted in North America and Western Europe. Their dream has almost come true. But they didn't foresee what Asians would do to their plans. Always, Asia had been looked at as a market for European and American goods. Asian farmers would export food to us. We would sell them manufactured goods. America now sells its farm products to China - our only big export success in China - while "Made in China" stickers are everywhere. In short, Keynesian economic planners in the West, who adopted a post- World War II strategy of imposing government controls on Japan to prepare the West's economies for a grand invasion of Asia, knocked down the Asian drawbridges that had kept out Western goods for centuries. As soon as the drawbridges were down - in Japan, South Korea, Hong Kong, and Singapore - the flood of goods started flowing: from East to West. The planners' grand plan completely backfired. Ludwig von Mises was correct when he argued that the results of government economic planning will always be the opposite of the official policy of the planners. The United States succumbed first. We were Japan's primary export market. Then, nation by nation, Japan's exporters established a foothold. Western consumers started buying. The other Asian tigers followed Japan's lead. It is now Europe's turn. Bureaucracy is vastly more entrenched in Europe. The welfare State is more extensive. Taxes are higher. Mobility is less. Europe is sclerotic economically and biologically. It is aging rapidly. The European internationalists have centralized power through the European Union. They are about to do an end run around voters, who have rejected the 300-page constitution. The deal will be sealed through treaty, with no opportunity for voters to register a veto. The Eurocrats think they can gain control over the economy by means of faceless bureaucrats who answer neither to politicians nor businessmen. But they have no power at all over Asia. They are committed to low tariffs and managed trade. But the Eurocrats will not terrify China's politicians. China's politicians will smile, bow politely, and do nothing. China's politicians have a unique way of dealing with China's bureaucrats, as Zheng Xuaoyu was reminded this week. He ran China's equivalent of the Food and Drug Administration. He was executed for taking bribes from companies that then released drugs that killed ten people. In 1968, George Wallace's third-party presidential campaign had a slogan: "Send them a message!" They take message-sending seriously in China. Message to the WTO: "China will not comply." China, India, and the rest of Asia will take advantage of the European Union's low tariffs and reduced import quotas. Europe's main import quotas are on agricultural imports. Asia imports food, so those restrictions will have no effect on Asia. As for the WTO's guidelines on labor relations and safety conditions, who will enforce them on 200 million Chinese workers, just off the farm, who earn about $350 a month for a 12x7 work week? Heads, Asia wins; tails, Eurocrats lose. Europe is going to find that its government-protected industries suffer a fate similar to Mr. Zheng's. It's just a matter of time. Europe's century-old tradition of social democracy - Marxism without bloody revolution - will not survive another 25 years. It's as sclerotic as the rest of Europe is. If the Eurocrats don't allow the European economy the freedom that China now enjoys, European manufacturing is doomed. Labor unions control manufacturing and government. All over the West, China's free market economy is bankrupting both. THE GREAT REVERSAL The West after World War I began imitating ancient China. Not since the days of Egypt's pharaohs has there been a bureaucracy to rival China's, which lasted for a thousand years. Entry was based on mastery of Chinese poetry. Even military officials had to pass this exam. It was rigorous. Smart people with a gift for poetry passed. This eliminated nepotism. Sons could not follow their fathers unless they could pass the exam. By 1900, the failure of the old system was visible even to the rulers. The West had sent theologically liberal socialist missionaries to China in the 1890's. The result was revolution: first organized by Sun Yat-sen in 1913, then under Mao in 1949. China's leaders had been trained by liberals in missionary schools and denominational colleges in China. Mao speeded up the process of social revolution by destroying old families and old wealth. India experienced a similar system of secular evangelism. Its best and brightest were sent to study at Oxford and Cambridge in 1900. They came back confirmed Fabians. They became bureaucrats. Japan had become Western in 1868, by top-down decree. It went militaristic. It also went bureaucratic. All roads were heading West in 1900: to Marxism, Fabianism, or militarism, but surely to bureaucracy. Then came Japan's defeat in World War II. The old guard lost face. Industrialists invited W. Edwards Deming to teach them about quality control in manufacturing. They were under the impression that Deming was influential in Western manufacturing circles. He wasn't. There was also Hong Kong's experiment in liberty after 1945. Deng saw what happened there. Then he came to power. He imitated Hong Kong. Next, India began to follow China's lead after 1995. The East has generally abandoned Western economic theories of protection of labor unions. It has adopted wage competition and labor mobility. It passes the savings along to its trade partners. Under pressure from Asian imports, the United States has been forced to abandon the New Deal in the field of labor relations. Now it is Europe's turn. The tide has turned. Asian politicians have loosened government control over their economies. The result is enormous productivity. Anyone in the West who gets in the way of this avalanche of production will be crushed. Consumers will be crushed with bargains. Producers will be crushed with competition based on four major factors: unprecedented price competition, an unprecedented savings rate, an unprecedented work ethic, and unprecedented labor mobility. Asia learned well: first from left-wing teachers, then from Henry Ford. THE COST OF LIBERATION At the beginning of any revolution, leaders call their followers to a life of self-sacrifice. For the sake of the revolution, men must sacrifice their lives, purses, and sacred honor. That call to sacrifice doesn't sell well these days. What sells well is a call to go to the mall. If today's revolutionary youth leader were as symbol-oriented as yesterday's, he would have a large photo on his wall of Sam Walton. Beneath the photo would be these words: Sam lives! The West today is facing a different kind of political revolution: an imported revolution. It was one exported by the West after World War II: free trade. Now it is being sent back, embodied in low-price, high- quality goods. Those of us who grew up fighting the Fabians, who were dominant in American academia after World War II, are at long last seeing the bastions of Fabian-Keynesian power being bankrupted. The labor unions are the symbol of this process. They are dying. They will die soon enough in Western Europe. Businesses that relied on government protection, such as tariffs and import quotas, are also dying. Even the public schools are in trouble. Home school curriculum materials, which cost a fortune to design and print in 1980, can be bought for $200, once per family. The program's CD-ROM's are run on a $500 Chinese-made computer. There will be more pain for people who were trained to work in government-protected industries. There will be less pain for consumers. There will be great sacrifices made by retirees who trusted their labor unions' leaders when the leaders negotiated fat pension promises from companies that can renege simply by declaring bankruptcy. There will be fewer sacrifices for people who spotted the scam and stayed in the labor force. There will be great pain for taxpayers who have trusted similar promises about government pensions, Medicare, and drug payment subsidies. There will be less pain for those who spotted the scam and stayed in the labor force. Every revolution has winners and losers. The winners will be entrepreneurs and consumers. The losers will be politicians and their victims, who believed political promises of protection. Government today is a gigantic protection racket. Now the "family" is under assault by products bearing these stickers: "Made in China." The revolution is being made in China. It is not Mao's revolution. It is Deng's. CONCLUSION My children will live to see the destruction of the cartel-creating, tax-extracting, bureaucracy-expanding, sclerotic Fabian Establishment that has had its way in the West since about 1933. As surely as its evil twin, Marxism, went the way of all flesh by 1991, so will Fabianism perish. The cutting edge of the revolution is a sticker: "Made in China." In my mind is a life-sized photo of David Rockefeller. (If I were a European, it would be Jean Monnet.) Every time I buy something made in Asia, I mentally stick a pin into the photo. So, buy something made in Asia. Support the revolution. . - Prev by Date: The Thing is thinging - Next by Date: We's be raping white women! - Previous by thread: The Thing is thinging - Next by thread: We's be raping white women! - Index(es):
http://newsgroups.derkeiler.com/Archive/Talk/talk.politics.libertarian/2007-07/msg00011.html
CC-MAIN-2013-20
refinedweb
2,984
68.36
HPUX's user in /etc/passwd may have a negative uid/gid, this may cause trouble if we set it as guest account. It would be better to check it in testparm. Created attachment 4226 [details] check negative uid/gid This patch checks negative uid/gid on hpux. There is similar workaround on the internet, but this patch makes it explict. Checked the log, no one did this patch before. Ok, is the underlying issue that HPUX uses a signed integer for uid_t/gid_t ? If so, we really need to change this to a feature test instead of using #ifdef HPUX. That way it'll cover all platforms with this problem. If you can confirm I'll adapt the patch and add a configure test, thanks. Jeremy. For HPUX 11.23 (ia4), about line 220 of <sys/types.h>: # ifndef _GID_T # define _GID_T typedef int32_t gid_t; /* For group IDs */ # endif /* _GID_T */ # ifndef _UID_T # define _UID_T typedef int32_t uid_t; /* For user IDs */ # endif /* _UID_T */ so it seems to use a signed integer for uid_t/gid_t. Thanks,
https://bugzilla.samba.org/show_bug.cgi?id=6426
CC-MAIN-2021-17
refinedweb
175
74.59
Docker. There are pros and cons for each type of virtualized system. If you want full isolation with guaranteed resources, a full VM is; let’s say you have thousands of tests that need to connect to a database, and each test needs a pristine copy of the database and will make changes to the data. The classic approach to this is to reset the database after every test either with custom code or with tools like Flyway – this can be very time-consuming and means that tests must be run serially. However, with Docker you could create an image of your database and run up one instance per test, and then run all the tests in parallel since you know they will all be running against the same snapshot of the database. Since the tests are running in parallel and in Docker containers they could run all on the same box at the same time and should finish much faster. Try doing that with a full VM. Interesting! I suppose I’m still confused by the notion of “snapshot[ting] the OS”. How does one do that without, well, making an image of the OS? Well, let’s see if I can explain. You start with a base image, and then make your changes, and commit those changes using docker, and it creates an image. This image contains only the differences from the base. When you want to run your image, you also need the base, and it layers your image on top of the base using a layered file system: as mentioned above, Docker uses AuFS. AuFS merges the different layers together and you get what you want; you just need to run it. You can keep adding more and more images (layers) and it will continue to only save the diffs. Since Docker typically builds on top of ready-made images from a registry, you rarely have to “snapshot” the whole OS yourself. Docker vs Virtual machine – Answer #2: It might be helpful to understand how virtualization and containers work at a low level. That will clear up lot of things. Note: I’m simplifying a bit in the description below. See references for more information. How does virtualization work at a low level? In this case the VM manager takes over the CPU ring 0 (or the “root mode” in newer CPUs) and intercepts all privileged calls made by the guest OS to create the illusion that the guest OS has its own hardware. Fun fact: Before 1998 it was thought to be impossible to achieve this on the x86 architecture because there was no way to do this kind of interception. The folks at VMware were the first who had an idea to rewrite the executable bytes in memory for privileged calls of the guest OS to achieve this. The net effect is that virtualization allows you to run two completely different OSes on the same hardware. Each guest OS goes through all the processes of bootstrapping, loading kernel, etc. You can have very tight security. For example, a guest OS can’t get full access to the host OS or other guests and mess things up. How do containers work at a low level? Around 2006, people including some of the employees at Google implemented a new kernel level feature called namespaces (however the idea long before existed in FreeBSD). One function of the OS is to allow sharing of global resources like networks and disks among a kind of virtualization and isolation for global resources. This is how Docker works: Each container runs in its own namespace but uses exactly the same kernel as all other containers. The isolation happens because the kernel knows the namespace that was assigned to the process and during API calls it makes sure that the process can only access resources in its own namespace. The limitations of containers vs VMs should be obvious now: You can’t run completely different OSes in containers like in VMs. However, you can run different distros of Linux because they do share the same kernel. The isolation level is not as strong as in a VM. In fact, there was a way for a “guest” container to take over the host in early implementations. Also, you can see that when you load a new container, an entirely new copy of the OS doesn’t start as it does in a VM. All containers share the same kernel. This is why containers are lightweight. Also unlike a VM, you don’t have to pre-allocate a significant chunk of memory to containers because we are not running a new copy of the OS. This enables running thousands of containers on one OS while sandboxing them, which might not be possible if we were running separate copies of the OS in their own VMs. Answer #3: Good answers. Just to get an image representation of container vs VM, have a look at the one below. Answer #4:. Hypervisor. Types of Virtualization The virtualization method can be categorized based on how it mimics hardware to a guest operating system and emulates a guest operating environment. Primarily, there are three types of virtualization: - Emulation - Paravirtualization - Container-based virtualization Emulation Emulation, also known as full virtualization runs the virtual machine OS kernel entirely in software. The hypervisor used in this type is known as Type 2 hypervisor. It is installed on the top of the host operating system which is responsible for translating guest OS kernel code to software instructions. The translation is done entirely in software and requires no hardware involvement. Emulation makes it possible to run any non-modified operating system that supports the environment being emulated. The downside of this type of virtualization is an additional system resource overhead that leads to a decrease in performance compared to other types of virtualizations. Examples in this category include VMware Player, VirtualBox, QEMU, Bochs, Parallels, etc. Paravirtualization. Container-based Virtualization Container-based virtualization, also known as operating system-level virtualization, enables multiple isolated executions within a single operating system kernel. It has the best possible performance and density and features dynamic resource management. The isolated virtual execution environment provided by this type of virtualization is called a container and can be viewed as a traced group of processes. . Namespaces can be used in many different ways, but the most common approach is to create an isolated container that has no visibility or access to objects outside the container. Processes running inside the container appear to be running on a normal Linux system although they are sharing the underlying kernel with processes located in other namespaces, same for other kinds of objects. For instance, when using namespaces, the root user inside the container is not treated as root outside the container, adding additional security. The Linux Control Groups (cgroups) subsystem, the next major component to enable container-based virtualization, is used to group processes and manage their aggregate resource consumption. It is commonly used to limit the memory and CPU consumption of containers. Since a containerized Linux system has only one kernel and the kernel has full visibility into the containers, there is only one level of resource allocation and scheduling. Several management tools are available for Linux containers, including LXC, LXD, systemd-nspawn, lmctfy, Warden, Linux-VServer, OpenVZ, Docker, etc. Containers vs Virtual Machines. Update: How does Docker run containers in non-Linux systems? If containers are possible because of the features available in the Linux kernel, then the obvious question is how do non-Linux systems run containers. Both Docker for Mac and Windows use Linux VMs to run the containers. Docker Toolbox used to run containers in Virtual Box VMs. But, the latest Docker uses Hyper-V in Windows and Hypervisor.framework in Mac. Now, let me describe how Docker for Mac runs containers in detail. Docker for Mac uses to emulate the hypervisor capabilities and Hyperkit uses hypervisor.framework in its core. Hypervisor.framework is Mac’s native hypervisor solution. Hyperkit also uses VPNKit and DataKit to namespace network and filesystem respectively. The Linux VM that Docker runs in Mac is read-only. However, you can bash into it by running: screen ~/Library/Containers/com.docker.docker/Data/vms/0/tty. Now, we can even check the Kernel version of this VM: # uname -a Linux linuxkit-025000000001 4.9.93-linuxkit-aufs #1 SMP Wed Jun 6 16:86_64 Linux. All containers run inside this VM. There are some limitations to hypervisor.framework. Because of that Docker doesn’t expose docker0 network interface in Mac. So, you can’t access containers from the host. As of now, docker0 is only available inside the VM. Hyper-v is the native hypervisor in Windows. They are also trying to leverage Windows 10’s capabilities to run Linux systems natively.
https://programming-articles.com/what-is-the-difference-between-docker-and-virtual-machine/
CC-MAIN-2022-40
refinedweb
1,475
54.22
Hi, I have noticed the section "Configuration file sourcing order" (at startup) of the Bash article is confusing. There is some discussion to this article requesting to clarify the startup procedure too. There are two subsections of this section: 1. Generic part starting with "These files are sourced..." The second top-level bullet of generic section is: if interactive + non-login shell → /etc/bash.bashrc then ~/.bashrc Where does the reference to /etc/bash.bashrc file come from? The Bash manual does not say anything about this file. Is this Arch-specific? IMO, this bullet should read simply: if interactive + non-login shell → ~/.bashrc 2. Arch-specific starting with "But, in Arch, by default..." It should be stated it is Arch-specific and the I'd propose to stick to the A + B → C convention. The second bullet is possibly confusing due to lack of paranthesis: /etc/skel/.bash_profile (which users are encouraged to copy to ~/.bash_profile) sources ~/.bashrc which means that /etc/bash.bashrc and ~/.bashrc will be executed But, why not to stick to the convention above and change it to: /etc/skel/.bash_profile → ~/.bashrc which means that /etc/bash.bashrc and ~/.bashrc will be executed (...). Users are encouraged to copy /etc/skel/.bash_profile to ~/.bash_profile. Simply, the current version tries to interleave and inject too many things within single statement. IMHO, that would make things clearer. Mateusz Loskot | github | archlinux-config Arch (x86-64) | ThinkPad T400 | Intel P8600| Intel i915 Arch (x86-64) | ThinkPad W700 | Intel T9600 | NVIDIA Quadro FX 2700M Offline The /etc/bash.bashrc file is sourced by /etc/profile. Other shells also read /etc/profile – bash.bashrc provides just a few sane system-wide defaults for that particular shell. Debian, Ubuntu and Suse have used /etc/bash.bashrc in the past; I don't know that they still use it. I've also seen references to distros using /etc/bashrc and /etc/bash/bashrc. I guess the answer is that use of /etc/bash.bashrc is common, other distros may use a similar resource file but give it a different name. A naked install of bash from source would not expect or require this file. Edited for, I hope, more clarity. Last edited by thisoldman (2012-06-09 23:46:00) Offline I guess the answer is that use of /etc/bash.bashrc is common Yes, I've researched about it quickly and I can confirm it is nothing specific to Arch. Simply, the Bash manual is lacking information about the/etc/bash.bashrc as system-wide configuration version of ~/.bashrc file: $ find . -type f | xargs grep -C 1 'bash.bashrc' ./config-top.h-/* System-wide .bashrc file for interactive shells. */ ./config-top.h:/* #define SYS_BASHRC "/etc/bash.bashrc" */ ./config-top.h- In shell.c, there is flag defined: static int no_rc; /* Don't execute ~/.bashrc */ which later is used to decide if /etc/bash.bashrc (provided by SYS_BASHRC) is sourced from the dedicated function: static void run_startup_files () { ... /* bash */ if (act_like_sh == 0 && no_rc == 0) { #ifdef SYS_BASHRC # if defined (__OPENNT) maybe_execute_file (_prefixInstallPath(SYS_BASHRC, NULL, 0), 1); # else maybe_execute_file (SYS_BASHRC, 1); # endif #endif maybe_execute_file (bashrc_file, 1); } ... } It seems /etc/bash.bashrc is ru nautomatically on startup, if found and if the relevant flags are set, but not necessarily only if sourced from /etc/profile file. It could be clarified anyway. Also, I found citation from the "Ubuntu Certified Professional Study Guide" by Michael Jang, the following were the points given in that book about function of /etc/bash.bashrc file: * It assigns a prompt, which is what you see just before the cursor at the command prompt. * It includes settings from /etc/bash_completion to enable command completion. * It configures messages associated with sudo access; for more information. But perhaps that's something more Ubuntu-specific now Anyhow, I'd still suggest to update the Wiki as mentioned above, even if there is nothing Arch-specific. Mateusz Loskot | github | archlinux-config Arch (x86-64) | ThinkPad T400 | Intel P8600| Intel i915 Arch (x86-64) | ThinkPad W700 | Intel T9600 | NVIDIA Quadro FX 2700M Offline
https://bbs.archlinux.org/viewtopic.php?id=143041
CC-MAIN-2017-17
refinedweb
675
68.67
A Python wrapper for the JDK unpack200 utility Project description PythonUnpack200 A Python wrapper for the JDK unpack200 utility. Usage The unpack200.unpack function takes three parameters: - infile: string, full path to the pack200 file to extract - outfile: string, full path to the jar file you'd like to extract to - remove_source: bool (optional, default False), whether to remove the pack200 file specified in <infile> after extraction is complete For example: import unpack200 unpack200.unpack( r"C:\path o\packfile", r"C:\path o\outfile" ) Building Setup.py is currently only set up for Windows at the moment - feel free to contribute a Linux or Mac PR! To build, first download and extract the following dependencies to your PC: - Java JDK source code, e.g. from: - Java JDK built run: python3 setup.py build It will ask you two questions, enter a full path into both, e.g: Where is the JDK source located? C:\JDK\src Where is the JDK include located? C:\Program Files\Java\jdk1.8.0_131\include It will attempt to derive the correct locations for everything from these paths, and should build the extension. Issues Found an issue? Please report it either in the issues section of this repo, or to the developers of JDK. Any fix PRs would be welcome! Project details Release history Release notifications | RSS feed Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/unpack200/
CC-MAIN-2020-24
refinedweb
245
57.37
Overview Atlassian Sourcetree is a free Git and Mercurial client for Windows. Atlassian Sourcetree is a free Git and Mercurial client for Mac. PDCLib - the Public Domain C Library ATTENTION -- PDCLib has moved! As of 2018-06-11, the PDCLib project has MOVED. The new (old) homepage for the project is now -- and please do check that site for the links to source, issue tracker etc. There are currently no plans to keep the repository here at Bitbucket in sync with the main repository. If I find a way to do this with a minimum of hassle, I will, but I make no promises at this point. 2018-06-12, Martin Baute What is it This is a C Standard Library - what's defined in ISO/IEC 9899 "Information technology — Programming languages — C" or extensions to the above defined in ISO/IEC 14882 "Information technology — Programming languages — C++". A few extensions may optionally be provided. License - Written in - 2003-2012 by Martin "Solar" Baute, - 2012-2018 by Owen Shepherd, - 2018- by Martin Baute. <>. Exceptions Unicode Character Data PDCLib necessarily includes Unicode character data derived from that provided by Unicode, Inc in its' implementation of the localization and wide character support (in particular for use by the ctype.h and wctype.h functions.) Unicode, Inc licenses that data under a license agreement which can be found at <>, or in the file UNICODE_DATA_LICENSE.txt. found in the same directory as this file. Test Suite Portions of the test suite are under different licenses. Where this is the case, it is clearly noted in the relevant location. The license of this code has no bearing upon the licensing of the built library (as it does not comprise part of it). At the time this was written, this exception only applies to portions of the printf test suite, which are released under the terms of the 2-clause BSD license (see testing/printf_testcases.h for full details) Terms for extensions Extensions are permitted only if they pass the following tests: - Pre-existing wide usage - On most systems, the system C library must maintain its application binary interface for long periods of time (potentially eternity). Existing wide usage demonstrates utility - In keeping with the spirit of the standard - The extension should respect the design, intentions and conventions of the C standard, and feel like a natural extension to the offered capability. - Not system dependent - The extension should not add any additional dependencies on the underlying system - Non-duplicative - Extensions should not duplicate functionality already provided by the standard - Disabled by default - PDCLib will always default to a "strictly conforming" mode exposing only functionality offered by the version of the standard specified by the __STDC_VERSION__, __STDC__ or __cplusplus macro; extensions will only be exposed when requested. Additionally, extra consideration will be given to extensions which are difficult or impossible to implement without access to internal structures of the C library. Conrete Examples: - strndup - Included. strndup is easily defined in terms of existing standard functions, follows the standard's naming conventions, is in wide usage, and does not duplicate features already provided. - posix_memalign - Rejected. Has existing wide usage, is not system dependent (can be implemented, albeit inefficiently, on top of malloc), but naming is not consistent with the naming used by the standard (posix_ prefix) and duplicates functionality provided by the C11 standard - open, close, read, write, ... - Rejected. Widely used, but duplicates functionality provided by the standard (FILE objects set to be unbuffered), and not able to implement full semantics (e.g. in relation to POSIX fork and other functionality from the same defining standard) in a platform-neutral way - strl* - Rejected. Used somewhat widely, in keeping with the standard, not system dependent, but duplicative of functionality provided by (optional) Annex K of the C standard. - flockfile, funlockfile, getc_unlocked, putc_unlocked, fwrite_unlocked, ... - Accepted. Provide functionality not provided by the standard (and useful in light of the C11 addition of threading). Can be trivially implemented in terms of the <threads.h> mutex functions and the bodies of the existing I/O functions, and impossible to implement externally Internals As a namespace convention, everything (files, typedefs, functions, macros) not defined in ISO/IEC 9899 is prefixed with _PDCLIB. The standard defines any identifiers starting with '_' and a capital letter as reserved for the implementation, and since the chances of your compiler using an identifier in the _PDCLIB range are slim, any strictly conforming application should work with this library. PDCLib consists of several parts: - standard headers; - implementation files for standard functions; - internal header files keeping complex stuff out of the standard headers; - the central, platform-specific file _PDCLIB_config.h; - platform-specific implementation files; The standard headers (in ./includes/) only contain what they are defined to contain. Where additional logic or macro magic is necessary, that is deferred to the internal files. This has been done so that the headers are actually educational as to what they provide (as opposed to how the library does it). There is a seperate implementation file (in ./function/{header}/) for every function defined by the standard, named {function}.c. Not only does this avoid linking in huge amounts of unused code when you use but a single function, it also allows the optimization overlay to work (see below). (The directory ./functions/_PDCLIB/ contains internal and helper functions that are not part of the standard.) Then there are internal header files (in ./internal/), which contain all the "black magic" and "code fu" that was kept out of the standard headers. You should not have to touch them if you want to adapt PDCLib to a new platform. Note that, if you do have to touch them, I would consider it a serious design flaw, and would be happy to fix it in the next PDCLib release. Any adaption work should be covered by the steps detailed below. For adapting PDCLib to a new platform (the trinity of CPU, operating system, and compiler), make a copy of ./platform/example/ named ./platform/{your_platform}/, and modify the files of your copy to suit the constraints of your platform. When you are done, copy the contents of your platform directory over the source directory structure of PDCLib (or link them into the appropriate places). That should be all that is actually required to make PDCLib work for your platform. Future directions Obviously, full C89, C99 and C11 conformance; and full support for the applicable portions of C++98, C++03 and C++11 (the version which acomplishes this will be christened "1.0"). Support for "optimization overlays." These would allow efficient implementations of certain functions on individual platforms, for example memcpy, strcpy and memset. This requires further work to only compile in one version of a given function. Post 1.0, support for C11 Annexe K "Bounds checking interfaces" Development Status - v0.1 - 2004-12-12 - Freestanding-only C99 implementation without any overlay, and missing the INTN_C() / UINTN_C() macros. <float.h> still has the enquire.c values hardcoded into it; not sure whether to include enquire.c in the package, to leave <float.h> to the overlay, or devise some parameterized macro magic as for <limits.h> / <stdint.h>. Not thoroughly tested, but I had to make the 0.1 release sometime so why not now. - v0.2 - 2005-01-12 - Adds implementations for <string.h> (excluding strerror()), INTN_C() / UINTN_C() macros, and some improvements in the internal headers. Test drivers still missing, but added warnings about that. - v0.3 - 2005-11-21 - Adds test drivers, fixes some bugs in <string.h>. - v0.4 - 2005-02-06 - Implementations for parts of <stdlib.h>. Still missing are the floating point conversions, and the wide-/multibyte-character functions. - v0.4.1 - 2006-11-16 With v0.5 (<stdio.h>) taking longer than expected, v0.4.1 was set up as a backport of bugfixes in the current development code. - #1 realloc( NULL, size ) fails - #2 stdlib.h - insufficient documentation - #4 Misspelled name in credits - #5 malloc() splits off too-small nodes - #6 qsort() stack overflow - #7 malloc() bug in list handling - #8 strncmp() does not terminate at '0' - #9 stdint.h dysfunctional - #10 NULL redefinition warnings - v0.5 - 2010-12-22 - Implementations for <inttypes.h>, <errno.h>, most parts of <stdio.h>, and strerror() from <string.h>. Still no locale / wide-char support. Enabled all GCC compiler warnings I could find, and fixed everything that threw a warning. (You see this, maintainers of Open Source software? No warnings whatsoever. Stop telling me it cannot be done.) Fixed all known bugs in the v0.4 release. Near Future Current development directions are: Implement portions of the C11 standard that have a direct impact on the way that PDCLib itself is built. For example, in order to support multithreading, PDCLib needs a threading abstraction; therefore, C11's thread library is being implemented to provide the backing for this (as there is no purpose in implementing two abstractions) Modularize the library somewhat. This can already be seen with components under "opt/". This structure is preliminary; it will likely change as the process continues.
https://bitbucket.org/pdclib/pdclib
CC-MAIN-2018-34
refinedweb
1,508
55.03
Hey guys. I'm sure that this is beyond simple, but I can't figure this out. As you will see in the code below, I'm trying to simulate a Pokemon battle (stupid easy). I'm trying to create a structure for 6 pokemon, and associate a string name, and an int for health. I've figured out how to make that in the "Class" section. So, my question is, how do I then input the pokemon into the structure, so the health is associated with a pokemon? I know how to write it, and call it in C++, but I'm at a complete loss here. So, code below (please be gentle), and let me know what I'm missing please. using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Threading.Tasks; namespace Practice_Battle { class Program { struct chadList { string name; int health; }; static void Main(string[] args) { chadList chadPokemon; { } //Chad's List of Pokemon //string[] chadPokemon = new string[6] { "Charizard", "Garchomp", "Metagross", "Aegislash", "Quagsire", "Gardevoir" }; //Random Pokemon Generator Random ran = new Random(); string randomChoice = chadPokemon[ran.Next(0, chadPokemon.Length)]; Console.WriteLine("Gamer Chad wants to battle!"); Console.WriteLine("Do you accept? (yes or no)"); string acceptInput = Console.ReadLine(); string message = ""; if (acceptInput == "yes") message = "It's time to put your game face on!"; else message = "Too bad. It's time to battle!"; //Random Pokemon is Chosen, I choose a Pokemon Console.WriteLine(message); Console.WriteLine("I choose you, " + randomChoice); Console.WriteLine("Please choose your Pokemon:"); Console.WriteLine("Blastoise, Blazekin, Meganium"); string myChoice = Console.ReadLine(); if (myChoice == "Blastoise") message = "A water Pokemon, with lots of health. Good Choice!"; else if (myChoice == "Blazekin") message = "A fire Pokemon, with lots of Power. Let's do this!"; else message = "A plant Pokemon, with lots of health. Best of luck!"; Console.WriteLine(myChoice + ", " + message); Console.ReadLine(); } } } Thank you for any advice that you can give!
http://www.gamedev.net/topic/659658-stuck-in-c/
CC-MAIN-2016-36
refinedweb
318
70.19
About this Event Program: Feminine Yoga Set Essential Oils 1:1 for Women's Health Nutrition & Health Tips for a Harmonious Cycle Nutritious Apéro You will learn: * how to support your cycle naturally with a feminine approach to yoga * why we should not practice yoga like men * how to sync your yoga practice with your cycle * easy yogic practices to help you with PMS and menstrual pain * how essential oils can benefit your hormones * which essential oils support your cycle * how essential oils can help with PMS and menstrual issues * what foods to eat for every stage of your cycle Early bird price: 55 Fr. (valid till 20/03/19) Regular price: 65 Fr. Who are we? Julia is a passionate yoga teacher and health coach. She has been practicing and teaching different yoga styles for the last 4 years and fell more an more in love with a feminine approach to yoga. One that respects our inherent cyclical nature and supports our overall well-being. Currently, she teaches public classes in vinyasa, yin and power yoga and 1:1 private classes in fertility yoga and yoga for menstrual health. Inés is a holistic health coach & nutritionist. She is a passionate essential oil educator and pregnancy wellness coach. Her interest of all feminine essentially grew in her first pregnancy over 4 years ago. She supports women through a holistic pregnancy and educates families on taking a proactive approach to their health and wellness through natural remedies. Please note that we have a no refund cancelation.
https://www.eventbrite.com/e/womens-health-workshop-learn-to-love-your-cycle-tickets-56485896838?aff=ebdssbcitybrowse
CC-MAIN-2019-18
refinedweb
255
58.82
Java Review //*************************************** // File: Camus.java // Author: Mary Eberlein // // A simple first Java program //*************************************** public class Camus { // print a Camus quote public static void main(String[] args) { System.out.println("Camus said: "); System.out.println("Some people talk in their sleep. \nLecturers talk while other people sleep."); } } Output Camus said: Some people talk in their sleep. Lecturers talk while other people sleep. What's In Our Program? Programs are defined in terms of classes. Our program contains one class (the class named Camus). Our class is public - so our class can be used by other classes. public is a keyword in Java, i.e. a word that has a special, reserved meaning. All programs contain a method named main. Program execution begins in the main method. // Put your comment here /* put your comment here */ A method consists of a group of programming statements. Form of a method definition: modifiers ReturnType name (parameter list) { method statements } The method System.out.println is part of the java.lang package. This package contains predefined code - classes and methods - that we can use. By calling System.out.println, we can print text to the console window. Nuts and Bolts for Java Programs Printing made nicer: Escape Sequences - Two character sequences that represent other characters \n : the new line escape sequence \t : the tab escape sequence \" : double quote \\ : backslash Exercise: Re-do the Camus class so that the quote is contained in double quotes. Identifiers and Keywords Identifiers - words used when writing a program, e.g. public, class, Camus, static, void Keywords - special identifiers that are reserved for a special purpose in Java, e.g. public, class, static, void - keywords are always lowercase Identifiers must be composed of letters, digits, _ (the underscore character), and $ They cannot begin with a digit Examples: total sum MIN_LENGTH $amount Choose meaningful identifier names: Choose max instead of m Choose currentItem instead of c Data Types Java has 8 primitive types - 6 are number types (4 integer and 2 floating point types), the character type char, and the boolean type. Java integer types type storage range int 4 bytes approx -2 billion to 2 billion short 2 bytes -32,768 to 32, 768 long 8 bytes approx -9.2x10^18 to 9.2x10^18 byte 1 byte -128 to 127 Usually int is most practical. When to use long? When to use short or byte? Floating point types Numbers with fractional parts type storage range float 4 bytes approx -3.4x10^38 to 3.4x10^38 6-7 significant decimal digits double 8 bytes approx -1.8x10^308 to 1.8 x 10^308 15 significant decimal digits Usually use double. Suffix F --> type float Ex: 23.5F Ex: 23.5 is type double The Character Type - char Single quotes around character constants. Example: 'a', 'Z' The char type denotes characters in the Unicode encoding scheme. Unicode is a 2 byte code. Note: "a" is a string of length 1, NOT a char. We can compare characters using < and >. The ordering on digit and letter characters is: 1-9 A-Z a-z Ex : '1' < '2' < '3'... 'A' < 'a' < 'z' The boolean type A boolean variable has only 2 valid values: true and false Variables A variable is a name for a location in memory where data is stored. A variable has a type which tells the computer how to interpret the bits at that memory location. When we create a variable, or declare a variable, we specify the type. A variable declaration tells the compiler to put aside enough memory for the specified type of value. Example variable declaration: int myNum = 13; This declaration sets aside enough memory for an int (integer), and stores 13 at that memory location. We can refer to the value stored at this memory location through the variable name, myNum. Declaring a Variable The general form for a variable declaration is: dataType variableName; Examples: int count; double bankAccountBalance; char myFavLetter; Assignment - Assigning a value to a variable General form: variableName = expression; Or you can declare a variable and assign it a value at the same time: dataType variableName = expression; Examples: int count = 0; double area = 15.74; int length = 5; int width = 6; int perimeterRectangle = length + length + width + width; Operators + - * / % <-- integer remainder Examples: 15 + 2 is 17 3 - 1 is 2 15/4 is 3 <----- If you divide one integer by another, the result is an integer (the remainder is lost). 11.0/4 is 2.75 15%2 is 1 25%5 is 0 21% 6 is 3 Example: Write a program that assigns values to 2 variables that represent the length and width of a rectangle, and then prints the rectangle's area. Example: Write a program that assigns integer values to 3 variables that represent test scores, and then computes and prints the average score to the screen. Conversions between Numeric Types For binary operations on numeric values of different types: If any operand is a double, other will be converted to double. Otherwise if any operand is float, other will be converted to float. Otherwise if any operand is long, other will be converted to long. Otherwise if any operand is int, other will be converted to int. Otherwise if any operand is short, other will be converted to short. These conversions are made automatically since no information is lost. Ex: int n = 5; double x = n; // No information is lost when the int is stored in a variable of type double. To convert a double to an int, you must use an explicit cast , since information may be lost . Ex: double x = 5.8993; int n = (int) x; // n has the value 5 - the fraction was discarded. To round a float or double to the nearest int, use the Math.round method: Ex: double x = 5.8993; int n = (int) Math.round(x); // the value of n is 6. Note that the return type of Math.round() is long. Constants A constant is the associated name for a memory location whose value cannot be changed once it is assigned. Constants are declared and initialized just like variables. But you cannot change their value. General form: final dataType constantName = expression; Examples: final int MAX_OCCUPANCY_PAI314 = 62; // the max occupancy of classroom PAI 3.14 final double UNLEADED_PRICE = 2.09; Example: public class InToCm { // This program converts measurements in inches to centimeters public static void main(String[] args) { final double CM_PER_INCH = 2.54; // Convert 1.55 inches to centimeters System.out.println("Length of 1.55 in in centimeters: " + 1.55 * CM_PER_INCH); } } Useful Methods and Constants from java.lang.Math Constants static double E (approx. 2.71...) static double PI (approx 3.14159...) Methods abs(x) - returns the absolute value of a numeric type exp(x) - returns e^x. argument and return type: double max(x, y) min(x, y) pow(x, a) - returns x^a. argument and return type: double Exercise: Write a program that uses a variable to store the number of gallons of gas purchased by a driver. Use a constant to store the price of unleaded gasoline. Print the amount the driver spent on gas. Exercise: Write a program that calculates the area of a circle. Use the Math.PI constant in java.lang.Math.
http://www.cs.utexas.edu/~eberlein/cs313e/JavaReview.html
CC-MAIN-2015-11
refinedweb
1,207
67.35
def partition(self, s): res = [] self.dfs(s, [], res) return res def dfs(self, s, path, res): if not s: # backtracking res.append(path) for i in xrange(1, len(s)+1): if self.isPar(s[:i]): self.dfs(s[i:], path+[s[:i]], res) def isPar(self, s): return s == s[::-1] Here is a revised version with comments: def partition(self, s): res = [] self.dfs(s, [], res) return res def dfs(self, s, path, res): if not s: res.append(path[:]) # take care here return # backtracking for i in xrange(1, len(s)+1): if self.isPar(s[:i]): path.append(s[:i]) self.dfs(s[i:], path, res) path.pop() # simulate stack here def isPar(self, s): return s == s[::-1] I cannot see the difference between these two version, why should we simulate a stack here? and why should we replace path with path[:]? could you please give me a hint? Thanks.. Hi caikehe, I am also confused about the difference between path and path[:]. I have tried path, and it will return empty result.. Thanks! If you use path, the elements in it willl be updated automatically, If we use path[:], we just append the current elements in it. I have some discussion in other similar issue, and I think the reason should be that if we use path during the recursive function call, it will use the reference of path, so if path is modified later, the content of res will be updated as well. Thus we will get empty list in the end. But if we use path[:], it will create a new list and append it to res. The change of path will not affect the content of res Looks like your connection to LeetCode Discuss was lost, please wait while we try to reconnect.
https://discuss.leetcode.com/topic/19347/python-easy-to-understand-backtracking-solution
CC-MAIN-2017-51
refinedweb
303
72.66
Java Import Classes: A package is the style of Java to group the classes and interfaces together, or to say, a package is a collection of related predefined classes and interfaces. Packages are sets of classes and interfaces built into Java Development Kit (JDK). A package places restrictions (to the classes of other packages) to access its enclosed classes. Access specifiers work on package boundaries (between the classes of different packages). Java Import Classes: Definition and Importance A package is a group of related classes and interfaces. The JDK software is organized through packages. It is equivalent to a header file of C-lang and module of Modula language (Modula is a descendent of Pascal). Packages can be compressed into JAR files (refer xxxx) so that they can be downloaded or sent across the network fast. Java Import Classes: Advantages of Packages 1. With a simple import statement, all the classes and interfaces can be imported. 2. Java includes a provision to import only one class from a package. 3. It avoids namespace problems (name conflicts). Two classes with the same name cannot be put in the same package but can be put in two different packages because a package creates its own namespace (folder). 4. Access for the classes can be controlled. 5. Classes and interfaces of same functionality can be grouped together. 6. Because functionally all the classes are related, later their identification and determining the location become easier. 7. Java packages are used to group and organize the classes. Java Import Classes: Importing All/Single Class Java permits to import all the classes or only one class from a package. C-lang does not have this facility of including only one function from a header file. Importing Classes – Different Approaches There are three ways to access a class or an interface that exits in a different package. 1. Using fully qualified name 2. Importing only one class 3. Importing all the classes 1. Using Fully Qualified Name Fully qualified name includes writing the names of the packages along with the class name as follows. java.awt.event.ActionListener java.util.Stack Objects can be created as follows. java.util.Stack st = new java.util.Stack(); This way of using may not look nice when we use the same class a number of times in the code as readability becomes boredom. This will be a nice approach and justified when the class is used only once. 2. Importing Only One Class Java permits to import only one class also from a package as follows. import java.awt.event.ActionListener; import java.util.Stack; After importing, the objects of the classes can be created straightaway. Stack st = new Stack(); This type of approach is the best and beneficial when only one class is required from the same package. Importing one class (when other classes are not required) saves a lot of RAM space usage on the client’s machine. 3. Importing All The Classes One of the advantages of packages is to include all the classes and interfaces of a package with a single import statement. import java.awt.event.*; import java.util.*; The asterisk (*) mark indicates all the classes and interfaces of the package. After importing, the objects of the cl asses can be created straightaway. Stack st = new Stack(); This type of approach is beneficial when many classes and interfaces are required from a package. Java Import Classes: Packages are discussed as here under. 1. Predefined Packages – Java API 2. Creating Custom-Defined Packages
https://way2java.com/java-general/java-import-classes/
CC-MAIN-2019-13
refinedweb
587
57.16
Python - Delete the first node of the Linked List In this method, the first node of the linked list is deleted. For example - if the given list is 10->20->30->40 and the first node is deleted, the list becomes 20->30->40. Deleting the first node of the Linked List is very easy. If the head is not null then create a temp node pointing to head and move head to the next of head. Then delete the temp node. The function pop_front is created for this purpose. It is a 3-step process. def pop_front(self): if(self.head != None): #1. if head is not null, create a # temp node pointing to head temp = self.head #2. move head to next of head self.head = self.head.next #3. delete temp node temp = None The below is a complete program that uses above discussed concept of deleting the first node of the linked list. # node structure class Node: def __init__(self, data): self.data = data self.next = None #class LinkedList #Delete first node of the list def pop_front(self): if(self.head != None): temp = self.head self.head = self.head.next temp = None ) MyList.PrintList() #Delete the first node MyList.pop_front() MyList.PrintList() The above code will give the following output: The list contains: 10 20 30 40 The list contains: 20 30 40
https://www.alphacodingskills.com/python/ds/python-delete-the-first-node-of-the-linked-list.php
CC-MAIN-2021-31
refinedweb
227
85.49
Developing an common methodsSRAVZ Oct 5, 2012 7:00 PM All, This content has been marked as final. Show 8 replies 1. Re: Developing an common methods939520 Oct 5, 2012 8:18 PM (in response to SRAVZ)This may help: I suggest you don't bother doing so though. Whatever functionality your code provides can probably be either easily reversed engineered or more likely code that performs the same functionality can be found on-line for free. Also, like many applications out there, your code probably sucks and is not worth stealing ;) What you should really protect is access to your database as it contains sensitive data such as social security numbers. The best way of doing so is to have a web application where the Java code runs back on the server with no business code or passwords on the client machine. Any data coming from the client should be validated and checked for hacker corruption (sql injection attacks, etc). 2. Re: Developing an common methodsDrClap Oct 5, 2012 8:38 PM (in response to SRAVZ)You have to first ask yourself why you don't want anybody else to see your source code. Specifically: what costs arise if somebody sees it, and what benefits accrue to you if nobody can see it. 3. Re: Developing an common methodsrp0428 Oct 5, 2012 10:46 PM (in response to SRAVZ)> I donot want share the source code neither want some one to decompile it. > Then it may come as a shock to you that the JDK includes a disassempler as part of the package. > E:\>javap -help Usage: javap <options> <classes>... where options include: -c Disassemble the code -classpath <pathlist> Specify where to find user class files -extdirs <dirs> Override location of installed extensions -help Print this usage message -J<flag> Pass <flag> E:\> > Does that tell you anything? 4. Re: Developing an common methodsdadams07 Oct 10, 2012 1:03 PM (in response to SRAVZ)As the other replies indicate, securing the code is usually unrealistic. The only way to prevent reverse engineering to is to make the code inaccessible to users (for instance, as a server app). Other than that, there's no practical way to do it. If a user has access to code, he can figure it out. This problem is as old as the hills, & basically insoluble. If you're worried about losing money in a commercial app, find some other way to make money (such as support). 5. Re: Developing an common methodsgimbal2 Oct 10, 2012 1:43 PM (in response to dadams07) aksarben wrote:You mean to sell support on the software, not to let go of development and become a support employee as I wrongly interpreted the first time I read that ;) This problem is as old as the hills, & basically insoluble. If you're worried about losing money in a commercial app, find some other way to make money (such as support). Tis true; the market is learning that you can earn more by giving away the stuff for free first. Free to play games for example, but also Java itself is a good example. Its free, but if you want any kind of Q&A and long-term support prepare to pay for it. 6. Re: Developing an common methodsSRAVZ Oct 10, 2012 9:45 PM (in response to gimbal2)Ok Thanks for the comments I think i would not worry about security now, if i want write common methods how do i proceed .i.e. for example do i have to create an interface(may b abstarct) and extend a class which has the implementing details or do a class which has 1000 methods in it. Whats the recommendation, i understand my question is at high level but request you to guide me. Thanks 7. Re: Developing an common methodsTPD-Opitz Oct 11, 2012 8:25 AM (in response to SRAVZ) SRAVZ wrote:Looks like you should learn what Interfaces ans (abstract) classes are good for. if i want write common methods how do i proceed .i.e. for example do i have to create an interface(may b abstarct) and extend a class which has the implementing details Also it would be a good idea to learn about design pattern, because a "common" library should provide support for those... or do a class which has 1000 methods in it.Definitly not! The common line in OOP is: classes and methods should be as short as possible. Here are some points that come to my mind: <ul> <li>Provide a small interface (number public classes and methods) to your users. </li> <li>Take (a lot of) time to find proper names before your first release.</li> <li>Take eaven more time for separation of concerns (what does an object do)</li> <li>provide Factories for object creation if objects need complex configuration.</li> <li>If your clients have to pass parameters to your Methods provide interfaces they must implement or Enumerations (for fixed values).</li> <li>for complex interfaces (more than one method) provide (abstract) default implementations.</li> <li>provide a (JUnit-)Test suite to proove proper functionality of your code.</li> <li>publish your lib in a maven or ivy repository along with a dependency description so that your lib and its dependencies can be easily integrated.</li> </ul> There is a lot more to consider. Hopefully other forum members will add important points I missed... bye TPD 8. Re: Developing an common methods939520 Oct 11, 2012 4:50 PM (in response to TPD-Opitz)Here are some additional ideas: I suggest you provide javadoc at the top of each class/interface that is exposed to the user on what service the class provides. Also, javadoc for each function on what it does (not how it does it), possibly what it returns, and what input arguments its expecting. Remember the user probably can't look at your source code of the function to determine what it does. He's relying on your javadocs. I suggest you let the customer know that all function arguments are assumed not to allow null values unless otherwise specified in the function's javadoc. Lastly, you may consider adding what checked exceptions are thown and why in the function's javadoc if the function can throw them. Don't add javadoc describing unchecked exceptions (often thrown if the user violates your javadoc's contract).
https://community.oracle.com/message/10629299?tstart=0
CC-MAIN-2016-40
refinedweb
1,070
58.92
Podcast Season 5 Episode 13 August 3, 2017 | Podcasts | 8 Comments| Podcast RSS feeds: Ogg Vorbis, MP3 and Opus. Title: Fossilised tree resin In this episode: We’ve got lots of summer-styled news, Finds and an ace Voice of the Masses. What’s in the show: - News: - The Krita Foundation did have some financial bother. Welcome to the epoch 1.5 billion. Solus gets a full-time developer. Station X is one of two new UK companies offering Linux laptops. Slackware has turned 24. Many donations have now helped the Krita Foundation out of financial bother. Star Labs is the other UK company offering Linux laptops. - Finds of the Fortnight: - A selection of finds from from our #linuxvoice IRC channel on Freenode: - james_olympus; ImageMagiick handles PDFs (). - Tracey_C; OwnCloud users will need to update to the new app store or their apps will break (). - ioanagogo; easily host git repos for multiple users (). - Tracey_C; Find which PPAs you have enabled that aren’t being used (). - Graham: - Another tiling window script for KDE (). - Make your own oscilloscope music (). - Add your own hardware to Amazon Echo via Philips Hue emulation with ha-bridge (). - There’s a Playstation 3 emulator for Linux (). - Ben: - PulseAudio (). - Mike: - MikeOS music (). - MikeOS in a web browser (). - Andrew: - Amber Rudd has been at it again (). - The 1st August was Yorkshire Day (). - John Humphrys has no chatter (). - Vocalise your Neurons: Presenters: Andrew Gregory, Ben Everard, Graham Morrison and Mike Saunders. Download as high-quality Ogg Vorbis (45MB) Download as low-quality MP3 (68MB) Download the smaller yet even more awesome Opus file (19MB) Duration: 0:59:46 Theme Music by Brad Sucks. Recorded, edited and mixed with Ardour using GNU/Linux audio plugins from Calf Studio Gear. This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. 8 Comments about your little time_t discussion. It convinvced me to write a test programme to find out the size of time_t. Here it is. It is also my first C programme in years: #include <time.h> #include <stdio.h> int main ( int argc, char * argv[]) { printf(" sizeof(time_t) = %d bytes\n",sizeof(time_t)); printf(" sizeof(time_t) = %d bits\n",8*sizeof(time_t)); return 0; } It gives the following result on my PC:- sizeof(time_t) = 8 bytes sizeof(time_t) = 64 bit Run it on your own computers folks, then you will know the answer, not just have conjecture and suppose. typical! the comment software has damaged the code. You will need to correct the first two lines to include these parts of the C standard library time.h stdio.h in angle brackets
https://www.linuxvoice.com/podcast-season-5-episode-13/
CC-MAIN-2018-51
refinedweb
434
68.57
Program to find smallest 5 values in a matrix using C I'm trying to find the smallest 5 values in a 6*6 matrix using C. Note I was successful in finding the least value as well as finding the least values from a 1D array, but could not make it to iterate and get the least 5 values of the matrix using 2D. Any ideas appreciated. [The sample of a matrix I want to sort and get the lowest values] See also questions close to this topic -; } - How to find group of key pair in python I have Dict : resultDict = {"aaa10" : 10, "eee343", 88 ,"aaa15" : 40,"bbb60" : 10,"aaa13" : 80,"bbb60" : 10,"aaa30" : 50,"ccc99" : 100, "aaa56" : 10,"ccc67" : 10,"aaa78" : 40, "ddd78" :88 , "eee343" : 88 } I'd like to custom group it by dict key & first 3 letter is fixed as a pattern. Desire Output : Group,1 # For Group-1 include eee* and ccc* pattern with value and so on eee343, 88 eee100, 10 ... ccc*, <value> Group,2 bbb* <value> Group,3 ddd* <value> aaa*, <value> My code: print("Group","1") for key,value in resultDict.items(): if re.findall ("eee", key) or ("ccc", key): print(key,value) print("Group","2") for key,value in resultDict.items(): if re.findall ("bbb", key): print(key,value) print("Group","3") for key,value in resultDict.items(): if re.findall ("ddd", key) or ("aaa", key): print(key,value) But I have around 10 group and Dictcontain many pair, it's take too much time to run and code is also no looks good. I'm find best solution in python to do quickly and better way ! Thanks in advance :) - React not re-rendering after sorting I displayed a list of data from const array, and then I want to sort the arraythen update it, after that I expect there will be a re-renders, but it's not happening. Is there something wrong here? thanks before const array = []; const finalData = (lastData) => { array.push(lastData); dispatch({ type: "LAST_DATA", payload: array }); }; const sortID = (params) => { if (params === "asc") { array.sort((a, b) => { return a.id - b.id; }); } else if (params === "des") { array.sort((a, b) => { return b.id - a.id; }); } }; - Sort list of iterables by nth element which might not be present in all of them Given a list of iterables: li = [(1,2), (3,4,8), (3,4,7), (9,)] I want to sort by the third element if present, otherwise leave the order unchanged. So here the desired output would be: [(1,2), (3,4,7), (3,4,8), (9,)] Using li.sort(key=lambda x:x[2])returns an IndexError. I tried a custom function: def safefetch(li, idx): try: return li[idx] except IndexError: return # (ie return None) li.sort(key=lambda x: safefetch(x, 2)) But Nonein sorting yields a TypeError. Broader context: I first want to sort by the first element, then the second, then the third, etc. until the length of the longest element, ie I want to run several sorts of decreasing privilege (as in SQL's ORDER BY COL1 , COL2), while preserving order among those elements that aren't relevant. So: first sort everything by first element; then among the ties on el_1 sort on el_2, etc.. until el_n. My feeling is that calling a sort function on the whole list is probably the wrong approach. (Note that this was an "XY question": for my actual question, just using sortedon tuples is simplest, as Patrick Artner pointed out in the comments. But the question is posed is trickier.) - Drawing Drop shadows around a rectangle in open gl 2D I am trying to create a rectangle with shadow around it. i have searched over the internet and found some docs which produces shadow using following methods: - Drawing Multiple rectangles with different alpha values - Using Shaders to create shadow Here is the sample picture of what i need to achieve: Here is a sample code of what i am working on: float color[4] = {1.0, 0.3, 0.3, 1.0}; struct wlr_box box = { .x = view->current_box.x, .y = view->current_box.y, .width = 200, .height = 200 }; wlr_render_rect(renderer, &box, color,output->wlr_output->transform_matrix); Here wlr_boxis the coordinates and width for the rectangle and wlr_render_rect()creates a rectangle with those coordinates and fill it with the given color. what i need to achieve is to add shadow around this rectangle. - WorldToScreenPoint failure Currently I'm making a "weapon wheel" like that in GTA gun selection, however I'm having trouble with the last step. Currently I run the following when opening the menu: private void CheckForCurrentDefenseColor() { if (playerCamera == null) return; for (int i = 0; i < pos.Length; i++) { pos[i] = playerCamera.WorldToScreenPoint(dots[i].position); } mousePos = Input.mousePosition; ... } But even though the for statement is running the positions don't get set to a screen position and rather stay as world positions, is there anything I'm doing wrong or missed here? Also if you think it is in another part of my code let me know, I can change the question to and add more blocks of code. - make player shoot bullets to the mouse pointer coordinates I have this class Bullet and I want to shoot a bullet from my hero position to the mouse cursor position. Here's the Bullet class: public class Bullet extends Entity { private float bulletSpeed = 1.2f; private float dx, dy; public Bullet(Handler handler, float x, float y, int width, int height) { super(handler, x, y, width, height); } @Override public void tick() { if (handler.getMouseManager().isLeftPressed()) { x += bulletSpeed; } } @Override public void render(Graphics g) { g.setColor(Color.RED); g.fillOval((int)(x - handler.getGameCamera().getxOffset()), (int)(y - handler.getGameCamera().getyOffset()), width, height); } } Now if I click the left button the bullet is moving to the right but only if I keep the click pressed. How to make the ball (bullet) move smoothly after one click? And also, now it starts from the position where I create this bullet. How to make the start position from the hero position? This is the hero class package BunnyFights.Entities.Creatures; import BunnyFights.Game; import BunnyFights.Handler; import BunnyFights.gfx.Assets; import java.awt.*; import java.awt.image.BufferedImage; public class Player extends Creature { private BufferedImage image; public Player(Handler handler, float x, float y) { super(handler, x, y, Creature.DEFAULT_CREATURE_WIDTH, Creature.DEFAULT_CREATURE_HEIGHT); image = Assets.heroLeft; bounds.x = 16; bounds.y = 32; bounds.width = 32; bounds.height = 32; } @Override public void tick() { getInput(); move(); handler.getGameCamera().centerOnEntity(this); if(handler.getKeyManager().left == true) { image = Assets.heroLeft; } if(handler.getKeyManager().right == true) { image = Assets.heroRight; } } public void getInput() { xMove = 0; yMove = 0; if (handler.getKeyManager().up) { yMove = -speed; } if (handler.getKeyManager().down) { yMove = speed; } if (handler.getKeyManager().left) { xMove = -speed; } if (handler.getKeyManager().right) { xMove = speed; } } @Override public void render(Graphics g) { g.drawImage(image, (int)(x - handler.getGameCamera().getxOffset()), (int)(y - handler.getGameCamera().getyOffset()), width, height, null); // g.setColor(Color.red); // g.fillRect((int)(x + bounds.x - handler.getGameCamera().getxOffset()), // (int)(y + bounds.y - handler.getGameCamera().getyOffset()), // bounds.width, bounds.height); } } I tried to make a function in Bullet class but I need to access the coordinates of the player and I don't know how to access them. public void ShootBullet() { double bulletVelocity = 1.0; //however fast you want your bullet to travel //mouseX/Y = current x/y location of the mouse //originX/Y = x/y location of where the bullet is being shot from double angle = Math.atan2(handler.getMouseManager().getMouseX() - player.getX(), handler.getMouseManager().getMouseY() - player.getY()); double xVelocity = (bulletVelocity) * Math.cos(angle); double yVelocity = (bulletVelocity) * Math.sin(angle); x += xVelocity; y += yVelocity; } Here in this function I need to subtract the coordinates of my player but I don't know how to acces them. I want to use this function in tick() method so I can't pass a Player argument to the function because the I can't call it. How to proceed?
https://quabr.com/67099704/program-to-find-smallest-5-values-in-a-matrix-using-c
CC-MAIN-2021-21
refinedweb
1,326
57.37
In this document, we discuss an alternative approach for solving the 2D Poisson problem: In a previous example, we applied the Neumann boundary conditions by adding PoissonFluxElements (elements that apply the Neumann (flux) boundary conditions on surfaces of higher-dimensional "bulk" Poisson elements) to the Problem's Mesh object. The ability to combine elements of different types in a single Mesh object is convenient, and in certain circumstances absolutely essential, but it can cause problems; see the discussion of the doc_solution(...) function in the previous example. Furthermore, it seems strange (if not wrong!) that the SimpleRectangularQuadMesh – an object that is templated by a particular (single!) element type – also contains elements of a different type. We shall now demonstrate an alternative approach, based on the use of multiple meshes, each containing only one type of element. The ability to use multiple Meshes in a single Problem is an essential feature of oomph-lib and is vital in fluid-structure interaction problems, where the fluid and solid domains are distinct and each domain is discretised by a different element type. We consider the same problem as in the previous example and choose a source function and boundary conditions for which the function is the exact solution of the problem. The specification of the source function and the exact solution in the namespace TanhSolnForPoisson is identical to that in the single-mesh version discussed in the previous example. The driver code is identical to that in the single-mesh version discussed in the previous example. The problem class is virtually identical to that in the single-mesh implementation: The only difference is that we store pointers to the two separate Mesh objects as private member data, and provide a slightly different implementation of the function create_flux_elements(...). [See the discussion of the 1D Poisson problem for a more detailed discussion of the function type PoissonEquations<2>::PoissonSourceFctPt.] As before we start by creating the "bulk" mesh and store a pointer to this mesh in the private data member TwoMeshFluxPoissonProblem::Bulk_mesh_pt: Next, we construct an (empty) Mesh and store a pointer to it in the private data member TwoMeshFluxPoissonProblem::Surface_mesh_pt. We use the function create_flux_elements(...), to create the prescribed-flux elements for the elements on boundary 1 of the bulk mesh and add them to the surface mesh. We have now created all the required elements and can access them directly via the two data members TwoMeshFluxPoissonProblem::Bulk_mesh_pt and TwoMeshFluxPoissonProblem::Surface_mesh_pt. However, many of oomph-lib's generic procedures require ordered access to all of the Problem's elements, nodes, etc. For instance, Problem::newton_solve(...) computes the entries in the global Jacobian matrix by adding the contributions from all elements in all (sub-)meshes. Ordered access to the Problem's elements, nodes, etc is generally obtained via the Problem's (single!) global Mesh object, which is accessible via Problem::mesh_pt(). The Problem base class also provides a private data member Problem::Sub_mesh_pt (a vector of type Vector<Mesh*>) which stores the (pointers to the) Problem's sub-meshes. We must add the pointers to our two sub-meshes to the problem, and use the function Problem::build_global_mesh() to combine the Problem's sub-meshes into a single, global Mesh that is accessible via Problem::mesh_pt(): The rest of the constructor is identical to that in the single-mesh implementation. We pin the nodal values on the Dirichlet boundaries, pass the function pointers to the elements, and set up the equation numbering scheme: The only (minor) change to Problem::actions_before_newton_solve() is that the nodes on the boundaries of the bulk (!) mesh are now obtained via the Bulk_mesh_pt pointer, rather than from the combined Mesh, pointed to by Problem::mesh_pt(). While this may appear to be a trivial change, it is a potentially important one. Recall that the surface mesh is an instantiation of the Mesh base class. We created the (empty) mesh in the Problem constructor (by calling the default Mesh constructor), and used the function create_flux_elements(...) to add the (pointers to the) prescribed-flux elements to it. The surface mesh therefore does not have any nodes of its own, and its lookup schemes for the boundary nodes have not been set up. The combined mesh, pointed to by Problem::mesh_pt(), therefore only contains the boundary lookup scheme for the bulk mesh. Hence, the combined mesh has four boundaries and their numbers correspond to those in the bulk mesh. If we had set up the boundary lookup scheme in the surface mesh, the constructor of the combined Mesh, would have concatenated the boundary lookup schemes of the two sub-meshes so that the four boundaries in sub-mesh 0 would have become boundaries 0 to 3 in the combined mesh, while the two boundaries in the surface mesh would have become boundaries 4 and 5 in the combined Mesh. While the conversion is straightforward, it is obvious that Mesh boundaries are best identified via the sub-meshes. The post-processing, implemented in doc_solution(...) is now completely straightforward. Since the PoissonFluxElements only apply boundary conditions, they do not have to be included in the plotting or error checking routines, so we perform these only for the elements in the bulk mesh. We mentioned that the Mesh constructor that builds a combined Mesh from a vector of sub-meshes, concatenates the sub-meshes' element, node and boundary lookup schemes. There are a few additional features that the "user" should be aware of: Problem::build_global_mesh()will issue a warning and ignore any duplicates. This is because the Problem'sglobal Meshobject is used by many functions in which operations must be performed exactly once for each node or element. For instance, in time-dependent problems, the function Problem::shift_time_values(), which is called automatically by Problem::unsteady_newton_solve(...), advances all "history values" by one time-level to prepare for the next timestep. If this was done repeatedly for nodes that are common to multiple sub-meshes, the results would be incorrect. If your problem requires a combined mesh in which duplicates are allowed, you must construct this mesh yourself. Mesh::add_boundary_node()"tells" the mesh's constituent nodes which boundaries they are located on. What happens if a (sub-)mesh for which this lookup scheme has been set up becomes part of a global Mesh? For various (good!) reasons, the Meshconstructor does not update this information. The boundary number stored by the nodes therefore always refers to the boundary in the Meshthat created them. If this is not appropriate for your problem, you must construct the combined mesh yourself. A pdf version of this document is available.
http://oomph-lib.maths.man.ac.uk/doc/poisson/two_d_poisson_flux_bc2/html/index.html
CC-MAIN-2021-39
refinedweb
1,094
51.18
Introduction Hello there! Welcome to my very first Flutter 101 post where I introduce you to Flutter basics! Inside an App, the App should give the user feedback when something happens. For example, when you click a Button to save something inside an App the user should be notified that something happened. Feedback improves the user experience a lot. Today i want you to show how to do that in Flutter. Let's go! What's a Snackbar? In Flutter everything is a Widget. So there is no surprise that we also have a Widget for something that provides the user with feedback. In Flutter a Widget that does exactly this job is the SnackBar. The SnackBar widget is an easy way to quickly display a lightweight message at the bottom of the screen and its implemented in a few minutes. In addition, its highly customizable (like everything in this beautiful framework) and you can change things like for example the duration of how long the message should be visible. Enough theory let's jump into some code! Snackbar in Flutter First, we start with the entry of the application inside the main.dart: import 'package:fliflaflutter/topics/snackbar/app.dart'; import 'package:flutter/material.dart'; void main() { runApp(MyApp()); } Nothing special just a simple starting point. Next up is the heart of our application: import 'package:fliflaflutter/topics/snackbar/snackbar.dart'; import 'package:flutter/material.dart'; class MyApp extends StatelessWidget { @override Widget build(BuildContext context) { return MaterialApp( title: 'Snackbar', theme: ThemeData( primarySwatch: Colors.blue, ), home: Scaffold( appBar: AppBar( title: Text('Snackbar in action'), ), body: Snackbar(), ), ); } } Here we have our MaterialApp Widget that holds the central point of our app. Inside the home property, we have a Scaffold, and inside it as a property the body that holds our Snackbar. The body: Snackbar(), is not the SnackBar widget itself. It's a custom widget from me and within it i hold the code for the real snackbar. One important thing to mention here is that a SnackBar needs to be wrapped within a Scaffold. Why? Because the SnackBar uses the ScaffoldMessengerState to show the SnackBar Widget for the right Scaffold instance. Now let's jump into the code that is the reason you currently reading this. The actual SnackBar implementation! import 'package:flutter/material.dart'; // INFOS & TIPS: // Snackbar needs an Scaffold around it class Snackbar extends StatelessWidget { @override Widget build(BuildContext context) { return Center( child: ElevatedButton( onPressed: () { final snackBar = SnackBar( content: Text('Have a nice weekend!'), action: SnackBarAction( label: 'Close', onPressed: () {}, ), ); ScaffoldMessenger.of(context).showSnackBar(snackBar); }, child: Text('Open Snackbar'), ), ); } } Let me explain me the code. First, we want the button to show the SnackBar in the middle of our screen. Center Widget helps us here. Then we use an ElevatedButton Widget to have an actual button. So far so good. Then we come to the onPressed property where we define what happens when we click on the button. And here the magic happens! We define a new variable with the type SnackBar and initialize it with the following properties: - content: Defines the text inside the message - action: Calls the SnackBarAction class where we can define the label for the button inside the SnackBar message and where we could also define what should happen when closing the message. It will close the SnackBar anyway but we could also let other things happen if we want to. Like i mentioned above we could customize it further and define properties like duration and width: duration: const Duration(milliseconds: 1200), // Defines when the SnackBar should dissapear automatically width: 120.0, // Width of the SnackBar. Have a look here if you want to know more about the SnackBar properties and what you can customize. And that's it! And this is how i looks like: Conclusion I hope you learned something and now know how to implement a SnackBar in your next Flutter application! Stay connected to me and my content on Twitter. I love to improve myself every single day even if it's just a tiny bit! Stay safe and healthy guys! And as always: develop yourself! Discussion (1) I really like it! if you are interested in this topic, then check this article - dev.to/pablonax/free-vs-paid-flutt...
https://dev.to/danytulumidis/flutter-101-a-simple-snackbar-in-flutter-5d4
CC-MAIN-2021-49
refinedweb
709
57.98
Overview of developing Windows applications for USB devices Summary - Guidelines for choosing the right programming model - UWP app and desktop app developer experience Important APIs. "Custom device" in this context means, a device for which Microsoft does not provide an in-box class driver. Instead, you can install WinUSB (Winusb.sys) as the device driver. Choosing a programming model If you install Winusb.sys, here are the programming model options: Windows 8.1 provides a new namespace: Windows.Devices.Usb. The namespace cannot be used in earlier version of Windows. Other Microsoft Store resources are here: UWP app. Windows desktop app for a USB device Before Windows 8.1, apps that were communicating through Winusb.sys, were desktop apps written by using WinUSB Functions. In Windows 8.1, the API set has been extended. Other Windows desktop app resources are here: Windows desktop app. The strategy for choosing the best programming model depends on various factors. Will your app communicate with an internal USB device? The APIs are primarily designed for accessing peripheral devices. The API can also access PC internal USB devices. However access to PC internal USB devices from a UWP app is limited to a privileged app that is explicitly declared in device metadata by the OEM for that PC. Will your app communicate with USB isochronous endpoints? If your app transmits data to or from isochronous endpoints of the device, you must write a Windows desktop app. In Windows 8.1, new WinUSB Functions have been added to the API set that allow a desktop app to send data to and receive data from isochronous endpoints. Is your app a "control panel" type of app? UWP apps are per-user apps and do not have the ability to make changes outside the scope of each app. For these types of apps, you must write a Windows desktop app. Is the USB device class supported classes by UWP apps? Write a UWP app if your device belongs to one these device classes. name:cdcControl, classId:02 * * name:physical, classId:05 * * name:personalHealthcare, classId:0f 00 00 name:activeSync, classId:ef 01 01 name:palmSync, classId:ef 01 02 name:deviceFirmwareUpdate, classId:fe 01 01 name:irda, classId:fe 02 00 name:measurement, classId:fe 03 * name:vendorSpecific, classId:ff * * Note If your device belongs to DeviceFirmwareUpdate class, your app must be a privileged app. If your device does not belong to one the preceding device classes, write a Windows desktop app. Driver requirement Code samples Development tools Feature implementation Documentation Related topics Universal Serial Bus (USB)
https://docs.microsoft.com/en-us/windows-hardware/drivers/usbcon/developing-windows-applications-that-communicate-with-a-usb-device
CC-MAIN-2020-45
refinedweb
429
58.79
Ok, here's what I got. Code : import java.util.Random; import java.util.Scanner; /* * To change this license header, choose License Headers in Project Properties. * To change this template file, choose Tools | Templates * and open the template in the editor. */ public class demoGuessGame { public static void main(String[] args) { Random any = new Random(); Scanner input = new Scanner (System.in); try { int maxNum, maxTries, guess; boolean win = false; maxNum = any.nextInt(10); maxTries = 0; while (win == false) { System.out.print("\n" + "Pick a number between 1 and 10: "); guess = input.nextInt(); maxTries++; if (guess == maxNum) { win = true; } else if (guess <= maxNum) { System.out.println("\n" + "The number you picked is TOO LOW."); } else if (guess >= maxNum) { System.out.println("\n" + "The number you picked is TOO HIGH."); } } System.out.println("\n" + "YOU WIN!"); System.out.println("The number was: " + maxNum); System.out.println("It took this many tries to get it right: " + maxTries); } // end of the try section catch(Exception msg) { input.next(); System.out.println("\n" + "Sorry, invalid input- " + msg + " exception try again "); } //end of the catch section } } I want to make it loop again when asked, "Do you want to play again?" But I don't know how. :( Does anyone know how to?
http://www.javaprogrammingforums.com/%20java-theory-questions/36937-guessing-game-demo-wanting-add-doagain-loop-but-i-dont-know-how-printingthethread.html
CC-MAIN-2014-52
refinedweb
205
70.8
Sometimes you just need a little space, right? Tenants in FusionAuth can provide logical separation of users and applications while letting admins manage one FusionAuth installation. All editions of FusionAuth support multiple tenants within the same FusionAuth installation. Using this feature eases operational burdens while still maintaining logical divisions. Why set up multi-tenant FusionAuth? Pretend you have a SaaS application which lets people manage their todos. It’s a glorious task management application. Your company, TodoInc, sells accounts to individuals. That application lives at app.todo.com. You are using FusionAuth as your user data store, and you have other applications using it as well: a forum application, Zendesk for customer support and GSuite. However, Company1, a large multinational corporation with a big checkbook, approaches you. They want a premium edition; they have decided that your todo application is too good to live without. They want their application to be separate, private labeled, and located at todo.company1.com. They’re willing to pay a premium price, as well. You realize you can offer this easily by modifying your application to respond to multiple hostnames. The logic is pretty straightforward. But what about your users? The accounts at todo.company1.com should be entirely separate from the accounts at app.todo.com. What if the CEO of Company1 has an account at app.todo.com with the email address ceo@company1.com, and then signs up for a corporate account with that same email address? Will those be considered the same user? They shouldn’t be. It doesn’t make sense to mix personal and business tasks, and even though the email address for these two accounts is the same, they should be separate. Furthermore, suppose you sell Organization2 a premium subscription as well, to be hosted at todo.organization2.org. These user accounts should be independent too, with no possibility of collision, even if the CEO of Company1 volunteers there, and registers an account there with the email address ceo@company1.com to keep track of their tasks at Organization2. Each standalone application’s login pages must also be branded; your clients want their apps customized to match their websites. Oh, and by the way, Company1 wants users to authenticate against ActiveDirectory, and Organization2 wants users to be able to login with their Facebook accounts. What to do? We got ya. You can handle this scenario in two different ways with FusionAuth. Separate FusionAuth servers The first option is running separate FusionAuth instances on separate servers, each with their own backing database. This is easy to understand and has some strengths: - You can scale each installation independently. - The servers can be located in different legal jurisdictions. - The FusionAuth admin UI can be made available to premium clients if desired. However, operationally this choice leads to complexity. There’s the cost of running and maintaining the different servers. You’ll need to make sure that configuration such as admin accounts, webhooks, and API keys are synced. When any employee of TodoInc departs, you’ll need to ensure you remove accounts across all the servers. You’ll also need to automate your server rollout process so that todo.company1.com doesn’t get left behind when you upgrade. And you’ll need some way to let your customer service folks know which installation is associated with which client, so that when a request comes in to reset a password, they aren’t hunting across different servers. This isn’t a big issue when you have only three private labeled accounts, but when you have twenty or two hundred, it becomes problematic. Tenants to the rescue FusionAuth provides an easier option: tenants. Tenants are a first class construct in FusionAuth. When you set up a new instance, there is one tenant installed: “Default”. And sometimes one is enough. But you can create as many as you’d like. From the perspective of a user signing in, each tenant is a separate installation. Each tenant has its own email templates, themes, application configurations and users. API keys can be scoped to a tenant, so if you want to give a client an API key to allow them to create their own integrations, you can. This allows for tighter integrations between your todo application and your clients’ systems, and is a nice premium feature to offer with no cost to you. You can change the token issuer, password rules, and many other settings at the tenant level. The look and feel of the login, forgot password and all other OAuth pages can be customized per tenant. You can also duplicate an existing tenant to easily start from a solid set of defaults. When you have two or more tenants, the admin UI displays a new column showing you with which tenant each application is associated. Normally a user’s email address is unique across FusionAuth. But each tenant is a separate userspace, so you can have two different user accounts with the same email, but different data, passwords and application associations. For administrators, there are significant benefits with tenants. You get ample separation as mentioned above. But as an admin, you have one view into all system activity. You also only have one FusionAuth installation to manage, secure, and upgrade. Operations has one place to go to set up new API keys or webhooks. If your customer service reps need to reset a password, they don’t have to track down the correct FusionAuth installation. Central user management makes their lives easier. How to create tenants via API Even better, tenant creation can be automated. Every time TodoInc sells a premium subscription, code can automatically create a corresponding tenant, like so: def create_new_tenant(generic_tenant_client, name) # create a new tenant based on the default tenant tenants_response = generic_tenant_client.retrieve_tenants if tenants_response.status != 200 puts "Unable to retrieve tenants." return end default_tenant_id = tenants_response.success_response.tenants.select { |t| t.name == 'Default' }[0].id default_tenant_theme_id = tenants_response.success_response.tenants.select { |t| t.name == 'Default' }[0].themeId new_tenant_request = { "sourceTenantId": default_tenant_id, tenant: { "name" : "New client - "+name }} new_tenant_response = generic_tenant_client.create_tenant(nil, new_tenant_request) if new_tenant_response.status != 200 puts "Unable to create tenant." puts new_tenant_response.error_response return false end new_tenant = new_tenant_response.success_response.tenant [default_tenant_theme_id, new_tenant] end In this code snippet, we find the tenant with the name Default and duplicate it, creating a new tenant with the same settings. The full code, which creates and modifies a new theme is available if you want to take a look. This example is written in ruby, but you can use any of our client libraries to automate this process. Caveats Once you create a second tenant, API access typically needs to pass in a tenant identifier. This isn’t difficult, but you should plan for it. For ruby, it’s as simple as setting the tenant id on a client object: # ... tenant_client = FusionAuth::FusionAuthClient.new(API_KEY, '') tenant_client.set_tenant_id(TENANT_ID) user = tenant_client.retrieve_user_by_email('jared@piedpiper.com') # ... In conclusion With tenants, you get operational simplicity as well as logical separation of data. Your ops and customer success teams will thank you for the former, and you’ll be able to private label your offering because of the latter.
https://fusionauth.io/blog/2020/06/30/private-labeling-with-multi-tenant
CC-MAIN-2022-33
refinedweb
1,190
50.33
three of our tutorial series on building a fully-featured React Native chat app. In our previous two parts, we covered basic messaging, UI and crude authentication, and added message history. If you haven’t already, complete parts one and two, as they lay the groundwork for what we’ll cover going forward. In this part, we will learn how to display an online user count to identify how many users are in the chatroom in realtime. To implement this feature in our app, we are going to use PubNub Presence. Activating PubNub Presence To activate Presence, navigate to the PubNub Admin Dashboard. The dashboard will have multiple configurations which you can see in the screenshot below. The minimum value set for the interval is 10 seconds. This will help when you are in dev mode. To reduce costs in production, raise this number to a larger interval length, like 1 minute. Users will trigger the following presence events in the channels they subscribe to: - join — A user subscribes to a channel. - leave — A user unsubscribes from a channel. - timeout — A timeout event is fired when a connection to a channel is severed and the subscriber has not been seen in 320 seconds (just over 5 minutes). This timeout interval can be customized using the heartbeat and heartbeat interval settings (SDK v3.6 or later). - state-change — A state-change event will be fired any time the state is changed using the state API (function signature varies by SDK). - interval — An occupancy count is sent every 10 seconds (the default setting for the Presence Interval property, which is also configurable in the Presence add-on settings in your admin dashboard). Activating Presence in our React Native App In this step, we are going to activate Presence in our React Native chat app that we’ve built up in our previous parts. First, you need to change the hardcoded “MainChat1” room name to a variable as shown in the code snippet below (MainChat.js file): const RoomName = "MainChat1"; Then, you need to add UUID to identify the device and add a timeout for dev mode, which you can see in the code snippet below (MainChat.js file): this.pubnub = new PubNubReact({ publishKey: "yourapikeyhere", subscribeKey: "yourapikeyhere", uuid: this.props.navigation.getParam("username"), presenceTimeout: 10 }); Then, you need to activate user presence when subscribing to the channel by setting the withPresence state to true like in the code snippet below: this.pubnub.subscribe({ channels: [RoomName], withPresence: true }); This completes the setup of Presence in our application. Handling Presence State In this step, we are going to handle the presence states. Start off by creating a new state object. This will handle data like messages, online users, and online user count like shown in the code snippet below. Put this in the constructor of MainChat : this.state = { isTyping: false, messages: [], onlineUsers: [], onlineUsersCount: 0 }; Handling Presence Events Here we are going to handle the Presence event. First, you need to create a new function named PresenceStatus and get presence state using a getPresence function and pass two parameters. They are the room name and a callback function, which is shown in the code snippet below. Put these in the MainChat class: PresenceStatus = () => { this.pubnub.getPresence(RoomName, presence => {}) } _Note: There are two types of events that we need to handle, which are user-generated and PubNub generate_d. PubNub User Presence Events These are user-generated events such as logging in, logging out, etc. When a user clicks on login, it generates a join event. When a user clicks on logout to close the app, it generates a leave event and when a user doesn’t do anything for a while, it generates a timeout event. Join Event The join event is very simple. Whenever a user successfully logs into the app, they join the chat room and receive a join event. This can be done by using the code in the following code snippet: PresenceStatus = () => { this.pubnub.getPresence(RoomName, presence => { if (presence.action === "join") { let users = this.state.onlineUsers; users.push({ state: presence.state, uuid: presence.uuid }); this.setState({ onlineUsers: users, onlineUsersCount: this.state.onlineUsersCount + 1 }); } }) } First, we need to check if the presence event is a “join.” Then we add the new user to the state collection. Finally, we update the React State instance. Leave or Timeout Events Next, we generate a leave event and timeout event whenever the user logs out or times-out of the chatroom. When a user clicks “back” or anything that initiates the componentDidMount event, the app generates a leave event. When a user is idle for too long, they get pushed out of the chat room. The code to implement this is provided in the code snippet below. Add this to your PresenceStatus function: if (presence.action === "leave" || presence.action === "timeout") { let leftUsers = this.state.onlineUsers.filter( users => users.uuid !== presence.uuid ); this.setState({ onlineUsers: leftUsers }); const length = this.state.onlineUsers.length; this.setState({ onlineUsersCount: length }); this.props.navigation.setParams({ onlineUsersCount: this.state.onlineUsersCount }); } First, we get other users and filter them out of the state, then put data back into the state. Then we need to count online users and put the count back into the onlineUsersCount state, as shown in the code snippet. PubNub Network Presence Events This part of the tutorial addresses the events generated by PubNub. The interval event is a PubNub network generated event. We can use this when a user comes online on the channel. They will be able to see a realtime update such as online users joining/leaving. Also, users can identify which other users are online. We will do the same logic as user-generated events, but by checking conditions in the interval event data. Join Event Add this to your PresenceStatus function: if (presence.action === "interval") { if (presence.join || presence.leave || presence.timeout) { let onlineUsers = this.state.onlineUsers; let onlineUsersCount = this.state.onlineUsersCount; if (presence.join) { presence.join.map( user => user !== this.uuid && onlineUsers.push({ state: presence.state, uuid: user }) ); onlineUsersCount += presence.join.length; } } } First, we are checking for an interval event. If new member matches the criteria, then we get current state data and create a new state instance. This gets pushed to the array of online users as seen in the code snippet above. Next we are going to check the state after a join event. If the new UUID does not match a UUID in the collection, then we assume that the new user needs to be added to the onlineUsers collection. Then we increase the onlineUsersCount number. Leave Event if (presence.leave) { presence.leave.map(leftUser => onlineUsers.splice(onlineUsers.indexOf(leftUser), 1) ); onlineUsersCount -= presence.leave.length; } For the leave event, we are going to keep the track of users that leave, and splice them from onlineUsers array. Then we need to decrease the onlineUsersCount instance to decrease the count of online users. Timeout Event Here, we need to do the same process as the leave event. if (presence.timeout) { presence.timeout.map(timeoutUser => onlineUsers.splice(onlineUsers.indexOf(timeoutUser), 1) ); onlineUsersCount -= presence.timeout.length; } We add the update state logic, like in the code snippet below: this.setState({ onlineUsers, onlineUsersCount }); Next, update the componentWillMount function with the following code: componentWillMount() { this.props.navigation.setParams({ onlineUsersCount: this.state.onlineUsersCount, leaveChat: this.leaveChat.bind(this) }); this.pubnub.subscribe({ channels: [RoomName], withPresence: true }); this.pubnub.getMessage(RoomName, m => { this.setState(previousState => ({ messages: GiftedChat.append(previousState.messages, m["message"]) })); }); // this.hereNow(); this.PresenceStatus(); } Now we are done with implementing code for PresenceStatus. Display Presence Data In this section, we are going to display state data that we manipulated in the last section. First, we are going to display data on the navbar. Then we are going to display the user’s avatars below the navbar. Display User Count Navbar is a component that is generated from the React Navigation package. We need to pass the onlineUserCount state to the navbar. Then we need to set parameters from outside the navbar scope to get the parameter from the navbar. Setting State Data on Update We have multiple lines where we need to update onlineUserCount. So, we are going to pass it as a parameter, as shown in the code snippet below (MainChat.js): componentWillMount() { t his.props.navigation.setParams({ onlineUsersCount: this.state.onlineUsersCount }); } Online User Count Data in The NavBar To update the navbar with state data, we can get data from parameters, like in the code snippet below: static navigationOptions = ({ navigation }) => { return { headerTitle: navigation.getParam("onlineUsersCount") + " member online" }; }; We need to add navigationOptions which is the variable used to configure the navbar. Then, using getParam and onlineUsersCount , we can set the text in the UI. Display Online Users in Chat with an Avatar In the next step, we will implement avatars for user presence below the navbar. For this, you need to copy the code provided in the code snippet below and replace it in your MainChat.js file. render() { let username = this.props.navigation.getParam("username"); return ( <View style={{ flex: 1 }}> <View style={styles.online\_user\_wrapper}> { this.state.onlineUsers.map((item, index) => { return ( <View key={item.uuid}> <Image key={item.uuid} style={styles.online\_user\_avatar} source={{ uri: "" + item.uuid }} /> </View> ); }) } </View> <GiftedChat messages={this.state.messages} onSend={messages => this.onSend(messages)} user={{ \_id: username, name: username, avatar: "" + username }} /> </View> ); } } Next, we are going to create a main wrapper that wraps a new section and chat view. Then we need to add a wrapper to contain the avatar and iterate through the onlineUsers state collection. We will create a unique image for each user using robohash. Then, finally complete it by adding some CSS, like in the code snippet below (MainChat.js): const styles = StyleSheet.create({ online\_user\_avatar: { width: 50, height: 50, borderRadius: 20, margin: 10 }, container: { flex: 1, justifyContent: "center", alignItems: "center", backgroundColor: "#F5FCFF" }, welcome: { fontSize: 20, textAlign: "center", margin: 10 }, online\_user\_wrapper: { height: "8%", justifyContent: "flex-end", alignItems: "center", flexDirection: "row", backgroundColor: "grey", flexWrap: "wrap" } }); Chat Logout Functionality Before we consider this tutorial section complete, let’s implement a logout feature for our chat app. When the user clicks on logout , the user presence data needs to be adjusted. Also, the user must unsubscribe from the channel. Create a leaveChat function Here, we are going to create a leaveChat function like shown in the code snippet below (MainChat.js): leaveChat = () => { this.pubnub.unsubscribe({ channels: [RoomName] }); return this.props.navigation.navigate("Login"); }; When the function is executed, the user will be unsubscribed from the PubNub channel and redirected to the Login screen. We need to invoke this function in two places, which are mentioned below. Before we implement a componentWillUnmount function, we need to implement the leaveChat function as shown in the code snippet below: componentWillUnmount() { this.leaveChat(); } Add Logout to the NavBar We are adding the function to the logout button click in the navbar. This is to cover a user logging out manually. We will save some screen space by passing it to the header, like in the code snippet below. Modify your componentWillMount function: componentWillMount() { this.props.navigation.setParams({ onlineUsersCount: this.state.onlineUsersCount, leaveChat: this.leaveChat.bind(this) }); ... Lastly, we are going to remove the back button and replace it with our logout button in the header. Add this code below the constructor function in MainChat.js: static navigationOptions = ({ navigation }) => { return { headerTitle: navigation.getParam("onlineUsersCount", "No") + " member online", headerLeft: null, headerRight: ( <Button onPress={() => { navigation.state.params.leaveChat(); }} ) }; This completes our tutorial to display online users using PubNub Presence! Using your command line, run the React Native mobile app simulator. react-native run-ios react-native run-android Wrapping Up In this tutorial, we learned how to keep tabs on how many users are currently in the chatroom using PubNub Presence. Your chat app is growing in features, but there’s a lot more we can do. Keep an eye out for subsequent posts on typing indicators, unread message count, and more. Originally published at on August 2, 2019. Discussion (0)
https://practicaldev-herokuapp-com.global.ssl.fastly.net/kris/building-a-react-native-chat-app-part-three-online-user-count-pubnub-2k37
CC-MAIN-2021-25
refinedweb
1,998
58.99
The Lightly Platform¶ The lightly framework itself allows you to use self-supervised learning in a very simple way and even create embeddings of your dataset. However, we can do much more than just train and embed datasets. Once you have an embedding of an unlabeled dataset you might still require some labels to train a model. But which samples do you pick for labeling and training a model? This is exactly why we built the Lightly Data Curation Platform. The platform helps you analyze your dataset and using various methods pick the relevant samples for your task. The video below gives you a quick tour through the platform: Basic Concepts¶ The Lightly Platform is built around datasets, tags, embeddings, samples and their metadata. Learn more about the different concepts in our Glossary. Create a Dataset from a local folder or cloud bucket¶ There are several different ways to create a dataset on the lightly platform. The baseline way is to upload your local dataset including all images or videos to the Lightly platform. If you don’t have your data locally, but rather stored at a cloud provider like AWS S3, Google Cloud Storage or Azure, you can create a dataset directly referencing the images in your bucket. It will keep all images and videos in your own bucket and only stream them from there if they are needed. This has the advantage that you don’t need to upload your data to Lightly and can preserve its privacy. If you want to let Lighlty take care of the data handling and upload to our servers (European location). For datasets stored in your cloud bucket: There is a another option of using Lightly. In case you don’t want to upload any data to the cloud nor to Lightly but still use all the features we can stream the data from a local fileserver: Custom Metadata¶ With the custom metadata option, you can upload any information about your images to the Lightly Platform and analyze it there. For example, in autonomous driving, companies are often interested in different weather scenarios or the number of pedestrians in an image. The Lightly Platform supports the upload of arbitrary custom metadata as long as it’s correctly formatted. Upload¶ You can pass custom metadata when creating a dataset and later configure it for inspection in the web-app. Simply add the argument custom_metadata to the lightly-magic command. lightly-magic trainer.max_epochs=0 token='YOUR_API_TOKEN' new_dataset_name='my-dataset' input_dir='/path/to/my/dataset' custom_metadata='my-custom-metadata.json' As with images and embeddings before, it’s also possible to upload custom metadata from your Python code: import json from lightly.api.api_workflow_client import ApiWorkflowClient client = ApiWorkflowClient(token='123', dataset_id='xyz') with open('my-custom-metadata.json') as f: client.upload_custom_metadata(json.load(f)) Note To save the custom metadata in the correct format, use the helpers format_custom_metadata and save_custom_metadata or learn more about the custom metadata format below. Note Check out Dataset Identifier to see how to get the dataset identifier. Configuration¶ To use the custom metadata on the Lightly Platform, it must be configured first. For this, follow these steps: Go to your dataset and click on “Configurator” on the left side. Click on “Add entry” to add a new configuration. Click on “Path”. Lightly should now propose different custom metadata keys. Pick the key you are interested in, set the data type, display name, and fallback value. Click on “Save changes” on the bottom. Done! You can now use the custom metadata in the “Explore” and “Analyze & Filter” screens. Format¶ To upload the custom metadata, you need to save it to a .json file in a COCO-like format. The following things are important: Information about the images is stored under the key images. Each image must have a file_name and an id. Custom metadata must be accessed with the metadata key. Each custom metadata entry must have an image_id to match it with the corresponding image. For the example of an autonomous driving company mentioned above, the custom metadata file would need to look like this: { "images": [ { "file_name": "image0.jpg", "id": 0, }, { "file_name": "image1.jpg", "id": 1, } ], "metadata": [ { "image_id": 0, "number_of_pedestrians": 3, "weather": { "scenario": "cloudy", "temperature": 20.3 } }, { "image_id": 1, "number_of_pedestrians": 1, "weather": { "scenario": "rainy", "temperature": 15.0 } } ] } If you don’t have your data in coco format yet, but e.g. as a pandas dataframe, you can use a simple script to translate it to the coco format: import pandas as pd from lightly.utils import save_custom_metadata # Define the pandas dataframe column_names = ["filename", "number_of_pedestrians", "scenario", "temperature"] rows = [ ["image0.jpg", 3, "cloudy", 20.3], ["image1.jpg", 1, "rainy", 15.0] ] df = pd.DataFrame(rows, columns=column_names) # create a list of pairs of (filename, metadata) custom_metadata = [] for index, row in df.iterrows(): filename = row.filename metadata = { "number_of_pedestrians": int(row.number_of_pedestrians), "weather": { "scenario": str(row.scenario), "temperature": float(row.temperature), } } custom_metadata.append((filename, metadata)) # save custom metadata in the correct json format output_file = "custom_metadata.json" save_custom_metadata(output_file, custom_metadata) Note Make sure that the custom metadata is present for every image. The metadata must not necessarily include the same keys for all images but it is strongly recommended. Note Lightly supports integers, floats, strings, booleans, and even nested objects for custom metadata. Every metadata item must be a valid JSON object. Thus numpy datatypes are not supported and must be cast to float or int before saving. Otherwise there will be an error similar to TypeError: Object of type ndarray is not JSON serializable. Sampling¶ Before you start sampling make sure you have created a dataset and uploaded images and embeddings. See `Create a Dataset`_. Now, let’s get started with sampling! Follow these steps to sample the most representative images from your dataset: #. Choose the dataset you want to work on from the “My Datasets” section by clicking on it. Navigate to “Analyze & Filter” → “Sampling” through the menu on the left. Choose the embedding and sampling strategy to use for this sampling run. Give a name to your subsampling so that you can later compare the different samplings. Hit “Process” to start sampling the data. Each sample is now assigned an “importance score”. The exact meaning of the score depends on the sampler. Move the slider to select the number of images you want to keep and save your selection by creating a new tag, for example like this: Dataset Identifier¶ Every dataset has a unique identifier called ‘Dataset ID’. You find it on the dataset overview page. Authentication API Token¶ To authenticate yourself on the platform when using the pip package we provide you with an authentication token. You can retrieve it when creating a new dataset or when clicking on your account (top right)-> preferences on the web application. Warning Keep the token for yourself and don’t share it. Anyone with the token could access your datasets!
https://docs.lightly.ai/getting_started/platform.html
CC-MAIN-2022-05
refinedweb
1,150
57.57
Re: Protected Forest with One Child domain - From: "Herb Martin" <news@xxxxxxxxxxxxxx> - Date: Fri, 15 Sep 2006 12:43:22 -0500 "santa''s helper" <santashelper@xxxxxxxxxxxxxxxxxxxxxxxxx> wrote in message news:06106C16-27FB-42DC-B95F-1801889BD060@xxxxxxxxxxxxxxxx Thanks for all the info. All servers are Win2K3. The forest is in native mode. The domains are in native mode. I have setup my child domains to conditionally forward to the forest domain and have delegated at the root down to the child domains. So I think(?) that I have set all that up correctly. Ok, so (sanity check) your child DNS servers can resolve both their "own zones" AND find everything else through the conditional forwarding to the top of the DNS hierarchy at the "root AD domain", right? The Root Domain DNS of course knows ITS OWN zone and can find all of the others through the DELEGATION, correct? Test this with specific NSLookup commands.... nslookup name.child.domain.com IP.of.Child.DNS nslookup name.domain.com IP.of.PARENT.DNS nslookup name.EachOtherChild.domain.com IP.of.Child.DNS nslookup name.EachOtherChild.domain.com IP.of.Parent.DNS All should work. However, A couple more questions (for now): I have read that the '_msdcs...' folder in the forest should also exist on the child dns servers too. That is a good idea. It can be reached through the conditional forwarding, but is small so there is no good reason not to just hold a secondary (or Forest Wide AD-integrated) copy of it locally on EVER DNS server. Remember that technically you could do this for all zones -- and that is pretty much what is needed in Win2000 DNS which doesn't have "Forest Wide AD replication", Stub Zones, NOR Conditional Forwarding. First, is this correct (or do I have it backwards) No, it sounds right IF you can get a successful test. Also test EVERY DC with DCDiag and fix all FAIL, WARN, and ERROR indications. and Second, I'm not sure how to do it if it is 'best practice' to do it. Best practices really aren't that helpful here since it varies by size of the domain and size of the overall forest but for most people pretty much ANYTHING that resolves all names correct is fine. They key is to GET THAT RESOLUTION, then worry about optimizing the performance and the WAN usage. Probably the best for almost all medium size forests is to put every INTERNAL zone on every DNS server using AD-Integrated Forest Wide Replication (but notice this may get obnoxious for those people with many domains and very many DNS servers.) In your case it would not be too tedious (only six DNS servers) but might not help much over what you have above indicated. Second - Setting up a new tree within the forest.: At the forest level, I can click on the root of the forest (in dns) and then select 'new delegation' to delegate the child to the child dns servers. Ok, I have done this and it makes sense, etc... And of course you have those delegated DNS servers up and running correctly. (We always DELEGATE TO a "set of DNS servers".) I can't delegate a new tree (at a root level on the forest dns servers) -- so basically, I just don't understand where the new tree is placed within DNS at the forest level so the forest can know about the tree. Does the question make sense?? No, I don't understand completely. My suspicion is that you have a "DNS Root" (i.e., a "." or DOT) zone defined. This is an artifact of a BUG/feature in the DNS setup and should almost always be deleted. Although I have never tried, you SHOULD be able to delegate from the "."-root zone to a different tree (e.g., com vs. edu) but maybe you mean something else by the above paragraph.... So --- lets say I have forest1.com (this is the forest). Within the forest I have delegated the 'child1' domain to child1.forest1.com. No, that is the DOMAIN you DELEGATED, you would DELEGATE TO some DNS server, such as dns1.child1.forest.com or even Weirdname.atStupiddomain.local. (The latter doesn't make much sense but is technically legal as long as that is the DNS server which holds the required zone: child1.forest.com) And the dns servers (3) in the child1 domain conditionally forward back to forest1.com. So that forest.com names (and all those BELOW it other than child1) are resolvable too. Make sense? (It MUST make sense if you are going to maintain this stuff.) Now --- lets say my new tree is called Tree1.net. How do I set it up in dns at the forest root? If you don't have the ".net" zone (which you should NOT have) then you would just "conditionally forward" from that server too. The only other choice is (or rather choices are) actually holding a copy of "tree1.net" on that DNS server. Again, does this make sense? A server EITHER MUST: 1) Hold a zone (primary, secondary, AD-integrated, or stub) 2) Delegate to the zone DNS servers (only for child zones) 3) Conditionally forward to that specific zone (or a parent of it) 4) Physically recurse by going to the DNS root ("." dot) zone and working down through all of the names in the namespace 5) Forward unconditionally to SOME OTHER DNS server which can do one of these. The list above is VERY IMPORTANT to be able to THINK THROUGH (not necessarily to memorize unless you wish to teach) so that you can ALWAYS figure out the options. And I am assuming that I still will conditionally forward from the Tree1.net dns servers back to Forest1.com. Is this correct? Ok, but it has to work the other way too. WARNING: Mutual conditional forwarding is FINE and very commonly required but you must NEVER use "MUTUAL unconditional forwarding" since this causes an Infinite Loop (1->2 and 2->1 since neither holds the missing zones). BTW: Here are my reasons for creating the Tree and the reasons I think it makes sense to make the tree within Forest1 (please feel free to guide me on this too): Tree1 is a new division with the company that (eventually) will either be spun off as a new, independant company or sold off completely. That latter TENDS to argue for a SEPARATE FOREST -- this is not ideal if the companies will remain as one, but there is no supported way to "Prune" (i.e., spin off) a domain or tree from an AD Forest without re-installing everything in that domain/tree. If the separation is fully expected then I would likely make it a separate forest. What reasons do you have for a single forest (sharing of resources, etc?) As this division grows, employees that are hired specifically within that division will probably have email accounts, etc. that are different than those of Forest1. I also have read that setting Exchange 2003 to work with multiple Forest is very difficult, if not impossible. It is certainly more complicated and you may need two Exchange server sets running (semi) independently....but that is going to be required after any spin-off anyway probably. And lastly, I want to have a Protected Forest design and standardize all security across all domains without additional considerations of another forest. (Again, feel free to educate me if I am approaching this incorrectly). Since Group Policy doesn't flow or inherit down DOMAIN hierarchies you still need to link and apply it separately for every domain. You are going to end up copy/recreating these GPO objects to each domain ANYWAY and need it to work once separated so perhaps your level of "without additional consideration" is going to be merely setting up standards and perhaps providing master copies of such GPOs and Security Templates, plus auditing the results. Thanks again for your help - much, much, much appreciation!!!!! :> We try. You are very welcome.... One further recommendation: You have a fairly complex setup and set of requirements so it is NOT going to be 'enough' for your to merely take someone else's "best practices" (in most cases) but rather that you learn and (we help you) understand what the relative benefits and issues really mean to YOU and your business (multiple businesses really which may have different needs and concerns.) -- Herb Martin, MCSE, MVP Accelerated MCSE [phone number on web site] "Herb Martin" wrote: "santa's helper" <santa's helper@xxxxxxxxxxxxxxxxxxxxxxxxx> wrote in news:83E91456-BB8F-4354-AEA0-A8C5409BEAA3@xxxxxxxxxxxxxxxx Hello all --- ok, here's the question -- actually several.... I am setting up a new forest (for security, etc..) and a child domain. The forest has 3 DCs (already setup -- working great!). The child domain will have 3 DCs (currently only 1 has been setup). Then presumably the forest has SIX DCs, right? The basics I have, but how to do the DNS stuff has me stumped. When promoting the first child dc (cDC1).... A. should I install DNS on him? i think so, but then... Probably but it is NOT an actual requirement. The actual requirement is that each domain have a corresponding Dynamic DNS ZONE to support that domain. Usually this zone is held on the servers of that domain but technically can be held ANYWHERE that meets the requirements. B. what is/are his DNS pointer(s) (primary, sec) pointing to? does it point to the forest? if it does... A DNS server is either a Primary or a Secondary (or AD Primary). He doesn't "point" to these unless a Secondary in which case the Secondary may point to ANY other DNS server holding that zone (primary, Secondary, or AD-Primary.) IF you mean, where should the DNS CLIENT settings for that DNS/DC server point then the answer is USUALLY to the servers for his own zone (e.g., himself) but this is again not a technical requirement just a common practice, and due to performance a generally good idea. [This concept has NOTHING to do with Primary/Secondary (clients don't much care about such server side distinctions) but rather PREFERRED and ALTERNATE DNS servers.] During DCPromo this may be different than afterwards since the "new DC to be" must be able to find the DCs from the parent domain AND his (new) zone. The real key is that the DNS CLIENT settings must be able to find ANY record that client will need (even if that client is really a server itself.) So it would be common to point to the existing (parent domain) DNS during DCPromo, and to himself (holding the new zone) afterwards. BUT that alone is insufficient since the NEW DNS server must still be able to resolve the ENTIRE forest as a DNS SERVER. C. what about after the promo ?... he does not carry info about his own domain. his _msdcs info has now been placed in the forest domain under the root's _msdcs domain. A DNS-DC almost always points to 'himself' as PREFERRED, other DNS server(s) holding his zone as Alternate(s). E. assuming I am doing all this incorrectly thus far , after all is said and done, how should I configure all 3cDCs (actually all 6) relatative to DNS? No trivial answer for Win2003 since there are so many choices, but there are several good ones. Parent zone is usually set to "delegate" to the child zone. Child zone SERVER is usually set to either "conditionally forward" (Win2003 only) to parent or top of DNS hierarchy (and any other 'sister' DNS trees. The key is that every DNS server must be able to answer ANY question asked by ANY of it's DNS clients. In Windows 2000, the new methods don't exist so generally the child DNS servers hold a Secondary for the top of the parent DNS hierarchy, and another for each sibling/other DNS hierarchy within the company. (I call this "cross secondaries" since it is commonly mutual.) There are other choices in Win2003, but commonly the best if you have NO Win2000 DCs is to use "AD Integrated DNS" with FOREST wide replication to all DNS-DCs in the forest. (This may or may not make sense in GIANT zones/domains but for even medium size companies it is usually correct without much concern.) Notice now that we have separated the issue of "where to point" stuff, from getting the DNS servers themselves (AS SERVERS) to be able to resolve EVERYTHING the DNS clients will need. Now, the DCs are "just DNS Clients" and so point to the DNS servers that can do what they need. Since they are usually DNS servers themselves, they tend to point to themselves as being both correct AND the most efficient (speed, net traffic etc.) 1. 1st, 2nd, 3rd DNS pointers (round robin just like we have the forest?)? That doens't make much sense as written and doesn't seem relevant to your other questions without some clarification. 2. forwarders? You cannot usually use your GENERAL forwarding setting for these issues since most/much of the time you also need to forward (generally) to the Internet for "all other DNS names". So in Win2003, the answer to #2 is yes, but we use CONDITIONAL FORWARDING. 3. delegation? This only works from PARENT down to child and is very common. This is the traditional choice for Parent domains, even prior to Win2000 or for other OSes. 4. etc... Technically there is also "Stub Zones" in Win2003 but they are virtually identical to Conditional Forwarding in effect (with a very minor distinction that practically no one knows or ever needs to consider.) [And of course AD-Integrated with Forest wide DNS-DC replicaton, covered able. As well as "cross secondaries" for Windows 2000 where the new features just don't exist.] D. lastly, is it possible to have 'short name' sign in for both the forest (ent admin accounts) and at the child level or will the ent admin have to use name@xxxxxxxxxxxxxxxx to login directly on that child domain? Everyone must log into their OWN account, wherever that account may be. This generally means using the user name and domain name to FULLY qualify the account name. (Accounts are domain specific.) So we use either the older NetBIOS form: Domain\User (or fill out both boxes at Ctrl-Alt-Del) OR we use the newer UPN. The User Principal Name (UPN) is as you suggest similar to an Everyone in the entire forest CAN (be setup to) use the same UPN suffix IF you take action to define their UPN that way instead of in the form "User@xxxxxxxxxxxxxxxx" -- that is you setup all the UPNs as "User@xxxxxxxxxxxxxx". Technically these suffixes could be a domain suffix that doesn't even really exist -- the classic example is the company that uses a private domain name (e.g., .local) for the forest domain zone names, but uses the public version (e.g., .com) for all of the UPNs so that users email address and UPN are literally the same. Your choice but there is a little extra work to make the standardized UPN work -- also every user name must be unique across the forest and not just each domain (which is a good idea anyway but not actually required in the simpler case.) Here is what I think I know .... I want the child domain to have its own DNS for its domain. (there will be a total of 3 DCs in this domain.) but I am Why not just run all DCs as DNS server, and have all of them hold a copy of every zone, and use Forest wide DNS-DC replication (assuming you are running all Win2003 DCs)??? Thanks in advance for your help .. pretty pictures would be AWESOME!! :> Pictures won't help nearly as much as you being able to think through how each DNS server will resolve things it does NOT know directly.... This is THE KEY to DNS architecture and design. -- Herb Martin, MCSE, MVP Accelerated MCSE [phone number on web site] . - References: - Re: Protected Forest with One Child domain - From: Herb Martin - Re: Protected Forest with One Child domain - From: santa''s helper - Prev by Date: Re: Prevent Caching of real world domain in W2K3 sp1 DNS. - Next by Date: Re: Hostname resolution from client fails - Previous by thread: Re: Protected Forest with One Child domain - Next by thread: Re: nslookup and dnscmd help - Index(es):
http://www.tech-archive.net/Archive/Windows/microsoft.public.windows.server.dns/2006-09/msg00313.html
crawl-002
refinedweb
2,776
70.73
Many science courses now have examples and exercises involving implementation and application of numerical methods. How to structure such numerical programs has, unfortunately, received little attention. Students and teachers occasionally write programs that are too tailored to the problem at hand instead of being a good starting point for future extensions. A key issue is to split the program into functions and to implement general mathematics in general functions applicable to many problems. We shall illustrate this point through a case study and briefly discuss the merits of different types of programming styles. Integrate the function \( g(t)=\exp{(-t^4)} \) from -2 to 2 using the Trapezoidal rule, defined by $$ \begin{equation} \int_a^b f(x)dx \approx h\left( {1\over2}(f(a) + f(b)) + \sum_{i=1}^{n-1} f(a+ih)\right), \quad h = (b-a)/n \tag{1} \end{equation} $$ The simplest possible program may look as follows in Matlab: a = -2; b = 2; n = 1000; h = (b-a)/n; s = 0.5*(exp(-a^4) + exp(-b^4)); for i = 1:n-1 s = s + exp(-(a+i*h)^4); end r = h*s; r The solution is minimalistic and correct. Nevertheless, this solution has a pedagogical and software engineering flaw: a special function \( \exp(-t^4) \) is merged into a general algorithm (1) for integrating an arbitrary function \( f(x) \). A successful software engineering practice is to use functions for splitting a program into natural pieces, and if possible, make these functions sufficiently general to be reused in other problems. In the present problem we should strive for the following principles: Trapezoidal.mcontaining function r = Trapezoidal(f, a, b, n) % TRAPEZOIDAL Numerical integration from a to b % with n intervals by the Trapezoidal rule f = fcnchk(f); h = (b-a)/n; s = 0.5*(f(a) + f(b)); for i = 1:n-1 s = s + f(a+i*h); end r = h*s; The special \( g(t) \) function is implemented in a separate file g.m: function v = g(t) v = exp(-t^4) end Finally, we create a main program main.m: a = -2; b = 2; n = 1000; result = Trapezoidal(@g, a, b, n); disp(result); exit Both Solution 1 and Solution 2 are readily implemented in Python. However, functions in Python do not need to be located in separate files to be reusable, and therefore there is no psychological barrier to put a piece of code inside a function. The consequence is that a Python programmer is more likely to go for Solution 2. The relevant code can be placed in a single file, say main.py, looking as follows: def Trapezoidal(f, a, b, n): h = (b-a)/float(n) s = 0.5*(f(a) + f(b)) for i in range(1,n,1): s = s + f(a + i*h) return h*s from math import exp # or from math import * def g(t): return exp(-t**4) a = -2; b = 2 n = 1000 result = Trapezoidal(g, a, b, n) print result Looking at the simple exercise isolated, all three solutions produce the same correct mathematical result and are hence equivalent mathematically. However, the nature of this exercise is that we want to solve a special problem by a general mathematical method. This is often the case when mathematics is applied to practical problems. The software should reflect this division between the general part and the special part of the given problem of two reasons. First, the division is important for the understanding the general nature of mathematical methods and how general methods can be used to solve a special problems. Second, the implementation of the general part, here the Trapezoidal rule, can be reused in many other problems. We may say that the first reason is comes from the philosophy of mathematics and science, while the second reason is motivated by the practical aspect of reducing future coding efforts by relying on a reusable, general, and working function. This aspect is the basis of a fundamental software engineering practice: programs should consist of general pieces (functions) that can be reused without modifications to solve other problems. The importance of this philosophy becomes obvious when we extend the problem as described below. Another point worth mentioning is the way we can transfer functions to functions as an argument. The argument f in the Python function Trapezoidal is treated as a standard variable, and f is called by simply writing f(x). In Matlab and other languages, functions that are argument to other functions require special, often somewhat "ugly", syntax. This aspect, together with the ease of writing functions, make the Python solution slightly preferable in the present case. We also emphasize that the \( g(t) \) formula is implemented in a separate Python function such that the formula can be reused in other occasions, for instance, when integrating \( g(t) \) by an alternative numerical integration rule. In many problems, the formula is much more complicated than the one used here, and it is important to have a single, well-tested implementation of the formula. Readers may also realize that the nature of programming (combined with sound programming habits) helps to increase the understanding of mathematics through the clear distinction between general methods and a specialized problem. Understanding the generality of methods also requires an understanding of abstractions in mathematics. Programming exercises therefore enforce a stronger focus on abstractions in general. All these arguments boil down to Kristen Nygaard's famous three words: "Programming is understanding"! Compute the following integrals with the Midpoint rule, the Trapezoidal rule, and Simpson's rule: $$ \begin{eqnarray*} \int_{0}^{\pi}\sin x\, dx &=& 2,\\ \int_{-\infty}^{\infty} {1\over\sqrt{2\pi}} e^{-x^2}dx &=& 1,\\ \int_{0}^1 3x^2dx &=& 1,\\ \int_0^{\ln 11} e^xdx &=& 10,\\ \int_0^1 {3\over 2}\sqrt{x}dx &=& 1. \end{eqnarray*} $$ For each integral, write out a table of the numerical error for the three methods using a \( n \) function evaluations, where \( n \) varies as \( n=2^k+1 \), \( k=1,2, ..., 12 \). In the extended problem, Solution 1 is obviously inferior because we need to apply, e.g., the Trapezoidal rule to five different integrand functions for 12 different \( n \) values. Then it makes sense to implement the rule in a separate function that can be called 60 times. Similarly, a mathematical function to be integrated is needed in three different rules, so it makes sense to isolate the mathematical formula for the integrand in a function in the language we are using. We can briefly sketch a compact and smart Python code, in a single file, that solves the extended problem: def f1(x): return sin(x) def f2(x): return 1/sqrt(2)*exp(-x**2) ... def f5(x): return 3/2.0*sqrt(x) def Midpoint(f, a, b, n): ... def Trapezoidal(f, a, b, n): ... def Simpson(f, a, b, n): ... problems = [(f1, 0, pi), # list of (function, a, b) (f2, -5, 5), ... (f3, 0, 1)] methods = (Midpoint, Trapezoidal, Simpson) result = [] for method in methods: for func, a, b in problems: for k in range(1,13): n = 2**k + 1 I = method(func, a, b, n) result.append((I, method.__name__, func.__name__, n)) # write out results, nicely formatted: for I, method, integrand, n in result: print '%-20s, %-3s, n=%5d, I=%g' % (I, method, integrand, n) Note that since everything in Python is an object that can be referred to by a variable, it is easy to make a list methods (list of Python functions), and a list problems where each element is a list of a function and its two integration limits. A nice feature is that the name of a function can be extracted as a string in the function object ( name with double leading and trailing underscores). To summarize, Solution 2 or 3 can readily be used to solve the extended problem, while Solution 1 is not worth much. In courses with many very simple exercises, solutions of type 1 will appear naturally. However, published solutions should employ approach 2 or 3 of the mentioned reasons, just to train students to think that this is a general mathematical method that I should make reusable through a function. Introductory courses in computer programming usually employ the Java language and emphasize object-oriented programming. Many computer scientists argue that it is better to start with Java than Python or (especially) Matlab. But how well is Java suited for introductory numerical programming? Let us look at our first integration example, now to be solved in Java. Solution 1 is implemented as a simple main method in a class, with a code that follows closely the displayed Matlab code. However, students are in a Java course trained in splitting the code between classes and methods. Therefore, Solution 2 should be an obvious choice for a Java programmer. However, it is not possible to have stand-alone functions in Java, functions must be methods belonging to a class. This implies that one cannot transfer a function to another function as an argument. Instead one must apply the principles of object-oriented programming and implement the function argument as a reference to a superclass. To call the "function argument", one calls a method via the superclass reference. The code below provides the details of the implementation: import java.lang.*; interface Func { // superclass for functions f(x) public double f (double x); // default empty implementation } class f1 implements Func { public double f (double t) { return Math.exp(-Math.pow(t, 4)); } } class Trapezoidal { public static double integrate (Func f, double a, double b, int n) { double h = (b-a)/((double)n); double s = 0.5*(f.f(a) + f.f(b)); int i; for (i = 1; i <= n-1; i++) { s = s + f.f(a+i*h); } return h*s; } } class MainProgram { public static void main (String argv[]) { double a = -2; double b = 2; int n = 1000; double result = Trapezoidal.integrate(f, a, b, n); System.out.println(result); } } From a computer science point of view, this is a quite advanced solution since it relies on inheritance and true object-oriented programming. From a mathematical point of view, at least when compared to the Matlab and Python versions, the code looks unnecessarily complicated. Many introductory Java courses do not cover inheritance and true object-oriented programming, and without mastering these concepts, the students end up with Solution 1. On this background, one may argue that Java is not very suitable for implementing this type of numerical algorithms. Simple exercises have pedagogical advantages, but some disadvantages with respect to programming, because the programs easily become too specialized. In such cases, the exercise may explicitly ask the student to divide the program into functions. This requirement can be motivated by an extended exercise where a piece of code are needed many times, typically that several methods are applied to several problems. Especially when using Matlab, students may be too lazy to use functions when this is not explicitly required. Although Java is very well suited for making large programs systems, Java code for simpler numerical problems, where one wants to transfer functions to other functions, looks as an overkill compared with Matlab, Python, C++, and Fortran implementations.
http://hplgit.github.io/edu/py_vs_m/._numerical_programming_guide001.html
CC-MAIN-2018-30
refinedweb
1,871
50.26
please also tell how it works. Entries Tagged as 'Encryption' Can anyone provide me with a C++/java code for IMAGE ENCRYPTION? October 2nd, 2008 WPA-PSK2 encryption for my router? October 1st, 2008 How do i make WPA-PSK2 encryption for my router(any software or any options to enable in my computer).I mean the encryption with which u register to the router by using MAC address of your computer.I am not sure if it is WPA2-PSK or WPA-PSK2.I heard this is the most secured encryption. Note: Please dont ask me […] public key encryption challenge? think you know RSA?solve this in 2 days ? September 30th, 2008 A ciphertext encrypted with the RSA cryptosystem is shown below. Your objective is to decrypt it. The public key used to encrypt this message is e=48925 and n=88579. For your solution, submit the private key values d, p, q, and the decrypted message. Additionally, you should encrypt another message of your choosing using the same […] I am disturbed! by this encryption thing in c#? September 29th, 2008 You guys how are you? Me i am not fine because of 1. I am trying to read data from a file 2. encrypt that data 3. disply the encryptd data on the console 4. How??? using the TrippleDES Algorithm ——————————————- Problem: 1. suppose the file has the content hallo, or jane or 123 ie; few character, then NOTHING IS ENCRYPTED, I GET […] where can i find the vb aes encryption source code for messages? September 29th, 2008 it’s about my final year project. please help me what is the best method of encryption to use in wireless connections without compromise performance? September 28th, 2008 Does anyone know about Police frequencie encryption.? September 28th, 2008 My local police has just encrypted there frequencies my police scanner just stop transmitting the police dispatch channel as well as the other 9 channels they had. I was told they switched to 800mhz. I don’t know much about this, I went to radio shack they told me I need an 800mhz trunking scanner apco25 […] Which is the strongest encryption algorithm possible? September 28th, 2008 Forget about encryption time. I just want the strongest encryption algorithm? And what’s the security difference between AES and SHA512? Is one stronger than the other? Also, for SHA512, does making hundreds of passes make it stronger? Is encryption software illegal? September 26th, 2008 I remember hearing that certain encryption software available in the US to the general public was illegal for export. Is this true and does that mean that if traveling on business outside of the US, you cannot have the software on your laptop? any who can give a well detailed (code, implementation and deployments) using advance encryption standard? September 26th, 2008 i need it urgently are dvd encryptions put on by the company that makes the dvd or their respective outlets?(blockbuster,netflix)? September 26th, 2008 i bought a copy of sex and the city previously viewed from blockbuster for my wife. my mom wants a copy but has never seen the show and isn’t willing to pay 15 bucks for something she might not even like. i tried to make her a copy but its encryption wont allow me to […] How do I set up WEP Encryption for Dell TrueMobile 1184? September 26th, 2008 I am connected to a university ethernet connection and I am setting up my wireless internet with my Dell TrueMobile 1184 from home. I can currently get an unsecured wireless connection (that’s how it was set up initially at home). I want to secure my wireless network connection with WEP encryption. When I go to […] What is AES encryption? September 25th, 2008 i heard it before but i don know waht it means Is it true that file encryption software developed in the US have to provide a crack to law enforcement? September 25th, 2008 What is the best and most secure file encryption software? September 25th, 2008 Is the comment about the USA legally needing a crack for it true? Hey all!!! Anybody for some java encryptions? September 24th, 2008 I would like to encrypt my java codes such as they are executed only. (Just like an .exe file or a keygen) Briefly i want to the user to run the program without seeing the codes… Any help?????? Dear All, Do you know bout encryption and encapsulation methods running on OpenVpn? Thx B4. ? September 24th, 2008 Now i do an evaluation of encryption and encapsulation on networking based OpenVpn… What was the primary reason strong encryption classified as a munition before 1996? September 22nd, 2008 Does WPA2 encryption work with the XBox 360 Wireless Network Adapter? September 22 […] Anyone have any encryption codes to Battle for Middle Earth? September 21st, 2008 Does anyone still have their Encryption Code to Battle for middle Earth one?if yes what is it? Does WPA2 wireless encryption work on X-Box Live wireless adapter? September 21 […] What do I need to do to use encryption of my e-mail? September 20th, 2008 how do I make sure all of my wireless encryption is on or safe…? September 20th, 2008 how do i make sure that someone/something hasn’t changed my wireless settings and encryption??? I wanta know about good free encryption ? September 20th, 2008 There are some files that i would like to encrypt. My diary for example. i have a pc So DMGs are not an option however i like how they work where you just save the file and it automatically updates the dmg. is there anything like this for free on a pc? I am not […] Does anyone have an opinion what is the best File Encryption software for Mac? September 19th, 2008?????? r. how can i remove encryption of other accounts? September 19th, 2008?????? What are the differences (+, -) between AES, Blowfish, and RC6 encryption algorithms? Is any one superior? September 18th, 2008 how to set encryption key with linksys TEW-432BRP? September 17th, 2008 where? and i got another request can someone give me a download *.exe file for 431BRP also? that lets me install the router i got both 432 and 431 […] Can someone review an encryption program for me? September 17th, 2008 It can be found at how do i connect my xp laptop to my wireless network w/c uses WPA2 encryption? September 16th, 2008 i can see the network on the list of wireless networks and i can TRY to connect. no message error appears like windows cannot connect etc. but just the ‘waiting for the network…’ box and after a while it disappears and i’m still not connected, without any message. i already modified the advanced settings according to […] I was checking Encryption option on my nokia E71 when phone restarted. All data seems to be lost. Help!!? September 16th, 2008 I have a nokia E71 and I was checking whats encryption all about .. phone restarted and I dont see any data.. IT says memory card corrupted. When i inserted memory card again.. it says * this memory card is encrypted but the phone encryption is off. Decrypt this memory card?** what do i click on - […] encryption challenge [ try to solve this within 2 days ]? […] can anyone help me solve these encryption questions? September 16th, 2008 1)You are the director of a top secret government espionage agency. Every month you securely transmit a new set of one time pad values to each of the spies you have placed in various countries. Each of these values is used to encrypt a single message back to headquarters and then destroyed. You realize that […] can u help me solve this[if u r good t encryption/decryption] ? […] RSA Encryption/Decryption Program in C++? September 15th, 2008 I am creating a project for class that takes an input of 2 integers from the user, checks those integers for primality, generates public and private keys, encrypts a string from the user, and decrypts that same string. Can anyone help me with some code? I am completely lost. What’s a good file encryption program that’s free and trustworthy? September 15th, 2008 I’ve already tried FileWaster, and Secure II no longer works on my computer for some reason. how to change WPA encryption password? September 15th, 2008 I have a DSL connection.wireless network is security enable. but i have doubt somebody accessing my network. so how can i change the password. pls help. WEP encryption help.? September 14th, 2008 I am trying to conect my laptop to the same network as my desktop. I have 2wire roaming or something. So the 2wire is hooked up to the desktop and when we try to connect to the network it says enter encryption code or network key. is there any way to find this using the […] i forget my encryption file password how can i open the encrypted files ? September 14th, 2008 I HAVE windows vista ultimate and i set a password to encrypted file i cant remember the password how can i open the encrypted files im’ using windows encryption method without any software any help would be great full from you guys Is there a program for a Windows Mobile 5.0 that will decrypt an Wifi WEP encryption? My dad changed the key.? September 14th, 2008 Is there a way to decrypt a WIFI WEP encryption without restarting the router. My dad has changed the key and now I cannot acess it and he wont tell me the password. Is there a program for a Windows mobile 5.0 that will allow me to acess the WIFI? Dvd burning programs for mac that remove css encryption? September 13th, 2008 Is there any program for mac that can run in the background while you are burning a dvd that removes the css encryption? (For example a MAC version of AnyDvd or Clonedvd) Thanks you guys if you find anything I would really appreciate it. Are there any burning programs that can go through CSS encryption for mac? September 13th, 2008 I want to back up my dvds and my other dvd burning programs such as toast titanium and popcorn do not go through css encryption and I need to find a program that does and is compatible for mac. Thank you What is the difference between WPA and WPA2 encryption? ? September 13th, 2008 Which is better? Thanks. What does it mean if my WiFi connection has no data encryption? What if I have Zonealarm.. is that a -? September 12th, 2008 - valid enough protection by itself against security threats? Why do we use encryption modes of operation to convert block ciphers into stream ciphers? September 11th, 2008 hi , if u r an encryption expert ,help me with this ? September 9th, 2008 I have a secret 5 bit long password K. If I disclose a value K’ such that K’ = f(K) for a public non-invertible function f that takes an arbitrary long number and outputs a fixed length (say, 1,000 bits in length) number. Is it possible (given a reasonably powerful computer) for you to determine […] md5 encryption for a Mysql database? September 9th, 2008 If I forget my password for some website on the internet and I request that the password be sent to me by email, does this mean that the company doesn’t use md5 encryption on passwords. Are there any other encryption techniques that will allow you to read decode the password from the database (obviously not md5 […] Encryption method on a POTS modem? September 8th, 2008 I’ve got a business need to have a set of computers accessible via modems through a POTS setup. I’ve also got a mandate to encrypt connections. (I realize that the likelihood of someone tapping a POTS line is rather low, but we need to encrypt anyway.) Most of the solutions seem to […] Why is the only Data Encryption option for my wireless adaptor only WEP? September 8th, 2008 I have a Netgear WG311 and the Network Authentication opitons are either “Open” and “Shared” & Data Encryption options are either “WEP” and “Disabled”. I need to use “WPA” for my router setup but, I don’t have that option on my Netgear adaptor. Why is this the case? Encryption Program in C++? September 8th, 2008 I am needing to create a program that encrypts a set of input integers from the user. The program needs to ask the user for 2 integers, test the integers to see if they are prime, convert the integers to a private and public key and then output the keys. Can anyone help me with […] Need help identifying encryption/cipher methods? September 8th, 2008 I would like to know if anyone happens to know what cipher/encryption method used in this image×9j.png Some letters are glowing in green/blue while others are white. The dots are probably equivalent to a blank space. Networks, router encryptions, laptops, desktops. Wifi…etc? September 7th, 2008 i have a desktop and a laptop at home. i have a wireless router set up at home. i always use my laptop to connect to the router to use the inernet wirelessly. the routers encryption is currently set to WEP 128 bit. my desktop is connected to the router. is WEP safe for my […] In need of an encryption algorithm? September 7th, 2008 I need to encrypt a piece of text in JavaScript to be passed as a URL parameter and decrypted server-side. Requirements are as follows: 1. Should be symmetric. 2. Should output a hex string 3. Should be easy to implement or have an open source implementation already. 4. Security requirements are minimal. More interested on speed. 5. Should have an equivalent […] When they ask for a WEP key is it the same as an encryption key? September 7th, 2008 anyone know what kind of encryption this is? September 6th, 2008 0215B1F3064EA524FFC53E273BAF715D62544D54 8AF56DE68279CB6F5ED022F31AF18B9FCDCC2E92 231E564DB4CDB44A6545583A8D460EDC7F9F97CA BBCCDF2EFB33B52E6C9D0A14DD70B2D415FBEA6E D8F4590320E1343A915B6394170650A8F35D6926 A94A8FE5CCB19BA61C4C0873D391E987982FBBD3 How do i apply rounds to twofish encryption? September 6th, 2008 ok heres the thing i read online that you can add rounds to twofish 16 to be percise. i cant figure out how to do it. im using blowfish advanced cs the latest version. any insight would greatly be appreciated ajax question, can ajax allow you pre form submit encryption? September 5th, 2008 I need to find a way to encrypt form data before its submited . How do i disable the encryption on my router? September 5th, 2008 yea i need to do it so do u know how to do it? Explain the process of encryption and decryption of data. ? September 5th, 2008 Explain the process of encryption and decryption of data. ? September 5th, 2008 Hack AES encryption? September 4th, 2008 Any known commands or ways to hack AES encryption? I made a file on my computer and want to try to hack it for fun. Any utilities? What about a giant table of hashes? How can I find my wireless WPA encryption key? September 4th, 2008 My employer has wireless access that we typically do not use. However, after some computer problems, I need to use my personal laptop for work. I would like to use our wireless internet that is set up, but no one knows what the password is. I use our internet to email our […] Is browser equipped with 128-bit encryption? September 4th, 2008 Laptop hookup to Internet??? Need Encryption code? September 3rd, 2008 My mom’s friends are moving into my house and they are trying to hookup the internet to their laptop and it asks for an encryption code. How do i find it out? Please help me! Is a WPA-TKIP encryption safe for wireless, along with a security/network key? September 3rd, 2008 How worried should I be about people getting into my files? Solve this code and give the rule used for encryption? September 1st, 2008 “Cglrl vuhjzr ju pxhvwgamm.” Solve it and post with the rule used for encyption. First correct gets 10 points. Wireless card only supports WEP encryption…? August 31st, 2008 My wireless card, a Dell TrueMobile 1150 Series Wireless LAN Mini-PCI Card, only makes use of a WEP encryption. I’d like it to accept WPA, instead…how can I do this? The laptop is a Dell Latitude C640…pretty old. Cd/Floppy Encryption? August 31st, 2008 Hi everyone, i have a question, i have some data that i want to password protect from my brother cause he will look at it and probably destroy it. I have the data on a floppy and a CD-RW, is there ANY way that i can encrypt the floppy with a password? Or the CD? […] how do i crack a WEP encryption on a router? August 31st, 2008 How can i Repack my data? with Encryption? August 29th, 2008 How can i Repack my data? with Encryption,, so when you try to change the Exrension of the file it will become unusable,but cant be Reopen with the same program? i got to the part where i made the file type and if you try to change it it will become useless but what i need […] Remove password for data encryption.? August 28th, 2008 I recently bought a new laptop with vista and I enabled fingerprint scanning. However when I scan my fingerprint it verifies it and tells me to type in a password for data encryption. This password is the same as my log in password. How do I remove the password so that I can scan my […] Does Yahoo provide encryption for log on and/or actual email? August 27th, 2008 Finding the Encryption Key on a computer? August 25th, 2008 I just got a new computer and to set up the wireless adaptor to the wireless internet connection on the main computer it says I need to enter the Encryption code for the computer. We could not figure out where to find the encryption code and the support line charges you forty dollars if they […] Does DES encryption put line breaks in the encrypted string? August 23rd, 2008 I need to transfer encrypted data over tcp/ip in java. and im worried about line breaks signaling connection termination Truecrypt System Encryption? August 22nd, 2008 If I encrypt my system using Truecrypt, when will the prompt to enter the password show up? When I push the power button, or before the “Welcome Screen”? om my new notebook should I get disc encryption on a 7200rpm disc or Vista Biz Ultimate with Bitlocker.? August 21st, 2008 I am going to get a thinkpad T400 with these specs. T9400(2.53 GHz1066MHz 6MBL2), Vista Business, 14.1 WXGA+ with LED Screen, ATI Mobility Radeon 3640 with 256MB, 3GB SDRAM, 160GB Hard DIsc 7200 RPM, DVD Recordable Ultrabay. I am an Ordinary Joe who will be using it for Ordinary Joe stuff and writing stories and […] how do I turn off the encryption setting so I can pick up a signal with my laptop? August 20th, 2008 there is already one laptop setup through this desktop that works fine, so why won't a second laptop work? It says that I need the network key, but I have no idea what or where to find this ok, but the desktop isn't mine, it's my motherinlaws, all I want to do is use my laptop from […] How to put encryption on a home made video DVD? August 20th, 2008 I was thinking of encrypting some DVDs so that people would have a little more trouble burning them, rather than a straight up copy with out decryption. Are there any suggestions on software that I can try. Thanks in advance adrian can u tell me how to identify which type of algorithm(AES,RC4) used for encryption on script files? August 19th, 2008 i ‘ll give 1 input file,it has to show which type of encryption algorithm used?pls reply me soon How to break encryption for mms files and download them? ? August 17th, 2008 I have WMRECORDER 12, but it can’t seem to rip the stream. This is more for…….academic purposes than anything else. Just something to do……: ) Where can I find a small, portable public-key encryption program? August 17th, 2008 I’m looking for something small and fast, optimally without installation. i.e not a “lifestyle program” like PGP. what kind of encryption is this? August 16th, 2008 ++++++++[->++++++++++.++++++++++++++++++.—–.+ +++++++.+++++.——–.+++++++++.—-–.++++++++.—– -.-.< any idea? ++++++++[->++++++++<]>++.<++++++[->++++++<]>++++++++++++.<++++[->—-<]>-.+ +++++++.+++++.——–.<+++[->+++<]>++++++.<++++[->—-<]>–.++++++++.—– -.-.< its a type of encryption wondering if anyone knows the name of the language. and its not a prank or something i did while i was bored its a real type of encryption. ++++++++[->++++++++<]>++.<++++++ [->++++++<]>++++++++ ++++.<++++[->—-<]>-.+ ++ +++++.+++++.——–.<+++[-> +++<]>++++++.<++++[->—-<]>–.++ ++++++ .—– -.-.< Can anyone help me understand this encryption? Is it serialization? August 15th, 2008 Hi All, I’ve recently come across a database table that I need to decrypt. This field should be a string, however it seems encrypted: TX01.0, (0# 0 . /@ ‘1H92!L969T(&%I;&5R;VX@:7,@ I’m suspecting that this might be serialization, however I don’t know where to […] I Bought A Pelican Wireless Online Adapter for my xbox 360 and it says i need a Wireless Encryption Key? August 15th, 2008 Theirs Three options i can use and the one with the best connection says i need a wireless encryption key. i don’t know what that is or where to find it. Can someone please help me? How to open Yahoo mail attachment/JPG files in Microsoft Word without ‘Alphabet’ encryption? August 15th, 2008 I am so frustrated! I am sending 3 pictures to my own Yahoo Mail address and to my friends’ as “JPG file Attachments”. I know that the photos are shown at the bottom of the e-mail but I was hoping to see larger images by opening each JPG file. When I click on […] Windows on Mac Encryption and Viruses? August 15th, 2008 I have a Macbook Running Leopard OS and I’m debating on installing Windows Vista OS also. I was wondering if installing Vista makes my hard drive (or entire Mac) vulnerable to viruses. I was also wondering if Vista can use BitLock Encryption on its part of the hard drive Microsoft Encryption key code? August 15th, 2008 Alright so i bought a Toshiba laptop for school and it came with the Microsoft Office Trail already on it and also the case for Microsoft office with they encryption key, but of course i didn’t bother to install it cuz i had the trail version on already. Well here is the problem someone stole […] Cracking a 448 bit Blowfish encryption key? August 14th, 2008 Is it possible within a life time, and how much computer power would be needed? Blowfish is a brute-force resistant encryption algorithm, that requires over 500 iterations (i think like 508) of the algorithm to test a single key. The full encryption key length is used, 448 bits. the key exists out of random characters, alphabetic, capitals and […] advice on encryption programmes? August 13th, 2008 Need some advice from anyone familiar with encryption. I currently have Drivecrypt which I have had for several years and use 1344 bit Blowfish which I thought was pretty secure. But having had it for several years I thought things might have moved on a bit and upon checking got conflicting opinions on this matter. A computer geek […] Questions re BitLocker Drive Encryption on Vista Ultimate?? August 13th, 2008 I am confused as to which Windows Vista version I should buy. We are living in an era where people can steal my computer and then photoshop my wife’s photos and publish it as if she is a porno star. I am serious. I need to protect myself, thus I really liked the Bitlocker feature […] I’m trying to hook up to our wireless network. No one seems to know the encryption key. What do I do? August 11th, 2008 When you first set up a wireless network and put in the encryption key, there’s a box to check which says: “Make the character visible.” It’s not there after the wireless network is set up. Help! encryption disabled? Napster p2p. 1kb/s download. or lower! August 11th, 2008 ok ok. i got no idea wat they mean In my Napster music download thing im getting incredibly slow download speeds!! dont say anything about Napster is crap. use limewire! I want to fix this problem only. What would cause such slow downloads? I downloaded firefox and it was downloading at 200kb/s! ??? whats wiv that! 1kb/s in […] is there an encryption software to secure a portable hard drive? August 10th, 2008 it must be a program that works in a stand-alone manner on the portable drive, independent from the OS in the main PC. I have Super File Encryption and forgot my admin password. How can I delete or recall it? August 10th, 2008 I tried uninstalling, removing all references I could find in the registry and reinstalling, but it still holds the admin password. I can’t get past the front door without it. Anyone know how to fix this? How do I get around typing in a Yahoo provided encryption with all outgoing messages??? August 9th, 2008 Database Encryption!!!! August 9th, 2008 Right, I would like to know if I could set MyBB and or Wordpress so that they DO NOT encrypt the passwords they place into the database, I am not a newbie or a complete advanced user of MySQL or PHP so I would like a little more info, thanks in advance. 10 Points Best […] How do i find the network key from my laptop wireless to my desktop. or how do i find my encryption code? August 9th, 2008 Wordpress/MyBB Database Encryption August 8th, 2008 Does anyone know if it is possible to set up Wordpress/MyBB so that what data they place in the database it not encrypted? Any plugins/modding/guides/tutorials will help! =] Im talking passwords, I would like to set it up so that it doesnt encrypt the passwords that are put into the database. 5c encryption over firewire from set-top box question August 8th, 2008 I am about to purchase a 24 inch imac, and i plan on hooking it in to my insight cable set-top reciever/dvr via firewire. i know that 5c encryption prevents recording of certain programs/channels, but will it prevent me from watching them too? since i have a dvr, i don’t need to record […] What is keystroke encryption? August 8th, 2008 I just got new antivirus program and was wondering what it was. gpg encryption/decryption question August 7th, 2008 I was wondering if there was a way to automate gpg decryption? The company i work for has an ftp server (windows 2003) and what I usually do is once someone tells me there is a file ready to be decrypted, i transfer it to my workstation, decrypt it and upload it to a different […] Can public-key encryption be done manually? August 6th, 2008 So far, all the sites I’ve checked talk about converting plaintext into a session key then doing an incredibly long algorithm. If it can, could you provide the formula? Is the “physical address” with 12 numbers the same as the encryption key of the router? August 6th, 2008 What is “binary encryption”? August 6th, 2008 Recently I saw a question about “binary encryption”. They were asking people to decode a set of numbers or something, which were all 1’s and 0’s. And somebody came up with some results using a website I think. I don’t get this. What is binary encryption? Is it just a code you can use? How do […] Someone else set up the router. How do I fine the encryption key? August 6th, 2008 To access another network with its encrypted key, do you lose the encryption on your computer? August 5th, 2008 How do you delete the encryption software already installed on a flash drive if no visible files are present? August 4th, 2008 I get an error when I try to use my flash drive in my head unit. I was wanting to try to delete the encryption that was already installed when I bought the flash drive to see if this would make it work. external hard drive data encryption August 2nd, 2008 i wanted my files to be protected so that everytime i plug into my external hard drive it will prompt for a password does any of these hard drives have data encryption? 1. My Book Essential Edition 2.0 WDH1U5000 - hard drive - 500 GB - Hi-Speed USB 2. maxtor one touch 4 if none of these have […] Why is encryption outlawed in some countries? August 2nd, 2008 Can anyone recommend an easy to use file encryption program? August 1st, 2008 I’ve tried PGP, but is there something easier? I really don’t have time to sit down and try to figure it out, and PGP isn’t all that user-friendly. I just want it to work. Any help would be greatly appreciated! Thank you. I'm still a slave to Microsoft. I use Windows XP - forgot to mention that. […] ldecpytkiemnjqe - Does any1 know what it means? Guess its some kind of encryption…do u know? August 1st, 2008 This is some kind of instantmessage I get regulary, with various senders and messages (this is just 1 of them). Seems as it is a “red thread” in the messages, how I would like to know. Thank You! Looking for notepad with auto encryption? July 30th, 2008 Hi all i'm looking for a notepad that shows encryption auto if any one knows that'll be great. Can someone recommend really good encryption software that will cover everything on my pc? July 30th, 2008 I have started to use Tor but need to encrypt as well. I would like to cover everything, email, files and folders etc etc. Can you copy text from a protection script and get rid of the encryption? July 30th, 2008 I have this directory of businesses and all I want to do is put it onto excel. I think there is a protection script on the text from the disc I have installed, is there a way to get the information? I can highlight it but i cannot copy paste or anything, do I have […] i am using blowfish algorithm for encryption. how can i know that a string is encrypted with that algorithm? July 28th, 2008 what is IE High Encryption Pack for internet explorer? July 28th, 2008 PS3 wireless—SSID, encryption?? July 27th, 2008 I have the router here and everything but like everybody else am not in a position to *contact the person who set up or maintains the contact point*. Is there any way I can figure this stuff out via the internet if I'm only armed with model numbers? The instructions warn me I'm gonna have to […] How secure is this c# encryption algorithm? (if a salt were used)? July 27th, 2008 byte[] bytes = ASCIIEncoding.ASCII.GetBytes(textBox1.Text); int hash = 1; for (int n = 0; n < 10000; n++) { for (int i = 0; i < bytes.Length; i++) { if ((i * n) % 5 == 0) { hash *= bytes[i]; […] anybody who knows about DATA ENCRYPTION??? July 26th, 2008 I have a problem,,we're going to present a role play about DATA ENCRYPTION,,our professor wanted us to show how DATA ENCRYPTION works through role playing,,but my groupmates and I doesn't have any idea on how to do that,,any idea?? any suggestions?? please help…. what's encryption? July 24th, 2008 im thinking of encrypting my OS , do u recommend doing it and why Disable WEP encryption on Windows ME? July 24th, 2008 I'm going to be staying at my grandparents' for a while, and while trying to connect to their wireless network via my Macbook, it asks for the WEP key, which I do not have. They have a Linksys router and are using Windows ME. The router is connected to the PC, but I cannot access the […] I have a linksys wireless router that I have setup encryption for. I now lost the codes…? July 24th, 2008 What is the best way to either recover the codes or reset the router? THANKS! Need help determining encryption algorithm? July 23rd,. I am selling my computer. Will encryption work to protect sensitive files? July 23rd, 2008 I do not want to use software like Wipedrive or DBAN because the laptop has a special partition with the Windows Recovery installation I want to use before I ship it out. If I encrypt the entire File System partition with something like TrueCrypt, will that keep hackers from getting into the deleted files? Or […]. Understand encryption code? July 22nd, 2008 Ok. I need to understand how the following code works. I then need to comment it to show what it does. If anyone could tell me how this works, it would be greatly appreciated. // Inputs: register EAX = Encryption Key value, and ECX = the character to be encrypted. // Outputs: register EAX = the encrypted […]. Is this file encryption safe to use? July 22nd, 2008 - Is it 100% safe? What other open-source encryption applications are there other than TrueCrypt? July 22nd, 2008 Looking specifically for ones that provide cascaded or sequential encryption modes e.g. AES-Serpent-Twofish Matrix Encryptions? July 22nd, 2008 At school we are working with matrices and looking around the net i found the formula for a 2×2 encryption C1 = a * P1 + b * P2 (mod 29) C2 = c * P1 + d * P2 (mod 29) i need to know 2 things. 1. Why is the mod 29 2. What is the mod of […] Windows XP file encryption? July 21st, 2008 I need a way to copy files that is encrypted by a user under Windows XP, I was trying to disable the encryption but an error comes out and prevent me to decrypt it, is there some software that forcely decrypt files of Windows XP? I need the file to transfer into another drive. Deniable Encryption other than Truecrypt & Tigercrypt? July 20th, 2008 What windows and linux software can produce a file denaiable encrypted data (headerless unitl decrypted) I need a file encrypter, NOT truecrypt drive emulator since container sizehas to be prelimited. Tigercrypt is a good example, but require java which is slower Someone have a third answer?? the more answers the better How can you BACK UP Windows ENCRYPTION KEYS in case of a CRASH ? July 19th, 2008 My friend, who has Windows XP Professional installed, recently encountered a problem with encrypted documents. For security reasons, he used the built-in encryption to protect his files. The computer crashed, so he reinstalled Windows. Now he can’t open his encrypted files. The backups were useless. They’re still on the computer, but […] what's the best encryption software out there? July 18th, 2008 i frequently connect to free public wireless internet connection. im afraid of ID theft so i need an encrypting software. what's the best? also is there a free one? im using windows XP. i need to encrypt data flying through wi-fi hot-spots like in an airport. What was it I read regarding an encryption service specifically for Yahoo? What will I need to achieve this? July 17th, 2008 I have read elsewhere that I might need to download software for this and I have already understood the practice in principle and the keys I wll need to know (and share) for it to work but I am wondering if there is something that is optimal for Yahoo, my default client is anybody her who can teach me how to make program using encryption method in c++? July 16th, 2008 online file encryption, compression? July 16th, 2008 i want a website where i don't have to download a program i just want to put a file in the website and it will compress it and or encrypt it for me and give me the finished file and i want it completely free does anyone know any websites like that What is the best email encryption service/certificate to use? July 16th, 2008 I've heard of several, including Verisign, Globalsign and others. Is one better than the rest? html encryption? July 16th, 2008 does anyone know any website that will encrypt your html code and allow you to paste it on your website… you give it your code then it returns to you with a encrypted working code Help with encryption passwords? July 15th, 2008 Ok, here it goes. I go to update my security certificates and i do. Now I can't open my encrypted word documents. The problem is I know the password but it wont accept it. Could this have something to do with backing up the certificates and how can I manage to open it again. I […] what i mean using encryption method in c++..? July 15th, 2008 encryption method using substitution and compaction… can u help me friend? hehe..can u b my friend? My laptop was encrypted by my old cable tv company and I have since moved. How do I delete the encryption? July 14th, 2008 How to put a wep encryption on an unsecure network? July 14th, 2008 I have an unsecured network that I have been using with a number of computers in my home. I'm trying to set up a wep encryption on it so that it is more secure. Can some one give my a step-by-step guide to help me out. Thanks! In PKI encryption, can the public key which is give out be used to decrypt a message sent using theprivate key July 14th, 2008 Hi I have a linksys wrt54g router and i just put an encryption key so what is the password now if i wanna chec July 14th, 2008 If I type in my IP address into my web search bar it asks for username and password and it used to be no username and password "admin" i recently locked it so you need to gave an encryption key but now i can't use the password admin with no u sername anymore? Do you […] anyone know of any dvd burning program that lets you put encryption so that you can’t make a copy of it.. ?? July 14th, 2008 I want to put encryption so that my dvd can't be cloned or copied by any other dvd copy software. Can Anyone Recommend Me Some Good and Free File Encryption Programs? July 13th, 2008 puzzle encryption? July 12th, 2008 so i was asked this question: And I have no idea how to solve it… does anyone else have a clue? 154.5 280 190 315 153.5 272 155.5 308 152.5 328 144 280 183 281.5 144 315.5 152.5 283 153.5 272 149 282.5 154.5 Hint: Encryption, ASCII, 1337. adapter […] […] Hard Disk Encryption that wipes drive if invalid key / pass is entered.? July 11th, 2008 I don't really know much about full drive encryption and haven't done it on any of my devices before. Question: I have sensitive data on a notebook. Naturally, I would like to have the full drive encryption that most existing programs can currently provide, though better encryption is more desireable. Along with that however, I would like […] Encryption of a disk? July 11th, 2008 How can I encrypt an entire disk on my Window XP. Best encryption software? July 11th, 2008 Whats the best IP encryption software for downloading. How can I find my wireless internet encryption key on another computer? July 11th, 2008 I don't know if this is possible or not, but here's my situation… The computer in which the router (which is protected by a security encryption, i think its WEP but im not sure) is connected to is not working: it will power on for a few minutes then just shut off. There is one laptop, […] does the ama ethics opinion mention encryption as a technique for security? July 10th, 2008 How do you hack wifi encryption keys? July 10th, 2008 Without buying any hardware Can I be safe connecting wireless to a router without any security encryption? July 10th, 2008 I have three computers in the house. Two are wired to the router via ethernet and my computer is connected via wireless card. My router is linux based so I flashed a linux based 3rd party firmware to the router. The firmware will allow me to only let certain MAC addresses connect […] How to crack RAR password encryption?!? July 10th, 2008 I have a RAR file that I forgot the password to and I wanted to know if there is a way I can crack it? I had one program that did a dictionary attack to try to open it but that didn't work. The problem is I think it is a combination of […] PHP source-code encryption? July 9th, 2008 Are there any utilities that would allow strong encryption of PHP source code with on-the-fly just-in-time decryption? We'll have some subcontractors working on a large website and we need to protect source code on a "need to know" basis. We'd like all files at rest on disk to stay encrypted except for the ones that […] can someone suggest about "how to break encryption on the dvd" about the cracking software? July 9th, 2008 Will someone please tell me what is a good encryption for my Wi-Fi at home? Thank you.? July 8th, 2008 I cant remember my file encryption cert password, can i delete the old cert and issue a new one? July 8th, 2008 I dont need it as all my encrypted info was stored in another place too. Although i need a new cert, how can i delete the old one and make a new one? MS Vista (TM) Ultimate Thanks how can ppl use public encryption algorithm while..? July 7th, 2008 how can ppl use public encryption algorithm while..the main difference is the key of the encryption? I mean is this the only difference between same encryption algorithms? What is the best encryption software for encryption that has no backdoors in getting through? July 6th, 2008 Surely there has to be something stronger than Axcrypt RAR 3.x encryption cracking ??/? July 4th, 2008 is it possible to crack RAR 3.x encryption ?? if it is possible please show me what one … i keep getting Detected RAR 3.x encryption No supported files in the archive. with all the others i have used encryption? July 3rd, 2008 So I am playing this game online and it's asking me Solve this wLjd, vlr xob pjxoq Hint: Encryption does anyone have a clue? I dont LimeWire wont connect!Stuck on loading tls encryption? July 1st, 2008 and i have high speed broadband Encryption Package.exe LOL here it is people, […] […] Having trouble removing encryption on Disney DVDs. Anyone have a suggestion of a program that might work? June 30th, 2008 Simple Encryption code using Turbo C!!!? June 29th, 2008 Helppp…. I'm hving a homework to make a simple program to encrypt / decrypt files… just only the simple one.. like for exampe : encrypt "abc" to "bcd" can anyone help me..?? the due date is near already., at 2 Julyy… internet encryption? June 29th, 2008 free internet encryption software or sites How do I find the encryption code to connect my wii to the internet? June 28th, 2008 Just got a wii but it needs an encryption code to connest to the internet. where will i find it? Thanks Microsoft Word and Encryption? June 28th, 2008 I downloaded some ebooks in doc. format, and when I click open the file, the heading of the 'ebook' tells me to send a blank email to a designated address, and the rest of the body of the document contains some weirds symbols like this throughout the whole document: #$%#@%*9678994*&^%$#%^&*()^%^7 $%^&*()(*&^%$#$%^&*()_(*&56789 At first I thought this is due […] How does Asymmetric key encryption ensure “Non-Repudiation”? Explain with an example? June 28th, 2008 Suppose you are doing RSA encryption with the prime numbers p=13 and q=7.Also, assume that encryption exponent June 28th, 2008 Suppose you are doing RSA encryption with the prime numbers p=13 and q=7. Also, assume that encryption exponent e=5. Find the least positive decryption exponent d. Next, encrypt the message m=7. Now decrypt the cipher c=2 Outlook Encryption Problem? June 27th, 2008 Im using Microsoft Outlook 2007. I have a Comodo Email Cert for encryption and auth, i tried to test it but when i tried to send it to myself it said that a recipient (me) had an invalid or missing cert. How do i fix this? mass file un-encryption? June 27th, 2008 Is there a DOS level utility to unencrypt a folder and all its subfolders and files? I have the username and password of the user that encrpted the files. I could go through each subfolder individually and remove encryption but I have hundreds of subfolders. Thanks. What two encryption methods are used for WiFi connections? June 25th, 2008 dlink router encryption help? June 24th, 2008 I tried to encrypt my Dlink router… but on my laptop, when i clicked on my network, it doesn't bring me to the Security Key/passphrase for me to type in the key I made…it just say : windows cannot connect to __my network name______, and "diagnose the problem or Connect to a differnt network… help please? Mac encryption software with an automatic file delete for wrong password? June 23rd, 2008 Is there any software out there (especially for mac) that encrypts a file and then securely deletes the file if the wrong password is entered a certain number of times? file encryption? June 23rd, 2008 from my previous windows xp, i encrypted my files that were on my storage partition using right click then advanced, encrypt, … the problem is, now i reformated my pc but when i try to acces those file previously encrypted, a message pop saying, access is denied. i tried decryting from the […] free file decryption/encryption software? June 23rd, 2008 does anyone know of any programs that can decrypt and encrypt files? if so, gimme a link File encryption help, Wndows Vista? June 22nd, 2008 I want to encrypt a folder on Windows Vista, but the encryption box is greyed out..Help?!? what is siemens encryption card? June 21st, 2008 Question about encryption keys for wireless router? June 19th, 2008 I'm looking at the encryption key (I'm adding a device to my Syslink router). I can't tell if some of the letters are capital "i"s or small "L"s … (I vs. l) Since the key is a mix of caps and small letters - I can't tell. Any suggestions? Trying […] PHP/JavaScript encryption based on word NOT dynamic random number? June 18th, 2008 I use the following PHP code to encrypt/obfuscate HTML and JavaScript. Now it uses a dynamic randomly generated key. But I want it to use a static key, based on a word such as: "ThisIsMySalt" So the password should NOT change but always be based on my word. Here is my code: What is file encryption??? June 18th, 2008 I don't get what is encryption and even if I have the program I don't know how to use it. php encryption? June 18th, 2008 Hello all, I am designing a website and need to encrypt some data. I know how to encrypt string text using mcrypt, but can't find a comparable function for file encryption/decryption (binary). Can anyone recommend a package or implementation? I put basic encryption on most of my documents on the school network (dont ask why, cos I dunno) and now…? June 17th, 2008 …It's all blocked and even the admin cannot access it. He says he never changed any network settings - and I could access everything fine for more than a year, but suddenly I cant. Dont give me a direct solution, give me a possible reason and then maybe I can think of a solution. Please. I've tried […] Basic encryption question? June 16th, 2008 Encryption techniques often use numbers to describe how secure they are, like 128 bit encryption or 256 bit encryption. How are these numbers actually determined? (the more specific your answer, the better How to install website spam encryption? June 16th, 2008 You know those funky twisted letters that you have to decipher before submitting a form, so spambots can't submit your forms? I'd like to install that on my website… I am SO tired of the spam. Where can I find this in html? What Encryption Standard Does Bluetooth Use? June 16th, 2008 WEP, WPA, or something unique? Could you give a source? What Encryption Standard Does Bluetooth Use? June 16th, 2008 Do they use WEP, WPA, or some proprietary encryption? Could you give a source? when a packet is in a WEP encryption form and is also encoded is receicved at a destination.? June 16th, 2008 when this packet is received at the destination, will it be decoded first or decrypted first. i also want to know any tools for decrypting packets from WEP encryption Wireless Internet doesn't work with WEP encryption enabled? June 15th, 2008 When I disable security options on my wireless network, I can connect to it and the internet works fine. However, when I try to add WEP encryption, I am still able to connect to my wireless network with full signal but for some reason the internet doesn't work; I am unable to navigate to any […] How to change of my the Encryption Key or the password router ?? June 14th, 2008 Ok so i have a linksys router….i installed it in my computer….with all the softwares that came in the cd…in the end of the setup they gave me the Encryption Key and told me to choose the name of the network….i still have the Encryption Key but i want to get it changed because it […] File Encryption Help?? June 14th, 2008 I was wondering…I am looking for a way to encrypt a Microsoft Word Document but through an unconventional means. Is it possible to have Word ask for a password, and if the password is wrong, have it display predetermined text that you want to be seen, but is actually not the document. Then, if you […] which encryption software is very good and unbrakable encryption? June 13th, 2008 How do I enter the security encryption (WEP) key on my laptop for my router? June 12th, 2008 Hi there. I just bought a new laptop and I tried to connect it to our wireless router, but my new laptop (vista) was set up with security enabled so we set up a passphrase and the security encryption (WEP) key. After that, my dad's laptop could only get limited connectivity and he is not […] How do I break a Symmetric Encryption scheme where the Keyphrase is simply added to the Cleartext? June 10th, 2008 The encryption scheme takes the ASCII values of the characters in the Keyphrase and the Cleartext and adds them sequentially. So if the Cleartext is BAT, and the Keyphrase is CAT, the resulting word would be: chr(asc("B")+asc("C")) & _ chr(asc("A")+asc("A")) & _ chr(asc("T")+asc("T")) or the […] Pgp encryption help? June 9th, 2008 I entered a new user for my pgp whole disk encryption and i miss typed it now i cant remove or add a new password. If i restart windows then i can never use this hard drive again. I have a master pgp key but can this be used to over wright the whole disk encryption […] How does one implement the RSA (Encryption/Decryption) Algorithm? June 8th, 2008 question, is […] How does one implement the RSA (Encryption/Decryption) Algorithm? June 8th, 2008 Hello, […] Advanced encryption standard in CCM mode? June 6th, 2008 i hav fully studied and know the full algorithm of advanced encryption standard… i hav made a simulation program as well… now what i need to do is that i have to apply AES in CCM mode… what exactly is CCM mode??? and where can i find study material or a text on it??? How do I register a new encryption cipher? June 5th, 2008 I have made a new cipher which can be used to encrypt data, does anyone know how to register it or make it known? DVD SHRINK Encryption Problem? June 5th, 2008 I've been using dvd shrink for years.. and had about 95%success. However , lately something is going wrong. When I copy a dvd.. If I try to do a copy of the copy, it tells me its encrypted! But if I put the original back in, I can copy it fine! Why is dvd shrink encrypting my copies!? (Its […] Encryption Algorithm? June 4th, 2008 What is this? How does it work please give some brief history How would I find the network name, security tupe, encryption type, and security key/ passphras? June 4th, 2008 Find them on my mother's computer, so her and I can add me to the network. We just are not able to find them on her computer, and wondering where we should go. So any answer's that would work, please, please let me know. Thank you. How do you decrypt DES encryption with a key? June 2nd, 2008 I need to decrypt some DES encoded things, and they were encoded with a key. Linksys/SPEED TOUCH Router Encryption Key ? June 1st, 2008 My neighbors are using LINKSYS and SPEEDTOUCH wireless routers i can have an access if i get the encryption keys. . . . i'll appreciate if any one can help me in this regards. whever i try to connect it ask for the encryption key. . please help Why do some of the letters that I send come back to me with this encryption after many of the sentences when? June 1st, 2008 it's not how I wrote it? Many of my sentences are broken up by that symbol. What does that mean? Thanks. Nancy 100% ssl encryption for real? June 1st, 2008 if a site says 100% SSL encryption and has credit cards above it is it for sure or can sites just put it on to pretend and cheat u off ur money and steal ur money or do u have to be secure and reliable to gain the seal ssl encryption thanks !!!!!!!!!!!!!!!!! I design advanced encryption codes can you do that to? June 1st, 2008 Designing advanced encryption codes using numbers,letters and symbols is a fine art which requires alot of intellegence.One mistake in desiging the encryption code and the whole encryption could be useless.Its quite a brain strain to design.Can you design them too. what is the encryption method used to encrypt password in gmail and yahoo? May 31st, 2008 Playstation credit card encryption? May 31st, 2008 My dad wont let me buy games on the playstation 3 unless he knows that his card number will be encrypted, does anyone know what the security measures are for credit cards on there? thanks What is wrong with this encryption? May 31st, 2008 I have a Buffalo router w/ DD-WRT firmware on it. I got a Buffalo PCI adapter for my desktop computer, and it works fine, so there is nothing wrong with the connection. My laptop, a Compaq Presario V2000, can connect to the wireless fine if I disable encryption. But the moment I turn any encryption […] In data encryption technology, function application is a process of (Select one option.)? May 31st, 2008 A Moving characters around within a message B Applying a mathematical transformation to the message C Shifting chunks of data within the message around randomly D Cracking a code by brute force E Gaining access to a private key Filevault Encryption Problem? May 30th, 2008 When I turned on Filevault, I get the message "Filevault disk image not loading", then it tells me that Filevault is being turned off? Does anyone know the way to fix this problem for Mac os x? what is the standard of query string encryption in msn? May 29th, 2008 when you go to your hotmail page, you will have something like: "" and then you change "n=1069142737" to "n=106914xxxx", and copy the link , open the link in the same browser, it will still get the same thing as you change before. However, if copy the link to the new opened browser, it will ask me […] why cant I acces yahoo games after the encryption step? May 29th, 2008 when I log on to yahoo games, I get through to the encryption segment. it seems to load the applet but does not put it on screen. I have tried to turn off pop up blocker and open it to another window but that does not work. do you preprocess html form data like passwords with encryption before submitting it to a php script? May 28th, 2008 Examples please. data Encryption for windows xp? May 27th, 2008 I made data encryption for a folder in (F) hard drive under specific user, then i installed new fresh operating system (windows xp) unfortunately i cant access to folder encrypted and i cant remove the encryption !!! what can i do?, help me pls wireless encryption keeps shutting off!!!? May 26th, 2008 I have a laptop with a linksys wireless router, and when I connect to my roommate's wireless network and check if data encryption is on, it's disabled. When I try to turn on encryption, i get kicked off his network. When I try to re-connect, encryption is disabled again! Why is this happening??? How can I add an encryption to a video I created on Windows Movie Maker? May 25th, 2008 Is there a way to add an encryption to a video you make on WMM or is there one added on it when you burn it through Windows DVD Maker? When you burn on DVD on Windows DVD Maker, does it add an encryption? May 25th, 2008 How long would it take to crack full drive encryption (AES)? May 22nd, 2008 How long would it take a master hacker to gain access to my files if i have a full system drive encryption (AES). I have important data on my computer that i cannot allow a hacker to get access to. I think my computer is secure with firewalls and AV software. If a hacker got into […] How does RSA encryption work from a computer science perspective? Simple example? May 22 […] How do I break encryptions? May 20th, 2008 like passwords and usernames HDD Encryption? May 19th, 2008 Yes, i have 30GB External HDD and i need to put a password protect on the drive (for personal files) so that when i goto access the drive it asks me for a password. i need to be able to move the drive from computer to computer, so i cannot have a program that like […] Single string encryption.. best way? May 17th, 2008 I am developing a serial key generator that contains information within key that when accepted by the user, triggers an event based on that informaiton. Any suggestions for encrypting this single string so that the information is preserved but it would be difficult to crack? Any suggestions would be greatly appreciated. Thanks! Can someone recommend me a reliable and trustful file encryption software? May 16th, 2008 Belkin N1 wireless router, WPA encryption will not work with belkin wireless desktop card? May 16th, 2008 wep 64 works, but everyone knows thats easily crackable. when i try to connect it says my card does not support wpa, but it is a wireless G card and i know it supports it. so, i have enabled MAC address filtering and 64 bit WEP. will this be enough to protect myself and the hardwired […] Truecrypt Non-System Partition Encryption? May 15th, 2008 I partitioned my 40 Gig hard drive with one 20 Gb system partition, and a 10 Gig storage area. I successfully encrypted the entire system drive, and now I want to encrypt the 10GB partition, but it doesn't seem to recognize it. Can someone walk me how to do it without just "quoting the web site" which […] Ipod Encryption Removal? May 15th, 2008 Does anyone know how to take the DRM encryption off of iTunes content? Preferably for free. How does public key encryption provides confidentiality and assurance.? May 12th, 2008 128-Bit Encryption… What does that mean? May 12th, 2008 A friend of mine and myself have been in the trade of writing our own encryptions. We were talking about what it means to have a "128-Bit" Encryption and we couldn't exactly figure out what that meant. Our best guess was that "128 bits are encrypted at a time" but we weren't too sure. Anybody […] Recover Windows XP encryption due to reformat? May 11th, 2008 I need a utility that will decrypt the Windows XP Pro with service pack 2 and recover the files on my 2nd hard drive. My OS is on the first hard drive. Can someone provide me with a link so i can decrypt my files and start viewing my documents. I recently reformated my 1st hard […] In data encryption technology, function application is a process of:? May 9th, 2008 In data encryption technology, function application is a process of (Select one option.) A Moving characters around within a message. B Applying a mathematical transformation to the message. C Shifting chunks of data within the message around randomly. D Cracking a code by brute force E Gaining access to a private […] what dose connection error 56 problem with encryption of massage means in blackberry phone? May 9th, 2008 plz help me php password encryption MORE secured? May 9th, 2008 Hi, I've read about md5 hash function and i've used it. Now to protect my database from hacking and stuff… I want to add additional hash function and concatinate it with the other…. just for the sake of adding more security? $hashed = $md5($pass) '.' $sha1($pass); somethin like that..not sure though… what can u suggest […] al qaida encryption software "Mujahideen Secrets "? May 8th, 2008 i recently came across al qaida encryption software "Mujahideen Secrets " in below site can anyone tell me where can we download that software? Encryption before I can run my laptop? May 8th, 2008 I currently work for a company that required an encryption program for security reasons before I can do anything with my laptop. From time to time the computer asks me to change my password. I want to stop using this particular computer for business purposes and am wondering if there is a way NOT to […] WLAN Encryption-Ubuntu? May 6th, 2008 How do i remove a WEP key that i put in the Wireless Settings: WEP Key: *********************** box. I typed in the code fr my router there and now it doesnt work. Someone help, it took me long enought to install the drivers for it!! Encryption Hash? May 4th, 2008 76a082e44d26fd89907688e3be255530 I do not believe this is an md5 hash…Does anyone know what it is or even better…what it says after being decrypted?? I have done a full system drive encryption. Is my computer safe now? May 3rd, 2008 I have done a full system encryption using truecrypt. I want to know If there is anyway to decrypt the drive without the password or recovery disk? I want source code in java for monoalphabetic,playfair cipher encryption algos? May 2nd, 2008 encryption? May 1st, 2008 hey all, i have a shirt that says <begin encryption> BUJUN GNSBB YVNZ <end strong encryption> does anyone have any idea what it means? text encryption? May 1st, 2008 can any one recommend me a text encryption software that would enable me to convert large amount of text to just one string of characters? good day Portable encryption? May 1st, 2008 Hey. I'm looking for a freeware or open source (I might be able to make an exception if it's good enough) portable app (IE, stand alone, no need to install on the HDD Itself) that can encrypt a folder on a flash drive or memory card. I've tried several different programs but none of them […] is there any encryption software whereby i can assign a password of any length? May 1st, 2008 how does encryption work? May 1st, 2008 using public and private keys???? enabling required ppp encryption in cisco 2800 routers of vpn connections? April 30th, 2008 I am working on a VPN server (cisco 2800). I want to encrypt PPTP traffic between server and vpn client. How can i do that? I need required mode of encryption which does not work, when i set encryption to passive everything is ok and clients can connect with mppe128 encryption. client is winxp, server uses […] what do you think of encryption softwares? April 30th, 2008 Whole Disk Encryption for Mac? April 30th, 2008 Does a Whole Disk Encryption solution for the Mac (Intel) exist which will encrypt the entire drive? I see that PGP are developing this at the moment, but have not been able to find another solution that encrypts everything. why is it that when I connect to our wireless network the encryption key is already provided? April 29th, 2008 i'm using WPA2 Personal… the router is a Linksys WRT54G version 5 Simple C++ Encryption/Decryption Program. What am I doing wrong?????????????? April 28th, 2008 Output should be: Encrypted string is: uijt!jt!b!tfdsfu" Decrypted string is: this is a secret! Here is my code: #include <iostream > using std :: cout; using std :: endl; void encrypt( char [ ] ); // prototypes of functions used in the code void decrypt( char * ePtr ); int main( ) { // create a string to encrypt char string[ ] = "this is a secret!"; cout […] What exactly is WPA, related to wireless encryption? April 28th, 2008 What exactly is WPA, related to wireless encryption? April 28th, 2008 What type of encryption uses yahoo for the passwords? April 26th, 2008 95 printable ASCII chars. Passwords are 10 chars. Encryption rate of 6.4 mil/s. Calculate.? April 25th, 2008 encryption method question? April 25th, 2008 Ok I have a mac machine with osx and I can use 128 bit encryption for files. But why is it only 128 bit, my email is 2048bit so why wouldn't apple put a stronger encryption. also could 128 encryption be cracked by anyone. thanks troy g you dont seem to know what you […] what is an encryption key and how and what do i use it for? April 25th, 2008 What is an encryption key and how and what do i use it for? How do i become an administrator? Can any encryption program decrypt any encrypted file made with a standard algorithm? April 23rd, 2008 If I use an encryption program to encrypt a file with a standard algorithm (blowfish, e.g.), can a different encryption program successfully decrypt the file? I might start using an old encryption program like Scramdisk or E4M and want to know if I can use another program in future to decrypt the files, supposing that I […] Which cipher is used for voice encryption, Block or stream? Can we use AES (Rijndael) for voice encryption? April 21st, 2008 Wii: Setting up a wireless connection from my ethernet through my MacBook Pro’s airport using WEP encryption? April 21st, 2008 I've been at it for hours, and hours, and hours. I'm on a military base surrounded by tons of soldiers, and the connection I'm sporting right now is unsecure. I want to add a WEP encryption to it–as of right now, my Wii connects, after adding all the 10.0.2.11-type manual settings. But the WEP is […] wheres is the place on your pc to enter things like wireles name, encryption method, etc.? thanks!!? April 21st, 2008 i can't find it. i have linksys installed on my main computer and it can't find the connection so it told me to enter the info in my laptop so it could connect to the router. it's probably a stupid q, but where are the settings on a windows xp home edition laptop to enter wireless […] does anyone believe in the bible code? because i think i just discovered an encryption of my own…? April 20th, 2008 jesus christ _ _ _ _ s _ h _ i _ * tls encryption? April 18th, 2008 Why won't it load on limewire? Would anyone please kindly give me a link to a folder encryption freeware? April 18th, 2008 I want to encrypt some of my folders, I don't want to put them in a compressed file, etc. I just want encrypted folders. Any ideas? Thank you so much. How do I crack the encryption on a torrent? April 18th, 2008 I downloaded the Conway Twitty 4 CD collection torrent. I discovered that it is an encrypted rar file. Does anybody either have the password or know how I can get it? how can i get source code for an encryption algorithem? April 18th, 2008 File Encryption??? April 18th, 2008 Can someone give me the name of a file encryption software and also tell me how to use is (details)? I tried one before and I didn't get it. Thanks In Public-Key encryptions (PKE), can the final ciphertext be letters? because i think it can be only numbers.? April 16th, 2008 Maybe you haven't read this book, it's called "Codebreaker—the history of secret communication" by Stephen Pincock and Mark Frary, at the end, they give you 7 challenges for you to solve, i'm on the 5th one, and it is encrypted using PKE,"NCWLCBHOJHKOYMWTSUZLDUSANNUXRLVVKNRUIQWUWZGVAWZFMZL". I have tried to convert the letters into numbers, but i doubt that […] What is the -bit encryption for internet explorer 7? April 16th, 2008 in regards to the security of giving info over the internet (I have to in order to get a loan I need for college next year) What do you think of my program? Government encryption/ insane file eraser.? April 15th, 2008 Please tell me what you think. Please. What do you think of my program? Government encryption/ insane file eraser.? April 15th, 2008 Please tell me what you think. Please. How do I reset the encryption on my wireless router ? April 15th, 2008 I was trying to encrypt the wireless signal from my Belkin router yesterday. But only managed to block the signal from myself. I put in 64 bit random number/letter sequence but didn't know I had to record it. Now my laptop keeps prompting me for the WEP code when I want to connect to the […] What kind of web-based encryption uses 101 characters? April 15th, 2008 MD5 for example, uses 32 character strings. What im looking at, it encrypts file names in the url, and it consists of 101 characters (LowerAlpha, Upperalpha, and 0-9). The only things I really know about it are, -It encrypts different length file names, always winds up with 101 characters. -It's reversable. ;_; I've looked all over google, can't find anything […] Does vista have an encryption program? April 15th, 2008 I know my xp had a simple one, does anyone know if vista has one, or where to get an easy and quick one to use off the net for free? (Not a tech wizard) =) encryption problems, third party can see mails, right media is malicious.? April 15th, 2008 I get messages saying the 3rd party can view the mails sent, encryption problem etc. What does this mean and how can I get rid of this problem? I then used Spybot and when i open the inbox it says the right media is malacious…..do u want to Allow or Deny? When i click deny it doesn't open […] what if i lose the key of encryption ?? April 15th, 2008 Looking for free encryption or password locking program? April 15th, 2008 ok, here is essentially what I need, I need to know if there is a freeware program out there which allows me to lock or encrypt files on my own computer, save them to disk, then unlock them on another computer? what encryption is this? April 14th, 2008 %D9%A3n%91R%C8%E2o%B6%3C%FES%17%CA%A3%1B ??? Does anybody know if Advanced Encryption Standard (AES) is implemented in India? If so, where? April 14th, 2008 thanks for ur answer synfulvision…… yeah i know many have implemented AES in various languages such as C, Matlab, HDLs, etc,.. in India……. but i need to know whether AES has been adopted as an encryption technique in any of the applications or fields in India……….. Does encryption affect file size? April 14th, 2008 If I encrypt a file, does it drastically affect the size? Also, if I can connect to a website via https vs http, is there more network traffic needed? I know it can affect CPU usage, but I'm more concerned with the file size. Are we talking a nominal increase or a significant one? Does […] How do I remove a file or folder's encryption? April 14th, 2008 Here's the situation: I have a user account in Windows Xp and I made a password for it; then I decided to remove the password. When I tried to open My Documents using other User Account, it cannot be opened and there's a Window opened indicating, "<File Path> is not accessible. Access is denied." I […] Server/Contact form encryption? April 14th, 2008 How do I encrypt a server, or at least make it so third parties cannot sniff/see data from a contact form that is being sent to an e-mail address? Through Yahoo Sitebuilder I am working on an C++ encryption program and I can use some help? April 14th, 2008 Develop a program that performs simple cryptography. The program will allow the user to enter a string and a cryptographic key, and the program will then encrypt the string using the key and send the output of the encryption to a f\ile. This is what I have so far #include <iostream> #include<string> using namespace std; class Message { private: string […] public key encryption? April 14th, 2008 Given p= 5 & q = 7 and (n) = (p-1)*(q-1), find the inverse of 17 mod (n), and demonstrate the inverse is correct. How to decrypt a MS Word file (I have the encryption key, but not password) ? April 14th, 2008 I have the 40-bit encryption key for a MS Word *.doc file (it is G9Y68593E****) that Advanced Office Password Breaker found for me. When I press 'Decrypt File' nothing happens. Is there another way I can decrypt the file? There should be way to do so I guess… RSA (Public Key Encryption)? April 14th, 2008 Given p= 5 & q = 7 and = (p-1)*(q-1), find the inverse of 17 mod , and demonstrate the inverse is correct. show me the calculation RSA (Public Key Encryption)? April 14th, 2008 Given p= 5 & q = 7 and (n) = (p-1)*(q-1), find the inverse of 17 mod (n), and demonstrate the inverse is correct. What does this NSA encryption mean: BUJUN GNSBB YVNZ? April 14th, 2008 Each set of letters was above the other as such: BUJUN GNSBB YVNZ History of Encryption Algorithims? April 14th, 2008 Can someone please give me a brief history of Encryption Algorithims? How did the old DES (Data Encryption Standard) work? April 14th, 2008 Why was the DES unsuccesful? I need a simple explanation of why it didn't work since it had only a 54-bit key size, but the successor AES (advanced Encryption Standard) was more protective with a 128-bit key size. What I really need is a simple explanation of why the -bit key size matters, and what […] Storing passwords in database after encryption more details on the md5 command.? April 14th, 2008 md5 is the command used for encryption.but how do i use it..i m using jsp to display the values retreived from DB . can i use md5 with the sql querry statements? Storing passwords in database after encryption..I m new to Sql..so someone plz tell me the detailed proceedure April 14th, 2008 I m new 2 SQL ..plz help me. Thanks a lot..but where do i add the md5 command…in the sql querry statement? The databse i m using is SQL SERVER 2005. Encryption, views from a third party, huh? April 14th, 2008 Hey, I've just bought a new laptop and installed Mozilla Firefox onto it. I decide to look up my e-mail but everytime I enter passwords or user information, I get a warning telling me that I'm sending unencrypted information and whatever I do could easily be viewed byu a third party. How do I get […] Is the network key the same as an encryption key? April 14th, 2008 i guess so… WEP yeah Remove Encryption? April 14th, 2008 I have an encryption program on my computer, and i want to remove it. If i do, will there be any side effects on my computer? like loss data or anything like that? thanks. How do I change WEP (encryption key) password for my AT&T DSL? April 14th, 2008 My password is currently numeric numbers taken from the wireless router but would like it to be a standard password that I chose. Please help. Thank you for your time. A difficult encryption, very challenging.? April 14th, 2008 “Zlr dtjt nljjtnh: dxhf hft wldtj lm hft Hdxbx, hft wldtj lm hft Fzbxoqp dop dtoe tqlrkf hl hoet hft nophbt. Vtbgo noq gl qlhfxqk qld, pft xp fojabtpp: az axqxlqp ojt dljexqk lq hoexqk hft pwxjxhp op dt pwtoe: dftq obb mlrj ojt gldq hft exqkgla lm Fzjrbt dxbb ct zlrjp. Dftq gtobxqk dxhf hft ftjl, […] i download from a site and it has an a encryption password so how do i get that password???????? April 14th, 2008 Simple Java String Encryption? April 14th, 2008 So I recently made a fun little game of blackjack using java and swing and all that. I have the option to save the game data (user's name and bankroll), which simply writes the two into a text file in a specified folder. This then allows me to pull the two out of the text […] EFS Encryption Problem? April 14th, 2008 I have encrypted a lot of files. Now I can't decrypt them because maybe I changed my password. And that's the problem, I forgot the password on which I used when I encrypted those files. It says that I'm not trusted. I badly need help. I want the solution to be simple. What can be done to remove the encryption? I am trying to burn DVD's on my computer, but the encryption code stops me. HELP!!!!? April 14th, 2008 Is there any good, free file encryption software that won't give me viruses? April 14th, 2008 Why do DVDs have encryption and not CDs? April 14th, 2008 fault […] Encryption…? April 14th, 2008 I have just got Secure IT from Cypherix and I have the choice of 2 algorithms. Which is more seucre 256 AES or 448 Blowfish? Does anyone know if there is a way to burn a DVD that has a encryption on it? April 14th, 2008 I have a bunch of DVD that my sister wants but when I try to burn them it says it cant because of an encryption. Is there a way to work around this? Thanks I will take a look. question about installing the linkys g wireless router….encryption error? April 14th, 2008 i just installed a linkys wireless router,im planning on running my desktop and laptop wireless.now i had the desk top setup out in the living room and i had it working wireless(internet).i then transfered the desktop to the spare bedroom.since ive unplugged it and moved it,it does not want to reconnect wirless…down in the left […] lost my encryption key for AT&T 2wire, how do i get it? April 14th, 2008 What is the simplest public-key encryption algorithm? April 14th, 2008 I'm interested in programming and cryptography, and I want to write an asymmetric-key encryption program. I want to know what the simplest such algorithm is. Please tell me how it works or link to a site that explains it. aes encryption? April 14th, 2008 Are there any email "clients" that use encryption and has unlimited space? April 14th, 2008 I'm thinking of no longer using yahoo, but like the "unlimited" storage space of yahoo. thanks for all responses. Oh, forgot to add, is there one that has all of the above and email notification. I know yahoo has, through their instant messenging, I think instant notification. I know that's a lot to ask, but maybe someone […]
http://www.seasonsecurity.com/category/encryption
crawl-002
refinedweb
13,643
73.27
12.11.[1]. Here, we need two input images, one content image and one style image. We use a neural network to alter the content image so that its style mirrors that of the style image. In Figure 11.12,. Fig. 12.12 Content and style input images and composite image produced by style transfer. 12.11.1. Technique¶ Figure 11.13 shows an output of the CNN-based style transfer method. Figure 11.13, the pretrained. 12.13 CNN-based style transfer process. Solid lines show the direction of forward propagation and dotted lines show backward propagation. Next, we will perform an experiment to help us better understand the technical details of style transfer. 12.11.2. Read the Content and Style Images¶ First, we read the content and style images. By printing out the image coordinate axes, we can see that they have different dimensions. In [1]: import sys sys.path.insert(0, '..') %matplotlib inline import d2l from mxnet import autograd, gluon, image, init, nd from mxnet.gluon import model_zoo, nn import time d2l.set_figsize() content_img = image.imread('../img/rainier.jpg') d2l.plt.imshow(content_img.asnumpy()); In [2]: style_img = image.imread('../img/autumn_oak.jpg') d2l.plt.imshow(style_img.asnumpy()); 12.11.) 12.11.4. Extract Features¶ We use the VGG-19 model pre-trained on the ImageNet data set to extract image features[1]. In [4]: pretrained_net = the “Networks Using Duplicates (VGG)” section,. In [5]:. In [6]:. In [7]:. In [8]: 12.11.5. Define the Loss Function¶ Next, we will look at the loss function used for style transfer. The loss function includes the content loss, style loss, and total variation loss. 12.11. In [9]: def content_loss(Y_hat, Y): return (Y_hat - Y).square().mean() 12.11 \(\). In [10]: def gram(X): num_channels, n = X.shape[1], X.size // X.shape[1] X = X.reshape((num_channels, n)) return nd. In [11]: def style_loss(Y_hat, gram_Y): return (gram(Y_hat) - gram_Y).square().mean() 12.11. In [12]: def tv_loss(Y_hat): return 0.5 * ((Y_hat[:, :, 1:, :] - Y_hat[:, :, :-1, :]).abs().mean() + (Y_hat[:, :, :, 1:] - Y_hat[:, :, :, :-1]).abs().mean()) 12]: = nd.add_n(*styles_l) + nd.add_n(*contents_l) + tv_l return contents_l, styles_l, tv_l, l 12]:. In [15]: 12 150 by 225 pixels. We use the content image to initialize the composite image. In [17]:) epoch 50, content loss 10.10, style loss 29.39, TV loss 3.46, 0.01 sec epoch 100, content loss 7.50, style loss 15.44, TV loss 3.90, 0.01 sec epoch 150, content loss 6.30, style loss 10.38, TV loss 4.15, 0.01 sec epoch 200, content loss 5.65, style loss 8.11, TV loss 4.29, 0.01 sec change lr to 1.0e-03 epoch 250, content loss 5.58, style loss 7.93, TV loss 4.30, 0.01 sec epoch 300, content loss 5.53, style loss 7.78, TV loss 4.31, 0.01 sec epoch 350, content loss 5.47, style loss 7.64, TV loss 4.32, 0.01 sec epoch 400, content loss 5.41, style loss 7.49, TV loss 4.32, 0.01 sec change lr to 1.0e-04 epoch 450, content loss 5.40, style loss 7.47, TV loss 4.32, 0.01 sec Next, we save the trained composite image. As you can see, the composite image in Figure 11.14 retains the scenery and objects of the content image, while introducing the color of the style image. Because the image is relatively small, the details are a bit fuzzy. In [18]: d2l.plt.imsave('../img/neural-style-1.png', postprocess(output).asnumpy()) Fig. 12.14 \(150 \times 225\) composite image. To obtain a clearer composite image, we train the model using a larger image size: \(300 \times 450\). We increase the height and width of the image in Figure 11.14 by a factor of two and initialize a larger composite image. In [19]: image_shape = (450, 300) _, content_Y = get_contents(image_shape, ctx) _, style_Y = get_styles(image_shape, ctx) X = preprocess(postprocess(output) * 255, image_shape) output = train(X, content_Y, style_Y, ctx, 0.01, 300, 100) d2l.plt.imsave('../img/neural-style-2.png', postprocess(output).asnumpy()) epoch 50, content loss 13.82, style loss 13.70, TV loss 2.38, 0.03 sec epoch 100, content loss 9.58, style loss 8.72, TV loss 2.65, 0.03 sec change lr to 1.0e-03 epoch 150, content loss 9.27, style loss 8.40, TV loss 2.68, 0.03 sec epoch 200, content loss 9.00, style loss 8.12, TV loss 2.70, 0.03 sec change lr to 1.0e-04 epoch 250, content loss 8.97, style loss 8.08, TV loss 2.70, 0.03 sec As you can see, each epoch takes more time due to the larger image size. As shown in Figure 11.15, the composite image produced retains more detail due to its larger size. The composite image not only has large blocks of color like the style image, but these blocks even have the subtle texture of brush strokes. Fig. 12.15 \(300 \times 450\) composite image. 12.11. 12.11.9. Exercises¶ - How does the output change when you select different content and style layers? - Adjust the weight hyper-parameters in the loss function. Does the output retain more content or have less noise? - Use different content and style images. Can you create more interesting composite images? 12.11.10. Reference¶ [1] Gatys, L. A., Ecker, A. S., & Bethge, M. (2016). Image style transfer using convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 2414-2423).
http://d2l.ai/chapter_computer-vision/neural-style.html
CC-MAIN-2019-18
refinedweb
946
80.78
pcap_loop(3) pcap_loop(3) NAME pcap_loop, pcap_dispatch - process packets from a live capture or save- file SYNOPSIS #include <pcap/pcap.h> typedef void (*pcap_handler)(u_char *user, const struct pcap_pkthdr *h, const u_char *bytes); p until another ending condition occurs. pcap_dispatch() processes packets from a live capture or ``savefile'' until cnt packets are processed, the end of the current bufferful of packets is reached when doing a live capture, the end of the ``save- file'' is reached when reading from a ``save ``savefile''. read timeouts occur; instead, it attempts to read more packets. pcap_dispatch() returns the number of packets processed on success; this can be 0 if no packets were read from a live capture (if, for example, they were discarded because they didn't pass the packet fil- ter,.'' It returns -1 if an error occurs or -2 if the loop terminated due to a call to pcap_breakloop() before any pack- ets were processed. If your application uses pcap_breakloop(), make sure that you explicitly check for -1 and -2, rather than just checking for a return value < 0. If -1 is returned, pcap_geterr() or pcap_perror() may be called with p as an argument to fetch or display the error text. SEE ALSO pcap(3), pcap_geterr(3), pcap_breakloop(3), pcap_datalink(3) 13 October 2013 pcap_loop(3) libpcap 1.7.2 - Generated Sat Mar 14 06:25:49 CDT 2015
http://www.manpagez.com/man/3/pcap_loop/
CC-MAIN-2015-48
refinedweb
228
66.78
I'm trying to create a stack where I can push integers into it. So far I have this: #include <stdio.h> #define N 20 typedef struct { int data[N]; // array of at most size N // N should be a constant declared globally int top; } stack_t; void push(stack_t *stack, int element); int main(){ void push(stack_t *stack, int n) { if (stack->top == N - 1) { printf("Warning: Stack is full, You can't add'\n"); return; } else { stack->data[++stack->top] = n; } } stack_t * e_stack; // Empty stack created push(e_stack, 2); } You're right, all you've done is created a pointer that points at...something, but probably not a stack_t. You need to allocate something to point at. See malloc. Then you'll need to initialize stack_t::top to -1 or some other value. Zero probably won't work here since that index would likely be the first item in the stack.
https://codedump.io/share/wDp5CERBiT7i/1/how-to-create-an-empty-stack
CC-MAIN-2018-17
refinedweb
152
69.11
How many times have you wanted to parse a string and had to re-write a little function here or there to extract what you want? This class is exactly what you've been waiting for. Add the following include: #include "QStringParser.h" CString sTest = "abc,def,\"efg,hij\",klm,nop,\"qrstuv\",wxyz"; CQStringParser p(sTest, ',', '\"'); CString sBuffer = ""; int nCount = p.GetCount(); if (nCount > 0) { for (int i = 1; i <= nCount; i++) { sBuffer += (p.GetField(i) + CString("\n")); } AfxMessageBox(sBuffer); CString sTemp; int nElement; sTemp = p.Find("efg", &nElement); if (nElement > 0) { sBuffer.Format("Found string - %s", sTemp); AfxMessageBox(sBuffer); } else { AfxMessageBox("No matching string found ('efg')."); } sTemp = p.FindExact("abc",&nElement); if (nElement > 0) { sBuffer.Format("Found string - %s", sTemp); AfxMessageBox(sBuffer); } else { AfxMessageBox("No exactly matching string found ('efg')."); } } else { AfxMessageBox("No strings parsed."); } You can (of course) easily create an array of CQStringParser objects if needed, and read in an entire delimited file before processing the parsed strings. Alternatively, you can use the same object over and over again with different strings. CQStringParser The parsed fields begin at element #1 because I store the original string in element 0. #include "QStdStringParser.h" And where it's needed, do something like this (I used this code to test the class): std::string sTest = "abc,def,\"efg,hij\",klm,nop,\"qrstuv\",wxyz"; CQStdStringParser p(sTest, ',', '\"'); The strings being passed into the class and retrieved from the class are of type std:string, but other than that, the class functions identically to its MFC-specific cousin (which works with CStrings). std:string CString 06 May 2001 - As I use my classes they mature and grow, and CQStringParser is no exception. In this iteration, I've simplified the code by eliminating the overloaded constructor and parsing functions. I've also added new functionality. You can now Add, Set (change), Insert, and Delete fields from the parsed string. The demo (link at top this article) includes a simple dialog-based application which allows you to play around with the primary functionality of the class. As usual, the class is fully documented. Have a ball. 10 August 2001 - I recently had reason to need the use of this class in a NON-MFC evnironment, and in order to facilitate this requirement, I created a new version of the class that uses STL instead of the MFC collection classes. The demo program now contains both the CQStringParser class, as well as the CQStdStringParser class. The only externally obvious difference is that the strings you pass in and get back are of type std::string instead of CString. The method names and class functionality are otherwise identical to the original class. 15 March 2002 - Fixed a parsing bug in the string parser classes, and changed the sample app to allow you to change the quote and/or delimiter character in the dialog box. This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below. A list of licenses authors might use can be found here CFile cfFile ("C:\\TextFile.txt", CFile::modeNoTruncate | CFile::modeRead); CArchive ar (&cfFile, CArchive::load); // Load its contents into a CArchive CString strLine = ""; // Initialise the variable which holds each line's contents if (!ar.ReadString(strLine)) // Read the first line of the CArchive into the variable return; // Failed, so quit out do // Repeat while there are lines in the file left to process { if(strLine.GetLength() == 0) // If the line is empty, skip it { continue; } CStringParser stringParser(strLine, ','); // for quoted strings, you'd use this: //CStringParser stringParser(strLine, ',', '\"'); if (stringParser.GetCount()) { // Do something with these values in the variables // simply retrieve the string CString sText = stringParser.GetField(1); // or find a substring in the list of parsed strings int nElement = 0; sText = stringParser.Find("my text", &nElement); // or find an exact string match in the list of parsed strings sText = stringParser.FindExact("my text", &nElement); } }while(ar.ReadString(strLine)); General News Suggestion Question Bug Answer Joke Praise Rant Admin Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages.
https://www.codeproject.com/Articles/917/String-Parsing-Class-supports-quoted-strings?msg=132324
CC-MAIN-2016-50
refinedweb
702
63.29
Using Perl in PostgreSQLby Andrew Dunstan 11/10/2005 Most Perl users are familiar with using Perl to talk to databases. Perl's DBI is, along with ODBC and JDBC, one of the most common and widely ported database client interfaces. The DBI driver for PostgreSQL, DBD::Pg, is very well-maintained, and quite featureful. For example, it recently acquired proper support for prepared statements. Previously, the client library had emulated these, but with the latest DBD::Pg and PostgreSQL distributions, you can get real prepared queries, which can lead to big performance gains in some cases. However, there is another way of using Perl with PostgreSQL--writing little Perl programs that actually execute inside of the server. This way of using Perl is less well known than using the DBI driver, and is, as far as I know, unique to PostgreSQL. It lets you do some very cool things that you just can't do in the client. Server-Side Languages In fact, PostgreSQL lets you create server-side routines in quite a few languages, including one called PL/PGSQL that is all its own, and is somewhat similar to Oracle's PL/SQL. The PostgreSQL core distribution supports and maintains three other procedural language interfaces to third-party interpreters: Perl, Python, and Tcl (the first procedural language that PostgreSQL supported). There are also other languages maintained outside of the core distribution for various reasons, including PL/Java (or an alternative flavor, PL/J), Pl/R, PL/Ruby, PL/PHP, and a vastly better PL/Python. If you, like me, are at home in Perl, you will probably want to write your server-side functions in Perl, too. PostgreSQL languages come in two flavors: trusted and untrusted. Trusted languages are those that guarantee not to touch the file system, or other machine resources, while untrusted languages make no such promise. Postgres also protects your machine by refusing to run as root (or a similarly privileged user on Windows). But on a database server, your most valuable asset is probably the data itself, and so you need some additional protection where code might maliciously attack your data via access to the machine's resources. For that reason, only highly privileged database users ("superusers") can create functions in untrusted languages. Only they are allowed to install procedural languages at all, trusted or untrusted. Unless you have installed another language, the only ones available are SQL (which is trusted) and C (which is untrusted). Enabling PL/Perl PL/Perl actually comes in both of these flavors--the trusted version runs inside the standard Perl Safe container, with very few native Perl operations allowed. The easiest way to install either flavor of PL/Perl in a database is via the createlang program that should be part of your distribution. For example: $ createlang plperl mydb For the untrusted version, use instead: $ createlang plperlu mydb A Simple Example The easy way to show how to use PL/Perl is to create a very simple function; one that would be a lot harder to do otherwise. Suppose that you want to test if a given piece of text is a palindrome (a word that reads the same backwards as forwards), disregarding white space and the case of the letters. Here's a piece of SQL to define the function: create function palindrome(text) returns boolean language plperl immutable as ' my $arg = shift; my ($canonical = lc $arg) =~ s/\s+//g; return ($canonical eq reverse $canonical) ? "true" : "false"; '; Given this function, you can write SQL like: select name, region, country, population from towns where palindrome(name); If you can't build a test like this on the server side, you have to get all of the towns in the client and filter there, but that's horribly inefficient. Getting the server to test for you is far nicer. The create function statement declares a function. It requires a name, an argument type list (which can be empty, but you must use the parentheses), a result type, and a language. In this case, I added a further argument, immutable, which tells Postgres that I guarantee that the function value depends only on its input, enabling it to do some optimization. Finally, there is the AS clause, followed by an SQL string literal that contains the body of the function. The body is actually the body of an anonymous subroutine--the glue code wraps it up in a call to the Perl interpreter to create this subroutine and return the reference to the glue code, which stashes it away for later use. The glue code also stores the text in the database catalogs for later retrieval if necessary. Postgres compiles each function once per database session; it does not cache any Perl bytecode. From a programmer's point of view, remember that it wraps up your code in something like: sub { <your text> } PLPerl function arguments appear in @_, just like in regular Perl subroutines, and your code can handle them the same way. The string that contains the body is a normal SQL string and has to obey the same escaping rules as other SQL strings. Because this can lead to some considerable ugliness with strings that need reparsing, version 8.0 of PostgreSQL introduced an alternative quoting mechanism. Implemented by Tom Lane and me, it's colloquially known as "dollar quoting," and can make function bodies more readable in SQL code. I'll use that for my subsequent examples. The argument arrives as a string, no matter what type it has in the database. For simple types, just return a string, which must be a valid literal of the return type. The SQL value NULL maps to the Perl value undef, both for arguments and for return values.
http://archive.oreilly.com/pub/a/databases/2005/11/10/using-perl-in-postgresql.html?page=1
CC-MAIN-2015-22
refinedweb
960
57.71
what is the easiest way to get a set of random numbers and set them as a variable? This is a discussion on random numbers within the C++ Programming forums, part of the General Programming Boards category; what is the easiest way to get a set of random numbers and set them as a variable?... what is the easiest way to get a set of random numbers and set them as a variable? im new Wave upon wave of demented avengers march cheerfully out of obscurity unto the dream. You can use the randamize function... Function rand(100).. So random numbers within 100 . TO store them into variables.. a=rand(100).. You have to include the file math.h >>> Function rand(100).. You may be able to on some compilers, but it is not standard. Wave upon wave of demented avengers march cheerfully out of obscurity unto the dream. use an array and a loop..... get 10 random numbers between 1 and 40 ... Code:#include <iostream> #include<cstdlib> #include <ctime> using namespace std; int main () { srand((unsigned)time(NULL)); // seed random number generator with system time int MyRand[10]={0}; // an array to store our random values; for(int i=0;i<10;++i) { MyRand[i]=rand()%40+1; // get random number between 1 and 40 and store in array cout<<MyRand[i]<<endl; } return 0; } Last edited by Stoned_Coder; 02-27-2002 at 07:59 AM. Free the weed!! Class B to class C is not good enough!! And the FAQ is here :- This is a very simple task my friend.... Somewhere in your program perferibly at the beginning have srand(clock()); to set the random seed. and to assign a variable to be a random number from whatever say... from 1 to 100. num = rand()%99+1; thats it Thank you have a nice day.
http://cboard.cprogramming.com/cplusplus-programming/11901-random-numbers.html
CC-MAIN-2015-35
refinedweb
307
75
Over the past few months there has been a battle waging in the world of domain names; the overseeing body ICANN had hatched a plan to transfer the entire .org registry to a private company, to significant opposition from .org domain holders, concerned citizens, and the Electronic Frontier Foundation. Part of the process before the deadline for handover on the 4th of May was a due dilligence process during which the ICANN board would review submissions related to the deal, and after completing that task the board have witheld their consent for it to go ahead. As you might expect the EFF are declaring a victory, but they also make the point that one of the reasons the ICANN board rejected the deal was a potential risk of a debt liability for the organisation. It’s tempting to frame this as a rare victory for the Little Guy in the face of The Man, but the reality is probably more nuanced. When the deal was hatched the world had not yet come to terms with the COVID-19 pandemic, meaning that the thought of a post-virus economic slump would not yet have been on their minds. It’s thus not unexpected that the ICANN board would think about the financial aspects of it as well as the many objections, because in a time of economic pain the possibility of it going sour would be significantly increased. The future of the .org and other registries should remain a concern to internet users, because after all, this is not the first time such a thing has happened. 10 thoughts on “ICANN Board Withhold Consent For .ORG Deal” Woot. w00t indeed. dot-org should be managed by a non-profit entity. Canada’s CIRA (which manages the dot-ca TLD) might be a good model. It is long overdue for ICANN to be disbanded and an non-commercial international body created instead… You’re absolutely right. Same s*it different a*shole. In any big organization the cream is removed and the scum rises to the top. Neat. But why is this here? It’s here to make people say “w00t” :-D Because we care about the Internet. I’d say ICANN is all but irrelevant today except it’s a huge security issue. We’ve built an internet of distributed multiple redundancy system and the put a single dependence in domain registry. Distributed blockchain namespace would be far superior. They’re charging $10 to $15 a year for a single entry in a database. It’s like the security certificates of yesteryear. The only reason people pay is there are no alternatives. Very glad to hear this. The .ORG domain is an important public resource that should not be privatised. Please be kind and respectful to help make the comments section excellent. (Comment Policy)
https://hackaday.com/2020/05/01/icann-board-withhold-consent-for-org-deal/?replytocom=6241708
CC-MAIN-2021-10
refinedweb
473
63.39
During my 11 years at Google, I can confidently count the number of times I had to do a “clean build” with one hand: their build system is so robust that incremental builds always work. And when I say always, I really mean always. Phrases like “clean everything and try building from scratch” are unheard of1. So… you can color me skeptical when someone says—a I’ve recently heard—that incremental build problems are due to bugs in the build files ( Makefile, CMakeLists.txt, or what have you) and not due to a suboptimal build system. And, truth be told, they are right: incremental build failures are indeed caused by bugs in the build files and such bugs abound. The problem is that the vast majority of engineers don’t give a $#!& about the build system. They rightfully want their code to compile, and they’ll do whatever it takes—typically copy/pasting large amounts of code—to coerce the build system into doing what they need. Along this path, they introduce subtle bugs which then lead to strange build problems. Scale that behavior up to tens, hundreds, or thousands of engineers… and any one little bug balloons. Thus the “run a clean build” mantra is born. The same is true at Google in this regard. While the majority of Google engineers praise their build system, most don’t care about mending it particularly well. These engineers will also copy/paste large amounts of Starlark code just to make things “work”, because this is the reasonable thing for them to do. And yet… in spite of all the abuse… clean builds are not necessary at Google to keep the machine running building. So, how is this possible? How is Google’s build system resilient to thousands of engineers modifying build files in a gigantic monorepo, most of them without truly understanding what goes under the hood? The answer lies in the build tool itself: Bazel. Of course Google engineers also make mistakes in their build files. All of us do. But, when those mistakes happen, the build tool refuses to build the code upfront without giving the appearance of success. In other words: the problems that cause incremental builds to fail are real problems and the system surfaces them early on in any build2. To make this possible, the build tool must know, in a fool-proof and perfect manner, when a rule3 (such as a compiler invocation) has to be re-executed. This decision must account for all possible factors that influence the output of the rule. Sounds simple, right? Indeed it does: this is a very simple concept in theory and most build tools claim to adhere to it. The devil lies in the details, though, and in practice most tools don’t get those details right. But when you do get the tool right, a cultural shift happens. People start trusting that the tool is correct, and when they trust that it is, their expectations and behavior changes. “Do a clean build” is no longer a solution that works, so they get to take a second look at their own build rules, fix them, and learn better practices along the way. In this post, I want to take a look at common failure modes that are often fixed by running clean builds. For each of them, I will describe how a good build tool addresses them and I’ll refer back to Bazel for my examples because Bazel at Google proves that such a utopian system exists. Rest assured, however, that the concepts and ideas are applicable to any system and possibly in ways that differ from what Bazel does. Undeclared dependencies The first and most common problem that breaks incremental builds is caused by undeclared dependencies. The scenario goes like this: you do a first build, then modify a file, and then do a second build. This second build does some “stuff” but the changes you made are not reflected in the final artifacts. What happened? Simply put: the build system didn’t know that the file you modified was part of the build graph. The file was indeed used in the build by some tool along the process, but the build system was oblivious to this fact because the rule that ran that tool didn’t specify all used files as inputs. This is a very common and subtle problem. Say you have a rule to build a single .c source file. Because of #include directives, it is pretty obvious that this rule has to specify all included files as inputs. But this rule also has to specify the dependencies of those includes, and the dependencies of those dependencies, and so on. The build rule must account for the full transitive closure of the include files to be accurate. “Ah!”, I hear you say, “Most build systems are aware of this and use the C preprocessor to extract such list, so they are correct!”. Yes, mostly. But… did they account for the fact that the compiler binary itself is also an input to the rule? Most likely they did not. And of course this is only about C where the file inclusion problem is well-understood… but what about the many other languages you might encounter? The point is: it is very hard to know, on a rule by rule basis, what all the necessary inputs are for its execution. And if the rule misses any of these inputs, the undeclared dependencies problem will be lurking to bite you (much) later. Which, again, is a bug in your build files: you should have stated the inputs to a rule correctly upfront; right? Right. So why didn’t the build system catch this situation? If the build system had caught the undeclared dependency during the very first build attempt, it would not have put you in an inconsistent state: you would have been forced to fix the build files before the build would actually complete. A well-behaved build system will ensure that the build rule fails in all cases if it has not specified all necessary inputs as dependencies. By doing this, the build tool prevents you from getting into a state where you have some artifacts that were generated from inputs the tool didn’t know about. Achieving this goal of detecting undeclared dependencies isn’t trivial if you want the build system to be fast. Consider these options: You can run each build rule in a fresh container or virtual machine to precisely control the contents of the disk and thus what the rule can do. Unfortunately, setting up and tearing down one of these for each build rule would be prohibitively expensive. Mind you, Bazel has a Docker strategy that does just this, but it’s not useful for interactive usage. You can relax the container approach and implement a lighter sandboxing approach to control which files the rule is allowed to access. To achieve this, you can rely on system-specific technologies such as Linux’s namespaces or macOS’s sandbox-exec, and then finely tune what each rule is allowed to do. The downsides of this approach are that the more strict you make the sandbox, the slower it becomes, and that this approach is not portable. You can trace the activity of a rule as it runs to record the files it touches and compare that list to the list of declared inputs after-the-fact. This approach is much faster than sandboxing and I think Microsoft’s BuildXL implements this. The downside is that this requires assistance from the operating system, possibly in the form of a kernel module, which makes it a no-no in many environments. You can rely on remote execution on an rule basis. If you use remote execution, the build tool will only ship declared inputs to a remote worker in order to run a command, and that command will fail if some of its necessary inputs were not uploaded. This solution is essentially the same as the approach to use fresh virtual machines for every rule, but scales better. And this solution can be combined with the sandboxing techniques described above to ensure that whatever happens on the remote worker doesn’t end up relying on worker-specific files. In the case of Google, builds are clear of undeclared dependencies because they rely on remote execution by default. Any code that is checked in will be built using remote execution (even if you did not use remote execution on your development machine), so the correctness of your build rules will be enforced at that point. As a result, it is impossible for you to commit code that fails this sanity check. File modification times Another problem that breaks incremental builds goes like this: a source file changes but its modification time does not. After that, the build tool doesn’t notice that the file has changed and the file isn’t processed again as part of an incremental build. You might think that this issue never happens but not all situations in which this problem arises are hypothetical or unlikely. Certainly there are tools that purposely don’t update the modification time, but these are rare. More subtle but common cases involve the file system’s timestamp resolution not being fine enough. For example: HFS+ has 1-second resolution timestamps so it’s perfectly possible to write a file once, do a build, update the file, do another build and have the second build not see the change. This seems very unlikely (who types that fast?) until you automate builds in scripts and/or your build produces and consumes auto-generated source files. A well-behaved build system knows precisely when an artifact has to be rebuilt because it tracks file contents, not just timestamps. This ensures that the build tool is always aware of when artifacts are stale. And in fact, this is what Bazel does internally: Bazel tracks the cryptographic digest of each file it knows about so that it can precisely know if an input is different than it was before. The question is, though: how does the build tool know when to recompute the digests of the files? Doing this computation on each build would be precisely correct but also prohibitively expensive. (Mind you, this is what Bazel did on macOS when I started working on this port and it was not nice.) So we need a way to know when to recompute the digest of a file… and this seems to take us back to scanning for timestamp changes. Not quite. There are various tricks we can pull off to improve on just using timestamps: File system aids: if you control the file system on which your sources and outputs live, you can add primitives to the file system to tell you precisely what files have changed between two points in time. Imagine being able to ask the file system: “tell me the list of files that have changed between the moment the previous build ran and now”, and then using that information to only compute those digests. I’m not aware of any public file system that does this, but Bazel has the right hooks in it to implement this functionality. I know of at least one other company other than Google tried to take advantage of them. Watching for file changes: the build tool can asynchronously monitor for changes to all files it knows about using system-specific primitives such as epollon Linux or fseventson macOS. By doing this, the build tool will know which files were modified without having to scan for changes. Combining modification times with other inode details: when the previous options are not available, the build tool will have to fall back to scanning the file system and looking for changes. And… in this case, we are indeed back to inspecting modification times. But as we have seen, modification times are weak keys, so we should combine them with other details such as inode numbers and file sizes. Understanding file system timestamp granularity: given what we discussed above, if the tool knows that the file system does not have sufficient granularity to tell changes apart, the tool can work around this. Bazel does have logic in it to compensate in the presence of HFS+, for example. If the build tool follows all of these tricks, then using content hashes on top might seem to only bring minor benefits. But it does have them as we will see later, and they are not as “minor” as they might seem. Command line differences Another problem that often breaks incremental builds is when the build system does not recognize our intentions. Suppose that your project has a DEBUG feature flag that enables expensive debugging features in C++ files that exist throughout the source tree. Now suppose you do a first build with this feature disabled, then notice a bug that you want to debug, add -DDEBUG to the CFLAGS environment variable, and build a second time. This second rebuild does nothing so you are forced to do a clean and start over to convince the build system to pick up the new configuration. This problem surfaces because the build tool only accounted for file differences in its decisions to rebuild artifacts. In this case, however, no files changed: only the configuration in the environment did and thus the build system didn’t know that it had to do anything different. A well-behaved build system tracks non-file dependencies and how they change so that it can rebuild affected artifacts. The tool does so by explicitly being aware of the configuration that is involved in the build. This is a very difficult problem to solve perfectly because what we are saying is that we need a way to track all environmental settings that might possibly affect the behavior of a command. In the example above, we modified the environment. But the build could also have depended on the time of the day, or certain networking characteristics, or the phase of the moon. Accounting for all of these is hard to do in an efficient manner because we are back to the discussion on sandboxing from earlier. In practice, fortunately, we can approximate a good solution to the problem. This problem primarily arises due to explicit configuration changes triggered by the user, and these configuration changes are done either via files, flags, and environment variables. If we can account for these, then the build tool will behave in a reasonable manner in the vast majority of the cases. One way to do achieve this goal is to force the configuration to be expressed in files (adding logic to bypass the environment), and then making all build rules depend on the configuration files. This way, when the configuration files' modification times change, the build system will know that it has to rebuild the world and will do so. This approach indeed works and is implemented by many tools, including GNU Automake. But this approach is extremely inefficient. Consider what happens when your project contains more than one type of rule in it, say because not all sources are C. And I’m not necessarily talking about a polyglot project: having other kinds of artifacts that are not binaries, such as documentation, are sufficient to trigger this issue. In this case, if all we did was change the value of the CFLAGS setting, we would only expect the C rules to be rebuilt. After that, we would expect the consumers of those rules to be rebuilt as well, and so on. In other words: we would only want to rebuild the dependency paths that take us from leaf rules to the rules that might possibly yield different results based on the configuration that changed. A better (and simpler!) solution to this problem is to forget about files and to track the environmental details that affect a rule at the rule level. In this model, we extend the concept of inputs to a rule from just files to files-plus-some-metadata. The way this looks like in practice, at least in Bazel, is by making the command lines an input to the rule and by “cleaning up” the environment variables so that they are not prone to interference from user changes. In the example we showed above, adding -DDEBUG to the C configuration would cause a rule of the form cc -o foo.o -c foo.c to become cc -DDEBUG -o foo.o -c foo.c. These are clearly different even to the untrained eye and can yield different outputs. By tracking the command line at the rule level, the build tool can know which specific rules have to be rebuilt, and will only rebuild those that were affected by our configuration change. Output tree inconsistencies The last problem that sometimes breaks incremental builds appears when we end up with mismatched artifacts in the output tree. As in the previous section, suppose your project has a DEBUG feature flag that enables expensive debugging features. Now suppose again that you do a full build with this feature disabled. But this time, you then go to a specific subdirectory of the project, touch a bunch of files, and rebuild that subdirectory alone with -DDEBUG because you want to troubleshoot your changes. Now what happens? All of the outputs in the output tree were built with DEBUG disabled except for the tiny subset that was rebuilt with this flag enabled. The output tree is now inconsistent and the build tool has no way of knowing that this has happened. From this point on, things might work well, or they might not. In the case of something as DEBUG-type inconsistencies, you might observe weird performance issues at runtime, but in the case of flags that change the ABIs of the intermediate artifacts, you might observe build failures. At that point, a clean build is the only way out. A well-behaved build system avoids inconsistent output trees by tracking the configuration that was used to build each artifact as part of the artifact itself, and groups such artifacts in a consistent manner so that they are never intermixed. This is a very hard problem to address if you want the tool to remain usable. In the limit, you would hash the configuration and make that value part of the artifact path. Unfortunately, doing so would cause the amount of separate files in the output tree to explode, would cause disk usage to explode too, and would bring confusion as the paths in the output tree would be numerous and nonsensical. The approach that most tools take—assuming they are aware of this problem—is to compromise. Most account only for major configuration differences in the way the output tree is laid out. Bazel and Cargo, for example, will separate release and debug builds into parallel output hierarchies. Bazel will go one step further and also account for CPU targets in this scheme. The result is a relatively usable output tree, but it is not perfectly correct because it’s still possible to end up with intermixed outputs. As far as I can tell, this is an open research area. Collateral benefits Wow that was long, but that’s about it regarding the kinds of problems that break incremental builds and various techniques to address them. Before proceeding to look at other benefits that we get from following these better practices, let’s review what we have seen so far: All input files to a rule must be represented in the build graph. These have to be specified either directly in the build files or indirectly via some form of dynamic discovery or introspection. Changes to input files have to be detected in a precise manner: modification times are insufficient. In the best case, content hashes provide correctness, but if they are unsuitable for performance reasons, other file properties such as inode numbers and file sizes should be accounted for. All environmental details that affect a rule, and especially the command line of the rule and the environment variables passed to it, must be represented in the build graph as inputs to that rule. If these inputs change, the rule has to be rebuilt. Artifacts have to be stored accounting for the configuration that was used to build them to prevent mixing incompatible artifacts. A common way to do this is to shard the output tree into parallel trees named after specific configuration settings (debug vs. release, target platform, etc.). Few build systems implement all of these techniques. But once a build system has these techniques, magic happens: Clean builds become a thing of the past, which was the whole premise of this post. “It works on my machine” also becomes a thing of the past. Different behaviors across different machines most often come from factors that were not accounted for during the build, thus yielding different artifacts. If the build can account for all those factors, and you make sure that they are the same across machines (which you’d want to do if you were sharing caches, for example), then the builds will be the same no matter where they happen. Caching works for free across machines and even across users. If we can express everything that affects the behavior of a rule as a cache key, then we can cache the output of the rule using that key. Based on what we said until now, this cache key must account, at the very minimum, for the digests of all input files to the rule, the command line used in the rule, and the environment settings that might affect the rule. If you can tightly control the environment (such as by cleaning up environment variables or using sandboxing to limit network access), the better, because your cache key has to account for fewer details and will be reusable in more contexts. Optimal builds follow. We didn’t touch upon this earlier, but a benefit that is immediately derived from tracking file contents instead of timestamps is that builds become optimally efficient. Suppose you have a utils.cfile at the base of your dependency tree. In a common build system, if you touch this file to fix a trivial typo in a comment, the system will invalidate the whole dependency chain: utils.cwill be rebuilt as utils.o, then utils.owill get a newer timestamp which in turn will trigger the rebuild of all of its consumers, and so on until we reach the leaves of the dependency tree. This needn’t happen. If we track file contents instead of the modification time, and if the modification of utils.ccauses the new utils.oto match the previous file bit-by-bit, then no other rule downstream from that will have to be rebuilt—even if utils.o’s timestamp changes. There is a lot of good that comes from embracing a good build system. Bazel checks most of the boxes I have outlined until now, but “migrating to Bazel” isn’t a realistic proposition for many developers. Under those conditions, being aware of the causes behind broken incremental builds is important, because then you can apply tactical fixes to work around the deficiencies of the underlying tool. I’m fully aware that this post has packed a ton of content, some in a haphazard way because I didn’t have the time to make it shorter. I probably also missed some key root cause behind broken incremental builds. In case of any doubt, please let me know via any of the contact links below. And with that, let’s say good riddance to this 2020. Here is to a better 2021! 🥳 There have been times when incremental builds did actually break, but those were due to bugs in the build system itself—which are unusual. And when those kinds of bugs happen, they are considered outages and are fixed by the infrastructure teams for everyone, not by telling people to “run a clean build”. ↩︎ I can’t resist but compare what I just said here to the differences between C and Rust. Memory management problems are a fact, and no matter what we want to believe, people will continue making them if the language allows them to. The resemblance in this context is that most programs will happily run even if they have memory-related bugs—until they don’t. In the presence of such bugs, C has given us the appearance that the code was good by allowing it to compile and run, postponing the discovery of the bugs until much later. In contrast, Rust forbids us from ever getting to that stage by blocking seemingly good but unsound code from ever compiling. ↩︎ For simplicity, this post talks about build rules and build actions interchangeably even though they are not strictly the same. Whenever you see “rule”, assume I’m talking about a command that the build system executes to transform one or more source files into one or more output artifacts. ↩︎
https://jmmv.dev/2020/12/google-no-clean-builds.html
CC-MAIN-2022-27
refinedweb
4,195
67.99
Your answer is one click away! I'm trying to implement a thread-safe Map cache, and I want the cached Strings to be lazily initialized. Here's my first pass at an implementation: public class ExampleClass { private static final Map<String, String> CACHED_STRINGS = new HashMap<String, String>(); public String getText(String key) { String string = CACHED_STRINGS.get(key); if (string == null) { synchronized (CACHED_STRINGS) { string = CACHED_STRINGS.get(key); if (string == null) { string = createString(); CACHED_STRINGS.put(key, string); } } } return string; } } After writing this code, Netbeans warned me about "double-checked locking," so I started researching it. I found The "Double-Checked Locking is Broken" Declaration and read it, but I'm unsure if my implementation falls prey to the issues it mentioned. It seems like all the issues mentioned in the article are related to object instantiation with the new operator within the synchronized block. I'm not using the new operator, and Strings are immutable, so I'm not sure that if the article is relevant to this situation or not. Is this a thread-safe way to cache strings in a HashMap? Does the thread-safety depend on what action is taken in the createString() method? Concurrency is easy to do and hard to do correctly. Caching is easy to do and hard to do correctly. Both are right up there with Encryption in the category of hard to get right without an intimate understanding of the problem domain and its many subtle side effects and behaviors. Combine them and you get a problem an order of magnitude harder than either one. This is a non-trivial problem that your naive implementation will not solve in a bug free manner. The HashMap you are using is not going to threadsafe if any accesses are not checked and serialized, it will not be performant and will cause lots of contention that will cause lot of blocking and latency depending on the use. The proper way to implement a lazy loading cache is to use something like Guava Cache with a Cache Loader it takes care of all the concurrency and cache race conditions for you transparently. A cursory glance through the source code shows how they do it. No it's not correct because the first access is done out side of a sync block. It's somewhat down to how get and put might be implemented. You must bare in mind that they are not atomic operations. For example, what if they were implemented like this: public T get(string key){ Entry e = findEntry(key); return e.value; } public void put(string key, string value){ Entry e = addNewEntry(key); //danger for get while in-between these lines e.value = value; } private Entry addNewEntry(key){ Entry entry = new Entry(key, ""); //a new entry starts with empty string not null! addToBuckets(entry); //now it's findable by get return entry; } Now the get might not return null when the put operation is still in progress, and the whole getText method could return the wrong value. The example is a bit convoluted, but you can see that correct behaviour of your code relies on the inner workings of the map class. That's not good. And while you can look that code up, you cannot account for compiler, JIT and processor optimisations and inlining which effectively can change the order of operations just like the wacky but correct way I chose to write that map implementation. Consider use of a concurrent hashmap and the method Map.computeIfAbsent() which takes a function to call to compute a default value if key is absent from the map. Map<String, String> cache = new ConcurrentHashMap<>( ); cache.computeIfAbsent( "key", key -> "ComputedDefaultValue" ); Javadoc:. No, and ConcurrentHashMap would not help. Recap: the double check idiom is typically about assigning a new instance to a variable/field; it is broken because the compiler can reorder instructions, meaning the field can be assigned with a partially constructed object. For your setup, you have a distinct issue: the map.get() is not safe from the put() which may be occurring thus possibly rehashing the table. Using a Concurrent hash map fixes ONLY that but not the risk of a false positive (that you think the map has no entry but it is actually being made). The issue is not so much a partially constructed object but the duplication of work. As for the avoidable guava cacheloader: this is just a lazy-init callback that you give to the map so it can create the object if missing. This is essentially the same as putting all the 'if null' code inside the lock, which is certainly NOT going to be faster than good old direct synchronization. (The only times it makes sense to use a cacheloader is for pluggin-in a factory of such missing objects while you are passing the map to classes who don't know how to make missing objects and don't want to be told how).
http://www.devsplanet.com/question/35271835
CC-MAIN-2017-22
refinedweb
827
60.95
Your browser does not seem to support JavaScript. As a result, your viewing experience will be diminished, and you have been placed in read-only mode. Please download a browser that supports JavaScript, or enable it if it's disabled (i.e. NoScript). On 13/05/2013 at 06:13, xxxxxxxx wrote: If I try to load my plugin, I get the following error: ReferenceError: the object 'c4d.documents.BaseDocument' is not alive Here is my code: import c4d import os import sys from c4d import gui, plugins, bitmaps, documents, utils PLUGIN_ID = 1000002 # Test ID MY_BUTTON = 11005 global doc doc = c4d.documents.GetActiveDocument() def AddObject(doc) : NewObject = c4d.BaseObject(c4d.Ocube) NewObject[c4d.PRIM_CUBE_LEN,c4d.VECTOR_Y] = 100 NewObject.SetName('New Object') doc.InsertObject(NewObject) c4d.EventAdd() -> After that AddObject gets called in another function. Does anybody know what I am doing wrong? Greetings, Casimir Smets On 13/05/2013 at 07:00, xxxxxxxx wrote: don't define your document as a global variable. it will be run when the plugin is being loaded (at c4d start up). generally do not define anything as a global variable except from constants like plugin IDs or strings. either grab the document manually in the addobject method or pass it to the method. almost all overwriteable plugin methods provide the active document. either as a direct parameter or like the NodeData plugins by the basedocument attached to the passed node - GeListNode. GetDocument(). if you do really need a variable to be accessible for multiple methods make it a class member. On 13/05/2013 at 07:45, xxxxxxxx wrote: I've seen this so often. I'm really interested why people think that they need to grab the active document at the top level of the plugin instead of the place where they actually need it. Seems to be something everyone does when starting out. On 13/05/2013 at 08:09, xxxxxxxx wrote: Yes indeed, the noobs try it out And thanx for your answer littledevil! It helped me much.
https://plugincafe.maxon.net/topic/7159/8158_creating-an-c4dobject-with-python
CC-MAIN-2021-21
refinedweb
337
57.57
Statistics 0 質問 73 回答 ランク 377 of 264,940 評価 186 貢献 0 質問 73 回答 回答採用率 0.00% 獲得投票数 45 ランク of 115,960 Weird behaviour of py.importlib when using "InProcess" ExecutionMode It appears that "third_part_library" happens to use the Python multiprocessing package. In such case, the Python module creates ... 16日 前 | 0 How to execute Python script inside a Matlab '' for loop ''? Hopefully I understand the question correctly. I wonder you can create a Python function in a Python file mymodule.py to wrap y... 2年以上 前 | 1 Proper importing of MATLAB structures into Python Would you share more details about how the file "lib.mat" is created? I tried following steps: >>lib.>Str... 4年以上 前 | 0 I am confused on 'Slicing MATLAB arrays behaves differently from slicing a Python list. Slicing a MATLAB array returns a view instead of a shallow copy.', when I learned how to use Matlab.engine in python. This is indeed confusing. The change is made in place, that's why A[0] becomes [2,2]. To elaborate the behavior, here is how A... 5年弱 前 | 0 how to read 'categorical' values from 'mxArray'? You can’t create categorical arrays with mxArray API, as there is no categorical in the mxClassID: <... 約5年 前 | 0 MATLAB Engine API for Python: changing parameters of the running simulation This looks related how you run the simulation. Instead of running "sim", you may use set_parm to star the simulation like follo... 約5年 前 | 1 How can python reach the input of a Matlab GUI? Do you work in MATLAB environment or Python environment? In either case, you probably can log the input of a MATLAB GUI in the c... 5年以上 前 | 0 Running matlab python interface in Jupyter Notebook throws weird error Probably the version of Jupyter uses an older version of GCC. You may consider forcing Jupyter to use MATLAB version of GCC run... 5年以上 前 | 1 | 採用済み Unable to run EngineGUIDemo in Java Engine Adding "<matlabroot>/bin/win64" to the system environment variable PATH is required, otherwise, Java doesn't know where to find ... 5年以上 前 | 1 Problems with MATLAB Engine API for Java With a simple test "example.java" like this: import com.mathworks.engine.*; public class CNN { public static... 5年以上 前 | 0 Does Java Matlab Engine API have a Javadoc? The Java Doc needs to be generated from source code, sounds like an enhancement to TMW. 5年以上 前 | 0 Can I make a call to a third party Matlab function while using the Matlab Engine API along with Java? Did you add "apowb" to the MATLAB path? You can do that using addpath either in MATLAB or through feval in Java. The M file ne... 5年以上 前 | 0 Matlab Java Engine API error This looks like an Eclipse issue. The example code should just compile if you ignore the hints. 5年以上 前 | 0 Python3: Conflict between matlab.engine and opencv2 Regarding this workflow: >>> import matlab.engine >>> import cv2 It looks that there is a newer version of libstdc++.... 5年以上 前 | 0 | 採用済み How can I automatically put the password of my account when using a python engine? What password are you referring to in your question? Is it password used to launch MATLAB? In that case, do you need to type t... 5年以上 前 | 0 Call poly2trellis in Python You need to use mlArray in order to pass an array to MATLAB from Python. A Python list is converted into a MATLAB Cell array wh... 5年以上 前 | 0 | 採用済み Call poly2trellis in Python How about trying following: >>>eng.get_trellis(7.0) It looks that the poly2trellis expects the first argument to be a do... 5年以上 前 | 0 In the Python API, how can I redirect the output of an asynchronous function call of a function without return value? What about adding "ret.result()" before printing the output buffer like following? while not ret.done() : pass ret.result(... 5年以上 前 | 0 | 採用済み Python API: how to get the output? When "async" is set to true, the result along with the output/error are not available until "future.result" is called. How abou... 5年以上 前 | 0 | 採用済み Error Installing MATLAB engine API for Python On Windows, you may need to run the prompt as administrator in order to write to protected folders even if you have administrato... 5年以上 前 | 14 | 採用済み Return multiple variables from MATLAB to Python Apparently, "nargout" is a keyword argument defined for Python Engine. "dp.getTestingData(True, nargout=3)" does not work becau... 5年以上 前 | 0 | 採用済み Calling Script from python I can reproduce your error message with following steps: >>> import matlab.engine >>> eng=matlab.engine.start_matlab("-n... 5年以上 前 | 1 how do i access data from .db files in mysql lite 3 format available on my local storage ? MATLAB has Database Toolbox to query SQLite database: <> 5年以上 前 | 0 Is it possible to import and use third party matlab functions/packages in python? It looks that "api.Plex4.createExplicitSimplexStream()" is a Java API and it works when it is called by Python Engine as part of... 5年以上 前 | 0 Unable to use Python matlab.engine in a Docker container Does MATLAB itself run with the Docker container, for example: matlab -nodesktop -r 'disp hello; exit' 5年以上 前 | 0 What does MATLAB python engine fail to load from within Spyder or after certain other modules loaded? Looks like conflict of libraries, maybe running Python with verbose mode (python -vvv) can provide more details. 6年弱 前 | 0 Accessing matlab object properties from Python I can think of following three approaches: # eng.getfield(tr, 'Base'). This getfield function is designed for structure, and... 6年弱 前 | 1 | 採用済み problems with code call from python Would float work for you? import matlab.engine eng = matlab.engine.start_matlab() b = float(input("dimmi la base: "))... 6年弱 前 | 0 Cannot import matlab.engine without environment error [Ubuntu 14.04/Anaconda] It looks like a compatibility issue of ICU. You may use the verbose mode "python -vvv" to find more details about what is going... 6年弱 前 | 0 What is the "primary message table for module 77"? You may try to create a dummy Engine object using engOpen and keep it alive until the end of the application. Meanwhile, you ca... 6年弱 前 | 0 | 採用済み
https://jp.mathworks.com/matlabcentral/profile/authors/2963945?detail=all
CC-MAIN-2022-40
refinedweb
1,048
67.76
I'm trying to use the return function, I'm new to python but it is one of the things I don't seem to understand. In my assignment I have to put each task in a function to make it easier to read and understand but for example I create a randomly generated number in a function, I then need the same generated number in a different function and I believe the only way this can be done is by returning data. For example here I have a function generating a random number: def generate(): import random key = random.randint(22, 35) print(key) But if I need to use the variable 'key' again which holds the same random number in a different function, it won't work as it is not defined in the new function. def generate(): import random key = random.randint(22, 35) print(key) def number(): sum = key + 33 So how would I return data (if that is what you need to use) for it to work? The usage of return indicates to your method to 'return' something back to whatever called it. So, what you want to do for example in your method is simply add a return(key): # Keep your imports at the top of your script. Don't put them inside methods. import random def generate(): key = random.randint(22, 35) print(key) # return here return key When you call generate, do this: result_of_generate = generate() If you are looking to use it in your number method, you can actually simply do this: def number(): key = generate() sum = key + 33 And if you have to return the sum then, again, make use of that return in the method in similar nature to the generate method.
https://www.codesd.com/item/using-the-same-variables-in-different-functions-in-python.html
CC-MAIN-2019-22
refinedweb
292
62.82
When I set out to create xdg-app I had two goals: - Make it possible for 3rd parties to create and distribute applications that works on multiple distributions. - Run applications with as little access as possible to the host. (For example access to the network or the users files.). 8 thoughts on “xdg-app 0.5.0 released” Please if possible target xenial (dev) instead of vivid (EOL) for Ubuntu PPA. Thanks! Bryan Its very exciting news, I have been following xdg-app development for a while, and have a basic questions: could xdg-app be considered as a rival to docker or even replace it in future? Here we have same underlying technology, cgroup, namespaces, etc. Docker, rkt, lxc etc are here to 1) allow for portable packaging format for apps (one damn format to run everywhere) 2) sandbox apps and separate it from host. These are both goals of xdg-app! To my mind it is just ridiculous to have a format for desktop apps and another one with exact same underlying technology for server apps! And where is the boundary between server and desktop? What if I install MySQL/bind9 etc on my desktop? Also docker style of aggregating all app and dependencies and OS detail in one huge image is not exactly very smart! xdg-app style with runtimes is far smarter, that’s why I wish to see xdg-app on both desktop and server space! One app format for Linux. //mehdi Medhi: xdg-app and docker are quite different and do different things well. They share a lot of technologies, but can’t replace each other.
https://blogs.gnome.org/alexl/2016/03/17/xdg-app-0-5-0-released/
CC-MAIN-2019-47
refinedweb
272
72.97
Acrobat allows you to set up a button to display an image/graphic, and allows the buttonImportIcon JavaScript field method to allow a user to select the button icon. This used to work with Reader but was taken away for some reason. Forms create with LiveCycle Designer allow for an image field that can work with Reader, but Acrobat no longer does. There is no good reason not to allow for this type of functionality. I agree!! This feature is necessary and needed! Especially for us Mac users that cannot use LiveCycle! I agree as well. This is something that used to be available. It is a very basic necessary function for so many reasons. Please do consider (seriously) adding this feature back. It would be greatly appreciated. Thanks I fully agree, and I consider it an unfriendly act. I do not see any reason why LCD should allow it (for Reader), but Acrobat not. Well, there may be a reason, but that one would be really unplaesant… Max Wyss. Yes, please bring back the "insert Image" feature for Acrobat on Macs. Just give us LiveCycle, please. Thank you Be careful with what you wish for… (LCD on Mac) Max Wyss. Please add this feature. This seems like such basic feature that by not including this capability, Adobe comes off appearing quite cynical in their product offerings. yes, this functionality is essential for creating forms that collect information from participants/applicants that I need. Right now I'm stuck with word to do this and it isn't pretty. I would love to have this feature as well. Why should it limited to Windows only? I agree with the comments made here, just wanted to point out that Mac users can use forms that were created in LC, just not create them. I cannot imagine why this feature was taken away. Please bring it back. It would make users feel a little bit better. Like me. PLEASE make this feature available in Acrobat. The hassle of using LCD just to allow users to upload images is a big pain when creating/editing forms due to the fact that you lose the ability to add/remove pages, edit content, etc. in Acrobat. Thanks for such a quick responce. I am shocked that something seemingly so simple is not available. They need to make that feature available again! Why would they take it away in the first place?! I agree, this needs to be available. I am about to cal my client with the dissapointing news that the job he asked me to do is not possible. Amanda, I figured out a way to do it in a round about way. Below are the instructions I sent to my client. In open document, make sure you are in edit mode. On the right side – Forms – Edit. Tasks – Add New Field – Button Your cursor turns into a slim rectangular box. Draw a square with in the image box I have created. You can name the button what you like(it doesn’t really matter), I have been calling it Submitted Image Click on All Properties General Tab - make sure the form field is visible. This should be the default. Appearance Tab – Make the borders and colors, both no color. Options Tab – Layout: Icon Only. Advanced – When to scale: Always. Scale:Proportionally. Click OK Options Tab – Choose Icon – Browse – Find the Picture you are using (IMAGE NEEDS TO BE SAVED AS A PDF). Click OK Close Button Properties, repeat for second image. When done placing images, Tasks – Close Form Editing Save ashrcb81, The goal here is to have an image field that will work with Reader. The method you described requires Acrobat. Thanks for the instructions! I have created a template that I want to be able to send out to multiple clients. The idea is that they can place their logo and contact information. I have the text fields all figured out. I was able to follow your instructions and place an image in Pro, but will my clients using only Reader? Thanks again for your help!! a Amanda Zylstra [signature deleted by host] Hi Amanda, Unfortunately you cannot place images like that in Reader. I meant to mention that in my post. Luckily my client was willing to purchase Acrobat. It is really unfortunate that Reader has such limitations, especially since it used to work just fine. Sorry for any confusion. Amanda and others I have only figured out one way to add graphic to a pdf in reader. I created a tutorial How to add custom graphics to a PDF It would be nice if placing a graphic was much easier. Wow, that looks like it'll do it. I can't try it as I have Acrobat Pro. Thanks! Yes, please add this feature. We have a client that would like to create controlled, co-branded documents with their partners. We would like to define regions in which partners can upload their own logos without requiring partners to buy software or need advanced expertise. Crazy that we pay the same for Acrobat X Pro but don't get Live Cycle. How many designers have had to find a clumsy work around for this missing function? Even crazier that this function WAS possible in Acrobat but now is not! +1 for adding the ability to insert an image into a form. This is a serious omission from what are supposed to be fully functioning "interactive forms". InDesign CS6 was supposed to have greatly improved interactive form creation tools, but the truth is something different. LiveCycle is a mediocre solution because it strips out any custom formatting that is set up in InDesign or Acrobat, especially all the new custom styles for radio buttons and check boxes that InDesign CS6 has in it. AcrobatX only comes with LiveCycle ES2 (9) and not ES3 (10). It would be nice to be able to try ES3 to see if any of these issues have been resolved, but the LiveCycle Designer ES3 demo installer doesn't work because it asks for a serial number and this can't be bypassed. Needless to say I'm frustrated and annoyed by all this. I have a client that really needs to be able to insert multiple JPG images into a form and we can't find a satisfactory solution. Agree totally! A way to allow the user to insert an image into a fillable pdf form is greatly needed! I love the features of the pdf fillable forms, and how user friendly they are. Been pushing my company to get all our forms converted to pdf forms- some of us have Acrobat X pro, and others in the company only have reader. Hope they come up with a solution soon!! Everyone will be pleased to hear - buttonImportIcon now works in Adobe Reader XI. Yes indeed! I'd like to thank everyone who requested that this feature be restored, as that's exactly what happened. As before, what it does now with Reader is allow the user to select a page from a PDF as the source for a button icon. While it would have been ideal to also allow you to select from among the common image types as with an XFA image field (JPEG, PNG, TIFF, GIF), this is not the case at this time. But unlike back when it was removed with Reader 6, there are many easy ways for users to convert images to PDF, including Preview (or anything else) on the Mac, Word, Open Office, and any number of other readily available and free tools. It's even possible to create an XFA form that contains an image field, and after it's populated with an image, use that PDF for the source of the button icon. (There's something wrong about that, but it works.) Since a PDF page can be so much more than a simple image (i.e., vector graphics, text, multiple images), it is actually considerably more flexible than a simple image field. I have hope that the common image formats will be supported in a future version/update, but that's a new feature request. I'm working on an article/tutorial that will present more information and I'll update this thread when it's ready. Thanks again! I can't say I'm all that thrilled about this. I don't expect the end users of my forms to first convert images to a PDF before insertion. My biggest example is the aforementioned industrial service report for one client that can have up to 60 survey images in one document. I need them to be able to upload 4:3 or 3:2 images from their cameras into the document along with 18 pages worth of other info, which is diagrams to draw on, text fields, checklists, and that sort of thing. I've suggested we do this other ways, namely through a web based system, but they love the PDF doc I made for them. They don't have to worry about how it got to be what it is, and they feel it's quite user friendly. Unfortunately (or is that fortunately for me?) every time they make any significant changes to the layout I need to go all the way back to InDesign and then through the Acrobat/LiveCycle process to get all the styles, image functionality, tab orders, etc. sorted out. Although I've made some improvements to my workflow as the project has been evolving, it's a rather time consuming process. Yes, it doesn't sound like a great improvement if the only thing you can import is PDF files, which you can't create unless you have Acrobat (or some other 3rd party application), in which case, you don't really need this feature in the first place. Well, I guess something is better than nothing... It's much better than nothing. The workaround of using an XFA form with one or more image fields to use as the source for the button icons is probably the simplest approach, especially now that Reader is able to save a non-enabled document. Only now you'll have to purchase LC as a separate product, if I understood correctly... And you still have the issue of it not being available for Mac. It's true that Designer isn't included with Acrobat Pro 11. It's been available as a separate purchase for a while now and is only $29. But for those who won't have access, I'll be happy to post a form that can be used. Brad - just get those folks to use Acrobat and you'll be all set. They are expecting too much from Reader! Don't forget to mention, that XFA-forms still have to be Reader-enabled to be savable with Reader. George Johnson wrote: It's much better than nothing. The workaround of using an XFA form with one or more image fields to use as the source for the button icons is probably the simplest approach, especially now that Reader is able to save a non-enabled document. Yes, thanks for the reminder. Yes. Please stop taking functionality AWAY. It's lazy, and just wrong. George, Were you able to put a tutorial together? I also have a design client that has generic forms and would like their clients to be able to insert their logo in the top corner to personalize the forms for their businesses. If this is possible, what are the steps? Thanks in advance! I will add my voice to the people who want to be able to insert a photo into a form. I assumed it was a given when I was building my application, sure was a surprise when it wasn't there.... You have two ways to accomplish this. 1 – Use an XFA-based form and Reader-enable it. 2 – Convert the photo to a PDF and have users use Reader XI. The issue is that many of us would like to use the cameras built into our phones and tablets while were are filling in forms. Not sure how many phones will save a picture as PDF or link automatically to something asking for a PDF.. The buttonImportIcon method is supported by Readdle's PDF Expert, which is available for iOS devices. It can prompt you to take a picture or you can use an image from any available photo library.
http://forums.adobe.com/message/4285981
CC-MAIN-2014-15
refinedweb
2,088
72.97
SYNC(2) OpenBSD Programmer's Manual FSYNC(2) NAME fsync - synchronize a file's in-core state with that on disk SYNOPSIS #include <unistd.h> int fsync(int fd);. RETURN VALUES A 0 value is returned on success. A -1 value indicates an error. ERRORS The fsync() fails if: [EBADF] fd is not a valid descriptor. [EINVAL] fd refers to a socket, not to a file. [EIO] An I/O error occurred while reading from or writing to the file system. SEE ALSO sync(2), sync(8), update(8) HISTORY The fsync() function call appeared in 4.2BSD. OpenBSD 2.6 June 4, 1993 1
http://www.rocketaware.com/man/man2/fsync.2.htm
crawl-002
refinedweb
105
78.04
23 August 2019 0 comments Python TextBlob is a wonderful Python library it. It wraps nltk with a really pleasant API. Out of the box, you get a spell-corrector. From the tutorial: >>> from textblob import TextBlob >>> b = TextBlob("I havv goood speling!") >>> str(b.correct()) 'I have good spelling!' The way it works is that, shipped with the library, is this text file: en-spelling.txt It's about 30,000 lines long and looks like this: ;;; Based on several public domain books from Project Gutenberg ;;; and frequency lists from Wiktionary and the British National Corpus. ;;; ;;; a 21155 aah 1 aaron 5 ab 2 aback 3 abacus 1 abandon 32 abandoned 72 abandoning 27 That gave me an idea! How about I use the TextBlob API but bring my own text as the training model. It doesn't have to be all that complicated. (Note: All the code I used for this demo is available here: github.com/peterbe/spellthese) I found this site that lists "Top 1,000 Baby Boy Names". From that list, randomly pick a couple of out and mess with their spelling. Like, remove letters, add letters, and swap letters. So, 5 random names now look like this: ▶ python challenge.py RIGHT: jameson TYPOED: jamesone RIGHT: abel TYPOED: aabel RIGHT: wesley TYPOED: welsey RIGHT: thomas TYPOED: thhomas RIGHT: bryson TYPOED: brysn Imagine some application, where fat-fingered users typo those names on the right-hand side, and your job is to map that back to the correct spelling. First, let's use the built in TextBlob.correct. A bit simplified but it looks like this: from textblob import TextBlob correct, typo = get_random_name() b = TextBlob(typo) result = str(b.correct()) right = correct == result ... And the results: ▶ python test.py ORIGIN TYPO RESULT WORKED? jesus jess less Fail austin ausin austin Yes! julian juluian julian Yes! carter crarter charter Fail emmett emett met Fail daniel daiel daniel Yes! luca lua la Fail anthony anthonyh anthony Yes! damian daiman cabman Fail kevin keevin keeping Fail Right 40.0% of the time Buuh! Not very impressive. So what went wrong there? Well, the word met is much more common than emmett and the same goes for words like less, charter, keeping etc. You know, because English. The solution is actually really simple. You just crack open the classes out of textblob like this: from textblob import TextBlob from textblob.en import Spelling path = "spelling-model.txt" spelling = Spelling(path=path) # Here, 'names' is a list of all the 1,000 correctly spelled names. # e.g. ['Liam', 'Noah', 'William', 'James', ... spelling.train(" ".join(names), path) Now, instead of corrected = str(TextBlob(typo).correct()) we do result = spelling.suggest(typo)[0][0] as demonstrated here: correct, typo = get_random_name() b = spelling.suggest(typo) result = b[0][0] right = correct == result ... So, let's compare the two "side by side" and see how this works out. Here's the output of running with 20 randomly selected names: ▶ python test.py UNTRAINED... ORIGIN TYPO RESULT WORKED? juan jaun juan Yes! ethan etha the Fail bryson brysn bryan Fail hudson hudsn hudson Yes! oliver roliver oliver Yes! ryan rnyan ran Fail cameron caeron carron Fail christopher hristopher christopher Yes! elias leias elias Yes! xavier xvaier xvaier Fail justin justi just Fail leo lo lo Fail adrian adian adrian Yes! jonah ojnah noah Fail calvin cavlin calvin Yes! jose joe joe Fail carter arter after Fail braxton brxton brixton Fail owen wen wen Fail thomas thoms thomas Yes! Right 40.0% of the time TRAINED... ORIGIN TYPO RESULT WORKED? landon landlon landon Yes sebastian sebstian sebastian Yes evan ean ian Fail isaac isaca isaac Yes matthew matthtew matthew Yes waylon ywaylon waylon Yes sebastian sebastina sebastian Yes adrian darian damian Fail david dvaid david Yes calvin calivn calvin Yes jose ojse jose Yes carlos arlos carlos Yes wyatt wyatta wyatt Yes joshua jsohua joshua Yes anthony antohny anthony Yes christian chrisian christian Yes tristan tristain tristan Yes theodore therodore theodore Yes christopher christophr christopher Yes joshua oshua joshua Yes Right 90.0% of the time See, with very little effort you can got from 40% correct to 90% correct. Note, that the output of something like spelling.suggest('darian') is actually a list like this: [('damian', 0.5), ('adrian', 0.5)] and you can use that in your application. For example: <li><a href="?name=damian">Did you mean <b>damian</b></a></li> <li><a href="?name=adrian">Did you mean <b>adrian</b></a></li> Ultimately, what TextBlob does is a re-implementation of Peter Norvig's original implementation from 2007. I too, have written my own implementation in 2007. Depending on your needs, you can just figure out the licensing of that source code and lift it out and implement in your custom ways. But TextBlob wraps it up nicely for you. When you use the textblob.en.Spelling class you have some choices. First, like I did in my demo: path = "spelling-model.txt" spelling = Spelling(path=path) spelling.train(my_space_separated_text_blob, path) What that does is creating a file spelling-model.txt that wasn't there before. It looks like this (in my demo): ▶ head spelling-model.txt aaron 1 abel 1 adam 1 adrian 1 aiden 1 alexander 1 andrew 1 angel 1 anthony 1 asher 1 The number (on the right) there is the "frequency" of the word. But what if you have a "scoring" number of your own. Perhaps, in your application you just know that adrian is more right than damian. Then, you can make your own file: Suppose the text file ("spelling-model-weighted.txt") contains lines like this: ... adrian 8 damian 3 ... Now, the output becomes: >>> import os >>> from textblob.en import Spelling >>> import os >>>>> assert os.path.isfile(path) >>> spelling = Spelling(path=path) >>> spelling.suggest('darian') [('adrian', 0.7272727272727273), ('damian', 0.2727272727272727)] Based on the weighting, these numbers add up. I.e. 3 / (3 + 8) == 0.2727272727272727 I hope it inspires you to write your own spelling application using TextBlob. For example, you can feed it the names of your products on an e-commerce site. The .txt file might bloat if you have too much but note that the 30K lines en-spelling.txt is only 314KB and it loads in...: >>> from textblob import TextBlob >>> from time import perf_counter >>> b = TextBlob("I havv goood speling!") >>> t0 = perf_counter(); right = b.correct() ; t1 = perf_counter() >>> t1 - t0 0.07055813199999861 ...70ms for 30,000 words. Follow @peterbe on Twitter
https://www-origin.peterbe.com/plog/train-your-own-spell-corrector-with-textblob
CC-MAIN-2021-04
refinedweb
1,081
77.84
QAbstractXmlReceiver Class The QAbstractXmlReceiver class provides a callback interface for transforming the output of a QXmlQuery. More... Note: All functions in this class are reentrant. Public Functions Detailed Description. XQuery Sequences An XQuery sequence is an ordered collection of zero, one, or many items. Each item is either an atomic value or a node. An atomic value is a simple data value. There are six kinds of nodes. - An Element Node represents an XML element. - An Attribute Node represents an XML attribute. - A Document Node represents an entire XML document. - A Text Node represents character data (element content). - A Processing Instruction Node represents an XML processing instruction, which is used in an XML document to tell the application reading the document to perform some action. A typical example is to use a processing instruction to tell the application to use a particular XSLT stylesheet to display the document. - And a Comment node represents an XML comment. The sequence of nodes and atomic values obeys the following rules. Note that Namespace Node refers to a special Attribute Node with name xmlns. - Each node appears in the sequence before its children and their descendants appear. - A node's descendants appear in the sequence before any of its siblings appear. - A Document Node represents an entire document. Zero or more Document Nodes can appear in a sequence, but they can only be top level items (i.e., a Document Node can't be a child of another node. - Namespace Nodes immediately follow the Element Node with which they are associated. - Attribute Nodes immediately follow the Namespace Nodes of the element with which they are associated, or... - If there are no Namespace Nodes following an element, then the Attribute Nodes immediately follow the element. - An atomic value can only appear as a top level item, i.e., it can't appear as a child of a node. - Processing Instruction Nodes do not have children, and their parent is either a Document Node or an Element Node. - Comment Nodes do not have children, and their parent is either a Document Node or an Element Node. The sequence of nodes and atomic values is sent to an QAbstractXmlReceiver (QXmlSerializer in the example above) as a sequence of calls to the receiver's callback functions. The mapping of callback functions to sequence items is as follows. - startDocument() and endDocument() are called for each Document Node in the sequence. endDocument() is not called until all the Document Node's children have appeared in the sequence. - startElement() and endElement() are called for each Element Node. endElement() is not called until all the Element Node's children have appeared in the sequence. - attribute() is called for each Attribute Node. - comment() is called for each Comment Node. - characters() is called for each Text Node. - processingInstruction() is called for each Processing Instruction Node. - namespaceBinding() is called for each Namespace Node. - atomicValue() is called for each atomic value. For a complete explanation of XQuery sequences, visit XQuery Data Model. See also W3C XQuery 1.0 and XPath 2.0 Data Model (XDM), QXmlSerializer, and QXmlResultItems. Member Function Documentation QAbstractXmlReceiver::QAbstractXmlReceiver() Constructs an abstract xml receiver. [virtual] QAbstractXmlReceiver::~QAbstractXmlReceiver() Destroys the xml receiver. [pure virtual] void QAbstractXmlReceiver::atomicValue(const QVariant &value) This callback is called when an atomic value appears in the sequence. The value is a simple data value. It is guaranteed to be valid. [pure virtual] void QAbstractXmlReceiver::attribute(const QXmlName &name, const QStringRef &value) This callback is called when an attribute node appears in the sequence. name is the attribute name and the value string contains the attribute value. [pure virtual] void QAbstractXmlReceiver::characters(const QStringRef &value) This callback is called when a text node appears in the sequence. The value contains the text. Adjacent text nodes may not occur in the sequence, i.e., this callback must not be called twice in a row. [pure virtual] void QAbstractXmlReceiver::comment(const QString &value) This callback is called when a comment node appears in the sequence. The value is the comment text, which must not contain the string "--". [pure virtual] void QAbstractXmlReceiver::endDocument() This callback is called when the end of a document node appears in the sequence. [pure virtual] void QAbstractXmlReceiver::endElement() This callback is called when the end of an element node appears in the sequence. [pure virtual] void QAbstractXmlReceiver::endOfSequence() This callback is called once only, right after the sequence ends. [pure virtual] void QAbstractXmlReceiver::namespaceBinding(const QXmlName &name). [pure virtual] void QAbstractXmlReceiver::processingInstruction(const QXmlName &target, const QString . [pure virtual] void QAbstractXmlReceiver::startDocument() This callback is called when a document node appears in the sequence. [pure virtual] void QAbstractXmlReceiver::startElement(const QXmlName &name) This callback is called when a new element node appears in the sequence. name is the valid name of the node element. [pure virtual] void QAbstractXmlReceiver::startOfSequence() This callback is called once only, right before the sequence.
http://doc-snapshots.qt.io/qt5-5.10/qabstractxmlreceiver.html
CC-MAIN-2018-09
refinedweb
812
51.04
Strings Since strings are lists of characters, you can use any available list function. 3.1 Combining strings 3.2 Accessing substrings 3.3 Splitting strings 3.4 Multiline strings "foo\ \bar" --> "foobar" 3.5 Converting between characters and values 3.6 Reversing a string by words or characters 3.7 Converting case 3.8 Interpolation TODO 3.9 Performance For high performance requirements (where you would typically consider C), consider using Data.ByteString. 3.10 Unicode TODO 4 NumbersNumbers in Haskell can be of the type 4.1 Rounding numbers 4.2 Taking logarithms log 2.718281828459045 --> 1.0 logBase 10 10000 --> 4.0 4.3 Generating random numbers import System.Random main = do gen <- getStdGen let ns = randoms gen :: [Int] print $ take 10 ns 4 4.5 Using complex numbers 5 Dates and time 5.1 Finding today's date import Data.Time c <- getCurrentTime --> 2009-04-21 14:25:29.5585588 UTC (y,m,d) = toGregorian $ utctDay c --> (2009,4,21) 5.2 Adding to or subtracting from a date 5.3 Difference of two dates 5). 6.1 Infinite lists Prelude> [1..] The list of all squares: square x = x*x squares = map square [1..] Prelude> take 10 squares [1,4,9,16,25,36,49,64,81,100] ] 7.2 Set TODO 7.3 Tree TODO 7.4 ByteString TODO 8. 9 Interactivity 9.1 Reading a string Strings can be read as input using getLine. Prelude> getLine Foo bar baz "Foo bar baz" 9" 9.3 Parsing command line arguments TODO 10 Files 10" 10.3 Creating a temporary file TODO 10] 10.5 Logging to a file TODO]. 12.2 Parsing XML TODO 13 Databases access There are two packages you can use to connect to MySQL, PostgreSQL, Sqlite3 and ODBC databases: HDBC and Hsql 13.1 MySQL TODO 13.2 PostgreSQL TODO 13] 14 Graphical user interfaces". 14.4 SDL There are some Haskell bindings to SDL at Hackage. 15 PDF files For the following recipes you need to install HPDF. 15.1 Creating an empty PDF file The following code creates an empty PDF file with the name "test1.pdf": import Graphics.PDF main :: IO () main = do let output = "test1.pdf" let rect = PDFRect 0 0 200 300 runPdf output standardDocInfo rect $ do addPage Nothing 15.2 Pages with different sizes If you pass "Nothing" to the function addPage, the document size will be used for the size of the new page. Let’s create three pages, the last two pages with different dimensions: import Graphics.PDF main :: IO () main = do let output = "test2.pdf" let rect = PDFRect 0 0 200 300 runPdf output standardDocInfo rect $ do addPage Nothing addPage $ Just $ PDFRect 0 0 100 100 addPage $ Just $ PDFRect 0 0 150 150 16 FFI 16.1 How to interface with C Magnus has written a nice example on how to call a C function operating on a user defined type. 17 Testing 17.1 QuickCheck TODO 17.2 HUnit TODO
https://wiki.haskell.org/index.php?title=Cookbook&oldid=27859
CC-MAIN-2015-32
refinedweb
500
71.31
ifneq ($(WITH_REVISION),)REV is meant to be sources from shell svnversion. ifeq ($(shell expr $(WITH_REVISION) \>= 1), 1) ifeq ($(shell expr $(WITH_REVISION) \>= 2), 1) REV = $(WITH_REVISION) else REV = $(shell svnversion) endif ifneq ($(REV),) CFLAGS += -DREVISION="$(REV)" endif endif endif problem is that if one runs a server using the nightly build then it shows up as revision "0", in other words no revision number. This makes it possible to join with any nightly version that uses the same pakset version (pakset hashes will still stop one from joining). This is a problem that is not unique to River, as I have noticed several such servers since the move to the new build server. The revision number is however not shown when looking at the file details in Explorer. Does anyone look there at all?
https://forum.simutrans.com/index.php?amp;action=printpage;topic=15845.0
CC-MAIN-2020-10
refinedweb
133
57.81
Post your Comment SQL get Column Name SQL get Column Name SQL get Column Name is used to return the Fieldnames of the table... get Column Name'. To understand and elaborate example we create a table 'stu Changing column name Changing column name  ... the name of the column. As this is not the work of the programmer to change... the name of the column. The name of the column of the column will be changed Changing the Name of Column in a JTable . After this you will get the changes column name [Name - Stu_name, Subject - Paper... Changing the Name of Column in a JTable  ... the name of column in JTable component. You have learnt the JTable containing  Jdbc Get Column Names Jdbc Get Column Names The JDBC Get Column Names return you the property of the retrieved Column like its field name and data type using meta Data. Understand with Example how to get values for same column name from two different tables in SQL how to get values for same column name from two different tables in SQL how to get values for same column name from two different tables in SQL???? column name is emp_id loacated in these two tables company,employee Altering a Column name in a table Altering a Column name in a table how to alter column name in MSSQL server 2005 the codesample given using "change" keyword alter table tablename change oldcolumnname to newcolumnname is not working SQL Alter Column Name SQL Alter Column Name Alter Column Name in SQL is used to change or modify the name... a simple example on SQL Alter Column Name. In this Example, create How to copy existing column along with data and column name into another existing table How to copy existing column along with data and column name into another... and column name also into another existing table... For ex; TableA : Address email... way to copy both column name and data how to Change column name and Make a unique column. how to Change column name and Make a unique column. how to Change column name and Make a unique column. Hi, Following queries... table [table name] change [old column name] [new column name] varchar (50 JDBC: Get Column Details Example [] args) { System.out.println("Get column name and type of a table in JDBC... = statement.executeQuery(sql); System.out.println("Column Name\tColumn Type...) { e.printStackTrace(); } } } Output : Get column name and type SQL Alter Column Name SQL Alter Column Name Alter Column Name in SQL is used to change or modify the name of column... example on SQL Alter Column Name. In this Example, create a table 'Stu_Table Get Column names using Metadata in jsp Get Column names using Metadata in jsp This section illustrates you how to get column names...=con.createStatement() to get the column names. 5) It will return runtime error:Invalid column name ; } } Console Output: Getting Results! java.sql.SQLException: Invalid column name...runtime error:Invalid column name Hello, Can anyone please help me on this query? Programatically it is showing run time error. But in sql query Change Column Name of a Table Change Column Name of a Table  ... for renaming a column name in a database table. As we know that each table keeps contents in rows and column format. While making a table we specify the name Adding a New Column Name in Database Table Adding a New Column Name in Database Table  ... that we have created a table and forgets to add some important column name..., it takes table name, column name and it's data type and at last add a new column Java program to get column names of a table Java program to get column names of a table... of getColumnCount() methods. Now we can get the column names with the index...] Column=id Name of [2] Column=title Name of [3] Column=url Change Column Name in MySQL .style1 { color: #0000FF; } Change Column Name in MySQL In this example How to change column name in MySQL. First of all we have created MySQL... Table name and change. The current column is named old_col but if you want Java file get name Java file get name In this section, you will learn how to get the name... get the name of any file. Output: File name is: out.txt... the name of the file. Here is the code: import java.io.*; public class JavaScript Hide Table Column that column will get disappear: Download Source Code:  ... JavaScript Hide Table Column...; In this section, we are going to hide a table column using JavaScript Display the column name using DBCP ;Columns Name: ");  ...; System.out.println(col_name);   CORE JAVA get middle name CORE JAVA get middle name hello sir...how to get middle name using string tokenizer....??? eg..like name ANKIT it will select only K...!!!! The given code accepts the name from the console and find the middle Get Property by Name to get Property by Name. For this we have a class name "Get Property... .style1 { margin-right: 75px; } Get Property by Name   how to get java path name Java program to get data type of column field Java program to get data type of column field  ... Name of [1] Column data type is =DOUBLE Name of [2] Column data type is =VARCHAR Name of [3] Column data type is =VARCHAR Name of [4 Get computer name in java Get computer name in java We can get the computer name by the java code program. For getting computer name we have used java.net.InetAddress class. We will use static Shorting Table View By Column Name Shorting Table View By Column Name This tutorial explains how to shorting table view by column name from the database in JSP and Servlet. This example... shorting table view by column name. The code of "userdetails.jsp Get Month Name Example Get Month Name Example  ... in understanding a how to get a 'Get Month Name Example'.For this we have a class name 'GetMonthNameExample'. Inside the main method we declared a String array name Rename column name of table using DBCP how to get multiple hyperlink values from a table column to another jsp file? how to get multiple hyperlink values from a table column to another jsp file... file named "dbtable" will get the parameter from "index" and search..., itemname, and description. now my itemid column is all in hyperlinks, so Column select Column select How i fetch Experience wise resume? Create a column experience which consist of only two values either yes...;td>Name</td><td><input type="text" name="name" value="< Post your Comment
http://www.roseindia.net/discussion/23575-SQL-get-Column-Name.html
CC-MAIN-2014-10
refinedweb
1,108
65.12
Affiliate Disclosure: By buying the products we recommend, you help keep the lights on at MakeUseOf. Read more. Learning to program is a cumulative experience. Alongside learning the syntax of your chosen language, you must also learn the general principles that all programming languages use. Understanding and learning C programming can be daunting, but there are a few basic ideas worth familiarizing yourself with when starting. A simple project is a great way to learn the fundamentals of C. So where should you start? By saying hello! 1. Hello, World! The first part of almost every coding course is the hello world program. Going over it in detail highlights some of the ways C differs from other languages. To begin with, open up a text editor or IDE Text Editors vs. IDEs: Which One Is Better For Programmers? Text Editors vs. IDEs: Which One Is Better For Programmers? Choosing between an advanced IDE and a simpler text editor can be hard. We offer some insight to help you make that decision. Read More of your choice, and enter this code: #include <stdio.h> /* this is a Hello World script in C */ int main(void) { printf("Hello, World! \n"); return 0; } This short piece of code prints to the console before ending the program. Save it somewhere easy to remember as hello.c. Now you need to compile and build your file. Making It Run Usually, you won’t need to install any additional software on your computer to run C scripts. Open up a terminal window (or command prompt if you are running Windows) and navigate to the directory you saved your script in. The way you compile and run your file varies from system to system: - Windows Users: Make your file executable by typing cl hello.c and pressing enter. This will create hello.exe in the same folder, which you can run by typing hello. - Linux and macOS users: Type gcc -o hello hello.c and press enter to make it executable, and run it by typing ./hello. Whichever method you use, running your script should show you this: If it didn’t work on Windows, make sure you run the command prompt in administrator mode. For macOS, you may need to install Xcode from the App store and follow these steps from StackOverflow. Now, let’s look at the program line by line to see how it works, and improve it! Under the Hood: Understanding the C Language Preprocessors The script you just created starts with the inclusion of a library. #include <stdio.h> The first line in the script is called a preprocessor. This is carried out before the rest of the script is compiled. In this case, it tells the script to use the stdio.h library. There are a huge number of preprocessors available for different tasks. Stdio.h takes care of getting input from the program’s user, and outputting information back to them. /* this is a Hello World script in C */ This next line is a comment. The slash and star tell the compiler to ignore everything between it and the closing star and slash. While this may seem pointless, being able to leave yourself and others clear notes about what your code does is an essential habit to get into. The Main Function int main(void) Every C program must have a main function. Main is a function which returns an integer, denoted by int. The brackets after main are for its arguments, though in this case, it takes none, which is why you use the void keyword. You write the code to be carried out between two curly braces. { printf("Hello, World! \n"); return 0; } Inside the function, you call the printf() function. Just like main(), printf is a function. The difference is, printf is a function in the stdio library you included at the start. Printf prints anything in the brackets, between the quotation marks, to the console. The \n is an escape sequence called newline, telling the compiler to skip to the next line in the console before continuing. Note that these lines end in semicolons, which the compiler uses to split one task from the next. Pay close attention to these semicolons—missing them out is the number one cause of things not going right! Finally, the function returns with the number 0, ending the program. The main() function must always return an integer, and return = 0; signals to the computer that the process was successful. Understanding each step of this script is a great start in learning both C syntax, and how the language works. 2. Creating Your Own C Functions You can create your own custom functions in C. Instead of printing Hello World in the main function, create a new function to do it for you. void print_for_me() { printf("Hello, World! \n"); } Let’s break this down. void is a keyword meaning the following function will not return anything. print_for_me() is the name of the function, and the empty brackets show it does not require arguments to work. An argument is any piece of information to pass on to a function to make it work—later you will be adding an argument of your own to change the output! Note: This is not the same as the main() function above which used void. That function cannot take arguments, while this one can (but in this case, doesn’t have to). The code block should be familiar to you—it’s just the print statement from the original main function. Now, you can call this function from your main function. int main(void) { print_for_me(); print_for_me(); return 0; } You can see here a benefit of using your own function. Rather than typing printf(“Hello, World! \n”) each time, you can call the function twice. Right now this might not seem so important, but if your print_for_me function contained a lot of lines of code, being able to call it so easily is a great time saver! This is a fundamental idea of programming you will come across throughout your education. Write your own function once, rather than write the same big chunks of code over and over. 3. Using Function Prototypes in C Prototypes are one of the major ways beginner C differs from other languages. In short, a prototype is a like a preview of a function defined later. If you write the print_for_me() function after the main function, you may get a warning when compiling: The warning message is telling you that the compiler ran into the print_for_me function before it was declared, so it couldn’t be sure that it would work correctly when the program runs. The code would still work, but the warning can be avoided altogether by using a prototype. #include <stdio.h> void print_for_me(); int main(void) { print_for_me(); print_for_me(); return 0; } void print_for_me() { printf("Hello, World! \n"); } By looking at the full program you can see the prototype for print_for_me() exists at the start of the program, but contains nothing. The prototype function shows the compiler how the function should look, and whether it requires arguments or not. This means that when you call it in the main function, the compiler knows if it is being called correctly and can throw a warning or error if it is needed. This may be something that seems strange now, but knowing about them now will help in future. This program still works without a prototype, but they are good practice to use. The output still looks the same for now, lets change it to make it more personal! 4. Passing Arguments to C Functions Changing the Script For this final step, you will ask for the user’s name, and record their input. Then you’ll use it in the function you created before. In C, words are not known as strings like in other programming languages. Instead, they are an array of single characters. The symbol for an array is [] and the keyword is char. Begin by updating your prototype function at the start of your script: #include <stdio.h> void print_for_me(char name[]); Now, the compiler will know that the function later in the script takes an array of characters called name. So far, this character doesn’t exist. Update your main function to create it, and use it to store the user input: int main(void) { char name[20]; printf("Enter name: "); scanf("%s", name); print_for_me(name); print_for_me("Everyone!"); return 0; } The first line in main creates a character array with 20 possible spaces called name. Next, the user is prompted to enter their name using printf. The next line uses a new function called scanf which takes the next word the user types. The “%s” tells the function that it should store the data as a string, and call it name. Modifying the Function Now when you call print_for_me, you can include name in the brackets. On the next line, you will see you can also pass other characters as long as they are between quotation marks. Both times, what is in the brackets gets passed to the print_for_me function. Modify that now to use the new information you are giving it: void print_for_me(char name[]) { printf("Hello, "); puts(name); } Here you can see that the brackets have been updated just like the prototype at the start of the script. Inside, you still print hello using printf. A new function here is puts. This is a more advanced version of printf. Anything put in the brackets will be printed to the console, and a newline (the \n you used earlier) gets added automatically. Save and compile your code the same way you did earlier—note that you can name the program something different if you do not want to overwrite your earlier program. I named mine hello2: As you should see, the program takes the input and uses it in the function, before sending the preset greeting of “Everyone!” again giving two separate outputs to the same function. The ABCs of C Programming This program is simple, but some of the concepts in it are not. More advanced C code must be written very well to prevent crashes. This is why many think it is an excellent language to learn first as it instills good habits into new programmers. Others think learning C++ is a better idea, as it builds on C while retaining its lower system control. (There’s also Rust to consider—it’s an exciting programming language that’s syntactically similar to C++.) One thing is sure: languages like Python are much more beginner friendly. For an old language, C is still used everywhere, but Python may be the language of the future 6 Reasons Why Python Is the Programming Language of the Future 6 Reasons Why Python Is the Programming Language of the Future Want to learn or expand your programming skills? Here's why Python is the best programming language to learn this year. Read More ! Explore more about: C, Coding Tutorials, Programming. You state the main function cannot take arguments, which is false. You can have it accept command-line arguments. int main(int argc, char **argv[]). Also, this article is clickbait. This isn't a project. It's hello world. dressed up as something more. There are no scripts in C. There are programs kiddo. You are a script kid. Don't insult C by such attributes. "The main() function must always return an integer, and return = 0; signals to the computer that the process was successful." This is not necessarily true. Some implementations such as Visual Studio 2008 and later allow "void main()" or void main(int argc, char *argv[], char *envp[])" variants as well. Other compilers may issue warnings or errors, but some will not. In the case of the MS compiler, reaching the return statement (or the end of the main() code block) simply returns nothing to the OS. If an exit code is required, a void main() can be terminated with the 'exit' function to do so. That said, I would not recommend using this since it is not compatible across the board and should not be considered best practice in general, but when reading other's code, it may be encountered so one should just be aware of it.
https://www.makeuseof.com/tag/learn-c-programming-beginner-project/
CC-MAIN-2020-05
refinedweb
2,054
73.17
Can anyone explain the output I am getting from this simple program using std::map p q #include <map> #include <iostream> struct screenPoint { float x = 0, y = 0; screenPoint(float x_, float y_): x{x_}, y{y_}{} }; bool operator<(const screenPoint& left, const screenPoint& right){ return left.x<right.x&&left.y<right.y; } std::map<screenPoint, float> positions; int main(int argc, const char * argv[]) { auto p = screenPoint(1,2); auto q = screenPoint(2,1); positions.emplace(p,3); auto f = positions.find(p); auto g = positions.find(q); if (f == positions.end()){ std::cout << "f not found"; } else { std::cout << "f found"; } std::cout << std::endl; if (g == positions.end()){ std::cout << "g not found"; } else { std::cout << "g found"; } std::cout << std::endl; std::cout << "number elements: " << positions.size() << "\n"; return 0; } f found g found number elements: 1 In order to use a data type in an std::map, it must have a particular ordering called a strict weak ordering (). This means that the inequality operator ( <) obeys a very specific set of rules. The operator you specified however is not a weak ordering. In particular, given two screenPoints, a and b constructed from (1,2) and (2,1) respectively, you will see that it is false both that a < b and that b < a. In a strict weak ordering, this would be required to imply that a == b, which is not true! Because your inequality operator does not meet the requirement of a strict weak ordering, map ends up doing unexpected things. I recommend reading up more details on what this ordering is, and reading/thinking about why map requires it. In the short term, you can redefine your operator as follows: bool operator<(const screenPoint& left, const screenPoint& right){ if left.x < right.x return true; else return (left.y < right.y); }
https://codedump.io/share/VAEJPwHqlBTG/1/what-is-this-use-of-stdmap-doing
CC-MAIN-2018-09
refinedweb
307
54.93
Although there are many Graphical Tools available for sending files to a server using SFTP. But as a developer, we may have a scenario where we need to upload a file to SFTP Server from our Code. A few days ago a job assigned to me was to develop a Task Scheduler for generating XML files daily on a specific time of the day & send these files on a Remote Server using File Transfer Protocol in a secure way. Here’s my article on creating task scheduler => Creating Scheduler in c# – Schedule Task by Seconds, Minutes, Hours, Daily. What. Choosing Library for C# A lot of search & after testing many libraries I finally met with SSH.NET which was working perfectly with .Net Core 2.2 project & the good thing was that It does its job in a very few lines of Code. So we’ll use SSH.NET What is: Features . Code Finally, It’s time to create a class for SFTP Client Code. Create a file with the name as “SendFileToServer” & add the below code using Renci.SshNet; public static class SendFileToServer { // Enter your host name or IP here private static string host = "127.0.0.1"; // Enter your sftp username here private static string username = "sftp"; // Enter your sftp password here private static string password = "12345"; public static int Send(string fileName) { var connectionInfo = new ConnectionInfo(host, "sftp", new PasswordAuthenticationMethod(username, password)); // Upload File using . Let me know if you find any problem or comment If you find this Article helpful. Here are more Articles you might be Interested: – A Complete Guide to Secure Your Asp.Net Core Web Application & Apis – Creating Admin Panel in Asp.net Core MVC – Step by Step Tutorial — How to Create SOAP Web Services in Dotnet Core – Dynamic Role-Based Authorization Asp.net Core — Generate QR Code Using.
https://codinginfinite.com/upload-file-sftp-server-using-csharp-net-core-ssh/
CC-MAIN-2020-50
refinedweb
306
64.51
django-simple-templates 0.5.1 Easy, designer-friendly templates and A/B testing friendly tools for Django. ==== Overview ---- In short, **django-simple-templates** provides easy, designer-friendly templates and A/B testing (split testing) friendly tools for Django. If you have used or heard of Django's ``flatpages`` app before, you'll be more able to appreciate what **django-simple-templates** gives you. It is inspired by ``flatpages``, with a desire to have fewer knowledge dependencies and greater flexibility. Objectives ---- **django-simple-templates** is intended to: - provide the means to **isolate template designer effort**; reduce web developer involvement - provide an easy way to **launch flat or simple pages quickly**; no URL pattern or view needed - provide a quick and simple method to do **A/B testing (split testing) with Django templates** Use Cases ---- If you need to quickly launch landing pages for marketing campaigns, then **django-simple-templates** is for you. If you have a great web designer who knows next to nothing about Django, then **django-simple-templates** is likely a good fit. It helps to reduce the need for: - training web designers on Django URL patterns, views, etc. - you can restrict the necessary knowledge to Django templates and template tags (custom and/or builtin) - involving web developers to create stub page templates or to convert designer-created static HTML pages to Django templates If you want to be able to **A/B test any Django template** with an external service such as GACE (Google Analytics Content Experiments), then **django-simple-templates** will absolutely help you. I've always found A/B testing with Django (and frameworks in general) to be somewhat painful - hopefully this app alleviates that pain for others too. Installation ---- It's a standard PyPi install: pip install django-simple-templates To use the simple page template functionality, add the ``SimplePageFallbackMiddleware`` to your ``MIDDLEWARE_CLASSES`` in your ``settings.py``: MIDDLEWARE_CLASSES = ( ... # other middleware here 'simple_templates.middleware.SimplePageFallbackMiddleware' ) Note that this middleware is not necessary if you only want to use the ``get_ab_template`` functionality (see below). Configuration Options ---- **django-simple-templates** has a few options to help cater to your project's needs. You can override these by setting them in your settings.py. Each has an acceptable default value, so you do not *need* to set them: - **SIMPLE_TEMPLATES_AB_PARAM**: optional; defaults to ``ab``. This is the query string (request.GET) parameter that contains the name of the A/B testing template name. - **SIMPLE_TEMPLATES_AB_DIR**: optional; defaults to ``ab_templates``. This is the subdirectory inside your TEMPLATE_DIRS where you should place your A/B testing page templates. - **SIMPLE_TEMPLATES_DIR**: optional; defaults to ``simple_templates``. This is the subdirectory inside your TEMPLATE_DIRS where you should place your simple page templates. Usage ---- To create a "simple template" page, all you need to do is create a template file under ``SIMPLE_TEMPLATES_DIR``. This is your standard Django template format, inheritance, etc. The directory structure you place it in determines the URL structure. For example, creating a template here: <your_templates_dir>/simple_templates/en/contact.html would result in the a URL structure like: The ``SimplePageFallbackMiddleware`` middleware kicks in and looks for possible template file matches when an ``Http404`` is the response to a web request, so if you had a URL pattern and view that handled the ``/en/contact/`` URL, this middleware would not do anything at all. To create an A/B testing template (the variation template) for the example simple page template above, you'd create the variation template under the appropriate directory structure under ``SIMPLE_TEMPLATES_AB_DIR``: <your_templates_dir>/ab_templates/simple_templates/en/contact/variation1.html and the resulting URL would be: To use the A/B testing functionality in your existing code, import ``get_ab_template`` and use it in your view: from django.shortcuts import render from simple_templates.utils import get_ab_template def my_view(request): template = get_ab_template(request, 'my_view_template.html') return render(request, template) The ``get_ab_template`` function works like this: - pass Django's `request` object and the view's normal template into `get_ab_template` - the `get_ab_template` will look in request.GET to see if there was an `ab` parameter in the query string - if `ab` is found in request.GET, `get_ab_template` will attempt to find the associated template file under ``SIMPLE_TEMPLATES_AB_DIR`` - if the `ab` template file is found, the `ab` template path is returned - if either `ab` or the template file associated with `ab` is not found, the passed-in 'default' template file is returned Here's an example to demonstrate. If you want to A/B test your signup page with the URL: and your current user signup template file located here: <your_templates_dir>/user/signup.html with a variation called 'fewer-inputs', you would first modify your Django view for a user signing up to use ``get_ab_template`` and you would have this URL as your variation page: and your variation template file should be placed here: <your_templates_dir>/ab_templates/user/signup/fewer-inputs.html Compatibility ---- **django-simple-templates** been used in the following version configurations: - Python 2.6, 2.7 - Django 1.4, 1.5 It should work with prior versions; please report your usage and submit pull requests as necessary. Source ---- The latest source code can always be found here: Credits ---- django-simple-templates is maintained by James Addison, code@scottisheyes.com. License ---- django-simple-templates is Copyright (c) 2013, James Addison. It is free software, and may be redistributed under the terms specified in the LICENSE file. Questions, Comments, Concerns: ---- Feel free to open an issue here: - or better yet, submit a pull request with fixes and improvements. TODO: ---- - mention GACE usage (GACE script on original template file only) - use canonical link tag to non-variation URL (use django-spurl for easy usage) - build the above into your overall project base.html template(s) so you never forget - complete tests - Downloads (All Versions): - 13 downloads in the last day - 79 downloads in the last week - 254 downloads in the last month - Author: James Addison - Keywords: a/b testing,split testing,a/b,split - License: BSD - Categories - Package Index Owner: jaddison - DOAP record: django-simple-templates-0.5.1.xml
https://pypi.python.org/pypi/django-simple-templates/0.5.1
CC-MAIN-2015-32
refinedweb
999
50.87
A handful of dice can make a decent normal random number generator, good enough for classroom demonstrations. I wrote about this a while ago. My original post included Mathematica code for calculating how close to normal the distribution of the sum of the dice is. Here I’d like to redo the code in Python to show how to do the same calculations using SymPy. [Update: I’ll also give a solution that does not use SymPy and that scales much better.] If you roll five dice and add up the spots, the probability of getting a sum of k is the coefficient of xk in the expansion of (x + x2 + x3 + x4 + x5 + x6)5 / 65. Here’s code to find the probabilities by expanding the polynomial and taking coefficients. from sympy import Symbol sides = 6 dice = 5 rolls = range( dice*sides + 1 ) # Tell SymPy that we want to use x as a symbol, not a number x = Symbol('x') # p(x) = (x + x^2 + ... + x^m)^n # where m = number of sides per die # and n = number of dice p = sum([x**i for i in range(1, sides + 1)])**dice # Extract the coefficients of p(x) and divide by sides**dice pmf = [sides**(-dice) * p.expand().coeff(x, i) for i in rolls] If you’d like to compare the CDF of the dice sum to a normal CDF you could add this. from scipy import array, sqrt from scipy.stats import norm cdf = array(pmf).cumsum() # Normal CDF for comparison mean = 0.5*(sides + 1)*dice variance = dice*(sides**2 -1)/12.0 temp = [norm.cdf(i, mean, sqrt(variance)) for i in roles] norm_cdf = array(temp) diff = abs(cdf - norm_cdf) # Print the maximum error and where it occurs print diff.max(), diff.argmax() Question: Now suppose you want a better approximation to a normal distribution. Would it be better to increase the number of dice or the number of sides per dice? For example, would you be better off with 10 six-sided dice or 5 twelve-sided dice? Think about it before reading the solution. Update: The SymPy code does not scale well. When I tried the code with 50 six-sided dice, it ran out of memory. Based on Andre’s comment, I rewrote the code using polypow. SymPy offers much more symbolic calculation functionality than NumPy, but in this case NumPy contains all we need. It is much faster and it doesn’t run out of memory. from numpy.polynomial.polynomial import polypow from numpy import ones sides = 6 dice = 100 # Create an array of polynomial coefficients for # x + x^2 + ... + x^sides p = ones(sides + 1) p[0] = 0 # Extract the coefficients of p(x)**dice and divide by sides**dice pmf = sides**(-dice) * polypow(p, dice) cdf = pmf.cumsum() That solution works for up to 398 dice. What’s up with that? With 399 dice, the largest polynomial coefficient overflows. If we divide by the number of dice before raising the polynomial to the power dice, the code becomes a little simpler and scales further. p = ones(sides + 1) p[0] = 0 p /= sides pmf = polypow(p, dice) cdf = pmf.cumsum() I tried this last approach on 10,000 dice with no problem. * * * For daily posts on probability, follow @ProbFact on Twitter. 4 thoughts on “Rolling dice for normal samples: Python version” Interesting! Perhaps numpy.polynomial.polypow would be more practical in this situation, though? At least for me, since I’m not very used to Sympy 🙂 If M is the number of sides and N is the number of dice, I am inclined to say that increasing N would be better, at least for large N and M. That way, you are increasing the “exponent” in the number of outcomes, as opposed to “base”. Also, CLT works for N going to infinity, while for M going to infinity and N=2, the pdf of the sum would be triangular shaped (due to the convolution of two uniform pdfs) rather than Gaussian-like. …it’s things like this that show me I *NEED* more education! I’ve written a (tiny & very experimental) library for doing some dice exploration & such, which lets you find the mean, variance, standard dev, and the raw probability of rolling a given number on nDx. It can be found at – though unfortunately (for most people) it’s in Haskell. Anyway, I haven’t actually implemented a CDF for my dice exploder, but it shouldn’t be too hard. If you want to take a look at the code &| steal some ideas from it feel free (but beware: it’s ugly!!) p.s. reading your blog helps push my boundaries… thank you!
https://www.johndcook.com/blog/2013/04/29/rolling-dice-for-normal-samples-python-version/
CC-MAIN-2017-43
refinedweb
785
72.36
Using Vue as an Angular alternative for Ionic: Parent-Children Communication Modern. In an Ionic Angular application, an Input-Output-EventEmitter System is used for this purpose. A similar system is available in an Ionic Vue application and that’s what we will see in this tutorial! We will first see how we can do this in a simple Ionic Vue TypeScript application, you will only need to install the vue-class-component library from the previous Component tutorial. The second part focuses on using Vue Single File Components (SFC), we already did the mandatory configuration part in the same Component tutorial so head there if you want to work with .vue files. Before diving in this tutorial, you need to know why this Parent-Children System is used. It all comes down to three important words in modern JavaScript development: Source of Truth. Back in the day, AngularJS broke every record with its two-way data binding. Unlike AJAX, the view was automatically updated when a property was modified. This was mind-blowing! However, when it came to debugging, it was really hard to understand where the data were modified. Was it in a Service? In the Parent? In a Child? It was hard to find the Source of Truth. Nowadays, mutating some information coming from a Parent is either useless because it won't have any effect or triggers a warning like: That's why we need to know the famous Parent-Children System for Ionic Vue applications. Ionic Vue TS Parent-Children We will start with the ChildNode Component in the child-node.ts file: import Vue from 'vue'; import Component from 'vue-class-component'; @Component({ template: `<div> <div> Hi I'm the child, my name is {{name}} </div> <input type="text" @ </div>`, props: { name: String } }) We will display the name property of the Component, this name is not just a simple property, we won’t declare it in the Component’s Class, this name value is a props, it must be acquired from the parent when the child-node Element is created. The props field can be an array of props, however, it’s better to use it as an object to define the props types. The last part of the Child’s template is an input, we will listen for the change event and trigger the Ionic Vue Child’s changeName method, which is as follow: . . . export default class ChildNode extends Vue { changeName ({ target: { value } }): void { this.$emit('nameChanged', value); } } It receives an event and we directly grab the target.value property by doing some ES6 destructuring.The $emit method is located on a Vue Component, it doesn’t matter if we are in pure Vue or Ionic Vue, this method will always be there. It allows us to pass a message to the upper Components, the first argument is the name of the message, the second one is the data attached to it. We can now create a parent-node.ts file: import Vue from 'vue'; import Component from 'vue-class-component'; import ChildNode from './child-node'; @Component({ template: `<div> <child-node v-on: </child-node> </div>`, components: { ChildNode } }) We import the ChildNode and add it to the components field in order to use it in the template. The Element is then used. If you have read the Ionic Vue Directive tutorial, this v-on attribute must be quite familiar. It has a nameChanged arg and a changeName value. This arg is a string referring to which event name we are listening for. The changeName value is the function that will be triggered once the nameChanged event is received from a Child Component. The last attribute is name, we will use a nameProp property to initialize this value. This property is defined in the ParentNode’s Class: . . . export default class ParentNode extends Vue { nameProp: string = 'Child 1'; changeName (newName): void { this.nameProp = newName; } } The name is initialized to "Child 1". The changeName function is expecting a newName value. Once this function is triggered, the nameProp property will be updated with this newName, forcing the Parent Node Component to re-render and then update the Child Node Component’s name prop. We only need to add the ParentNode Component to the Ionic Vue root instance in the main.ts file: import Vue from "vue"; import TsParentNode from "./app/parent-children/typescript-components/parent-node"; var app = new Vue({ el: "#app", components: { TsParentNode } }); And add the Element to the index.html file: <div id="app"> <ts-parent-node></ts-parent-node> </div> Which gives the following result: Ionic Vue SFC Parent-Children This result can also be obtained by using Vue’s Single File Component System (don’t forget to head there to use the Ionic Vue SFC Webpack configuration for this part). Just like before, we start with the ChildNode Component in the child-node.vue file: <template> <div> <div> Hi I'm the child, my name is {{name}} </div> <input type="text" @ </div> </template> <script> export default { props: { name: String }, methods: { changeName: function ({ target: { value } }) { this.$emit('nameChanged', value); } } } </script> <style scoped> </style> Just like before, we have the name prop that is expecting a String and the changeName method that will dispatch the event to the Parent located in the parent-node.vue file: <template> <div> <child-node v-on:</child-node> </div> </template> <script> import Vue from "vue"; import ChildNode from "./child-node.vue"; export default Vue.extend({ data() { return { name: "Child 2" }; }, methods: { changeName: function(newName) { this.name = newName; } }, components: { ChildNode } }); </script> <style scoped> </style> The ChildNode is added to the components field, the name property is set to “Child 2” and the changeName method is ready to update the name property. We can add the VueParentNode to the main.ts file: import Vue from "vue"; import VueParentNode from "./app/parent-children/vue-components/parent-node.vue"; var app = new Vue({ el: "#app", components: { VueParentNode } }); And use the Directive in the index.html: <div id="app"> <vue-parent-node></vue-parent-node> </div> To get the same result: Conclusion This system is quite complex, the Parent creates a Child and pass a name property, this child renders its template by using this prop, the user changes the name in the child which emits an event to the parent that will update its local name value and update the child. Always having to send a message to the parent in order to update the child is not awesome, but that’s our only core solution. VueX is a great option to modify the state of the application while keeping a reliable Source of Truth, but that’s for another tutorial.
https://javascripttuts.com/using-vue-as-an-angular-alternative-for-ionic-parent-children-communication/
CC-MAIN-2019-26
refinedweb
1,108
61.56
Working in VC++ and here's my problem, was reading through my "SAMS Teach yourself C++ in 1 hour a day" and got to chapter 13 which is supposed to teach operator overloading, but when I modified the code from the book, as such: it gives me the errors:it gives me the errors:Code:#include <iostream> using namespace std; class number //class to test the reprogramming of the "++" increment operator { public: number(); ~number(); int mynumber; void inc(int increment) // replacement function for ++ operator { mynumber += increment; } number& operator ++ () // reprogramming the ++ operator { inc(1); return *this; } void dmn() // dmn = display mynumber { cout << mynumber; } void setnum(int x)// sets mynumber value { mynumber = x; } }; int main() { int z; cout << "Please choose start point : "; cin >> z; number newnumb(); newnumb.setnum(z); newnumb.dmn(); ++ newnumb; newnumb.dmn(); system("PAUSE"); return 0; } 1. error C2228: left of '.setnum' must have class/struct/union 2. error C2228: left of '.dmn' must have class/struct/union 3. error C2171: '++' : illegal on operands of type 'number (__cdecl *)(void)' 4. error C2105: '++' needs l-value 5. warning C4550: expression evaluates to a function which is missing an argument list 6. error C2228: left of '.dmn' must have class/struct/union 1, 2, and 6 annoy me, because I don't understand why it's printing it 3 and 4, I am just clueless as to what they mean...
http://cboard.cprogramming.com/cplusplus-programming/117755-new-cplusplus-i-don%27t-understand-what-means.html
CC-MAIN-2015-40
refinedweb
228
62.38
Recently I posted about SAML's wide adoption and its next steps. Well, SAML V1.1 has now become an OASIS Standard through a strong show of support from OASIS members, and I can report that the SAML committee's face-to-face meeting this week to plan out the features of V2.0 was a big success. If you haven't run across the Security Assertion Markup Language before, here's the basic idea. SAML allows for interoperable exchange of security information about subjects, focusing on describing three kinds of things: authentication acts, attributes, and authorization decisions. You can request "assertions" in these forms from "SAML authorities" that you trust. One especially useful scenario for SAML is single sign-on (SSO), where a user can log in to one website but then proceed to use resources at a website in a different domain -- because SAML assertions are being exchanged that tell the second site that the user's okay. This was the focus of the selection of SAML as an underpinning of the Liberty Alliance identity federation work and for Sun's SAML support in its Sun ONE Identity Server product. Another scenario is to use SAML assertions to secure a SOAP message, which is achieved by the OASIS WSS (Web Services Security) SAML Token Profile work. SAML is also designed to be extremely extensible while retaining a reasonable level of interoperability, and a number of standards efforts and products have taken advantage of this. We had to blaze a bit of a new W3C XML Schema trail in V1.0 in trying out different methods of extension, and the real-world reports we're getting back will help us refine these methods. One issue is the best way to refer to "standard" user attributes that come from something like an LDAP schema. Currently the XML representation of this in SAML is a simple attribute name string plus an XML Namespaces-like URI (an "attribute namespace" in SAML terms). Another issue is how to improve the XML Schema type hierarchy that we make available for extension and where we should be using the xs:anyType datatype. (By the way, I'll be giving a lecture on XML and extensibility on September 15 at the Center for Document Engineering at UC Berkeley, touching on these sorts of topics.) xs:anyType If you haven't checked out SAML yet, you can download the specs here, and you can also find an open-source toolkit at OpenSAML.org. And if you've got new use cases that you'd like SAML V2.0 to support, make sure to get your comments in as soon as possible (see my previous post for instructions) because the window will be closing pretty soon.
http://weblogs.java.net/blog/elm/archive/2003/09/saml_v11_is_fin.html
crawl-002
refinedweb
458
57.91
IPSEC_STRERROR(3) NetBSD Library Functions Manual IPSEC_STRERROR(3)Powered by man-cgi (2021-06-01). Maintained for NetBSD by Kimmo Suominen. Based on man-cgi by Panagiotis Christias. NAME ipsec_strerror -- error messages for the IPsec policy manipulation library LIBRARY IPsec Policy Control Library (libipsec, -lipsec) SYNOPSIS #include <netinet6/ipsec.h> const char * ipsec_strerror(void); DESCRIPTION netinet6/ipsec.h declares extern int ipsec_errcode; which is used to pass an error code from the IPsec policy manipulation library to a program. ipsec_strerror() can be used to obtain the error message string for the error code. The array pointed to is not to be modified by the calling program. Since ipsec_strerror() uses strerror(3) as underlying function, calling strerror(3) after ipsec_strerror() will make the return value from ipsec_strerror() invalid or overwritten. RETURN VALUES ipsec_strerror() always returns a pointer to a C string. The C string must not be overwritten by the calling program. SEE ALSO ipsec_set_policy(3) HISTORY ipsec_strerror() first appeared in the WIDE/KAME IPv6 protocol stack kit. BUGS ipsec_strerror() will return its result which may be overwritten by sub- sequent calls. ipsec_errcode is not thread safe. NetBSD 4.0 May 6, 1998 NetBSD 4.0
https://man.netbsd.org/NetBSD-4.0/ipsec_strerror.3
CC-MAIN-2022-40
refinedweb
195
59.5
Class for a stroboscopic tuning indicator. More... #include <stroboscope.h> Class for a stroboscopic tuning indicator. Ordinary stroboscopic tuners show the analog audio signal amplitude in terms of light intensity masked by a rotating disk. See e.g. Doing the same in our software would require to send the complete PCM data to a stroboscopic drawer which would display it in real time pixel by pixel. This would be computationally expensive. Moreover, the messaging sytem would not be suitable for such a data stream. We therefore choose a different method. The AudioRecorderAdapter holds an instance of this class and calls the function pushRawData (see below), transmitting all PCM data. This data is reorganized in packages of the size mSamplesPerFrame. Each frame is convolved with a complex number rotating on its unit circle. The speed at which the complex number rotates corresponds to the expected frequencies of the partials, as specified by the function set Frequencies. The result is a set of complex numbers (one for each partial), encoding the intensity of the partial and its complex phase shift. This data is sent (only once for each frame) via the messaging system. The TuningIndicatorDrawer listens to these messages and draws horizontal bars with phase-shifted rainbow colors. Definition at line 67 of file stroboscope.h. Type for a complex number. Definition at line 77 of file stroboscope.h. Type for a vector of complex numbers. Definition at line 78 of file stroboscope.h. Constructor. Stroboscope::Stroboscope. Definition at line 40 of file stroboscope.cpp. Stroboscope::pushRawData. Definition at line 59 of file stroboscope.cpp. Stroboscope::setFramesPerSecond. Definition at line 101 of file stroboscope.cpp. Stroboscope::setFrequencies. Definition at line 116 of file stroboscope.cpp. Start the stroboscope. Definition at line 81 of file stroboscope.h. Stop the stroboscope. Definition at line 82 of file stroboscope.h. Damping factor of the normalizing amplitude level on a single frame (0...1) Definition at line 71 of file stroboscope.h. Damping of the complex phases from frame to frame (0...1) Definition at line 74 of file stroboscope.h. Flag indicating activity (start/stop) Definition at line 90 of file stroboscope.h. Factor by which the complex number rotates. Definition at line 95 of file stroboscope.h. Rotating complex number. Definition at line 94 of file stroboscope.h. Sliding amplitude to normalize the data. Definition at line 93 of file stroboscope.h. Phase average over the actual frame. Definition at line 96 of file stroboscope.h. Mutex protecting access from different threads. Definition at line 97 of file stroboscope.h. Pointer to the audio recorder. Definition at line 89 of file stroboscope.h. Actual number of PCM samples read. Definition at line 92 of file stroboscope.h. Number of PCM samples per frame. Definition at line 91 of file stroboscope.h.
http://doxygen.piano-tuner.org/class_stroboscope.html
CC-MAIN-2022-05
refinedweb
467
53.78
On Tue, Mar 27, 2018 at 7:00 AM, Nick Coghlan ncoghlan@gmail.com wrote: On 27 March 2018 at 01:57, Guido van Rossum guido@python.org wrote: On Mon, Mar 26, 2018 at 7:57 AM, Nick Coghlan ncoghlan@gmail.com wrote:). Right, but that's annoying, too, and adds "Am I polluting a namespace I care about?" to something that would ideally be a purely statement local consideration (and currently is for comprehensions and generator expressions). The standard reply here is that if you can't tell at a glance whether that's the case, your code is too complex. The Zen of Python says "Namespaces are one honking great idea -- let's do more of those!" and in this case that means refactor into smaller namespaces, i.e. functions/methods.
https://mail.python.org/archives/list/python-ideas@python.org/message/3DOILK5G3KDBKICCHEPYNSCZHODRERAQ/
CC-MAIN-2021-21
refinedweb
134
73.27
Hello! I have the same problem. Let me know if you find a solution. Thank you. Hello! I have the same problem. Let me know if you find a solution. Thank you. Hi! I have this line in my pkg: "DaemonBootFlag" -"!:\Nia0xmyUID\DaemonBootFlag" When I uninstall my app, the file "Nia0xmyUID\DaemonBootFlag"... I had the same problem and used this workaround: ... You might be right. But I imagine that Nokia has disabled optimizations for some reason. Maybe there are bugs in optimization phase (I don't know). Bye! Nice weekend! Binaries produced by GCCE (the CSL ARM gcc toolchain) for 9.1 are larger than what we’ve been used to. A suggested solution is to purchase RVCT from ARM Do you know a cheaper solution??? Thanks Hi! TcpTransport could be any class that implements: 1. function onSocketConnect(success:Boolean):Void 2. function onSocketXML(xml:XML):Void and registers as a listener.... This class works properly for me: import TcpTransport; class XMLSocketWrapper extends XMLSocket { private var listener:TcpTransport; I have used XML over TCP Socket using XMLSocket API and works fine. I recommend you this option. Solved. RProcess handle leaked. Hello! I used HTTP, but acomplished bad performance (FlashLite 2.0 HTTP Stack does not use KeepAlive on Local connections). FlashLite 2.1, which is now available, offers XMLSocket API. I... Hello! I'm developing a Control Panel to start/stop a deamon (as much the same as Nokia Control Panel to start/stop bluetooth service). The target platform is S60 2nd (I'm using a Nokia N70). I... OK, Thank you. Then which is your recommendation: 1. Generate the GUI using Carbide's UI Designer for S60 v2 and then include the port for S60 v3 usign ifdef on mmp, pkg, .h and .cpp OR ... Hello! I need to develop a control panel (S60 GUI Application) to configure the settings of a Symbian Server (developed by our company). I'm a newbie on S60 GUIs ;-). The control panel must work... Hello! In my case, the error: "error: cannot open file, check filename and access rights" was due to some antivirus conflict. Disable antivirus and try again. Hello! I have tested S60 3rd SDK NPBitmap example successfully. My Question is: Is it possible to use the webplugin without using "object" or "embed" tag inside and HTML document? For... Thank You. This tip was useful to me Hello! I develop a Symbian Server (S60 1st, 2nd and 3rd) I want to provide services to FlashLite 2.0 and 2.1 Applications. The only available method that I have found is using HTTP. (Talk... Hello! Real Web Plugin is the only available method in S60 3rd? To Develop a Web Plugin you need ALL -TCP Capabilities! Thank You Hello! In S60 1st and 2nd it was possible to extend Nokia browser using: 1. Recognizer for the custom mime-type AND 2. S60 App that handles the custom mime-type From S60 2nd FP2, Browser... Hello! Has anybody acomplished a succesful SIP Registration using CSIPRegistrationBinding class? I'm using example code from S60 3rd FP1 SDK, but I'm not able to make it work. No errors of... Hello! I need to send REGISTER message against real IMS in S60 3rd and S60 3rd FP1 Phones. SIPExample that comes in S60 3rd SDK send REGISTER using SIP Profile (don't use IMS features) What... Hello could anybody share some example code to make IMS Registration on Nokia S60 3rd Phones ? Thank You very much! Please, can you provide example code to do an IMS Registration? (example code to add IMS additional headers) Thank You very much! Is it possible to use S60 3d SIP API to implement IMS client? Hello! Please, can you provide example code to make a valid IMS Registration using Nokia S60 3rd SIP API? Thank You very much! Solved: I must use: const TDesC8& desContentCorrect = sipMessageElements.Content(); instead of TDesC8 desContentIncorrect = sipMessageElements.Content();
http://developer.nokia.com/Community/Discussion/search.php?s=f1e0a029864d2e694efec47e64c43e00&searchid=1843517
CC-MAIN-2013-48
refinedweb
654
68.97
There are multiple ways of passing the Session variables to WCF service. We will pass the session variables to WCF service using SOAP Message header. So let’s open Visual Studio 2010 and create a Blank solution with the name ‘SessionStateSharingExample’. Once the solution is created, add a WCF Serivce library with the name ‘TestWCFService’ as shown below – Now modify the Interface ‘Iservice1’ as shown below – First import a name – ‘using System.Data;’ Now implement this interface in our Service class ‘Service1’ as shown below. First import the following Data namespaces – Note: Make sure you change the connection string (Server Name, User ID and Password) according to your settings. Now our service is ready to be accept requests. Let’s host this service in our .NET Console Application. Add a Console Application in our Solution with the name ‘WCFHost’. Now add a reference to the ‘System.ServiceModel.dll’ file. Also add a reference to our WCF Service. Then add the following code in the Main Method – Now let’s add a App.Config file in our Console application with the configuration code as shown below – Run your WCF host to test the service. You should get the following message- Now add a ASP.NET website in our solution with the name ‘TestWebSite’. Once your site is ready, add two buttons and a ‘GridView’control in ‘Default.aspx’ as shown below – Once our design is ready, let’s add an interface which will act like a proxy and give a call to our WCF service by passing a database name, which in turn returns the data table Customers from the respective database. Make sure you add a reference of ‘System.ServiceModel.dll’ and import the namespace – Let’s write the following code on our both the button click event as shown below – Now run the service and then your ASP.NET web site. Click the button ‘Adventure Works Database’ and then click the button ‘Northwind Database’ and you should see the output like below – Summary – In this article we have see how easy it is to pass the ASP.NET Session Variables to WCF services using SOAP Message Header. The entire source code of this article can be downloaded over here
http://www.dotnetcurry.com/aspnet/716/pass-aspnet-session-variables-wcf-service
CC-MAIN-2017-22
refinedweb
370
64.91
Freeciv As Benchmark of HTML5 Canvas Javascript Performance 246 Andreas(R) writes "The Freeciv.net crew has benchmarked their web client, which is a rich web application using the HTML5 canvas element. This shows how fast Firefox, Google Chrome, Safari and Internet Explorer perform using the latest HTML5 web standards." That's hardly a benchmark (Score:3, Funny) Now someone just needs to port the Quakes over, for a real benchmark. None of this turn-based strategy nonsense. :p Re:That's hardly a benchmark (Score:4, Insightful) Well, seeing as Freeciv runs at 7 or 8 fps on Chrome for them, I imagine Quake will run pretty phenomenally. Re:That's hardly a benchmark (Score:5, Funny) Make it 320 columns wide and 240 rows deep, for old-school flavor, with all cells empty, and just treat each cell's background color as a pixel value... What could possibly go wrong? Re: (Score:2) I saw it done in Excel once... Re: (Score:2, Funny) here's conway's life in a fullscreen 20x20 table: [etcet.net] it gets about 2-3 fps on my atom box. 100x100 is about 10spf Re: (Score:2) In Excel, no less! (Score:4, Informative) Space Invaders, Monopoly, ... oh my. [gamesexcel.com] Re: (Score:2) Re:That's hardly a benchmark (Score:4, Informative) QuakeLive doesn't run in the browser. It is just the Quake 3 engine wrapped into a browser plugin. Drop IE8 (Score:2) IE8 isn't the dominant IE browser yet. Drop IE8 support and offer the IE6/IE7 users a chance to go to another browser. If they have to get used to a new 'look' anyway, what's the difference between IE6->Chrome vs IE6->IE8? Re:Drop IE8 (Score:4, Insightful) To the gimlet-eyed corporate IT guy who controls the browser on 10,000 seats and DroneCorp Inc, LLC, on the other hand, it will pretty much come down to "Which one will allow me to break anything you might possibly do instead of your work just by clicking at group policy objects for a few minutes?" and "Which one will pull updates from WSUS?". This is why Chrome's marketshare is increasing at a fair clip; but the worker bees at DroneCorp Inc, LLC will be getting IE7 sometime in 2012... Re:Drop IE8 (Score:4, Interesting) Re: (Score:2) The main reason for IE6 is the combination of idiotic managers/developers that have locked a lot of applications into IE6 only. Would you: - pay 38 vendors between £20k and £3m each to migrate your old versions of their software to a new browser, or - manually rewrite the UI of 60 systems, or - keep the web browser that continues to work with 60 systems from 38 vendors, requires no new testing, no new hardware, no new licences and saves you a massive change overhead you just don't need Having made that decision in a manner that achieves the best outcome for the customers, the owners of the business and the staff (in that ord Re: (Score:2) Re: (Score:3, Interesting) It's the difference between ideal approach and pragmatic real-world approach. Vendor A offers IE6 support only (back when it was IE6 or Netscape) and meets 90% of the requirements out of the box; Vendor B offers IE6 and Netscape support but only meets 60% of the requirements out of the box. Since nobody has Netscape installed it's a complete no-brainer to buy from Vendor A, even though you get browser lock-in as a result. The entire point of web apps in a business environment isn't the ease of replacing the b Re: (Score:2) Far easier is to run XP with IE6 in a virtual container on the desktop PC, providing support for the legacy estate while permitting new systems to be introduced using modern browsers. Although less elegant, less secure and less fun than your approach it does have the advantage of being already possible and easy to roll out by a corporate IT department. Re: (Score:2) What benefit is there to upgrading to IE7 over IE8? Did that much stuff really break between the two? Complete standards like XHTML and SVG (Score:2) Building to modern published (complete) standards is the only real way. XHTML 1.0 and SVG 1.1 are "modern published (complete) standards", yet not even IE 8 supports them. Re: (Score:2) Freeciv should probably be blocked at work anyway. We used to have an old client/server installed in the office a few years ago. It was a fun game to login every hour or two and do a turn or two. But these days, SmartFilter pretty much grabs everything that isn't work-related. Re: (Score:3, Insightful) Re: (Score:3, Interesting) I haven't seen an alternative browser that it works reliably on yet. Yes, its a windows specific thing, but until other browsers properly support single sign on you're not going to get them into the corporate workplace in any fully supported manner. And if they're not at work, they're less likely to end up getting installed at home, either. I mean, i'm an admin and run plenty of different browsers, but from a "please why won't the users leave me alone Re: (Score:2) Enabling NTLM in Firefox is URI specific. I haven't seen any issues with it though. Re: (Score:2) NTLM/windows domain authentication - single sign-on. Have your users run IE 8 (not 6) through an HTTP proxy that has access only to these sites, and have them run Chrome or Firefox for everything else. Re: (Score:2) It will be by the end of the year, the new look isn't much different than IE7 as far as I've seen, and it comes with the most popular OS on the planet. Dropping support for IE8 is a most idiotic thing to do, regardless of how shitty it is. Re: (Score:2) Um, don't most benchmarks put IE8 Javascript performance like, an order of magnitude better than IE 7, which is like an order of magnitude better than IE 6? A better way is to get all IE users onto Chrome Frame to run this web application. Not fast (Score:4, Informative) Re: (Score:2) which would be dreadful even for a turn-based game. Erm, wouldn't a turn based game only need to refresh once per turn? Re: (Score:3, Informative) Most people like to scroll around the map a bit while they're planning their turn . . . Re:Not fast (Score:4, Insightful) No, the data updates once per turn. Things like animations (not sure that freeciv uses any) and moving the map around for a different view can happen many times in the interium, and of course as you send it all the commands each turn for what to do, loading UI displays and such, all of that is running at 8fps too. Re: (Score:2, Informative) I'd assume it's not. I ran their benchmark with Chrome on Win 7 and my Sony laptop and got 43.8ms as the result which is quite a bit faster than they listed as their result. I also got 149.72 with FF 3.6, which again is quite a bit faster. Re: (Score:2) only eight frames per second And this, kids, is why we don't run applications inside of web browsers. Re: (Score:2) only eight frames per second And this, kids, is why we don't run applications inside of web browsers. ... yet. Besides, you seem to be equating games with apps - there would be a lot of non-game apps that would happily run at 8fps. Graphing or spreadsheet apps don't need killer refresh rates and even something with more animation like powerpoint wouldn't look horrible (well, no more so than the actual product) at that rate. If anything, business apps are likely to drive a more widespread adoption of HTML5-based browsers in corporate environments, which will in turn allow more effort to be devoted to pushing Re: (Score:2) Meh. We run lots of things in slow ways. Remember all of those games you used to play that took 100% of your computer or console's power? Now I run them in something that emulates the entire system. Oh, and they run faster than they did back then too. I ran Civilisation on a 16MHz 386SX. An x86 emulator written in JavaScript running in a browser on a modern PC will get better performance than that. FreeCiv is a bit more processor-intensive than the original Civilisation, but it can probably handle ren Re: (Score:3, Funny) "And yet, the NeXT systems had a reputation for beautiful graphics." Sure. Both users agreed. Re:Not fast (Score:5, Funny) Re:Not fast (Score:5, Interesting) And I believe the trend will be for consumer CPUs to aim for lower heat and power, rather than higher speed. Unfortunately, the abstraction layers just keep piling on there. Give it another few years, and we might not be able to emulate Commodore 64 games on the desktop any more. Re: (Score:3, Interesting) Unfortunately, the abstraction layers just keep piling on there. Well, to be fair, they're just re-writing software the way it should have been done in the first place, but couldn't originally due to the hardware's limited capabilities. Re: (Score:2) Obligatory thedailywtf.com link: [thedailywtf.com] Re: (Score:2) In case anyone was wondering... (Score:4, Funny) Safari on Snow Leopard? (Score:2, Insightful) bias (Score:2) Seriously though, any idea why Chrome is faster on Vista, the most maligned, stereotyped as slow OS there has ever been? Would also be keen to see OS X results. Re:bias (Score:5, Informative) 'Cause Vista's not as slow as people claim. I've never seen any evidence, either in my testing or online, that Vista ran programs any slower than XP. Most of Vista's slowness rep came from two things: 1) Lots of messing with the disk, particularly on boot. Vista wanted to cache a ton of shit in memory, probably to aggressively, as well as other stuff. Could lead to a system being sluggish to respond to users when it first started. 2) People running it on crap hardware. Vista has a much higher minimum bar than XP for good performance. You really want a dual core and 2GB minimum for a nice system (as opposed to a P4 and 1GB being fine for XP). Lots of people had older systems, tried the new OS, and got mad because it didn't work well. Duh. Newer software needs more resources. So it doesn't surprise me that a pure app test worked fine on Vista. It was never slow at that. Re: (Score:3, Insightful) On my laptop I have noticed a huge performance increase with Ubuntu compared to Vista running netbeans, open office, and Firefox. You are right its mostly disk. However disk access is the number one bottleneck on modern pcs so that is very important. The problem is Windows loves to load a million services at once and the disk can only handle so much when it boots. You should try running your win32 apps on Windows7 with the same hardware as vista? You will notice quite a difference. Also the slower processors Re: (Score:2) My post was more taking the piss at the /. majority who rant on how shit vista is, and how "it is year of the linux desktop!", only to have it outperform Linux on tests like this. Yes, its only 0.002ms or whatever, but it steal beat linux on that test :D Re: (Score:2) It's also possible that Chrome is optimised for Windows as that's the majority share OS - it's where all their benchmarking is going to show up in the marketing metrics for how Chrome is so much faster than everyone else. They likely didn't spend so much time optimising the Linux port because it only had to be fit for purpose. On a side note, Vista takes almost three minutes to boot to a usable state on my intel core 2 quad core q6600 (overclocked), 4GB desktop PC with a moderate amount of software installed Coherence? (Score:4, Insightful) Amusing so Vista is as good as XP for running programs but it need much more powerful hardware(!). Don't you see a "small" contradiction/incoherence in your post? Re:Coherence? (Score:4, Interesting) No, it's a question of scalability, which is often more important than raw speed. With some systems, they perform well in relatively restricted hardware, but the performance improvement when you add more does not scale linearly with the extra RAM, CPU, and so on. With others, you get more constant overhead, but better scalability. Think of the overall performance as constant overhead + scalability load * resources. With XP, it sounds like the constant overhead is lower (which makes sense, as it had to run on 200MHz chips), but the scalability load is higher (which also makes sense, because it wasn't designed for 4+ cores and 2+GB of RAM). Or, to put it another way, if XP gets 80% of the maximum theoretical performance out of a 200MHz Pentium with 128MB of RAM, but only 50% of the maximum theoretical performance from a 2GHz Core 2 Duo with 4GB of RAM, while Vista gets 50% and 70%, respectively, what the grandparent said would be true and contain no contradictions. Various things in modern operating systems are optimised to take advantage of lots of spare RAM (for example, aggressive pre-fetching of data from the disk). Splitting services up into concurrent tasks has more overhead from context switching, but lets you scale better to multiple processors. Older desktop operating systems treated RAM as a very scarce resource and were heavily optimised for the single-CPU case, because hardly anyone had more than one CPU. Re: (Score:2) Windows 7 needs more resources than Vista? Duh. Look, Vista was a festering pile of diseased dogshit. You know it, I know it, Microsoft knows it. There's simply no need to defend it, especially when the "defence" runs to "Well, if you run it on monster hardware, it's not as slow as you think." The nightmare is over, man. Just let it go. Re: (Score:3, Informative) I can admit to never having used Vista. But I have noticed on the back of pretty much all of the boxed PC games at my local Game store that the each game's requirements now quote differently depending on whether you're running XP or Vista - and the difference for Vista is usually an additional 0.5GB of memory plus a slightly faster CPU. So it does suggest that Vista has considerably more overhead than XP. Re: (Score:2) Wow! But is it possible to be a virgin in two things simultaneously? Re: (Score:2) Newer software needs more resources if it offers more functionality or it is badly written, and I don't see more functionality in Vista/Win7... Re: (Score:2) In my experience Win7 seems to require fewer resources than Vista - I can't ever imagine Vista on a netbook, but 7 does a nice enough job, and you might not consider all that graphical "bling" to be functionality but it has an overhead (and the Win7 implementation is much better than Vista's was). I'll let Penny Arcade sum up my Win7 experience [penny-arcade.com] to date. Re: (Score:2, Interesting) One of the biggest reasons in the apparent jump in performance from Vista to Win7 was MS fixing the ungodly GDI problem that Vista had - there's a fairly thorough write-up about it here [msdn.com] Essentially, GDI in Vista scaled in a square/cube fashion with each new object taking up memory in both system and graphics memory - a double whammy for any machine with integrated graphics which hammered the memory bus and, if you Re: (Score:2) The main problem is the memory floor before you actually run anything is far too close to the memory ceiling that can be addressed by 32bit Vista (some other MS 32 bit systems don't have that problem - eg. some versions of MS Server2003). That means that a machine with the maximum memory that 32bit Vista can support is still horribly slow in a lot of circumstances and there is no way to fix it while keeping Firefox 3.5 outperformed Firefox 3.0 (Score:5, Informative) SuSE OpenLinux had an old 3.0.7 version of Firefox while Vista had a newer version. Firefox 3.5 has a totally rewritten javascript engine from scratch. It uses some dynamic tree mathmatical aglorithms to perform operations many times faster and has support for javascript functions mapped in ram before execution. Vista used Firefox 3.5 while SuSE had Firefox 3.0.7 installed without the new javascript engine. Firefox 3.0.x was a ram hog compared to 3.5 too. I also imagine Safari would execute on MacOSX much better than Windows since its designed for it. Itunes is kind of proof as it sucks on Windows. IE8 x86 vs 64bit? (Score:2) Has anyone compared IE8 x86 vs 64bit with this benchmark? If so, what were the results? Re: (Score:2) Some browsers (such as Opera) do not have 64-bit versions for the Windows platform. This is to be expected for many reasons, such as (a) browsers do fine with the amount of memory that a 32-bit process has access to (b) 32-bit plugins can't be loaded into 64-bit processes (c) any sort of javascript compiler (IE doesnt have one, but..) would require both a 32-bit and 64-bit codegen due to th Re: (Score:2) Re: (Score:2) x64 IE doesn't support 32-bit plugins, which is normal for x64 browsers. The reason Windows Update doesn't work in it is that Microsoft hasn't gotten around to making an x64 version of the Windows Update Plugin. There are almost no x64 plugins (e.g. Flash) so x64 IE isn't terribly useful. Still, the OP had an excellent question, and someone should check it out (assuming the test doesn't require a plugin). Slow vs. Old (Score:2) No Mac benchmarks (Score:2) Re: (Score:2) Freeciv.ORG (Score:3, Informative) The summary and the freeciv.net main page (I'm sure it's somewhere else but that's my point) doesn't mention this: it's based on freeciv.org [freeciv.org]. (also strange; the freeciv.org site only mention freeciv.net in their 'community news', not 'project news', so it really seems "distinct projects", they're not officially promoting the other option, yet?) How many decades late? (Score:2) ...using the latest HTML5 web standards Amazing how long it's taken to get a freakin' frame buffer. Cue a zillion Web 3.0 marketeers about how the web browser is the OS of the future. Oh, and the iPad is really keen-o, too. Sl-sl-sl-slashvertisement! (Score:2) That being said, it's FreeCiv! Of course I signed up. FreeCiv vs Civ4 (Score:3, Interesting) I started playing Civ4 last week for a couple of games -- it runs very well in Wine, incidently -- and I'm wondering how FreeCiv compares. Obviously the graphics aren't there, but after a couple of games that seems less and less important. The gameplay mechanics are what matters, and I think they work very very well in Civ4. And is the AI any good? Wikipedia seems to imply that diplomacy is a bit simple. Anybody got "in-depth" experience with both games? Re:Opera? (Score:5, Informative) Re:Opera? (Score:5, Insightful) Re:Opera? (Score:4, Interesting) A year ago I experimented with HTML5, and made (you guessed it) a Tetris clone, which took advantage of Canvas elements. I noted that when drawing entire images, it was all very fast. Drawing a frame took about 12ms in Firefox and Opera. (limited by the precision of the timer) Then I tried combining all the images into one, and drawing a region from the tileset. Talk about slowdown! Wow! Separate 64x64 images blitted fast, but as soon as it was dealing with a 512x512 image, the time to render jumped to about 500ms. I did some quick pixel math and concluded both Opera and Firefox must've been making a copy of the entire tileset every time I tried to blit a region from it. It's the only thing that added up. When I boosted the size to 1024x1024, it jumped to over 2000ms for a frame. Completely ridiculous! ;) Perhaps someone else could chime in about whether this bug has been fixed? Note: I was blitting from Image elements to Canvas elements. Canvas to Canvas always worked fine for me. Re: (Score:3, Insightful) I'd expect them to help out. It is kind of bad for pr when performance test of all popular browsers do not include yours because it won't run in it (and in it alone)... Re: (Score:2) Re: (Score:2) Cowboyneal has worked out a system to automatically transfer all 2 posts where they belong. Or someone sold out. Re: (Score:2) ... all posts less than 2... come on slashcode ! Re:IE8 performs awesome, as usual (Score:5, Informative) Clearly you didn't even read the article, just looked at numbers. IE should not have even been tested - it does not support HTML5 canvas elements! They worked around this using a bunch of really ugly hacks that completely destroyed the performance, but honestly they'd have been better off simply saying "it doesn't work, we'll wait until IE9, thanks for giving us Acid2 compatibility but you've got a long way to go!" IE8 actually works pretty damn well for much of the modern web; it's far from the fastest but it's fast enough for most, it is compatible with CSS2 and the other standards most web developers still use, and it has fixed most of the issues that people have cursed at IE over for so long. However, it has very little support for new standards - its CSS3 is still limited, and as far as I know it supports no HTML5 at all. Compared to the rapid improvement of other browsers, the IE team had better be on their toes or they'll be left far behind in the dust. Re:IE8 performs awesome, as usual (Score:4, Informative) Worth pointing out that HTML5 isn't a standard yet. It's still in draft for the next couple years. Re: (Score:3, Informative) Worth pointing out that HTML5 isn't a standard yet. It's still in draft for the next couple years. Canvas is at last call at the WHATWG [whatwg.org]. "standard Re: (Score:2) IE8 is sure not slow for most web browsing. I messed with it recently before deciding to go back to Firefox and it displays normal web pages noticeably faster. In either case we are talking like a second or less, but still. Most websites out there, IE8 was enough of an improvement I noted it. Now obviously that wasn't enough for me to switch, but you are right that the "Oh it is so slow!" crap is disingenuous. IE8 doesn't have support for the new standards, but what it does support it seems to be pretty zipp Re: (Score:2) um no it's not. I have to use IE 8 every day and it is slow. The kind of click to open in a new tab walk take a sip of coffee and and then I can use the computer again slow. It's page rendering isn't bad but that is offset with browser lock ups when trying to open more than one tab at a time. In safari, firefox, and even chrome, I can read one webpage and open up a bunch of tabs from it. in IE8 loading one tab in the background is enough to halt the whole computer interface for a couple of seconds. It Re: (Score:3, Interesting) For instance, the Sun Java SSV Helper plugin for IE tends to cause a lot of the problems that you are describing including taking 3-4 seconds to open new tabs at times. I have no idea exactly what the Java SSV Helper plugin does but I have yet to encounter a Java applet that won't run without it Re: (Score:2) Re:IE8 performs awesome, as usual (Score:5, Interesting) Re: (Score:2) Sounds like a major step up. I hadn't actually seen any info on IE9, but if you say it's released publicly I'll take a look. Better JavaScript will definitely be very nice, as would SVG, but I do hope that canvas, at least, is supported too. Any idea when a beta will be available, for MSDN subscribers or otherwise? Re: (Score:2) IE should not have even been tested - it does not support HTML5 canvas elements! Indeed it doesn't. A lot of the hacks involved to get IE to support canvas is merely an emulation of canvas [google.com] using VML [wikipedia.org]. d Re: (Score:3, Interesting) :-( Re: (Score:2) Re: (Score:2) I think the IE management is still living in 2005 with their 95% market share and want to leave HTML5 in the dust. Re: (Score:3, Interesting) No, they're living in 2010 with a 60% market share. Unless HTML5 outperforms Flash it's not likely to be the reason for anybody to switch. Anybody who hates MS or Flash has already switched, right? Re: (Score:3, Informative) Firefox 3.0 doesn't support HTML5 either, but they've included that in the test, and it performs a lot better than IE8. Firefox has supported <canvas> since 1.5 [mozilla.org], so it was perfectly fair to include 3.0. Re: (Score:2, Insightful) Re: (Score:2) you can see them implementing some HTML 5 functionality as a contest of whom piss the further. But I prefer to see it as a testbed of HTML 5, seeing what work and what doesn't to improve the actual draft of the HTML 5 spec. A lot of the spec in HTML 5 are in because of the implementation done by Mozilla, Opera and Chrome of these specs. Re: (Score:3, Interesting) Re: (Score:2) Quick and dirty paste of the results, for the lazy: Web Browser | Operating System | Average Rendering Time| Frames / Second Google Chrome 4.0.249.78 (36714) | Windows Vista | 126ms | 7,9 fps Google Chrome 4.0.249.30 | OpenSuSE Linux | 128ms | 7,8 fps Safari 4.0.4 | Windows Vista | 222ms | 4,5 fps Firefox 3.7a Re: (Score:2) I couldn't find a control specifically for them, but I did discover that turning on Help & Preferences > Layout > Use Classic Index seemed to kill them without too much impact. Re: (Score:3, Informative) @namespace url(); @-moz-document domain("slashdot.org") { display: none !important; } } Re: (Score:3, Insightful) I'm still a bit pissed-off that such a change would be made, unannounced, to people like me that actually pay for their services. This is a bit angrifying. Why? The buttons are small, not particularly intrusive, and useful for people that use those services -- and as they're very popular, that's a lot of people. If you don't use FB/twitter, or don't want to link to slashdot stories from there, then don't click the buttons. Yeesh... Re: (Score:2) (I haven't look at my yahoo home page in a year or two, though, so maybe they eventually fixed that.) Re: (Score:2) Well shit.. don't that beat all. Re: (Score:2) Re: (Score:2) I've been using Lite Mode for years and years. This stuff never bothers me and it's reading /. the way god intended. -l
https://developers.slashdot.org/story/10/01/29/0323200/Freeciv-As-Benchmark-of-HTML5-Canvas-Javascript-Performance
CC-MAIN-2017-43
refinedweb
4,721
71.14
The best answers to the question “Is there any difference between a GUID and a UUID?” in the category Dev. QUESTION: I see these two acronyms being thrown around and I was wondering if there are any differences between a GUID and a UUID? ANSWER: GUID is Microsoft’s implementation of the UUID standard. Per Wikipedia: The term GUID usually refers to Microsoft’s implementation of the Universally Unique Identifier (UUID) standard. An updated quote from that same Wikipedia article: RFC 4122 itself states that UUIDs “are also known as GUIDs”. All this suggests that “GUID”, while originally referring to a variant of UUID used by Microsoft, has become simply an alternative name for UUID… ANSWER: The simple answer is: **no difference, they are the same thing. 2020-08-20 Update: While GUIDs (as used by Microsoft) and UUIDs (as defined by RFC4122) look similar and serve similar purposes, there are subtle-but-occasionally-important differences. Specifically, some Microsoft GUID docs allow GUIDs to contain any hex digit in any position, while RFC4122 requires certain values for the version and variant fields. Also, [per that same link], GUIDs should be all-upper case, whereas UUIDs should be “output as lower case characters and are case insensitive on input”. This can lead to incompatibilities between code libraries (such as this). (Original answer follows) Treat them as a 16 byte (128 bits) value that is used as a unique value. In Microsoft-speak they are called GUIDs, but call them UUIDs when not using Microsoft-speak. Even the authors of the UUID specification and Microsoft claim they are synonyms: From the introduction to IETF RFC 4122 “A Universally Unique IDentifier (UUID) URN Namespace“: “a Uniform Resource Name namespace for UUIDs (Universally Unique IDentifier), also known as GUIDs (Globally Unique IDentifier).” From the ITU-T Recommendation X.667, ISO/IEC 9834-8:2004 International Standard: “UUIDs are also known as Globally Unique Identifiers (GUIDs), but this term is not used in this Recommendation.” And Microsoft even claims a GUID is specified by the UUID RFC: “In Microsoft Windows programming and in Windows operating systems, a globally unique identifier (GUID), as specified in [RFC4122], is … The term universally unique identifier (UUID) is sometimes used in Windows protocol specifications as a synonym for GUID.” But the correct answer depends on what the question means when it says “UUID”… The first part depends on what the asker is thinking when they are saying “UUID”. Microsoft’s claim implies that all UUIDs are GUIDs. But are all GUIDs real UUIDs? That is, is the set of all UUIDs just a proper subset of the set of all GUIDs, or is it the exact same set? Looking at the details of the RFC 4122, there are four different “variants” of UUIDs. This is mostly because such 16 byte identifiers were in use before those specifications were brought together in the creation of a UUID specification. From section 4.1.1 of RFC 4122, the four variants of UUID are: - Reserved, Network Computing System backward compatibility - The variant specified in RFC 4122 (of which there are five sub-variants, which are called “versions”) - Reserved, Microsoft Corporation backward compatibility - Reserved for future definition. According to RFC 4122, all UUID variants are “real UUIDs”, then all GUIDs are real UUIDs. To the literal question “is there any difference between GUID and UUID” the answer is definitely no for RFC 4122 UUIDs: no difference (but subject to the second part below). But not all GUIDs are variant 2 UUIDs (e.g. Microsoft COM has GUIDs which are variant 3 UUIDs). If the question was “is there any difference between GUID and variant 2 UUIDs”, then the answer would be yes — they can be different. Someone asking the question probably doesn’t know about variants and they might be only thinking of variant 2 UUIDs when they say the word “UUID” (e.g. they vaguely know of the MAC address+time and the random number algorithms forms of UUID, which are both versions of variant 2). In which case, the answer is yes different. So the answer, in part, depends on what the person asking is thinking when they say the word “UUID”. Do they mean variant 2 UUID (because that is the only variant they are aware of) or all UUIDs? The second part depends on which specification being used as the definition of UUID. If you think that was confusing, read the ITU-T X.667 ISO/IEC 9834-8:2004 which is supposed to be aligned and fully technically compatible with RFC 4122. It has an extra sentence in Clause 11.2 that says, “All UUIDs conforming to this Recommendation | International Standard shall have variant bits with bit 7 of octet 7 set to 1 and bit 6 of octet 7 set to 0”. Which means that only variant 2 UUID conform to that Standard (those two bit values mean variant 2). If that is true, then not all GUIDs are conforming ITU-T/ISO/IEC UUIDs, because conformant ITU-T/ISO/IEC UUIDs can only be variant 2 values. Therefore, the real answer also depends on which specification of UUID the question is asking about. Assuming we are clearly talking about all UUIDs and not just variant 2 UUIDs: there is no difference between GUID and IETF’s UUIDs, but yes difference between GUID and conforming ITU-T/ISO/IEC’s UUIDs! Binary encodings could differ When encoded in binary (as opposed to the human-readable text format), the GUID may be stored in a structure with four different fields as follows. This format differs from the [UUID standard] 8 only in the byte order of the first 3 fields. Bits Bytes Name Endianness Endianness (GUID) RFC 4122 32 4 Data1 Native Big 16 2 Data2 Native Big 16 2 Data3 Native Big 64 8 Data4 Big Big ANSWER: GUID has longstanding usage in areas where it isn’t necessarily a 128-bit value in the same way as a UUID. For example, the RSS specification defines GUIDs to be any string of your choosing, as long as it’s unique, with an “isPermalink” attribute to specify that the value you’re using is just a permalink back to the item being syndicated. ANSWER: Not really. GUID is more Microsoft-centric whereas UUID is used more widely (e.g., as in the urn:uuid: URN scheme, and in CORBA).
https://rotadev.com/is-there-any-difference-between-a-guid-and-a-uuid-dev/
CC-MAIN-2022-40
refinedweb
1,070
59.64
Caché 2011.1 This chapter provides the following information for Caché 2011.1: New and Enhanced Features for Caché 2011.1 The following major new features have been added to Caché for the 2011.1 release: Rapid Application Development Performance and Scalability Reliability, Availability, Maintainability, Monitoring - In addition, many more localized improvements and corrections are also included. In particular, if you are upgrading an existing installation, please review the detailed list of changes in the Upgrade Checklist. Rapid Application Development Multiple Session Callback Events In previous versions, the only user-defined Session Events triggered at the beginning, end or timeout of a CSP session occurred based on the last CSP application accessed by that user. In this version, this behavior has changed. Caché will now execute the SessionEvent logic for the current CSP application, plus the most recently accessed CSP Application that was used by this session prior to the event, if more than one application was accessed. WebStress Testing Facility This version introduces a new core utility called WebStress. InterSystems has used this tool in prior releases to record, randomize and playback HTTP-based scripts against various applications for the purpose of QA, scalability, and network load testing. The tool runs on a Caché or Ensemble system on any supported platform and can test any web-based application. It includes additional hooks required for correctly benchmarking CSP- and Zen-based applications making use of hyperevents. For recording scripts, users must employ a supported browser and the ability to define a proxy server. New DeepSee Implementation This release of Caché introduces a new version of DeepSee (previously referred to as “DeepSee II”) with the following improvements: Data Modeling Data modeling has been simplified. This version uses Caché classes that reference application transactional classes. Therefore, there is no need to modify application classes before use as in the preceding version. DeepSee models are defined via these reference classes and can be edited using the DeepSee Architect or Studio. Furthermore, data models now support many MDX concepts including multi-level hierarchies. Query Engine This version of DeepSee uses the MDX query language for all queries. Its query engine has been optimized to support parallel query execution which takes advantage of the power of multi-core architectures. Multi-level result caching improves query performance by retaining the results of queries so the results can be used when the queries are run again. User Interface Built with the InterSystems Zen technology, the new DeepSee user interface supports multiple browsers including IE, FireFox, and Chrome. Control of the user interface is done via the DeepSee option on the Management Portal. The options include: Architect for creating DeepSee data models, Analyzer for exploring the data, User Portal for creating and viewing dashboards. Performance And Scalability Improved Class Compiler Performance As a result of changes we have introduced in previous versions, and by moving performance-critical components into the system level, InterSystems has noticeably improved the performance of the class compiler. Compilation Using Multiple Jobs In addition to the gains from the Improved Class Compiler Performance in this release, Caché can now be directed to use multiple processes for the compilation of classes and the import of XML files. The number of jobs started will depend on licensed CPU cores and upon observed efficiency (more than 16 gains no added advantage). Support For Large Routines And Classes In prior releases, the maximum size of a routine was 64KB. Starting with this release, the maximum routine size has been extended to 8MB. For routines larger than 32KB, Caché will use up to 128 64KB routine buffers to hold the routine. These buffers will be allocated and managed as a unit. The class compiler and the SQL processor have been changed to use this new limit. Customers can take advantage of this improvement merely by recompiling. Beginning with this release the system now supports a larger class descriptor. Among the consequences are that classes now can contain a larger number of members to be declared in the class. The limits on class inheritance depth, and the number of superclasses allowed have also been defined. For a complete list of the applicable bounds, see “Guide General System Limits” in the Caché Programming Orientation Guide. Please consult the Caché 2011.1 Upgrade Checklist for further information. Journaling Additions This version of Caché now provides an API for journal restore. See the Journal.RestoreOpens in a new tab class for information on how to use it. In addition, the process Id (PID) is now once again part of each journal record; journal restore in this release has been changed to deal with this difference in format across release boundaries. Reliability, Availability, Maintainability, Monitoring Management Portal Improvements The Management Portal now provides access to all functions using one interface, including DeepSee and Ensemble (for Ensemble installations). By providing a new path to each of the functional components, there is now a mechanism to specify access control on each navigational option and granular control for security-critical operations. In addition, users can now specify the most commonly used areas as “favorites” for even faster navigation. Mirroring Enhancements In this release, several enhancements have been added to mirroring: Asynchronous mirror members now purge mirror journal files that have been applied locally and are no longer needed The mirroring communication / data transfer process has been optimized for performance by sending larger chunks of data from the primary to the backup failover member The mirror Virtual IP (VIP) now supports IPv6 addresses Caché Monitor History Database The Caché Monitor History Database introduces a facility to capture and analyze historical system statistics such as performance metrics, and errors reported. It supplies a baseline for analyzing performance anomalies and provides historical data to facilitate capacity planning. A default set of metrics is defined and the schedule for capturing these metrics can be defined by the user. These metrics are fully SQL-enabled, and an API is provided to query the results stored in the database. Security Web Services Control Separate From CSP Control In this release, for each web application (formerly known as CSP application), users can specify if CSP and or Web Service access is enabled as part of the web application definition. Two-Factor Authentication For CSP In this version, InterSystems has broadened the use of two-factor authentication, introduced with 2010.2, to also be used with CSP applications. If enabled, users will, after successful authentication, be challenged to enter an additional code, which has been separately transmitted to their mobile device. Web Service Licensing Beginning with this release, Caché will now consume a license unit for anonymous connections, and will hold this license unit for a grace period of ten seconds, or for the duration of the web service connection, whichever is longer. Web service connection with named users (login), already consumed a license unit, and there is no change for these type of connections. Managed Encryption Keys Based on the existing Caché implementation of data-at-rest (for example, database encryption) keys, this release enables application developers to use the same strong keys, and key management capabilities on more granular data. Managed encryption keys are loaded into memory and applications refer to these keys via a unique key identifier, therefore protecting access to the key itself. The system is designed to load four keys into protected memory. In addition, Caché now provides a new encryption function which will embed the key identifier in the resulting cipher text. This enables the system to automatically identify the corresponding key, and allows application developers to design re-encryption methods, which are completely transparent to the application, without causing any down time. This new mechanism is designed to encrypt special data elements (such as credit card numbers), and may or may not be used in conjunction with database encryption. Caché 2011.1 Upgrade Checklist The purpose of this section is to highlight those features of Caché 2011.1 that, because of their difference in this version, affect the administration, operation, or development activities of existing systems. Those customers upgrading their applications from earlier releases are strongly urged to read the upgrade checklist for the intervening versions as well. This document addresses only the differences between 2010.1 and 2011.1. The upgrade instructions listed at the beginning of this document apply to this version. Administrators This section contains information of interest to those who are familiar with administering prior versions of Caché and wish to learn what is new or different in this area for version 2011.1. The items listed here are brief descriptions. In most cases, more complete descriptions are available elsewhere in the documentation. Version Interoperability A table showing the interoperability of recent releasesOpens in a new tab is now part of the Supported Platforms document. Management Portal Changes Numerous changes have been made in the Management Portal for this release both to accommodate new features and to reorganize the existing material to make it easier to use. Among the more prominent changes are the addition of pages to assist with database mirroring. Packaging Changes And License Keys The new product packaging announced in 2011 by InterSystems offers more capabilities and features for your current license types. Previously issued license keys will continue work with 2011.1. You do not require a new license key if your application uses only the capabilities offered by the previous product terms and conditions. If, however, you wish to take advantage of the additional capabilities available for your license type, please contact your InterSystems sales representative to obtain a new, equivalent license key. Operational Changes This section details changes that have an effect on the way the system operates. Licensing In prior versions, license units were not consumed for CSP sessions when $USERNAME equalled "UnknownUser". Now, if the Caché license key has the Web Add-On feature enabled, it is necessary to explicitly declare the Web application public in order to avoid consuming a license key; otherwise, CSP sessions and SOAP sessions consume a license unit. This can be accomplished for CSP applications by creating a subclass of the %CSP.SessionEventsOpens in a new tab class and defining a method to handle the OnStartSession event, and invoking $SYSTEM.License.PublicWebAppUser() from it. Furthermore, anonymous SOAP requests (those where no Caché login occurs) now consume a license unit for a minimum of 10 seconds. Applications do not require modification, but customers may need to purchase additional licenses if they service SOAP requests from Caché. Calling $SYSTEM.License.Login(LicenseId) from a CSP server process or from a SOAP Web Service process will consume a license unit for the Caché server process inappropriately. This license unit is associated with the CSP or SOAP session rather than with the server process because subsequent requests may be fulfilled by different Caché processes. The $SYSTEM.License.Login(LicenseId) API consumes a license unit that is associated with the Caché server process. Calling this API from a CSP or SOAP server process effectively creates two license instances, one for the session and one for the process. The process license instance is not released unless the process exits, which may never happen. The appropriate way to explicitly designate the license identifier for a CSP session is by calling the %CSP.Session:Login method. 2K Databases Mounted ReadOnly Beginning with this release, 2K databases will no longer be mounted as writable by Caché. This is the next step in the announced removal of support for 2K databases. If you wish to write data to such a database, you must convert it to the 8K format. The method ##class(SYS.Database).Copy()Opens in a new tab can be used to convert a database to the larger format. Changes To Configuration File Two parameters have been added to the .cpf file: The LibPath parameter is added to the [config] section. It is used for Unix® only and sets the LD_LIBRARY_PATH environment variable used to search for third-party shared libraries. It is ignored on Windows and OpenVMS. It is a string property with no required or maximum length. This settings take effect immediately; no system restart is required. The QueryProcedures parameter is added to the [SQL] section. This defines whether or not all class queries project as SQL Stored Procedures regardless of the SqlProc setting of the query. The default is 0, that is, only class queries defined with SqlProc=1 will project as Stored Procedures. When set to 1, all class queries will project as stored procedures. This settings take effect immediately; no system restart is required. However, you must recompile the classes with the class queries in order for this change to have an affect. In the [Debug] section, setting dumpstyle=3 will prevent shared memory from being included the core dump Audit Records Contain Operating System Userid Starting in this release, the username is now part of the audit record,. When displayed, it is truncated to 16 characters. The real operating system username is only returned when connecting to UNIX® or OpenVMS systems; On Windows, it will return the username for a console process; for telnet it will return the $USERNAME of the process; for client connections, it contains the username of the client. Failure To Mount Required Database At Recovery Is Now Fatal In prior releases, Caché recovery would skip required databases that failed to mount and continue processing; but startup would later fail when processing the database section of cache.cpf which resulted in shutting down the instance. With this version, failing to mount a required database during Caché recovery is now a fatal error that will cause startup to abort at that time. This allows the underlying issue to be addressed sooner. TaskManager Jobs Now Use Append IP Address Customer tasks started via the Cache Task Manager use the "Run As" user name as the license identifier. Beginning with version 2010.1, Caché appended the peer IP address to the user name for such jobs. However, there is no peer address for processes started by the task manager, and no address was appended. This caused an inconsistency in license consumption between jobs started by the task manager and those started by other means. Beginning with this release, jobs started by the task manager now append the local IP address. Emergency Login Policy With this version, the Emergency Login policy has been expanded to accommodate two-factor authentication.; the policy is now: During emergency access only the emergency user may log in. Console, Terminal, and CSP are the only services enabled. For enabled services, only authenticated access is permitted. Caché uses its own password authentication for the services, where the emergency access username and password must be used. If the an application has a custom login page that page is used during emergency login. For /csp/sys applications, the standard login page (%CSP.Login.cls) will be used during emergency access even if there is a custom login page available. Using the system default assures that the user has access to the Management Portal in emergency mode. Two-factor authentication is ignored in emergency access mode; applications with two-factor authentication enabled will be inaccessible to the emergency user. Collation Checking On Upgrade In this version, during an upgrade installation, the new method ##class(SYS.Database).FixDefaultGlobalCollation(Directory)Opens in a new tab is run on all user databases which are defined in an instance, and are mountable by the instance. This method will report to the cconsole.log any errors in collation of system globals. If any errors are detected, the user should run the method again from a programmer prompt, and pass the modify database flag as the second argumentOpens in a new tab which will recollate the global in the correct order. System Freezes When Journal Daemon Is Hung On a system set to freeze on journal error, if the Caché control process detects that journal daemon (JD) is hung (no activity for 10 seconds) while there is journal data to write, it will stop the write daemon and the system will freeze. Platform-specific Items This section holds items of interest to users of specific platforms. All platforms The OpenSSL libraries built and installed with our products have been updated to version 1.0.0b. All InterSystems projects dependent on OpenSSL have been updated to use new version. FOP has been updated to Version 1.0 with a specific InterSystems patch for processing in Arabic. Mac OS X — JOB Command Changes The mechanism for the JOB command on Mac OS X has changed. It solves a problem but may introduce side effects. Due to the way that the underlying kernel interacts with Mac OS X processes, and to the existence of GUI sessions, the traditional UNIX® way of creating daemons (fork/exec) is not enough for Mac OS X. Apple recommends the use of launchd (or launchctl, which is its user interface) to start all background daemons. This release implements that recommendation. The JOB command on Mac OS X now calls launchctl to start the Caché JOBs. AIX — Change Direct I/O Handling On AIX systems when opening databases, journal files and/or the WIJ for “direct I/O” Caché specifies the O_CIO option to open the file for concurrent I/O rather than direct I/O. The use of O_DIRECTIO allows other openers which can cause problems if the other process employs buffered I/O. OpenVMS Changes to $ZF On OpenVMS, user-supplied $ZF() functions may be written in either C or MACRO. Because of a mismatch in the definitions of PRIV and NOPRIV between the two different header files (cdzf.h and czf.m64), the PRIVS=YES feature in czf.m64), was set opposite of what it should be. User $ZF() functions written in C that depend on the PRIV/NOPRIV feature must be recompiled. Changes To Compiler Version Due to support requirements, OpenVMS compilers have changed. They are now at Version 7.2. Executables built under the previous compilers are not compatible with the new runtimes. This in turn implied that Xalan and Xerces needed to be recompiled. InterSystems has taken advantage of this to upgrade Xalan to version 1.10 and Xerces to version 2.7 and incorporated them as libraries (.olb) which are compiled into our executables and no longer distributed separately. Changes To Xalan, Xerces, and unixODBC The change in compiler version implies that Xalan, Xerces and unixODBC needed to be recompiled. InterSystems has taken advantage of this to upgrade Xalan to version 1.10 and Xerces to version 2.7 and incorporated them as libraries (.olb) which are compiled into our executables and no longer distributed separately. The OpenVMS version of unixODBC has been upgraded to version 2.2.12 which is the same used by other platforms. Informix SQL Converter Not Supported The SQL converter from Informix to Caché is not supported on OpenVMS. Attempts to run the Informix conversion on an OpenVMS system will now produce an error instead of logging a message in the console log. SHA-2 Functions Not available On OpenVMS Versions Prior To 8.4 On OpenVMS 8.2-1 and 8.3, the functions $System.Encryption.RSASHASign() and $System.Encryption.RSASHAVerify() do not support the SHA-2 hash functions when using bitlengths of 224, 256, 384, or 512 bits. The HP-supplied OpenSSL libraries in these releases are based on OpenSSL 0.9.7e, which does not include support for the SHA-2 functions. Upgrade Quotas For Background JOBs In this version several defaults for OpenVMS process quotas for JOBbed processes have been updated. The primary one is PGFLQUOTA, which limits allocation of virtual memory and was causing problems processing very large XML files. BYTLM and FILLM have also been raised to bring them in line with recent vendor recommendations. Windows — Installer Change If the installer finds that the IIS virtual directory, /csp, is already configured, it will no longer update the IIS configuration data. In addition, new properties have been defined to control updating Apache and IIS: CSPSKIPIISCONFIG CSPSKIPAPACHE20CONFIG CSPSKIPAPACHE22CONFIG Setting any of these to a value of 1 will result in installation updating of the corresponding CSP binary files, but will not make any changes to the corresponding web server configuration. Setting the property to 0 will make the installer update the appropriate web server configuration regardless of the existence of the /csp virtual directory. Developers This section contains information of interest to those who have designed, developed and maintained applications running on prior versions of Caché. The items listed here are brief descriptions. In most cases, more complete descriptions are available elsewhere in the documentation. System Operational Changes Compiler Version Changed Due To Support For Large Routines The internal compiler version has been incremented to reflect changes in the object code to support large routines. This means that routines and classes compiled on this version cannot be copied to and executed on previous versions. Attempts to do so will result in <RECOMPILE> errors. Caché Fully Qualified Domain Names And Kerberos The normal form of Kerberos server principal names is specified in RFC 4120, Kerberos V5, section 6.2.1. The principal name is composed of several pieces. They are: the name of the service a “/” the Internet domain name of the host an “@” realm of the key distribution center (KDC) where the server is registered An example of such a name is cache/oakland.iscinternal.com@ISCINTERNAL.COM. In previous versions, Kerberos authentication for non-terminal connection to Caché on platforms other than Windows used an ambiguous format: (cache/host@KDC-realm) which was incompatible with the usage when accessing Caché with csession. In this version, Caché has been changed to always generate the correct form of the service principal name. In most cases, this should have no impact because it is thought that the vast majority of sites will have used the FQDN form when defining the server principal name in the instance keytab. However, it is possible that some sites have defined the server principal name using just the host name, for example for host oakland, the value cache/oakland@ISCINTERNAL.COM. These sites will experience a problem after upgrading to a version of Cache with this revision. To correct this error, a keytab entry for the server principal should be created; in this example, cache/oakland.iscinternal.com@ISCINTERNAL.COM should be created to replace the non-standard cache/oakland@ISCINTERNAL.COM. ECP Will Now Use Process ID In Place Of Job Id In this release, ECP will log the PID instead of the job ID in the journal entries when the job is not a thread. This means that the ECP session will not be backward compatible; in mirror or cluster configuration a new version of the master cannot failover to an earlier version of the product. The ECP protocol will remain backward and forward compatible, however. Shadowing Initiation Requires Start Event When starting a shadow in the Management Portal, or via ^SHADOW, the user is required to select a source event to start shadowing at. This is true regardless of the value of the StartPoint property of the shadow configuration object, which is deprecated as of this change. One should always specify the StartPoint parameter in ##class(SYS.Shadowing.Shadow).Start()Opens in a new tab method to start a shadow non-interactively. Shadow Information Is Now In The CPF When a customer upgrades to version 2011.1 or later, and there are shadow systems defined, the shadow information is converted and moved to the CPF file. There are now two new sections in the file: The [Shadows] section defines the name of the shadow and its properties. The [MapShadows.NAME] contains the shadow directory mappings. Exporting Globals No Longer Checks Name Format An application which relied on %Library.Global.Export() to reject names which did not end in “.gbl” may no longer work as expected. Names that do not end in that suffix will be accepted and, if globals with those names exist, they will be exported. If not, they will not be part of the export, but no error will be generated. $SYSTEM.OBJ.Export() can be used in situations where the caller wants the “type” to be required. %GSIZE Output Format Alterations Applications which parse %GSIZE output and expect a fixed number of columns may now have problems. The numberof columns in the output is now a constant for a given run of %GSIZE (prior to this it was variable), but the number of columns can vary from run torun depending on the size of the longest global name in the output. Rather than parsing the %GSIZE output, applications should use the Size query in %SYS.GlobalQueryOpens in a new tab to retrieve global size information. Journaling Turned On For Multiple Compiles In previous versions, Caché defaulted to disabling journaling while doing a class compile to avoid filling up journal files and to improve the speed of the compile slightly. Due to recent changes in the class compiler, such as multiple compilation, this is no longer necessary (or desirable when using mirroring). Beginning with this version, class compiles will now be journaled by default. This will add more data to the journal files if many class compliations are done. On a development system, administrators may wish to consider changing the default /journal qualifier setting to disable journaling. On a production system, however, administrators almost certainly want the new default of journaling the class compile. Library Path Now Part Of cache.cpf Using LD_LIBRARY_PATH per user can lead to spoofs that could execute code at root level. Beginning with this version, the LD_LIBRARY_PATH data will be part of cache.cpf. Other than <install_directory>/bin, nothing will be in the current library search path for the session other than what is in the LibPath field in cache.cpf. The field will be updatable via the Management Portal. All applications relying on the LD_LIBRARY_PATH environment variable to set search paths for third-party shared libraries will be affected. Device Aliases Must Be Unique This release checks the aliases specified for devices. If the same alias is used for more than one device, Caché will report an error at startup and ignore the second definition. Changes To Emergency Startup Emergency Startup has been enhanced in this version so that the following occurs: TASKMGR is not started. Shadowing is not started. Ensemble productions are not started. Mirroring is not started. ZSTU,%ZSTART,%ZSTOP,ZSHUTDOW are not run when the system starts or stops. User processes which log in using the emergency id do not run %ZSTART or %ZSTOP. In addition, the STU=1 parameter in the CPF file has been removed. If you need to start the system for maintenance, use the Emergency Startup option. Improved Key Management Beginning in this version, it is no longer necessary to have a database or managed encryption key activated in order to manage a key file. It is, however, now necessary to know a valid encryption key file administrator username and password in order to add new administrators to a key file or to configure unattended database key activation at startup. This is yet another reason why it is critical to have a backup key file containing an administrator entry stored along with a copy of that administrator's password in a physically secure location. TROLLBACK Does Not Initiate Database Mount One of the general principles of Caché is that when a database has been explicitly DISMOUNTed, a database access attempt should not implicitly cause it to be MOUNTed; it must be explicitly mounted by operator action. TROLLBACK has been corrected to be consistent with this principle. New Locales A new collation is available for the Slovenian locale. Slovenian2 is similar to Slovenian1, except that upper and lowercase letters collate together (merged cases). A Unicode Turkish locale is now available (“turw”). By default it uses the Turkish1 collation. New DDL Type Mapping For VARCHAR(Max) And NVARCHAR(Max) Beginning with this version of Caché, there are new default system-defined DDL mappings for VARCHAR(Max) and NVARCHA(Max); both of these map to %Stream.GlobalCharacter. Prior to getting a version with this change, systems can simply add these mappings to the user-defined DDL Mappings. This change allows VARCHAR(Max) and NVARCHAR(Max) to be used as an argument to a procedure in a TSQL CREATE PROCEDURE DDL statement. Routine Compiler Changes Support For Larger Routines Beginning in this release, the compiler will now allow routines up to 8MB in length. When a compiled routine exceeds 32KB, Caché will use up to 128 64KB routine buffers to hold the routine. These buffers will be allocated and managed as a unit. Therefore, the use of large routines compiled in this release will affect routine buffer allocation. Generally, more 64KB buffers will be required than in previous releases; however, the distribution of memory among the buffer pools will depend on the realtime distribution of routine sizes in use at a specific site. Routines compiled on this version will not run on earlier versions of Caché. Routine Changes Implement Japanese Datetime Formats Two new date formats have been added to the list of dformats for $ZDATE and related functions: 16 – year, month, and day values with Japanese kanji for year, month and day following each one, respectively; that is, YYYY$CHAR(24180)MM$CHAR(26376)DD$CHAR(26085) . 17 – like format 16 with the addition of a space after the year signifier, and another after the month signifier. Make URL Translation Symmetric For Non-Latin1 8-Bit Character Sets In 8-bit locales not based on Latin1 (for example, “ruw8”, the CP1251-based Russian locale), $ZCVT("%XX", "I", "URL") now interprets XX as the hex code of a character in the current character set. In previous releases, this was assumed to be a Unicode character; in some character sets this codepoint did not have a corresponding value in the current character set and was replaced by a default character such as “?”. The new behavior means that in CP1251 $ZCVT($C(192), "O", "URL") = "%C0" and $ZCVT("%C0", "I", "URL") = $C(192) make a round trip using the URL translation valid for all characters. Class Changes Larger Class Limits Beginning with this release the system now supports a larger class descriptor. This means that classes now can support a larger number of members declared in the class. The limits on class inheritance depth, and the number of superclasses allowed have also been defined. For a complete list of the applicable bounds, see “Guide General System Limits” in the Caché Programming Orientation Guide. Error Reporting Changes The standard Caché mechanism for returning an error is to use the $$$ERROR macro with a standard message. In many cases, the message contained only a description of the error without any context information. Several messages, including the “object to load not found” message now include the classname where the error was encountered. It is possible that some SQL storage applications may have to be changed to recognize the new format. Update Of Class Dictionary To Level 25 – LegacyInstanceContext This version of Caché updates the version of the class dictionary to level 25. Among opther changes, this introduces the LegacyInstanceContext class keyword wose presence indicates that the class relies on generated code that passes %this as the first argument to instance methods. This was previously announced in 2009 in the Compatibility BlogOpens in a new tab. As an aid, the class dictionary upgrade looks for references to %this. It scans both code and comments in case there are usages of $XECUTE and $TEXT even though this may result in false positives. If it finds any such references, it marks the class as needing LegacyInstanceContext so the compiler will continue to generate code to pass %this as the first argument to instance methods. If no instances of %this are found, then the class is not marked LegacyInstanceContext so instance methods will no longer assume %this is passed as the first argument. This approach does not, however, uncover separate code that assumes %this exists and is properly set. Consider a class with a method that calls an external routine such as: method Test() { Quit $$Func^Routine() } where Routine is: Func () public { Quit %this.Name } Because the use of %this will not be discovered in the scan of the class (it is external to that source), it will not be marked as LegacyInstanceContext. Subsequent execution may result in an <UNDEFINED> error; in much harder-to-debug situations, %this may be pointing to the wrong context or not at an object instance at all. All new code should rely on $this for context; %this will not be set for new classes as it is deprecated.. All older classes marked as LegacyInstanceContext=1 will continue to behave the same as in previous releases. No change is necessary to existing classes because of LegacyInstanceContext. However, if you do want to update a class, the steps are as follows: Replace all occurrences of %this with $this in a given application. Remove the LegacyInstanceContext keyword from all classes in the project or application. Recompile the application. Class Deletions The following classes were present in version 2010.2 and have been removed in this version: %CSP.UI.Portal — ObjectGatewayStartStop %ExtentMgr — Extent %Library — CppApi %OSQL — Debugger, Transformer %SQL — Routine, RoutineColumn %Studio.SourceControl — ISCCheckin %WebStress — DataTransfer, Page %WebStress.UI — GridDetails, Attributes, Columns, SelectOptions, Tags, Transfer, SaveData %WebStress.UI — Input, Menu, Previous, Definitions, Root, Search, Criteria, Displays, SearchPage %XML — SupportCode %ZEN.Report — sort %cspapp.op — webstress, appservers, appsystem, blank, encrypt, generators, nopagedelay, noresultstore, proxysetupmozilla, proxysetupmsie, scriptedit, scriptrecorder, scriptrecorderstatus, showerrors, showurls, testappstats, testedit, testprint, testrun, visual, visualdisplay, visualdisplayoptions, webservers SYS.Info — Advertiser com.intersys.jdbcgateway — JDBCGateway java.io — InputStream, OutputStream, Serializable java.lang — Class, ClassLoader, Object, Package, AccessibleObject, Constructor, Field, Member, Method java.net — ContentHandler, ContentHandlerFactory, FileNameMap, URL, URLConnection, URLStreamHandler, URLStreamHandlerFactory java.security — CodeSource, Guard, Key, Permission, PermissionCollection, Principal, ProtectionDomain, PublicKey, Certificate java.util — Collection, Enumeration, Iterator, Map, Set Class Component Deletions The following class components have been moved or removed in this version from the class where they were previously found. Method Return Changes The following methods have different return values in this version of Caché: %CSP.UI.System.Index — ProcessIndexZen %Library.ProcedureContext — AddContext %SYSTEM.Encryption — RSAEncrypt %UnitTest.TestCase — AssertSkippedViaMacro, AssertStatusNotOKViaMacro, AssertStatusOKViaMacro %WebStress.Control — StartMonitor %ZEN.Datatype.boolean — XSDToLogical %ZEN.Report.Version — getVersion Method Signature Changes The following methods have different calling sequences in this version of Caché: %Library.Decimal Now Defaults To Scale=0 The Caché datatype, %Library.DecimalOpens in a new tab, now defaults to a SCALE=0. In previous releases, there was no default SCALE. Upon INSERT/UPDATE or Object Save, the Normalize method will now round to the default SCALE=0 when no SCALE is specified or the property. Class Compiler Changes This version of Caché continues the work begun in earlier releases of improving the class compiler. The changes that may require changes to applications are detailed in this section. Identical Labels In Multiple Methods Of The Same Class As part of compiling a class, the compiler attempts to pack as many methods of the class into one compiled unit as possible. If two or more methods of that class define a label of the same name, and the methods are marked as PROCEDUREBLOCK = 0, there was the risk that the compiler would include them in the same compiled unit and report a duplicate label error. Recent improvements in the class compiler have increased the size of the compiled unit and therefore increased the probability that non-procedureblock methods with identical labels could trigger this error. Applications which trigger his condition mustbe written to either use procedurblocks, or to change the label values so there is no overlap. Avoid Duplicating Properties Inherited From Superclasses In Subclasses Previously, if an application defined a property X in a superclass, and then created a subclass which did not modify property X at all (just inherited it from the superclass), the class compiler would redefine this property in the class descriptor of the subclass. This was because Caché needed an internal slot number to reference this property by in the class compiler. This requirement has now been removed. Caché now avoids duplicating the properties in the subclass and allows the system code to dynamically inherit the property from the superclass. No customer should have applications that depend on the internals of the class compiler (deliberately not publicly documented). Any applications using undocumented internal functions such as $zobjval to access data via slot numbers will have to be rewritten. Users Must Normalize ID Values Beginning with this release, passing an unnormalized integer to %Open, %OpenId, %Exists, and %ExistsId will no longer work. The applications passing such values must apply normalization prior to calling the method. Simple ID values are no longer normalized by the various methods that accept an ID as a parameter. ID values passed to the various methods of a class are expected to be in the normalized form that is returned by the <OREF>.%Id() method. If an ID is a simple integer and a value is passed that is not in the integer normal form (01 vs. 1, for example) then the methods named here will fail. SQL Storage Compiler Now Recognizes SQLCHILDSUB Name A class that defines a relationship with a cardinality of PARENT is often referred to as a "child class"; and the type class of the relationship is referred to as the "parent class". The ID of the child class is based on the relationship (the ID of the parent) and either a property value or a system assigned value. When the ID is based on a system assigned value, a column is generated in the SQL table projected by the child class. That generated column corresponds to the system assigned value and is referred to as the “childsub”; it is also, by default, the generated column name. This name can be specified in the storage definition by entering a name in the SQLCHILDSUB keyword of the storage definition. The expression used by the system to assign a value to the childsub (ID in the case of objects) is defined in the SQLIDEXPRESSION keyword. In prior releases, if an existing child class defined SQLCHILDSUB and compiled the class prior to this release, then the generated childsub column would be named “childsub”. Beginning with this release, the value of SQLCHILDSUB will be used. This presents a backward incompatibility. The prior behavior is the result of an error; the new behavior is correct. The typical workaround for this was to define a property representing the childsub; if that is done then this change will have no effect. Cardinality Relationships Can Not Enforce REQUIRED Previously, a relationship with a cardinality of MANY or CHILDREN and also specifing REQUIRED would compile cleanly. Now, an error is reported by the compiler indicating that the REQUIRED constraint for n-cardinality relationships is not supported. To compile cleanly, remove the REQUIRED constraint. Compilation Using Multiple Jobs Beginning with this version, Caché provides the ability to use multiple jobs for large compilations. This is enabled using the qualifier /multicompile; it is disabled by default. When it is enabled, and the compiler detects that it can employ multiple jobs usefully, it will start up slave jobs which will show up in %SS as being in the %occCompileUtils routine. It communicates with these slave jobs using a global and $SYSTEM.Event to signal that some work should be done. When a slave job completes, it sends back to the main process an indication that the work is complete along with any error information or output to display. So the typical behavior is: workers jobs are started, then work is queued, and the worker jobs process their part of the work. The main process waits for each job to finish its part, and displays the errors or any other output destined for display. When all the work is complete at this level, the main process will loop round and start queuing any remaining work. Once a worker job is started it will remain around for 10 minutes after the last piece of work it receives in case more work appears to avoid the cost of starting and shutting down jobs. This code will not improve the speed of compiling a single class, it is only focused on compiling multiple classes in parallel. In addition, the only part of the compilation process that supports this multiple cpu compile is compiling the MAC code into INT code, assembling these into routines, compiling these routines and building the class descriptor. This can only be done in parallel when there is no dependency between the classes being compiled. The compiler detects the dependencies and breaks down the compile based on this automatically. Thus, if classes A and B are not dependent on each other , they can be compiled at the same time. If A is a superclass of B, however, A must be fully compiled before compilation can begin on B; no parallelism is possible. Use of parallel compilation assumes that all relevant dependencies between classes are expressed in the class declaration. The order classes are compiled in may be slightly different from previous versions of Caché, but the order chosen still satisfies all the dependency rules. If two classes did not specify a dependency, the order in which they are done cannot be predicted; this could potentially cause a problem if two classes were dependent on each other but no dependency had been specified and the sequential compilation order just happened to work correctly. If this occurs, add a CompileAfter or DependsOn dependency between the classes to specify their relation. Also, the worker jobs will obviously have a different $JOB number from the main process. This means that if data being stored by one slave job during the compile, and another slave job is attempting to access that data using $JOB as an index, that attempt will fail because the $JOB numbers of the two processes differ. This situation can occur, for example, in sophisticated generator methods that interact with one another. The solution for this is to use the %ISCName local variable which is defined in the compiler context; it is a consistent name between the main job and all the worker jobs and so can be used to share information. The number of jobs used is limited to 16 jobs maximum as recent benchmarks have shown that more jobs than this do not improve overall performance. Removal Of InterSystems Internal Items This version removes the method keyword, RuntimeImplementation. It was only intended for use by InterSystems and is no longer required. It also removes the $$$cIVARrefslot macro. This was undocumented and should not appear in user code. Any code that uses this macro will fail to compile in this version. Synchronization Order Correction Sync sets contain entries that represent object filing events - inserts, updates, and deletes. If a sync set contains more than one entry that affects the same object, those entries must be applied in the same chronological order as they occurred originally. In prior releases, an error existed that caused some entries to be processed out of order. The cause was an unresolved dependency on import. Unresolved dependencies trigger a sync set entry to be scheduled for processing at a later time. This rescheduling could cause entries to be applied out of order and could introduce data corruption. That error is now fixed, however, it is possible that an application has made some assumptions about the order in which SyncSet entries are processed. If that is the case, the aplication needs to be examined and retested to make certain problems do not occur. Control Global Kills On %DeleteExtent The %DeleteExtent method attempts to delete all instances of a class. If all instances are successfully deleted %KillExtent is called to kill any globals that might be left defined. Not all globals are killed, especially in cases where multiple classes share the same globals. %DeleteExtent has a new parameter, pInitializeExtent, that, if true, causes %KillExtent to be called when all instances of the class are successfully deleted. The default value of pInitializeExtent is 1 (true). If pInitializeExtent is not true, then %KillExtent is not called and some empty globals could still be defined after %DeleteExtent returns. If the class uses a global counter to assign new object ID values, then that global counter will also remain defined in most cases. Extent Query In %Library.Persistent %Library.PersistentOpens in a new tab defines a query that is inherited by every class that extends %Library.PersistentOpens in a new tab. The query, Extent, is used to produce a result set containing all instances of a persistent class. The Extent query can be overridden, either as a %Library.ExtentSQLQueryOpens in a new tab or as some other query type. The overridden query must return rows corresponding to each instance of the class and the first column must be %ID. Change To i%var Handling The usage i%var is used in <var>Get and <var>Put methods to make direct references to an instance variable. The class compiler previously converted i%var references into the internal slot number where the instance variable was stored. In this version, this is done by the system code which allows the class compiler to be more dynamic. A side effect of this is that the i%var name is not validated at compile time. The code will compile and a runtime runtime error will be generated if the var is not defined in the superclass tree. SQLROWIDNAME Usage Enforced The class keyword, SQLROWIDNAME, allows the user to define the name of the SQL column generated as the ROWID. This SQL column corresponds to the object ID which is only accessible through a method call such as %Id(). It is not valid to override the SQLFIELDNAME of a property in a subclass because it violates the rule that every instance of a subclass is implicitly an instance of its primary super class. The SQL ROWID column name cannot be overridden for the same reasons. Previously, this rule was not enforced on the SQLROWIDNAME value. It is enforced beginning in this version. Failure to observe it will result in a failure to compile the class. %GUID Invalid As Column Name If the user class has an existing column whose name is %GUID, then this will now trigger an error during class compile. If the class has GUIDENABLED as true, then the class cannot implement a method named %OverrideGuidAssignment(), a property named %%GUID, or a property whose SQLFIELDNAME is %GUID. /foldmethod Qualifier Deprecated The class compiler /foldmethod qualifier used to detect identical methods preserving only one in the generated code has been deprecated. The qualifier no longer has any effect on the generated code. Bind Properties With CLASSNAME=1 As %Library.List Any property that specifies CLASSNAME=1 will be bound to SQL as type %Library.ListOpens in a new tab; CLASSNAME=1 means the value for the property is an OID which is in Objectscript $LIST format. A property defined as: Property OID As %Library.Persistent(CLASSNAME = 1) [ Required ]; would, in previous releases, bind to SQL as %Library.IntegerOpens in a new tab. starting with this release, it binds to %Library.ListOpens in a new tab. Inheriting A Relationship Property From A Secondary Superclass Prohibited In previous versions, an attempted to inherit a relationship from a secondary superclass would get invalid results due to silent failures of the relationship at runtime. Beginning with this version, the compiler nows detects this in the class compiler and reports an error: ERROR #5572: Can not inherit relationship property 'X' in class 'Y.Z' as a secondary superclass. The failure occurred because a primary subclass of a persistent class shares the same extent as the superclass; but a secondary subclass does not. Inherited queries in the secondary subclass could not find the extent of the originating class to properly reference the class data. Studio Changes INT/MAC Save Does Not Compile Automatically In prior versions, there was an option in Studio that allowed a Save of an INT/MAC routine to execute a compile as well. With this version, that feature has been removed because the behavior was already available when using the Compile option; so it was redundant. In addition, not being able to save a modification without having it be projected immediately as executable code, while fine perhaps on a test system, was potentially disastrous on a live environment or a shared one. A Save for any given document type now simply saves the current version of the document back to the server. A Compile always saves the document and compiles the document into its descendant forms as well. For most users this is simply going to require using a different button or keystroke combination in the Studio. To compile a collection of documents you can make use of a Project and the "Build" option. Language Binding Changes Refactor Java .jar Files In this release, InterSystems has refectored its Java libraries into four parts: cachejdbc.jar – This contains the Caché JDBC driver and is required for all applications using the Java binding. It includes the following com.intersys packages: jdbc, jsse, jgss, and util (newly added in this release). cachegateway.jar – This contains Java and JDBC Gateway. It depends on cachejdbc.jar and includes the packages: com.intersys.gateway and com.intersys.jdbcgate. CacheexTRreme.jar – It contains the components for Java eXtreme, namely the com.intersys packages: gloals, mds, xdo, and xep. cachedb.jar – This contains the remainder of the InterSystems Java components, including Java Binding, Jalapeno, EJB (Enterprise Java Beans), and so on. It holds the com.intersys packages: cache, classes, codegenerator, EJB, jsp, and objects, as well as com.jalapeno package For more details, please consult The Caché Java Class Packages. Jalapeño Configuration Since Jalapeño was first made available, InterSystems introduced multiple configuration options for it that affect performance. Having the “correct” configuration in most cases can improve Jalapeño application performance, in many cases dramatically. The default configuration, however, was not optimized for performance of a typical application but rather was aimed to preserve full compatibility with original version of Jalapeño. With this version of Caché, the default configuration for Jalapeño has been changed to make it transparent to the application programmer and end user. Specifically, the following features are now available: There is now the ability to set a site-wide default Jalapeño configuration in addition to the existing ability to load a configuration for a given application. The sample default configuration file was reworked to make it self-explanatory. This file can be modified for site-wide defaults as well as copied and adjusted for specific applications. The default configuration has been tuned to provide better performance for the average application. Default Jalapeño configuration is now stored in Caché Installation directory in /dev/java/conf/jalapeno.properties. The file is a properties file with comments identifying what options can be configured and how. If the installation lacks this file because it is using an older server version, the hard-wired default is used. However, this configuration file can be added to any Caché server from 2010.1 and later. This file affects all clients connecting to the server, not the clients working on current machine. The default configuration file now uses a LAZY fetch policy and a GENERATE_HELPERS access method. This requires third-party open source libraries (available under Apache license). If this is not acceptable to a specific site, the configuration file MUST be changed. Changes To Java Generated Code For Properties Because of the new object dispatch Java driver in 2010.1 and beyond no longer uses projected values fields ii_<PropertyName>, jj_<PropertyName>, or kk_<PropertyName>. Generated samples will need to have the CacheDB.jar from 2010.1 or later in order to work properly. Class Name Changes For Caché eXtreme This change renames the Java package for eXTreme dynamic objects, previously com.intersys.extreme, to com.intersys.xdo (where “xdo” is an acronym for “eXTreme Dynamic Objects”, analogous to “xep” for “eXTreme Event Persistence”). The classes within this package are renamed as follows: The sample code java/samples/extreme/extreme/XTDemo.java has been changed to java/samples/extreme/xdo/XDODemo.java. Change To Object Save Methodology In Jalapeño With this version, if the fetch policy is DISCARD_AFTER_FETCH, and a list of related objects have been never accessed by the application from the parent side, then Jalapeño does not check to see if the objects in this list have been modified even on deep save. This drastically improves performance of deep saving a set of objects with a complex relationship graph. However, there is a possible loss of data when the fetch policy is DISCARD_AFTER_FETCH in the following scenario: Fetch an object (object A) from the database. Make some changes to it and objects that it references. Keep a referenced object (object B) in the application heap memory. Save object A back using deep save to save changes in all related objects. Fetch object A back. Make some changes to object B using its in-memory heap reference. Save object A using deep save. In this scenario, the modifications to object B might be not saved! Under these conditions, if application expects implicit modifications of objects in the application context after they are saved to be noted, it must not use fetch policy DISCARD_AFTER_FETCH. It should use the policy REFETCH_AFTER_FETCH. Changes to Java Mappings In Jalapeño In this version, the mapping of Java integers has been changed from the Caché datatype class, %IntegerOpens in a new tab, to %EnumStringOpens in a new tab. This makes the handling of logical values used in ODBC more intuitive. This is the default behavior and may be overridden by an @Extends annotation. Use Most Specific Types In Datatype Collection Properties For ActiveX and C Bindings Before this release, collection properties were projected as collections of strings in the dynamic C++ binding. For example, the property property sysListColn as list of %List; was projected as a collection of strings. This had the unfortunate consequence that if any element of the collection was incorrectly converted to a %String, it could corrupt the entire %List. Now the dynamic class definition that is returned for the type of a collection property reflects the correct collection element type. For example, the meta information for method GetAt() of the dynamic class definition associated with sysListColn now says that the type id of the returned value is D_LIST_ID. This change is off by default for backward compatibility. To turn it on in C binding: call cbind_set_typed_colns(). To turn it on in CacheActiveX: call .factory.SetOption("TypedCollections", 1) Preserve The Value Of Decimal Values Used In C++ Queries In prior versions, d_query represented d_decimal to ODBC as SQL_C_DOUBLE; now it uses SQL_C_CHAR. The string conversion preserves the exact value of the number. SQL Changes Parenthesized Literal Replacement Following An ID Because of changes in the SQL parser, in some rare cases, constants that were manually surrounded with parenthesis to prevent constant replacement, would now have to be surrounded by two pairs of parenthesis to achieve the same effect. That is, instances like ... WHERE f4 = ('Hello') ... should be changed to ... WHERE f4 = (('Hello')) ... For backwards compatibility, in certain contexts, a single pair of parenthesis will continue to work as prevention for literal replacement, for example: ... SELECT TOP (5) f6 ... Before this change, literal replacement was done even for a parenthesized literal if the parenthesis followed the ID "IN". If you wanted to prevent literal replacement in that case, you would enclose it in 2 pairs of parenthesis. The logic was changed to replace parenthesized literals that follow any identifier, except for the following: TOP, SELECT, LIKE, WHERE, ON, AND, OR, NOT, BETWEEN, %STARTSWITH, CASE, WHEN, and ELSE. Dynamic SQL Supports Statement Use In Files The Dynamic SQL Shell now supports LOAD and SAVE commands to load and save SQL statements from/to files. SAVE was used previously to save the currently prepared statement to the statement global. It has been redefined. To save to the statement global, it is now necessary to use SAVEGLOBAL or the abbreviation, SG. LOAD will load the contents of the specified file into the current statement buffer and prepare the statement. If EXECUTEMODE is IMMEDIATE and the statement is successfully prepared then it will be executed. SAVE will save the currently prepared statement to a file. If the file specified already exists then the user is prompted to overwrite it. If the user chooses not to overwrite, an error is reported and the statement is not saved. Dynamic SQL And CALL Statement Prior to version 2010.2, embedded SQL did not support a CALL statement when the target was not a function. Functions were callable using embedded SQL. Any application that uses CALL for a function will have to be modified in beginning with this release. If the called procedure returns a result set, applications must use the new %SQL.StatementOpens in a new tab class. If the called routine is a function, and it was supported in early versions, applications should use “SELECT <SQL Routine> <arguments>” instead. Import Utility For Sybase and MS SQL Server Reimplemented SQL statement import for Sybase and MS SQL Server has been reimplemented. Sybase and MS SQLServer statements are now processed using Dynamic SQL and the results of preparing the statement and executing the prepared statements are logged to a file and optionally echoed to the current device. The interface has not changed, but the dialog has changed slightly. In addition, the log file format is different and the parser used is the TSQL parser. That means more syntax is handled but it also means that errors reported will be different. For successfully processed statements the end result should be the same. CALL Is Restricted To Defined Procedures In this release, a loophole has been closed where a class method/query could be called as a stored procedure with the SQL CALL statement, even if the method/query is not specified as a SqlProc. This may require a change to the class definition if a class method/query is called from an SQL CALL statement that is not defined as an SQL procedure; its definition will need to change to declare it as such. Changes To Handle Queries Against Views With UNIONs In this release, a problem has been corrected involving queries against a view that has a union as part of the query, for example: SELECT 1 UNION SELECT 'one' The returned query metedata will now report a type for column one of the query as VARCHAR instead of INTEGER. Use of %vid For Row Selection As an alternative to TOP, this version introduces a new way to restrict the set of returned rows of a query. This release extends %vid to views and FROM clause subqueries. It represents a sequentially assigned row number. Thus, to get rows 5 through 10 of an arbitrary query, say: SELECT *, %vid FROM (SELECT ....) v WHERE %vid BETWEEN 5 AND 10 The phrase “SELECT * ...” does not include %vid; it must be selected explicitly: “SELECT *, %vid ...”. Also, while this feature is very convenient, especially for porting Oracle queries (this maps easily to Oracle ROWNUM), performance of queries may change as compared to TOP. SQL Statement Property Change The %Language property of %SQL.StatementOpens in a new tab is now named %Dialect. The SQL Shell LANGUAGE option is now named DIALECT. Informix Migration — SUBSTR Function In previous versions, when a SUBSTR function was discovered in an SQL context, the third argument was incorrectly passed as the end position. In this version, the SQL SUBSTR function correctly accepts a length as the third argument. TSQL Unsupported Functions The TSQL system functions, IS_SRVROLEMEMBER, IS_MEMBER, and ServerProperty are not implemented by Caché TSQL. References to these functions are now reported as compiler errors. TRUNCATE Collation Added Caché SQL now supports a new collation called TRUNCATE. TRUNCATE is the same as EXACT, but the application may specify a length at which to truncate the value. This is useful when there is EXACT data that the application needs to index and the data is exceeds the maximum length allowed for a Caché subscript. Like other collations that support a length argument, TRUNCATE(len) will truncate the exact value to “len” characters. If length is not specified for TRUNCATE, the collation will behave the same as EXACT. While it is technically supported, it may make definitions and code easier to maintain if you only use TRUNCATE when you have a length defined, and EXACT when you do not. Like the other collations supported by Caché, %TRUNCATE can be used as a unary function in an SQL Statement. For example: Set MyStringFirst100 = $extract(MyString, 1 ,100) &SQL (SELECT ALongString INTO :data FROM MyTable WHERE %TRUNCATE(ALongString,100) = :InputValue) When using TRUNCATE in a Map Script expression of a %CacheSQLStorage map definition, define the subscript using $$TRUNCATE. For example, the map subscript expression may be: $$TRUNCATE({MyLongStringField}, 100) Changes To $SYSTEM.SQL.TOCHAR And $SYSTEM.SQL.TO_CHAR $SYSTEM.SQL.TOCHAR(<null>) and $SYSTEM.SQL.TO_CHAR(<null_value>) will now return NULL and not 0 for numeric-to-character conversion. Support Optional Second Argument Of %inlist() To Provide A Selectivity Hint The %inlist operator can now also be used as a function with optional second argument. This is intended to give an order of magnitude estimate of the number of elements involved in the query. Thus, using a small number of different cached queries, you can get different plans for different cases, for example: small lists – %inlist <list> SIZE ((10)) medium lists – %inlist <list> SIZE ((100)) large lists – %inlist <list> SIZE ((1000)) huge lists – %inlist <list> SIZE ((10000)) The second argument must be a constant when it is compiled. From clients except embedded SQL, this means that parentheses must be used as in the example above. Changes In TO_CHAR Handling In this version of Caché, TO_CHAR has been enhanced in order to support conversion of logical %TimeOpens in a new tab values to String values. If the value for the expression top be converted is a numeric value and the format contains only the following TIME related format codes: HH – Hour of Day (1 through 12) HH12 – Hour of Day (1 through 12) HH24 – Hour of Day (0 through 23) MI – Minute (0 through 59) SS – Second (0 through 59) SSSSS – Seconds since midnight (0 through 86388) AM – Meridian Indicator (before noon) PM – Meridian Indicator (after noon) The expression will be treated as a logical %TimeOpens in a new tab value and not a Logical %DateOpens in a new tab value. For example, the selection SELECT TO_CHAR($piece($horolog, ',' ,2), 'HH12:MI:SS PM') AS THE_TIME will result in THE-TIME having a value formatted as something lilke 11:43:26 AM. Evaluation Of Macros And Functions In SQL Preprocessor Beginning with this release, there has been a change in the behavior of the DDL CREATE PROCEDURE, CREATE FUNCTION, CREATE METHOD, CREATE QUERY, and CREATE TRIGGER statements, when compiled as embedded SQL statements or prepared as dynamic statements. This change is not fully backward-compatible and may require modifications to your applications, especially when code bodies of type ObjectScript are used in the CREATE statement. The macro preprocessor evaluates # commands, ## functions and $$$macro references before any embedded SQL statement is processed. Consider the following statement: &sql(CREATE PROCEDURE SquareIt(in value INTEGER) RETURNS INTEGER LANGUAGE COS { #define Square(%val) %val*%val QUIT $$$Square(value) } ) Prior to this change the #define and $$$Square macro references would be expanded and processed when the CREATE PROCEDURE statement was compiled resulting in a method declaration as follows: ClassMethod SquareIt(value As %Library.Integer(MAXVAL=2147483647,MINVAL=-2147483648)) As %Library.Integer(MAXVAL=2147483647,MINVAL=-2147483648) [ SqlName = SquareIt, SqlProc ] { QUIT value*value } With this change the processing and expansion will be included in the procedure method definition, and get processed and expanded when the method is compiled: ClassMethod SquareIt(value As %Library.Integer(MAXVAL=2147483647,MINVAL=-2147483648)) As %Library.Integer(MAXVAL=2147483647,MINVAL=-2147483648) [ SqlName = SquareIt, SqlProc ] { #define Square(%val) %val*%val QUIT $$$Square(value) } Code that relies on the old behavior of the macro expansion occurring during the compilation of the CREATE PROCEDURE statement will have to be changed. Alternatively, use %SQL.Statement to prepare and execute the CREATE PROCEDURE statement dynamically. Finally, in prior releases, ObjectScript program code is enclosed within curly braces, for example, { code }. If material needs to be included, the #Include preprocessor command must be prefaced by a colon and appear in the first column, as shown in the following example: CREATE PROCEDURE SP123() LANGUAGE OBJECTSCRIPT { :#Include %occConstant } Beginning with this release, the leading colon (:) is no longer required, but it will be accepted without error if present. Corrections To Date/Timestamp Comparisons And SQL Categories The datatype classes %Library.DateOpens in a new tab,%Library.TimeStampOpens in a new tab, %Library.FilemanDateOpens in a new tab, %Library.FilemanTimeStampOpens in a new tab, and %MV.DateOpens in a new tab are now treated as follows with regard to SqlCategory: %Library.DateOpens in a new tab classes, and any user-defined datatype class that has a logical value of +$HOROLOG should use DATE as the SqlCategory. %Library.FilemanDateOpens in a new tab classes, or any user-defined datatype class that has a logical date value of CYYMMDD, should use FMDATE as theSqlCategory. %MV.DateOpens in a new tab classes, or any user-defined datatype class that has a logical date value of $HOROLOG-46385, should use MVDATE as theSqlCategory. %Library.FilemanTimeStampOpens in a new tab classes, or any user-defined datatype class that has a logical date value of CYYMMDD.HHMMSS, should use FMTIMESTAMP as theSqlCategory. A user-defined date datatype that does not fit into any of the preceding logical values should define the SqlCategory of the datatype as DATE and provide a LogicalToDate method in the datatype class to convert the user-defined logical date value to a %Library.DateOpens in a new tab logical value. A user-defined timestamp datatype that does not fit into any of the preceding logical values should define the SqlCategory of the datatype as TIMESTAMP and provide a LogicalToTimeStamp method in the datatype class to convert the user-defined logical timestamp value to a %Library.TimeStampOpens in a new tab logical value. Finally, The SqlCategory of %Library.FilemanDate is now FMDATE. The SqlCategory of %Library.FilemanTimeStamp is now FMTIMESTAMP. The SqlCategory of %MV.Date is now MVDATE. This version also changes the outcome of comparing FMTIMESTAMP category values with DATE category values. Caché no longer strips the time from the FMTIMESTAMP value before comparing it to the DATE. This is now identical behavior to comparing TIMESTAMP with DATE values, and TIMESTAMP compared with MVDATE values. It is also compatible with how other SQL vendors compare TIMESTAMPS and DATES. This means a comparison of a FMTIMESTAMP 320110202.12 and DATE 62124 will no longer be equal when compared with the SQL = operator. Applications must convert the FMTIMESTAMP to a DATE or FMDATE value to compare only the date portions of the values. Datatype Of A CASE Expression When using a CASE expression, if any of the potential return values is of type LONGVARBINARY, the return value of the CASE will be of type LONGVARBINARY; otherwise, if any of the potential return values is of type LONGVARCHAR, then the return value of the CASE function will be of type LONGVARCHAR. After that, the datatype of the value will be the first applicable from among: VARBINARY, VARCHAR, TIMESTAMP, DOUBLE, NUMERIC, BIGINT, INTEGER, DATE, TIME, SMALLINT, TINYINT, BIT. CSP Changes Error Handling While Changing CSP Applications An event class can be attached to a CSP application. In particular, a callback is made when the session moves from one application to another. According to the design intent, if an error code is returned, the flow will be redirected to the error page. In previous releases, this error code was “partially” ignored. If an error code was returned, the user was redirected to the error page. However, if the application change completed and the user was actually in the target application, pressing the reload button would display the target page. Now, the error code aborts the application change. The user sees the same error page appear, but pressing the reload button redisplays the error page. Sticky Login And Login Tokens For Authenticating CSP Applications CSP logins are now “sticky”. When reentering a previously-entered application, that application will be the same user as on the exit. (Previously, when sharing sessions, the re-entered user would depend on what other applications had been visited.) When you log in as X, a login token is sent in a 'most-recently-logged-in-user' cookie to the browser. When you enter an application for the first time, if login tokens are enabled for the application, the CSP Server will attempt to log you in using that cookie. All applications in a session now move in tandem. Logging into a new user within an application moves all applications to that user. Logging out a session, logs out all applications in that session. Allow CSP Applications To Be Grouped By Browser Caché has two types of Authentication Groups: by-session and by-browser. CSP applications within a group attempt to keep their authentication in sync when possible. All applications are in an Authentication Group. The default authentication group for an application is by-session. (So applications all by themselves in a Session form a single-entity authentication group.) Explicit grouping takes precedence over implicit. So if a group-by-browser application is forced into a session with some other applications, it will share authentication by-browser, not by-session, with the other applications. Session-Sharing Depends On Exact Cookie-Path Match In previous versions, two applications were included in the same session if, when entering an application, the Session-Path-Cookie of the first application was a prefix of the previous one. This rule introduced an inconsistency. If the applications were entered in the opposite order, they were in different (instead of the same) session. Now, applications can be made to share (run in) the same CSP session if their Session-Path-Cookie matches exactly. Applications sharing a session can share data in the Session object and when possible, keep their authentication in sync (remain logged into the same user and are logged out as a unit.) The global ^%SYS("CSP", "UseLegacySessionSharing") can be set to 1 to return to old-style session sharing. Session Events And Security Context Management There are two important changes to session event classes and their security contexts made in this release. First, a CSP session can have multiple classes notified when an event occurs. That is, the statement SET %session.EventClass = "User.MyClass" will add User.MyClass to the list of classes to callback, and a statement such as: SET %session.EventClass = "User.MyClass1", %session.EventClass = "User.MyClass2" will add both classes to the list. This behavior is both easy to explain and it assures an application does not accidentally remove existing event classes , preventing them from being able to run cleanup code which may result in a resource leak. Second, when moving between CSP applications in the same session Caché automatically adds the event classes of the new application to the list. In previous releases, Caché ignored the new CSP applications event class preventing cleanup of temporary data created in the namespace associated with this application. Fix CSP Language Match For * In ACCEPT_LANGUAGE If the CGI variable, HTTP_ACCEPT_LANGUAGE, has a value of “*” (which means any language), and with the same quality rating as a specific language, then use the specific language. This implements the HTTP 1.1 rule: The language quality factor assigned to a language-tag by the Accept-Language field is the quality value of the longest language-range in the field that matches the language tag. The default quality factor is 1 even for “*”. Change Cookie Timeout Name In order to distinguish between different timeouts for various sessions, processes and so on, and the expiration time of cookies controlled by the browser, this release has changed the name, “cookie timeout” to “cookie expire time”. XML Changes %XML.DataSet Will Now Use Class/Property Metadata When Available Before this version, %XML.DataSetOpens in a new tab only used SQL metadata from the query being run. In particular, this meant supporting only the xsdtype of the base datatype from the %xsd. package and not supporting property parameters (such as VALUELIST for properties) and overrides of XSDToLogical and LogicalToXSD). Now, %XML.DataSetOpens in a new tab looks at the class name and property name metadata for a column when it is available (it is not always available, for example, if the column is an expression). This change will affect applications that want to just use the SQL-based data. Column Names With Embedded Spaces Beginning with this version, %XML.DataSetOpens in a new tab will convert embedded spaces in column names to underscores. Web Add-On Declaration Required If an application wishes to be identified as an anonymous web application eligible to run under the terms of the Web Add-on license, it must call $SYSTEM.License.PublicWebAppUser() to identify itself as such. Web Services Changes SOAP Fault Handling In this version, Caché now returns SOAP 1.1 faults with an HTTP status code of 400 for client errors, and 500 for server errors as defined by WS-I Basic Profile. SOAP 1.2 faults already conformed to this use of 400 and 500 status codes. Furthermore, SOAP calls OnInternalFault for all %StatusOpens in a new tab-based faults produced by the Initialize method. OnInternalFault was already being called for %StatusOpens in a new tab based faults produced in other places in the code. If a client is expecting a status code of 200 for faults, then this will no longer work and the client application must be changed. Wizard No Longer Generates SOAP Headers The SOAPHEADERS parameter will no longer be generated by the SOAP wizard. Instead the parameters XData block will be used to specify which headers to expect at the method level based on the WSDL. Additional header information for a web service or web client is added in an XData block in the web service or web client class. The XData block has the parameters element in the configuration namespace as the root element. Maximum Method Name Length In this release, SOAP sets the length of method names to 180 characters. Some methods may end up with different names when recreated by the SOAP wizard because truncation no longer needed. Do Not Close Streams Beginning with this release, the files that implement the file streams used by SOAP web service and web client will not be closed until the streams that use them are closed. BASIC And MVBASIC Changes Alterations To Line Continuation Processing BASIC and MV BASIC line continuation characters allow a source line to span multiple lines. Placing a line continuation character as the last character on a line continues that statement on the next line. The BASIC line continuation character is an underscore (_); depending on the emulation options, MVBASIC can use either a vertical bar (|) or a backslash (\)as a line continuation character. Previous versions of BASIC and MVBASIC would sometimes allow a line continuation character to appear at places other than the last character of the line. However, this could cause problems if the line continuation character had an alternative use for another purpose. Beginning with this release, the continuation character must be the last character of the line. xDBC Changes Removal Of All Support For XA This change removes some experimental XA code from the InterSystems JDBC driver. Caché does not support the XA protocol, nor did the JDBC driver. However, as this was seriously considered a number of times, some experimental code was added to test whether JDBC could fully support it one day. This feature was documented as unsupported, and InterSystems has now decided to remove this dead code as part of an overall Java cleanup. Changes To Catalog Queries The TABLE_TYPE argument for the ODBC Catalog Query SQLTables and the JDBC Catalog query getTables has been enhanced to support the following new types in addition to the types 'TABLE' and 'VIEW' that have always been supported: SYSTEM TABLE – a table projected from a class that has a System > 0 setting SYSTEM VIEW – a view projected from a class that has a System > 0 setting GLOBAL TEMPORARY – a table projected from a class with class parameter: Parameter SQLTABLETYPE = "GLOBAL TEMPORARY"; Prior to this version, if an application called SQLTables or getTables with an empty TABLE_TYPE argument, only TABLE and VIEW types would be returned. Now all all types that exist in the catalog will be returned. If an application only wants the TABLE and VIEW types, it must be changed to pass in only 'TABLE', and 'VIEW' for the TABLE_TYPE argument. MultiValue Changes MVFILENAME Class Parameter The presence of MVFILENAME corrects inconsistency issues that previously might occur between copied or imported classes, and the file references in the VOC. The use of MVFILENAME assures that the storage definition in a class that extends %MV.AdaptorOpens in a new tab closely follows the definition of the MV file. InterSystems strongly recommend the use of MVFILENAME. Debugger Changes For D3 Emulation When using the Studio debugger in a MultiValue account using D3 emulation, if the debugger does not find a value for a mixed or lower case variable name it will look for an uppercase name. This will help when debugging routines that use the D3 default behavior of converting all variable names to uppercase. This can cause confusion if $OPTIONS -NO.CASE turns off the default for a routine and it uses two variable names that differ only in the case of the names. D3 routines compiled with case sensitivity and with variable names identical except for case may see unexpected values in the debugger. This change applies only to displaying variables. To modify a value, the true uppercase name must be used. SUM Verbs Changed To Return Info Via Error Messages In this version, the SUM verb now generates its results in the form of error messages. Previously, it presented its results via the RETURNING clause. Behavior Changes For Dynamic Arrays With Unassigned Entries The current behavior of padding the MATBUILD/MATWRITE dynamic array with null entries for unassigned array nodes at the end is no longer supported. No legacy MultiValue system provides this behavior, so InterSystems believes no existing applications depend on it. In prior releases, unassigned nodes would be treated as empty strings and the resulting dynamic array could have many empty entries at the end if the higher array subscripts were undefined. Beginning with this release, the behavior of MATBUILD and MATWRITE has changed for arrays that have unassigned nodes. Now the behavior depends on an emulation option, which is set to match the behavior of existing legacy platforms, namely: Cache, Universe, and Unidata emulations Empty strings will be used for unassigned nodes and the dynamic array will be truncated when the highest subscripts of the array are unassigned. jBase, Reality, and D3 emulations An <UNDEFINED> error will be thrown when an unassigned array node is encountered. The default behavior based on the emulation type may be overridden with the a compile option. $OPTIONS MATBUILD.UNASSIGNED.ERROR will cause an error to be thrown, while $OPTIONS -MATBUILD.UNASSIGNED.ERROR will use an empty string and truncate any trailing unassigned nodes. Timed INPUT And AUTOLOGOUT Behavior Change In previous releases, AUTOLOGOUT was found to be unpredictable. In this release, the AUTOLOGOUT and timed INPUT statements are consistent. If an application contains a timed-out INPUT command, the behaviour has changed slightly. For example, if the timeout was 30 seconds as in INPUT var FOR 300 ELSE ... then in prior releases, the 30 seconds applied to the entire INPUT statement. Starting with this release, every time a key is depressed during the INPUT statement, the timeout is reset to another 30 seconds. Similar behavioral changes also apply to AUTOLOGOUT; each keystroke will reset the AUTOLOGOUT timeout. jBase And Undimensioned Arrays In this release, an undimensioned array reference will now be a compile error in JBASE emulation. Previously, it was treated as an implicit FMT operation. jBase CommandInit And CommandNext Changes Prior to this release, to call the routines CommandInit and CommandNext (or JBASECommandInit and JBASECommandNext in later releases of jBASE), you had to create an F pointer into the samples directory that was provided with the Caché installation and compile and catalog CommandInit and CommandNext. Beginning with this release, the routines CommandInit and CommandNext (and their newer equivalents JBASECommandInit and JBASECommandNext) will be supplied by default with Caché; no compilation is required by the customer prior to use. The sources for these 2 routines will no longer be provided with the release of Caché as they are no longer needed. The values returned by CommandNext are still defined in a file called CommandInclude and this might still be needed by the customer to decode the returned value. This source will continue to be included with the Caché release at the same location as before,that is, <install_directory>/dev/mv/samples/CommandInclude When calling the CommandNext routine, the third parameter is the timeout value. Caché now supports the following timeout values: timeout < 0: the routine returns immediately timeout = 0: the same as timeout = 1 timeout > 0: Number of tenths of a second to wait until Caché returns a timeout value. Windows platforms only support whole seconds for wait times, not fractions. Therefore, a timeout of 1 means one second. jBase CHAIN Handling The MVBASIC CHAIN statement was not always correctly handled when doing jBASE emulation. CHAIN under jBASE emulation should not pass the default list (list 0) if that list is local or modified. When MVBASIC was initiated by an EXECUTE statement, then a CHAIN would always pass the default list. Now, the passing the default list is always disabled for programs compiled under jBASE emulation. Evaluation Order For Boolean Operators The evaluation order of the Boolean-and operators (AND, “&”) versus the Boolean-or operators (OR, “!”) in MVBASIC has been changed to agree with the MVBASIC specification. Previously, the MVBASIC conjunction operators (AND, “&”) had higher precedence than the disjunction operators (OR, “!”). Without parentheses an AND operator would be evaluated before an OR operator. Now the AND and OR operators have equal precedence; they will evaluated in left-to-right order in the absence of parentheses. Handling Of Division By Zero Beginning with this version, the MVBASIC DIVS() and MODS() array functions now signals a <DIVIDE> error if either encounters a divide-by-zero error. This will end execution of the DIVS() function and execution will start searching for a trap handler. Previously, a divide-by-0 during a DIVS() array operation resulted in a message being sent to the operator console log, a 0 being used for the array component in error and execution of the array divide continuing with the remaining elements. List Collections For MVENABLED Class Cannot Be Empty The index on a list collection in an MVENABLED class will always contain at least one entry. When the collection is empty, an indexed element value of NULL, and key value = 1 will be inserted into the index. DEFFUN CALLING Syntax Changed The syntax of the CALLING clause of a DEFFUN statement has been changed. The CALLING keyword can now be followed by either an identifier or a quoted string literal. If a quoted string literal is used, the first character of that quoted string literal may be an “*” character. If the leading character is an “*:” Under Unidata emulation, when a name begins with a leading “*”, the leading * is removed and the name is looked up as a global name. It is an error if the global name is not found. Non-Unidata emulations allow a function to have a leading “*” character in its name but the leading “*” does not modify the function name lookup rules in these other emulations. PRINT ON <Channel> Is Now Emulation-Specific This release adds MultiValue emulation differences for the use of the statement PRINT ON channel {EXPRESSION} Prior to this, the output would always go to a spooler print job regardless of the use of PRINTER ON or PRINTER OFF. Now, the action depends upon the emulation. For some emulations (Reality, jBASE, D3), the output will go to the screen if the application has not executed a PRINTER ON statement. For other emulations, the behavior remains the same, that is, output to a spooled job. SPOOLER(2) Function Return Changed In previous releases, the SPOOLER(2) function call in MVBASIC returns 3 fields that are the same, for compatibility reasons. Beginning with this release, one of the duplicated fields (field 15 or MultiValue 15) is now the Caché user name that a user will log in as when they initially connect to Caché in a locked-down security-enabled system. The following values/fields are now of interest as they share related information: 3 – user name; OS login name = Fred 14 – user name; OS login name, same as 3 = Fred 15 – Caché user name in a security-enabled locked-down system, or “UnknownUser” 17 – MV account name; = USER Support Universe/Unidata Behavior Of Data Stack In PROC P Command The PROC P command now respects the emulation flag STACK.GLOBAL, so that if set, the data stack is not cleared when a PROC executes the P command but rather the secondary output buffer is appended to it. SSELECT Result Ordering Change In previous releases, the MVBasic SSELECT statement provided the same ordering as the SELECT statement: theCaché default collation ordering. Caché default ordering places the empty string first, and then places the canonical numbers before all other strings. Beginning in this version, it now sorts a list created from file or another select list into ordinary string collation order. If a programmer wants numeric strings sorted into numeric order, then SELECT should be used instead of SSELECT. If the input to a SSELECT statement is a list variable that includes duplicated values, the duplicates will be replaced by a single value as part of the sorting process. Thus, SSELECT of a list variable may generate a new list variable with fewer elements. Both SELECT and SSELECT with an MVBasic dynamic array as input do not sort the array. The elements of the list occur in the same order as the elements occur in the dynamic array. Changes to MATBUILD And MATWRITE Behavior For Unidata For Unidata emulation, MATBUILD and MATWRITE now handle trailing empty values differently. MATWRITE will truncate them; MATBUILD will not. Dynamic Vector Arithmetic Changes In earlier releases, when asked to perform a divide by the value 0, the DIV() and MOD() functions, and the / division operator issued a <DIVIDE> error; but, the DIVS() and MODS() functions (dynamic array vector arithmetic) returned a value of 0. Beginning with this release, all these operation will perform the same way returning a <DIVIDE> error. PHANTOMs Now Run The Login Verb The handling of PHANTOMs has changed in this release. When a PHANTOM starts, it will now execute the LOGIN verb. PHANTOPMs now also supprt the CHAIN and EXIT commands. Zen Changes Change To showModalDialog Function In Zen For Internet Explorer Zen defines a utility function, zenLaunchPopupWindow, that creates a popup window. One of the options it supports is "modal=true". In prior releases, this function would detect Internet Explorer and use the special IE-extension, showModalDialog. This function has proved to be unreliable for his purpose. In this release, Zen implements modal behavior as follows: When zenLaunchPopupWindow is called in modal mode, Zen turns on the modal handler for the page, and places a transparent area over the entire page and trap any mouse clicks not in the modal area. zenLaunchPopupWindow sets up surrogate callbacks to trap the end of modal behavior. Specifically, when the user clicks outside of the modal popup, Zen gives focus back to the popup. Enforce Restrictions On seriesNames And labelValues In pieCharts The following restrictions are now enforced begiinning with this release: A seriesNames CANNOT contain literal values in a pieChart. A seriesNames cannot contain more than one xpath expression in a pieChart. A labelNames cannot contain an xpath in a pieChart. Rather, use seriesNames if one needs to use an xpath in a pieChart; use labelNames if one needs to use literal values in a pieChart. Changes To xyChart When introduced in version 2010.2, colors and marker shapes were determined ordinally in a repeating pattern. This proved to have limited utility. Now, Zen gives points belonging to the same dataField in an xy-Chart the same marker and color. In seriesNames, the x-coordinate dataField mustbe named so that the legend is marked correctly, even though the x-coordinate does not appear in the legend. It is the (x,y) combinations that appear in the legend and they are colored according to the color the y dataField has in the legend. Do Not Put / Before Report Name In Generated <apply-templates> To allow a use-case of sub-reports where each sub-report provides a sub-section of the data, Zen no longer inserts a “/” in front of the report name as that name appears in the <xsl:apply-templates> element. This aligns how the PDF works with how HTML works. This allows a use-case that works in the HTML case to work in the PDF case, but there may be PDF reports that are relying on the <apply-templates> selecting "/reportname" rather than "reportname". The solution for these ZEN Reports is to explicitly put a "/" in front of reportname. SVG Chart Component This version introduces a major rework of the way axes are labeled in the chart class (parent of bar chart and line chart) It introduces a new flag, autoScaleText which defaults to true. When set, the axes renderers mostly mimic the previous chart behavior. In this scenario, text scales with the drawing becoming arbitrarily large and potentially distorted if the aspect ratio of the chart changes. The only significant change in this mode is that, if the X axis has too much text along it (in danger of overprinting labels), it will automatically attempt to angle the text (if a label angle has not already been specified) and, in the case of a value axis, will omit some labels in the name of clarity. If the axis is plotting category names, all labels will print, but the user may need to play with font and rendering sizes to ensure legibility (not unlike the previous release). If autoScaleText is false, the text scales (or not) independent of the size of the chart itself. In practice, this means that font sizes effective specify maximum font sizes (fonts may still scale smaller if need be on very tiny or crowded graphs). This approach allows Zen to use space more efficiently as graphs are resized and “normal” aspect ratios of text are preserved at different rendering scales. The labeling of axes also changes under this mode. For the vertical axis, if the dimension is a continuous value scale, the system will automatically decimate the printed label set to convey maximum information in the space allotted without overprinting. For categories plotted on the vertical, labels will be shrunk to fit as needed, but no label will be omitted. On the horizontal axis, labels will first be angled in an attempt to fit more information across the line. If this fails to fit all labels into the available space, then range value labels will be decimated while category labels will be shrunken. Selection Focus For simpleTablePane This version alters the base behavior for the selection focus of tablePanes. Previously, clicking on a given row selected it (set the selectionIndex property and highlighted the row); subsequent clicks on the selected row were ignored. The only way to change the selection focus was to select a different row in the table. This had the effect of making it impossible to unselect _all_ rows once a selection had been made. Under the new system, clicking on a selected row, unselects it (the selectionIndex for the table is set to -1 and no rows are highlighted). Changing the selection focus works as before, the core difference is the toggling behavior of the currently selected row. In addition, this version implements an onunselectrow callback mechanism, allowing page designers to be notified when a given row is unselected. This event fires both when toggling a single row and when changing the selection focus from one row to another. In the multi-row case, the unselectrow event is fired FIRST with a selectionIndex of -1. Once handled, the selectrow event fires with the selectionIndex set to the newly selected row number. Pages that display information about the currently selected row outside of the table itself (such as in a text box) based on an onselectrow callback will display stale information if the current row is toggled off and the page hasn't been updated to clear the old info. The solution is simply to listen for the onunselectrow event and clear the supplemental widgets. Changing Handling Of Invalid Classes Used With Form Controls In preceding versions, invalid values entered into forms would be tagged with the DOM node classname zenInvalid at the expense of any existing class name given via the controlClass attribute. All modern browsers, however, support a node belonging to multiple classes simultaneously so this either/or behavior is not necessary and, may actually disrupt the geometry of the page. This version allows the zenInvalid class membership to supplement and existing class designation rather than wholly supplant it. This does introduce the issue that if the both the controlClass and zenInvalid attempt to set the same CSS style (such as the background color of a text box, which zenInvalid wants to turn red), the question of which rule takes precedence becomes a browser dependency. The easiest way to avoid this issue is to ensure that the CSS rules set for the zenInvalid class do not directly compete with any styles associated with developer specified controlClass designations. Studio Changes Save No Longer Compiles Studio no longer supports “Compile On Save” functionality. The Save command will always perform a save operation only, but no compile will be executed. Users must manually select the compile options. The option has been removed from the compiler behavior settings. Export And Import All Settings From A Project To XML Before this, Studio only exported certain setting from a project to the XML export format, and only imported specific settings. This has now been changed to export/import all the useful settings to XML. This change is fully compatible with importing XML exports of projects from older versions of Caché. However, the new export outputs many more fields. Importing the XML into an earlier version of Caché will probably fail validation. This can be worked around by passing the '-i' flag to turn off schema validation during the import so that only the fields that the older system knows about will be imported. %Installer Changes <Database> Create Attribute Correction In prior versions, the <Database> tag did not properly handle the value of the Create attribute. It did not distinguish between values of “yes” and “overwrite”; both behaved as if “overwrite” had been specified. In this release, the operation has been corrected to match the documentation for the attribute.
https://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=GCRNA_20111
CC-MAIN-2022-33
refinedweb
15,934
52.9
Join devRant Search - "mood" - - - Current mood: - Referred to manager as "Mein fuhrer" to a colleague in slack. - Reading an email from a recruiter.13 - - - During the second year of my graduation we had a subject called C & Data Structures. This asshole of a teacher (who taught programming by just reading the programs out of the textbook ) came to somehow know that I had learnt C & was good at it (some student had gossiped about me in front of him). Everyday when he came in for the lecture he used to call my name & say - "You think you are very smart please come in front & teach C to everyone" for no apparent reason. (I had never showed him that I was good in programming). For almost complete semester I kept silence & he used to laugh & keep me standing for the complete lecture. But one day I was particularly not in a very good mood & he came & said the same thing. I went & taught for the whole lecture & the whole class applauded at the end. The look on his face was priceless 🤣7 - Lately my mornings have started out with sitting on my front porch with a cup of coffee and a smoke for about 15 or 20 minutes scrolling through devrant. Probably why ive been in a better mood these days31 - I'm sure we've all seen a rain binary cloud picture like this. But have you seen one that fits your mood? 🤔🤔20 - - - - - -You can't just turn on creativity like a faucet. You have to be in the right mood. -What mood is that? -Last-minute panic.5 - - - - Crazy how git got its name: ~ from github git1 - - I finally get in the mood to work on my side project and GUESS WHAT? THE API IM USING FOR IT IS FUCKING DOWN 😂🔫7 - - - anyone else just wake up one day and just isn't in the mood to do any work and feels like they are making no progress.7 - - Guys, I'm sorry to bring the mood down but I'm having a massive issue with leaving my job right now. It's supposed to be happening next week, it's been pushed back a lot already and now my boss is effectively emotionally and literally blackmailing me. I wanted to leave originally to learn more from a mentor but now I'm seeing how much of a toxic environment it is and he has told me that if I left and absolutely everything I'm doing isn't done, he could sue me. I don't define what work I get so I have no control over that so how can I ever say whether I'm fully done or not? He can just add more to it and then say it's not done. It's out of my hands. What do I do? If I leave I'm basically guaranteeing that he'll sue me but I can't stay. I'm so angry and upset and frustrated and I just feel like a fucking wreck.57 - I woke up nicely, made my coffee and breakfast, got into coding mood, really motivated. "Huh, how am I supposed to do this... Let's Google. Ah, StackOverflow has the perfect solution to my problem." *clicks link* *irritated internal scream* Noooooooo!6 -. - - starts weekend with full mood to work on personal project. End up watching youtube videos all day long.1 - I am terribly triggered. Somebody please lift my mood. Music helps me everytime but it won't help today. I can't even cry it out as I am at work. Please help.117 - - Was feeling all down and depressed today and gaming really helped uplift my mood. Hitman - Blood Money is one hell of a game! P.S : To all those who say gaming causes violence and aggression......well, fuck you!!!!!26 - No... No... No! The game engine is not in charge of code optimisation, if your program runs like ass; it is 99% going to be your fault... Sick of seeing people judge engines because of the poorly optimised things made in them by half assed developers... Why do the good things never get any attention where the shit gets all of it... Why?! (Just had someone crack the shits at me because I'm not using a 'real' engine and am not a a 'real' developer because I'm not using unreal... So I'm in a fan-fucking-tastic mood after that :-D)4 - Today is my birthday... apparently. Yes I didn't wake up going "Woo! Today's my birthday! I'm another year older!!!" But I did get bunch of emails from recruiters, Fandango... So basically spam...19 - So my company hired a new UX girl last week, today is hers 3rd day at work. Its 11.30 in the morning, I've been working for couple hours, on my custom module (if you have worked with drupal you know how stoked you are to write your own god damn code once in a while), im blasting some trance through the headphones. It is an early spring and the sky is clear. Perfect day non the less. Out of nowhere this new UX girl appears near my desk, grabs my tea spoon without even asking and goes to stir her god damn tea. She throws it on the kitchen table without cleaning and goes to her desk. I got so god damn triggered, this ruined my perfect mood for the upcoming 2 hours. Still cant think of a reason why she would do that, this is just plainly rude.15 - - Current mood: Yet another meeting that I was forced to join where my presence was absolutely not required10 - - Introducing my everyday weapon against bugs. Colour pattern to change depending on my mood or my rage against PEBKAC.5 - - -3 - - Burn out from studying today. Drifted off into a doodling mood and ended up with a wireframe. I need a nap.3 - Being bipolar sucks. Hypomania kicks you in the chin and your "realize" you're a shitty pprogrammer and your world is ending. Sometimes you get stickers from devRant thouch. mood++ Thanks devRant.9 - - I was working from home and had a long skype meeting. It was boring and I knew I wouldn't need to say anything the whole time. At the same time my girlfriend was in the mood so we did it on my desk with one headphone in my ear in case somebody asked me a question. Definitely not the worst meeting, but the most memorable for sure - found this website that helps me to concentrate while coding, Different types of background sounds choose according to mood - Yeah sure Phillips, Lets not support android 7. Because when you make an electronic device controlled by an app that helps people with chronic pain. I'm sure those people would like to spent their afternoon finding out how to downgrade android just to keep using it. The device is called pulserelief and when its working its great :) but when you're in pain you really are NOT IN THE MOOD TO DOWNGRADE YOUR ANDROID VERSION. - When you find a question on stackoverflow and you know the exact answer. But you're not in the mood to explain and put details. next2 - -. - Usually, when I'm in the mood to code, my GF will tease me by sending a lot of text messages at once. When I'm not in the mood, she had slept earlier :-|3 - When you're coding on a VM so slow that it takes at least 5 seconds for eclipse to save your changes, all because company thinks its too risky to have source code on the bloody physical machine6 - In that mood where I'm excited to code until my hands hit the keyboard then I just go off to YouTube or smthn3 -. - My week at glance: Monday: Sunday night hangover Tuesday:Prepare report for progress meeting. Wednesday: Progress meeting Thursday:work little bit for next week progress meeting. Friday: weekend fever and hence not in mood to work. #big #company #work #culture5 - - Hey hardware guys of devRant, I'm in the mood for a picture thread. Show us some of your failed projects!11 - - Don't ever talk to non-tech people about tech. It'll fuck up your mood. I've talked to a non-tech person lately, so...11 - it's amazing how much the mindset / mood you're in can influence your productivity. I had a minor spat with my teamlead earlier because I didn't get a lot done in the morning due to technical problems. That blocked me so much mentally that I hardly got anything done until I went home. I ate, calmed down, relaxed a bit and tackled my coding problem again. And within 45mins, I accomplished more than in most the workday.7 - Are you bored? Just start programming with BrainFuck and you won't be bored anymore but rather in a mood, which is best described with the word "suicide". :) ^ Sadistic Smiley of Doom5 - - - My compartmentalizing skill is not good enough. Wasted last night by doing nothing and falling asleep because of a bad mood. I have shit tons of tasks.3 - Just got a phone call that slapped me with a raise and I was in such a shit mood today and I’ve got fucking flu... a raise is just what I needed6 - Not a rant but I've been wanting to do this for a while now. Added some rgb mood lighting to my desk that's connected to a Raspberry Pi. Making a web interface next to change colors/set modes remotely :D3 - Current mood; Screw the language, fuck the compiler, piece of shit keeps throwing errors. Oh wait I forgot to set a variable. That was easy More errors?? FUCK ALL THE SHIT OVER AGAI- oh wait actually if I just do... SCREW IT ALL YOU CUNT CODE,, no no I didn't mean that dear. Don't give me more errors please. OH FUCK YOU.1 -8 - At a cafe, usually. I've found this cool coder meetup on Saturday mornings. A couple of techies just working on their laptops or socializing, depending on the mood. I'm more motivated with other people working around me.3 - Everyone is on their vacation and I am in good mood so time to refactor some 3 year old frontend, angular, javascript code. After 5 minutes of looking, some great quality of code snippet on the image below.10 - Some people should get their heads out of their asses to actually see that their fucking bad mood wont lighten up by randomly throwing insults around and generally being a dick.4 - - So the vacation mood wore off very quickly and the usual nobody will understand or nobody can accept me mood got activated.15 - - I booted up windows yesterday night to play some games which is weird for me since I am almost never in the mood It had to update for like four hours automatically without asking me first so I leave it on and just go to bed Next day, not really in the mood to play games, as usual I go to restart into superior distro: Linux Computer reboots into windows Try again: fucking windows Another: malware fills my screen once again This fucking ass clown overwrote grub This fucking piece of shit malware deleted my fancy dual boot screen and had the balls to casually say "Hi" while it did it I then remembered my laptop doesn't have a keyboard combination to select what to boot from. I have to fucking boot my laptop by pressing a pinhole on the side so I can select linux. Fuck Lenovo with their shitty button and fuck Windows On the bright side, I guess if anyone steals the laptop they'll never know I have a second OS on it. - - Do you all get the mood when you don't wish to code anything because you like to Google new technologies and platforms all day? I'm having such a mood today.2 - I was fucking paused that some asshole made me spill my beer, but these crazy cabrones have a way to lift up anyone's mood. 🤘14 - In a shit mood from developing and being an adult so fuck it, let's get drunk and delete windows and install arch as my main driver... Im sure nothing will go wrong...5 - - - Have you ever reach a point where you lost any desire or mood to do anything? Like when you don't even want to rant about the fuckedupshit you are in.9 - dream project you say. now we're getting somewhere interesting. a voice/gesture activated automatic assistant that uses face reconition for identify checking along with it being able to see your mood. tl;dr; aka jarvis6 - Well I have a normal dream except I can see the people in it as an object with properties all over them. When they are in a bad mood I tried to debug them2 - - Reads horrible code Opens DevRant to rant about it Reads some stories Ok, better mood, I can continue working now. What was my planned rant about again? Ah, it can't be that bad. Goes back to the code: Oh no, it is that bad...1 - Having a shit day at work and all of a sudden get a message that my big bag of beef jerky got delivered, mood instantly raised!!!1 - - Current mood: That irksome moment when you want to rant and vent about a particular workplace incident but wonder if your coworkers are on devrant too. And they certainly might figure out no matter how cryptic you are. *puts mask back on*2 - Today I was working in a university studyspace. Some girl noticed my dark theme IDE, running some tests and such and assumed that I'm a computer guru. She then asked me if I could help her with MS Excel or MS Word. To which I answered "sorry, no". She might've just been trying to start something with me, but that was a deal breaker hahaha (seriously tho, if I were in a better mood I would have helped her)9 - - Hey dfox... The web version of devRant only allows you to "--" a rant or comment? Were you in a bad mood when you wrote this? Lol8 - Server admin: "When do I need to make this config change for you?" Me (in my head): "You mean the one I put a note in the change request ticket about in ALL CAPS and surrounded by asterisks saying 3pm (aside from the scheduled time field that the ticket requires), and the one we then subsequently chatted about where I reiterated the criticality of the timing about and the one I copied you in the email chain about that said the time in big, bold letters the time? THAT config change?!" Me (IRL): "3pm, please." (does not inspire confidence, though better to be asked then they just go off and do it whenever the mood strikes I suppose, which HAS happened)3 - yeah !!!! i thought nothing can't break the mood of a developer ... but some fucking natural disasters matters.. #keralaFloods9 - I went out partying with couple of friends last night, it was nice ... So I met this girl, she was nice and beautiful, out going basically we were getting each other's vibes and the mood was right ... We had a drink together and everything was cool until I learned her last name ...🤣 I couldn't help it I cracked up it was so intense ... her last name was "chrooto" I know that I have ruined my chances with her, but I don't think I could've been able to hold it6 - !dev can you guys suggest me good TV/Anime series with few seasons or episodes so I can use my Sunday. Not in a mood for work.43 - "Calvin: You can’t just turn on creativity like a faucet. You have to be in the right mood." - Calvin & Hobbes3 - wk22| Tom Scott, I just love watching his videos. Its always inspiring and getting me into a somewhat good mood. Even the non tech related ones.2 -.6 - Just read this comment in my code from a few months ago... I guess I was in a strange mood at that time.. // Listen for fuckers. Also known by their muggle name: users.2 - Just finished a rant about rererereinstalling windows (sorry, in a ranty mood), and now I have another reason to rant. Not the 10 new and exciting bloatware apps. Again. Lovely. No, this rant is about Edge. You know, the new browser Microsoft is soo excited about (or was when it came out)? Just found out that it won't connect to Googles links to download chrome (tried 4-5). Because, you know, I might need to develop something. Incredible. That's some pretty high level *insertSpecialWords* from the Microsoft Edge team. "uhhhhh so your Highness, sir customer destructinator sir, our browser isn't that great. Everyone is still using chrome." "how about we stop them from downloading that freaking amazing browser. That should stump them." "wonderful sir! Amazing. We'll implement that straight away." >:( There's even a try this list of "suggestions" to fix this "problem". Including: > Make sure you've got the right web address. And my personal favorite, is less subtle: Umm, I did. And then you blocked me from doing the one thing that I would realistically use this browser for. Aaand after the windows 10 forced update debacle, I'm not feeling especially "friendly" towards windows' "suggestions". No worries though. I installed Firefox (not blocked) just to install chrome. Great job Microsoft. 10/109 - devRant is great because it keeps me in the mood to programme, create and envision new projects. Cheers guys.1 - - Not gonna lie. I’m in the mood to make a game and I’m gonna try it. I think it’ll be fun. Wish me luck! For anyone curious I’m gonna be using Godot because it’s not bad and It’s not nearly as fat as Unreal and Unity. They’re not bad just not what I want rn.7 -. - - - - - Ensure IDE has latest updates. Put on headphones. Start music (exact playlist depends on mood and location). And we're good to go.1 - - - Some little things can really boost your mood. Thank you @dfox and @trogus and thanks you all guys for being a great company :)1 - - Me: * gets into work in a surprisingly good mood for a Monday * Coworker: " hey so you know that shared folder that a LOT of our stuff is on as well as a LOT of stuff in the entire IT department is on? Yeah it's gone." Me * leaves work *3 -.7 - - - That feel when its monday and you know have a lot of work to do, but you don't seem to have any mood on coding... time to disturb my coworker1 - When the sales guy decides to strike a conversation and breaks the code flow. No, I'm not in the mood to talk - - It's terrible how my mood is greatly dependent on whether my code works or not. Feeling like shit at the moment.4 - - Mood swings are: When you get bored of black theme and change to light theme. And later changing back to the good old dark theme6 - So, following on from yesterday's rant about the PM... I was planning on going in today and asking for a meeting. As soon as the founder walks in he pills me aside and "politely" asks me to "keep my mood up around colleagues" as they "look up to me." Clearly the PM has said something. So I just politely go about my day, ignore everything and get to my work whilst solemnly wishing I could murder everyone here...2 - # Don't like ice coffee # not in a mood for hot drink # but I need coffee Most difficult decision 🤦11 - - -vn clean install [ERROR] Bruh, couldn't find any of these classes you're talking about. >mvn clean install [INFO] The job has completed without errors. Seriously, why is Java/Maven/Spring so temperamental. It's like it has to be in a good mood to compile for me.4 - As a programmer, I puts two glasses on my bedside table before going to sleep. A full one, in case I gets thirsty, and an empty one, in case I don’t. - going out of the cafe when some stranger stops me, he asks if I'm a programmer, said his friend told him, i was like yeah, i mostly do web stuff, but can work on any project. he then said, nah it's just about hacking that person, or even just his facebook account, i suppose it can be done.. then he looked at me noticing that I'm a few mood calories away from murdering his sorry ass. he asks if it's not bothersome to ask i said nah it's fine, just that every word you said after "hack" is bothering me terribly, he just stepped back and walked away - ITS FUCKING COLD IN THIS HOTEL!!!! I’M FREEZING TO DEATH!!! I really am in a coding mood right now but I’m tired anyway and the lack of warmth doesn’t make it better.. Gotta find an internet coffee shop or something like that..5 - - - - When you hit "Run" and realize you wanna make one more change but Gradle ain't in the mood to stop... - A bunch of testers got laid off at my company and we're facing a release. So our PM put all developers on testing with a total of 6k test cases (!). The overall mood at the office is not good. - Evening: I have no idea how to do it, 4 hours of programming are just wasted. Morning: Oh, I changed couple lines in yesterday code and everything is perfect now. - *Wakes up *Sits on PC *Some Progress On Project *Bug Arises *Mood Off *Tries Debugging and gets frustated *Goes to FB and also does Gaming *Goes to the bed for sleep, with sad face3 - - Was in the mood for distro hopping and installed Parrot (home edition, don't really care about pentesting but privacy features were a plus). Lovely distro. Already feel at home.1 - - So, you have some coffee, make up your mind, and sit down to begin the project you need to submit the next day. You fire up the machine and bam! Windows takes it's April update - "Do not turn off your PC", and a fucking rotation of evil dots on the screen for eternity. And it goes on and on, on and on, till you have lost all mood for work.3 - .4 - A while a go, we got a Feature Request by our client, which was a bit of a stretch. and by a bit of a stretch i mean horrible shit which is totally unusable, a technical nightmare to implement with almost no accessable data. well, the pm gave me the Ticket. when I First read it, I wanted to puke. since the pm wasnt in a good mood, i just wrote a large comment on where to implement that Feature to be a much less pain in the ass. many discussions with the pm and the Client later, i Had to implement it the way, they wanted. so i started. after one and a half week, i was almost ready, just a few hours left and the nightmare would be over what i didnt know is that the Client came over to discuss a few things with my Boss suddenly my Boss walks in and asked, how much im ready then He told me THE message i should should Revert everything ive done the last 1 1/2 weeks and implement the Feature the way, i told was better worst friday ever - - I am 17 years old, and I am trying to learn programming. I am currently trying to learn something in BASH. I have also used some JavaScript and Python to get a grasp of some concepts. It is very satisfying when I am in the mood, but I often find it hard to find motivation to learn. Does anyone have any advice for studying techniques? General advice would also make me very grateful! :-) I hope this is OK to post here. - According to my housemates, I laughed myself to sleep last night. I suppose I was in a good mood...1 - So I was in a great mood and decided 'fuck it let's try making something, have a couple beers, make some taco's and break out the old coder lxmcf'... Started cutting lettuce and then BAM! Cut a large chunk of my left index finger off... So now I am unable to type properly because my finger is making me angry with pain, guess that's what I get for wanting to get back into programming9 - As I told I put my resignation professionally yesterday. No bonds all I have to serve my notice period. So Todag HR meeting and I was totally harrassed by the hr.She is like who gave you growth?who bring you here? why you didnt you gave ultimate to me ? do you know the process of resignation?and so much unprofessional things she said to me .. I just cried in the conference room and came back .. I dont know wht to do.. This is my first switch and i am worried a lot. After listening her.. I seriously not in mood taking it back.12 - - - - Current mood: Preparing a communication plan for how to explain why we have decided to throw out the entire 3-years-worth-of-work code base for the frontend project we have inherited and rewrite it from zero because it's just. THAT. BAD.3 - - I fucking hate it when i give someone my phone and the first thing they do is to increase my screen brightness, deactivate eye care mood and start viewing my pictures 😡😡😡😡😡😠😠5 - - - In the mood of doing nothing because I have so much shit going on. Anyone knows that feeling? Also so many (cool) projects I would like to do but no motivation to start anything... I have no real reason to... I’m just waiting for motivation to come back one day - but it has been a long time.. : - - - What music do you listen to while programming? I personally listen to a lot of epic film music or metal, depending on my mood.14 - - - PISSED. Fucking Docker, for no fucking reason (no updates, no changes, etc), I tried booting it up following the morning ritual, and nope, ERR_EMPTY_RESPONSE when connecting to my current project (means I managed to connect but for some reason no data is sent). Nginx container doesn't yell about anything. Everything around works. Accessing the container works. Even pinging my dev domain works. Why the fuck suddenly fucking Docker just **stops**?! Restarted Hyper-V, updated laradock, recreated containers, disabled AcrylicDNS. NOPE. "Fuck you Phlisg, I'm not in the mood today" <lunatic Docker is lunatic> ARG. - - Once a programmer writes his first line of code, he can never undo it. Despite of no-code mood, I opened the IDE and started typing with a single finger. What's wrong with us? We breath code - The mood everytime I spend hours on a stupid bug and I get frustrated but then manage to fix it... "I CAN SEE CLEARLY NOW THE RAIN IS GONE..."..."IM THE KING OF THE WOOOORLD"..."WHAT A WONDERFUL WORLD!!"..."BAAAAABYYY COME HERE I HAVE GOOD NEWS!!!"... I just got that mood again - :') - - Layoffs, hard to see good working people leaving the building. You can feel the mood of the company the next days/weeks is a killer of productivity. - - - Today was a shit day and I was in a bad mood. I now had to do a very annoyint thing for uni so I got a bar of chocolate and wanted to reward myself at certain milestones. The bar is half empty and I haven't even started yet - - Anyone with experience in microdosing psychedelics, how's your experience so far? I'm mostly interested in effects on mood, creativity, and productivity I guess - but feel free to share other aspects of it.22 - Well... I guess I started learning how to program so many years back when I thought I could fix my girlfriend's mood swings with code. Guess what: we are married now and I'm still learning how to program!2 - - Decided to reset my Windows OS after 1.5 years. Such things happen when you're not in the mood to code - Hey guys!!...how about the idea to know, what clothes should I wear today, depending upon my mood..using facial recognition4 - class Me(Person): def day(self, mood): self.morning() self.job.start() while True: if self.job.time > 28800: break self.job.work() self.job.end() self.afternoon() self.evening() def morning(self): self.say("Hello World!") if mood == bad: self.be_grumpy() self.__super__.morning()5 - - - Can someone come and clean my desk? I not on the mood and my code to clear the desk seems to be stuck in a wtf loops.5 - - Listening to rain sound or pleasing sounds while coding helps me think straight and sets a good mood for me to complete a task.Music sets me straight. -...3 - - You go very early to office with happy mood to code what you were thinking all night....and when you reach there is no network nor power (to laptop).... !@##$!@$!@#$Q@#$ #FFFFF - - You know your life is fucking with you, when you need to start your college project from square-fucking-one again, for the forth time & looking through devRant isn't improving your mood. 😐2 - I really want to develop a mobile app that chooses a random combination of clothing and keeps track of the times I've used it, warning me when It's time wash them. Already decided that I want to try weex+vuejs, just need to be in the mood to start. - - - Can I just say, I am NOT a fan of fixing things or doing things for people because THEY work on the WEEKENDS. I mean like I'm chilling and maybe working on some stuff or having my me time, listening to some music or whatever and that's when you have someone from an internal team in your company (not my team) come to you with a bug or some FAVOR because apparently they're working even though it's a SUNDAY. It's just ruins your whole freaking mood. Idk if I sound cocky or whatever but I just had to let this out.3 -6 -?27 - - - am I the only one switching between dark and light theme in my IDEs depending on my mood and the outside temperature? today definitely light!4 - oh man 2 1/2 weeks completely away from programming, IT things and so on.. was in trouble and in a shitty mood, but finally im back. hell yeah feels good. salute guys1 - - - Which way you prefer to write code? 1. server.addHandler(...) server.addSerializer(...) server.addCompressor(...) ... 2. server.add("handler",...) server.add("serializer",...) server.add("compressor",...) ... 3. Both. (as per mood) :p18 - Maintaining a good mood, listening to Avicii and electro house music the whole time at work. It works so far, no more Kendrick Lamar or NF. - Manager has asked for feedback thru some performance review system.... but this is so reassuring.... Guess I need to be in a super good mood to provide only the positives... - Friday releases are always a bad idea. The feeling of dread over the weekend seeing all the "bugs" and changes come in put a huge dampener on the weekend mood. - -.16 - - - !rant or maybe somewhat of a rant? I am bored. Like I haven't been in a very long time. And I am also bored by the projects I am working on. So I am looking for some documentary recommendations that could get me back in the mood. Please help?2 - - - I have a hdd left, in the mood of trying out a new os. Any new/hot os's to try? Btw: my main os is xubuntu.8 - Walked in to work with an email subject "timesheets" and a calendar appointment to "explain". Well. That's me in a pissy mood all day. Guess what's coming ... Arse. Arse. Ar - Today our king passed away. I had to finish my big project before the 20th, but there was no mood for doing it at all; everything feels lifeless and dark. All Thai sites were applied a css grayscale filter to show respect for his loss. I'm not a royalist, but it feels depressing when you thought about how you would wake up the next morning, knowing he isn't there anymore. It simply was the darkest times in my life. I spent 2 days finding the truth while Thai officials were trying to hide, and now my worst fear came true. He was the best king I've ever had. May him finally rest in peace, back to where he belongs to. - !Rant So... In the mood for a new lang... Mainly Java developer but have done Scala, Python lately and a bunch in the past (C, PHP, little js, HTML5). Thinking of .Net or node js ATM... I'd welcome any ideas :)3 - currently in the mood for rooting my phone again (samsung galxaxy s6 edge). any good ROM recommendations. - Mood: echo do I care? >> seeifIcare.txt | echo currentResponse : no >> seeifIcare.txt | grep-n currentResponse seeifIcare.txt Output: 2:currentResponse : no - - - Really bring down mood to hell, when discussing things with designer that does not fully aware of application's mechanicsm - - What's the one movie/TV show that always gets you in a geeky project mood. Like, makes you want to build, code or at least desire to create something?5 - - Back in 2014, I was developing a personal web page and I decided to add something called flip card on the page (it flips horizontally when hovered)... It worked but was not feeling very "natural". I mean the flip thing was not giving "that" feeling. So I ended up a fine summer evening tweaking shadow, speed, z-axis, etc. And then the next day I deleted the whole project because it was taking a lot of my time. Mood swings. Moved on to Machine Learning and never touched CSS stuff again. Was a lot of fun though.1 - Soo, after reading a post about Fedora Workstation I figured, why not try it out. It has some awesome productivity tools! I donwloaded the ISO, made a bootable USB stick and started my PC into Fedora live. At first it looked awesome! I really looked forward to working with it. I installed it and restarted my PC. It booted up I choose Fedora and I saw a login prompt. Everything's fine until now. I logged in, no problem. But after that the screen just turned black and only my mouse was visible. I thought, maybe it's because it's loading something. I waited a couple of minutes but then i got really frustrated because nothing, literally nothing happened. So I forced a shutdown and restarted. I logged in again.. and... Well at least the screen wasn't black anymore. But it was not good either. Artifacts everywhere. I could not read what the screen said. So I reinstalled it and couple of times, black screen after artifact screen. I don't really know who's to blame here. Nvidia or Linux/Fedora or something else (I highly think it's Nvidia tho, fuck Nvidia and their anti Linux mood ). I will try Fedora on a laptop somewhere in the Future again but for now I've had enough of that shit combined with the aftermath of resetting everything back to normal (removing grub etc). If anyone has some advice concerning the Nvidia problem I'd highly appreciate that. It's a GeForce 650ti1 -:... ] - * - It's a beautiful day today ! :) For this reason, I feel in the mood to make a playlist called D E A D L I N E ! :) (hue hue hue) - Them : "Well, I just tried what you told me on the deployed version and it works pretty well" Me : "I actually just tried and it doesn't work" That's some conversation to make my mood on a Monday - My music (often MrSuicideSheep long mixes) + good mood + my computer + knowledge of IDE or vim keybindings = maximum productivity -) - - - I drink enough water to be hydrated, throw on some random rock playlist or System of A Down album & I'm in the mood. - People around you (especially non-engineers) coming over just to know whether you saw their instant message ping / email to send them a value of a configuration. Or others who just comes in at the right time - when you just got into your utopian magical zone - "just to say hi and catch up". There goes the rest of my day. Complement that to the instant messaging application of choice of the organization and it's a no-no for productivity. I find myself being invited to random channels only because they want to mention that I did something. I set myself to Away whenever I'm in the mood, but that still doesn't stop people from pinging and sending me notifications anyway.1 - Christmas greetings from ol' Athlon Fa-la-la-la-laa, la-laa, laa, laa With a source code of many "if's" on Fa-la-la-la-laa, la-laa, laa, laa When the runtime errors are ringing Fa-la-laa, la-la-laa, laa, laa, laa I cheer up my mood with singing Fa-la-la-la-laa, la-laa, laa, laa1 - - As part of a dev team (or if you're doing your own dev projects at home), do you ALWAYS find it easy to start to work? I mean, just like office secretaries who start doing their thing as soon as they get to their cubicles, is your work mood/drive the same? Or is it normal to have random instances when you feel like dragging yourself to even lift your hands on the keyboard. I've been into this for a while already and I can say that there are days when you can't wait to open your project but there are also days when you even wouldn't wanna think of a project for a while. Top Tags
https://devrant.com/search?term=mood
CC-MAIN-2019-51
refinedweb
6,496
81.33
Create an application auto receiving new message (SMS) It’s quite a long time that I’ve been writing any new interesting tutorial. Today, I’ve got a little fever so I couldn’t step to workplace; so I decide to spend a little time to write a new tutorial. The application we’re going to make today is a simple one that will receiving new messages automatically, notify and display them on a ListView. This is gonna be our simple screen: A – Create the Project Project Name: SMS Auto Receiver Application Name: SMSAutoReceiver Package Name: pete.android.study Create Activity: MainActivity Min SDK: 10 Click OK -> Done with creating project. B – Sketch the Layout The layout is pretty much the same as many previous articles on ListView in my blog. + One layout for main screen display, which is the list view + One layout for each item in the list view, which will be set to list view. This is where our SMS messages residing. First, we start with layout for each item in list view. 1. List Item Layout – First row determines the number of sender, I just make it into a LinearLayout with a constant TextView on the left with text “From: “ and right TextView gonna be used for setting incoming number. – The second row displays the contents of message. <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns: <LinearLayout android: <TextView android: <TextView android: </LinearLayout> <TextView android: </LinearLayout> 2. Main Layout – Just a ListView, no more, no less!!! C – Class Design – On the Idea This is what I mapped from my mind. – SmsInfo class: hold information about new message, which is also implement from interface Parcelable in order to be passed through by Intent. – SmsInfoAdapter class: the adapter manages content on ListView, extends from class ArrayAdapter<SmsInfo> for simplisticity. – SmsReceiver class: extends from class BroadcastReceiver, to handle the event when new message arrived, pass these messages to Intent and pass to launch MainActivity. – MainActivity class: is a actual class receiving list of new message (List<SmsInfo>) sending from D – From Design to Code (w/ Passion) 1. SmsInfo.java package pete.android.study; import android.os.Parcel; import android.os.Parcelable; public class SmsInfo implements Parcelable { private String mNumber; private String mContent; public SmsInfo(String number, String content) { mNumber = number; mContent = content; } public SmsInfo(Parcel in) { String data[] = new String[2]; in.readStringArray(data); mNumber = data[0]; mContent = data[1]; } public void setNumber(String number) { mNumber = number; } public String getNumber() { return mNumber; } public void setContent(String content) { mContent = content; } public String getContent() { return mContent; } @Override public int describeContents() { return 0; } @Override public void writeToParcel(Parcel dest, int flags) { dest.writeStringArray(new String[] { mNumber, mContent }); } public static final Parcelable.Creator<SmsInfo> CREATOR = new Parcelable.Creator<SmsInfo> () { @Override public SmsInfo createFromParcel(Parcel source) { return new SmsInfo(source); } @Override public SmsInfo[] newArray(int size) { return new SmsInfo[size]; } }; } 2. SmsInfoAdapter.java package pete.android.study; import java.util.List; import android.app.Activity; import android.view.LayoutInflater; import android.view.View; import android.view.ViewGroup; import android.widget.ArrayAdapter; import android.widget.TextView; public class SmsInfoAdapter extends ArrayAdapter<SmsInfo> { public SmsInfoAdapter(Activity a, List<SmsInfo> list) { super(a, 0, list); } @Override public View getView(int pos, View convertView, ViewGroup parent) { ViewHolder holder = null; if(convertView == null) { Activity a = (Activity)getContext(); LayoutInflater inflater = a.getLayoutInflater(); holder = new ViewHolder(); convertView = inflater.inflate(R.layout.listitem, null); holder.tvNumber = (TextView)convertView.findViewById(R.id.tvNumber); holder.tvContent = (TextView)convertView.findViewById(R.id.tvContent); convertView.setTag(holder); } else { holder = (ViewHolder)convertView.getTag(); } SmsInfo entry = getItem(pos); if(entry != null) { holder.tvNumber.setText(entry.getNumber()); holder.tvContent.setText(entry.getContent()); } return convertView; } static class ViewHolder { TextView tvNumber; TextView tvContent; } } 3. SmsReceiver.java package pete.android.study; import java.util.ArrayList; import java.util.List; import android.content.BroadcastReceiver; import android.content.Context; import android.content.Intent; import android.os.Bundle; import android.telephony.SmsMessage; import android.widget.Toast; import pete.android.study.SmsInfo; public class SmsReceiver extends BroadcastReceiver { static ArrayList<SmsInfo> listSms = new ArrayList<SmsInfo>(); @Override public void onReceive(Context context, Intent intent) { // get SMS map from intent Bundle extras = intent.getExtras(); // a notification message String messages = ""; if ( extras != null ) { // get array data from SMS Object[] smsExtra = (Object[]) extras.get( "pdus" ); // "pdus" is the key for ( int i = 0; i < smsExtra.length; ++i ) { // get sms message SmsMessage sms = SmsMessage.createFromPdu((byte[])smsExtra[i]); // get content and number String body = sms.getMessageBody(); String address = sms.getOriginatingAddress(); // create display message messages += "SMS from " + address + " :\n"; messages += body + "\n"; // store in the list listSms.add(new SmsInfo(address, body)); } // better check size before continue if(listSms.size() > 0) { // notify new arriving message Toast.makeText( context, messages, Toast.LENGTH_SHORT ).show(); // set data to send Intent data = new Intent(context, MainActivity.class); // new activity data.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK); data.putParcelableArrayListExtra("ListSMS", listSms); // start context.startActivity(data); } } } } 4. MainActivity.java package pete.android.study; import java.util.ArrayList; import android.app.Activity; import android.os.Bundle; import android.widget.ListView; public class MainActivity extends Activity { ListView mListData; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); mListData = (ListView)findViewById(R.id.lvData); // receive list incoming messages ArrayList<SmsInfo> listSms = getIntent().getParcelableArrayListExtra("ListSMS"); // check condition if(listSms != null && listSms.size() > 0) { // set dat to list SmsInfoAdapter adapter = new SmsInfoAdapter(this, listSms); mListData.setAdapter(adapter); } } } E – Additional Config We need to set uses-permission to receive SMS and register our SMSReceiver in order to make it work. <?xml version="1.0" encoding="utf-8"?> <manifest xmlns: <uses-sdk android: <uses-permission android: <application android: <activity android: <intent-filter> <action android: <category android: </intent-filter> </activity> <receiver android: <intent-filter> <action android: </intent-filter> </receiver> </application> </manifest> F – Get the Sample Project by Pete Click to browse Trunk from GoogleCode. G – Final Words – Hope you enjoy and learn something from this article 🙂 – Feel free to suggest, comment below! Marketing92 is offering the top services of Branded SMS in Karachi. We are helping our customers to earn more profit and get more customers with our Leading services of Bulk SMS in Karachi. Now you can send Text SMS with your Branded Name to promote your Business and make you a good reputation in the market. Thank you so much for your code. I was researching for over a month on how to do this! It was keeping me from completing my next phase of Arduino-Android project. This was the last hurdle, figuring out the Android app part of it that can handle text messages like this. I can now interface Arduino with Android app. Thank you. Do you need unlimited articles for your website ? I am sure you spend a lot of time writing articles, but you can save it for other tasks, just search in google: kelombur’s favorite tool Hi to all members on this community i am thanks to the administrator of this forum for approve my account i am sure here i got better knowledge thanks again. My name is Ronald. If the activity is opened with some received msg in the list and a new sms comes a new activity starts and comes in front of the previous one. any solution to that? I just want to populate sms in the already opened activity. I have tried replacing FLAG_ACTIVITY_NEW_TASK with with different flags but i get my app crash every time. Ali Hello, This is only receive message in listview but how to get it message from another device like chat application. Thank you for this tutorial.. It helps me a lot doing my first Android app. One of the thing I observe is that when I click back or return in my phone, the listview also changes. The last item added will be gone in the view.. If I add a new item.. it will append to the item that vanished when I hit the return or back button.. Is there a way to prevent this from happening? hi.i got the toaster message but i didnt get message in the listview..can anybody help me how to get it works for me.. mListData = (ListView)findViewById(R.id.lvData); in this line what is lvData? how can it be inserted to database..?receiving text message then storing in sqlite database(in android) then query will follow when querying the received text message, date, sender will be shown.your reply is much appreciated.tnx. hi..with this app,how can it be inserted to sqlite database in android phone? particularly im trying to make an android app that can receive sms and then store it on a sqlite database then quering will follow..in quering the received text message(sms), date, sender should appear in the database..android api 10 will be used.your help is much appreciated..tnx. Nice Thread Tutorial, i am new in android , its help me a lot … I have got some good links here at androidexample.com Incomming SMS Broadcast Receiver i have a plan, sms message is coordinaat position with getting from GPS and transferring into google map,, how do this? because google map does’nt have activity but mapactivity…please tell me way..thx i have a plan, sms message is coordinaat position with getting from GPS and transferring into google map,, how do this? because google map does’nt have activity but mapactivity…please tell me..thx i have a plan, sms message is coordinaat position with getting from GPS and transferring into google map,, how do this? because google map does’nt have activity but mapactivity…please tell me..thx hello.. i hv a problems with the lvdata in the file R.java. I try to clear my project & let it auto generated in R.java. however it doesn’t happens. Is there any solution can suggest? mListData = (ListView)findViewById(R.id.lvData); between my i know what is the lvData for? thanks great tutorial..but i wan t ask you. how to make the content of message CLICKABLE ? and there is the Link to click .. superv example…. It s very useful…to my first project… nice tutorial using parcels.can u help me to create a application that send auto message to sender, i will be very thank full. Thanks a lot time pass software sorry friend i got my sms on list view…. Superb Example… using this code i count see receive sms… Help to solve it… Meer,s nice tutorial using parcels. Why couldn’t you just use extras to pass the sms message to the intent? It’s the same; however, I create a custom object structure, SmsInfo; so Parcelable is preferrable. Hello, Does this app have to use minSDK = 10? I mean, can this code for receiving SMS work on older Android versions than 2.3.3, because I am going to make an application like this, but I would like it to work on Android 1.6, which uses API 4 or 5… You can set to set to lower API to see if there’s any change like: deprecated, removed …
https://xjaphx.wordpress.com/2011/07/14/create-an-application-auto-receiving-new-message-sms/
CC-MAIN-2018-05
refinedweb
1,840
51.14